How to store number of times a button was clicked? - javascript

I have hosted a basic webpage on Amazon S3. The page is implemented in HTML, CSS and JavaScript. I wish to record the number of times a button (which is present on the page) was clicked. Also, since S3 supports static web hosting only and my requirement seems to need server-side scripting, is this question even valid? I do not need a fancy solution; I just need to record the number of times the button was clicked and store it in some simple text file residing in the bucket.
By the number of times the button was clicked, I mean how many times it was clicked in total. This web page will be accessed by many users, and I want the cumulative number of clicks across all of them. So if I access the web page today and click the button, the click count becomes 1; if I do the same tomorrow, the count becomes 2.
EDIT: Scenario
Consider this scenario. I have three users A, B and C. Suppose that over one week, A visits the website 3 times a day and clicks the button 4 times in all, B visits the website only once and clicks it 2 times, and C visits it twice and clicks the button 1 time. So the total I should be seeing by the end of the week is 7 (4 + 2 + 1).

This cannot be accomplished entirely on the client (web browser), because the client would need the ability to "read the current value" and then "write the new value" somewhere. That opens security holes, because anyone reading the JavaScript could modify its behavior.
Instead, you need to be able to trigger some server-side code that will increment the count without giving any additional capabilities.
The simplest option (depending upon your abilities) would probably be:
Create an Amazon DynamoDB table to store the count
Create an AWS Lambda function that increments the count in some database
Create an API using AWS API Gateway that calls the Lambda function
DynamoDB pricing is only about half a cent per hour at the lowest provisioned throughput (probably sufficient for your need), plus 25c per GB-month of storage. That's not going to cost much.
You'd possibly fit within the free usage tier for Lambda and the (first year) free tier for API Gateway.
The Lambda function would merely be an Update expression to increment the value.
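For instance, a minimal sketch of such a Lambda function in Node.js might look like the following (the table, key and attribute names are placeholders, and the function is assumed to sit behind an API Gateway endpoint that the S3-hosted page calls):

const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();

exports.handler = async () => {
    // ADD creates the "clicks" attribute (and the item) if it does not exist yet,
    // and increments it atomically otherwise.
    const result = await dynamo.update({
        TableName: 'ClickCounts',                 // placeholder table name
        Key: { counterId: 'button1' },            // placeholder key
        UpdateExpression: 'ADD clicks :inc',
        ExpressionAttributeValues: { ':inc': 1 },
        ReturnValues: 'UPDATED_NEW'
    }).promise();

    return {
        statusCode: 200,
        headers: { 'Access-Control-Allow-Origin': '*' },  // needed if the S3 page calls it cross-origin
        body: JSON.stringify({ clicks: result.Attributes.clicks })
    };
};

The static page itself would then only need a fetch() (or XMLHttpRequest) call to that endpoint inside the button's click handler.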
Or, quick and dirty...
If all of this sounds too complex, you could simply update DynamoDB directly from JavaScript (see DynamoDB for Javascript – cheatsheet for examples). The downside is you'd need to provide some security credentials to the code, but you could limit them to only being able to call the Update function on a specific table. Not as secure, but the worst thing they could do is update your value to some very strange numbers (whereas the server version would only allow one increment at a time).
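A rough sketch of that quick-and-dirty variant with the AWS SDK for JavaScript in the browser (the region, identity pool id, table and attribute names are placeholders; the credentials used should be restricted to UpdateItem on this one table):

// Placeholder region and Cognito identity pool; restrict its role to UpdateItem on this table.
AWS.config.region = 'us-east-1';
AWS.config.credentials = new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
});
var docClient = new AWS.DynamoDB.DocumentClient();

function countClick() {
    docClient.update({
        TableName: 'ClickCounts',                  // placeholder table name
        Key: { counterId: 'button1' },
        UpdateExpression: 'ADD clicks :inc',       // atomic increment
        ExpressionAttributeValues: { ':inc': 1 },
        ReturnValues: 'UPDATED_NEW'
    }, function(err, data) {
        if (err) { console.error(err); return; }
        console.log('Total clicks so far: ' + data.Attributes.clicks);
    });
}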

Frankly, you'll probably want to start getting smarter about recording clicks, by dividing it into time periods (eg hour, day) and possibly by User or demographics, so you might end up recording more details about each click and then calculating totals separately.
Guess what... this is known as website analytics!
You might be better off simply implementing a service like Google Analytics that gives you all of this functionality with very little effort.
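For example, with the standard gtag.js snippet already installed on the page, a click could be reported as an event with something like this (the event name and category here are illustrative, not required values):

function recordClick() {
    // Assumes the standard gtag.js snippet is already loaded and this function
    // is wired to the button's onclick handler.
    gtag('event', 'button_click', { 'event_category': 'engagement' });
}

The cumulative count (per day, per user segment, etc.) then shows up in the Analytics reports without you storing anything yourself.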

Well, as you mentioned, you're going to need a server-side language to do this properly and save your data in a database, but you can still use some tricks to do it with only JavaScript. One of them is storing your click count in local storage: at least until the user clears the storage data, your value is preserved (note that it is per browser, not a shared total).
I created a simple example.
Javascript :
function countClicks() {
    console.log("Counting start...");
    // localStorage is per browser, so this is not a shared total across users
    var counts = localStorage.getItem('click-counts'); // null on the first click
    var newCount = (counts !== null) ? parseInt(counts, 10) + 1 : 1;
    localStorage.setItem('click-counts', newCount);
    document.getElementById("showCounts").innerHTML = newCount;
}
HTML :
<button onclick="countClicks()" >
Click!
</button>
<div>
<p id="showCounts"></p>
</div>
You can try it here :
https://jsfiddle.net/emilvr/35v60xt5/21/
A better option is to use a third-party service like Firebase:
https://firebase.google.com/docs/database/web/start
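As a rough sketch with the (v8-style) Firebase Realtime Database SDK, the counter could be kept in a single node and incremented atomically (the 'clicks' path is a placeholder; the app is assumed to be initialized with your project config):

var ref = firebase.database().ref('clicks');   // placeholder database path

function countClick() {
    // transaction() increments atomically even when many users click at once
    ref.transaction(function(current) {
        return (current || 0) + 1;
    });
}

// keep the displayed total in sync for everyone
ref.on('value', function(snapshot) {
    document.getElementById('showCounts').innerHTML = snapshot.val();
});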

You have to use client-side code to count how many times the button was clicked, and then store this value in your database for future use:
/*------------ // By using jQuery // ------------
$(function() {
    var i = 0;
    $('button').click(function() {
        i++;
        $("#count").val(i);
        // run an AJAX request here to save this value into the database
    });
});
------------------------------------------------*/

// By using plain JavaScript
var i = 0;
function countClick() {
    i++;
    document.getElementById("count").value = i;
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>
<button onClick="countClick()">Click Me</button><br>
Total Count<input type="text" id="count" disabled >

Related

Fastest way to filter, order and split into pages

I want to get data from a database, to show on a page. There is a huge amount of rows in the table, so I'm using pages to avoid having to scroll forever.
I have functionality to search words (in no specific column), order by any column, and obviously change the page size and which page I am on.
I could, in theory, just ask the database for everything (SELECT * FROM myTable), send it to my html view, and work through the data entirely in javascript. The problem is, there is so much data that this is extremely slow using my structure (page controller calls my main logic, which calls a webservice, which calls the database), sometimes waiting up to 20 seconds for the original load of the page. After it's loaded, the javascript is usually fast.
Or, I could do most of that work in the controller, using Linq. I could also do the work in the webservice (it's mine), still in Linq. Or, I could straight away use WHERE, ORDER BY, COUNT, and a bunch of dynamic SQL requests so that I get instantly what I want from the database. But any of those forces me to refresh the page every time one of the parameters changes.
So I'm wondering about performance. For example, which is faster between:
var listObjects = ExecuteSQL("SELECT * FROM myTable");
return listObjects.Where(x => x.field == word).OrderBy(x => x.field);
and
var listObjects = ExecuteSQL("SELECT * FROM myTable WHERE field = :param1 ORDER BY field", word);
return listObjects;
And in what specific situations would using the different methods I've mentioned be better or worse?
No.
You want to do the work of selecting a block (pageful) of data on your data server. That's its job; it knows how to do it best.
So, forget the ExecuteSQL. You are pretty much shutting down everything's ability to help you. Try LINQ:
var page = (from m in MyTable
            where m.field == param1
            orderby m.field
            select m)
           .Skip((nPage - 1) * pageLength)
           .Take(pageLength);
That will generate the exact SQL to tell the Data Server to return just the rows you want.

Sortable Userlists best practice

I'm trying to build a web service where multiple users can be logged in at the same time. On the Node.js server there is an unsorted array with all the users, and a database with all users.
Every user can always see every user online in an HTML Bootstrap table, with different columns for Username, Id, online since, etc., and there are also lists for groups that include online and offline users. The important part is that the table should be updated roughly every 3-5 seconds.
I want the Users to be able to sort the Online Users Table by clicking on the Tableheader of a Column. What is the best practice to do that?
I currently can only think of two different solutions, both don't seem perfect to me.
1. Use Bootstrap sorting
I save the information in which way the User wanted the list to be sorted.
Then I receive the unsorted Data and fill the Table with it, after which I will trigger a click on the header and sort the Table the same way again.
But if I do it this way, I think the user will always notice that the table is refilled and then sorted again if this is done every 3-5 seconds.
2. Keep sorted lists on the Server
Keep all the different sorted lists of the users on my server at all times and let the server re-sort them every 3-5 seconds.
Then when the client requests it, send the sorted list he currently wants to the client and fill the Table HTML.
But this way I think it would use quite some resources from my server, because it also has to sort some mixed online/offline Users for groups which would be many different tables I had to constantly save and reorder on my server.
Are there any better ways to achieve many sortable Userlists for the clientside?
The important thing about the UI is to reduce flicker and any kind of lag. First off, try sorting on the client before the data is displayed in the table. You don't want to trigger the header click events, because that might cause a flicker effect where the data comes in, is displayed, sorted, then displayed again. If for some reason the sorting takes too long, this could result in lag or choppiness in the UI, so test it out and see how it feels. I would only look to the server side if the client side isn't performing well. Check your CPU and RAM to see how best to handle that. Sorting on the fly might be doable with your setup, or keeping the sorted lists in RAM may be an option if you have some to spare.
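For example, a rough sketch of sorting on the client before the rows are drawn (the data shape, the sortState object and the renderRows helper are assumptions for illustration, not part of the question's code):

// remember what the user last clicked in the table header
var sortState = { key: 'Username', descending: false };

function renderUsers(users) {
    // sort the freshly polled data before it ever touches the DOM
    var sorted = users.slice().sort(function(a, b) {
        var z = sortState.descending ? -1 : 1;
        return a[sortState.key] > b[sortState.key] ? z
             : a[sortState.key] < b[sortState.key] ? -z : 0;
    });
    renderRows(sorted); // rebuild the <tbody> in one pass; no header clicks needed
}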
Server-side, store the sorted lists in a site-wide or thread-engine variable in RAM. If you can get away with it, the thread-engine variable will be the fastest option, but the cost would be roughly SORTEDDATA_BYTES * WEB_THREADS.
Array.prototype.keySort = function(k, descending) {
    var z = descending === true ? -1 : 1;
    this.sort(function(a, b) {
        return a[k] > b[k] ? 1 * z : a[k] < b[k] ? -1 * z : 0;
    });
    return this;
};

var sortedJSON = {
    UsernameAsc:  JSON.stringify(data.keySort("Username")),
    UsernameDesc: JSON.stringify(data.keySort("Username", true)),
    IdAsc:        JSON.stringify(data.keySort("Id")),
    IdDesc:       JSON.stringify(data.keySort("Id", true))
};

How to phase out the json loading process to multiple steps to achieve improved user experience?

I am using a web service that serves travel-related data from third-party sources. This data is converted to JSON and is used to formulate the output based on a user's search criteria.
If the web service subscribes to multiple third-party service providers, the application receives thousands of potential search results for some searches. Some of the JSON files created for these search results are as high as 2-4 MB, which causes considerable delay in loading the JSON results.
The whole JSON result set is required for further sorting and filtering operations on the search results by the users. Moving the sort and filtering operations to the back end is not a possibility for now.
For small and medium JSON result sets the current implementation works out well, but large JSON result sets cause performance degradation.
How could I phase out the JSON loading process into multiple steps to achieve an improved user experience even with very large JSON result sets?
Any leads on how I can overcome this problem would be much appreciated.
I did find a way to solve this issue, so I thought I might as well post it for others who may find this answer useful.
The web pagination mechanism will automatically improve the responsiveness of the system and the user experience, and may reduce clutter on the page. Unless the returned result set is guaranteed to be very small, any web application with search capabilities must have pagination. For instance, if the result set is fewer than 30 rows, pagination may be optional. However, if it's bigger than 100 rows, pagination is highly recommended, and if it's bigger than 500 rows, pagination is practically required. There are a lot of different ways to implement the pagination algorithm. Depending on the method used, both performance and the user experience will be affected.
Some alternatives for phasing out the JSON loading process into multiple steps
If the JSON is large, you could always contemplate a "paging" metaphor, where you download the first x records, and then make subsequent requests for the next "pages" of server data. Given that additional server records can presumably be added between requests, we will have to think carefully about that implementation, but it's probably easy to do.
Another way of making the JSON more efficient, is to limit the data returned in the initial request. For example, the first request could return just identifiers or header information for all or a reasonable subset of the records, and then you could have subsequent requests for additional details (e.g. the user goes to a detail screen).
A particular permutation of the prior point would be if the JSON currently includes any large data elements (e.g. Base64-encoded binary data such as images or sound files). Getting those out of the initial response will address many issues. And if you could return URLs for this binary data rather than the data itself, that would definitely lend itself to lazy loading (as well as letting you download the binary information directly rather than as a Base64 encoding, which is 33% larger). And if you're dealing with images, you might also want to contemplate thumbnails vs. large images, handling the latter, in particular, in a lazy-loading model.
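For illustration, a rough sketch of the "headers first, details on demand" idea in browser JavaScript (the /search and /results/{id} endpoints and the field names are hypothetical):

function loadSummaries(query) {
    // small payload: only identifiers and header fields for the result list
    return fetch('/search?q=' + encodeURIComponent(query) + '&fields=id,title,price')
        .then(function(res) { return res.json(); });
}

function loadDetail(id) {
    // the full record (descriptions, images, etc.) is fetched only when the
    // user actually opens the detail screen
    return fetch('/results/' + id)
        .then(function(res) { return res.json(); });
}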
If adopting the first approach.....
If adopting the first approach, which seems more feasible given the current development scenario, you first need to change your back-end search result processing code so that, in response to a request, it only delivers n records starting at a particular record number. You probably also need to return the total number of records on the server, so you can give the user some visual cues as to how much there is to scroll through.
Second, you need to update your web app to support this model. Thus, in your UI, as you're scrolling through the results, you probably want your user interface to respond to scrolling events by showing either
(a) actual results if you've already fetched it; or
(b) some UI that visually indicates that the record in question is being retrieved and then asynchronously request the information from the server.
You can continue downloading and parsing the JSON in the background while the initial data is presented to the user. This will simultaneously yield the benefit that most people associate with "lazy loading" (i.e. you don't wait for everything before presenting the results to the user) and "eager loading" (i.e. it will be downloading the next records so they're already ready for the user when they go there).
General Implementation steps to follow for first approach
Figure out how many total rows of data there are once the merge process is complete.
Retrieve the first n records.
Send a request for the subsequent n records and repeat this process in the background until the full result set is received (a sketch of this loop follows below).
If the user scrolls to a record, show it if you've got it; if not, provide a visual cue that you're fetching that data asynchronously.
When the data comes in, merge the JSON and update the UI, if appropriate.
If filters and sorting options are offered on the web end, they should be disabled until the full result set is received.
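A rough sketch of that background loading loop in browser JavaScript (the /search endpoint, the page size of 50 and the mergeResults/updateUI/enableSortAndFilterControls helpers are all assumptions for illustration):

var PAGE_SIZE = 50;
var allResults = [];

function loadPage(query, start, total) {
    fetch('/search?q=' + encodeURIComponent(query) + '&start=' + start + '&count=' + PAGE_SIZE)
        .then(function(res) { return res.json(); })
        .then(function(page) {
            allResults = mergeResults(allResults, page.rows); // merge the new block into the result set
            updateUI(allResults, total);                      // refresh visible rows and scroll cues
            if (start + PAGE_SIZE < total) {
                loadPage(query, start + PAGE_SIZE, total);    // keep fetching in the background
            } else {
                enableSortAndFilterControls();                // re-enable once everything is in
            }
        });
}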
General implementation steps to follow for generating page links
After choosing the best approach to retrieve the result set and traverse it, the application needs to create the actual page links for users to click. Below is a generic pseudo-code function, which should work with any server-side technology. This logic should reside in your application and will work with both database-driven and middle-tier pagination algorithms.
The function takes three parameters and returns HTML representing the page links. The parameters are the query text, the starting row number, and the total number of rows in the result set. The algorithm is clever enough to generate appropriate links based on where the user is in the navigation path.
Note: I set the default to 50 rows per page and a page window of 10. This means that only 10 (or fewer) page links will be visible to the user.
private String getSearchLinks(String query, int start, int total) {
    // "start" is the zero-based row number of the first row on the current page
    String result = "";
    if (total == 0) {
        return ""; // no links
    }
    int pageSize = 50;   // number of rows per page
    int window = 10;     // page window - number of visible page links
    int pages = (int) Math.ceil((double) total / pageSize);
    result = "Pages:";
    int currentPage = (start / pageSize) + 1;
    // numeric value of the current page, e.g. if start is 51: 51 / 50 = 1, + 1 = 2
    int leftLinkCount = ((currentPage - 1) > (window / 2))
            ? (window / 2 - 1) : (currentPage - 1);
    int pageNo = currentPage - leftLinkCount;
    if (pageNo > 1) {
        // show the first page and a marker for more links on the left
        result += "1 .. ";
        result += "«";
    }
    for (int i = 0; i < window - 1; i++) {
        if (pageNo > pages) {
            break;
        } else if (pageNo == currentPage) {
            result += "" + pageNo + "";   // current page: plain text, not a link
        } else {
            result += pageNo;             // link to page pageNo
        }
        pageNo++;
    }
    if ((pageNo - 1) < pages) {
        result += "»";
    }
    result += " Showing " + ((start > total) ? total + 1 : start + 1)
            + " - " + (((start + pageSize) > total) ? total : (start + pageSize))
            + " of Total: " + total;
    return result;
}
This logic does not care how the viewable portion of the result set is generated. Whether it is on the database side or on the application server's side, all this algorithm needs is the "total" number of results (which can be cached after the first retrieval) and the "start" indicator containing which row number was the first on the last page the user was looking at. The algorithm also shows the first-page link if the user is on a page beyond the initial page window (for example, page 20), and correctly accounts for result sets that don't have enough rows for 10 pages (for example, only 5 pages).
The main for loop generates the links and correctly computes the "start" parameter for the next page. The query string and total are always the same values.
Points to note
In a portal where internal and external data is used, there would be an unavoidable performance overhead, because the system has to wait until all external hotel information is fully received, merged and sorted with the local hotel details. Only then would we know the exact result count.
Deciding how many additional requests are required would be somewhat simplified if a standard result count per page is used.
This change would provide the user with a quick and uniform page load time for all search operations, and the page would be populated with the most significant result block of the search in the first instance.
Additional page requests could be handled through asynchronous requests.
Not only the hotel count but also the content sent needs to be checked and optimized for performance gains.
In some instances, for similar hotel counts, the generated JSON object size varies considerably depending on the third-party suppliers subscribed to.
Once the successive JSON messages are received, these JSON blocks would need to be merged. Prior to the merging, the user may need to be notified that new results are available and asked whether he/she wishes to merge the new results or keep working with the existing primary result set.
Useful resources:
Implementing search result pagination in a web application
How big is too big for JSON?

Improving Twitter's typeahead.js performance with remote data using Django

I have a database with roughly 1.2M names. I'm using Twitter's typeahead.js to remotely fetch the autocomplete suggestions when you type someone's name. In my local environment this takes roughly 1-2 seconds for the results to appear after you stop typing (the autocomplete doesn't appear while you are typing), and 2-5+ seconds on the deployed app on Heroku (using only 1 dyno).
I'm wondering if the reason why it only shows the suggestions after you stop typing (and a few seconds delay) is because my code isn't as optimized?
The script on the page:
<script type="text/javascript">
    $(document).ready(function() {
        $("#navPersonSearch").typeahead({
            name: 'people',
            remote: 'name_autocomplete/?q=%QUERY'
        })
        .keydown(function(e) {
            if (e.keyCode === 13) {
                $("form").trigger('submit');
            }
        });
    });
</script>
The keydown snippet is because without it my form doesn't submit for some reason when pushing enter.
my django view:
def name_autocomplete(request):
    query = request.GET.get('q', '')
    if len(query) > 0:
        results = Person.objects.filter(short__istartswith=query)
        result_list = []
        for item in results:
            result_list.append(item.short)
    else:
        result_list = []
    response_text = json.dumps(result_list, separators=(',', ':'))
    return HttpResponse(response_text, content_type="application/json")
The short field in my Person model is also indexed. Is there a way to improve the performance of my typeahead?
I don't think this is directly related to Django, but I may be wrong. I can offer some generic advice for this kind of situation:
(My money is on #4 or #5 below).
1) What is the average "ping" from your machine to Heroku? If it's far away, that adds a little extra overhead. Not much, though; certainly not much compared to the 8-9 seconds you are referring to. The penalty will be larger with HTTPS, mind you.
2) Check the values of rateLimitFn and rateLimitWait in your remote dataset. Are they the defaults?
3) In all likelihood, the problem is database/dataset related. The first thing to check is how long it takes you to establish a connection to the database (do you use a connection pool?).
4) Second thing: how long does it take to run the query? My bet is on this point or the next. Add debug prints, or use New Relic (even the free plan is OK). Have a look at the generated query and make sure it is indexed. Have your DB "explain" the execution plan for such a query and make sure it uses the index.
5) Third thing: are the results large? If, for example, you specify "J" as the query, I imagine there will be lots of answers. Just getting them and streaming them to the client will take time. In such cases:
5.1) Specify a minLength for your dataset. Make it at least 3, if not 4 (see the sketch after this list).
5.2) Limit the result set that your DB query returns. Make it return no more than 10, say.
6) I am no Django expert, but make sure the way you use your model in Django doesn't make it load the entire table into memory first. Just sayin'.
HTH.
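As a rough sketch of points 2, 5.1 and 5.2, using option names as documented for typeahead.js 0.9.x (adjust to your version; the numbers are just examples):

$("#navPersonSearch").typeahead({
    name: 'people',
    remote: {
        url: 'name_autocomplete/?q=%QUERY',
        rateLimitWait: 300          // wait ~300 ms between remote calls while typing
    },
    minLength: 3,                   // don't query the server until 3 characters are typed
    limit: 10                       // show at most 10 suggestions
});

The server-side view should apply a matching limit so it never streams more rows than the widget will display.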
results = Person.objects.filter(short__istartswith=query)
result_list = []
for item in results:
result_list.append(item.short)
Probably not the only cause of your slowness, but this is horrible from a performance point of view: never loop over a Django queryset just to build a list. To assemble a list from a Django queryset you should always use values_list. In this specific case:
results = Person.objects.filter(short__istartswith=query)
result_list = results.values_list('short', flat=True)
This way you are getting the single field you need straight from the DB, instead of fetching the whole table row, creating a Person instance from it, and finally reading the single attribute from it.
Nitzan covered a lot of the main points that would improve performance, but unlike him I think this might be directly related to Django (or at least, the server side).
A quick way to test this would be to update your name_autocomplete method to simply return 10 randomly generated strings in the format that typeahead expects. (The reason we want them random is so that typeahead's caching doesn't skew the results.)
What I suspect you will see is that typeahead now runs pretty quickly, and you should start seeing results appear as soon as your minLength of characters has been typed.
If that is the case, then we need to look into what could be slowing the query down; my Python skills are non-existent, so I can't help you there, sorry!
If that isn't the case, then I would consider logging when $('#navPersonSearch') fires typeahead:initialized and typeahead:opened to see if they bring up anything odd.
You can use django haystack, and your server side code would be roughly like:
def autocomplete(request):
    sqs = SearchQuerySet().filter(content_auto=request.GET.get('q', ''))[:5]  # or how many names you need
    suggestions = [result.first_name for result in sqs]
    # you have to configure typeahead how to process returned data, this is a simple example
    data = json.dumps({'q': suggestions})
    return HttpResponse(data, content_type='application/json')

How do I deduct a percentage from a number and assign the difference to the innerHTML attribute using JavaScript or jQuery?

I'm customizing a payment gateway solution using PHP and jQuery. Before I pass the data (i.e. customer address, telephone, amount of purchase, etc.), I want to deduct 2.9% from the purchase/product amount (which would be my personal service fee) and then pass the remaining balance to the payment gateway for processing. How would I do this using PHP?
var $adjustedamount;
var $mypercentage = 0.029;

function computemypercentage($original_price) {
    // deduct the 2.9% service fee from the original price
    $adjustedamount = $original_price - ($original_price * $mypercentage);
    return $adjustedamount;
}

document.getElementById("trueamount").innerHTML = computemypercentage(29.99);
As I stated in the comments already, please ensure ANY calculations about money are done on the server side. JavaScript is great for providing a better user experience, but not for critical business logic.
The reason for this is that it is simply too easy to modify those values. Pull up any modern browser and open the developer tools. You can change form values and JavaScript at will, making anything coming from the client untrustworthy by default.
As for assigning the value to some element... your code should work fine: http://jsfiddle.net/yPC3K/
