JavaScript - The best debounce time delay [duplicate]

Let's say we have a simple example like the one below.
<input id="filter" type="text" />
<script>
function reload() {
  // get data via AJAX
}
$('#filter').change($.debounce(250, reload));
</script>
What we're doing is introducing a small delay so that we reduce the number of calls to reload whilst the user is typing text into the input.
Now, I realise that this will vary on a case-by-case basis, but is there any accepted wisdom on how long the debounce delay should be, given an average (or maybe that should be lowest-common-denominator) typing/interaction speed? I generally just play around with the value until it "feels" right, but I may not represent a typical user. Has anyone done any studies on this?

As you hinted at, the answer depends on a number of factors - not all of them subjective.
In general the reason for making use of a debounce operation can be summed up as having one of two purposes:
Reducing the cost of providing dynamic interactive elements (where cost can be computational, IO, network or latency and may be dictated by the client or server).
Reducing visual "noise" to avoid distracting the user with page updates while they are busy.
Reaction Times
One important number to keep in mind is 250ms - this represents the (roughly) median reaction time of a human and is generally a good upper bound within which you should complete any user interface updates to keep your site feeling responsive. You can view some more information on human reaction times here.
In the former case, the exact debounce interval is going to depend on what the cost of an operation is to both parties (the client and the server). If your AJAX call has an end-to-end response time of 100ms, it may make sense to set your debounce to 150ms to keep within that 250ms responsiveness threshold.
On the other hand, if your call generally takes 4000ms to run, you may be better off setting a longer debounce on the actual call and instead using a first-layer debounce to show a loading indicator (assuming that your loading indicator doesn't obscure your text input).
// quick feedback: show a loading indicator shortly after typing pauses
$('#filter').change($.debounce(250, show_loading));
// expensive work: fire the actual reload only after a longer pause
$('#filter').change($.debounce(2000, reload));
Backend Capacity
It is also important to keep in mind the performance cost of these requests on your backend. In this case, a combination of average typing speed (about 44 words per minute, or roughly 200 characters per minute) and knowledge of your user base size and backend capacity can enable you to select a debounce value which keeps backend load manageable.
For example: if you have a single backend capable of handling 10 requests per second and peak active user base of 30 (using this service), you should select your debounce period such that you avoid exceeding 10 requests per second (ideally with a margin of error). In this case, we have 33.3% of the capacity required to handle one input per user per second, so we ideally would serve at most one request per user every 3 seconds, giving us our 3000ms debounce period.
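As a minimal sketch of that back-of-the-envelope calculation (the helper name and the safety-margin parameter are my own, not from any library):
function minDebounceMs(peakUsers, backendReqPerSec, safetyMargin = 1.0) {
  // requests per second we can afford per user:
  const perUserBudget = (backendReqPerSec * safetyMargin) / peakUsers;
  // invert to get the minimum spacing between requests, in ms:
  return Math.ceil(1000 / perUserBudget);
}
minDebounceMs(30, 10);      // 3000 -> the 3000ms period derived above
minDebounceMs(30, 10, 0.8); // 3750 -> with a 20% margin of error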
Frontend Performance
The final aspect to keep in mind is the cost of processing on the client side. Depending on the amount of data you're moving and the complexity of your UI updates, this may be negligible or significant. One thing you want to ensure is that your user interface remains responsive to user input. It doesn't necessarily always need to react instantly, but while a user is interacting with it, it should respond rapidly to them (60FPS is generally the objective here).
In this case, your objective should be to debounce at a rate which prevents the user interface from becoming sluggish or unresponsive while the user is interacting with it. Again, statistics are a good way to derive this figure, but keep in mind that different types of input require different amounts of time to complete.
For example, transcribing a sentence of short words is generally a lot faster than entering a single long and complex word. Similarly, if a user has to think about what they are entering they will tend to type slower. The same applies for the use of special characters or punctuation.
Subjective Answer
In practice, I've used debounce periods which range from 100ms for data that is exceptionally quick to retrieve and presents very little impact on performance through to 5000ms for things that were more costly.
In the latter case, pairing a short, low-cost debounce period (to give the user feedback) with a longer period for the actual computational work tends to strike a good balance between user experience and the performance cost of wasted operations.
One notable thing I try to keep in mind when selecting these values is that, as someone who works with a keyboard every day, I probably type faster than most of my user base. This can mean that things which feel smooth and natural to me are jarring for someone who types slower, so it's a good idea to do some user testing or (better yet) gather metrics and use those to tune your interface.

I'd like to offer a succinct answer regarding search text input.
I usually do 300ms, which just feels right considering both saving hardware resources and providing a good enough user experience.
--
An interesting thought...
Let's take an example from one of the masters: Google.
If you notice (you have to be a quick typist), Google actually has very little or no debounce time for the first couple of characters (2, I think), but after that it increases the debounce time. Their main goal, obviously, is to give an instantaneous feel and balance their UI with their use cases. I don't know what data or studies they've done, but they've done them, and they are a good example for when a search input is the primary function of a site.
With that, I'd say this is an excellent user experience, albeit with extra complexity and programming time. Google needs it; a less frequently used search bar could perhaps go without the extra complexity.
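A rough sketch of that progressive behaviour, reusing the reload() from the first example (the 2-character threshold and the 0ms/300ms delays are illustrative guesses, not Google's actual values):
let timer = null;
$('#filter').on('input', function () {
  // fire immediately for the first couple of characters,
  // then debounce longer as the query grows
  const delay = $(this).val().length <= 2 ? 0 : 300;
  clearTimeout(timer);
  timer = setTimeout(reload, delay);
});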

Related

Caching information from API queries - Limited to 10 per 10s

Relatively new to databases (and DBA work) here.
I've recently been looking into Riot Games' APIs; however, now realising that you're limited to 10 calls per 10 seconds, I need to change my front-end code, which originally just loaded all the information with lots and lots of API calls, into something that uses a MySQL database.
I would like to collect ranked data about each player and list them (30+ players) in an ordered ranking list. I was thinking of, as mentioned in their Rate Limiting page, "caching" data when GET-ing it, and then, when needing that information again, checking whether it is still relevant - if so, use it; if not, re-GET it.
My idea is to store a time 30 minutes (the rough length of a game) in the future in a column of a table, and when calling, check whether the server time is past the saved time. Is this the right approach/idea of caching? If not, what is the best practice for doing so?
Either way, this doesn't solve the problem of loading 30+ values for the first time, when no previous calls have been made to cache.
Any advice would be welcome, even advice telling me I'm doing completely the wrong thing!
If there is more information needed I can edit it in, let me know.
tl;dr What's best practice to get around Rate-Limiting?
Generally yes - most large applications simply use guesstimated rate limits, or a manual cache (check the DB for a recent call, then go to the API if it's an old call).
When you use large sites like op.gg or lolKing for summoner lookups, they all give you a "Must wait X minutes before doing another DB check/call" message; I do this too. So yes, giving an estimated number (like a game length) to handle your rate limit is definitely a common practice that I have observed within the Riot Developer community. Some people do go all out and implement actual caching layers/frameworks, but you don't need to do that for smaller applications.
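As a minimal sketch of that expiry check (Node.js with the mysql2 promise API; the player_cache table, its columns, and the fetchFromRiotApi() helper are hypothetical):
const TTL_MS = 30 * 60 * 1000; // roughly one game length

async function getPlayer(db, playerId) {
  const [rows] = await db.query(
    'SELECT data, fetched_at FROM player_cache WHERE player_id = ?',
    [playerId]
  );
  const row = rows[0];
  if (row && Date.now() - new Date(row.fetched_at).getTime() < TTL_MS) {
    return JSON.parse(row.data); // still fresh: serve from the cache
  }
  const fresh = await fetchFromRiotApi(playerId); // must respect the rate limit
  await db.query(
    'REPLACE INTO player_cache (player_id, data, fetched_at) VALUES (?, ?, NOW())',
    [playerId, JSON.stringify(fresh)]
  );
  return fresh;
}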
I recommend building up your app's main functionality first, submitting it, and getting it approved for a higher rate limit as well. :)
Also, you mentioned adjusting your front-end code for calls - make sure your API calls live in server-side code, for security reasons (otherwise you'd expose your API key).

How to securely measure user's time

I want to measure the time it takes for a user to complete a task (answer a quiz). I want to measure it accurately, without the network lag. Meaning, if I measure on the server side the time between 2 requests, it won't be the real time it took the user, because the network time is factored in.
But on the other hand, if I measure in javascript and post the timestamps to the server, the user will be able to see the code, and cheat by sending false timestamps, no?
How can I get the timestamps in javascript and make sure the user doesn't fake it?
Generally, in client-side code, any question that starts off with "How to securely..." is answered with "not possible". Nothing works, not even putting variables in a closure, because I, the evil cheating user, can just change the code on my end and send whatever I like back to you.
This is the kind of validation that should be performed server side, even with the disadvantage of network latency.
The trick here would be to measure the time using JavaScript, but also keep track of it using server-side code. That way, you can rely on the timestamps received from the client as long as you enforce a maximum difference between the calculated times. I'd say a few seconds should be good enough. However, by doing so, you are creating an additional vector for failure.
Edit: A user could potentially tweak his or her time in their favor by up to the maximum enforced difference if they are able to take advantage of the (lack of) network lag.
I faced the same problem while designing an online examination portal for a project.
I went for a hybrid approach.
Get the time from the server as the user loads the page, and start a JavaScript-based timer. Record the start time in your database.
Let the timer run on the client side for some time, say 30 seconds.
Refresh the timer by making an AJAX call to the server, resetting it according to the time that has actually passed.
NOTE: try to use external JavaScript and obfuscate the timer code to make tampering harder.
This way you may not completely prevent the user from modifying the timer, but you can limit the maximum possible error to 30s.
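A minimal sketch of that hybrid timer, assuming a hypothetical /quiz/elapsed endpoint that returns the server's own elapsed time for the session:
let elapsedMs = 0;

// local tick: cheap and smooth, but tamperable
setInterval(function () {
  elapsedMs += 1000;
  document.getElementById('timer').textContent = Math.floor(elapsedMs / 1000) + 's';
}, 1000);

// server resync every 30s: the server's session clock wins, so local
// tampering drifts at most ~30s before being corrected
setInterval(async function () {
  const res = await fetch('/quiz/elapsed', { credentials: 'include' });
  const body = await res.json();
  elapsedMs = body.serverElapsedMs;
}, 30000);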

Monitoring specific js events and the time it takes to execute them as part of automated testing?

I am trying to find a reliable way of testing a webapp in order to obtain certain performance metrics. The webapp uses mostly Javascript for its various actions/requests etc.
What I want to be able to do is measure the time between two particular events, for example the amount of time it takes for something to appear to a user after I login to the site - currently I can approximate this using Selenium Webdriver as follows:
// wait3 is assumed to be defined earlier, e.g.:
// WebDriverWait wait3 = new WebDriverWait(driver, 10); // 10s timeout
WebElement element4 = driver.findElement(By.linkText("Login"));
element4.click();
long start = System.currentTimeMillis();
// block until the "New" link becomes visible
WebElement element5 = wait3.until(ExpectedConditions.visibilityOfElementLocated(By.linkText("New")));
long finish = System.currentTimeMillis();
long time = finish - start;
System.out.println("Time taken for New element to load: " + time + "ms");
Or using:
WebElement element6 = driver.findElement(By.linkText("New"));
in place of
wait3.until(ExpectedConditions.visibilityOfElementLocated(By.linkText("New")));
However, the results this gives are often not accurate. What I am wondering is whether there is a way to do this more accurately using Selenium, or whether there is another tool I can use to reliably obtain this data in a manageable format?
Thanks
How much accuracy do you need? There will always be an error margin; you have to define what margin is acceptable for your scenario.
In this case there might be a small overhead, e.g. due to Selenium itself (especially if you are running a remote WebDriver), and there could be other random external factors influencing it as well.
That said, you could improve accuracy by running a "calibration" test that would help you to determine the error margin inherent to your own test environment / setup.
Your calibration could go like:
Open the page being tested
Wait until everything is loaded
Measure how long wait.until() takes for elements which you know already exist
Repeat (3) multiple times to get some statistical significance
At the end you have a list of measurements. Get e.g. the median of it and you have an idea of how much you have to subtract later from the latency you measure when you run your real tests. Or else, you can simply add the error margin and standard deviation as an extra information which you would attach to your performance test results.
Note that in the case of Selenium, it will also make a difference which method you use for finding an element on the page (e.g. finding by ID might be faster than finding by linkText), so you might want to enhance step 3 above to measure the wait.until() error for all the different element-finding mechanisms.
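As a sketch of the calibration loop described above (using the Node selenium-webdriver bindings rather than Java; the URL and element ID are placeholders):
const { Builder, By, until } = require('selenium-webdriver');

async function calibrate(url, knownElementId, runs = 20) {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get(url); // steps 1-2: open the page and let it load
    const samples = [];
    for (let i = 0; i < runs; i++) { // steps 3-4: repeat the measurement
      const t0 = Date.now();
      // the element already exists, so this measures pure wait() overhead
      await driver.wait(until.elementLocated(By.id(knownElementId)), 10000);
      samples.push(Date.now() - t0);
    }
    samples.sort((a, b) => a - b);
    return samples[Math.floor(samples.length / 2)]; // median overhead
  } finally {
    await driver.quit();
  }
}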

What are the best practices for making online high score lists in JavaScript based games? [closed]

[I know there have been similar questions about preventing cheating on high score lists, but none of the answers really helped me for JavaScript-based games, so please think about my question before pointing me to similar posts. I ask about best practices because the JavaScript is always visible to the user, and it is therefore not possible to prevent cheating completely; I just want to make it harder.]
I'm developing a JavaScript based game that works in the browser. I want to make a high score list that contains the user name and the score of all users. To achieve that the browser sends the username and the score to my server (via AJAX).
Submitting fake scores to this list would be fairly easy: one could take a look at the AJAX requests and then make one's own AJAX request with a faked score. Using something like a token that has to be sent with the other data is pointless, as it will be easy to discover.
My only approach, that would prevent cheating, would be to send a description of every user action to the server and calculate the score there. But this is not really practicable as it would be too much for the server.
I accepted an answer, but in case anyone has other ideas about how to make cheating harder, please create another answer!
I like to play "cheat the cheater": use something like a token to authenticate the score, changed every time the update is called... but accept the cheat score that gets posted using a duplicate token. Then display that cheat score only to the cheater, so it appears to have worked, but now the cheater is seeing his results in a sandbox.
You pretty much answered your own question. If you want to really make it harder for users to cheat, send the game log to the server, where you'll calculate the score. You don't have to send all the events, just the ones that affect the resulting score.
There are some techniques, though, that may help you:
include a signature in your request, something like MD5(secret_key + params). Although the "secret key" will have to be in your JS source, it will effectively protect you from simple request interception (see Tamper Data and Charles); there's a sketch of this after the list.
if it's a multiplayer game, accept scores calculated by clients and compare them. Cheaters will be pretty visible (assuming that the majority of users are honest).
you can set a score cap, an "unreachable" result: everyone who posts a score higher than this is a cheater. For example, in a speed-typing game, no one can type correct text at 1500 chars/minute; even 700 is pretty damn hard (though achievable).
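A minimal sketch of the signing idea from the first point (assumes an md5() helper such as the blueimp-md5 package; the parameter names and the secret are placeholders, and the secret is still extractable from your JS source by a determined cheater):
const SECRET = 'not-actually-secret'; // ships with the JS, only stops casual tampering

function signedScorePayload(user, score) {
  const params = 'user=' + user + '&score=' + score + '&ts=' + Date.now();
  // the server recomputes md5(SECRET + params) and rejects mismatches
  return params + '&sig=' + md5(SECRET + params);
}

$.post('/scores', signedScorePayload('alice', 4200));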
On score submit:
Request some token from the server, this should be time based and only valid for about 2 seconds
Only accept submits that include a valid hash of this token, some salt and the score.
This prevents manual tampering with the request, as it would time out the score. If you want to account for high latency, give it a little more time before the timeout. A sketch of the flow follows.
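A sketch of that flow, with hypothetical /token and /submit endpoints and the same kind of assumed md5() helper:
async function submitScore(score) {
  // fetch a short-lived token (valid for ~2 seconds server-side)
  const { token } = await (await fetch('/token')).json();
  const salt = 'per-app-salt'; // illustrative
  const sig = md5(token + salt + score);
  await fetch('/submit', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ score: score, token: token, sig: sig })
  });
}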
The hashing function:
Scramble the hashing function inside packed code (http://dean.edwards.name/packer/ produces really nasty-to-read code). If you use jQuery or some other library, just drop the hashing functionality inside the library file and it gets pretty hard to find, especially if you use a function name like "h" :)
Handling the score-variable itself:
Everybody with a debugging console can change the variable at runtime, but if you encapsulate your whole JavaScript inside a function and call it, nothing is in the global namespace and it's much harder to get at the variables:
(function() {
  // your js code here
})();
I have thought about this a lot and eventually decided to have only local individual high scores, so cheating is not really beneficial to the player and not harmful to others. That said, my game is just a simple minesweeper, and there were people who complained about the lack of a competitive table.
Option 2 is the approach taken by WebSudoku: show your place "among the people of the internet". You will not see any other results, and people won't see yours, but you can compare yourself to the crowd.
P.S.: And seriously, any kid with Firebug/WebInspector can easily hack your JS game and eventually reach a very high score.
If you are relying on the client to send the final score to the server, then there is no way (afaik) to prevent a genius from cheating. But I think you might be able to prevent stupid people (and honest people) from cheating, so that only geniuses and their friends will dominate your leaderboards.
There are two ways I can think of
1.) "security through obscurity."
Come up with an algorithm that transforms simple scores into something else (and that can transform them back). Then obfuscate it. Complicate it. Write a function that multiplies it by q and divides it by ralph. Apply a bunch of functions to it, and among the 5-15 functions that do random stuff to it, include one that multiplies the number by 19 (a prime number). On your server, check to make sure every incoming number (or letter) is divisible by 19, and decode.
You have to write a bunch of complex code that transforms simple scores into something crazy-looking: a series of functions written in the least-efficient, most spaghetti-code fashion possible.
One thing you could do is have a set of disallowed values. I.e., perhaps all points awarded are even; if anyone tries to submit an odd number, they are obviously cheating (and very stupid).
2.) time
You should be able to know when the user started the game. You should have a session started and record when they requested the page. Then you should also be able to tell when they submitted their score. And you should also know what the time series is for maximum points, i.e. can you get 5 points per minute, 100 per minute, minute^3, etc.? If a user submits more points than are possible during that time, they are cheating.
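A minimal server-side sketch of that plausibility check (the rate constant and the session shape are assumptions):
const MAX_POINTS_PER_SEC = 5 / 60; // e.g. at most 5 points per minute

function isPlausible(session, submittedScore) {
  // session.startedAt was recorded when the game page was served
  const elapsedSec = (Date.now() - session.startedAt) / 1000;
  return submittedScore <= elapsedSec * MAX_POINTS_PER_SEC;
}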
You could also strike a balance between server and client processing and make the client send a progress update every x minutes by AJAX. If it fails to report, you assume it's been compromised (much like in Bond movies, when he's infiltrating the enemy's lair and snaps some guard's neck: when the guard doesn't respond to his next 10-minutely check-in, the alarms go off).
If you've ever played Zynga Poker, you've probably seen what happens when someone at the table has a slow internet connection.
Depending on the nature of the game, you could use other players to verify the results. In simple games this works great; in others you have to be clever and design many aspects around this feature. E.g., sometimes it is possible to replay and verify results based on logged actions. This trick works especially well for human-versus-AI games, as long as the model is deterministic.
Another option is redefining the score concept to be more user-centric, this is pretty easy to implement, but tends to be hard to devise, and only applies to a few categories of games.
Purely speculative approaches are also possible; it's sometimes pretty easy to tell when some parameters don't fit. This would not prevent cheating, but it would moderate it a lot.
The most complicated part is getting a small enough replay log, but since most data isn't random (except for player actions, which actually aren't that random either, because they depend on the game), it's essentially a matter of getting the design right.
Also, if gameplay is extended enough, for action games and the like you can get a lot of compression from doing some approximation, merging (e.g. motion vectors), and clipping uninteresting stuff.
Ideally you would send your entire event log to the server for checking. Perhaps you can implement a heuristic so you can easily determine if the score is within a set of bounds. For instance, if the total game time is 5 seconds you might expect a much lower score than with a much longer game time.
Alternatively, you could choose to manually check the event log for really high scores (the overall top-X, which should be fairly stable).
You will need a seeded random number generator if you're doing anything with randomness (like random events), which might be tricky if you hadn't already thought of it.
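For illustration, a small seedable generator (this is the well-known mulberry32 routine; any deterministic PRNG works). With a shared seed, the server can replay the client's "random" events and verify them:
function mulberry32(a) {
  return function () {
    a = (a + 0x6D2B79F5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // float in [0, 1)
  };
}

const rng = mulberry32(12345); // same seed -> same sequence everywhere
rng();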
You can find many more resources but it really just boils down to server-side checking. JavaScript is not unique in this, but likely easiest to exploit because you not only see the client-server communication but also the client-side source code!
HTML5 Multiplayer Game Security Solutions
http://lanyrd.com/2011/jsconf/sfggb/
Games like Starcraft only record the mouse clicks and key presses. The actual commands are then simulated. I expect 'Worms Armageddon' to do something similar but their random events (like the bounciness of bananas) aren't seeded properly so in the instant replay you might get a different result.
You could imagine something similar for MMORPGs. The server calculates your position based on the keypresses; the client merely tries to give a good early interpretation, but you may warp around when you're lagging, because the server will place you elsewhere on the map when it didn't receive the keypress events in time.
If you attack something, the server will check if you're close enough and how much damage you can expect to deal with current stats and equipment.
Record key points in the game, then submit the score together with these key points. When people look at the high scores, they can also see an overview of the played game; if it looks impossible to play like that without cheating, people can report the suspicious scores to admins.
I used a system based on timed requests with 3 parameters:
req number, curr time, score
The req number is returned by the server in the response to the update-score request; each time it is a new random value.
The curr time is calculated not from the computer clock but from the start of the game, and is synced with the server using an AJAX request.
The update-score request is sent at short intervals (around 30 sec max).
The following checks are applied on the server:
The time is within a 10-second range of the server clock.
No more than 40 seconds have passed since the req number was sent.
The score change since the last update is possible (within 2x the humanly possible range).
The score is updated only if the above checks pass; otherwise the user gets a disconnection message :(
This is simpler than most methods and works out to eliminate all casual hackers (well, unless they read this and want to go to the trouble of updating the score quickly or writing a script of their own).
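A sketch of those server-side checks (the thresholds mirror the ones above; the field names and the issuedTokens store are assumptions, not any particular framework's API):
const MAX_SCORE_DELTA = 100; // 2x the humanly possible change per update, game-specific

function validateScoreUpdate(body, issuedTokens, lastScore) {
  const now = Date.now();
  const token = issuedTokens.get(body.reqNumber);

  if (!token) return false;                                   // unknown or reused req number
  if (Math.abs(now - body.currTime) > 10000) return false;    // check 1: within 10s of server clock
  if (now - token.issuedAt > 40000) return false;             // check 2: req number under 40s old
  if (body.score - lastScore > MAX_SCORE_DELTA) return false; // check 3: plausible change

  issuedTokens.delete(body.reqNumber); // each req number is single-use
  return true;
}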
If preventing cheating is more important than the game itself, try to construct and present your game in a way that makes it look like finding the solution to a math problem. The server gives an instance of the problem to the client (example A: a chess board about to be won in 3 moves; example B: a randomly generated Geometry Dash level) and the user has to solve it and post back a solution (example A: the winning moves; example B: the exact timestamps and intensities of jumps to avoid obstacles).
With this approach, it is key that the server doesn't send the same level twice, or else the cheater can plan and "design" his solution in advance. Also, the game information must be randomly generated on the server and not sent via a seed, or else the cheater can fake the seed and design his solution at leisure.
The time allowed for valid submissions must also be tracked on the server, so that players only have "playing" time and no "designing" time. If a cheater is good enough to design a solution as fast as honest players can win the game, then they are talented enough to win the game honestly and deserve their points.
Back on the server, you will need to check that the submitted solution is valid for that instance.
Of course this approach requires lots of extra work: More instances of games (ideally infinite and non repeating), server side generation, server side validation of submissions, time caps, etc.
Note: I know this approach was already suggested in multiple answers some years ago; I just wanted to add my humble contribution.

What is the space efficiency of a directed acyclic word graph (dawg)? and is there a javascript implementation?

I have a dictionary of keywords that I want to make available for autocomplete/suggestion on the client side of a web application. The AJAX round trip introduces too much latency, so it would be nice to store the entire word list on the client.
The list could be hundreds of thousands of words, maybe a couple of million. I did a little bit of research, and it seems that a DAWG structure would provide space and lookup efficiency, but I can't find real-world numbers.
Also, feel free to suggest other possibilities for achieving the same functionality.
I have recently implemented a DAWG for a word-game-playing program. It uses a dictionary of 2.7 million words from the Polish language. The source plain-text file is about 33MB in size; the same word list represented as a DAWG in a binary file takes only 5MB. The actual size may vary, as it depends on the implementation, so the number of vertices (154k) and the number of edges (411k) are the more important figures.
Still, that amount of data is far too big for JavaScript to handle, as stated above. Trying to process several MB of data will hang the JavaScript interpreter for a few minutes, effectively hanging the whole browser.
My mind cringes at the two facts "couple of million" and "JavaScript". JS is meant to shuffle little pieces of data around, not megabytes. Just imagine how long users would have to wait for your page to load!
There must be a reason why the AJAX turnaround is so slow in your case. Google serves billions of AJAX requests every day and their type-ahead is snappy (just try it on www.google.com). So there must be something broken in your setup. Find it and fix it.
Your solution sounds practical, but you still might want to look at, for example, jQuery's autocomplete implementation(s) to see how they deal with latency.
A couple of million words in memory (in JavaScript in a browser)? That sounds big regardless of what kind of structure you decide to store it in. You might consider other kinds of optimizations instead, like loading subsets of your word list based on the characters typed.
For example, if the user enters "a" then you'd start retrieving all the words that start with "a". Then you could optimize your wordlist by returning more common words first, so the more likely ones will match up "instantly" while less common words may load a little slower.
From my understanding, DAWGs are good for storing and searching for words, but not when you need to generate lists of matches. Once you have located the prefix, you will have to traverse all its children to reconstruct the words which start with that prefix.
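As a minimal sketch of that reconstruction step over a trie-shaped structure (a DAWG shares suffix nodes but is traversed the same way; the node shape here is an assumption):
// node shape assumed: { children: { [char]: node }, terminal: boolean }
function wordsWithPrefix(root, prefix) {
  let node = root;
  for (const ch of prefix) { // locate the prefix node
    node = node.children[ch];
    if (!node) return [];
  }
  const results = [];
  (function walk(n, word) { // enumerate every completion below it
    if (n.terminal) results.push(word);
    for (const ch in n.children) walk(n.children[ch], word + ch);
  })(node, prefix);
  return results;
}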
I agree with others, you should consider server-side search.
