Creating Permanent Loop in JavaScript

I wrote an Adobe AIR app that behaves like this:
User logs in and a permanent loop is created using setTimeout. This loop performs an HTTP request and compares the JSON MD5 string it returns against a global variable. If the two values differ, the DOM is updated with new content. When the user performs another action, such as sending a reply or deleting a message, a silent update is performed and this "pauses" the loop. It's basically like a simple email client.
The way I'm doing it is unreliable and causes memory leaks. I plan on rewriting it from the ground up, and I don't want to end up in the same boat that I'm in now. If anyone could give me examples of how they would do it or give me any advice, it would be greatly appreciated. Thanks in advance!

You shouldn't poll that often; instead, use a technique known as "long polling" or "COMET". Basically you send a request that stays open until there's a response (due to updated data, etc.) or a timeout. As soon as a response is received, you immediately send a new request.
This saves lots of bandwidth and server load as it drastically reduces the amount of requests sent.
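A minimal client-side sketch of that pattern (the /updates endpoint, the 25-second wait parameter, and updateDom are placeholders, not from the original app; it also assumes a fetch-capable runtime, so an older Adobe AIR build would need XMLHttpRequest instead):

// Hypothetical long-polling loop: the server holds /updates open until new
// data exists or its own timeout fires, then the client immediately re-polls.
async function longPoll() {
  while (true) {
    try {
      const res = await fetch('/updates?wait=25', { cache: 'no-store' });
      if (res.status === 200) {
        const data = await res.json();
        updateDom(data); // your existing DOM-update routine
      }
      // a 204/timeout response just falls through and we poll again
    } catch (err) {
      // network error: back off briefly so we don't hammer the server
      await new Promise(resolve => setTimeout(resolve, 5000));
    }
  }
}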

Related

Race condition with a variable changing in time

I have an array in memory (Node.js, server side) that I update every 10s, and a client that also makes a request every 10s. The request handler parses the array into a specific string format. The update process runs in a setInterval function.
I want to run a stress test against that endpoint. I thought that if I move the array-to-string parsing into the same place where I update the array, the service would only have to return a pre-built string (like a cache) and the stress test would not be a problem to pass.
My problem is this: if updating and parsing the array takes so long that the cached string has not yet been reassigned, the client will receive a stale value from the service while the update is still running. So my question is: how can I be sure the client always receives the correct value? That is, how can I avoid a race condition in this context?
The good news is that unless you have spawned a worker or another process, Node is single-threaded, so it is essentially impossible for Node (under normal circumstances) to encounter race conditions in the classic multithreading sense.
However, from your description it sounds like you are concerned about the asynchronous nature of your HTTP requests:
Client makes a request to the server
Server begins work
10 seconds pass (server is still working)
Client makes another request to the server, using outdated information since the server isn't done working.
Server returns old data, but at this point it is too late.
Fortunately, there is more good news: JavaScript has a ton of built-in support for asynchronous programming. Usually you would wrap your requests in a promise to avoid such pitfalls, resulting in a process that looks like this:
Client makes a request to the server
Server begins work
Client waits until server comes back before continuing
Server finishes work and returns data to client
Client sends another request to the server (ad infinitum)
You can also make your promises read like the synchronous code you're used to via the new(-ish) async functions: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function
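A rough sketch of that request-wait-request cycle with async/await (the /cached-string endpoint, render, and the 10-second delay are illustrative assumptions):

// Hypothetical sequential poll: the next request is only sent after the
// previous response has been handled, so responses can never arrive out of order.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

async function pollLoop() {
  while (true) {
    const res = await fetch('/cached-string'); // placeholder endpoint
    render(await res.text());                  // placeholder consumer
    await delay(10000);                        // wait 10s before the next request
  }
}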

Defending a non-idempotent post operation against being rapidly called in Node.js?

Short question: assuming a non-idempotent POST operation, how do you defend your POST request handlers in Node.js from being called multiple times before they can respond, and hence causing data corruption?
Specific case: I have a matching API which takes about 2-3 seconds to return (because it has to run through a large user base). There are a number of operations where the user can simply call it twice within the same second (this is a bug, but it's not under my control, and therefore answering that part does not constitute an answer to the root question). Under these conditions, multiple matches are selected for the user, which is not desirable. The desirable outcome would be for all of these rapid requests to have the same end result.
Specific constraints:
node.js / express / sequelize.
If we add a queue, every user's request will wait behind all other users' requests, which might have drastic implications during heavy traffic.
I propose a solution where the server gracefully responds to the same* request and hence no changes on the client are required.
* First we need to establish what makes two requests count as the "same". When you plan this kind of graceful, synchronized response, you would normally put an increasing counter into the client's request, and that counter is the unique attribute that marks a request as the same one. But since you might not have access to the client and have no such counter, you could define requests to be the same if their POST body + URL are identical (and you could require the user to be the same too).
So for every user, you immediately save the request when it reaches your server. An easy way is to hash the URL + POST body, say with SHA-256, and store it in an object like this:
requests[user][hashOfRequest] = null
That null will be replaced by a response object once your server has calculated it. Now you process the request.
If the client sends the same* request again you can easily find out by checking your requests[user][hashOfRequest].
If the server has finished processing, it will contain a response object, which you just send back to the client. If it's still empty, you need to wait for the server's processing of the first request to finish, perhaps using an event listener or another task-synchronization pattern.
Once the server has finished the first request, it generates the response, saves it with requests[user][hashOfRequest] = response, and emits the event, so that any waiting clients get the response too.
No more double processing; and connection drops, where the response never reaches the client, are also handled by this pattern.
Of course you should clean up the responses hash table after a time that fits your (client) scenario. Maybe 10 minutes after the request was put into the hash table.
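A rough sketch of this pattern in Express (the /match route, computeMatch, req.user, and the 10-minute cleanup are illustrative assumptions, not the asker's actual code):

const crypto = require('crypto');
const EventEmitter = require('events');

const requests = {};             // requests[user][hashOfRequest] = null | response
const done = new EventEmitter();

app.post('/match', async (req, res) => {
  const user = req.user.id;      // assumes some auth middleware has already run
  const hash = crypto.createHash('sha256')
    .update(req.originalUrl + JSON.stringify(req.body))
    .digest('hex');

  requests[user] = requests[user] || {};

  if (hash in requests[user]) {
    // Same request seen before: reply with the stored response,
    // or wait until the first request finishes computing it.
    const stored = requests[user][hash];
    if (stored !== null) return res.json(stored);
    return done.once(user + hash, result => res.json(result));
  }

  requests[user][hash] = null;                 // mark as "in progress"
  const result = await computeMatch(req.body); // placeholder for the 2-3 second work
  requests[user][hash] = result;
  done.emit(user + hash, result);
  res.json(result);

  setTimeout(() => delete requests[user][hash], 10 * 60 * 1000); // clean up after 10 minutes
});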
You can push all your requests into a queue. In this case all your responses will have to wait for the preceding ones to finish.
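If you do go the queue route, a naive per-user version can be as small as chaining promises (queues and enqueue are made-up names; this keeps one user's requests ordered without blocking other users):

// Chain each incoming task onto the previous one for the same user.
const queues = {}; // userId -> promise for that user's last queued task

function enqueue(userId, task) {
  const prev = queues[userId] || Promise.resolve();
  const next = prev.then(task, task); // run the task whether or not the previous one failed
  queues[userId] = next;
  return next;
}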
The other solution is to use sequelize transactions, but that would cause lock_wait_timeout errors in DB.
Try to use a transaction.
If you put all of your SQL commands into a transaction, I think the requests will be separated.
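For reference, a minimal Sequelize managed transaction looks roughly like this (Match, userId, and partnerId are hypothetical; Sequelize commits when the callback resolves and rolls back if it throws):

const match = await sequelize.transaction(async (t) => {
  const existing = await Match.findOne({
    where: { userId },
    transaction: t,
    lock: t.LOCK.UPDATE, // row lock so concurrent requests wait here
  });
  if (existing) return existing;
  return Match.create({ userId, partnerId }, { transaction: t });
});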
The solution I eventually settled on was a variation on #Festo's general approach, as follows:
adding a unique key constraint for the matches
Each parallel request attempts to create a new match; all but the first one fail to do so due to the constraint
if the insert fails on the constraint, the app just pulls the match already in the database, adds it to the rest of the matches, and returns it
This makes it impossible to spam-create new matches by rapidly calling the API. However, I am not satisfied with it, because the approach does not generalize to, e.g., non-deterministic operations that should still be idempotent (if matches were generated randomly, subsequent calls would return different matches, so a simple constraint check would be insufficient).
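A rough sketch of that constraint-based flow with Sequelize (the Match model, its fields, and the assumption of a unique index on userId are made up for illustration):

async function getOrCreateMatch(userId, candidateId) {
  try {
    // Succeeds only for the first of the parallel requests.
    return await Match.create({ userId, matchedUserId: candidateId });
  } catch (err) {
    if (err instanceof Sequelize.UniqueConstraintError) {
      // A parallel request already created the match; return the existing row.
      return Match.findOne({ where: { userId } });
    }
    throw err;
  }
}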
All my internet points for an answer that requires neither client-side ticket management (#Sushant) nor queues, and can handle non-deterministic idempotent functions.
Under my APIs I use express-jwt and token-based authentication for all REST authentication, and I keep these tokens valid for only one request. Once used, a token is blacklisted.
So even if your client issues multiple requests, only the first one will be accepted. The others can be rejected with 409 Conflict. Once the first request is processed, a new API token is sent back along with the response, perhaps in the headers.
Your client will have to keep updating the token from each response. If you are using AngularJS on the client, that's pretty easy using interceptors.
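A very rough sketch of the single-use-token idea with jsonwebtoken (the in-memory Set blacklist, the header names, and oneShotAuth are illustrative assumptions; the answer itself uses express-jwt and presumably persistent storage):

const jwt = require('jsonwebtoken');
const usedTokens = new Set(); // illustrative in-memory blacklist

function oneShotAuth(req, res, next) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  let payload;
  try {
    payload = jwt.verify(token, process.env.JWT_SECRET);
  } catch (e) {
    return res.sendStatus(401);
  }
  if (usedTokens.has(token)) return res.sendStatus(409); // already consumed
  usedTokens.add(token);
  req.user = payload;
  // issue a fresh single-use token for the client's next request
  res.set('X-Next-Token', jwt.sign({ sub: payload.sub }, process.env.JWT_SECRET, { expiresIn: '15m' }));
  next();
}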

Is it a bad idea to make an AJAX post call every 2 secs?

If I make an AJAX $.post call (with jQuery) to a PHP file to update a certain parameter/number, is it considered bad practice, dangerous, or similar?
$.post("file.php", { value: value }, function (data) {
// something
}, "json");
It would be a single user on a single page updating a number by clicking on an object. For example if user A is updating a certain number by clicking on an object user B should see this update immediately without reloading the page.
It depends on 3 main factors:
How many users will you have at any given time?
How much data is being sent per request on average?
Given 1 and 2, is your server set up to handle that kind of load?
I have a webapp that's set up to handle up to 10-20k users simultaneously; it makes a request each time the user changes a value on their page (which can be more than one request per second), and it sends roughly 1000 bytes on each request. I get an average response time of 10ms, but that's with Node.js. I originally started the project in PHP, but it turned out to be too slow for my needs.
I don't think WebSockets are the right tool for what you're doing, since you don't need the server to push to the client, and a constant connection can be much more expensive than sending a request every few seconds.
Just be sure to do lots of testing and then you can make judgements on whether it'll work out or not for your specific needs.
tl;dr - It's not a good idea if your server can't handle it. Otherwise, there's nothing wrong with it.
Another solution could be to cache user actions in local storage or variables, send them all at once every 10-15 seconds or so, and then clear the cache once the send has succeeded.
In this case you should also validate the data in local storage to prevent tampering.
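A small sketch of that batching idea (the pendingActions key, the file.php payload shape, and the 10-second interval are arbitrary choices):

// Queue clicks locally, then flush them in a single POST every 10 seconds.
function queueAction(action) {
  const queue = JSON.parse(localStorage.getItem('pendingActions') || '[]');
  queue.push(action);
  localStorage.setItem('pendingActions', JSON.stringify(queue));
}

setInterval(function () {
  const queue = JSON.parse(localStorage.getItem('pendingActions') || '[]');
  if (!queue.length) return;
  $.post('file.php', { actions: JSON.stringify(queue) }, function () {
    localStorage.removeItem('pendingActions'); // clear only after a successful send
  }, 'json');
}, 10000);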

How to validate remote data in a client?

I'm designing a client-server system, and I need to understand how to check whether the client's data is correct when it sends operations and requests. In this particular case, I've got a browser and a JavaScript client that gets data from long polling and updates a series of objects which get bound to HTML elements, pretty much MVVM.
The steps are something like this:
start polling
get full data
convert the JSON into a JavaScript object
update every HTML element tied to the data
The user can fire an event at any time and works with the latest updated local model.
user fires event
event + full data (all objects converted to JSON) is sent
The problems: it's very rough and possibly slow, and heavy on both the client and the server.
My objectives are to reduce the data transfer to a minimum, and avoid client side corruption/attacks.
How should I go about this?
My objectives are to reduce the data transfer to a minimum
Send only the data that's changed, but the highest cost in AJAX is the request, so unless you are sending a lot of data, it may not make any noticeable difference.
and avoid client side corruption/attacks
Impossible. Your code is running in a browser, the user can do whatever they want.
My objectives are to reduce the data transfer to a minimum
Some things to try:
Reduce the number or frequency of client events that send an update
Send only what data has changed
Compress the data you send
Bundle several events into a single request
and avoid client side corruption/attacks.
To avoid attacks, you need to validate all input on the server. You should write your validator without knowledge of the client. You can assume nothing about what combination of data you will get; instead, assume that someone is hand-crafting requests in a text editor and sending them with curl.
To avoid corruption (really a "lost update"), use conditional PUTs or POSTs with the If-Match or If-Unmodified-Since headers.
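A minimal sketch of such a conditional update from the client side (it assumes the server returns an ETag on reads and answers 412 Precondition Failed when the ETag no longer matches):

// Only apply the update if the server's copy still matches the ETag we last saw.
async function saveItem(url, etag, data) {
  const res = await fetch(url, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json', 'If-Match': etag },
    body: JSON.stringify(data),
  });
  if (res.status === 412) {
    // Someone else updated the resource first: re-fetch, merge, and retry.
    throw new Error('Lost update detected');
  }
  return res.headers.get('ETag'); // keep the new ETag for the next save
}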

How to make auto-updating (ajax) counter correctly? Or how to disable network log?

I'm trying to make an auto-reloading counter (for example: Messages [num]).
Right now I just fetch JSON from test_ajax.php inside a setTimeout() loop. I don't think that's the right way to do it.
Can the server push the info to me instead (I think not, but maybe there's something I don't know)?
Why I think my approach is wrong: when I look at my Chrome network log (F12 -> Network tab), I see a lot of requests to test_ajax.php, but when I visit vk.com (a great example of AJAX) or facebook.com, I don't see any requests until something actually changes.
So, what's incorrect (or just bad) about my solution?
UPD: Sorry, vk.com does send a request to q%NUM%.queue.vk.com every 25s, but until those 25s pass the last request's status stays "Pending". When someone sends me a message, for example, it is displayed immediately. The request has a parameter "wait" which equals 25, so the delay is implemented on the server side... but how?
An AJAX counter can be built easily; just include the files below:
index.html
counter.php (the AJAX endpoint)
the necessary images
a JS file (for the jQuery paging call)
Download link: https://docs.google.com/open?id=0B5dn0M5-kgfDcE0tOVBPMkg2bHc
What you are looking for is called COMET (also sometimes called Reverse AJAX).
Doing what you are doing now, i.e. regular polling, is one way of doing it.
A lot is actually happening on the server side; to avoid creating a new connection on every poll, some servlet containers like Jetty implement techniques such as Continuations, which basically keep a two-way connection open.
In the Java world, Servlet 3 includes asynchronous calls as part of the spec.
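A toy Node.js illustration of that server-side hold (the /poll route, the inbox emitter, and the 25-second default are placeholders for whatever vk.com actually runs):

const express = require('express');
const EventEmitter = require('events');

const app = express();
const inbox = new EventEmitter(); // something else emits 'message' when new data arrives

app.get('/poll', (req, res) => {
  const wait = Number(req.query.wait || 25) * 1000;

  const onMessage = (msg) => {
    clearTimeout(timer);
    res.json({ messages: [msg] }); // answer as soon as something happens
  };

  const timer = setTimeout(() => {
    inbox.removeListener('message', onMessage);
    res.json({ messages: [] });    // nothing happened: answer empty, the client re-polls
  }, wait);

  inbox.once('message', onMessage);
});

app.listen(3000);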
