On ajax error, cache and try later - javascript

I have a mobile project where I have to send ajax requests one after the other. The project runs over a mobile internet connection (EDGE, 3G), so it can happen that I lose the connection; in that case I have to cache the failed request (in localStorage), check at intervals for a valid connection, and then retry the request.
At the same time other requests come in (from the browser), so I have to cache those requests in a queue as well and send them all, in order.
Sorry for my bad English, I hope you can understand my problem.
Any suggestions? Are there any libraries for my problem?

Maybe you can use the logic below (a sketch follows the list).
1. Create an array which will hold the status of each of your ajax requests.
2. When you make a request, add it to the array and set its result (response received) to false.
3. When you receive the response for that request, update the array and set its result (response received) to true.
4. Read this array at intervals and resend any request whose result is still false.
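A minimal sketch of that status-array idea, using jQuery like the other snippets here (the names, the 10-second interval, and the endpoint handling are assumptions, not a specific library):

// Each entry tracks one request and whether its response has arrived.
var pending = [];

function send(entry) {
  $.ajax({ url: entry.url, data: entry.data })
    .done(function () { entry.done = true; })  // step 3: response received
    .fail(function () { /* leave entry.done === false so it gets retried */ });
}

function track(url, data) {
  var entry = { url: url, data: data, done: false };  // step 2
  pending.push(entry);
  send(entry);
}

// Step 4: at intervals, retry every request whose response never arrived.
setInterval(function () {
  pending = pending.filter(function (e) { return !e.done; });
  pending.forEach(send);
}, 10000);

To survive page reloads you could persist the pending array to localStorage (as the question suggests) by serializing it with JSON.stringify on every change and reloading it on startup.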

Related

Defending a non-idempotent post operation against being rapidly called in Node.js?

Short question: assuming a non-idempotent POST operation, how do you defend your POST request handlers in Node.js from being called multiple times before they can respond, thereby causing data corruption?
Specific case: I have a matching API which takes about 2-3 seconds to return (due to having to run through a large userbase). There are a number of operations where the user can simply call it twice within the same second (this is a bug, but not under my control, and therefore answering this part does not constitute an answer to the root question). Under these conditions, multiple matches are selected for the user, which is not desirable. The desirable outcome would be for all of these rapid requests to have the same end result.
Specific constraints:
node.js / express / sequelize.
If we add a queue, every single user's request will sit behind all other users' requests, which might have drastic implications during heavy traffic.
I propose a solution where the server gracefully responds to the same* request and hence no changes on the client are required.
* First we need to establish what it means for two requests to be considered the "same". When you plan this kind of graceful, synchronous response, you would put an increasing counter into the client's requests, and this counter would be the unique attribute that identifies a request. But since you might not have access to the client and have no such counter, you could define requests to be the same if their POST body + URL are the same (and could throw in that the user needs to be the same too).
So for every user you immediately save the request when it reaches your server. An easy way would be to hash the url + post-body, say with SHA-256 and save it in an object like this:
requests[user][hashOfRequest] = null
That null will be replaced by a response object once your server has calculated it. Now you process the request.
If the client sends the same* request again you can easily find out by checking your requests[user][hashOfRequest].
If the server has finished processing, it will contain a response object, which you just send back to the client. If it is still null, you need to wait for the processing of the first request to finish, maybe using an event listener or another task-synchronization pattern.
Once the server has finished the first request, it will generate the response, save it via requests[user][hashOfRequest] = response, and emit the event, so that any waiting clients get the response too.
This prevents double processing, and connection drops where the response never reaches the client are also handled by this pattern.
Of course you should clean up the responses hash table after a time that fits your (client) scenario. Maybe 10 minutes after the request was put into the hash table.
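A rough Express-style sketch of this pattern (the route, the runMatching function, the auth middleware, and the event naming are assumptions made for illustration):

var express = require('express');
var crypto = require('crypto');
var EventEmitter = require('events');

var app = express();
app.use(express.json());

var requests = {};               // requests[user][hashOfRequest] = response or null
var bus = new EventEmitter();

function hashRequest(req) {
  // "same" request = same URL + POST body (plus the same user, see above)
  return crypto.createHash('sha256')
               .update(req.originalUrl + JSON.stringify(req.body))
               .digest('hex');
}

app.post('/match', function (req, res) {
  var user = req.user.id;        // assumes some auth middleware ran earlier
  var hash = hashRequest(req);
  requests[user] = requests[user] || {};

  if (hash in requests[user]) {
    var cached = requests[user][hash];
    if (cached !== null) return res.json(cached);   // already computed
    // still in flight: wait for the first request to finish
    return bus.once(user + ':' + hash, function (response) {
      res.json(response);
    });
  }

  requests[user][hash] = null;   // mark as "in flight"
  runMatching(req).then(function (response) {       // hypothetical 2-3s operation
    requests[user][hash] = response;
    bus.emit(user + ':' + hash, response);
    res.json(response);
    // clean up after ~10 minutes, as suggested above
    setTimeout(function () { delete requests[user][hash]; }, 10 * 60 * 1000);
  });
});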
You can push all your requests into a queue; in that case every response has to wait for the preceding ones to finish (see the sketch below).
The other solution is to use Sequelize transactions, but that could cause lock_wait_timeout errors in the DB.
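For the queue option, a minimal per-process sketch (handleMatchRequest is a placeholder for the real work):

// Promise-chain queue: each task waits for all previously enqueued tasks.
var queueTail = Promise.resolve();

function enqueue(task) {
  var result = queueTail.then(task);
  queueTail = result.catch(function () {});  // keep the chain alive on errors
  return result;
}

// Usage: the second call only starts once the first has finished.
enqueue(function () { return handleMatchRequest('user-1'); });
enqueue(function () { return handleMatchRequest('user-1'); });

As noted above, a single global queue serializes all users behind each other; keying one queue per user would limit that damage.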
Try to use a transaction.
If you put all of your SQL commands into a transaction, I think the requests will be isolated from each other.
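A sketch of what that could look like with Sequelize's managed transactions (the Match model, userId, and pickPartner are made up; the row lock makes concurrent requests wait on each other):

// assumes: sequelize is your configured Sequelize instance
// and Match a model defined on it
sequelize.transaction(function (t) {
  // SELECT ... FOR UPDATE: concurrent requests block on this row lock
  return Match.findOne({
    where: { userId: userId },
    lock: t.LOCK.UPDATE,
    transaction: t
  }).then(function (existing) {
    if (existing) return existing;   // another request already created the match
    return Match.create(
      { userId: userId, partnerId: pickPartner() },
      { transaction: t }
    );
  });
}).then(function (match) {
  // note the warning above: long transactions risk lock_wait_timeout errors
});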
The solution I eventually settled on was a variation on @Festo's general approach, as follows:
adding a unique key constraint for the matches
Each parallel request attempts to create a new match; all but the first one will fail to do so due to the constraint
if the constraint insert fails, the app just pulls the match already in the database, adds it to the rest of the matches, and returns it (sketched below)
This makes it impossible to spam-create new matches by rapidly calling the API. However, I am not satisfied with this, because the approach does not generalize to e.g. non-deterministic idempotent operations (e.g. if the matches were generated randomly, subsequent calls would return different matches, and therefore a simple constraint check would be insufficient).
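In Sequelize terms, the settled-on approach looks roughly like this (model and field names are invented for the sketch):

// assumes: var Sequelize = require('sequelize');
//          var sequelize = new Sequelize('sqlite::memory:');
// A unique constraint on userId means only one match row can exist per user.
var Match = sequelize.define('Match', {
  userId:    { type: Sequelize.INTEGER, unique: true },
  partnerId: { type: Sequelize.INTEGER }
});

function getOrCreateMatch(userId) {
  return Match.create({ userId: userId, partnerId: pickPartner() })
    .catch(function (err) {
      if (!(err instanceof Sequelize.UniqueConstraintError)) throw err;
      // a parallel request won the race: return the row it inserted
      return Match.findOne({ where: { userId: userId } });
    });
}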
All my internet points for an answer which does not require client-side ticket management (@Sushant), nor queues, and can handle non-deterministic idempotent functions.
In my APIs I use express-jwt and token-based authentication for all REST authentication. I keep these tokens valid for only one request; once used, a token gets blacklisted.
So even if your client issues multiple requests, only the first one will be accepted. The others can be rejected with 409 Conflict. Once the first request is processed, a new API token is sent back along with the response, perhaps in the headers.
Your client will have to keep updating the token from each response. If you are using AngularJS on the client, that's pretty easy using interceptors.
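A hand-rolled sketch of that single-use-token idea (this is not express-jwt's actual middleware; the jsonwebtoken usage, the in-memory blacklist, and the header name are all assumptions):

var express = require('express');
var jwt = require('jsonwebtoken');   // any JWT library would do

var app = express();
var SECRET = 'change-me';            // assumption: shared signing secret
var usedTokens = new Set();          // in production, use Redis or similar

function oneTimeAuth(req, res, next) {
  var token = (req.headers.authorization || '').replace('Bearer ', '');
  try {
    req.user = jwt.verify(token, SECRET);
  } catch (e) {
    return res.sendStatus(401);      // missing or invalid token
  }
  if (usedTokens.has(token)) {
    return res.sendStatus(409);      // 409 Conflict, as suggested above
  }
  usedTokens.add(token);             // the token is now spent
  next();
}

app.post('/match', oneTimeAuth, function (req, res) {
  // issue a fresh token with the response, e.g. in a header
  res.set('X-Next-Token', jwt.sign({ id: req.user.id }, SECRET));
  res.json({ ok: true });
});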

avoid HTTP GET after successful HTTP DELETE done through angular $resource

Looking at the developer tools of my browser I noticed my application is doing an unnecessary HTTP GET request after a successful delete operation done through $resource.delete.
In the Angular documentation for $resource I can see:
"Success callback is called with (value, responseHeaders) arguments, where the value is the populated resource instance or collection object. The error callback is called with (httpResponse) argument."
so it looks like that is why it is doing the request.
My issue, though, is that this happens on successful delete operations, so the GET request always returns an empty 200 OK.
I'd like to avoid this extra HTTP GET request on successful delete operations; does anybody know how I can achieve this?
I do want to use a success callback function, but I don't need the value of the deleted object (in fact there is no value since the HTTP GET returns no content).
This is probably down to how you have implemented your code: Angular doesn't make any GET request after a DELETE unless it is specifically told to.
You may want to verify this yourself again; otherwise I would ask you to show your requests.
Hope this helps!!
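For reference, a plain $resource delete with a success callback does not trigger a follow-up GET by itself (a minimal sketch; the URL and id are made up):

// AngularJS: DELETE with a success callback, no extra GET involved.
var Item = $resource('/api/items/:id');

Item.delete({ id: 42 }, function (value, responseHeaders) {
  // runs on a successful DELETE; 'value' is empty when the server
  // returns no content, and $resource does not issue a GET here
});

If your own success handler calls something like Item.get() or Item.query(), that would explain the extra request.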

How to regulate sending requests to server?

I need to send many requests to the server (50-100) to load data; each response carries at least 0.5KB and at most 7KB of data.
I send the requests using ajax as follows (code is simplified):
for (var i = 0; i < elements.length; i++) {
  var element = elements[i];
  // make ajax call with element as parameter and update page to show data for element
}
This works for my needs, because I don't need the data to come back from the server in order, and it works most of the time. But sometimes the last few elements don't get loaded and I get a communication link failure error in my Chrome JavaScript console.
I am assuming that the server got overloaded. How can I regulate the sending of requests to make sure I get a response for each request in the shortest time possible?
Notes:
I use Spring MVC in the backend
I use ExtJS Ajax to make the requests
Try using separate loops for your data-loading process. Overloading is the likely cause of that communication link failure.
I solved this by calling each request recursively: only one request is sent to the server at a time, and overload is avoided.
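A sketch of that recursive approach, reusing the question's simplified loop (the Ext.Ajax endpoint and parameters are placeholders):

// Send requests one at a time: each response triggers the next request.
function loadNext(i) {
  if (i >= elements.length) return;     // all elements loaded
  var element = elements[i];
  Ext.Ajax.request({
    url: '/load',                       // hypothetical endpoint
    params: { element: element },
    success: function (response) {
      // update the page to show data for this element, then continue
      loadNext(i + 1);
    },
    failure: function () {
      loadNext(i + 1);                  // or retry, depending on your needs
    }
  });
}

loadNext(0);

A middle ground is to run a small fixed number of these chains in parallel (say 4), which keeps the server load bounded without fully serializing the transfers.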

Regarding multiple ajax requests for the same function

My Aim:
To get responses from ajax and show them on a "first come, first served" basis.
Technology: ruby on rails + ajax (javascript)
Explanation:
In the image below, 5 requests are shown. The 1st and 2nd requests are re-run as the same 4th and 5th requests.
The third request is expected to take a while; all other requests should take less than a second.
I wish to get responses from the server via ajax independently of the order in which the requests were sent.
In short, if the 3rd request completes in 4.49 seconds and the 5th request takes 0.5 seconds, the 5th request should not wait for the third. Is it possible? How?
Kindly help me!
From the Ruby 1.9.x Web Servers Booklet
WEBrick is implemented as a single-process, multi-threaded server. Nothing prevents you from starting several WEBricks, each listening on its own port, and load balancing between them via an external load balancer. But the server itself does not provide any multiprocessing features of its own.
If you want to process several requests in parallel, you may need to choose a different server or server setup.

Ajax requests and race conditions (client and server side)

Let's imagine that we've sent two similar (almost identical) async ajax requests to the server, one after the other. Because of network lag, the second request was executed before the first.
Ajax request #1: /change/?object_id=1&position=5
Ajax request #2: /change/?object_id=1&position=6
As a result, object_id=1 has its position set to position=5, but we want position=6, because Ajax request #2 was issued after Ajax request #1.
What is the best practice to avoid this on server side and client side?
Are you worried about race conditions from the same client or from multiple clients?
If from the same client, I would think the safest bet would be to include a unix timestamp in the ajax request and log this value on the server. If a request comes with a timestamp that is older than the last logged value, ignore the request (or send a warning back to the browser).
I'm not sure how you would handle multiple clients with unsynchronized clocks...
For situations like this, I usually put a check in my success handler to make sure that the value being returned is still the one I want. This requires the server to echo back the parameter you searched for in the results object.
For example:
var query = $('input').val();
$.get('/search', { query: query }, function(res) {
  if (res.query == $('input').val()) {
    // show search results
  }
});
I don't know the particulars of your use case, but this general pattern should help.
On the server:
Build a request table mapping request id to timestamp.
Log every request to the server; expect all requests to come with a timestamp.
If a request comes out of order (e.g. position 6 arrives before position 5), check the request table: if it is an earlier request (by timestamp), do not process it and send back an ignore flag.
If it comes in order, that is fine; proceed as usual, with no need to send any ignore flag.
On the client:
When a response comes back, check the ignore flag. If it is set, don't do anything on the client.
Otherwise proceed as usual by processing the data.
Note that this implementation requires you to send data back and forth (such as JSON) rather than presentation code (such as HTML fragments), as you need to check for the ignore flag on the client side.
This answer is similar to @Farray's suggestion of using a timestamp. A sketch of both halves follows.
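Sketched out with the question's /change endpoint (Express-style server; the ts parameter, setPosition, and the response shape are assumptions):

// assumes an Express app: var app = require('express')();
// Server: remember the newest timestamp seen per object, ignore stale requests.
var lastSeen = {};   // lastSeen[objectId] = latest timestamp processed

app.get('/change', function (req, res) {
  var id = req.query.object_id;
  var ts = Number(req.query.ts);          // client must send a timestamp
  if (lastSeen[id] !== undefined && ts < lastSeen[id]) {
    return res.json({ ignore: true });    // out of order: do not process
  }
  lastSeen[id] = ts;
  setPosition(id, req.query.position);    // hypothetical update
  res.json({ ignore: false, position: req.query.position });
});

// Client: include the timestamp and honour the ignore flag.
$.get('/change', { object_id: 1, position: 6, ts: Date.now() }, function (res) {
  if (res.ignore) return;   // stale request: leave the client untouched
  // otherwise proceed as usual by processing the data
});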
