At the risk of getting roasted for not posting code: what is the best way to get around the six-concurrent-request limitation for AJAX requests?
Currently I'm working with an application that can have up to 40 or so AJAX requests on page load. For background, most of these requests are for various graphs hidden behind tabs. There are also some requests that can be triggered by the user (such as updating the name of an entity without refreshing the page). This means that, with the limitation on concurrent requests, the user won't be able to change anything until there are only 5 other requests running, and that's an unacceptable user experience.
This may mean that the app is structured badly, but most of the things loading are not required right away.
Anyway, I've looked a bit into fetch() and web workers, but I can't find any information on whether these would help get around the limitation.
One solution would be to put resources on different subdomains, but this makes the backend API unnecessarily complicated (and it's a browser issue, not a server issue).
I've considered these approaches:
delay requests until the user actively needs them (IMO this is a bad user experience, because they will end up waiting a little bit very often, which is annoying)
create a queuing system that leaves one spot open for user-initiated requests (I'm not sure exactly how to implement this, but it should be doable; see the rough sketch after this list)
restructure the API so that more data is returned per request (this again is mainly a backend solution that feels a little dirty and un-RESTful; also, it won't necessarily improve the load time)
chaining calls, as in Multiple Async AJAX Calls Best Practice (however, there is an unpredictable number of calls to unrelated endpoints, so I don't think this is all that practical here)
web workers? (again, not sure if these could help, since they are used to multithread JS)
fetch()? (I can't find information on whether it is subject to the same limitation)
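For the queuing idea, something like this rough sketch is what I have in mind (RequestQueue, the cap of 5, and the endpoints are made up for illustration; I haven't tested it): background requests go through a small queue held below the browser limit, while user-initiated requests bypass it and use the spare connection.

function RequestQueue(maxBackground) {
    this.maxBackground = maxBackground; // e.g. 5, leaving one connection free for the user
    this.active = 0;
    this.pending = [];
}

RequestQueue.prototype.enqueue = function (sendRequest) {
    // sendRequest must return a promise, e.g. a fetch() call
    var self = this;
    return new Promise(function (resolve, reject) {
        self.pending.push(function () {
            self.active++;
            sendRequest().then(resolve, reject).then(function () {
                self.active--;
                self.drain();
            });
        });
        self.drain();
    });
};

RequestQueue.prototype.drain = function () {
    // start queued background requests while slots are available
    while (this.active < this.maxBackground && this.pending.length > 0) {
        this.pending.shift()();
    }
};

// Background graph requests go through the queue...
var queue = new RequestQueue(5);
queue.enqueue(function () { return fetch('/graphs/revenue'); });
// ...while user-initiated calls fire directly on the reserved connection.
fetch('/entities/42/name', { method: 'PUT', body: JSON.stringify({ name: 'New name' }) });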
This is very much opinion based.
40 requests is not unreasonable, but depending on your server and site setup it can take quite a while.
With that many calls I would bundle some of them together in an initializePage=X call. This does involve some server-side work.
Delaying requests is not necessarily bad, depending on your estimated time to deliver. If possible you could present some kind of animation or "expected result" until the response ticks in, to keep the user entertained. The same applies to queueing your requests.
Restructuring your code to return everything in a bundle could also greatly speed up your site if you run a lot of initialization on your server (like security checks).
If performance is still a concern you can look into connections that provide faster results such as EventSource or WebSocket. Such a faster connection also allows for a more flexible approach to chaining. EventSource, for instance, supports events, so you could set several events on a single, bundled request and fire them as the server returns data.
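As a rough sketch of the EventSource idea (the /init-stream endpoint, the event names, and the render functions are made up): one bundled request, and the server emits a named event per graph as each piece becomes ready.

// One connection, many named events; each listener fires as soon as its data arrives.
var source = new EventSource('/init-stream');

source.addEventListener('revenue-graph', function (e) {
    renderRevenueGraph(JSON.parse(e.data)); // renderRevenueGraph is your own drawing code
});

source.addEventListener('usage-graph', function (e) {
    renderUsageGraph(JSON.parse(e.data));
});

source.addEventListener('done', function () {
    source.close(); // the server signals it has sent everything
});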
Web workers are not the answer, as the problem here is connection speed and concurrent connection limits.
I don't think we can answer this question directly. Several of the solutions you have mentioned are viable, but they vary in level of effort. If you are willing to adjust the architecture, you can consider a GraphQL approach, which can handle the bundling for you. I would also say that you can keep REST but add a special proxy service that bundles data for you. And don't let RESTfulness dictate or force how you develop.
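As a rough illustration of that bundling idea (the /graphql endpoint and field names are invented), a single query can ask for several graphs at once:

fetch('/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        query: '{ dashboard { revenueGraph { points } usageGraph { points } } }'
    })
})
.then(function (response) { return response.json(); })
.then(function (result) {
    // one round trip; all the graph data is under result.data.dashboard
});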
Also, delaying requests until the user needs them seems like the appropriate choice to me. It's the basis for why we have "above the fold" CSS styling and infinite scrolling. Load what is needed right now first and defer the stuff that might not actually matter until it is needed.
The concurrency limit on AJAX calls comes into the picture when the requests are issued from one thread. If a web worker is used for the AJAX calls there are no issues at all, the reason being that each web worker instance is isolated in a thread that is not the main thread.
I call this JaxWeb, and I will be pushing a git repo in the coming week where you can find pure JS code that takes care of it. It is being tested right now, but yes, it does solve the problem.
Example:
Add the code below to JaxWeb.js:
// JaxWeb.js runs inside a dedicated worker, so the XHR lives off the main thread
onmessage = function (e) {
    var JaxWeb = function (e) {
        var jax = {
            requestChannel: null,
            get_csrf_token: function () {
                return this._csrf_token;
            },
            set_csrf_token: function (_csrf_token) {
                this._csrf_token = _csrf_token;
            },
            prepare: function (e) {
                this.requestChannel = new XMLHttpRequest();
                this.requestChannel.onreadystatechange = function () {
                    if (this.readyState === 4 && this.status === 200) {
                        // Hand the parsed response back to the main thread
                        postMessage(JSON.parse(this.responseText));
                    }
                };
                this.requestChannel.open(e.data.method, e.data.callname, true);
                if (e.data.token) {
                    this.requestChannel.setRequestHeader("X-CSRF-TOKEN", e.data.token);
                }
                var postData = '';
                if (e.data.data) {
                    postData = JSON.stringify(e.data.data);
                }
                this.requestChannel.send(postData);
            }
        };
        jax.prepare(e); // fire the request immediately; 'this' now refers to the jax object
        return jax;
    };
    JaxWeb(e);
};
Usage:
var jaxWebGetServerResponse = function () {
    var wk2 = new Worker('path_to_jaxweb_js/JaxWeb.js');
    wk2.postMessage({
        "callname": '<url end point>',
        "method": '<your http method>',
        "token": '<your csrf token, if any>',
        "data": ''
    });
    wk2.onmessage = function (serverResponse) {
        // process the results with the data received from the server,
        // available as serverResponse.data
    };
};
// Invoke the function
jaxWebGetServerResponse();
Related
I've been dealing with a problem for a couple of days, and I'm really hoping you can help me.
It's a Node.js-based API using Sequelize for MySQL.
On certain API calls the code starts SQL transactions which lock certain tables, and if I send multiple requests to the API simultaneously, I get LOCK_WAIT_TIMEOUT errors.
var SQLProcess = function () {
    var self = this;
    var _arguments = arguments;
    return sequelize.transaction(function (transaction) {
        return doSomething({transaction: transaction});
    })
    .catch(function (error) {
        if (error && error.original && error.original.code === 'ER_LOCK_WAIT_TIMEOUT') {
            // back off for up to a second, then retry the whole operation (Promise.delay is Bluebird's)
            return Promise.delay(Math.random() * 1000)
                .then(function () {
                    return SQLProcess.apply(self, _arguments);
                });
        } else {
            throw error;
        }
    });
};
My problem is that the simultaneously running requests lock each other out for a long time, so my requests only return after a very long time (~60 seconds).
I hope I have explained it clearly and understandably, and that you can offer me a solution.
This may not be a direct answer to your question, but maybe looking at why you had this problem will also help.
1) What does that doSomething() do? Is there any way we can make some improvements there?
First, a transaction that takes 60 seconds is suspicious. If you lock a table for that long, chances are the design should be revisited, given that a typical DB operation runs in 10 - 100 ms.
Ideally, all the data preparation should be done outside of the transaction, including data read from the database, and the transaction should contain only the truly transactional operations.
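For instance, a rough sketch (the Account model and the computeNewBalance helper are made up for illustration): keep the reads and the in-memory work outside, and let the transaction wrap only the writes.

return Account.findAll({ where: { active: true } })            // read outside the transaction
    .then(function (accounts) {
        var updates = accounts.map(computeNewBalance);          // pure in-memory preparation
        return sequelize.transaction(function (transaction) {
            // the lock is only held while these updates run
            return Promise.all(updates.map(function (u) {
                return Account.update(
                    { balance: u.balance },
                    { where: { id: u.id }, transaction: transaction }
                );
            }));
        });
    });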
2) Is it possible to use a MySQL stored procedure?
True, stored procedures in MySQL are not compiled the way PL/SQL is for Oracle, but they still run on the database server. If your application is really complicated, with a lot of back-and-forth network traffic between the database and your Node application inside that transaction, and considering there are so many layers of JavaScript calls, it could really slow things down. If 1) doesn't save you a lot of time, consider using a MySQL stored procedure.
The drawback of this approach, obviously, is that it is harder to maintain code in both Node.js and MySQL.
If 1) and 2) are definitely not possible, you may consider some kind of flow-control or queuing tool: either your app makes sure the 2nd request doesn't go out until the first one finishes, or you use some 3rd-party queuing tool to handle that. It seems you don't need any parallelism in running those requests anyway.
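If you go the app-side route, a rough sketch (reusing the SQLProcess function from the question) is to chain each new call onto the previous one so only one transaction runs at a time:

var lastOperation = Promise.resolve();

function enqueueSQLProcess() {
    var args = arguments;
    lastOperation = lastOperation
        .catch(function () {})            // keep the chain alive even if a previous call failed
        .then(function () {
            return SQLProcess.apply(null, args);
        });
    return lastOperation;                 // callers still get a promise for their own result
}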
The main reason for deadlocks is poor database design. Without further information about your database design and which exact queries might or might not lock each other it is impossible to give you a specific solution for your problem.
However, I can give you some general advice and an approach to solve this issue:
I would make sure that your database is normalized at least into Third Normal Form, or even further if that still isn't enough. There might be tools to automate this process for you.
Aside from reducing the likelihood of deadlocks this also helps keeping your data consistent, which is always a good thing.
Keep your transactions as slim as possible. If you are inserting new rows into your tables and updating other tables accordingly, you might want to use a trigger rather than another SQL statement to do so. The same applies to reading rows and values: such things can be done before or after your transaction.
Choose the correct Isolation Level. Possible isolation levels are:
READ_UNCOMMITTED
READ_COMMITTED
REPEATABLE_READ
SERIALIZABLE
Sequelize's official documentation describes how you can set the isolation level and lock/unlock transactions by yourself.
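For example, a minimal sketch based on that documentation (doSomething is the placeholder from the question):

return sequelize.transaction({
    // run this transaction at the strictest isolation level
    isolationLevel: Sequelize.Transaction.ISOLATION_LEVELS.SERIALIZABLE
}, function (transaction) {
    return doSomething({ transaction: transaction });
});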
As I said, without further insight into your database and query design, that's all I can do for you right now.
Hope this helps.
One could simply encapsulate a number of synchronous requests as an asynchronous request.
The "func" parameter in the code below could, for example, contain multiple synchronous requests in order. This should give you more power over the data, in contrast to using the DOM as a medium to act on the data. (Is there another way? It has been a while since I used JavaScript.)
function asyncModule(func)
{
    "use strict";
    var t, args;
    // use a 1 ms delay unless the function carries its own timeout property
    t = func.timeout === undefined ? 1 : func.timeout;
    // forward everything after 'func' as arguments
    args = Array.prototype.slice.call(arguments, 1);
    setTimeout(function () {
        func.apply(null, args);
    }, t);
}
Now, something must be wrong with my reasoning, because here is what the spec says:
Synchronous XMLHttpRequest outside of workers is in the process of being removed from the web platform as it has detrimental effects to the end user's experience. (This is a long process that takes many years.) Developers must not pass false for the async argument when the JavaScript global environment is a document environment. User agents are strongly encouraged to warn about such usage in developer tools and may experiment with throwing an InvalidAccessError exception when it occurs. # https://xhr.spec.whatwg.org/
I would think you would want to avoid async in requests at all costs and instead wrap sync requests within an async function.
Here is the main question along with the follow-up.
Is there something wrong with the example I gave?
If not then:
How is forcing requests to be async the right solution?
It goes without saying that you have the freedom to debunk any of my "claims" if they are simply wrong or half-truths. I am confused about this, I'll give you that.
Keep in mind that I am testing JavaScript in the terminal, not in the browser. I used the web server in the Go programming language and everything seems to be working fine. It is not until I test the code in the browser that I get a hint about this spec.
This answer has been edited.
Yes, my reasoning was faulty!
There are two angles to think about.
What does async actually mean in javascript?
Can one async call stall another async call?
Async in JavaScript doesn't mean the script will be running in interleaved/alternating processes with more than one call stack. It is more like a global timed defer/postpone command that will fully take over once it gets its chance. This means an async call can be blocking, and the non-blocking "async: true" part is only a "trick" based on how XMLHttpRequest is implemented.
This means that encapsulating a synchronous request within setTimeout could end up waiting on a failed request that blocks other, unrelated async requests, whereas the "async: true" feature would only execute based on its state value.
This means that for older browser support you have to chain requests, or use the DOM as a medium, when you need to do multiple requests that depend on one another. Ugh...
Luckily for us, JavaScript has threads now (web workers). Now we can simply use threads to get a clean encapsulation of multiple correlated requests in sync (or any other background tasks).
In short:
The browser shouldn't have any problem running requests in sync as long as it happens within a worker. Browsers have yet to become an OS, but they are getting closer.
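A minimal sketch of what I mean (the endpoints are made up): inside a dedicated worker, synchronous XHR is still allowed, so correlated requests can run one after the other without blocking the page.

// worker script
onmessage = function () {
    var results = [];
    ['/api/first', '/api/second'].forEach(function (url) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', url, false); // synchronous is still permitted inside workers
        xhr.send();
        results.push(JSON.parse(xhr.responseText));
    });
    postMessage(results); // hand everything back to the page in one message
};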
P.S. This answer is more or less the result of trial and error. I made some test cases in Firefox and observed that async requests do halt other async requests. I am simply extrapolating from that observation. I will not accept my own answer in case I am still missing something.
EDIT (Again..)
Actually, it might be possible to use xhttp.timeout along with xhttp.ontimeout. See Timeout XMLHttpRequest
This means you could recover from bad requests if you abstract setTimeout and use it as a scheduler.
// Simple example
function runSchedular(s)
{
    setTimeout(function () {
        if (s.ptr < s.callQue.length) {
            // run the next queued call; it may reschedule itself by pushing onto the queue
            s = s.callQue[s.ptr++](s);
        } else {
            // queue exhausted: reset and idle at a slower tick
            s.ptr = 0;
            s.callQue = [];
            s.t = 200;
        }
        runSchedular(s);
    }, s.t);
}
I'm wondering if there's a way to cause JavaScript to wait for some variable-length code execution to finish before continuing, using events and loops. Before answering with timeouts, callbacks, or marking this as a duplicate, hear me out.
I want to expose a large API to a web worker. I want this API to feel 'native' in the sense that you can access each member using a getter which gets the information from the other thread. My initial idea was to compile the API and rebuild the entire object on the worker. While this works (and was a really fun project), it's slow at startup and cannot show changes made to the API without it being sent to the worker again after modification. Observers would solve part of this, and web workers' transferable objects would solve all of it, but they aren't widely adopted yet.
Since worker round-trip calls happen in a matter of milliseconds, I think stalling the thread for a few milliseconds may be an alright solution. Of course I would think about terminating in cases where calls take too long, but I'm trying to create a proof of concept first.
Let's say I want to expose the api object to the worker. I would define a getter for self.api which would fetch the first layer of properties. Each property would then be another getter and the process would continue until the final object is found.
worker.js
self.addEventListener('message', function(event) {
    self.dataReceived = true;
    self.data = event.data; // would actually build new getters here
});

Object.defineProperty(self, 'api', {
    get: function() {
        self.dataReceived = false;
        self.postMessage('request api first-layer properties');
        while (!self.dataReceived); // busy-wait until the host replies
        return self.data; // whatever properties were received from host
    }
});
For experimentation, we'll do a simple round-trip with no data processing:
index.html (only JS part)
var worker = new Worker("worker.js");
worker.onmessage = function() {
    // reply with an empty payload for this round-trip test; a real host would post the requested properties
    worker.postMessage({});
};
If onmessage interrupted the loop, the script should theoretically work. Then the worker could access objects like window.document.body.style on the fly.
My question really boils down to: is there a way to guarantee that an event will interrupt an executing code block?
From my understanding of events in JavaScript, I thought they did interrupt the current thread. Do they not, because it's executing a blank statement over and over? What if I generated code to be executed and kept doing that until the data returned?
is there a way to guarantee that an event will interrupt an executing code block
As #slebetman suggests in comments, no, not in Javascript running in a browser's web-worker (with one possible exception that I can think of, see suggestion 3. below).
My suggestions, in decreasing order of preference:
Give up the desire to feel "native" (or maybe "local" might be a better term). Something like the infinite while loop that you suggest also seems to be fighting very much against the cooperative multitasking environment offered by JavaScript, even when thinking about a single web worker.
Communication between workers in JavaScript is asynchronous. It can fail, or take longer than just a few milliseconds. I'm not sure what your use case is, but my feeling is that when the project grows, you might want to use those milliseconds for something else.
You could change your defined property to return a promise, and then the caller would call .then on the response to retrieve the value, just like any other asynchronous API.
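A minimal sketch of that suggestion, assuming the host answers the 'request api first-layer properties' message with a single reply:

Object.defineProperty(self, 'api', {
    get: function () {
        return new Promise(function (resolve) {
            self.addEventListener('message', function handler(event) {
                self.removeEventListener('message', handler);
                resolve(event.data); // whatever properties the host sent back
            });
            self.postMessage('request api first-layer properties');
        });
    }
});

// The caller then treats it like any other asynchronous API:
self.api.then(function (firstLayer) { /* use firstLayer here */ });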
Angular Protractor/Webdriver has an API that uses a control flow to simulate a synchronous environment using promises, by always passing promises about. Taking the code from https://stackoverflow.com/a/22697369/1319998
browser.get(url);
var title = browser.getTitle();
expect(title).toEqual('My Title');
By my understanding, each line above adds a promise to the control flow to execute asynchronously. title isn't actually the title, but a promise that resolves to the title for example. While it looks like synchronous code, the getting and testing all happens asynchronously later.
You could implement something similar in the web worker. However, I do wonder whether it will be worth the effort. There would be a lot of code to do this, and I can't help feeling that the main consequence would be that it would end up harder to write code using this, and not easier, as there would be a lot of hidden behaviour.
The only thing that I know of that can be made synchronous in JavaScript is XMLHttpRequest, when setting the async parameter to false: https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest#Parameters. I wonder if you could come up with some way of making a request to a server that maintains a connection with the main thread and passes data along that way. I have to say, my instinct is that this is quite an awful idea, and it would be much slower than just requesting data from the main thread.
As far as I know, there is nothing native in JS to do this, but it is relatively easy to do something similar. I made one for myself some time ago: https://github.com/xpy/whener/blob/master/whener.js .
You use it like when( condition, callback ), where condition is a function that should return true when your condition is met, and callback is the function that you want to execute at that time.
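The idea boils down to something like this minimal sketch (not the library's actual code; the 50 ms polling interval is just a default I picked):

function when(condition, callback, interval) {
    interval = interval || 50; // how often to re-check the condition, in ms
    (function check() {
        if (condition()) {
            callback();
        } else {
            setTimeout(check, interval);
        }
    })();
}

// Usage: run the callback once the worker has received its data
when(function () { return self.dataReceived; }, function () { /* use self.data */ });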
I have the following problem:
Breeze fetches metadata (23.4KB)
Breeze fetches lookups (4.5MB)
Right after lookups are downloaded, the browser will become unresponsive for about 30 seconds.
After this, everything works like a charm.
Why does Breeze not use timeouts to give the UI a chance to update?
Firefox complains about a long-running, unresponsive script, and the task manager shows Firefox/Chrome/etc. as unresponsive.
Am I doing something wrong, or this is by design?
If this is by design, can I use a Web Worker to do all the heavy operations and then return the whole model, or something like that?
I tried something like this:
var test = function (name) {
return Q.fcall(function () {
setTimeout(function () {
toastr.success(name); // Notify me
return EntityQuery.from(name)
.using(manager).execute()
}, 1000) // This should be zero
});
};
var primeData = function (name) {
return test('Languages')
.then(test('dummy1'))
.then(test('dummy2'))
.then(test('dummy3'))
.then(test('dummy4'))
};
However, the notifications seem to be popping up all at the same time, indicating that
return EntityQuery.from(name)
.using(manager).execute()
does not return when entity construction finishes, but when the JSON data for this entity has arrived.
EDIT
An answer using a web worker is provided here: BreezeJs with dedicated web worker
I think I see your point. Breeze hogs the UI thread while processing those thousands of arriving entities. If Breeze could somehow realize how much work it was doing, and would be doing, it could throw a timeout in there to give the UI a chance to breathe.
I'm not sure how safe that would be as Breeze would have to pick a moment that didn't leave the cache in an unstable state from someone's perspective.
I believe you can make this easier on yourself by breaking the one giant Lookups call into several smaller ones. You could still asynchronously await the completion of all the smaller lookup promises if that is critical to your app. The fact that they are independent promise callbacks should give you the relief you seek.
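Something along these lines (a sketch only; the lookup names are invented, and Q.all is Q's equivalent of Promise.all):

var lookupNames = ['Languages', 'Countries', 'Currencies'];

var lookupPromises = lookupNames.map(function (name) {
    return EntityQuery.from(name).using(manager).execute();
});

Q.all(lookupPromises).then(function (results) {
    // every lookup is now in cache; each result was processed in its own,
    // smaller callback, giving the UI a chance to breathe in between
});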
Please try that and let us know how it works for you.
P.S.: You also have a cool opportunity here to optionally cache these lookups in local storage (or IndexedDB) so you don't have to download them every time. You'd need a versioning scheme of course, and some plumbing, so this lies in your future once things are looking good.
I'm wondering what the consensus is on how many simultaneous asynchronous ajax requests is generally allowable.
The reason I ask, is I'm working on a personal web app. For the most part I keep my requests down to one. However there are a few situations where I send up to 4 requests simultaneously. This causes a bit of delay, as the browser will only handle 2 at a time.
The delay is not a problem in terms of usability, for now. And it'll be awhile before I have to worry about scalability, if ever. But I am trying to adhere to best practices, as much as is reasonable. What are your thoughts? Is 4 requests a reasonable number?
I'm pretty sure the browser limits the number of connections you can have anyway.
If you have Firefox, type in about:config and look for network.http.max-connections-per-server and that will tell you your maximum. I'm almost positive that this will be the limit for AJAX connections as well. I think IE is limited to 2. I'm not sure about Chrome or Opera.
Edit:
In Firefox 23 the preference with name network.http.max-connections-per-server doesn't exist, but there is a network.http.max-persistent-connections-per-server and the default value is 6.
That really depends on whether it works properly that way. If the logic of the application is built such that 4 simultaneous requests make sense, do it like this. If the logic isn't disturbed by packing multiple requests into one request, you can do that, but only if it won't make the code more complicated. Keep it as simple and straightforward as possible until you have problems, then you can start to optimize.
But you can ask yourself if the design of the app can be improved so that there is no need for multiple requests.
Also check it on a really slow connection. Simultaneous HTTP requests are not necessarily executed on the server in the proper order, and they might also return in a different order. That might give problems you'll only experience on slower lines.
It's tough to answer without knowing some details. If you're just firing the requests and forgetting about them, then 4 requests could be fine, as could 20 - as long as the user experience isn't harmed by slow performance. But if you have to collect information back from the service, then coordinating those responses could get tricky. That may be something to consider.
The previous answer from Christian has a good point - check it on a slow connection. Fiddler can help with that as it allows you to test slow connections by simulating different connection speeds (56K and up).
You could also consider firing a single async request that could contain one or more messages to a controlling service, which could then hand the messages off to the appropriate services, collect the results and then return back to the client. Having multiple async requests being fired and then returning at different times could present a choppy experience for the user as each response is then rendered on the page at different times.
In my experience 1 is the best number, but I'll agree there may be some rare situations that might require simultaneous calls.
If I recall, IE is the only browser that still limits connections to 2. This causes your requests to be queued, and if your first 2 requests take longer than expected or time out, the other two requests will automatically fail. In some cases you also get the annoying "allow script to continue" dialog in IE.
If your user can't really do anything until all 4 requests come back (especially with IE's bogged-down JavaScript performance), I would create a transport object that contains the data for all requests, and then a returning transport object that can be parsed and delegated on return.
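A rough sketch of that transport-object idea (the /api/batch endpoint, the message types, and the handlers object are all invented):

var transport = {
    messages: [
        { type: 'userProfile',   params: { id: 42 } },
        { type: 'notifications', params: { unreadOnly: true } }
    ]
};

var xhr = new XMLHttpRequest();
xhr.open('POST', '/api/batch', true);
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.onload = function () {
    if (xhr.status === 200) {
        JSON.parse(xhr.responseText).messages.forEach(function (message) {
            handlers[message.type](message.data); // delegate each piece to its own handler
        });
    }
};
xhr.send(JSON.stringify(transport));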
I'm not an expert in networking, but four probably wouldn't be much of a problem for a small to medium application; however, the larger it gets, the higher the server load, which could eventually cause problems. This really doesn't answer your question, but here is a suggestion: if the delay is not a problem, why don't you use a queue?
var request = []; // a queue of the requests to be sent to the server

request[request.length] = { /* whatever you want to send to the server */ };
startSend();

function startSend() { // if nothing else is in the queue, go ahead and send this one
    if (request.length === 1) {
        send();
    }
}

function send() { // the ajax call to the server using the first request in the queue
    var sendData = request[0];
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/your-endpoint', true); // hypothetical endpoint
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
        if (xhr.status === 200) {
            // when you get the response, send it to a function that processes the data
            process(JSON.parse(xhr.responseText));
        }
    };
    xhr.send(JSON.stringify(sendData));
}

function process(data) {
    request.splice(0, 1);
    if (request.length > 0) { // check to see if you need to do another ajax call
        send();
    }
    // process data
}
This probably isn't the best way to do it, but that's the idea; you could modify it to do 2 requests at a time instead of just one. Also, maybe you could modify it to send all the requests currently in the queue as a single request. Then the server splits them up, processes each one, and sends the data back, either all at once or even piece by piece, since the server can flush the data several times. You just have to make sure you are parsing the response text correctly.