I have an unpleasant situation where one of my "long" responses somehow blocks other AJAX requests.
I call 3 different resources simultaneously:
var list = ['/api/filters', '/api/criteria/brands', '/api/criteria/genders'];
list.forEach(function (item) { $.get(item); }); // the server log below shows these arriving as GET requests
On the server side I can see the following times in the log file:
GET /api/filters 304 51ms
GET /api/criteria/genders 200 1ms
GET /api/criteria/brands 200 0ms
That looks fine to me, but in the browser the picture is completely different.
[screenshot: Google Chrome Network tab showing the three requests]
So it looks like the browser waits for the answer to the first (long) request and only afterwards receives the last 2 results.
What could be the reason for this behavior?
Every browser handles only a specific number of simultaneous requests at a time. If you fire 10 AJAX requests at the same time, the browser puts them in a queue and handles them one after the other.
You can find more information about concurrent request limits in browsers (the limit includes images, JavaScript, etc. as well) in this question.
The Node server is single-threaded, and any piece of code that uses CPU cycles blocks the entire process.
As a result, if GET /api/filters does a lot of CPU-intensive computation, it will block all other requests until it completes. Adding more information about what it actually does would help in putting together a better answer.
If you have I/O operations in there, try to make them asynchronous. That will allow Node to serve the other URLs while the first one is doing I/O.
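As a rough sketch of that advice (the file name filters.json and the handler bodies are assumptions, not from the question), compare a blocking read with a non-blocking one in a bare Node HTTP server:

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
    if (req.url === '/api/filters') {
        // Blocking variant: fs.readFileSync here would stall the whole
        // process, so the fast endpoints below could not be answered
        // until it finished.
        // Non-blocking variant: the event loop keeps serving other URLs
        // while the disk read is in flight.
        fs.readFile('./filters.json', function (err, data) {
            res.end(err ? '{}' : data);
        });
    } else {
        res.end('[]'); // the '/api/criteria/*' endpoints answer immediately
    }
}).listen(3000);

Note that if /api/filters is slow because of CPU-bound computation rather than I/O, asynchronous I/O alone won't help; the computation itself still blocks the loop.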
It helps me to understand things by using real-world comparisons; in this case, fast food.
In Java, for synchronous blocking, I understand that each request is processed by a thread and can only be completed one at a time. It's like ordering through a drive-through: if I'm tenth in line, I have to wait for the 9 cars ahead of me. But I can open up more threads so that multiple orders are completed simultaneously.
In JavaScript you can have asynchronous, non-blocking, but single-threaded execution. As I understand it, multiple requests are made, those requests are immediately accepted, but each request is processed by some background process at some later time before returning. I don't understand how this would be faster. If you order 10 burgers at the same time, the 10 orders would be put in immediately, but since there is only one cook (single thread) it still takes the same time to make the 10 burgers.
I understand the reasoning for why non-blocking async single-threaded "should" be faster for some things, but the more questions I ask myself, the less I understand it.
I really don't understand how non-blocking async single-threaded can be faster than synchronous blocking multithreaded for any type of application, including I/O.
Non-blocking async single threaded is sometimes faster
That's unlikely. Where are you getting this from?
In multi-threaded synchronous I/O, this is roughly how it works:
The OS and appserver platform (e.g. a JVM) work together to create 10 threads. These are data structures represented in memory, and a scheduler running at the kernel/OS level will use these data structures to tell one of your CPU cores to 'jump to' some point in the code to run the commands it finds there.
The data structure that represents a thread contains more or less the following items:
The location in memory of the instruction we were running.
The entire 'stack'. If some function invokes a second function, then we need to remember all the local variables and the point we were at in that original method, so that when the second method 'returns', it knows how to do that. Your average Java program is probably ~20 methods deep, so that's 20x the local vars and 20 places in code to track. This is all done on stacks; each thread has one, and they tend to be a fixed size for the entire app.
The cache page(s) that were spun up in the local cache of the core running this code.
The code in the thread is written as follows: all commands that interact with 'resources' (which are orders of magnitude slower than your CPU; think network packets, disk access, etc.) are specified to return the requested data immediately, which is only possible if everything you asked for is already available in memory. If that is impossible, because the data you want just isn't there yet (say the packet carrying it is still on the wire, heading to your network card), there's only one thing the code that powers the 'get me network data' function can do: wait until that packet arrives and makes its way into memory.
To not just sit there doing nothing at all, the OS and CPU will work together to take the data structure that represents the thread, freeze it, find another such frozen data structure, unfreeze it, and jump to the 'where did we leave things' point in its code.
That's a 'thread switch': Core A was running thread 1. Now core A is running thread 2.
The thread switch involves moving a bunch of memory around: all those 'live' cached pages and that stack need to be near the core for the CPU to do its job, so the CPU loads a bunch of pages from main memory, which takes some time. Not a lot (nanoseconds), but not zero either. Modern CPUs can only operate on data loaded into a nearby cache page (which are ~64k to 1MB in size, no more than that; a thousand-plus times less than what your RAM sticks can store).
In single-threaded asynchronous I/O, this is roughly how it works:
There's still a thread, of course (everything runs in one), but this time the app in question doesn't multithread at all. Instead it itself creates the data structures required to track multiple incoming connections, and, crucially, the primitives used to ask for data work differently. Remember that in the synchronous case, if the code asks for the next bunch of bytes from the network connection, the thread ends up 'freezing' (telling the kernel to find some other work to do) until the data is there. In asynchronous mode, the data is instead returned if available, but if it isn't, the 'give me some data!' function still returns; it just says: sorry bud, I have 0 new bytes for you.
The app itself will then decide to go work on some other connection, and in that way a single thread can manage a bunch of connections: is there data for connection #1? Yes, great, I shall process it. No? Oh, okay. Is there data for connection #2? And so on and so forth.
Note that, if data arrives on, say, connection #5, then this one thread, to do the job of handling this incoming data, will presumably need to load, from memory, a bunch of state info, and may need to write it.
For example, let's say you are processing an image, and half of the PNG data has arrived on the wire. There's not a lot you can do with it, so this one thread will create a buffer and store that part of the PNG inside it. As it then hops to other connections and back, it needs to load the part of the image it already got and append the chunk that just arrived in a network packet to that buffer, as the sketch below shows.
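A minimal sketch of this one-thread, many-connections pattern using Node's core net module (the 4-byte length-prefix framing is an illustrative assumption, not something from the answer above):

var net = require('net');

var server = net.createServer(function (socket) {
    var buffered = Buffer.alloc(0); // per-connection state, like the partial PNG above

    socket.on('data', function (chunk) {
        // Append whatever just arrived to what this connection already sent us.
        buffered = Buffer.concat([buffered, chunk]);

        // Assumed framing: the first 4 bytes carry the body length.
        while (buffered.length >= 4) {
            var expected = buffered.readUInt32BE(0);
            if (buffered.length - 4 < expected) break; // message still incomplete
            console.log('complete message received');
            buffered = buffered.slice(4 + expected); // keep any trailing bytes
        }
        // The single thread now simply returns to the event loop and
        // services whichever connection has data next.
    });
});

server.listen(4000);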
This app is also causing a bunch of memory to be moved around into and out of cache pages just the same, so in that sense it's not all that different, and if you want to handle 100k things at once, you're inevitably going to end up having to move stuff into and out of cache pages.
So what is the difference? Can you put it in fry cook terms?
Not really, no. It's all just data structures.
The key difference is in what gets moved into and out of those cache pages.
In the case of async it is exactly what the code you wrote wants to buffer. No more, no less.
In the case of synchronous, it's that 'data structure representing a thread'.
Take Java, for example: that means at the very least the entire stack for that thread. Depending on the -Xss parameter, that's about 128k worth of data. So if you have 100k connections to handle simultaneously, that's 12.8GB of RAM just for those stacks!
If those incoming images really are all only about 4k in size, you could have done it with 4k buffers, for only 0.4GB of memory needed at most, if you handrolled that by going async.
That is where the gain lies for async: by handrolling your buffers, you can't avoid moving memory into and out of cache pages, but you can ensure the chunks are smaller. And that will be faster.
Of course, to really make it faster, the buffer for storing state in the async model needs to be small (not much point to this if you need to save 128k into memory before you can operate on it, that's how large those stacks were already), and you need to handle so many things at once (10k+ simultaneous).
There's a reason we don't write all code in assembler or why memory managed languages are popular: Handrolling such concerns is tedious and error-prone. You shouldn't do it unless the benefits are clear.
That's why synchronous is usually the better option and, in practice, often actually faster (those OS thread schedulers are written by expert coders and tweaked extremely well; you don't stand a chance of replicating their work). The whole 'by handrolling my buffers I can reduce the number of bytes that need to be moved around a ton!' gain needs to outweigh those losses.
In addition, async is complicated as a programming model.
In async mode, you can never block. Wanna do a quick DB query? That could block, so you can't do it; you have to write your code as: okay, fire off this job, and here's some code to run when it gets back. You can't 'wait for an answer', because in async land, waiting is not allowed.
In async mode, any time you ask for data, you need to be able to deal with getting only half of what you wanted. In synchronous mode, if you ask for 4k, you get 4k. The fact that your thread may freeze during this task until the 4k is available is not something you need to worry about; you write your code as if the data just arrives, complete, the moment you ask for it.
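As a small sketch of that 'fire off this job and here's some code to run when it gets back' style (db.query and its signature are hypothetical stand-ins for whatever async driver you would use):

// Hypothetical async DB driver: you hand over a callback instead of waiting.
function countOrders(db, userId, done) {
    db.query('SELECT * FROM orders WHERE user_id = ?', [userId], function (err, rows) {
        // Runs later, whenever the result arrives; meanwhile the single
        // thread was free to service other connections.
        if (err) return done(err);
        done(null, rows.length);
    });
}
// Nowhere in this function does execution 'wait': blocking here would
// stall every other connection the thread is juggling.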
Bbbuutt... fry cooks!
Look, CPU design just isn't simple enough to put in terms of a restaurant like this.
You are mentally moving the bottleneck from your process (the burger orderer) to the other process (the burger maker).
This will not make your application faster.
When considering the single-threaded async model, the real benefit is that your process is not blocked while waiting for the other process.
In other words, do not associate async with the word fast but with the word free. Free to do other work.
I need some advice on handling an issue programmatically.
I have a web interface built with PHP and JavaScript (jQuery), on which an authorized user needs to control multiple remote entities by performing some actions.
Each entity involves 10 steps, and progress is represented by a progress bar where each step is a web service call that needs around 0.5 to 1.5 seconds to execute. So, for the first entity to be completed, for example, 10 different web services are called, and as each one is fulfilled the progress bar advances by 10%. Each action is performed with an AJAX request.
The problem is that, in the same interface, I have to give the user the option to control around 800 different entities simultaneously, each of which consists of 10 steps, which makes approximately 10 x 800 = 8000 AJAX calls.
Performing 800 requests per step doesn't seem like a good idea, because the browser struggles to serve them and often hangs under the excessive load.
I've thought about some kind of limited batch action, but haven't settled on which option would serve me better.
For instance, should I use a counter and perform a setTimeout/setInterval every X number of calls? Should I abandon this approach and use JavaScript workers?
I've read similar threads on Stack Overflow suggesting, for example, handling them on the server side. That doesn't seem like a good option in my case because, on the one hand, there has to be visible progress, and on the other hand, performing 800 requests (~0.5 to 1.5 sec each) per step on the server side would mean the user has to wait without any feedback for at least ~6 minutes.
Others suggest using $.when, but I doubt whether that would serve this case either, since I need to limit the total batch in flight, not just react to the response of each request.
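No answer is recorded here, but a minimal sketch of the 'limited batch' idea the asker mentions might look like this: a small jQuery-based pool that keeps at most N requests in flight and starts the next one whenever a slot frees up (the limit of 6 mirrors typical per-host browser limits, but is still an assumption, as is the progress-bar selector):

// Keep at most `limit` AJAX calls in flight; start the next queued call
// whenever one finishes (success or failure), and report progress.
function runPool(urls, limit, onProgress) {
    var next = 0, done = 0;

    function launch() {
        if (next >= urls.length) return;
        var url = urls[next++];
        $.ajax({ url: url }).always(function () {
            done++;
            onProgress(done, urls.length); // e.g. update a progress bar
            launch();                      // this slot is free again
        });
    }

    // Prime the pool with `limit` concurrent requests.
    for (var i = 0; i < limit && i < urls.length; i++) launch();
}

// Usage sketch: 8000 hypothetical step URLs, 6 at a time.
// runPool(stepUrls, 6, function (done, total) {
//     $('#bar').css('width', (100 * done / total) + '%');
// });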
Say A sends a message to B and waits for a callback, and then A probably sends new messages, and B also sends many messages to A.
What I mean is that other message exchanges happen during this time, before the first callback fires.
Does this create a race condition, or block other message sending until the first callback is complete, or does it enforce the order of callbacks, so that, say, the callbacks for messages 1, 2, 3, 4, 5 always arrive in the same order the messages were sent out?
Assistance would be much appreciated.
Well, the question involves a number of concepts, so it is hard to answer fully. I will try to at least give a partial response or some insight.
If you had provided details on why it matters for your purpose, it would have helped to better target the answer.
One of the advantages of Node.js is its single-threaded, event-driven, non-blocking I/O model, which means there is almost no blocking, or at least only minimal blocking (theoretically). See the conceptual model here.
However, some trivial blocking will happen due to transport, consistency, etc. But it shouldn't be a problem, as it will be extremely insignificant and happens in all programs no matter what language they use.
Secondly, about sockets: a socket can be blocking or non-blocking depending on your purpose. See Blocking and Non-blocking sockets.
Blocking doesn't necessarily mean it is bad.
Thirdly, even if there is no blocking, events don't really happen in parallel. Even if A and B send messages to each other very frequently, there is a time gap between them, though it may be trivial for humans; the difference can be as small as a millionth of a second. Can you really send over a million messages in a second? So even if the callback has some impact, you should ignore it for the purpose of your program. Also, even if messages occur at the same time, JavaScript can only do one thing at a time, so when you receive them, you handle them one at a time. For example, if you want to display or alert a message, that happens one at a time.
As to ordering of the messages: Node.js runs a single event loop. So my understanding is that it runs a non-stop loop, waits for events, and emits information in the order the events occur. See, for example, Understanding the node.js event loop. One consequence is that a single piece of blocking code holds up everything else, as a busy-wait like this illustrates:
var now = new Date().getTime();
while (new Date().getTime() < now + 1000) { /* busy-wait: nothing else can run for this second */ }
So, for your purpose, I would say that unless B sends a message between A sending its message and the server receiving it, you should receive the callback before anything else. Put simply, ordering happens in the order the Node.js server receives things. Hope this helps.
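To make the ordering point concrete, here is a tiny sketch (the 'message' event name and numeric payloads are made up for illustration): a single EventEmitter invokes its listeners synchronously, one at a time, so the callbacks run in exactly the order the events are fired:

var EventEmitter = require('events').EventEmitter;
var bus = new EventEmitter();

bus.on('message', function (n) {
    console.log('handled message', n); // always logs 1, then 2, then 3
});

// emit() runs the listeners synchronously, one after another, so even
// "simultaneous" messages are handled strictly in emission order.
bus.emit('message', 1);
bus.emit('message', 2);
bus.emit('message', 3);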
I have a script that pings a series of URLs with a GET method. I only want to ping each of them once, and I do not expect a response.  My script works in Chrome and Safari, but Firefox won't complete the later requests.
Is there a way to trigger Firefox to make a series of calls (five, to be precise) once each, and not care if they fail? It seems that Firefox won't complete the series of requests when the first ones fail.
I'm working in JavaScript and jQuery, with a little jQuery.ajax() thrown in. I've searched to no avail and have reached the limit of my beginner's skill set. Any insight would be appreciated.
(If you're interested in the full scope, there's code at jquery-based standalone port knocker)
Thank you.
Update:
After further research, I believe the issue is that Firefox isn't handling the calls truly asynchronously. I have versions of the code making the pings with img calls, iframe URL calls, and AJAX calls that work in Chrome and Safari, but in Firefox they're not behaving as I need them to.
Our server monitoring for the knock sequence should see requests come sequentially to ports 1, 2, 3, 4, 5 (as it does when using Chrome or Safari), but in Firefox, no matter which method I've tried, I see the first attempt ping port 1 twice, then port 2; on subsequent attempts I only see it ping port 1. My status updates appear as expected, but the server isn't receiving the calls in the order it needs them. It seems Firefox is retrying failed calls rather than executing each one once, in sequence, which is what I need it to do.
Here is a sample of my script using a simple jQuery.ajax call method. It works in Safari and Chrome, but doesn't achieve the desired result in Firefox. While all my code runs and I can see the status updates (generated with the jQuery append function), the requests aren't sent once each, sequentially, to my server.
<script src="http://code.jquery.com/jquery-latest.js"></script>
<script type="text/javascript">
$(document).ready(function () {
    $('button').click(function () {
        $('#knocks').append('<p>Knocking...</p>');
        setTimeout(function () {
            $.ajax({url: 'https://example.sample.com:1111'});
            $('#knocks').append("<p>Knock 1 of 5 complete...</p>");
        }, 500);
        setTimeout(function () {
            $.ajax({url: 'https://example.sample.com:2222'});
            $('#knocks').append("<p>Knock 2 of 5 complete...</p>");
        }, 3500);
        setTimeout(function () {
            $.ajax({url: 'https://example.sample.com:3333'});
            $('#knocks').append("<p>Knock 3 of 5 complete...</p>");
        }, 6500);
        setTimeout(function () {
            $.ajax({url: 'https://example.sample.com:4444'});
            $('#knocks').append("<p>Knock 4 of 5 complete...</p>");
        }, 9500);
        setTimeout(function () {
            $.ajax({url: 'https://example.sample.com:5555'});
            $('#knocks').append("<p>Knock 5 of 5 complete...</p>");
        }, 12000);
        setTimeout(function () {
            $('#knocks').append("<p>Knocking is complete... <br>Proceed to site: <a href='http://example-url.sample-url.com'>http://example-url.sample-url.com</a></p>");
        }, 13000);
    });
});
</script>
Seeing there's no real answer to your question and you'd probably want to move on, I thought I'd give you some suggestions as a starting point.
For your function calls to truly execute in sequential order (synchronous, in-order, blocking, ...), you have to make sure you issue each subsequent function call (an AJAX request, in your case) only once the preceding request has completed (either succeeded or failed; in the failure case you might not want to proceed with the next in-order call and instead issue a completely separate response).
The way you're doing it now isn't synchronous; it is actually asynchronous and delayed (or 'in the background', with a timeout). This can cause all kinds of problems when you expect your AJAX calls to hit your server in a strict, blocking sequence: browsers re-issuing failed or timed-out requests (for various reasons, depending on their feature set and how they handle failed requests, caching, ...), or preemptively issuing requests and caching results when pre-fetchers are enabled (or whatever Firefox calls that feature), then re-issuing them again if the pre-fetch failed. I believe this is similar to what you observed in Firefox and might be the main culprit for the unexpected behavior. As you can't control which features end users enable or disable in their browsers, or which new features future versions implement, you can't really expect your server calls to arrive in sequence just by delaying them with setTimeout, even if they appear to do so in other browsers (probably because your server responds fast enough for them to appear that way).
In your code, the second call only appears to wait for the first one for up to half a second, the third request for up to three and a half seconds, and so on. And even if setTimeout did block execution (which it doesn't), which external request would it be waiting for, the first one or the second? I think you see what I'm trying to say and why your code doesn't work as expected.
Instead, you should either issue each subsequent AJAX call from your server's response (which is actually the point of using AJAX; otherwise there's no need for it), or, preferably, create an external listener function that issues these calls according to the status and/or return values of the previous external calls. If you need to handle failed requests as well and continue execution regardless, then the external listener (with a preset execution-stack timeout) is the way to go, as you obviously can't depend on the response of a failed request. A sketch of that idea follows below.
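As a rough sketch of that listener idea (the port list mirrors the question; the per-knock timeout of 2 seconds is an assumption), each knock is issued only after the previous one settles, whether it succeeded or failed:

// Fire the knocks strictly one after another: the next $.ajax is only
// issued once the previous one has completed (success OR failure).
var knocks = [
    'https://example.sample.com:1111',
    'https://example.sample.com:2222',
    'https://example.sample.com:3333',
    'https://example.sample.com:4444',
    'https://example.sample.com:5555'
];

function knock(i) {
    if (i >= knocks.length) {
        $('#knocks').append('<p>Knocking is complete...</p>');
        return;
    }
    $.ajax({url: knocks[i], timeout: 2000}) // don't hang forever on a closed port
        .always(function () {               // .always() runs on success and failure
            $('#knocks').append('<p>Knock ' + (i + 1) + ' of 5 complete...</p>');
            knock(i + 1);                   // only now issue the next knock
        });
}

knock(0);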
You see, browsers have no problem issuing multiple concurrent requests, and delaying them with setTimeout doesn't stop pre-fetchers from trying to cache their responses for later use, either. Nor does it issue requests in a blocking manner, each one waiting for the previous to finish, as you expected. Most browsers will happily use up to a certain number of concurrent connections (~10 on client machines, a lot more on servers) in an effort to speed up the download and/or page-rendering process, and some obviously have even more advanced caching mechanisms in place for this very reason, Firefox being merely one of them.
I hope this clears things up a bit and you'll be able to rewrite your code to work as expected. As we have no knowledge of how your server-side code is supposed to work, you'll have to write it yourself. There are, however, plenty of threads on SE discussing similar techniques that you might decide to use, and you can always ask another question if you get stuck and we'll be glad to help.
Cheers!
I'm wondering what the consensus is on how many simultaneous asynchronous ajax requests is generally allowable.
The reason I ask is that I'm working on a personal web app. For the most part I keep my requests down to one. However, there are a few situations where I send up to 4 requests simultaneously. This causes a bit of delay, as the browser will only handle 2 at a time.
The delay is not a problem in terms of usability, for now. And it'll be a while before I have to worry about scalability, if ever. But I am trying to adhere to best practices, as much as is reasonable. What are your thoughts? Is 4 requests a reasonable number?
I'm pretty sure the browser limits the number of connections you can have anyway.
If you have Firefox, type in about:config and look for network.http.max-connections-per-server and that will tell you your maximum. I'm almost positive that this will be the limit for AJAX connections as well. I think IE is limited to 2. I'm not sure about Chrome or Opera.
Edit:
In Firefox 23 the preference named network.http.max-connections-per-server doesn't exist, but there is a network.http.max-persistent-connections-per-server, and its default value is 6.
That really depends on whether it works properly like that. If the logic of the application is built such that 4 simultaneous requests make sense, do it like that. If the logic isn't disturbed by packing multiple requests into one, you can do that, but only if it won't make the code more complicated. Keep it as simple and straightforward as possible until you have problems; then you can start to optimize.
But you can also ask yourself whether the design of the app can be improved so that there is no need for multiple requests.
Also check it on a really slow connection. Simultaneous HTTP requests are not necessarily executed on the server in the proper order, and they might also return in a different order. That can cause problems you'll only experience on slower lines.
It's tough to answer without knowing some details. If you're just firing the requests and forgetting about them, then 4 requests could be fine, as could 20 - as long as the user experience isn't harmed by slow performance. But if you have to collect information back from the service, then coordinating those responses could get tricky. That may be something to consider.
The previous answer from Christian has a good point - check it on a slow connection. Fiddler can help with that as it allows you to test slow connections by simulating different connection speeds (56K and up).
You could also consider firing a single async request that could contain one or more messages to a controlling service, which could then hand the messages off to the appropriate services, collect the results and then return back to the client. Having multiple async requests being fired and then returning at different times could present a choppy experience for the user as each response is then rendered on the page at different times.
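For the coordination point, a small sketch with jQuery's $.when (the URLs and the render helper are placeholders): it holds off until all three responses arrive, then updates the page once:

$.when(
    $.get('/api/a'),   // placeholder URLs for three independent services
    $.get('/api/b'),
    $.get('/api/c')
).done(function (a, b, c) {
    // With multiple requests, each argument is [data, statusText, jqXHR].
    // All three have arrived, so the page can be updated in one pass,
    // avoiding the choppy render-as-they-arrive effect described above.
    render(a[0], b[0], c[0]); // hypothetical render helper
});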
In my experience 1 is the best number, but I'll agree there may be some rare situations that might require simultaneous calls.
If I recall, IE is the only browser that still limits connections to 2. This causes your requests to be queued, and if your first 2 requests take longer than expected or time out, the other two requests will automatically fail. In some cases you also get the annoying "allow script to continue" dialog in IE.
If your user can't really do anything until all 4 requests come back (especially with IE's bogged-down JavaScript performance), I would create a transport object that contains the data for all the requests, and a returning transport object that can be parsed and delegated on return.
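A minimal sketch of that transport-object idea (the /api/batch endpoint, the message shape, and the dispatch helper are all hypothetical; your server would need a matching handler that fans the messages out and collects the results):

// Bundle several logical requests into one AJAX call, then delegate each
// result to its handler when the combined response comes back.
var transport = {
    messages: [
        { service: 'filters',  payload: {} },
        { service: 'brands',   payload: {} },
        { service: 'genders',  payload: {} }
    ]
};

$.ajax({
    url: '/api/batch',              // hypothetical batching endpoint
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify(transport)
}).done(function (response) {
    // Assumed response shape: { results: [{ service: ..., data: ... }, ...] }
    $.each(response.results, function (_, result) {
        dispatch(result.service, result.data); // hypothetical per-service handler
    });
});

The trade-off is that one slow service in the batch delays the whole combined response, so this fits best when, as above, the user needs all the results before doing anything anyway.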
I'm not an expert in networking, but four probably wouldn't be much of a problem for a small to medium application. However, the larger it gets, the higher the server load, which could eventually cause problems. This doesn't really answer your question, but here is a suggestion: if delay is not a problem, why don't you use a queue?
var request = []; // a queue of payloads waiting to be sent to the server

function enqueue(payload) {
    request.push(payload); // whatever you want to send to the server
    startSend();
}

function startSend() { // if nothing else is in the queue, go ahead and send this one
    if (request.length === 1) {
        send();
    }
}

function send() { // the AJAX call to the server using the first request in the queue
    var sendData = request[0];
    $.post('/your/endpoint', sendData) // placeholder URL: substitute your own
        .done(process)
        .fail(function () { process(null); }); // keep the queue draining on failure
}

function process(data) {
    request.splice(0, 1); // this request is finished, drop it from the queue
    if (request.length > 0) { // more queued? fire the next AJAX call
        send();
    }
    // ...process data here
}
This probably isn't the best way to do it, but that's the idea; you could modify it to allow 2 requests in flight instead of just one. Also, you could modify it to send all the requests currently in the queue as one request. The server would then split them up, process each one, and send the data back, either all at once or even piece by piece as it finishes, since the server can flush the data several times. You just have to make sure you parse the response text correctly.