How does Node.js process concurrent requests?

I have been reading up on Node.js lately, trying to understand how it handles multiple concurrent requests. I know Node.js has a single-threaded, event-loop-based architecture, and that at any given point in time only one statement is executing on the main thread, while blocking code/IO calls are handled by worker threads (the default pool size is 4).
Now my question is: what happens when a web server built with Node.js receives multiple requests? I know there are lots of similar questions here, but I haven't found a concrete answer to mine.
So as an example, let's say we have the following code inside a route like /index:
app.use('/index', function(req, res, next) {
    console.log("hello index routes was invoked");
    readImage("path", function(err, content) {
        var status = "Success";
        if (err) {
            console.log("err :", err);
            status = "Error";
        } else {
            console.log("Image read");
        }
        return res.send({ status: status });
    });
    var a = 4, b = 5;
    console.log("sum =", a + b);
});
Let's assume that the readImage() function takes around 1 minute to read the image.
If two requests, T1 and T2, come in concurrently, how is Node.js going to process them?
Is it going to take the first request T1 and process it while queueing request T2? I assume that if any async/blocking work is encountered, like readImage, it is sent to a worker thread (and at some point later, when the async work is done, that thread notifies the main thread and the main thread starts executing the callback?), and execution continues with the next line of code.
When it is done with T1, does it then process the T2 request? Is that correct? Or can it process T2 in between (meaning while the code for readImage is running, it can start processing T2)?
Is that right?

Your confusion might be coming from not focusing on the event loop enough. Clearly you have an idea of how this works, but maybe you do not have the full picture yet.
Part 1, Event Loop Basics
When you call the use method, what happens behind the scenes is another thread is created to listen for connections.
However, when a request comes in, because we're in a different thread than the V8 engine (and cannot directly invoke the route function), a serialized call to the function is appended onto the shared event loop, to be called later. ('Event loop' is a poor name in this context, as it operates more like a queue.)
At the end of the JavaScript file, the V8 engine will check if there are any running threads or messages in the event loop. If there are none, it will exit with a code of 0 (this is why server code keeps the process running). So the first timing nuance to understand is that no request will be processed until the synchronous end of the JavaScript file is reached.
If the event loop was appended to while the process was starting up, each function call on the event loop will be handled one by one, in its entirety, synchronously.
For simplicity, let me break down your example into something more expressive.
function callback() {
    setTimeout(function inner() {
        console.log('hello inner!');
    }, 0); // †
    console.log('hello callback!');
}
setTimeout(callback, 0);
setTimeout(callback, 0);
† setTimeout with a time of 0 is a quick and easy way to put something on the event loop without any timer complications, since, no matter what, at least 0 ms will always have elapsed.
In this example, the output will always be:
hello callback!
hello callback!
hello inner!
hello inner!
Both serialized calls to callback are appended to the event loop before either of them is called. This is guaranteed. That happens because nothing can be invoked from the event loop until after the full synchronous execution of the file.
It can be helpful to think of the execution of your file as the first thing on the event loop. Because each invocation from the event loop can only happen in series, it becomes a logical consequence that no other event loop invocation can occur during its execution; only when the previous invocation is finished can the next event loop function be invoked.
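To see this "file first, then queue" ordering in isolation, here is a minimal sketch (plain Node, no dependencies) where even a zero-delay timer cannot fire until the synchronous pass over the file has finished:
setTimeout(function () {
    console.log('timer fired'); // always prints second
}, 0);

// a synchronous busy-wait; nothing on the event loop can run during it
var start = Date.now();
while (Date.now() - start < 100) { /* spin for ~100 ms */ }
console.log('synchronous code done'); // always prints first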
Part 2, The inner Callback
The same logic applies to the inner callback as well, and can be used to explain why the program will never output:
hello callback!
hello inner!
hello callback!
hello inner!
Like you might expect.
By the end of the execution of the file, two serialized function calls will be on the event loop, both for callback. As the event loop is a FIFO (first in, first out), the setTimeout that came first will be invoked first.
The first thing callback does is perform another setTimeout. As before, this will append a serialized call, this time to the inner function, to the event loop. setTimeout immediately returns, and execution will move on to the first console.log.
At this time, the event loop looks like this:
1 [callback] (executing)
2 [callback] (next in line)
3 [inner] (just added by callback)
The return of callback is the signal for the event loop to remove that invocation from itself. This leaves 2 things in the event loop now: 1 more call to callback, and 1 call to inner.
Now callback is the next function in line, so it will be invoked next. The process repeats itself. A call to inner is appended to the event loop. A console.log prints hello callback! and we finish by removing this invocation of callback from the event loop.
This leaves the event loop with 2 more functions:
1 [inner] (next in line)
2 [inner] (added by most recent callback)
Neither of these functions mess with the event loop any further. They execute one after the other, the second one waiting for the first one's return. Then when the second one returns, the event loop is left empty. This fact, combined with the fact that there are no other threads currently running, triggers the end of the process, which exits with a return code of 0.
Part 3, Relating to the Original Example
The first thing that happens in your example is that a thread is created within the process, which will create a server bound to a particular port. Note, this is happening in precompiled C++ code, not JavaScript, and it is not a separate process; it's a thread within the same process.
So now, whenever a request comes in, the execution of your original code won't be disturbed. Instead, incoming connection requests will be opened, held onto, and appended to the event loop.
The use function is the gateway to catching the events for incoming requests. It's an abstraction layer, but for the sake of simplicity, it's helpful to think of the use function like you would a setTimeout, except that instead of waiting a set amount of time, it appends the callback to the event loop upon incoming HTTP requests.
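To make that analogy concrete, here is a minimal sketch using Node's built-in http module instead of Express (port 3000 is an arbitrary choice): each incoming request results in one more callback invocation being appended to the event loop.
const http = require('http');

http.createServer(function (req, res) {
    // like the use callback: appended to the event loop once
    // per incoming request, and invoked one at a time
    res.end('ok');
}).listen(3000);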
So, let's assume that there are two requests coming in to the server: T1 and T2. In your question you say they come in concurrently; since this is technically impossible, I'm going to assume they are one after the other, with a negligible time between them.
Whichever request comes in first, will be handled first by the secondary thread from earlier. Once that connection has been opened, it's appended to the event loop, and we move on to the next request, and repeat.
At any point after the first request is added to the event loop, V8 can begin execution of the use callback.
A quick aside about readImage
Since it's unclear whether readImage is from a particular library, something you wrote, or otherwise, it's impossible to tell exactly what it will do in this case. There are only two possibilities though, so here they are:
It's entirely synchronous, never using an alternate thread or the event loop:
function readImage(path, callback) {
    let image = fs.readFileSync(path);
    callback(null, image);
    // a definition like this forces the callback to
    // fully return before readImage returns, which
    // means readImage will block any subsequent calls.
}
It's entirely asynchronous, and takes advantage of fs' async callback:
function readImage(path, callback) {
    fs.readFile(path, (err, data) => {
        callback(err, data);
    });
    // a definition like this forces readImage to
    // return immediately, and allows execution
    // to continue.
}
For the purposes of explanation, I'll be operating under the assumption that readImage will immediately return, as proper asynchronous functions should.
Once the use callback execution is started, the following will happen:
The first console log will print.
readImage will kick off a worker thread and immediately return.
The second console log will print.
During all of this, it's important to note that these operations happen synchronously; no other event loop invocation can start until these are finished. readImage may be asynchronous, but calling it is not; the callback and the usage of a worker thread are what make it asynchronous.
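Assuming the asynchronous definition of readImage above, that ordering is easy to observe; a quick sketch (the path is hypothetical):
console.log('first');                        // synchronous
readImage('path', function (err, content) {
    console.log('third (some time later)');  // queued on the event loop
});
console.log('second');                       // runs before the callback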
After this use callback returns, the next request has probably already finished parsing and was added to the event loop, while V8 was busy doing our console logs and readImage call.
So the next use callback is invoked, and repeats the same process: log, kick off a readImage thread, log again, return.
After this point, the readImage functions (depending on how long they take) have probably already retrieved what they needed and appended their callbacks to the event loop. So they will get executed next, in order of whichever one retrieved its data first. Remember, these operations were happening in separate threads, so they ran not only in parallel to the main JavaScript thread, but also in parallel to each other. So here, it doesn't matter which one got called first; it matters which one finished first and got 'dibs' on the event loop.
Whichever readImage completed first will be the first one to execute. So, assuming no errors occurred, we'll print to the console, then write to the response for the corresponding request, held in lexical scope.
When that send returns, the next readImage callback will begin execution: console log, and writing to the response.
At this point, both readImage threads have died, and the event loop is empty, but the thread that holds the server port binding is keeping the process alive, waiting for something else to add to the event loop, and the cycle to continue.
I hope this helps you understand the mechanics behind the asynchronous nature of the example you provided.

For each incoming request, Node handles them one by one. That means there must be an order, just like a queue: first in, first served. When Node starts processing a request, all of its synchronous code executes, and asynchronous work is passed to a worker thread, so Node can start processing the next request. When the asynchronous part is done, its callback goes back to the main thread and execution keeps going.
So when your synchronous code takes too long, you block the main thread and Node won't be able to handle other requests. It's easy to test:
app.use('/index', function(req, res, next) {
    // synchronous part
    console.log("hello index routes was invoked");
    var sum = 0;
    // useless heavy task to keep running and block the main thread
    // (kept below Number.MAX_SAFE_INTEGER so the loop actually terminates)
    for (var i = 0; i < 1000000000; i++) {
        sum += i;
    }
    // asynchronous part, passed to a worker thread
    readImage("path", function(err, content) {
        // when the worker thread finishes, this callback is added to the
        // end of the event loop and waits to be processed by the main thread
        var status = "Success";
        if (err) {
            console.log("err :", err);
            status = "Error";
        } else {
            console.log("Image read");
        }
        return res.send({ status: status });
    });
    // the synchronous part continues in the meantime
    var a = 4, b = 5;
    console.log("sum =", a + b);
});
Node won't start processing the next request until it finishes all the synchronous parts. That's why people say: don't block the main thread.
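You can observe the blocking from the client side as well. A small sketch (assuming the server above is listening on localhost:3000): both requests are fired at once, but the second response cannot arrive until the first request's synchronous loop has finished.
const http = require('http');

for (let i = 1; i <= 2; i++) {
    const started = Date.now();
    http.get('http://localhost:3000/index', (res) => {
        res.resume(); // drain the response body
        // the second response is delayed by the first request's loop
        console.log('response', i, 'after', Date.now() - started, 'ms');
    });
}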

There are a number of articles that explain this, such as this one.
The long and the short of it is that Node.js is not really a single-threaded application; that's an illusion. The diagram at the top of the above link explains it reasonably well; however, as a summary:
NodeJS event-loop runs in a single thread
When it gets a request, it hands that request off to a new thread
So, in your code, your running application will have a PID of 1, for example. When you get request T1, it creates worker 2, which processes that request (taking 1 minute). While that's running, you get request T2, which spawns worker 3, also taking 1 minute. Both workers 2 and 3 will end after their tasks are completed; however, PID 1 will continue listening and handing off events as and when they come in.
In summary, Node.js being 'single threaded' is true; however, it's just an event-loop listener. When events (requests) are heard, it passes them off to a pool of threads that execute asynchronously, meaning it's not blocking other requests.

You can simply create a child process by moving the readImage() function into a different file and using fork().
The parent file, parent.js:
const { fork } = require('child_process');
const forked = fork('child.js');

forked.on('message', (msg) => {
    console.log('Message from child', msg);
});

forked.send({ hello: 'world' });
The child file, child.js:
process.on('message', (msg) => {
    console.log('Message from parent:', msg);
});

let counter = 0;

setInterval(() => {
    process.send({ counter: counter++ });
}, 1000);
In the parent file above, we fork child.js (which will execute the file with the node command) and then we listen for the message event. The message event will be emitted whenever the child uses process.send, which we’re doing every second.
To pass down messages from the parent to the child, we can execute the send function on the forked object itself, and then, in the child script, we can listen to the message event on the global process object.
When executing the parent.js file above, it’ll first send down the { hello: 'world' } object to be printed by the forked child process and then the forked child process will send an incremented counter value every second to be printed by the parent process.

The V8 JS interpreter (i.e., Node) is basically single threaded. But the operations it kicks off can be async, for example fs.readFile.
As the Express server runs, it will kick off new asynchronous operations as needed to complete the requests. So the readImage calls will be kicked off (usually asynchronously), meaning that they may return in any order. However, the server will automatically match each response to its request.
So you will NOT have to manage which readImage response goes to which request.
So basically, T1 and T2 will not return concurrently; this is virtually impossible. They both rely heavily on the filesystem to complete the read, and they may finish in ANY ORDER (this cannot be predicted). Note that these operations are handled by the OS layer and are by nature multithreaded (on a modern computer).
If you are looking for a queue system, it should not be too hard to implement/ensure that images are read/returned in the exact order that they are requested.
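For example, a minimal sketch of such a queue, assuming Node's fs/promises API and hypothetical image paths; awaiting inside the loop serializes the reads, so results always come back in request order:
const fs = require('fs/promises');

async function readImagesInOrder(paths) {
    const results = [];
    for (const p of paths) {
        // each read starts only after the previous one has finished
        results.push(await fs.readFile(p));
    }
    return results;
}

readImagesInOrder(['a.png', 'b.png']).then(([a, b]) => {
    console.log('read', a.length, 'and', b.length, 'bytes, in order');
});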

Since there's not really more to add to the previous answer from Marcus, here's a graphic that explains the single-threaded event-loop mechanism:

Related

Node.js - Does iterating with a for loop ensure my callbacks are called one after another in order?

Apologies if the title is a little undescriptive; however, I often come across this problem and wonder what the correct way to handle this situation is:
I have an array/some list that I want to iterate through, calling some methods that have callbacks to subsequent steps. Would all the callbacks be processed? And would they be done in order?
To be more specific, here is an example:
1 - I've created this array called files containing the paths of some dmg files in a folder:
var files = [];
walker.on('file', function(root, stat, next) {
    if (stat.name.indexOf(".dmg") > -1) {
        files.push(root + '/' + stat.name);
    }
    next();
});
2 - I then want to iterate through, upload something, then after the upload send a message to a RabbitMQ queue:
for (var bk = 0; bk < files.length; bk++) {
    var uploader = client.uploadFile(params);
    uploader.on('error', function (err) {
        console.error("unable to upload:", err.stack);
    });
    uploader.on('progress', function () {
        console.log("progress", uploader.progressMd5Amount,
            uploader.progressAmount, uploader.progressTotal);
    });
    uploader.on('end', function () {
        console.log("done uploading");
        // Now send the message to RabbitMQ
        myRabbitMQObject.then(function (conn) {
            return conn.createChannel();
        }).then(function (ch) {
            return ch.assertQueue(q).then(function (ok) {
                return ch.sendToQueue(q, new Buffer("Some message with path from the files array"));
            });
        }).catch(console.warn);
    });
}
Now the bit I'm always unsure of is this: since I've placed the block of code under 2 inside a for loop, and it has callbacks inside, are those callbacks guaranteed to get called?
In this example I don't really care about the order; however, if I did care about the order, would having it in a for loop ensure the uploads and RabbitMQ messages happen one after another?
I hope the question makes sense.
Any advice appreciated.
Thanks.
Your for loop runs synchronously. In your code, what that means is that it will execute the line:
var uploader = client.uploadFile(params);
one after another, starting all the uploads. It won't wait for one upload to finish before starting the next. So, think of your for loop as initiating a whole bunch of asynchronous operations.
Then, sometime later, one by one, in no guaranteed order, each of your uploads will finish. They will essentially all be "in flight" at the same time. Each of your RabbitMQ operations will happen whenever its corresponding upload finishes. The for loop will be long since over at that point, and the MQ operations will be in no particular order.
Your current code has no way of telling when everything is done.
I have an array/some list that I want to iterate through, calling some methods that have callbacks to subsequent steps. Would all the callbacks be processed? And would they be done in order?
All the events will get triggered and your event handler callbacks will get called. They will not be called in any guaranteed order.
Now the bit I'm always unsure of is this: since I've placed the block of code under 2 inside a for loop, and it has callbacks inside, are those callbacks guaranteed to get called?
Yes. Your callbacks will get called. The for loop launches each upload and, at some point, they will all trigger their events, which will call your event handler callbacks.
In this example I don't really care about the order; however, if I did care about the order, would having it in a for loop ensure the uploads and RabbitMQ messages happen one after another?
The for loop will ensure that the uploads are started in sequence. But the finish order is not guaranteed, so the RabbitMQ messages that you send upon finish may be in any order. If you want the RabbitMQ messages to be sent in a particular order, or want the uploads to be sequenced, then you need more/different code to make that happen.
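As a sketch of that "more/different code", here is one way to force strict sequencing with async/await; uploadFile and sendToQueue are hypothetical promise-returning wrappers around the event-based APIs in the question:
async function uploadAllInOrder(files) {
    for (const file of files) {
        await uploadFile(file);   // wait for this upload to finish...
        await sendToQueue(file);  // ...then send its RabbitMQ message...
    }                             // ...before starting on the next file
}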
This is my understanding; it might help you:
Iterating over the list of files and calling the upload function is synchronous, so it will maintain order.
Updating RabbitMQ on successful completion of an upload does not guarantee order, as it is asynchronous, and completion of an upload depends on the size of the file and the network latency during upload.

NodeJs/expressjs : Run lengthy code in a callback [duplicate]

This question already has answers here:
Long-running computations in node.js
(3 answers)
Closed 8 years ago.
Callbacks are asynchronous, so does that mean that if I run a lengthy computation in a callback it won't affect my main thread?
For example:
function compute(req, res) { // this is called in an expressjs route.
    db.collection.find({'key': aString}).toArray(function(err, items) {
        for (var i = 0; i < items.length; i++) { // items length may be in thousands.
            // Heavy/lengthy computation here, which may take 5 seconds.
        }
        res.send("Done");
    });
}
So, the call to the database is asynchronous. Does that mean the for loop inside the callback will NOT block the main thread?
And if it is blocking, How may I perform such things in an async way?
For the most part, node.js runs in a single thread. However, node.js allows you to make calls that execute low-level operations (file reads, network requests, etc.) which are handled by separate threads. As such, your database call most likely happens on a separate thread. But, when the database call returns, we return back to the main thread and your code will run in the main thread (blocking it).
The way to get around this is to spin up a separate process. You can use cluster to do this. See:
http://nodejs.org/api/cluster.html
Your main program will make the database call
When the database call finishes, it will call fork() and spin up a new process that runs your-calculations.js and send an event to it with any input data
your-calculations.js will listen for an event and do the necessary processing when it handles the event
your-calculations.js will then send an event back to the main process when it has finished processing (it can send any output data back)
If the main process needs the output data, it can listen for the event that your-calculations.js emits
If you can't, or don't want to, use a separate process, you can split up the long computation with setImmediate calls. e.g. (written quickly on my tablet so it may be sloppy):
function compute(startIndex, max, array, partialResult, callback) {
    var done = false;
    var err = null;
    var stop = startIndex + 100; // or some reasonable number of calcs...
    if (stop >= max) {
        stop = max;
        done = true;
    }
    // do the calc from startIndex to stop, using partialResult as input
    if (done) {
        callback(err, partialResult);
    } else {
        setImmediate(function () {
            // the idea is you call yourself again with start advanced by 100
            compute(stop, max, array, partialResult, callback);
        });
    }
}
In between every 100 calculations, Node will have time to process other requests, handle other callbacks, etc. Of course, if they trigger another huge calculation, eventually things will grind to a halt.
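For completeness, a hypothetical call site for the chunked version inside the route from the question (the initial partial result and the result handling are assumptions):
db.collection.find({'key': aString}).toArray(function(err, items) {
    // start the chunked computation at index 0 with an empty partial result
    compute(0, items.length, items, null, function(err, result) {
        res.send("Done");
    });
});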

Process chain of functions without UI block

I need to perform several functions in my JavaScript/jQuery, but I want to avoid blocking the UI.
AJAX is not a viable solution because of the nature of the application: those functions will easily reach the thousands. Doing this asynchronously will kill the browser.
So, I need some way of chaining the functions the browser needs to process, and only sending the next function after the first has finished.
The algorithm is something like this:
For steps from 2 to 15
HTTP:GET amount of items for current step (ranges somewhere from a couple of hundred to multiple thousands)
For every item, HTTP:GET the results
As you see, I have two GET-request "chains" I somehow need to manage... Especially the innermost loop crashes the browser near-instantly if it's done asynchronously; but I'd still like the user to be able to operate the page, so a pure (blocking) synchronous way will not work.
You can easily do this asynchronously without firing all requests at once. All you need to do is manage a queue. The following is pseudo-code for clarity. It's easily translatable to real AJAX requests:
// Basic structure of the request queue. It's a list of objects
// that defines ajax requests:
var request_queue = [{
    url: "some/path",
    callback: function_to_process_the_data
}];

// This function implements the main loop.
// It looks recursive but is not, because each function
// call happens in an event handler:
function process_request_queue() {
    // If we have anything in the queue, do an ajax call.
    // Otherwise do nothing and let the loop end.
    if (request_queue.length) {
        // Get one request from the queue. We can either
        // shift or pop depending on whether you prefer
        // depth-first or breadth-first processing:
        var req = request_queue.pop();
        ajax(req.url, function(result) {
            req.callback(result);
            // At the end of the ajax request, process
            // the queue again:
            process_request_queue();
        });
    }
}

// Now get the ball rolling:
process_request_queue();
So basically we turn the ajax call itself into a pseudo loop. It's basically the classic continuation passing style of programming done recursively.
In your case, an example of a request would be:
request_queue.push({
    url: "path/to/OUTER/request",
    callback: function (result) {
        // You mentioned that the result of the OUTER request
        // should trigger another round of INNER requests.
        // To do this, simply add the INNER requests to the queue:
        request_queue.push({
            url: result.inner_url,
            callback: function_to_handle_inner_request
        });
    }
});
This is quite flexible, because you not only have the option of processing requests either breadth-first or depth-first (shift vs pop), but you can also use splice to add stuff to the middle of the queue, or use unshift vs push to put requests at the head of the queue for high-priority requests.
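For instance, a high-priority request could be queued like this (the url and callback names are placeholders):
// with the shift (FIFO) variant of the loop, unshift makes this
// request the next one processed; with pop (LIFO), use push instead
request_queue.unshift({
    url: "urgent/path",
    callback: function_to_handle_urgent_request
});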
You can also increase the number of simultaneous requests by popping more than one request per loop. Just be sure to call process_request_queue only once per loop, to avoid exponential growth of simultaneous requests:
// Handling two simultaneous request channels:
function process_request_queue() {
    if (request_queue.length) {
        var req = request_queue.pop();
        ajax(req.url, function(result) {
            req.callback(result);
            // Best to loop on the first request.
            // The second request below may never fire
            // if the queue runs out of requests.
            process_request_queue();
        });
    }
    if (request_queue.length) {
        var req = request_queue.pop();
        ajax(req.url, function(result) {
            req.callback(result);
            // DON'T CALL process_request_queue here.
            // We only want to call it once per "loop",
            // otherwise the "loop" would grow into a "tree".
        });
    }
}
You could make that ASYNC and use a small library I wrote some time ago that will let you queue function calls.

What are the proper use cases for process.nextTick in Node.js?

I have seen process.nextTick used in a few places and can't quite tell what it's being used for.
https://github.com/andrewvc/node-paperboy/blob/master/lib/paperboy.js#L24
https://github.com/substack/node-browserify/blob/master/index.js#L95
What are the main/proper use cases of process.nextTick in Node.js? The docs basically say it's a more optimized way of doing setTimeout, but that doesn't help much.
I used to do a lot of ActionScript, so the idea of "waiting until the next frame" to execute code makes sense on some level - if you're running an animation you can have it update every frame rather than every millisecond for example. It also makes sense when you want to coordinate setting a bunch of variables - you change the variables in frame 1, and apply the changes in frame 2. Flex implemented something like this in their component lifecycle.
My question is, what should I be using this for in server-side JavaScript? I don't see any places right off the bat where you'd need this kind of fine-tuned performance/flow control. Just looking for a point in the right direction.
process.nextTick puts a callback into a queue. Every callback in this queue will get executed at the very beginning of the next tick of the event loop. It's basically used as a way to clear your call stack. When the documentation says it's like setTimeout, it means it's like using setTimeout(function() { ... }, 1) in the browser. It has the same use cases.
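A tiny demo of the ordering (runnable as-is in Node):
console.log('start');
process.nextTick(function () {
    console.log('nextTick');   // runs as soon as the call stack clears
});
setTimeout(function () {
    console.log('timeout');    // runs on a later turn of the event loop
}, 0);
console.log('end');
// prints: start, end, nextTick, timeout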
One example use case would be, you create a constructor for some object that needs events bound to it. However, you can't start emitting events right away, because the code instantiating it hasn't had time to bind to events yet. Your constructor call is above them in the call stack, and if you continue to do synchronous things, it will stay that way. In this case, you could use a process.nextTick before proceeding to whatever you were about to do. It guarantees that the person using your constructor will have time enough to bind events.
Example:
var EventEmitter = require('events').EventEmitter;

var MyConstructor = function() {
    var self = this;
    ...
    process.nextTick(function() {
        self._continue();
    });
};

MyConstructor.prototype.__proto__ = EventEmitter.prototype;

MyConstructor.prototype._continue = function() {
    // without the process.nextTick,
    // these events would be emitted immediately,
    // with no listeners, and would be lost.
    this.emit('data', 'hello');
    this.emit('data', 'world');
    this.emit('end');
};
Example middleware using this constructor:
function(req, res, next) {
    var c = new MyConstructor(...);
    c.on('data', function(data) {
        console.log(data);
    });
    c.on('end', next);
}
It simply runs your function at the end of the current operation, before the next I/O callbacks. Per the documentation, you can use it to run your code after the caller's synchronous code has executed; for example, to give your API/library user an opportunity to register event handlers for events which need to be emitted as soon as possible. Another use case is to ensure that you always call callbacks asynchronously, to get consistent behaviour in different cases.
In the past, process.nextTick would have been used to provide an opportunity for I/O events to be executed; however, this is not the behaviour anymore, and setImmediate was created for that purpose instead. I explained a use case in the answer to this question.
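A sketch of that second use case, keeping a callback asynchronous even when the result is already available (cache and fetchFromDb are hypothetical):
function getValue(key, callback) {
    if (cache.has(key)) {
        // defer even the cached path, so callers always
        // observe the same asynchronous behaviour
        process.nextTick(function () {
            callback(null, cache.get(key));
        });
    } else {
        fetchFromDb(key, callback); // already asynchronous
    }
}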
"Every callback in this queue will get executed at the very beginning of the next tick of the event loop" is not correct. Actually, nextTick() runs right after completing the current phase and before starting the next phase. Minute details are important!
A function passed to process.nextTick() is going to be executed on the current iteration of the event loop, after the current operation ends. This means it will always execute before setTimeout and setImmediate.
Understanding setImmediate()

Do I need to be concerned with race conditions with asynchronous Javascript?

Suppose I load some Flash movie that I know at some point in the future will call window.flashReady and will set window.flashReadyTriggered = true.
Now I have a block of code that I want to have executed when the Flash is ready. I want it to execute it immediately if window.flashReady has already been called and I want to put it as the callback in window.flashReady if it has not yet been called. The naive approach is this:
if (window.flashReadyTriggered) {
    block();
} else {
    window.flashReady = block;
}
So the concern I have based on this is that the expression in the if condition is evaluated to false, but then before block() can be executed, window.flashReady is triggered by the external Flash. Consequently, block is never called.
Is there a better design pattern to accomplish the higher level goal I'm going for (e.g., manually calling the flashReady callback)? If not, am I safe, or are there other things I should do?
All Javascript event handler scripts are handled from one master event queue system. This means that event handlers run one at a time and one runs until completion before the next one that's ready to go starts running. As such, there are none of the typical race conditions in Javascript that one would see in a multithreaded language where multiple threads of the language can be running at once (or time sliced) and create real-time conflict for access to variables.
Any individual thread of execution in javascript will run to completion before the next one starts. That's how Javascript works. An event is pulled from the event queue and then code starts running to handle that event. That code runs by itself until it returns control to the system where the system will then pull the next event from the event queue and run that code until it returns control back to the system.
Thus the typical race conditions that are caused by two threads of execution going at the same time do not happen in Javascript.
This includes all forms of Javascript events including: user events (mouse, keys, etc..), timer events, network events (ajax callbacks), etc...
The only place you can actually do multi-threading in Javascript is with the HTML5 Web Workers or Worker Threads (in node.js), but they are very isolated from regular javascript (they can only communicate with regular javascript via message passing) and cannot manipulate the DOM at all and must have their own scripts and namespace, etc...
While I would not technically call this a race condition, there are situations in Javascript because of some of its asynchronous operations where you may have two or more asynchronous operations in flight at the same time (not actually executing Javascript, but the underlying asynchronous operation is running native code at the same time) and it may be unpredictable when each operation will complete relative to the others. This creates an uncertainty of timing which (if the relative timing of the operations is important to your code) creates something you have to manually code for. You may need to sequence the operations so one runs and you literally wait for it to complete before starting the next one. Or, you may start all three operations and then have some code that collects all three results and when they are all ready, then your code proceeds.
In modern Javascript, promises are generally used to manage these types of asynchronous operations.
So, if you had three asynchronous operations that each return a promise (like reading from a database, fetching a request from another server, etc...), you could manually sequence them like this:
a().then(b).then(c).then(result => {
// result here
}).catch(err => {
// error here
});
Or, if you wanted them all to run together (all in flight at the same time) and just know when they were all done, you could do:
Promise.all([a(), b(), c()]).then(results => {
// results here
}).catch(err => {
// error here
});
While I would not call these race conditions, they are in the same general family of designing your code to control indeterminate sequencing.
There is one special case that can occur in some situations in the browser. It's not really a race condition, but if you're using lots of global variables with temporary state, it could be something to be aware of. When your own code causes another event to occur, the browser will sometimes call that event handler synchronously rather than waiting until the current thread of execution is done. An example of this is:
click
the click event handler changes focus to another field
that other field has an event handler for onfocus
browser calls the onfocus event handler immediately
onfocus event handler runs
the rest of the click event handler runs (after the .focus() call)
This isn't technically a race condition because it's 100% known when the onfocus event handler will execute (during the .focus() call). But, it can create a situation where one event handler runs while another is in the middle of execution.
JavaScript is single threaded. There are no race conditions.
When there is no more code to execute at your current "instruction pointer", the "thread" "passes the baton", and a queued window.setTimeout or event handler may execute its code.
You will get a better understanding of JavaScript's single-threaded approach by reading about node.js's design ideas.
Further reading:
Why doesn't JavaScript support multithreading?
It is important to note that you may still experience race conditions if you, for example, use multiple async XMLHttpRequests, where the order of returned responses is not defined (that is, responses may not come back in the same order they were sent). Here the output depends on the sequence or timing of other uncontrollable events (server latency, etc.). This is a race condition in a nutshell.
So even using a single event queue (like in JavaScript) does not prevent events coming in an uncontrollable order, and your code should take care of this.
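A common way to take care of it is to tag each request and ignore stale responses; a sketch with fetch (the /search endpoint and render function are hypothetical):
let latestRequestId = 0;

function search(term) {
    const id = ++latestRequestId;
    fetch('/search?q=' + encodeURIComponent(term))
        .then(r => r.json())
        .then(data => {
            // only the most recent request may update the UI;
            // earlier responses that arrive late are discarded
            if (id === latestRequestId) render(data);
        });
}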
Sure you need to. It happens all the time:
<button id="b1">Button 1</button>
<button id="b2">Button 2</button>
<script>
document.getElementById("b1").onclick = function() {
    const el = document.getElementById("view");
    fetch('/some/api').then(r => r.json()).then((data) => {
        el.innerHTML = JSON.stringify(data);
    });
};
document.getElementById("b2").onclick = function() {
    const el = document.getElementById("view");
    fetch('/some/other/api').then(r => r.json()).then((data) => {
        el.innerHTML = JSON.stringify(data);
    });
};
</script>
Some people don't view it as a race condition.
But it really is.
Race condition is broadly defined as "the behavior of an electronic, software, or other system where the output is dependent on the sequence or timing of other uncontrollable events".
If the user clicks these 2 buttons within a brief period, the output is not guaranteed to depend on the order of clicking. It depends on which API request resolves sooner. Moreover, the DOM element you're referencing can be removed by some other event (like changing the route).
You can mitigate this race condition by disabling the button or showing a spinner while a loading operation is in progress, but that's cheating. You should have some mutex/counter/semaphore at the code level to control your asynchronous flow.
To adapt it to your question: it depends on what block() is. If it's a synchronous function, you don't need to worry. But if it's asynchronous, you have to worry:
function block() {
    window.blockInProgress = true;
    // some asynchronous code, which eventually
    // sets window.blockInProgress = false again
    return new Promise(/* ... */);
}

if (!window.blockInProgress) {
    block();
} else {
    window.flashReady = block;
}
This code makes sense if you want to prevent block from being called multiple times. But if you don't care, or block is synchronous, you shouldn't worry. If you're worried that a global variable's value can change while you're checking it, you shouldn't be: it's guaranteed not to change unless you call some asynchronous function.
A more practical example: consider that we want to cache AJAX requests.
function fetchCached(params) {
    if (!dataInCache()) {
        return fetch(params).then(data => putToCache(data));
    } else {
        return getFromCache();
    }
}
So what happens if we call this code multiple times in quick succession? We don't know which request will return first, so we don't know which data will be cached. The first 2 calls will return fresh data, but for the 3rd we don't know which shape of response will be returned.
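One common fix, sketched under the assumption that params can serve as a cache key: cache the in-flight promise itself, so concurrent callers share one request and the cached value is deterministic:
const cache = new Map();

function fetchCached(params) {
    const key = JSON.stringify(params);
    if (!cache.has(key)) {
        // store the promise immediately, before the response arrives,
        // so a second call never starts a duplicate request
        cache.set(key, fetch(params).then(r => r.json()));
    }
    return cache.get(key);
}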
Yes, of course there are race conditions in JavaScript. It is based on the event loop model and hence exhibits race conditions for async computations. The following program will either log 10 or 16, depending on whether incHead or sqrHead completes first:
const rand = () => Math.round(Math.random() * 100);

const incHead = xs => new Promise((res, rej) =>
    setTimeout(ys => {
        ys[0] = ys[0] + 1;
        res(ys);
    }, rand(), xs));

const sqrHead = xs => new Promise((res, rej) =>
    setTimeout(ys => {
        ys[0] = ys[0] * ys[0];
        res(ys);
    }, rand(), xs));

const state = [3];

const foo = incHead(state);
const bar = sqrHead(state);

Promise.all([foo, bar])
    .then(_ => console.log(state));
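If a fixed order is wanted, the race disappears once the promises are chained explicitly; for example, starting again from const state = [3]:
incHead(state)                        // 3 + 1 = 4
    .then(() => sqrHead(state))       // 4 * 4 = 16
    .then(() => console.log(state));  // always logs [ 16 ]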
