I have been reading up on Node.js lately, trying to understand how it handles multiple concurrent requests. I know Node.js has a single threaded, event loop based architecture, and that at any given point in time only one statement is executing on the main thread, while blocking code/IO calls are handled by worker threads (the default pool size is 4).
Now my question is: what happens when a web server built using Node.js receives multiple requests? I know that there are lots of similar questions here, but I haven't found a concrete answer to my question.
So as an example, let's say we have the following code inside a route like /index:
app.use('/index', function(req, res, next) {
    console.log("hello index routes was invoked");
    readImage("path", function(err, content) {
        var status = "Success";
        if (err) {
            console.log("err :", err);
            status = "Error";
        } else {
            console.log("Image read");
        }
        return res.send({ status: status });
    });
    var a = 4, b = 5;
    console.log("sum =", a + b);
});
Let's assume that the readImage() function takes around 1 minute to read the image.
If two requests, T1 and T2, come in concurrently, how is Node.js going to process them?
Is it going to take the first request T1 and process it while queueing request T2? I assume that if any async/blocking work like readImage is encountered, it is handed off to a worker thread (and at some later point, when the async work is done, that thread notifies the main thread, which then executes the callback?), and the main thread continues by executing the next line of code?
When it is done with T1, does it then process the T2 request? Is that correct? Or can it process T2 in between (meaning that while the code for readImage is running, it can start processing T2)?
Is that right?
Your confusion might be coming from not focusing on the event loop enough. Clearly you have an idea of how this works, but maybe you do not have the full picture yet.
Part 1, Event Loop Basics
When you call the listen method (use merely registers the route handler), what happens behind the scenes is that another thread is created to listen for connections.
However, when a request comes in, because we're in a different thread than the V8 engine (and cannot directly invoke the route function), a serialized call to the function is appended onto the shared event loop, to be called later. ('Event loop' is a somewhat misleading name in this context, as it operates more like a queue.)
At the end of the JavaScript file, the V8 engine will check if there are any running threads or messages in the event loop. If there are none, it will exit with a code of 0 (this is why server code keeps the process running). So the first timing nuance to understand is that no request will be processed until the synchronous end of the JavaScript file is reached.
If the event loop was appended to while the process was starting up, each function call on the event loop will be handled one by one, in its entirety, synchronously.
For simplicity, let me break down your example into something more expressive.
function callback() {
    setTimeout(function inner() {
        console.log('hello inner!');
    }, 0); // †
    console.log('hello callback!');
}

setTimeout(callback, 0);
setTimeout(callback, 0);
† setTimeout with a time of 0 is a quick and easy way to put something on the event loop without any timer complications, since, no matter what, at least 0 ms will always have elapsed.
In this example, the output will always be:
hello callback!
hello callback!
hello inner!
hello inner!
Both serialized calls to callback are appended to the event loop before either of them is called. This is guaranteed. That happens because nothing can be invoked from the event loop until after the full synchronous execution of the file.
It can be helpful to think of the execution of your file as the first thing on the event loop. Because each invocation from the event loop can only happen in series, it becomes a logical consequence that no other event loop invocation can occur during its execution; only when the previous invocation is finished can the next event loop function be invoked.
Part 2, The inner Callback
The same logic applies to the inner callback as well, and can be used to explain why the program will never output:
hello callback!
hello inner!
hello callback!
hello inner!
as you might expect.
By the end of the execution of the file, two serialized function calls will be on the event loop, both for callback. As the event loop is a FIFO (first in, first out) structure, the setTimeout that came first will be invoked first.
The first thing callback does is perform another setTimeout. As before, this will append a serialized call, this time to the inner function, to the event loop. setTimeout immediately returns, and execution will move on to the first console.log.
At this time, the event loop looks like this:
1 [callback] (executing)
2 [callback] (next in line)
3 [inner] (just added by callback)
The return of callback is the signal for the event loop to remove that invocation from itself. This leaves 2 things in the event loop now: 1 more call to callback, and 1 call to inner.
Now callback is the next function in line, so it will be invoked next. The process repeats itself: a call to inner is appended to the event loop, a console.log prints hello callback!, and we finish by removing this invocation of callback from the event loop.
This leaves the event loop with 2 more functions:
1 [inner] (next in line)
2 [inner] (added by most recent callback)
Neither of these functions messes with the event loop any further. They execute one after the other, the second one waiting for the first one's return. Then, when the second one returns, the event loop is left empty. This fact, combined with the fact that there are no other threads currently running, triggers the end of the process, which exits with a return code of 0.
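As a quick aside, you can watch that exit rule directly; a minimal sketch, assuming nothing else keeps the loop busy:
const id = setInterval(() => console.log('tick'), 1000);
// While the interval is pending, the event loop is never considered
// empty, so the process stays alive. Once we clear it, nothing remains
// scheduled, and Node exits with code 0.
setTimeout(() => clearInterval(id), 3500);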
Part 3, Relating to the Original Example
The first thing that happens in your example is that a thread is created within the process which will create a server bound to a particular port. Note, this is happening in precompiled C++ code, not JavaScript, and is not a separate process; it's a thread within the same process (see: C++ Thread Tutorial).
So now, whenever a request comes in, the execution of your original code won't be disturbed. Instead, incoming connection requests will be opened, held onto, and appended to the event loop.
The use function is the gateway for catching the events for incoming requests. It's an abstraction layer, but for the sake of simplicity, it's helpful to think of the use function like you would a setTimeout, except, instead of waiting a set amount of time, it appends the callback to the event loop upon incoming HTTP requests.
So, let's assume that there are two requests coming in to the server: T1 and T2. In your question you say they come in concurrently; since this is technically impossible, I'm going to assume they arrive one after the other, with a negligible time between them.
Whichever request comes in first, will be handled first by the secondary thread from earlier. Once that connection has been opened, it's appended to the event loop, and we move on to the next request, and repeat.
At any point after the first request is added to the event loop, V8 can begin execution of the use callback.
A quick aside about readImage
Since it's unclear whether readImage comes from a particular library, is something you wrote, or otherwise, it's impossible to tell exactly what it will do in this case. There are only 2 possibilities though, so here they are:
It's entirely synchronous, never using an alternate thread or the event loop
const fs = require('fs');

function readImage(path, callback) {
    let image = fs.readFileSync(path);
    callback(null, image);
    // A definition like this forces the callback to
    // fully return before readImage returns, meaning
    // readImage will block any subsequent calls.
}
It's entirely asynchronous, and takes advantage of fs's async API.
const fs = require('fs');

function readImage(path, callback) {
    fs.readFile(path, (err, data) => {
        callback(err, data);
    });
    // A definition like this forces readImage
    // to return immediately, and allows execution
    // to continue.
}
For the purposes of explanation, I'll be operating under the assumption that readImage will immediately return, as proper asynchronous functions should.
Once the use callback execution is started, the following will happen:
The first console log will print.
readImage will kick off a worker thread and immediately return.
The second console log will print.
During all of this, it's important to note that these operations happen synchronously; no other event loop invocation can start until these are finished. readImage may be asynchronous, but calling it is not; the callback and the use of a worker thread are what make it asynchronous.
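To make that split concrete, here is a minimal sketch; setTimeout stands in for the worker thread, since we don't have the real readImage:
function readImage(path, callback) {
    // stand-in for the real threaded I/O
    setTimeout(() => callback(null, 'image-bytes'), 10);
}

console.log('before readImage');
readImage('path', () => console.log('callback, via the event loop'));
console.log('after readImage');

// Output is always:
//   before readImage
//   after readImage
//   callback, via the event loop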
After this use callback returns, the next request has probably already finished parsing and was added to the event loop, while V8 was busy doing our console logs and readImage call.
So the next use callback is invoked, and repeats the same process: log, kick off a readImage thread, log again, return.
After this point, the readImage functions (depending on how long they take) have probably already retrieved what they needed and appended their callbacks to the event loop. So they will get executed next, in order of whichever one retrieved its data first. Remember, these operations were happening in separate threads, so they ran not only in parallel to the main JavaScript thread, but also in parallel to each other. So here, it doesn't matter which one got called first; it matters which one finished first, and got 'dibs' on the event loop.
Whichever readImage completed first will be the first one to execute. So, assuming no errors occurred, we'll print out to the console, then write to the response for the corresponding request, held in lexical scope.
When that send returns, the next readImage callback will begin execution: console log, and writing to the response.
At this point, both readImage threads have died, and the event loop is empty, but the thread that holds the server port binding is keeping the process alive, waiting for something else to add to the event loop, and the cycle to continue.
I hope this helps you understand the mechanics behind the asynchronous nature of the example you provided.
For each incoming request, Node will handle it one by one. That means there must be order, just like a queue: first in, first served. When Node starts processing a request, all of its synchronous code executes, and asynchronous work is passed to a worker thread, so Node can start processing the next request. When the asynchronous part is done, its callback goes back to the main thread and execution keeps going.
So when your synchronous code takes too long, you block the main thread and Node won't be able to handle other requests. It's easy to test:
app.use('/index', function(req, res, next) {
    // synchronous part
    console.log("hello index routes was invoked");
    var sum = 0;
    // useless heavy task to keep running and block the main thread
    for (var i = 0; i < 100000000000000000; i++) {
        sum += i;
    }
    // asynchronous part, passed to a worker thread
    readImage("path", function(err, content) {
        // when the worker thread finishes, this callback is added to the
        // end of the event loop and waits to be processed by the main thread
        var status = "Success";
        if (err) {
            console.log("err :", err);
            status = "Error";
        } else {
            console.log("Image read");
        }
        return res.send({ status: status });
    });
    // meanwhile, the synchronous part continues
    var a = 4, b = 5;
    console.log("sum =", a + b);
});
Node won't start processing the next request until it has finished all of the synchronous part. That's why people say: don't block the main thread.
There are a number of articles that explain this, such as this one.
The long and the short of it is that Node.js is not really a single threaded application; it's an illusion. The diagram at the top of the above link explains it reasonably well; as a summary:
NodeJS event-loop runs in a single thread
When it gets a request, it hands that request off to a new thread
So, in your code, your running application will have a PID of 1, for example. When you get request T1, it creates PID 2 to process that request (taking 1 minute). While that's running you get request T2, which spawns PID 3, also taking 1 minute. Both PID 2 and 3 will end after their task is completed, but PID 1 will continue listening and handing off events as and when they come in.
In summary, NodeJS being 'single threaded' is true, but it's just an event-loop listener. When events are heard (requests), it passes them off to a pool of threads that execute asynchronously, meaning it's not blocking other requests.
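As a small illustration of that hand-off (the file names below are placeholders), several fs.readFile calls dispatched back-to-back run off the main thread, so their callbacks can complete in any order:
const fs = require('fs');

['a.jpg', 'b.jpg', 'c.jpg'].forEach((name) => { // placeholder file names
    fs.readFile(name, (err) => {
        // completion order depends on the underlying workers, not call order
        console.log(name, err ? 'failed' : 'done');
    });
});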
You can simply create a child process by moving the readImage() function into a different file and using fork().
The parent file, parent.js:
const { fork } = require('child_process');
const forked = fork('child.js');
forked.on('message', (msg) => {
    console.log('Message from child', msg);
});
forked.send({ hello: 'world' });
The child file, child.js:
process.on('message', (msg) => {
    console.log('Message from parent:', msg);
});

let counter = 0;

setInterval(() => {
    process.send({ counter: counter++ });
}, 1000);
In the parent file above, we fork child.js (which will execute the file with the node command) and then we listen for the message event. The message event will be emitted whenever the child uses process.send, which we’re doing every second.
To pass down messages from the parent to the child, we can execute the send function on the forked object itself, and then, in the child script, we can listen to the message event on the global process object.
When executing the parent.js file above, it’ll first send down the { hello: 'world' } object to be printed by the forked child process and then the forked child process will send an incremented counter value every second to be printed by the parent process.
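To connect this back to the original route, here is a hedged sketch of what "move readImage into a child" might look like; the worker file name and the message shapes are assumptions, and app is the Express app from the question:
// parent: inside the route, fork a worker to do the heavy read
const { fork } = require('child_process');

app.use('/index', function(req, res, next) {
    const worker = fork('read-image-worker.js'); // hypothetical worker file
    worker.on('message', (msg) => {
        res.send({ status: msg.err ? 'Error' : 'Success' });
    });
    worker.send({ path: 'path' });
});

// read-image-worker.js: runs in its own process
const fs = require('fs');
process.on('message', (msg) => {
    fs.readFile(msg.path, (err) => {
        process.send({ err: err ? String(err) : null });
        process.exit(0);
    });
});
Forking a process per request is heavyweight, so in practice you would keep a pool of workers, but the message flow is the same.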
The V8 JS interpreter (i.e., Node) is basically single threaded. But the operations it kicks off can be async, for example fs.readFile.
As the Express server runs, it will kick off new asynchronous operations as it needs to complete the requests. So the readImage function will be started (usually asynchronously), meaning the results will return in any order. However, the server will manage which response goes to which request automatically.
So you will NOT have to manage which readImage response goes to which request.
So basically, T1 and T2 will not return concurrently; this is virtually impossible. They both rely heavily on the filesystem to complete the read, and they may finish in ANY ORDER (this cannot be predicted). Note that these operations are handled by the OS layer and are by nature multithreaded (in a modern computer).
If you are looking for a queue system, it should not be too hard to implement/ensure that images are read/returned in the exact order that they are requested.
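For example, a minimal sketch of such a queue, assuming Node's promise-based fs API: each read is chained onto the previous one, so results come back strictly in request order.
const fs = require('fs').promises;

let queue = Promise.resolve();

function readImageInOrder(path) {
    // start this read only after every earlier read has settled;
    // the rejection handler keeps the chain alive if an earlier read failed
    queue = queue.then(() => fs.readFile(path), () => fs.readFile(path));
    return queue;
}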
Since there's not really more to add to the previous answer from Marcus, here's a graphic that explains the single-threaded event-loop mechanism.
I need to perform several functions in my JavaScript/jQuery, but I want to avoid blocking the UI.
AJAX is not a viable solution because of the nature of the application: those functions will easily reach the thousands. Doing this asynchronously all at once will kill the browser.
So, I need some way of chaining the functions the browser needs to process, and only send the next function after the first has finished.
The algorithm is something like this:
For steps from 2 to 15:
    HTTP:GET the number of items for the current step (ranges somewhere from a couple of hundred to multiple thousands)
    For every item, HTTP:GET the results
As you can see, I have two GET-request-"chains" I somehow need to manage... Especially the innermost loop crashes the browser near-instantly if it's done asynchronously, but I'd still like the user to be able to operate the page, so a purely (blocking) synchronous approach will not work.
You can easily do this asynchronously without firing all requests at once. All you need to do is manage a queue. The following is pseudo-code for clarity. It's easily translatable to real AJAX requests:
// Basic structure of the request queue. It's a list of objects
// that defines ajax requests:
var request_queue = [{
    url : "some/path",
    callback : function_to_process_the_data
}];

// This function implements the main loop.
// It looks recursive but is not, because each function
// call happens in an event handler:
function process_request_queue() {
    // If we have anything in the queue, do an ajax call.
    // Otherwise do nothing and let the loop end.
    if (request_queue.length) {
        // Get one request from the queue. We can either
        // shift or pop depending on whether you prefer
        // depth first or breadth first processing:
        var req = request_queue.pop();
        ajax(req.url, function(result) {
            req.callback(result);
            // At the end of the ajax request, process
            // the queue again:
            process_request_queue();
        });
    }
}

// Now get the ball rolling:
process_request_queue();
// Now get the ball rolling:
process_request_queue();
So basically we turn the ajax call itself into a pseudo loop. It's basically the classic continuation passing style of programming done recursively.
In your case, an example of a request would be:
request_queue.push({
url : "path/to/OUTER/request",
callback : function (result) {
// You mentioned that the result of the OUTER request
// should trigger another round of INNER requests.
// To do this simply add the INNER requests to the queue:
request_queue.push({
url : result.inner_url,
callback : function_to_handle_inner_request
});
}
});
This is quite flexible because you not only have the option of processing requests either breadth first or depth first (shift vs pop). But you can also use splice to add stuff to the middle of the queue or use unshift vs push to put requests at the head of the queue for high priority requests.
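For example, a hypothetical high-priority request can jump to the head of the queue:
// assuming the queue is consumed with shift() (FIFO), unshift puts
// this urgent request at the front so it is processed next:
request_queue.unshift({
    url : "path/to/URGENT/request", // placeholder
    callback : function_to_handle_urgent_request
});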
You can also increase the number of simultaneous requests by popping more than one request per loop. Just be sure to call process_request_queue only once per loop to avoid exponential growth of simultaneous requests:
// Handling two simultaneous request channels:
function process_request_queue() {
    if (request_queue.length) {
        var req = request_queue.pop();
        ajax(req.url, function(result) {
            req.callback(result);
            // Best to loop on the first request.
            // The second request below may never fire
            // if the queue runs out of requests.
            process_request_queue();
        });
    }
    if (request_queue.length) {
        // use a second variable; reusing req would let this pop
        // clobber the request the first callback closes over
        var req2 = request_queue.pop();
        ajax(req2.url, function(result) {
            req2.callback(result);
            // DON'T CALL process_request_queue here.
            // We only want to call it once per "loop",
            // otherwise the "loop" would grow into a "tree".
        });
    }
}
You could make that ASYNC and use a small library I wrote some time ago that will let you queue function calls.
Based on Chrome developer tools and breakpoints, I think I'm dealing with a scope issue I can't figure out. Is it the way I define the function? The script below is in an included JS file, and I want the timeStamp array available for use in other functions without having to call my loadData function every time.
The timeStamp array goes undefined once execution leaves the for loop, before it even leaves the function.
var timeStamp = []; // Want this array to be global

function loadData(url) {
    $.getJSON(url, function(json) {
        for (var i = 0; i < json.length; i++) {
            timeStamp.push(json[i].TimeStamp);
        }
        console.log(timeStamp); // returns the values
    });
    console.log(timeStamp); // still empty at this point
}
Thank you for any help.
It looks like the issue is that getJSON is asynchronous. When it executes and finishes and your code continues on, it indicates only the START of the networking operation to retrieve the data. The actual networking operation does not complete until some time later.
When it does complete, the success handler is called (as specified as the second argument to your getJSON() call) and you populate the timeStamp array. ONLY after that success handler has been called is the timeStamp array valid.
As such, you cannot use the timeStamp array in code that immediately follows the getJSON() call (it hasn't been filled in yet). If other code needs the timeStamp array, you should call that code from the success handler or use some other timing mechanism to make sure that the code that uses the timeStamp array doesn't try to use it until AFTER the success handler has been called and the timeStamp array has been populated.
It is possible to make some Ajax calls be synchronous instead of asynchronous, but that is generally a very bad idea because it locks up the browser during the entire networking operation which is very unfriendly to the viewer. It is much better to fix the coding logic to work with asynchronous networking.
A typical design pattern for an ajax call like this is as follows:
function loadData(url) {
    $.getJSON(url, function(json) {
        // this will execute AFTER the ajax networking finishes
        var timeStamp = [];
        for (var i = 0; i < json.length; i++) {
            timeStamp.push(json[i].TimeStamp);
        }
        console.log(timeStamp);
        // now call other functions that need timeStamp data
        myOtherFunc(timeStamp);
    });
    // this will execute when the ajax networking has just been started
    //
    // timeStamp data is NOT valid here because
    // the ajax call has not yet completed.
    // You can only use the ajax data inside the success handler function
    // or in any functions that you call from there.
}
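A slightly more general variant of this pattern, sketched here, is to let the caller pass in the function that needs the data; the URL and handler below are placeholders:
function loadData(url, onLoaded) {
    $.getJSON(url, function(json) {
        var timeStamp = [];
        for (var i = 0; i < json.length; i++) {
            timeStamp.push(json[i].TimeStamp);
        }
        // hand the populated array to whoever asked for it
        onLoaded(timeStamp);
    });
}

// usage: everything that needs timeStamp goes in the callback
loadData("data.json", function(timeStamp) {
    console.log(timeStamp);
});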
And here's another person who doesn't understand basic AJAX...
getJSON is asynchronous. Meaning, code keeps running after the function call and before the successful return of the JSON request.
You can "fix" this by forcing the request to be synchronous with an appropriate flag, but that's a really bad idea for many reasons (the least of which is that you're violating the basic idea of AJAX). The best way is to remember how AJAX works and instead put all your code that should be executed when the AJAX returns, in the right place.
I'm creating a website that will feature news articles. These articles will appear in two columns at the bottom of the page. There will be a button at the bottom to load additional news stories. That means that I need to be able to specify what news story to load. Server-side, I'm simply implementing this with a LIMIT clause in my SQL statement, supplying the :first parameter like so:
SELECT *
FROM news
ORDER BY `date` DESC
LIMIT :first, 1
This means that, client-side, I need to keep track of how many news items I've loaded. I've implemented this by having the function to load new information be kept in an object with a property holding the number of items loaded. I'm worried that this is somehow a race condition that I am not seeing, though, where my loadNewInformation() will be called twice before the number is incremented. My code is as follows:
var News = {
    newInfoItems: 0,
    loadNewInformation: function(side) {
        this.newInfoItems += 1;
        jQuery.get(
            '/api/news/' + (this.newInfoItems - 1),
            function(html) {
                jQuery('div.col' + side).append(html);
            }
        );
    }
};
On page load, this is being called in the following fashion:
News.loadNewInformation('left');
News.loadNewInformation('right');
I could have implemented this in such a way that the success handler of a first call made another AJAX request for the second, which clearly would not be a race condition...but this seems like sloppy code. Thoughts?
(Yes, there is a race condition.)
Addressing Just the JavaScript
All JavaScript code on a page (excluding Web-Workers), which includes callbacks, is run "mutually exclusive".
In this case, because newInfoItems is eagerly evaluated, it is not even that complex: both "/api/news/0" and "/api/news/1" are guaranteed to be fetched (or fail in an attempt). Compare it to this:
// eager evaluation: value of url computed BEFORE request
// this is the same as in the example (with "fixed" increment order ;-)
// and is just to show the point
var url = "/api/news/" + this.newInfoItems;
this.newInfoItems += 1;
jQuery.get(url,
    function(html) {
        // only evaluated on AJAX callback - order of callbacks
        // not defined, but will still be mutually exclusive.
        jQuery('div.col' + side).append(html);
    }
);
However, the order in which the AJAX requests complete is not defined and is influenced by both the server and browser. Furthermore, as discussed below, there is no atomic context established between the server and individual AJAX requests.
Addressing the JavaScript in Context
Now, even though it's established that "/api/news/0" and "/api/news/1" will be invoked, imagine this unlikely, but theoretically possible situation:
articles B,A exist in database
browser sends both AJAX requests -- asynchronously or synchronously, it doesn't matter!
an article is added to the database sometime between when
the server processes the news/0 request, and
the server processes the news/1 request
Then, this happens:
news/0 returns article B (articles B,A in database)
article C added
news/1 returns article B (articles C,B,A in database)
Note that article B was returned twice! Oops :)
So, while the race condition "seems fairly unlikely", it does exist. A similar race condition (with different results) can occur if news/1 is processed before news/0 and (once again) an article is added between the requests: there is no atomic guarantee in either case!
(The above race condition would be more likely if the AJAX requests were executed in series, as the time window for a new article to be added is increased.)
Possible Solution
Consider fetching say, n (2 is okay!) articles in a single request (e.g. "/api/latest/n"), and then laying out the articles as appropriate in the callback. For instance, the first half of the articles on the left and the second half on right, or whatever is appropriate.
As well as eliminating the particular race-condition above by making the single request an atomic action -- with respect to article additions -- it will also result in less network traffic and less work for the server.
The fetch for the API might then look like:
SELECT *
FROM news
ORDER BY `date` DESC
LIMIT :n
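On the client, the success handler can then split the batch between the two columns; a sketch, assuming the endpoint returns a JSON array with one rendered article per entry:
jQuery.get('/api/latest/2', function(articles) {
    // one atomic fetch; divide the halves between the columns
    jQuery('div.colleft').append(articles[0]);
    jQuery('div.colright').append(articles[1]);
});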
Happy coding.
Yes, technically, this could create a race condition of sorts. The calls are asynchronous, and if the first got held up for some reason, the second could return first.
However, as you don't have a great deal going on in your callback functions that depends on the other 'side' being populated, I don't see where it should cause you too much grief.
Shouldn't be any race conditions; it must be something else wrong in your code. The counter is incremented before your .get() call, so prior to each get the counter will have been incremented correctly. Short example to demonstrate that it works when called sequentially: http://jsfiddle.net/KkpwF/
I reckon you're hinting at the newInfoItems counter?
Observations:
You're calling News.loadNewInformation twice, with a callback function which will be executed on AJAX completion.
The callback method you provide does not alter the News object.
In this case, you don't have to worry. The second call to News.loadNewInformation will only be executed once the first call completes - that is, excluding the execution of the callback method. Hence your newInfoItems counter will contain the correct value.
Try this:
loadNewInformation: function(side) {
    jQuery.get(
        '/api/news/' + (this.newInfoItems++),
        function(html) {
            jQuery('div.col' + side).append(html);
        }
    );
}
Suppose I load some Flash movie that I know at some point in the future will call window.flashReady and will set window.flashReadyTriggered = true.
Now I have a block of code that I want to have executed when the Flash is ready. I want it to execute it immediately if window.flashReady has already been called and I want to put it as the callback in window.flashReady if it has not yet been called. The naive approach is this:
if (window.flashReadyTriggered) {
    block();
} else {
    window.flashReady = block;
}
So the concern I have based on this is that the expression in the if condition is evaluated to false, but then before block() can be executed, window.flashReady is triggered by the external Flash. Consequently, block is never called.
Is there a better design pattern to accomplish the higher level goal I'm going for (e.g., manually calling the flashReady callback)? If not, am I safe, or are there other things I should do?
All Javascript event handler scripts are handled from one master event queue system. This means that event handlers run one at a time and one runs until completion before the next one that's ready to go starts running. As such, there are none of the typical race conditions in Javascript that one would see in a multithreaded language where multiple threads of the language can be running at once (or time sliced) and create real-time conflict for access to variables.
Any individual thread of execution in javascript will run to completion before the next one starts. That's how Javascript works. An event is pulled from the event queue and then code starts running to handle that event. That code runs by itself until it returns control to the system where the system will then pull the next event from the event queue and run that code until it returns control back to the system.
Thus the typical race conditions that are caused by two threads of execution going at the same time do not happen in Javascript.
This includes all forms of Javascript events including: user events (mouse, keys, etc..), timer events, network events (ajax callbacks), etc...
The only place you can actually do multi-threading in Javascript is with the HTML5 Web Workers or Worker Threads (in node.js), but they are very isolated from regular javascript (they can only communicate with regular javascript via message passing) and cannot manipulate the DOM at all and must have their own scripts and namespace, etc...
While I would not technically call this a race condition, there are situations in Javascript because of some of its asynchronous operations where you may have two or more asynchronous operations in flight at the same time (not actually executing Javascript, but the underlying asynchronous operation is running native code at the same time) and it may be unpredictable when each operation will complete relative to the others. This creates an uncertainty of timing which (if the relative timing of the operations is important to your code) creates something you have to manually code for. You may need to sequence the operations so one runs and you literally wait for it to complete before starting the next one. Or, you may start all three operations and then have some code that collects all three results and when they are all ready, then your code proceeds.
In modern Javascript, promises are generally used to manage these types of asynchronous operations.
So, if you had three asynchronous operations that each return a promise (like reading from a database, fetching a request from another server, etc...), you could manually sequence them like this:
a().then(b).then(c).then(result => {
    // result here
}).catch(err => {
    // error here
});
Or, if you wanted them all to run together (all in flight at the same time) and just know when they were all done, you could do:
Promise.all([a(), b(), c()]).then(results => {
    // results here
}).catch(err => {
    // error here
});
While I would not call these race conditions, they are in the same general family of designing your code to control indeterminate sequencing.
There is one special case that can occur in some situations in the browser. It's not really a race condition, but if you're using lots of global variables with temporary state, it could be something to be aware of. When your own code causes another event to occur, the browser will sometimes call that event handler synchronously rather than waiting until the current thread of execution is done. An example of this is:
click
the click event handler changes focus to another field
that other field has an event handler for onfocus
browser calls the onfocus event handler immediately
onfocus event handler runs
the rest of the click event handler runs (after the .focus() call)
This isn't technically a race condition because it's 100% known when the onfocus event handler will execute (during the .focus() call). But, it can create a situation where one event handler runs while another is in the middle of execution.
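A minimal browser sketch of that sequence (the element ids are placeholders, and the second element must be focusable, e.g. an input):
const button = document.getElementById('btn');  // placeholder elements
const field = document.getElementById('field'); // e.g. an <input>

field.addEventListener('focus', () => console.log('focus handler'));

button.addEventListener('click', () => {
    console.log('click: before focus()');
    field.focus(); // the focus handler runs synchronously, inside this call
    console.log('click: after focus()');
});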
JavaScript is single threaded. There are no race conditions.
When there is no more code to execute at your current "instruction pointer", the "thread" "passes the baton", and a queued window.setTimeout or event handler may execute its code.
You will get a better understanding of JavaScript's single-threading approach by reading about node.js's design ideas.
Further reading:
Why doesn't JavaScript support multithreading?
It is important to note that you may still experience race conditions if you e.g. use multiple async XMLHttpRequests, where the order of the returned responses is not defined (that is, responses may not come back in the same order they were sent). Here the output depends on the sequence or timing of other uncontrollable events (server latency etc.). This is a race condition in a nutshell.
So even using a single event queue (like in JavaScript) does not prevent events coming in uncontrollable order and your code should take care of this.
Sure you need to worry. It happens all the time:
<button id="b1">Button 1</button>
<button id="b2">Button 2</button>
<script>
const el = document.getElementById("view");
document.getElementById("b1").onclick = function() {
    fetch('/some/api')
        .then((res) => res.json()) // fetch resolves to a Response, not the data
        .then((data) => { el.innerHTML = JSON.stringify(data); });
};
document.getElementById("b2").onclick = function() {
    fetch('/some/other/api')
        .then((res) => res.json())
        .then((data) => { el.innerHTML = JSON.stringify(data); });
};
</script>
Some people don't view it as a race condition.
But it really is.
Race condition is broadly defined as "the behavior of an electronic, software, or other system where the output is dependent on the sequence or timing of other uncontrollable events".
If the user clicks these 2 buttons within a brief period, the output is not guaranteed to depend on the order of clicking. It depends on which API request resolves sooner. Moreover, the DOM element you're referencing can be removed by some other event (like a route change).
You can mitigate this race condition by disabling the button or showing a spinner while the loading operation is in progress, but that's cheating. You should have some mutex/counter/semaphore at the code level to control your asynchronous flow.
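A common counter-based guard, sketched below: each click takes a sequence number, and only the most recent request is allowed to touch the element.
let latest = 0;

function load(url) {
    const seq = ++latest;
    fetch(url)
        .then((res) => res.json())
        .then((data) => {
            if (seq !== latest) return; // a newer click superseded this one
            document.getElementById("view").innerHTML = JSON.stringify(data);
        });
}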
To adapt it to your question, it depends on what "block()" is. If it's a synchronous function, you don't need to worry. But if it's asynchronous, you have to worry:
function block() {
    window.blockInProgress = true;
    // some asynchronous code
    return new Promise(/* sets window.blockInProgress = false when done */);
}

if (!window.blockInProgress) {
    block();
} else {
    window.flashReady = block;
}
This code makes sense if you want to prevent block from being called multiple times. But if you don't care, or block is synchronous, you shouldn't worry. If you're worried that a global variable's value can change while you're checking it, you shouldn't be; it's guaranteed not to change unless you call some asynchronous function.
A more practical example: consider that we want to cache AJAX requests.
function fetchCached(params) {
    if (!dataInCache()) {
        return fetch(params).then(data => putToCache(data));
    } else {
        return getFromCache();
    }
}
So what happens if we call this code multiple times in quick succession? We don't know which response will return first, so we don't know which data will be cached. The first 2 times it will return fresh data, but by the 3rd time we don't know which response's data will be returned.
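One common fix, sketched here, is to cache the in-flight promise rather than the resolved data, so concurrent callers all share the first request (putToCache is the hypothetical helper from the example above):
let cached = null;

function fetchCached(params) {
    if (!cached) {
        // the first caller starts the request; later callers reuse it
        cached = fetch(params).then(data => putToCache(data));
    }
    return cached;
}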
Yes, of course there are race conditions in Javascript. It is based on the event loop model and hence exhibits race conditions for async computations. The following program will either log 10 or 16 depending on whether incHead or sqrHead is completed first:
const rand = () => Math.round(Math.random() * 100);

const incHead = xs => new Promise((res, rej) =>
    setTimeout(ys => {
        ys[0] = ys[0] + 1;
        res(ys);
    }, rand(), xs));

const sqrHead = xs => new Promise((res, rej) =>
    setTimeout(ys => {
        ys[0] = ys[0] * ys[0];
        res(ys);
    }, rand(), xs));

const state = [3];

const foo = incHead(state);
const bar = sqrHead(state);

Promise.all([foo, bar])
    .then(_ => console.log(state));