I'm new to NodeJS and am having a little trouble understanding what types of actions/tasks are executed asynchronously vs. synchronously. The specific example that I was working through was splitting a string.
I was trying to count the number of new lines in a block of text and then print that out. The example below worked; however, I'm not entirely clear why. My basic (and likely flawed) understanding is that anything that takes time is executed asynchronously (e.g. I/O), but I don't know what types of actions count as "taking time". Does the split() method "take time"? It has to loop through the contents of a string, and if the string is abnormally long, this could take a while. So why does this execute synchronously, or is it just that the split method blocks?
My question here is specific to the split method, but if anyone could also talk about or point me in the direction of some documentation that explains what gets executed synchronously vs asynchronously, it'd be really appreciated!
var array = "test\nstring\nexample".split("\n");
console.log(array.length-1);
Most operations in JavaScript itself are synchronous. The exceptions include the obvious ones such as setTimeout(), setInterval(), requestAnimationFrame(), etc. Also just because you pass in a callback does not mean the function is asynchronous (for example see some of the array methods, like array.forEach(), array.map(), array.filter(), array.some(), etc.).
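As a quick illustration (nothing Node-specific here), a callback passed to forEach runs immediately and in order, while a callback passed to setTimeout is deferred:
// Passing a callback does not make the call asynchronous: forEach runs its
// callback to completion before the next statement, while setTimeout defers its.
[1, 2, 3].forEach(function (n) { console.log('forEach', n); }); // synchronous
setTimeout(function () { console.log('timeout'); }, 0);          // asynchronous
console.log('end of script');
// Logs: forEach 1, forEach 2, forEach 3, end of script, timeout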
Node.js (core), which builds on top of JavaScript (via the V8 engine), adds its own synchronous and asynchronous methods. Fairly early on it was decided to distinguish between the two by giving an easily visible Sync suffix to the functions that perform synchronously. However, similar to JavaScript, there are some exceptions (e.g. require()). It should also be noted that userland modules (on npm for example) may have their own convention (or none at all), so for those third-party modules you will need to read the documentation to be sure of the behavior of their exported functions.
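For illustration, here is how that convention looks with fs (the file path is just a placeholder):
// The Sync suffix marks the blocking variant; the plain form takes a callback.
var fs = require('fs');

fs.readFile('example.txt', 'utf8', function (err, data) { // asynchronous
    if (err) throw err;
    console.log('async read:', data.length);
});

var text = fs.readFileSync('example.txt', 'utf8');        // synchronous, blocks
console.log('sync read:', text.length);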
Essentially JavaScript is single threaded. The best advice is to assume it is single threaded unless something suggests otherwise.
Functions that aren't synchronous tend to accept a callback parameter to be executed. Examples of these would be a jQuery.ajax call or the setTimeout function.
Related
What is the difference between the methods callAsyncJavaScript and evaluateJavaScript?
Using them with the same script, like:
evaluateJavaScript("document.documentElement.outerHTML.toString()")
and
callAsyncJavaScript("document.documentElement.outerHTML.toString()")
returns different results:
HTML in the first, and .success(nil) in the second...
There are 3 methods that allow you to run JS from a WKWebView (well, there are more signatures, but 3 principal methods):
The oldest and most primitive is evaluateJavaScript(_:completionHandler:), which allows you to run an arbitrary piece of code. In this day and age, there's no excuse to use it, as it has no benefits over the other methods.
The more modern, improved version is evaluateJavaScript(_:in:in:completionHandler:), which was introduced in iOS 14. It can be used to run synchronous code. It uses a modern way to present a result, Result<Any, Error>, which encapsulates any sort of result, success or failure, in a single convenient-to-use object. But the key difference is that it allows you to specify the content world of the script, which prevents conflicts between different scripts:
Executing a script in its own content world effectively gives it a separate copy of the environment variables to modify.
But neither of these functions is able to support asynchronous JavaScript code, i.e. promises. You can implement promise execution if you combine evaluateJavaScript with WKScriptMessage (e.g. as explained here), but it's very awkward. So callAsyncJavaScript(_:arguments:in:in:completionHandler:) solves that problem: you can use it to execute a Promise without the additional help of WKScriptMessage, which makes the code less scattered and easier to follow. Here's an example of how it's done.
This method also has 2 additional advantages:
The first argument is no longer a function name but functionBody, which becomes an anonymous function and prevents the naming conflicts that are possible due to the global scope of a named function.
Also, the arguments are separated from the body as a [String : Any] dictionary, which allows us to define the function body somewhere statically and reuse it by passing the arguments (as opposed to injecting them into the function body, as you need to with evaluateJavaScript).
So you should be using this method for executing asynchronous JS code, and you can use it for the synchronous code.
One more thing: with callAsyncJavaScript you need to remember that you are defining the function body, so if you want something returned, you need to use the word return (as opposed to evaluateJavaScript, where you defined a piece of code to run, which can have no return, unless you define a function).
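For illustration, here is a rough sketch of the kind of JavaScript you might pass as the functionBody (the endpoint name is made up, and it is assumed to be supplied through the arguments dictionary):
// Hypothetical functionBody for callAsyncJavaScript. Because it is a function
// body, it may await promises and must use "return" to hand a value back.
var functionBody =
    "const response = await fetch(endpoint);" +
    "const json = await response.json();" +
    "return json.title;";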
Straight to the point, I am running an http server in Node.js managing a hotel's check-in/out info where I write all the JSON data from memory to the same file using "fs.writeFile".
The data usually don't exceed 145 kB, but since I need to write them every time I get an update from my database, I get data loss/bad JSON format when calls to fs.writeFile happen immediately one after another.
Currently I have solved this problem using "fs.writeFileSync"; however, I would like to hear about a more sophisticated solution rather than the easy/bad solution of a sync function.
Using fs.promises results in the same error, since again I have to make multiple overlapping calls to fs.promises.writeFile.
According to Node's documentation, calling fs.writeFile or fs.promises.writeFile multiple times on the same file is not safe, and it suggests using a filestream; however, this is not currently an option for me.
To summarize, I need to wait for fs.writeFile to end normally before attempting any repeated write action, and using the callback is not useful since I don't know a priori when a write action needs to be done.
Thank you very much in advance
I assume you mean you are overwriting or truncating the file while the last write request is still being written. If I were you, I would use the promises API and heed the warning from the documentation:
It is unsafe to use fsPromises.writeFile() multiple times on the same file without waiting for the promise to be settled.
You can await the result in a traditional loop, or very carefully use .then() to "synchronize" your callbacks, but if you're not doing anything else in your event loop except reading from your database and writing to this file, you might as well just use writeFileSync to keep things simple/safe. The asynchronous APIs (callback and Promises) are intended to allow your program to do other things in the meantime; if this is not necessary and the async APIs add troublesome complexity for your code, just use the synchronous APIs. That's true for any node API or library function, not just fs.writeFile.
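For example, a minimal sketch of queuing the writes so each one waits for the previous one to settle (the file name and function names here are made up for illustration):
// Chain every write onto the previous one so writes to the same file never overlap.
const fs = require('fs').promises;

const STATE_FILE = './hotel-state.json';
let lastWrite = Promise.resolve();

function saveState(data) {
    lastWrite = lastWrite
        .catch(function () {})  // a failed write should not block later writes
        .then(function () {
            return fs.writeFile(STATE_FILE, JSON.stringify(data));
        });
    return lastWrite;           // callers can await this if they care about completion
}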
There are also libraries that will perform atomic filesystem operations for you and abstract away the implementation details, but I think these are probably overkill for you unless you describe your use case in more detail. For example, why you're dumping a database to disk as JSON as fast/frequently as you can, rather than keeping things in memory or using event-based incremental updates (e.g. a real, local database with atomicity and consistency guarantees).
Thank you for your response!
Since my app is mainly an http server, yes, I do other things rather than simply input/output, although not with a great number of requests. I will review the promises solution again, but the first time I had no luck.
To explain more, I have a:
function updateRoom(data){
    // ...update things in memory...
    writetoDisk();
}
and the function:
function writetoDisk(){
    fs.writeFile(....)
}
Making the function writetoDisk an async function and using "await" inside it still does not solve the problem, since the updateRoom function will call writetoDisk without waiting for it to finish.
The ".then" approach cannot be implemented since my updateRoom is being called constantly and dynamically.
If you happen to know a thing or two about async/await, you are more than welcome to explain a bit more to me. Thanks again nevertheless!
I mean, this may seem like a silly question, but I don't think it really is. In programming languages where we can use threads, the concept is very simple. But JavaScript is single-threaded and handles events through callbacks. What really makes something "branch out" of the execution process (I know this is the wrong technical term, but I hope you understand my point)? Is it the timers, like setTimeout and setInterval?
I mean, if I were to write a database with JS (I know JS wouldn't fit it, but please bear with me for the sake of the example) and I needed to check for some hardware operation to be completed before calling a callback function to handle the completion of such an operation, I cannot for the life of me think of a possible way to actually make this 'pseudo-code' async. I mean:
function op() {
while (someHardwareOpIsNotCompleted()) doWhatever();
callback("done");
}
How can I make this function run asynchronously without using some form of timer that refers to and 'counts' on a different thread? In my mind the only way around it is to pass the function reference to setTimeout, as in setTimeout(op). From there I can implement my callbacks and listeners, or the syntax-sugar Promise, or even the sugar-on-sugar "async".
Hopefully I was able to get my point across and I won't get replies like "Use promises!" or something of the sort. If the real structure has nothing to do with timers but only simple callbacks, can someone give me an example of how to achieve non-blocking execution of some piece of code without timers?
Thanks
A function is asynchronous for one of two reasons.
It needs to happen later (e.g. when a timeout has finished or when a click happens)
It takes a long time and shouldn't tie up the JS engine while it runs (e.g. accessing a database / file / network resource)
The asynchronous parts of the latter functions are not written in JavaScript. They are provided by the host environment and are usually written in the same language as that environment (e.g. C). The host environment exposes an API to give JavaScript access to them.
For example, the source code for Chrome's implementation of XMLHttpRequest is written in C++.
If you need to poll something, then testing it on a timer is the usual way.
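For example, a non-blocking version of the asker's op() could look like this (someHardwareOpIsNotCompleted and the 50 ms interval are just placeholders):
// Instead of a blocking while-loop, re-check on a timer so the engine stays
// free to process other events between checks.
function op(callback) {
    function poll() {
        if (someHardwareOpIsNotCompleted()) {
            setTimeout(poll, 50);   // yield to the event loop, try again shortly
        } else {
            callback("done");
        }
    }
    poll();
}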
Can anyone help me understand how NodeJS behaves, and the performance impact, for the scenario below?
a. Making a request to the REST API endpoint "/api/XXX". In this request, I return the response while triggering the asynchronous function, like below.
function update(req, res) {
executeUpdate(req.body); //Asynchronous function
res.send(200);
}
b. Here I send the response back without waiting for the function to complete; this function executes four MongoDB updates on different collections.
Questions:
1. As I read, NodeJS works on a single thread, so how is this asynchronous function executing?
2. If there are multiple requests for the same endpoint, what will the performance impact on NodeJS be?
3. How exactly does NodeJS handle the asynchronous function of each request? Since NodeJS runs on a single thread, is there any possibility of a memory issue?
In short, it depends on what you are doing in your function.
The synchronous functions in Node are executed on the main thread; thus, they will not be preempted and will run until the end of the function or until a return statement is encountered.
The async functions, on the other hand, are handed off from the main thread, and their callbacks will only be executed once the async tasks are completed on a separate worker thread.
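To make that concrete for the snippet in the question, here is a hedged sketch (the collection names and error handling are invented) of the fire-and-forget pattern:
// The handler responds immediately; the four updates finish later, as each
// promise from the MongoDB driver settles. Catch errors so a failed update
// does not become an unhandled rejection.
async function executeUpdate(body) {
    await roomsCollection.updateOne({ _id: body.id }, { $set: body.room });
    await guestsCollection.updateOne({ _id: body.id }, { $set: body.guest });
    // ...two more updates...
}

function update(req, res) {
    executeUpdate(req.body).catch(function (err) {
        console.error('update failed', err);
    });
    res.send(200);
}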
There are, I think, two different parts in the answer to your question.
Actual Performance - which includes CPU & memory performance. It also obviously includes speed.
Understanding, as the previous poster said, sync and async.
In dealing with #1, actual performance, the only real way to test it is to create or use a testing environment for your code. In a rudimentary way, based on the system you are using, you can view some of the information in top (Linux), or Glances will give you a basic idea of performance, but in order to know exactly what is going on you will need to use one of the various testing environments or write your own tests.
Approaching #2 - It is not only sync and async processes you have to understand, but also the ramifications of both. This includes the use of callbacks and promises.
It really all depends on the current process you are attempting to code. For instance, many Node programmers seem to prefer using promises when they make calls to MongoDB, especially when one requires more than one call based upon the return of the cursor.
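For instance, a hedged sketch (database, collection and field names are invented, and it assumes the standard mongodb driver) of chaining two dependent MongoDB calls with promises rather than nesting callbacks:
// Each .then() runs only after the previous promise resolves, so the second
// query can safely use the first query's result.
const { MongoClient } = require('mongodb');

MongoClient.connect('mongodb://localhost:27017')
    .then(function (client) {
        const db = client.db('hotel');
        return db.collection('rooms').findOne({ number: 101 })
            .then(function (room) {
                return db.collection('bookings').find({ roomId: room._id }).toArray();
            })
            .then(function (bookings) {
                console.log('bookings for room 101:', bookings.length);
            })
            .then(function () { return client.close(); });
    })
    .catch(function (err) { console.error(err); });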
There is really no written-in-stone formula for when you use sync or async processes. Avoiding callback hell is something all Node programmers try to do. Catching errors etc. is something you always need to be careful about. As I said some programmers will always opt for Promises or Async when dealing with returns of data. The famous Async library coupled with Bluebird are the choice of many for certain scenarios.
All that being said (and remember, your question is general and therefore so is my answer), in order to properly understand the implications for your performance, in memory, CPU and speed, as well as in returning information or passing it to the browser, it is a good idea to understand sync, async, callbacks, promises and error catching as well as you can. You will discover certain situations are great for sync (and much faster), while others do require async and/or promises.
Hope this helps somewhat.
I have seen this link: Implementing Mutual Exclusion in JavaScript.
On the other hand, I have read that there are no threads in javascript, but what exactly does that mean?
When events occur, where in the code can they interrupt?
And if there are no threads in JS, do I need to use mutexes in JS or not?
Specifically, I am wondering about the effects of using functions called by setTimeout() and XmlHttpRequest's onreadystatechange on globally accessible variables.
JavaScript is defined as a reentrant language, which means there is no threading exposed to the user; there may be threads in the implementation. Functions like setTimeout() and asynchronous callbacks need to wait for the script engine to sleep before they're able to run.
That means that everything that happens in an event must be finished before the next event will be processed.
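A small demonstration of that run-to-completion behaviour:
// The timeout callback is queued immediately, but it cannot interrupt the
// currently running code; it only runs after this script finishes.
var shared = 0;
setTimeout(function () { shared = -1; }, 0);
for (var i = 0; i < 1000000; i++) { shared++; }
console.log(shared); // always 1000000 - the timeout has not run yet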
That being said, you may need a mutex if your code does something where it expects a value not to change between when the asynchronous event was fired and when the callback was called.
For example, suppose clicking one button sends an XmlHttpRequest whose callback changes a data structure in a destructive way, and another button changes the same data structure directly. Between when the request was fired and when the callback was executed, the user could have clicked the second button and updated the data structure, and the callback could then wipe out that change.
While you could create a race condition like that, it's very easy to prevent in your code, since each function will be atomic. In fact, it would take a lot of work and some odd coding patterns to create the race condition.
The answers to this question are a bit outdated, though they were correct at the time they were given, and they are still correct if you are looking at a client-side JavaScript application that does NOT use web workers.
Articles on web-workers:
multithreading in javascript using webworkers
Mozilla on webworkers
This clearly shows that JavaScript, via web workers, has multithreading capabilities. As to the question of whether mutexes are needed in JavaScript, I am unsure. But this Stack Overflow post seems relevant:
Mutual Exclusion for N Asynchronous Threads
Yes, mutexes can be required in Javascript when accessing resources that are shared between tabs/windows, like localStorage.
For example, if a user has two tabs open, simple code like the following is unsafe:
function appendToList(item) {
var list = localStorage["myKey"];
if (list) {
list += "," + item;
}
else {
list = item;
}
localStorage["myKey"] = list;
}
Between the time that the localStorage item is 'got' and 'set', another tab could have modified the value. It's generally unlikely, but possible - you'd need to judge for yourself the likelihood and risk associated with any contention in your particular circumstances.
See the following articles for more detail:
Wait, Don't Touch That: Mutual Exclusion Locks & JavaScript - Medium Engineering
JavaScript concurrency and locking the HTML5 localStorage - Benjamin Dumke-von der Eh, Stackoverflow
As #william points out,
you may need a mutex if your code does something where it expects a value not to change between when the asynchronous event was fired and when the callback was called.
This can be generalised further - if your code does something where it expects exclusive control of a resource until an asynchronous request resolves, you may need a mutex.
A simple example is where you have a button that fires an ajax call to create a record in the back end. You might need a bit of code to protect you from trigger-happy users clicking away and thereby creating multiple records. There are a number of approaches to this problem (e.g. disable the button, re-enable on ajax success). You could also use a simple lock:
var save_lock = false;
$('#save_button').click(function(){
    if(!save_lock){
        // lock
        save_lock = true;
        $.ajax({
            success: function(){
                // unlock
                save_lock = false;
            }
        });
    }
});
I'm not sure if that's the best approach, and I would be interested to see how others handle mutual exclusion in JavaScript, but as far as I'm aware that's a simple mutex and it is handy.
JavaScript is single threaded... though Chrome may be a new beast (I think it is also single threaded, but each tab has its own JavaScript thread... I haven't looked into it in detail, so don't quote me on that).
However, one thing you DO need to worry about is how your JavaScript will handle multiple ajax requests coming back in a different order than you sent them. So, all you really need to worry about is making sure your ajax calls are handled in a way that they won't step on each other's feet if the results come back in a different order than you sent them.
This goes for timeouts too...
When JavaScript grows multithreading, then maybe worry about mutexes and the like....
JavaScript, the language, can be as multithreaded as you want, but browser embeddings of the JavaScript engine only run one callback (onload, onfocus, <script>, etc.) at a time (per tab, presumably). William's suggestion of using a mutex for changes between registering and receiving a callback should not be taken too literally because of this, as you wouldn't want to block in the intervening callback, since the callback that will unlock it would be blocked behind the current callback! (Wow, English sucks for talking about threading.) In this case, you probably want to do something along the lines of re-dispatching the current event if a flag is set, either literally or with the likes of setTimeout().
If you are using a different embedding of JS, one that executes multiple threads at once, it can get a bit more dicey, but due to the way JS can use callbacks so easily and locks objects on property access, explicit locking is not nearly as necessary. However, I would be surprised if an embedding designed for general code (e.g. game scripting) that used multithreading didn't also provide some explicit locking primitives.
Sorry for the wall of text!
Events are signaled, but JavaScript execution is still single-threaded.
My understanding is that when an event is signaled, the engine stops what it is executing at the moment to run the event handler. After the handler is finished, script execution is resumed. If the event handler changed some shared variables, then the resumed code will see these changes appearing "out of the blue".
If you want to "protect" shared data, a simple boolean flag should be sufficient.