When does the event loop block the application from I/O actions? - javascript

I just read an article about the event loop in JavaScript.
I found two contradictory statements and I would be glad if someone could clarify.
A downside of this model is that if a message takes too long to
complete, the web application is unable to process user interactions
like click or scroll. The browser mitigates this with the "a script is
taking too long to run" dialog
A very interesting property of the event loop model is that
JavaScript, unlike a lot of other languages, never blocks. Handling
I/O is typically performed via events and callbacks, so when the
application is waiting for an IndexedDB query to return or an XHR
request to return, it can still process other things like user input
So, when is the first one true and when is the second one true?

"A very interesting property of the event loop model is that
JavaScript, unlike a lot of other languages, never blocks."
This is misleading. Without careful programming, JavaScript would always block the UI thread, because running your logic blocks the UI by design. At a smooth sixty frames per second, that means your application logic must cooperatively yield control (or simply complete execution) within about 16 milliseconds; otherwise your UI will freeze or stutter.
Because of this, most JavaScript APIs that might take a long time (e.g. network requests) are designed to use techniques (e.g. callbacks, promises) that circumvent this problem, so that they do not block the event loop and the UI does not become unresponsive.
Put another way: host environments (e.g. a web browser or a Node.js runtime instance) are specifically designed to enable an event-based programming model (originally inspired by programming environments like HyperCard on the Mac). The host environment can be asked to perform a long-running task (e.g. run a timer) without blocking the main thread of execution, and your program is notified later, via an "event", when the long-running task is complete, enabling it to pick up where it left off.
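For example, here is a minimal sketch of that model using setTimeout, a timer API provided by the host environment:
console.log("before");
setTimeout(() => {
    // The host runs the timer; this callback is queued as an event
    // and runs later, once the call stack is empty.
    console.log("timer fired");
}, 1000);
console.log("after"); // logged before "timer fired"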

Both are correct, even though I agree they are somewhat poorly expressed.
So by points:
It's true that if a synchronous task takes too long to complete, the event loop "gets stuck" there, and all the other queued tasks can't run until it finishes.
Here it is talking about asynchronous tasks: even if an HTTP request, an I/O request, or any other asynchronous operation takes a long time to complete, all the synchronous tasks can keep doing their job, like processing user input.
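A small sketch illustrating both points (the URL is just a placeholder; fetch is available in browsers and recent Node.js versions):
// 1. A long synchronous task blocks the event loop; nothing else runs
//    (no clicks, no rendering) until the loop below finishes.
let total = 0;
for (let i = 0; i < 1e9; i++) { total += i; }

// 2. An asynchronous task does not block; its callback is queued for later
//    while synchronous code keeps running.
fetch("https://example.com/data.json")        // placeholder URL
    .then(response => response.json())
    .then(data => console.log(data));
console.log("still responsive while the request is in flight");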

There are two types of code in JavaScript:
Synchronous (it's like going one by one).
Asynchronous (it's like scheduling work for the future).
Synchronous code
Say you want to find the prime numbers from 1 to 10,000,000. With synchronous code you write a function, and that function performs the calculation and finds the primes in the given range. But what happens with synchronous code? The JavaScript engine cannot do anything else until that task is finished.
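A minimal synchronous sketch of that prime calculation (the limit is arbitrary):
// Counts primes from 2 to limit. While this runs, the engine can do nothing else.
function countPrimes(limit) {
    let count = 0;
    for (let n = 2; n <= limit; n++) {
        let isPrime = true;
        for (let d = 2; d * d <= n; d++) {
            if (n % d === 0) { isPrime = false; break; }
        }
        if (isPrime) count++;
    }
    return count;
}
console.log(countPrimes(10000000)); // blocks until the full range is checked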
Asynchronous Code
If you wrap the same code inside a callback, for example with the setTimeout method, JavaScript puts that function in the event queue and carries on with other work. When the timeout expires, and only when there is nothing left on the call stack, the event loop passes the first item in the event queue to the engine and the callback fires. So this is really about finding idle time to perform the heavy operation.
Use JavaScript workers to perform heavy mathematical tasks, not
setTimeout, because the function will still block the engine while it
is on the call stack (see the sketch below).
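A minimal sketch of that advice using the browser Worker API (prime-worker.js is a hypothetical file name; the prime counting mirrors the synchronous sketch above):
// main.js: hand the heavy work to a worker so the UI stays responsive
const worker = new Worker("prime-worker.js");   // hypothetical file name
worker.onmessage = (event) => console.log("primes found:", event.data);
worker.postMessage(10000000);                   // the limit to search up to

// prime-worker.js: runs on its own thread
self.onmessage = (event) => {
    const limit = event.data;
    let count = 0;
    for (let n = 2; n <= limit; n++) {
        let isPrime = true;
        for (let d = 2; d * d <= n; d++) {
            if (n % d === 0) { isPrime = false; break; }
        }
        if (isPrime) count++;
    }
    self.postMessage(count);
};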

Related

How does node.js schedule asynchronous and synchronous tasks?

I know how node.js executes code asynchronously without blocking the main thread of execution by using the event-loop for scheduling the asynchronous tasks, but I'm not clear on how the main thread actually decides to put aside a piece of code for asynchronous execution.
(Basically, what indicates that a piece of code should be executed asynchronously rather than synchronously? What are the differentiating factors?)
And also, what are the asynchronous and synchronous APIs provided by Node?
There is a mistake in your assumption when you ask:
what indicates that this piece of code should be executed asynchronously and not synchronously
The mistake is thinking that some code is executed asynchronously. This is not true.
Javascript (including node.js) executes all code synchronously. There is no part of your code that is executed asynchronously.
So at first glance that is the answer to your question: there is no such thing as asynchronous code execution.
Wait, what?
But what about all the asynchronous stuff?
Like I said, node.js (and javascript in general) executes all code synchronously. However, javascript is able to wait for certain events asynchronously. There is no asynchronous code execution; however, there is asynchronous waiting.
What's the difference between code execution and waiting?
Let's look at an example. For the sake of clarity I'll use pseudocode in a fake made-up language to remove any confusion from javascript syntax. Let's say we want to read from a file. This fake language supports both synchronous and asynchronous waiting:
Example 1. Synchronously wait for the drive to return bytes from the file
data = readSync('filename.txt');
// the line above will pause the execution of code until all the
// bytes have been read
Example 2. Asynchronously wait for the drive to return bytes from the file
// Since it is asynchronous we don't want the read function to
// pause the execution of code. Therefore we cannot return the
// data. We need a mechanism to accept the returned value.
// However, unlike javascript, this fake language does not support
// first-class functions. You cannot pass functions as arguments
// to other functions. However, like Java and C++ we can pass
// objects to functions
class MyFileReaderHandler inherits FileReaderHandler {
    method callback (data) {
        // process file data in here!
    }
}
myHandler = new MyFileReaderHandler()
asyncRead('filename.txt', myHandler);
// The function above does not wait for the file read to complete
// but instead returns immediately and allows other code to execute.
// At some point in the future when it finishes reading all data
// it will call the myHandler.callback() function passing it the
// bytes from the file.
As you can see, asynchronous I/O is not special to javascript. It has existed long before javascript in C++ and even C libraries dealing with file I/O, network I/O and GUI programming. In fact it has existed even before C. You can do this kind of logic in assembly (and indeed that's how people design operating systems).
What's special about javascript is that due to its functional nature (first-class functions) the syntax for passing some code you want to be executed in the future is simpler:
asyncRead('filename.txt', (data) => {
    // process data in here
});
or in modern javascript can even be made to look like synchronous code:
async function main () {
    const data = await asyncReadPromise('filename.txt');
}
main();
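For comparison, here is roughly what this looks like with real Node.js APIs, as a sketch ('filename.txt' is just a placeholder):
const fs = require('fs');
const fsp = require('fs/promises');

// Callback style, like the asyncRead example:
fs.readFile('filename.txt', (err, data) => {
    if (err) throw err;
    // process data in here
});
// execution continues here immediately; the callback runs later

// Promise/await style, like main() above:
async function readIt() {
    const data = await fsp.readFile('filename.txt');
    // process data in here
}
readIt();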
What's the difference between waiting and executing code? Don't you need code to check on events?
Actually, you need exactly 0% CPU time to wait for events. You just need to execute some code to register some interrupt handlers, and the CPU hardware (not software) will call your interrupt handler when the interrupt occurs. And all manner of hardware is designed to trigger interrupts: keyboards, hard disks, network cards, USB devices, PCI devices, etc.
Disk and network I/O are even more efficient because they also use DMA. These are hardware memory readers/writers that can copy large chunks (kilobytes/megabytes) of memory from one place (eg. hard disk) to another place (eg. RAM). The CPU actually only needs to set up the DMA controller and then is free to do something else. Once the DMA controller completes the transfer it will trigger an interrupt which will execute some device driver code which will inform the OS that some I/O event has completed which will inform node.js which will execute your callback or fulfill your Promise.
All the above use dedicated hardware instead of executing instructions on the CPU to transfer data. Thus waiting for data takes 0% CPU time.
So the parallelism in node.js has more to do with how many PCIe lanes your CPU supports than how many CPU cores it has.
You can, if you want to, execute javascript asynchronously
Like any other language, modern javascript has multi-threading support in the form of webworkers in the browser and worker_threads in node.js. But this is regular multi-threading like any other language where you deliberately start another thread to execute your code asynchronously.
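A minimal worker_threads sketch in node.js (single-file pattern; the work itself is just a stand-in loop):
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
    // Main thread: spawn a worker and hand it some CPU-bound work.
    const worker = new Worker(__filename, { workerData: 10000000 });
    worker.on('message', (result) => console.log('sum:', result));
} else {
    // Worker thread: the heavy loop runs here, off the main thread.
    let sum = 0;
    for (let i = 0; i < workerData; i++) sum += i;
    parentPort.postMessage(sum);
}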

How do Callbacks/Promises/Async Functions make JS asynchronous? [duplicate]

I thought that they were basically the same thing — writing programs that split tasks between processors (on machines that have 2+ processors). Then I'm reading this, which says:
Async methods are intended to be non-blocking operations. An await
expression in an async method doesn’t block the current thread while
the awaited task is running. Instead, the expression signs up the rest
of the method as a continuation and returns control to the caller of
the async method.
The async and await keywords don't cause additional threads to be
created. Async methods don't require multithreading because an async
method doesn't run on its own thread. The method runs on the current
synchronization context and uses time on the thread only when the
method is active. You can use Task.Run to move CPU-bound work to a
background thread, but a background thread doesn't help with a process
that's just waiting for results to become available.
and I'm wondering whether someone can translate that to English for me. It seems to draw a distinction between asynchronicity (is that a word?) and threading and imply that you can have a program that has asynchronous tasks but no multithreading.
Now I understand the idea of asynchronous tasks such as the example on pg. 467 of Jon Skeet's C# In Depth, Third Edition
async void DisplayWebsiteLength(object sender, EventArgs e)
{
    label.Text = "Fetching ...";
    using (HttpClient client = new HttpClient())
    {
        Task<string> task = client.GetStringAsync("http://csharpindepth.com");
        string text = await task;
        label.Text = text.Length.ToString();
    }
}
The async keyword means "This function, whenever it is called, will not be called in a context in which its completion is required for everything after its call to be called."
In other words, writing it in the middle of some task
int x = 5;
DisplayWebsiteLength();
double y = Math.Pow((double)x,2000.0);
, since DisplayWebsiteLength() has nothing to do with x or y, will cause DisplayWebsiteLength() to be executed "in the background", like
processor 1 | processor 2
-------------------------------------------------------------------
int x = 5; | DisplayWebsiteLength()
double y = Math.Pow((double)x,2000.0); |
Obviously that's a stupid example, but am I correct or am I totally confused or what?
(Also, I'm confused about why sender and e aren't ever used in the body of the above function.)
Your misunderstanding is extremely common. Many people are taught that multithreading and asynchrony are the same thing, but they are not.
An analogy usually helps. You are cooking in a restaurant. An order comes in for eggs and toast.
Synchronous: you cook the eggs, then you cook the toast.
Asynchronous, single threaded: you start the eggs cooking and set a timer. You start the toast cooking, and set a timer. While they are both cooking, you clean the kitchen. When the timers go off you take the eggs off the heat and the toast out of the toaster and serve them.
Asynchronous, multithreaded: you hire two more cooks, one to cook eggs and one to cook toast. Now you have the problem of coordinating the cooks so that they do not conflict with each other in the kitchen when sharing resources. And you have to pay them.
Now does it make sense that multithreading is only one kind of asynchrony? Threading is about workers; asynchrony is about tasks. In multithreaded workflows you assign tasks to workers. In asynchronous single-threaded workflows you have a graph of tasks where some tasks depend on the results of others; as each task completes it invokes the code that schedules the next task that can run, given the results of the just-completed task. But you (hopefully) only need one worker to perform all the tasks, not one worker per task.
It will help to realize that many tasks are not processor-bound. For processor-bound tasks it makes sense to hire as many workers (threads) as there are processors, assign one task to each worker, assign one processor to each worker, and have each processor do the job of nothing else but computing the result as quickly as possible. But for tasks that are not waiting on a processor, you don't need to assign a worker at all. You just wait for the message to arrive that the result is available and do something else while you're waiting. When that message arrives then you can schedule the continuation of the completed task as the next thing on your to-do list to check off.
So let's look at Jon's example in more detail. What happens?
Someone invokes DisplayWebSiteLength. Who? We don't care.
It sets a label, creates a client, and asks the client to fetch something. The client returns an object representing the task of fetching something. That task is in progress.
Is it in progress on another thread? Probably not. Read Stephen's article on why there is no thread.
Now we await the task. What happens? We check to see if the task has completed between the time we created it and we awaited it. If yes, then we fetch the result and keep running. Let's suppose it has not completed. We sign up the remainder of this method as the continuation of that task and return.
Now control has returned to the caller. What does it do? Whatever it wants.
Now suppose the task completes. How did it do that? Maybe it was running on another thread, or maybe the caller that we just returned to allowed it to run to completion on the current thread. Regardless, we now have a completed task.
The completed task asks the correct thread -- again, likely the only thread -- to run the continuation of the task.
Control passes immediately back into the method we just left at the point of the await. Now there is a result available so we can assign text and run the rest of the method.
It's just like in my analogy. Someone asks you for a document. You send away in the mail for the document, and keep on doing other work. When it arrives in the mail you are signalled, and when you feel like it, you do the rest of the workflow -- open the envelope, pay the delivery fees, whatever. You don't need to hire another worker to do all that for you.
In-browser Javascript is a great example of an asynchronous program that has no multithreading.
You don't have to worry about multiple pieces of code touching the same objects at the same time: each function will finish running before any other javascript is allowed to run on the page. (Update: Since this was written, JavaScript has added async functions and generator functions. These functions do not always run to completion before any other javascript is executed: whenever they reach a yield or await keyword, they yield execution to other javascript, and can continue execution later, similar to C#'s async methods.)
However, when doing something like an AJAX request, no code is running at all, so other javascript can respond to things like click events until that request comes back and invokes the callback associated with it. If one of these other event handlers is still running when the AJAX request gets back, its handler won't be called until they're done. There's only one JavaScript "thread" running, even though it's possible for you to effectively pause the thing you were doing until you have the information you need.
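For example, a browser sketch (the element id and URL are placeholders):
// Clicks keep working while the request is "in flight",
// because no JavaScript is running for the request during the wait.
document.getElementById('refresh').addEventListener('click', () => {
    console.log('still clickable while waiting');
});

fetch('/api/data')                         // placeholder endpoint
    .then(response => response.json())
    .then(data => {
        // Queued as a callback; runs only when no other handler is running.
        console.log('request finished', data);
    });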
In C# applications, the same thing happens any time you're dealing with UI elements--you're only allowed to interact with UI elements when you're on the UI thread. If the user clicked a button, and you wanted to respond by reading a large file from the disk, an inexperienced programmer might make the mistake of reading the file within the click event handler itself, which would cause the application to "freeze" until the file finished loading because it's not allowed to respond to any more clicking, hovering, or any other UI-related events until that thread is freed.
One option programmers might use to avoid this problem is to create a new thread to load the file, and then tell that thread's code that when the file is loaded it needs to run the remaining code on the UI thread again so it can update UI elements based on what it found in the file. Until recently, this approach was very popular because it was what the C# libraries and language made easy, but it's fundamentally more complicated than it has to be.
If you think about what the CPU is doing when it reads a file at the level of the hardware and Operating System, it's basically issuing an instruction to read pieces of data from the disk into memory, and to hit the operating system with an "interrupt" when the read is complete. In other words, reading from disk (or any I/O really) is an inherently asynchronous operation. The concept of a thread waiting for that I/O to complete is an abstraction that the library developers created to make it easier to program against. It's not necessary.
Now, most I/O operations in .NET have a corresponding ...Async() method you can invoke, which returns a Task almost immediately. You can add callbacks to this Task to specify code that you want to have run when the asynchronous operation completes. You can also specify which thread you want that code to run on, and you can provide a token which the asynchronous operation can check from time to time to see if you decided to cancel the asynchronous task, giving it the opportunity to stop its work quickly and gracefully.
Until the async/await keywords were added, C# was much more obvious about how callback code gets invoked, because those callbacks were in the form of delegates that you associated with the task. In order to still give you the benefit of using the ...Async() operation, while avoiding complexity in code, async/await abstracts away the creation of those delegates. But they're still there in the compiled code.
So you can have your UI event handler await an I/O operation, freeing up the UI thread to do other things, and more-or-less automatically returning to the UI thread once you've finished reading the file--without ever having to create a new thread.

How is JavaScript single threaded and asynchronous?

I went through the link below and understood a little about single-threaded JavaScript and its asynchronous nature:
https://www.sohamkamani.com/blog/2016/03/14/wrapping-your-head-around-async-programming/
But I still have questions. JavaScript is single threaded, and it always moves forward sequentially until it finishes its execution.
Whenever we make a call to a function that has a callback, that callback is executed after the function receives a response, and execution of the JavaScript code continues during the wait for that response. Given that execution happens in sequence, how is the callback execution resumed once the response is received? It feels like the thread is moving backwards to execute the callback.
The thread of execution should always move in the forward direction, right?
Please clarify this.
It's true that JavaScript is (now) specified to have only a single active thread per realm (roughly: a global environment and its contents).¹ But I wouldn't call it "single-threaded;" you can have multiple threads via workers. They do not share a common global environment, which makes it dramatically easier to reason about code and not worry about the values of variables changing out from under you unexpectedly, but they can communicate via messaging and even access shared memory (with all the complications that brings, including the values of shared memory slots changing out from under you unexpectedly).
But running on a single thread and having asynchronous callbacks are not at all in conflict. A JavaScript thread works on the basis of a job queue that jobs get added to. A job is a unit of code that runs to completion (no other code in the realm can run until it does). When that unit of code is done running to completion, the thread picks up the next job from the queue and runs that. One job cannot interrupt another job. Jobs running on the main thread (the UI thread in browsers) cannot be suspended in the middle (mostly²), though jobs on worker threads can be (via Atomics.wait). If a job is suspended, no other job in the realm will run until that job is resumed and completed.
So for instance, consider:
console.log("one");
setTimeout(function() {
console.log("three");
}, 10);
console.log("two");
When you run that, you see
one
two
three
in the console. Here's what happened:
A job for the main script execution was added to the job queue
The main JavaScript thread for the browser picked up that job
It ran the first console.log, setTimeout, and last console.log
The job terminated
The main JavaScript thread idled for a bit
The browser's timer mechanism determined that it was time for that setTimeout callback to run and added a job to the job queue to run it
The main JavaScript thread picked up that job and ran that final console.log
If the main JavaScript thread were tied up (for instance, while (true);), jobs would just pile up in the queue and never get processed, because that job never completes.
¹ The JavaScript specification was silent on the topic of threading until fairly recently. Browsers and Node.js used a single-active-thread-per-realm model (mostly), but some much less common environments didn't. I vaguely recall an early fork of V8 (the JavaScript engine in Chromium-based browsers and Node.js) that added multiple threading, but it never went anywhere. The Java virtual machine can run JavaScript code via its scripting support, and that code is multi-threaded (or at least it was with the Rhino engine; I have no idea whether Narwhal changes that), but again that's quite niche.
² "A job is a unit of code that runs to completion." and "Jobs running on th emain thread...cannot be suspended in the middle..." Two caveats here:
alert, confirm, and prompt — those 90's synchronous user interactions — suspend a job on the main UI thread while waiting on the user. This is antiquated behavior that's grandfathered in (and is being at least partially phased out).
Naturally, the host process — browser, etc. — can terminate the entire environment a job is running in while the job is running. For instance, when a web page becomes "unresponsive," the browser can kill it. But that's not just the job, it's the entire environment the job was running in.
Just to add to T.J. Crowder's answer above:
The job queue is processed by the event loop, which keeps track of all the callbacks that need to be executed. Whenever a callback is ready to be executed (for example, after an asynchronous action has finished), it is added to the queue.
As explained by T.J. Crowder, you can picture this as a queue feeding a single thread. Whenever there is a callback to execute, the event loop picks it up and runs it on the main thread, and the normal flow of execution does not continue while this is happening. This is why JavaScript can be thought of as a single-threaded language.
You can learn more about Event Loops and how they work in this amazing talk by Philip Roberts.

What would happen if a variable were manipulated more than once at the exact same time? Is it possible? [duplicate]

Let's assume I run this piece of code.
var score = 0;
for (var i = 0; i < arbitrary_length; i++) {
    async_task(i, function() { score++; }); // increment callback function
}
In theory I understand that this presents a data race, and two threads trying to increment at the same time may result in a single increment; however, node.js (and JavaScript) are known to be single threaded. Am I guaranteed that the final value of score will be equal to arbitrary_length?
Am I guaranteed that the final value of score will be equal to
arbitrary_length?
Yes, as long as all async_task() calls call the callback once and only once, you are guaranteed that the final value of score will be equal to arbitrary_length.
It is the single-threaded nature of Javascript that guarantees that there are never two pieces of Javascript running at the exact same time. Instead, because of the event driven nature of Javascript in both browsers and node.js, one piece of JS runs to completion, then the next event is pulled from the event queue and that triggers a callback which will also run to completion.
There is no such thing as interrupt driven Javascript (where some callback might interrupt some other piece of Javascript that is currently running). Everything is serialized through the event queue. This is an enormous simplification and prevents a lot of sticky situations that would otherwise be a lot of work to program safely when you have either multiple threads running concurrently or interrupt driven code.
There still are some concurrency issues to be concerned about, but they have more to do with shared state that multiple asynchronous callbacks can all access. While only one will ever be accessing it at any given time, it is still possible that a piece of code that contains several asynchronous operations could leave some state in an "in between" state while it was in the middle of several async operations at a point where some other async operation could run and could attempt to access that data.
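A sketch of that kind of "in between" state problem (saveChunk and collectNewData are hypothetical helpers):
let readings = [];

async function flush() {
    // First half written; awaiting yields control back to the event loop.
    await saveChunk(readings.slice(0, 100));     // hypothetical async write
    // While we were awaiting, the interval below may have pushed new data,
    // so the second half no longer matches what we started writing.
    await saveChunk(readings.slice(100));
}

setInterval(() => {
    readings.push(collectNewData());             // hypothetical; may run mid-flush
}, 10000);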
You can read more about the event driven nature of Javascript here: How does JavaScript handle AJAX responses in the background? and that answer also contains a number of other references.
And another similar answer that discusses the kind of shared data race conditions that are possible: Can this code cause a race condition in socket io?
Some other references:
how do I prevent event handlers to handle multiple events at once in javascript?
Do I need to be concerned with race conditions with asynchronous Javascript?
JavaScript - When exactly does the call stack become "empty"?
Node.js server with multiple concurrent requests, how does it work?
To give you an idea of the concurrency issues that can happen in Javascript (even without threads and without interrupts), here's an example from my own code.
I have a Raspberry Pi node.js server that controls the attic fans in my house. Every 10 seconds it checks two temperature probes, one inside the attic and one outside the house and decides how it should control the fans (via relays). It also records temperature data that can be presented in charts. Once an hour, it saves the latest temperature data that was collected in memory to some files for persistence in case of power outage or server crash. That saving operation involves a series of async file writes. Each one of those async writes yields control back to the system and then continues when the async callback is called signaling completion. Because this is a low memory system and the data can potentially occupy a significant portion of the available RAM, the data is not copied in memory before writing (that's simply not practical). So, I'm writing the live in-memory data to disk.
At any time during any of these async file I/O operations, while waiting for a callback to signify completion of the many file writes involved, one of my timers in the server could fire, I'd collect a new set of temperature data and that would attempt to modify the in-memory data set that I'm in the middle of writing. That's a concurrency issue waiting to happen. If it changes the data while I've written part of it and am waiting for that write to finish before writing the rest, then the data that gets written can easily end up corrupted because I will have written out one part of the data, the data will have gotten modified from underneath me and then I will attempt to write out more data without realizing it's been changed. That's a concurrency issue.
I actually have a console.log() statement that explicitly logs when this concurrency issue occurs on my server (and is handled safely by my code). It happens once every few days on my server. I know it's there and it's real.
There are many ways to work around those types of concurrency issues. The simplest would have been to just make a copy in memory of all the data and then write out the copy. Because there are not threads or interrupts, making a copy in memory would be safe from concurrency (there would be no yielding to async operations in the middle of the copy to create a concurrency issue). But, that wasn't practical in this case. So, I implemented a queue. Whenever I start writing, I set a flag on the object that manages the data. Then, anytime the system wants to add or modify data in the stored data while that flag is set, those changes just go into a queue. The actual data is not touched while that flag is set. When the data has been safely written to disk, the flag is reset and the queued items are processed. Any concurrency issue was safely avoided.
So, this is an example of concurrency issues that you do have to be concerned about. One great simplifying assumption with Javascript is that a piece of Javascript will run to completion without being interrupted, as long as it doesn't purposely return control back to the system. That makes handling concurrency issues like the one described above lots, lots easier because your code will never be interrupted except when you consciously yield control back to the system. This is why we don't need mutexes and semaphores and other things like that in our own Javascript. We can use simple flags (just a regular Javascript variable) like I described above if needed.
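A stripped-down sketch of that flag-plus-queue idea (writeAllData is a hypothetical async file write):
const store = {
    data: [],
    pending: [],      // changes that arrive while a write is in progress
    writing: false,   // a plain variable doing the job of a mutex

    add(item) {
        if (this.writing) {
            this.pending.push(item);     // don't touch data mid-write
        } else {
            this.data.push(item);
        }
    },

    async save() {
        this.writing = true;
        try {
            await writeAllData(this.data);       // hypothetical async write
        } finally {
            this.writing = false;
            this.data.push(...this.pending);     // apply queued changes afterwards
            this.pending.length = 0;
        }
    }
};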
In any entirely synchronous piece of Javascript, you will never be interrupted by other Javascript. A synchronous piece of Javascript will run to completion before the next event in the event queue is processed. This is what is meant by Javascript being an "event-driven" language. As an example of this, if you had this code:
console.log("A");
// schedule timer for 500 ms from now
setTimeout(function() {
console.log("B");
}, 500);
console.log("C");
// spin for 1000ms
var start = Date.now();
while (Date.now() - start < 1000) {}
console.log("D");
You would get the following in the console:
A
C
D
B
The timer event cannot be processed until the current piece of Javascript runs to completion, even though it was likely added to the event queue sooner than that. The way the JS interpreter works is that it runs the current JS until it returns control back to the system and then (and only then), it fetches the next event from the event queue and calls the callback associated with that event.
Here's the sequence of events under the covers.
This JS starts running.
console.log("A") is output.
A timer event is scheduled for 500ms from now. The timer subsystem uses native code.
console.log("C") is output.
The code enters the spin loop.
At some point in time part-way through the spin loop the previously set timer is ready to fire. It is up to the interpreter implementation to decide exactly how this works, but the end result is that a timer event is inserted into the Javascript event queue.
The spin loop finishes.
console.log("D") is output.
This piece of Javascript finishes and returns control back to the system.
The Javascript interpreter sees that the current piece of Javascript is done so it checks the event queue to see if there are any pending events waiting to run. It finds the timer event and a callback associated with that event and calls that callback (starting a new block of JS execution). That code starts running and console.log("B") is output.
That setTimeout() callback finishes execution and the interpreter again checks the event queue to see if there are any other events that are ready to run.
Node uses an event loop. You can think of this as a queue. So we can assume that your for loop puts the function() { score++; } callback on this queue arbitrary_length times. After that, the JS engine runs these one by one and increases score each time. So yes. The only exception is if a callback is not called, or if the score variable is accessed from somewhere else.
Actually, you can use this pattern to run tasks in parallel, collect the results, and call a single callback when every task is done.
var results = [];
for (var i = 0; i < arbitrary_length; i++) {
    async_task(i, function(result) {
        results.push(result);
        if (results.length == arbitrary_length)
            tasksDone(results);
    });
}
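In modern code the same collect-and-finish pattern is usually written with promises, for example (this assumes a promise-returning version of async_task):
const tasks = [];
for (let i = 0; i < arbitrary_length; i++) {
    tasks.push(async_task(i));          // assumes async_task returns a promise
}
Promise.all(tasks).then(results => tasksDone(results));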
No two invocations of the function can happen at the same time (b/c node is single threaded), so that will not be a problem. The only problem would be if in some cases async_task(..) drops the callback. But if, e.g., 'async_task(..)' was just calling setTimeout(..) with the given function, then yes, each call will execute, they will never collide with each other, and 'score' will have the value expected, 'arbitrary_length', at the end.
Of course, the 'arbitrary_length' can't be so great as to exhaust memory, or overflow whatever collection is holding these callbacks. There is no threading issue however.
It's worth noting for others who view this that there is a common pitfall with the loop variable i. Here i is passed to async_task() as an argument at call time, so each call receives the correct value; but if the callback itself closed over i, you would need let instead of var (or copy i into another variable), otherwise every callback would see the last value of i.
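For example, the pitfall shows up when the callback closes over the loop variable:
for (var i = 0; i < 3; i++) {
    setTimeout(() => console.log(i), 0);   // logs 3, 3, 3 (all callbacks share one i)
}

for (let j = 0; j < 3; j++) {
    setTimeout(() => console.log(j), 0);   // logs 0, 1, 2 (fresh binding per iteration)
}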

Does JavaScript execute top to bottom?

Does the code (e.g. functions) execute at the same time, or does it follow the order in which it was written (top to bottom)? I know that order matters in HTML, what about JavaScript?
For instance, if there are two function calls one after the other, will they get executed simultaneously or one after the other even if they have nothing to do with each other?
It may seem as if Javascript functions are executing in an unpredictable order because the model for Javascript in a browser is event-driven. This means that a Javascript program typically attaches event handlers to DOM elements and these are triggered by user actions such as clicking or moving the pointer over an element. However, the script that sets up the event handlers runs as a traditional structured imperative program.
A further complication is that modern Javascript applications make extensive use of asynchronous functions. This means that a function call might return quickly but will have set in motion an action which completes at a later time. An obvious example is the sending of requests to a server in so-called AJAX applications. Typically the request function is passed a callback function which is called when the request completes. However the Javascript program will go on to the next statement without waiting for the completion of the request. This can be somewhat confusing if you aren't thinking clearly enough about what your program is actually doing.
Another example that you might sometimes encounter is the launching of animations in jQuery. These too work asynchronously and you can pass a callback function that runs after the animation completes. Once again this can be surprising sometimes if you expect the next statement to be executed after the animation completes rather than after it starts.
It occurs in the order it was written (with various exceptions). More specifically, it's an imperative, structured, object-oriented, prototype-based scripting language :)
See Imperative Programming
