I have:
a Socket.IO server
an array of values
an async function which iterates the array and deletes its items
Here is the code:
// import
const SocketIo = require('socket.io')

// setup
const Server = new SocketIo.Server()
const arrayOfValues = ['some', 'values', 'here']

// server welcome message
Server.on('connection', async Socket => {
  iterateValues() // server no longer responds, even if function is async
  Socket.emit('welcome', 'hi')
})

// the async function
async function iterateValues () {
  while (true) {
    if (arrayOfValues[0]) arrayOfValues.splice(0, 1)
  }
}
Whenever the client connects to my server, "iterateValues()" must be invoked and asynchronously remove the items from the array. The problem is that when "iterateValues()" is invoked anywhere, it halts the script and the server no longer responds.
I think I misunderstood what async functions actually are. How can I create a function that runs an infinite while loop without blocking the thread that other methods run on?
I think I misunderstood what async functions actually are.
I am not a back-end guy, but I will give it a shot. Node.js still runs in a single thread, so an infinite loop will block the whole program. Asynchronous doesn't mean that the provided callback runs in a separate thread; if Node.js were multithreaded, you could have an infinite loop in one thread without it blocking another.
Asynchronous means that the provided callback won't be run immediately but delayed; when it does run, it still runs within that one thread.
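For the original question, one way to keep the loop from starving the event loop is to yield on every iteration. A minimal sketch (assuming the goal is simply to drain the array without blocking):

async function iterateValues () {
  while (arrayOfValues.length > 0) {
    arrayOfValues.splice(0, 1)
    // give the event loop a chance to run other callbacks (socket events, timers, ...)
    await new Promise(resolve => setImmediate(resolve))
  }
}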
Javascript is single-threaded, and Node.js uses an asynchronous, event-driven design pattern, which means that multiple actions are taken at the same time while executing a program.
With this in mind, here is my pseudo code:

myFunction() // main flow

var httpCallMade = false // a global variable

async function myFunction() {
    const someData = await callDB() // LINE 1 network call
    renderMethod() // LINE 2 flow1
}

function redisPubSubEventHandler() { // a method that is called from a redis subscription, asynchronously, somewhere from a background task in the program
    renderMethod() // LINE 3 flow2
}

function renderMethod() {
    if (!httpCallMade) {
        httpCallMade = true // set a global flag
        const res = makeHTTPCall() // an asynchronous network call. returns a promise.
    } // I want to ensure that this block is "synchronized" and is not accessible by flow1 and flow2 simultaneously!
}
myFunction() is called in the main thread - while redisPubSubEventHandler() is called asynchronously from a background task in the program. Both flows would end in calling renderMethod(). The idea is to ensure makeHTTPCall() (inside renderMethod) is only allowed to be called once
Is it guaranteed that renderMethod() would never be executed in parallel by LINE2 and LINE3 at the same time? My understanding is that as soon as renderMethod() is executed - event loop will not allow anything else to happen in server - which guarantees that it is only executed once at a given time (even if it had a network call without await).
Is this understanding correct?
If not, how do I synchronize/lock entry to renderMethod()?
Javascript is single-threaded. Therefore, unless you are deliberately using threads (e.g. worker_threads in Node.js), no function in the current thread can be executed by two parallel threads at the same time.
This explains why javascript has no mutex or semaphore capability: because generally it is not needed (note: you can still have race conditions, because asynchronous code may be executed in a sequence you did not expect).
There is a general confusion that asynchronous code means parallel code execution (multi-threaded). It can, but most of the time, when a system is labeled as asynchronous or non-blocking or event-oriented INSTEAD of multi-threaded, it means that the system is single-threaded.
In this case asynchronous means parallel WAIT, not parallel code execution. Code is always executed sequentially; only, due to the ability of waiting in parallel, you may not always know the sequence the code is executed in.
There are parts of javascript that execute in a separate thread. Modern browsers execute each tab and iframe in its own thread (but each tab or iframe are themselves single-threaded). But script cannot cross tabs, windows or iframes. So this is a non-issue. Script may access objects inside iframes but this is done via an API and the script itself cannot execute in the foreign iframe.
Node.js and some browsers also do DNS queries in a separate thread because there is no standardized cross-platform non-blocking API for DNS queries. But this is C code and not your javascript code. Your only interaction with this kind of multi-threading is when you pass a URL to fetch() or XMLHttpRequest().
Node.js also implements file I/O, zip compression and cryptographic functions in separate threads, but again this is C code, not your javascript code. All results from these separate threads are returned to you asynchronously via the event loop, so by the time your javascript code processes the result we are back to executing sequentially in the main thread.
Finally, both node.js and browsers have worker APIs (web workers for browsers and worker threads for node.js). However, both of these APIs use message passing to transfer data (memory is only shared if you explicitly use a SharedArrayBuffer or transfer the underlying buffer; otherwise the data is copied), and this still protects functions from having their variables overwritten by another thread.
In your code, both myFunction() and redisPubSubEventHandler() run in the main thread. It works like this:
myFunction() is called, it returns immediately when it encounters the await.
a bunch of functions are declared and compiled.
we reach the end of your script:
// I want to ensure that this method is "synchronized" and is not called by flow1 and flow2 simultaneously!
}
<----- we reach here
now that we have reached the end of script we enter the event loop...
either the callDB or the redis event completes, our process gets woken up
the event loop figures out which handler to call based on what event happened
either the await returns and renderMethod() is called, or redisPubSubEventHandler() gets executed and calls renderMethod()
In either case both your renderMethod() calls will execute on the main thread. Thus it is impossible for renderMethod() to run in parallel.
It is possible for renderMethod() to be half executed and for another call to renderMethod() to happen IF it contains the await keyword. This is because the first call is suspended at the await, allowing the interpreter to call renderMethod() again before the first call completes. But note that even in this case you are only in trouble if you have an await between the if check and httpCallMade = true.
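To make that last point concrete, here is a minimal sketch (makeHTTPCall and somethingElse are stand-ins, not from the question): the guard only protects you if it is set before the first await.

const makeHTTPCall = () => new Promise(resolve => setTimeout(resolve, 100)); // stand-in
const somethingElse = () => Promise.resolve();                               // stand-in
let httpCallMade = false;

async function renderMethodUnsafe() {
  if (!httpCallMade) {
    await somethingElse();   // another call can enter here and also pass the check
    httpCallMade = true;
    await makeHTTPCall();
  }
}

async function renderMethodSafe() {
  if (!httpCallMade) {
    httpCallMade = true;     // set synchronously, before any await
    await makeHTTPCall();
  }
}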
You need to differentiate between synchronous and asynchronous, and single- and multi-threaded.
JavaScript is single-threaded so no two lines of the same execution context can run at the same time.
But JavaScript allows asynchronous code execution (async/await), so the code in an execution context does not need to run in the order it appears in the source; different parts of the code can be executed interleaved (not overlapped). This is sometimes called "running in parallel", but I think that term is misleading.
event-driven design pattern, which means that multiple actions are taken at the same time while executing a program.
There are certain actions that can happen at the same time, like IO, multiprocessing (WebWorkers), but that is (with respect to JavaScript Code execution) not multi-threaded.
Is it guaranteed that renderMethod() would never be executed in parallel by LINE2 and LINE3 at the same time?
That depends on what you define as "parallel" and "at the same time".
Parts of the logic you describe in renderMethod() will run interleaved (since you do the request asynchronously), so renderMethod(){ if(!httpCallMade) { could be executed multiple times before you get the response (not the Promise) back from makeHTTPCall, but the code lines will never be executed at the same time.
My understanding is that as soon as renderMethod() is executed - event loop will not allow anything else to happen in server - which guarantees that it is only executed once at a given time (even if it had a network call without await).
The problem here is that you somehow need to get the data out of your async response.
Therefore you either need to mark your function as async and use const res = await makeHTTPCall(), which allows code interleaving at the point of the await, or use .then(…) with a callback, which will be executed asynchronously at a later point (after you have left the function).
But from the beginning of the function to the first await or the .then, no interleaving can take place.
So your httpCallMade = true would prevent another makeHTTPCall from taking place before the currently running one is finished, under the assumption that you set httpCallMade back to false only when the request is finished (in the .then callback, or after the await).
// I want to ensure that this method is "synchronized" and is not called by flow1 and flow2 simultaneously!
As soon as you get a result in an asynchronous way, you can't go back to synchronous code execution. So you need a guard like httpCallMade to prevent the logic described in renderMethod from running multiple times interleaved.
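A sketch of that guard, with httpCallMade reset only after the request has finished (makeHTTPCall is a stand-in here):

const makeHTTPCall = () => new Promise(resolve => setTimeout(resolve, 100)); // stand-in
let httpCallMade = false;

async function renderMethod() {
  if (httpCallMade) return; // checked and set before any await, so no interleaving can slip in
  httpCallMade = true;
  try {
    const res = await makeHTTPCall();
    // ... use res
  } finally {
    httpCallMade = false;   // allow the next call only once this one has finished
  }
}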
Your question really comes down to:
Given this code:
var flag = false;

function f() {
  if (!flag) {
    flag = true;
    console.log("hello");
  }
}
and considering that flag is not modified anywhere else, and many different, asynchronous events may call this function f...:
Can "hello" be printed twice?
The answer is no: if this runs on an ECMAScript compliant JS engine, then the call stack must be empty first before the next job is pulled from an event/job queue. Asynchronous tasks/reactions are pushed on an event queue. They don't execute before the currently executing JavaScript has run to completion, i.e. up until the call stack is empty. So they never interrupt running JavaScript code pre-emptively.
This is true even if these asynchronous tasks/events/jobs are scheduled by other threads, lower-level non-JS code,...etc. They all must wait their turn to be consumed by the JS engine. And this will happen one after the other.
For more information, see the ECMAScript specification on "Job". For instance 8.4 Jobs and Host Operations to Enqueue Jobs:
A Job is an abstract closure with no parameters that initiates an ECMAScript computation when no other ECMAScript computation is currently in progress.
[...]
Only one Job may be actively undergoing evaluation at any point in time.
Once evaluation of a Job starts, it must run to completion before evaluation of any other Job starts.
For example, promises generate such jobs -- See 25.6.1.3.2 Promise Resolve Functions:
When a promise resolve function is called with argument resolution, the following steps are taken:
[...]
Perform HostEnqueuePromiseJob(job.[[Job]], job.[[Realm]]).
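A small runnable illustration of this: no matter how many asynchronous sources schedule f, each job runs to completion on the single JS thread, so "hello" is printed exactly once.

var flag = false;
function f() {
  if (!flag) {
    flag = true;
    console.log("hello");
  }
}

// several asynchronous "events" all calling f
setTimeout(f, 0);
Promise.resolve().then(f);
setTimeout(f, 10);
// output: "hello" (once)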
It sounds like you want to do something like a 'debounce', where any event will cause makeHttpCall() execute, but it should only be executing once at a time, and should execute again after the last call if another event has occurred while it was executing. So like this:
DB Call is made, and makeHttpCall() should execute
While makeHttpCall() is executing, you get a redis pub/sub event that should execute makeHttpCall() again, but that is delayed because it is already executing
Still before the first call is done, another DB call is made and requires makeHttpCall() to execute again. But even though you have received two events, you only need to have it called one time to update something with the most recent information you have.
The first call to makeHttpCall() finishes, but since there have been two events, you need to make a call again.
const makeHttpCall = () => new Promise(resolve => {
  // resolve after 2 seconds
  setTimeout(resolve, 2000);
});

// returns a function to call that will call your function
const createDebouncer = (fn) => {
  let eventCounter = 0;
  let inProgress = false;

  const execute = () => {
    if (inProgress) {
      eventCounter++;
      console.log('execute() called, but call is in progress.');
      console.log(`There are now ${eventCounter} events since last call.`);
      return;
    }
    console.log(`Executing... There have been ${eventCounter} events.`);
    eventCounter = 0;
    inProgress = true;
    fn()
      .then(() => {
        console.log('async function call completed!');
        inProgress = false;
        if (eventCounter > 0) {
          // make another call if there are pending events since the last call
          execute();
        }
      });
  }

  return execute;
}

let debouncer = createDebouncer(makeHttpCall);

document.getElementById('buttonDoEvent').addEventListener('click', () => {
  debouncer();
});
<button id="buttonDoEvent">Do Event</button>
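In the question's Node/redis context there is no button; the same debouncer can be driven directly from the two flows, e.g. (the call sites are assumptions based on the question's pseudo code):

debouncer();                // flow 1: after the DB call resolves
debouncer();                // flow 2: a redis pub/sub event while the first call is running
setTimeout(debouncer, 500); // another event, still coalesced into a single follow-up call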
Note: this is not about multithreading or multiprocessing. This question is about a single process and a single thread.
Python asyncio and JavaScript async are both single-threaded concepts.
In Python's asyncio we can use the async/await keywords to create a function so that, when this function is invoked multiple times (via gather), the invocations are executed concurrently. The way it works is that when an await keyword is encountered, other tasks can execute. As explained here, we apply the async/await keywords to functions that we would like to execute concurrently. However, while these tasks are running concurrently, the main thread is blocked.
In JavaScript, async has evolved from callbacks to promises to async/await. In the main program, when an async function is encountered, the function is sent to the event loop (where the function execution begins) and the main thread can continue working. Any subsequent async function also gets added to the event loop. Inside the event loop, when a function's execution encounters an await, another function is given a chance to execute until it encounters an await itself.
To get this behaviour in Python, that is, to allow the main thread to continue while executing child tasks, the only option is multithreading/multiprocessing, because once we start the child thread/process, the main thread is not blocked until we call .join.
Is there any way by which Python's asyncio can make the main thread non-blocking? If not, is this the fundamental difference between the async concepts in JavaScript and Python?
when an async function is encountered, the function is sent to the event loop and the main thread can continue working.
This is close, but not quite right. In Javascript, execution won't stop until the callstack has been emptied - the await keyword will suspend the execution of a particular function until an event triggers, and in the meantime control returns to its caller. This means the first part of any async function will execute as soon as it is called (it's not immediately put into the event loop), and it will only pause as soon as an await is hit.
To get this behaviour in Python, that is, to allow the main thread to continue while executing child tasks, the only option is multithreading/multiprocessing.
The difference here is that by default, Javascript always has an event loop and python does not. In other words, python has an on/off switch for asynchronous programming while Javascript does not. When you run something such as loop.run_forever(), you're basically flipping the event loop on, and execution won't continue where you left off until the event loop gets turned back off. (calling it a "thread" isn't quite the right word here, as it's all single-threaded, as you already acknowledged. Instead, we generally call each task that we queue up, well, a "task")
You're asking if there's a way to let your code continue execution after starting up the event loop. I'm pretty sure the answer is no, nor should it be needed. Whatever you want to execute after the event loop has started can just be executed within the event loop.
If you want your python program to act more like Javascript, then the first thing you do can be to start up an event loop, and then any further logic can be placed within the first task that the event loop executes. In Javascript, this boiler plate essentially happens for you, and your source code is effectively that first task that's queued up in the event loop.
Update:
Because there seems to be some confusion with how the Javascript event loop works, I'll try to explain it a little further.
Remember that an event loop is simply a system where, when certain events happen, a block of synchronous code can be queued up to run as soon as the thread is not busy.
So let's see what the event loop does for a simple program like this:
// This async function will resolve
// after the number of ms provided has passed
const wait = ms => { ... }
async function main() {
  console.log(2)
  await wait(100)
  console.log(4)
}
console.log(1)
main()
console.log(3)
When Javascript begins executing the above program, it'll begin with a single task queued up in its "run these things when you're not busy" queue. This item is the whole program.
So, it'll start at the top, defining whatever needs to be defined, execute console.log(1), call the main function, enter it and run console.log(2), then call wait(), which conceptually causes a background timer to start. wait() returns a promise which we then await, at which point we immediately go back to the caller of main. main wasn't awaited, so execution continues to console.log(3), until we finally finish at the end of the file. That whole path (from defining functions to console.log(3)) is a single, non-interruptible task. Even if another task got queued up, Javascript wouldn't stop to handle that task until it finished this chunk of synchronous logic.
Later on, our countdown timer will finish, and another task will go into our queue, which will cause our main() function to continue execution. The same logic as before applies here - our execution path could enter and exit other async functions, and will only stop when it reaches the end of, in this case, the main function (even hitting an await keyword doesn't actually make this line of synchronous logic stop, it just makes it jump back to the caller). The execution of a single task doesn't stop until the callstack has been emptied, and when execution is continuing from an async function, the first entry of the callstack starts at that particular async function.
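To make the walkthrough concrete, the program above prints:

// 1
// 2
// 3
// (about 100 ms later, when the timer's task is processed)
// 4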
Python's async/await follows these same rules, except for the fact that in Python, the event loop isn't running by default.
javascript
const wait = async (s) => {
  setTimeout(() => {
    console.log("waiting " + s + "s")
  }, s * 1000)
}

async function read_file() {
  console.log("initial read_file sleep(2.1)")
  await wait(2)
  console.log("read_file 1/2 wait(2)")
  await wait(0.1)
  console.log("read_file 2/2 wait(0.1)")
}

async function read_api() {
  console.log("initial read_api wait(2)")
  await wait(2)
  console.log("read_api whole wait(2)")
}

read_file()
console.log("does not block")
read_api()
console.log("the second time, won't block")

// initial read_file sleep(2.1)
// does not block
// initial read_api wait(2)
// the second time, won't block
// read_file 1/2 wait(2)
// read_api whole wait(2)
// read_file 2/2 wait(0.1)
// !!! Wait a moment
// waiting 0.1s
// waiting 2s
// waiting 2s
python
import asyncio

async def read_file():
    print("initial read_file asyncio.sleep(2 + 0.1)")
    await asyncio.sleep(2)
    print("read_file 1/2 asyncio.sleep(2)")
    await asyncio.sleep(0.1)
    print("read_file 2/2 asyncio.sleep(0.1)")

async def read_api():
    print("initial read_api asyncio.sleep(2)")
    await asyncio.sleep(2)
    print("read_api whole asyncio.sleep(2)")

async def gather():
    await asyncio.gather(
        asyncio.create_task(read_file()),
        asyncio.create_task(read_api()))

asyncio.run(gather())

"""
initial read_file asyncio.sleep(2 + 0.1)
initial read_api asyncio.sleep(2)
!!! Wait a moment
read_file 1/2 asyncio.sleep(2)
read_api whole asyncio.sleep(2)
read_file 2/2 asyncio.sleep(0.1)
"""
The scope of await differs:
javascript: after the awaited method has executed, wait for the Promise it returned to resolve. In await wait(2), only the body of wait(2) itself is guaranteed to have run; the setTimeout callback it schedules is not tied to the returned Promise.
python: suspend the method so other methods can execute. At await asyncio.sleep(2), the read_file method releases control and is suspended until the sleep completes.
btw, javascript's await/async is just Promise syntactic sugar
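For comparison, if the timer is wrapped in the returned Promise (a sketch of an alternative wait, not the one used above), await wait(2) really suspends until the timer fires, and the ordering then mirrors the Python version:

const wait = (s) => new Promise(resolve =>
  setTimeout(() => {
    console.log("waiting " + s + "s")
    resolve()
  }, s * 1000)
)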
Threading-wise, what's the difference between web workers and functions declared as
async function xxx()
{
}
?
I am aware web workers are executed on separate threads, but what about async functions? Are such functions threaded in the same way as a function executed through setInterval is, or are they subject to yet another different kind of threading?
async functions are just syntactic sugar around Promises, and Promises are wrappers for callbacks.
// v await is just syntactic sugar
// v Promises are just wrappers
// v functions taking callbacks are actually the source for the asynchronous behavior
await new Promise(resolve => setTimeout(resolve));
Now a callback could be called back immediately by the code, e.g. if you .filter an array, or the engine could store the callback internally somewhere. Then, when a specific event occurs, it executes the callback. One could say that these are asynchronous callbacks, and those are usually the ones we wrap into Promises and await them.
To make sure that two callbacks do not run at the same time (which would make concurrent modifications possible, and those cause a lot of trouble), an event does not get processed immediately when it occurs; instead, a Job (callback with arguments) gets placed into a Job Queue. Whenever the JavaScript Agent (= thread²) finishes execution of the current job, it looks into that queue for the next job to process¹.
Therefore one could say that an async function is just a way to express a continuous series of jobs.
async function getPage() {
  // the first job starts fetching the webpage
  const response = await fetch("https://stackoverflow.com"); // callback gets registered under the hood somewhere, somewhen an event gets triggered
  // the second job starts parsing the content
  const result = await response.json(); // again, callback and event under the hood
  // the third job logs the result
  console.log(result);
}

// the same series of jobs can also be found here:
fetch("https://stackoverflow.com") // first job
  .then(response => response.json()) // second job / callback
  .then(result => console.log(result)); // third job / callback
Although two jobs cannot run in parallel on one agent (= thread), the job of one async function might run between the jobs of another. Therefore, two async functions can run concurrently.
Now who does produce these asynchronous events? That depends on what you are awaiting in the async function (or rather: what callback you registered). If it is a timer (setTimeout), an internal timer is set and the JS-thread continues with other jobs until the timer is done and then it executes the callback passed. Some of them, especially in the Node.js environment (fetch, fs.readFile) will start another thread internally. You only hand over some arguments and receive the results when the thread is done (through an event).
To get real parallelism, that is, running two jobs at the same time, multiple agents are needed. WebWorkers are exactly that: agents. The code in a WebWorker therefore runs independently (it has its own job queues and executor).
Agents can communicate with each other via events, and you can react to those events with callbacks. For sure you can await actions from another agent too, if you wrap the callbacks into Promises:
const workerDone = new Promise(res => window.onmessage = res);

(async function(){
  const result = await workerDone;
  //...
})();
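For a dedicated Worker created by the page, the same idea looks like this; a sketch where worker.js is a hypothetical file name and the message arrives on the worker object itself:

const worker = new Worker("worker.js");
const workerDone = new Promise(resolve => { worker.onmessage = resolve; });

(async function () {
  const { data } = await workerDone; // resolves with the MessageEvent posted by the worker
  console.log("worker result:", data);
})();

// inside worker.js, the result would be sent back with: postMessage(result);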
TL;DR:
JS <---> callbacks / promises <--> internal Thread / Webworker
¹ There are other terms coined for this behavior, such as event loop / queue and others. The term Job is specified by ECMA262.
² How the engine implements agents is up to the engine, though as one agent may only execute one Job at a time, it very much makes sense to have one thread per agent.
In contrast to WebWorkers, async functions are never guaranteed to be executed on a separate thread.
They just don't block the whole thread until their response arrives. You can think of them as being registered as waiting for a result; other code is allowed to execute, and when the response comes through, they continue; hence the name asynchronous programming.
This is achieved through a message queue, which is a list of messages to be processed. Each message has an associated function which gets called in order to handle the message.
Doing this:
setTimeout(() => {
  console.log('foo')
}, 1000)
will simply add the callback function (that logs to the console) to the message queue. When its 1000 ms timer elapses, the message is popped from the message queue and executed.
While the timer is ticking, other code is free to execute. This is what gives the illusion of multithreading.
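For instance, a minimal illustration:

setTimeout(() => console.log('foo'), 1000)
console.log('bar') // prints immediately; 'foo' follows roughly a second later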
The setTimeout example above uses callbacks. Promises and async work the same way at a lower level — they piggyback on that message-queue concept, but are just syntactically different.
Workers are also accessed through asynchronous code (i.e. Promises); however, Workers are a solution for CPU-intensive tasks which would block the thread that the JS code runs on, even if the CPU-intensive function is invoked asynchronously.
So if you have a CPU-intensive function like renderThread(duration) and you do something like
new Promise((v, x) => setTimeout(_ => (renderThread(500), v(1)), 0))
  .then(v => console.log(v));
new Promise((v, x) => setTimeout(_ => (renderThread(100), v(2)), 0))
  .then(v => console.log(v));
even if the second one takes less time to complete, it will only be invoked after the first one releases the CPU thread. So we will get first 1 and then 2 on the console.
However, had these two functions been run on separate Workers, the outcome we would expect is 2 and then 1, since they could run concurrently and the second one would finish and post its message back earlier.
So for basic IO operations, standard single-threaded asynchronous code is very efficient, and the need for Workers arises from tasks which are CPU intensive and can be segmented (assigned to multiple Workers at once), such as FFT and whatnot.
Async functions have nothing to do with web workers or node child processes - unlike those, they are not a solution for parallel processing on multiple threads.
An async function is just¹ syntactic sugar for a function returning a promise .then() chain.
async function example() {
  await delay(1000);
  console.log("waited.");
}
is just the same as
function example() {
  return Promise.resolve(delay(1000)).then(() => {
    console.log("waited.");
  });
}
These two are virtually indistinguishable in their behaviour. The semantics of await are specified in terms of promises, and every async function does return a promise for its result.
¹ The syntactic sugar gets a bit more elaborate in the presence of control structures such as if/else or loops, which are much harder to express as a linear promise chain, but it's still conceptually the same.
Are such functions threaded in the same way as a function executed through setInterval is?
Yes, the asynchronous parts of async functions run as (promise) callbacks on the standard event loop. The delay in the example above would be implemented with the normal setTimeout, wrapped in a promise for easy consumption:
function delay(t) {
  return new Promise(resolve => {
    setTimeout(resolve, t);
  });
}
I want to add my own answer to my question, with the understanding I gathered through all the other people's answers:
Ultimately, everything but web workers is glorified callbacks. Code in async functions, functions called through promises, functions called through setInterval and such all gets executed in the main thread with a mechanism akin to context switching. No parallelism exists at all.
True parallel execution with all its advantages and pitfalls, pertains to webworkers and webworkers alone.
(pity - I thought with "async functions" we finally got streamlined and "inline" threading)
Here is a way to call standard functions as workers, enabling true parallelism. It's an unholy hack written in blood with help from satan, and probably there are a ton of browser quirks that can break it, but as far as I can tell it works.
[constraints: the function header has to be as simple as function f(a,b,c) and if there's any result, it has to go through a return statement]
function Async(func, params, callback)
{
    // ACQUIRE ORIGINAL FUNCTION'S CODE
    var text = func.toString();

    // EXTRACT ARGUMENTS
    var args = text.slice(text.indexOf("(") + 1, text.indexOf(")"));
    args = args.split(",");
    args = args.map(arg => arg.trim());

    // ALTER FUNCTION'S CODE:
    // 1) DECLARE ARGUMENTS AS VARIABLES
    // 2) REPLACE RETURN STATEMENTS WITH THREAD POSTMESSAGE AND TERMINATION
    var body = text.slice(text.indexOf("{") + 1, text.lastIndexOf("}"));
    for(var i = 0, c = params.length; i < c; i++) body = "var " + args[i] + " = " + JSON.stringify(params[i]) + ";" + body;
    body = body + " self.close();";
    body = body.replace(/return\s+([^;]*);/g, 'self.postMessage($1); self.close();');

    // CREATE THE WORKER FROM FUNCTION'S ALTERED CODE
    var code = URL.createObjectURL(new Blob([body], {type:"text/javascript"}));
    var thread = new Worker(code);

    // WHEN THE WORKER SENDS BACK A RESULT, CALLBACK AND TERMINATE THE THREAD
    thread.onmessage =
        function(result)
        {
            if(callback) callback(result.data);
            thread.terminate();
        }
}
So, assuming you have this potentially cpu intensive function...
function HeavyWorkload(nx, ny)
{
    var data = [];
    for(var x = 0; x < nx; x++)
    {
        data[x] = [];
        for(var y = 0; y < ny; y++)
        {
            data[x][y] = Math.random();
        }
    }
    return data;
}
...you can now call it like this:
Async(HeavyWorkload, [1000, 1000],
function(result)
{
console.log(result);
}
);
I have a question on the following code below (Source: https://blog.risingstack.com/node-js-at-scale-understanding-node-js-event-loop/):
'use strict'
const express = require('express')
const superagent = require('superagent')
const app = express()

app.get('/', sendWeatherOfRandomCity)

function sendWeatherOfRandomCity (request, response) {
  getWeatherOfRandomCity(request, response)
  sayHi()
}

const CITIES = [
  'london',
  'newyork',
  'paris',
  'budapest',
  'warsaw',
  'rome',
  'madrid',
  'moscow',
  'beijing',
  'capetown',
]

function getWeatherOfRandomCity (request, response) {
  const city = CITIES[Math.floor(Math.random() * CITIES.length)]
  superagent.get(`wttr.in/${city}`)
    .end((err, res) => {
      if (err) {
        console.log('O snap')
        return response.status(500).send('There was an error getting the weather, try looking out the window')
      }
      const responseText = res.text
      response.send(responseText)
      console.log('Got the weather')
    })

  console.log('Fetching the weather, please be patient')
}

function sayHi () {
  console.log('Hi')
}

app.listen(3000);
I have these questions:
When the superagent.get(wttr.in/${city}) call in the getWeatherOfRandomCity method makes a web request to http://wttr.in/sf for example, that request will be placed on the task queue and not on the main call stack, correct?
If there are methods on the main call stack (i.e. the main call stack isn't empty), the end event attached to superagent.get(wttr.in/${city}).end(...) (which will be pushed to the task queue) will not get called until the main call stack is empty, correct? In other words, on every tick of the event loop, it takes one item from the task queue?
Let's say two requests to localhost:3000/ come in one after another. The first request will push sendWeatherOfRandomCity onto the stack, then getWeatherOfRandomCity onto the stack, then the web request superagent.get(wttr.in/${city}).end(...) will be placed on the background queue, then console.log('Fetching the weather, please be patient'), then sendWeatherOfRandomCity will pop off the stack, and finally sayHi() will be pushed onto the stack, print "Hi" and pop off the stack; finally, the end event attached to superagent.get(wttr.in/${city}).end(...) will be called from the task queue, since the main call stack will be empty. Now, when the second request comes in, it will push all of those same things onto the main call stack as the first request did, but will the end handler from the first request (still in the task queue) run first, or will the stuff pushed onto the main call stack by the second web request run first?
When you make an HTTP request, libuv sees that you are attempting to make a network request. Neither libuv nor Node has any code to handle all of the operations that are involved with a network request. Instead, libuv delegates the request making to the underlying operating system.
It's actually the kernel, the essential part of our operating system, that does the real network request work. Libuv is used to issue the request and then it just waits on the operating system to emit a signal that some response has come back to the request. Because libuv is delegating the work to the operating system, the operating system itself decides whether to make a new thread or not, or just generally how to handle the entire process of making the request. Each OS has a different mechanism for this: on Linux it is epoll, on macOS it is called kqueue, and on Windows it is called GetQueuedCompletionStatusEx.
There are 6 phases in the event loop, and one of them is the I/O poll. The phases run in a fixed order, and number 1 is always timers. When the time is up (or the event is done), the timer's callback is moved to the event queue. Then the event loop checks whether the call stack is available. The call stack is where functions go to execute. You can do one thing at a time, and the call stack enforces that we can only have one function on the top of the call stack; that is the thing we're doing. There's no way to execute two things at the same time in the JAVASCRIPT RUNTIME.
If the call stack is empty, meaning the main() function has been removed, the event loop will push the timer function onto the call stack and your function will be executed. None of the async callbacks are EVER going to run before the main function is done.
So when it is time for the I/O poll phase, which handles incoming data and connections, your function that handles the fetching will be executed, following the same path the timer function followed.
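As a minimal illustration of the "call stack first, queue second" rule (not tied to the express example): whatever synchronous code is currently on the call stack always finishes before any queued callback runs.

setTimeout(() => console.log('queued callback'), 0);

for (let i = 0; i < 3; i++) {
  console.log('synchronous work', i);
}
// prints the three "synchronous work" lines first, then "queued callback"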
I am a newbie to Node.js programming and hence want to understand the core concepts and practices properly. AFAIK Node.js has non-blocking I/O, allowing all disk and other I/O operations to run asynchronously while its JS runs in a single thread, managing resources and execution paths using the event loop. As suggested in many places, developers are advised to write custom functions/methods using the callback pattern, e.g.
function processData(inputData, cb){
  // do some computation and other stuff
  if (err) {
    cb(err, null);
  } else {
    cb(null, result);
  }
}

callback = function(err, result){
  // check error and handle
  // if not error then grab result and do some stuff
}

processData(someData, callback)

// checking whether execution is async or not
console.log('control reached at the end of block');
If I run this code, everything runs synchronously, printing the console message last. What I believed was that the console message would print first of all, followed by the execution of the processData function code, which would in turn invoke the callback function. And if it happened this way, the event loop would be unblocked and would only execute the response to a request when the final response was made ready by the 'callback' function.
My question is: is this execution sequence alright as per Node's nature, or am I doing something wrong, or missing an important concept?
Any help would be appreciated from you guys!!!
JavaScript (just as pretty much any other language) runs sequentially. If you write
var x = 1;
x *= 2;
x += 1;
you expect the result to be 3, not 4. That would be pretty bad. The same goes with functions. When you write
foo();
bar();
you expect them to run in the very same order. This doesn't change when you nest functions:
var foo = function() {};

var bar = function(clb) {
  clb();
  foo();
};

var callback = function() {
};

bar(callback);
The expected execution sequence is bar -> callback -> foo.
So the biggest damage people do on the internet is that they put an equals sign between asynchronous programming and the callback pattern. This is wrong. There are many synchronous functions using the callback pattern, e.g. Array.prototype.forEach().
What really happens is that NodeJS has this event loop under the hood. You can tell it to schedule something to be run later by calling one of several special functions: setTimeout, setInterval, process.nextTick, to name a few. All I/O also schedules some things to be done on the event loop. Now if you do
var foo = function() {};

var bar = function(clb) {
  process.nextTick(clb); // <-- async here
  foo();
};

var callback = function() {
};

bar(callback);
The expected execution sequence is bar -> (schedule callback) -> foo, and then callback will fire at some point later (i.e. when all other queued tasks are processed).
That's pretty much how it works.
You are writing synchronous code all the way down and expecting it to run asynchronously. Just because you are using callbacks does not mean it's an asynchronous operation. You should try using timers or I/O APIs within your functions to see Node.js's non-blocking, asynchronous pattern in action. I hope The Node.js Event Loop will help you to get started.
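A sketch of what that suggestion looks like in the question's own example (the body of processData is a stand-in): defer the callback with a scheduling API such as setImmediate, and the trailing console.log runs first.

function processData(inputData, cb) {
  setImmediate(() => {            // or setTimeout(..., 0) / process.nextTick
    if (!inputData) {
      cb(new Error('no input'), null);
    } else {
      cb(null, inputData.length); // stand-in for the real computation result
    }
  });
}

processData('someData', function (err, result) {
  if (err) return console.error(err);
  console.log('got result:', result);
});

console.log('control reached at the end of block'); // now prints first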