For example, I found an API library that is based on promises, and I need to issue API requests using this library at some interval, infinitely (like a usual back-end loop). Each API request is actually a chain of promises.
So, if I write a function like:
function r(){
    return api
        .call(api.anotherCall)
        .then(api.anotherCall)
        .then(api.anotherCall)
        ...
        .then(r)
}
Will it cause a stack overflow?
The solution I came up with is to use setTimeout to call r recursively.
function r(){
    return api
        .call(api.anotherCall)
        .then(api.anotherCall)
        .then(api.anotherCall)
        .then(() => { setTimeout(r, 0) })
}
So setTimeout will actually call r only when the call stack is empty.
Is this a good solution, or is there some standard way of calling promises recursively?
Will this cause a stack overflow?
No, it will not. Per the promise specification, .then() waits for the stack to completely unwind and is then called after the stack is clear (essentially on the next tick of the event loop). So, .then() is already called asynchronously after the current event is done processing and the stack is unwound. You do not have to use setTimeout() to avoid stack build-up.
Your first code example will not have any stack build-up or stack overflow, no matter how many times you repeat it.
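You can see this asynchrony directly in a minimal, self-contained sketch; the relative order of the two log lines is exactly what the spec guarantees:
// .then() handlers are queued as jobs (microtasks) and only run
// after the currently executing synchronous code has finished.
Promise.resolve("value").then((v) => console.log("then handler:", v));
console.log("synchronous code after .then()");
// Logged order:
//   synchronous code after .then()
//   then handler: value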
In the Promises/A+ specification, section 2.2.4 says this:
2.2.4 onFulfilled or onRejected must not be called until the execution context stack contains only platform code. [3.1].
And, "platform code" is defined here in 3.1:
“platform code” means engine, environment, and promise implementation code. In practice, this requirement ensures that onFulfilled and onRejected execute asynchronously, after the event loop turn in which then is called, and with a fresh stack. This can be implemented with either a “macro-task” mechanism such as setTimeout or setImmediate, or with a “micro-task” mechanism such as MutationObserver or process.nextTick. Since the promise implementation is considered platform code, it may itself contain a task-scheduling queue or “trampoline” in which the handlers are called.
The ES6 promise specification uses different words, but produces the same effect. In ES6, a promise's .then() works by enqueuing a job and then letting that job get processed; the job only gets processed when no other code is running and the stack is empty.
This is how running such a job is described in the ES6 spec:
A Job is an abstract operation that initiates an ECMAScript computation when no other ECMAScript computation is currently in progress. A Job abstract operation may be defined to accept an arbitrary set of job parameters.
Execution of a Job can be initiated only when there is no running execution context and the execution context stack is empty. A PendingJob is a request for the future execution of a Job. A PendingJob is an internal Record whose fields are specified in Table 25. Once execution of a Job is initiated, the Job always executes to completion. No other Job may be initiated until the currently running Job completes. However, the currently running Job or external events may cause the enqueuing of additional PendingJobs that may be initiated sometime after completion of the currently running Job.
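For illustration, here is a self-contained version of the same pattern, with a hypothetical fakeApiCall() standing in for the real API calls; it loops forever without the stack ever growing:
// Hypothetical stand-in for an API call: resolves on the next timer tick.
function fakeApiCall() {
    return new Promise((resolve) => setTimeout(resolve, 0));
}
let count = 0;
function r() {
    return fakeApiCall()
        .then(fakeApiCall)
        .then(fakeApiCall)
        .then(() => {
            if (++count % 10000 === 0) console.log(count, "cycles, stack still flat");
        })
        .then(r); // each handler starts on a fresh, empty stack, so no overflow
}
r();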
Related
In Javascript, the callback (of the awaited async function) runs only when the call stack is empty. If there are multiple callbacks, then they are queued (in the event loop queue) such that only one callback runs in its entirety before the next callback gets a chance to run.
In the case of C# (assuming ConfigureAwait is not set to false - that is, the callback will run in the same thread; also assuming there is a synchronization context [WinForms]):
Does the callback run immediately, without waiting for the call stack to be empty?
Suppose there are multiple callbacks queued; while one callback is running and another callback becomes ready for execution, does the running callback get suspended so the new callback can run, or do they run sequentially?
The answer is that it depends on the current SynchronizationContext and, if that is null, the current TaskScheduler.
When the awaited Task completes, and you have configured the Awaitable to capture the context and recover on it, then almost certainly that continuation cannot continue immediately (the thread is likely busy executing something else). All that can be done is to schedule the continuation for later execution on the current SynchronizationContext, which is done by calling SynchronizationContext.Post().
How this Post method is implemented depends on the SynchronizationContext (which is abstract). If you are using WPF, the SynchronizationContext is a DispatcherSynchronizationContext. There, Post is implemented by a call to Dispatcher.BeginInvoke() with DispatcherPriority.Normal.
In other words, your continuation gets queued on the dispatcher for later execution with normal priority. Other delegates may have higher priority (e.g. DispatcherPriority.Input), but eventually your delegate gets executed on the dispatcher.
Two things to notice:
BeginInvoke simply queues your delegate. It does not execute it right away. In other words, queuing your delegate is really fast.
The above explanation should illustrate why async-await is just syntactic sugar. You could achieve all these calls yourself, but the compiler does it for you, making your code flow more naturally.
I'm struggling to grasp the concept of asynchronousness. Is the following roughly correct about asynchronous operations?
A problem can occur if a piece of code takes a long time to complete. This is because i) it stops code below from running and it might be nice to run this whilst the hard code loads in the background. And ii) Indeed, JS might try to execute the code below before the hard code’s finished. If the code below relies on the hard code, that’s a problem.
A solution is: if an operation takes a long time to complete, you want to process it in a separate thread while the original thread is processed. Just make sure the main thread doesn't reference things that the asynchronous operation returns. JS employs event queues for this solution. Asynchronous operations are executed in an event queue which executes after the main thread.
But even the event queue can suffer from the same problem as the main thread. If fetch1, which is positioned above fetch2, takes a long time to return a promise, and fetch2 doesn’t, JS might start executing fetch2 before executing fetch1. This is where Promise.all is useful because it won't proceed with the next step in the asynchronous operation until both fetch1 & fetch2’s promises are resolved.
On a separate note, I’ve read when chaining .then, that counts as one asynchronous operation, so we can always guarantee that the subsequent .then will only execute when the .then before it has executed/resolved its promise.
Almost correct but not quite.
If you are talking about "asynchronousness", the word in the English language, then it just means that things can happen out of order. This concept is used in a lot of languages, including with multithreading in Java and C/C++.
If you are talking about the specific concept of asynchronousness as it relates to node.js or asynchronous I/O in C/C++ then you do have some misunderstandings in how this works at the low level.
A problem can occur if a piece of code takes a long time to complete. This is because i) it stops code below from running and it might be nice to run this whilst the hard code loads in the background. And ii) Indeed, JS might try to execute the code below before the hard code’s finished. If the code below relies on the hard code, that’s a problem.
When talking about javascript or asynchronous I/O in C/C++ (where javascript got its asynchronousness from) this is not true.
What actually happens is that waiting for something to happen may take a long time. Instead of waiting, why not tell the OS to execute some code (your callback) once that thing happens?
At the OS level most modern operating systems have APIs that let you tell them to wake your process up when something happens. That thing may be a keyboard event, a mouse event, an I/O event (from disk or network), a system reconfiguration event (e.g. changing monitor resolution) etc.
Most traditional languages implement blocking I/O. What happens is that when you try to read something from disk or network your process goes to sleep immediately and the OS will wake it up again when the data arrives:
Traditional blocking I/O
time
│
├────── your code doing stuff ..
├────── read_data_from_disk() ───────────────────┐
┆ ▼
: OS puts process to sleep
.
. other programs running ..
.
: data arrives ..
┆ OS wakes up your process
├────── read_data_from_disk() ◀──────────────────┘
├────── your program resume doing stuff ..
▼
This means that your program can only wait for one thing at a time, which means that most of the time your program is not using the CPU. The traditional solution for listening to more events is multithreading. Each thread will separately block on its event, but your program can spawn a new thread for each event it is interested in.
It turns out that naive multithreading where each thread waits for one event is slow. Also it ends up consuming a lot of RAM especially for scripting languages. So this is not what javascript does.
Note: Historically the fact that javascript uses a single thread instead of multithreading is a bit of an accident. It was just the result of decisions made by the team that added progressive JPEG rendering and GIF animations to early browsers. But by happy coincidence this is exactly what makes things like node.js fast.
What javascript does instead is wait for multiple events instead of waiting for a single event. All modern OSes have APIs that let you wait for multiple events. They range from kqueue on BSD and Mac OSX to poll/epoll on Linux to overlapped I/O on Windows to the cross-platform POSIX select() system call.
The way javascript handles external events is something like the following:
Non-blocking I/O (also known as asynchronous I/O)
time
│
├────── your code doing stuff ..
├────── read_data_from_disk(read_callback) ───▶ javascript stores
│ your callback and
├────── your code doing other stuff .. remember your request
│
├────── wait_for_mouse_click(click_callback) ─▶ javascript stores
│ your callback and
├────── your code doing other stuff .. remember your request
│
├────── you finish doing stuff.
┆ end of script ─────────────▶ javascript now is free to process
┆ pending requests (this is called
┆ "entering the event loop").
┆ Javascript tells the OS about all the
: events it is interested in and waits..
. │
. └───┐
. ▼
. OS puts process to sleep
.
. other programs running ..
.
. data arrives ..
. OS wakes up your process
. │
. ┌───┘
: ▼
┆ Javascript checks which callback it needs to call
┆ to handle the event. It calls your callback.
├────── read_callback() ◀────────────────────┘
├────── your program resume executing read_callback
▼
The main difference is that synchronous multithreaded code waits for one event per thread. Asynchronous code, whether single-threaded like javascript or multithreaded like Nginx or Apache, waits for multiple events per thread.
Note: Node.js handles disk I/O in separate threads but all network I/O is processed in the main thread. This is mainly because asynchronous disk I/O APIs are incompatible across Windows and Linux/Unix. However, it is possible to do disk I/O in the main thread; the Tcl language is one example that does asynchronous disk I/O in the main thread.
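For instance, in Node.js the asynchronous fs API hands the read off (disk I/O goes to the libuv thread pool) and calls you back later while the rest of the script keeps running; the file path below is just an example:
const fs = require("fs");
// Non-blocking read: the callback runs later, once the data has arrived.
fs.readFile("/etc/hostname", "utf8", (err, data) => {
    if (err) throw err;
    console.log("async read finished:", data.trim());
});
console.log("this line runs before the file data arrives");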
A solution is: if an operation takes a long time to complete, you want to process it in a separate thread while the original thread is processed.
This is not what happens with asynchronous operations in javascript with the exception of web workers (or worker threads in Node.js). In the case of web workers then yes, you are executing code in a different thread.
But even the event queue can suffer from the same problem as the main thread. If fetch1, which is positioned above fetch2, takes a long time to return a promise, and fetch2 doesn’t, JS might start executing fetch2 before executing fetch1
This is not what is happening. What you are doing is as follows:
fetch(url_1).then(fetch1); // tell js to call fetch1 when this completes
fetch(url_2).then(fetch2); // tell js to call fetch2 when this completes
It is not that js "might" start executing. What happens with the code above is that both fetches are executed synchronously. That is, the first fetch strictly happens before the second fetch.
However, all the above code does is tell javascript to call the functions fetch1 and fetch2 back at some later time. This is an important lesson to remember. The code above does not execute the fetch1 and fetch2 functions (the callbacks). All you are doing is telling javascript to call them when the data arrives.
If you do the following:
fetch(url_1).then(fetch1); // tell js to call fetch1 when this completes
fetch(url_2).then(fetch2); // tell js to call fetch2 when this completes
while (1) {
    console.log('wait');
}
Then fetch1 and fetch2 will never get executed.
I'll pause here to let you ponder on that.
Remember how asynchronous I/O is handled. The (often called asynchronous) I/O function calls don't actually cause the I/O to happen immediately. All they do is remind javascript that you want something (a mouse click, a network request, a timeout etc.) and that you want javascript to execute your function later, when that thing completes. Asynchronous I/O is only processed at the end of your script, when there is no more code to execute.
This does mean that you cannot use an infinite while loop in a javascript program. Not because javascript does not support it, but because there is a built-in while loop that surrounds your entire program: this big while loop is called the event loop.
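To make that idea concrete, here is a toy, runnable model of such a loop (toyRegister, toyEventLoop and pendingCallbacks are made up for illustration; a real engine sleeps on OS events instead of draining a simple array):
const pendingCallbacks = [];
function toyRegister(callback) {
    pendingCallbacks.push(callback);
}
function toyEventLoop() {
    // Each callback runs to completion, one at a time, on an empty stack.
    while (pendingCallbacks.length > 0) {
        const callback = pendingCallbacks.shift();
        callback();
    }
}
toyRegister(() => console.log("first callback"));
toyRegister(() => console.log("second callback"));
console.log("end of script");
toyEventLoop(); // in a real runtime this loop is implicit and runs after your script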
On a separate note, I’ve read when chaining .then, that counts as one asynchronous operation.
Yes, this is by design to avoid confusing people on when promises are processed.
If you are interested in how the OS handles all this without creating further threads, you may be interested in my answers to these related questions:
Is there any other way to implement a "listening" function without an infinite while loop?
node js - what happens to incoming events during callback excution
TLDR
If nothing else I'd like you to understand two things:
Javascript is a strictly synchronous programming language. Each statement in your code is executed strictly sequentially.
Asynchronous code in all languages (yes, including C/C++, Java, Python etc.) will call your callback at some later time. Your callback will not be called immediately. Asynchronousness is a function-call level concept.
It's not that javascript is anything special when it comes to asynchronousness*. It's just that most javascript libraries are asynchronous by default (though you can also write asynchronous code in any other language but their libraries are normally synchronous by default).
*Note: of course, things like async/await do make javascript more capable of handling asynchronous code.
Side note: Promises are nothing special. They are just a design pattern, not something built into javascript syntax. It is just that newer versions of javascript come with Promises as part of the standard library. You could always have used promises, even with very old versions of javascript, and in other languages (Java 8 and above, for example, have promises in their standard library but call them Futures).
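To illustrate the design-pattern point, here is a deliberately tiny, non-spec-compliant sketch (TinyPromise is made up; it ignores errors, chaining and the asynchrony guarantees, so don't use it for real code):
function TinyPromise(executor) {
    const callbacks = [];
    let settled = false;
    let value;
    // then() just stores a callback, or calls it immediately if we already have a value.
    this.then = function (onFulfilled) {
        if (settled) onFulfilled(value);
        else callbacks.push(onFulfilled);
        return this;
    };
    executor(function resolve(result) {
        settled = true;
        value = result;
        callbacks.forEach((cb) => cb(result));
    });
}
new TinyPromise((resolve) => setTimeout(() => resolve(42), 100))
    .then((v) => console.log("got", v));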
This page explains Node.js event loop very well (since it's by Node.js):
https://nodejs.dev/learn/the-nodejs-event-loop
A problem can occur if a piece of code takes a long time to complete. This is because i) it stops code below from running and it might be nice to run this whilst the hard code loads in the background. And ii) Indeed, JS might try to execute the code below before the hard code’s finished. If the code below relies on the hard code, that’s a problem.
Yes, though to reword: having a user wait for code to resolve is bad user experience rather than a runtime error. Still, a runtime error would indeed occur if a code block was dependent on a variable that returned undefined because it had yet to resolve.
A solution is: if an operation takes a long time to complete, you want to process it in a separate thread while the original thread is processed. Just make sure the main thread doesn't reference things that the asynchronous operation returns. JS employs event queues for this solution. Asynchronous operations are executed in an event queue which executes after the main thread.
Yes, but it is important to point out that these other threads are running in the browser or on an API server, not within your JS script. Also, async functions are still called in the main call stack, but their resolve is placed in either the job queue or the message queue.
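A small example of the two queues in action: the job queue (used by promise resolutions) is drained before the message queue (used by setTimeout), assuming a standard browser or Node.js environment:
setTimeout(() => console.log("message queue (setTimeout)"), 0);
Promise.resolve().then(() => console.log("job queue (promise)"));
console.log("main call stack");
// Logged order:
//   main call stack
//   job queue (promise)
//   message queue (setTimeout)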
But even the event queue can suffer from the same problem as the main thread. If fetch1, which is positioned above fetch2, takes a long time to return a promise, and fetch2 doesn’t, JS might start executing fetch2 before executing fetch1. This is where Promise.all is useful because it won't proceed with the next step in the asynchronous operation until both fetch1 & fetch2’s promises are resolved.
Yes, a promise's resolve is executed as soon as the current function in the call stack resolves. If one promise resolves before another, its resolve will be executed first even if it was called second. Also note that Promise.all doesn't change the resolve time, but rather returns the resolved values together in the order their promises were executed.
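For example, Promise.all keeps the results in the order of the input promises, regardless of which one settles first (the delays here are arbitrary):
const slow = new Promise((resolve) => setTimeout(() => resolve("slow"), 200));
const fast = new Promise((resolve) => setTimeout(() => resolve("fast"), 10));
Promise.all([slow, fast]).then((results) => {
    // "fast" settled first, but the results follow the input order:
    console.log(results); // ["slow", "fast"]
});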
On a separate note, I’ve read when chaining .then, that counts as one asynchronous operation, so we can always guarantee that the subsequent .then will only execute when the .then before it has executed/resolved its promise.
Yes, though the newer and cleaner syntax is async/await:
function A () {
    return new Promise((resolve, reject) => setTimeout(() => resolve("done"), 2000))
}

async function B () {
    try {
        console.log("waiting...");
        const result = await A();
        console.log(result);
    } catch (e) {
        console.log(e);
    }
}

B();
And the example below shows the Node.js call stack in action:
function A () {
    return new Promise((resolve, reject) => resolve(console.log("A")))
}

function B () {
    console.log("B");
    C();
}

function C () {
    console.log("C");
}

function D () {
    setTimeout(() => console.log("D"), 0);
}

B();
D();
A();
B gets called, C gets called, C resolves, B resolves, D gets called, setTimeout gets called but its callback is moved to the message queue, D resolves, A gets called, the promise executor runs and immediately resolves, A resolves, the call stack completes, and finally the message queue is processed - so the console logs B, C, A and then D.
Many of my friends, who make deep use of deferred/promise objects in their libraries, often tell me that using timers in one's own implementation of promises is evil.
They say it doesn't conform to A+: https://github.com/promises-aplus/promises-spec
And that many libraries, such as jQuery and others, don't use timers. So I've tried to find any timers in the jQuery sources that might relate to the promises implementation, but with no success:
https://github.com/jquery/jquery/blob/master/src/deferred.js
All right, but I've found some notes in the A+ description which have confused me about the use of timers in it:
From the Notes section:
Here "platform code" means engine, environment, and promise
implementation code. In practice, this requirement ensures that
onFulfilled and onRejected execute asynchronously, after the event
loop turn in which then is called, and with a fresh stack. This can be
implemented with either a "macro-task" mechanism such as setTimeout or
setImmediate, or with a "micro-task" mechanism such as
MutationObserver or process.nextTick. Since the promise implementation
is considered platform code, it may itself contain a task-scheduling
queue or "trampoline" in which the handlers are called.
So, did I understand correctly that A+ doesn't have strict rules about the use of timers, or does it?
Help me, I'm rather confused.
You are confusing the use of setTimeout with "setting a timer" - Promises/A+ implementations typically use setTimeout to guarantee asynchronous execution of handler functions, not to delay execution by some time period.
Promises/A+ guarantees that the fulfilled and rejected methods are called asynchronously, regardless of when the promise is fulfilled. One way to guarantee async execution in a browser JS environment is to wrap a function call in setTimeout with a timeout of zero (the default).
jQuery does not guarantee async execution of fulfilled/rejected callbacks (which is a major design flaw), so an async wrapper call is not required.
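A sketch of why that matters: with a compliant implementation, the handler never runs before the code that follows the .then() call, even when the promise is already settled, so the order below is always the same. A library that may call handlers synchronously would flip this order whenever the promise happened to be settled already, which makes code hard to reason about:
const alreadySettled = Promise.resolve("ready");
alreadySettled.then(() => console.log("handler"));
console.log("code after then()");
// Always logs:
//   code after then()
//   handler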
Using timers in general in your applications is a bad practice (other than scheduling tasks, that is).
You can never be sure how long an action will take. So you end up doing one of three things:
You either don't allocate enough time, in which case stuff breaks because some things you expected to happen haven't happened yet.
Or you allocate too much time, in which case your application is slow for no good reason.
Or, worst case, you allocate just barely enough time, which causes your application to sometimes break and sometimes work as expected.
I'm not sure about specs and stuff. But using timers and delays in your application in general is a bad idea.
Update
This is an update to the question below and should help in finding an answer.
Taking up the answer from torazaburo, who also quoted part of the prominent Javascript Promises/A+ definition, I want to update the question here.
The Promises/A+ specification suggests this in point 2.2.4:
onFulfilled or onRejected must not be called until the execution context stack contains only platform code. 3.1.
and further explains
Here “platform code” means engine, environment, and promise implementation code. In practice, this requirement ensures that onFulfilled and onRejected execute asynchronously, after the event loop turn in which then is called, and with a fresh stack. This can be implemented with either a “macro-task” mechanism such as setTimeout or setImmediate, or with a “micro-task” mechanism such as MutationObserver or process.nextTick. Since the promise implementation is considered platform code, it may itself contain a task-scheduling queue or “trampoline” in which the handlers are called.
The crucial point I want to highlight with this question is that the Promise implementation's own Javascript code is itself considered platform code, and is therefore allowed not to yield to the event loop in between resolving subsequent promises via calls to the associated onFulfilled/onRejected functions. This is good in Node.js (on the server), as it avoids unnecessarily relinquishing control back to the event loop (leaving the execution stack). But it creates a challenge in a browser: since the execution stack is not exited in between resolving a potentially large number of promises (which themselves can generate new promises), never leaving the execution stack and never yielding to the event loop causes the undesired blocking-script warning/problem in a browser.
The "trampoline" task-scheduling of the Promise implementation that causes this, however, need not necessarily refrain from handing execution back to the Javascript event loop from time to time. Such a feature would allow using Promises for heavier tasks. Such a Promise implementation for "long-running code" is what this question is asking for.
Clarification: the "excessive length" is not the individual length of one onFulfilled function, but the joining together of several such functions/callbacks as a result of the promise resolving process (when done in such a "trampoline" way). I am already aware that if one individual onFulfilled function is too long, this cannot be helped in any way by using any sort of Promise implementation.
The issue here is that the subsequent resolution of many promises (within one execution stack, and hence without handing back to the Javascript event loop) can provoke an excessively long stretch of Javascript code execution. In a browser this is bad (because of blocking).
The question
In Javascript, Promises allow you to deal with asynchronous programming tasks. Great!
There are already some implementations and libraries around: Q, WinJS or when.js, to name just a few.
Having looked at them, I see that they tackle some of the "special things" in Javascript asynchronous programming challenges.
Normally I perceive them to do this for promise resolution:
1. Go to the internal list of promises.
2. Check if a promise is fulfilled and run all the functions associated with it (via then(onFulfilled, onRejected)).
3. (In some cases we are done here.)
4. (In other cases there are still "pending" promises.)
Case (4) arises because having them (the remaining promises) fulfilled would require the currently running Javascript code (which is this very promise-resolution code) to stop running and allow the JS event loop to happen (i.e. asynchronous things like XHR requests, or user UI interaction). To make (4) work, the promise resolution normally schedules a recall (i.e. via setTimeout/setImmediate) and continues after the event loop has run, by which time some of the "pending" promises may have been settled (= rejected/fulfilled).
My worry is that steps 1 and 2 could be running for quite some time, only releasing execution to the event loop when it seems necessary in order to settle some of the "pending" promises. While this is "okay" in some cases (i.e. on the server/Node.js), it is quite problematic in a browser, because even though it would be no problem to release execution to the event loop and keep the UI non-blocking, this is not done in the promise implementations I have seen.
My question therefore is:
Do you know a promise implementation (a Javascript promise library) that takes care of this aspect:
to make "long-running code" non-blocking for the UI in the browser?
This would mean that the promise resolution voluntarily releases execution back to the event loop, so that CSS animations, user input and mouse interaction get enough attention and there will be no "Warning: Unresponsive script" message.
Any compliant promises implementation will not run the then functions synchronously, but rather only at the next tick. Therefore, your worry that "steps 1 and 2 could be running for quite some time" is unfounded.
From the Promises/A+ spec:
onFulfilled or onRejected must not be called until the execution context stack contains only platform code.
Here "platform code" means engine, environment, and promise implementation code. In practice, this requirement ensures that onFulfilled and onRejected execute asynchronously, after the event loop turn in which then is called, and with a fresh stack.
In other words, your formulation under 2) is incorrect. The promises implementation does not "run the associated functions", it schedules them.
This cannot help you--indeed there is no way to help you--if a handler itself is "long-running" code.
I think the solution could be to parse and step through that long-running JavaScript code, for example with https://github.com/NeilFraser/JS-Interpreter.
It will make the code even slower, but you could specify the priority:
// Assumes `myCode` holds the long-running source and `speed` is defined
// elsewhere; Interpreter comes from the JS-Interpreter library.
const myInterpreter = new Interpreter(myCode);

function nextStep() {
    // Run one interpreter step, then yield back to the event loop
    // before scheduling the next one, so the UI stays responsive.
    if (myInterpreter.step()) {
        window.setTimeout(nextStep, 100 / speed);
    }
}

nextStep();