Is using timers in a deferred/promises implementation evil? - javascript

Many of my friends who make heavy use of deferred/promise objects in their libraries often tell me that using timers in one's own implementation of them is evil.
They say it doesn't conform to Promises/A+: https://github.com/promises-aplus/promises-spec
They also say that many libraries, such as jQuery, don't use timers. So I tried to find any timers in the jQuery sources that might relate to its promise implementation, but with no success:
https://github.com/jquery/jquery/blob/master/src/deferred.js
All right, but I found a note in the A+ description that confused me about using timers:
From the Notes section:
Here "platform code" means engine, environment, and promise
implementation code. In practice, this requirement ensures that
onFulfilled and onRejected execute asynchronously, after the event
loop turn in which then is called, and with a fresh stack. This can be
implemented with either a "macro-task" mechanism such as setTimeout or
setImmediate, or with a "micro-task" mechanism such as
MutationObserver or process.nextTick. Since the promise implementation
is considered platform code, it may itself contain a task-scheduling
queue or "trampoline" in which the handlers are called.
So, did I understand correctly that A+ has no strict rule about using timers, or does it?
Help me, I'm rather confused.

You are confusing the use of setTimeout with "setting a timer" - Promises/A+ implementations typically use setTimeout to guarantee asynchronous execution of handler functions, not to delay execution by some time period.
Promises/A+ guarantees that the onFulfilled and onRejected handlers are called asynchronously, regardless of when the promise is fulfilled. One way to guarantee async execution in a browser JS environment is to wrap a function call in setTimeout with a timeout of zero (the default).
jQuery does not guarantee async execution of fulfilled/rejected callbacks (which is a major design flaw), so an async wrapper call is not required.
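For illustration, here is a minimal sketch of that idea. It is not jQuery's code and deliberately not a full Promises/A+ implementation; MiniPromise and its executor API are invented for the example. The point is that setTimeout(fn, 0) is used only to obtain a fresh stack, never to wait for a period of time.

// Minimal sketch of deferring handler execution; not a complete promise library.
function MiniPromise(executor) {
  var handlers = [];
  var state = 'pending';
  var value;

  function resolve(result) {
    if (state !== 'pending') return;
    state = 'fulfilled';
    value = result;
    // setTimeout(fn, 0) only guarantees a fresh stack; it adds no meaningful delay.
    setTimeout(function () {
      handlers.forEach(function (h) { h(value); });
    }, 0);
  }

  this.then = function (onFulfilled) {
    if (state === 'fulfilled') {
      setTimeout(function () { onFulfilled(value); }, 0);
    } else {
      handlers.push(onFulfilled);
    }
    return this; // chaining semantics omitted for brevity
  };

  executor(resolve);
}

// Usage: "after" always logs before the handler, even though
// the promise is resolved synchronously.
new MiniPromise(function (resolve) { resolve(42); })
  .then(function (v) { console.log('handler:', v); });
console.log('after');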

Using timers in general in your applications is bad practice (other than for scheduling tasks, that is).
You can never be sure how long an action will take, so you end up doing one of three things:
You allocate too little time, in which case things break because some work you expected to have finished hasn't finished yet.
Or you allocate too much time, in which case your application is slow for no good reason.
Or, worst case, you allocate just about enough time, which causes your application to sometimes break and sometimes work as expected.
I'm not sure about the specs. But using timers and delays in your application is, in general, a bad idea.
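To make that concrete with a hedged example (fetchData and renderResult are hypothetical names, not from any particular library): guessing a delay either breaks or wastes time, while chaining on the promise runs exactly when the work has finished.

// fetchData and renderResult are hypothetical functions, used only for illustration.

// Guessing how long the work takes:
fetchData();
setTimeout(function () {
  // If fetchData took longer than 500 ms, this runs too early and breaks;
  // if it took less, the application just waited for no reason.
  renderResult();
}, 500);

// Reacting when the work is actually done:
fetchData().then(function (result) {
  renderResult(result);
});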

Related

May this long polling code cause stackoverflow? [duplicate]

Recursive promises can cause stack overflow?

For example, I found an API library that is based on promises, and I need to issue API requests using this library at some interval, infinitely (like a usual back-end loop). These API requests are actually chains of promises.
So, if I write a function like:
function r(){
  return api
    .call(api.anotherCall)
    .then(api.anotherCall)
    .then(api.anotherCall)
    ...
    .then(r)
}
Will it cause stack overflow?
The solution I came up with is to use setTimeout for the recursive call of r.
function r(){
  return api
    .call(api.anotherCall)
    .then(api.anotherCall)
    .then(api.anotherCall)
    .then(() => { setTimeout(r, 0) })
}
So setTimeout will actually call r only when the call stack is empty.
Is this a good solution, or is there some standard way of calling promises recursively?
Will this cause a stack overflow?
No, it will not. Per the promise specification, .then() handlers wait for the stack to completely unwind and are called only after the stack is clear (essentially on the next tick of the event loop). So the .then() handler is already called asynchronously after the current event is done processing and the stack is unwound. You do not have to use setTimeout() to avoid stack build-up.
Your first code example will not have any stack build-up or stack overflow, no matter how many times you repeat it.
In the Promises/A+ specification, section 2.2.4 says this:
2.2.4 onFulfilled or onRejected must not be called until the execution context stack contains only platform code. [3.1].
And, "platform code" is defined here in 3.1:
“platform code” means engine, environment, and promise implementation code. In practice, this requirement ensures that onFulfilled and onRejected execute asynchronously, after the event loop turn in which then is called, and with a fresh stack. This can be implemented with either a “macro-task” mechanism such as setTimeout or setImmediate, or with a “micro-task” mechanism such as MutationObserver or process.nextTick. Since the promise implementation is considered platform code, it may itself contain a task-scheduling queue or “trampoline” in which the handlers are called.
The ES6 promise specification uses different words, but produces the same effect. In ES6, a promise's .then() is performed by enqueuing a job, and that job is only processed when no other code is running and the stack is empty.
This is how running such a job is described in the ES6 spec:
A Job is an abstract operation that initiates an ECMAScript computation when no other ECMAScript computation is currently in progress. A Job abstract operation may be defined to accept an arbitrary set of job parameters.
Execution of a Job can be initiated only when there is no running execution context and the execution context stack is empty. A PendingJob is a request for the future execution of a Job. A PendingJob is an internal Record whose fields are specified in Table 25. Once execution of a Job is initiated, the Job always executes to completion. No other Job may be initiated until the currently running Job completes. However, the currently running Job or external events may cause the enqueuing of additional PendingJobs that may be initiated sometime after completion of the currently running Job.
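To see this concretely, here is a small runnable sketch of the same pattern. The api object below is a hypothetical stand-in (each call just resolves after a short delay), and the loop is capped so the example terminates:

// Hypothetical stand-in for the API library: each call resolves asynchronously.
const api = {
  anotherCall: () => new Promise(resolve => setTimeout(resolve, 10)),
  call: fn => fn()
};

let iterations = 0;

function r() {
  return api
    .call(api.anotherCall)
    .then(api.anotherCall)
    .then(api.anotherCall)
    .then(() => {
      // Each .then() handler starts on a fresh, empty stack,
      // so calling r again does not grow the call stack.
      if (++iterations < 1000) return r();
      console.log('done after', iterations, 'iterations');
    });
}

r();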

Performance impact of multiple callback functions after sending the response

Can anyone help me understand how Node.js works and the performance impact of the scenario below?
a. A request is made to the REST API endpoint "/api/XXX". In this request, I return the response after triggering an asynchronous function, like below.
function update(req, res) {
  executeUpdate(req.body); // asynchronous function
  res.send(200);
}
b. Here I send the response back without waiting for the function to complete, and that function executes four MongoDB updates on different collections.
Questions:
1. As I've read, Node.js works on a single thread, so how is this asynchronous function executed?
2. If there are multiple requests for the same endpoint, what will the performance impact on Node.js be?
3. How exactly does Node.js handle the asynchronous function of each request? Since Node.js runs on a single thread, is there any possibility of a memory issue?
In short, it depends on what you are doing in your function.
Synchronous functions in Node are executed on the main thread; they will not be preempted, and they run until the end of the function or until a return statement is encountered.
Asynchronous operations, on the other hand, are taken off the main thread: the underlying work completes elsewhere (for example on a libuv worker thread or in the OS), and only then are their callbacks executed.
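As a sketch of how this plays out for the handler in the question (assuming executeUpdate returns a promise, which the original post does not state): the response goes out immediately, the MongoDB updates finish later on their own, and any errors can only be logged at that point.

// Sketch of the flow in the question; executeUpdate is assumed to return a promise.
function update(req, res) {
  // The call starts the I/O and returns immediately; the updates
  // complete later, when their callbacks run on the main thread.
  executeUpdate(req.body).catch(function (err) {
    // The response has already been sent, so errors can only be logged
    // (or reported elsewhere); they can no longer affect the HTTP reply.
    console.error('background update failed:', err);
  });
  res.send(200); // responds without waiting for the updates
}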
There are, I think, two different parts to the answer to your question.
1. Actual performance, which includes CPU and memory usage. It also obviously includes speed.
2. Understanding, as the previous poster said, sync and async.
In dealing with #1, actual performance, really the only way to assess it is to create or use a testing environment for your code. In a rudimentary way, depending on the system you are using, you can view some of the information in top (Linux), or Glances will give you a basic idea of performance, but in order to know exactly what is going on you will need to apply one of the various testing environments or write your own tests.
Approaching #2: it is not only sync and async processes you have to understand, but also the ramifications of both. This includes the use of callbacks and promises.
It really all depends on the particular process you are attempting to code. For instance, many Node programmers seem to prefer using promises when they make calls to MongoDB, especially when they need more than one call based on the return of the cursor.
There is really no written-in-stone formula for when to use sync or async processes. Avoiding callback hell is something all Node programmers try to do. Catching errors, etc., is something you always need to be careful about. As I said, some programmers will always opt for promises or async when dealing with returned data. The well-known async library coupled with Bluebird is the choice of many for certain scenarios.
All that being said, and remembering that your question is general and therefore so is my answer: in order to properly know the implications for your performance, in memory, CPU and speed, as well as in returning information or passing it to the browser, it is a good idea to understand, as best as you can, sync, async, callbacks, promises and error catching. You will discover certain situations are great for sync (and much faster), while others do require async and/or promises.
Hope this helps somewhat.

Promise.resolve().then vs setImmediate vs nextTick

NodeJS 0.11 as well as io.js and the Node 0.12 branch all ship with native promises.
Native promises have a .then method which always executes on a future event loop cycle.
So far I've been using setImmediate to queue things to the next iteration of the event loop ever since I switched from nextTick:
setImmediate(deferThisToNextTick); // My NodeJS 0.10 code
process.nextTick(deferThisToNextTick); // My NodeJS 0.8 code
Since we now have a new way to do this:
Promise.resolve().then(deferThisToNextTick);
Which should I use? Also, does Promise.resolve().then act like setImmediate or like nextTick with regard to code running before or after the event loop?
Using Promise.resolve().then has no advantage over nextTick. It runs on the same queue but has a slightly higher priority; that is, a promise handler can prevent a nextTick callback from ever running, while the opposite is not possible. This behaviour is an implementation detail and should not be relied on.
Promise.resolve().then is obviously slower (a lot, I think), because it creates two promises which will be thrown away.
You can find extensive implementation info here: https://github.com/joyent/node/pull/8325
The most important part: Promise.resolve().then is like nextTick and not like setImmediate. Using it in place of setImmediate can change your code's behaviour drastically.
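For a quick way to observe the ordering yourself, here is a small Node.js sketch. As noted above, the relative order of process.nextTick and promise handlers is an implementation detail; setImmediate always runs later, as a macro-task on a subsequent turn of the event loop.

// Ordering sketch (Node.js): logs reveal the relative scheduling of the three mechanisms.
setImmediate(() => console.log('setImmediate (macro-task, later event loop turn)'));
Promise.resolve().then(() => console.log('promise .then (micro-task)'));
process.nextTick(() => console.log('process.nextTick (micro-task-like queue)'));
console.log('synchronous code'); // always printed first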
I'm not going to answer the bolded part about technicalities, but only the question
Which should I use?
I don't think there is any reason to use Promise.resolve().then() unless you are interested in a promise for the result of your asynchronously executed function. Of course, if you are, then this is far superior to dealing with callback hell or making a new promise from setTimeout or nextTick.
There's also a second technical difference, more important than the timing: promises swallow exceptions, which you probably don't want. So, as vkurchatkin mentioned, don't create promises only to throw them away. Not only because it's slower, but because it makes your code less readable and your app more error-prone.
Promise.resolve() creates an already-resolved promise straight away (synchronously), while setImmediate explicitly defers work until right after the execution of the current event.

Javascript Promises library to make "long-running-code-non-blocking-UI" in browser?

Update
This is an update to the question below and should help in finding an answer.
Taking up the answer from torazaburo, who also quoted part of the prominent JavaScript Promises/A+ definition, I want to update the question here.
The Promises/A+ specification says this in point 2.2.4:
onFulfilled or onRejected must not be called until the execution
context stack contains only platform code. 3.1.
and further explains
Here “platform code” means engine, environment, and promise
implementation code. In practice, this requirement ensures that
onFulfilled and onRejected execute asynchronously, after the event
loop turn in which then is called, and with a fresh stack. This can be
implemented with either a “macro-task” mechanism such as setTimeout or
setImmediate, or with a “micro-task” mechanism such as
MutationObserver or process.nextTick. Since the promise implementation
is considered platform code, it may itself contain a task-scheduling
queue or “trampoline” in which the handlers are called.
The crucial point I am aiming at with this question is that the promise implementation's JavaScript code is itself considered platform code, which allows it not to yield to the event loop in between resolving subsequent promises via calls to the associated onFulfilled/onRejected functions. This is good in Node.js (server), as it avoids unnecessarily relinquishing control back to the event loop (leaving the execution stack), but it creates a challenge in a browser: since the execution stack is not exited in between resolving a potentially large number of promises (which themselves can generate new promises), not leaving the execution stack and not yielding to the event loop causes, in a browser, the undesired blocking-script warning/problem.
The "trampoline" task scheduling of the promise implementation that causes this need not, however, refrain from handing execution back to the JavaScript event loop from time to time. Such a feature would allow promises to be used for heavier tasks. Such an implementation of promises for "long-running code" is what this question is asking for.
Clarification: The "excessive length" is not the individual length of an onFulfilled function, but the joining together of several of those functions/callbacks as a result of the promise resolution process (when done in such a "trampoline" way). I am already aware that if one individual onFulfilled function is too long, this cannot be helped in any way by any sort of promise implementation.
The point here is that the subsequent resolution of x promises (within one execution stack, and hence without handing control back to the JavaScript event loop) can provoke an excessively long stretch of JavaScript code execution. In a browser this is bad (because it blocks).
The question
In JavaScript, promises let you deal with asynchronous programming tasks. Great!
There are already some implementations and libraries around: Q, WinJS or when.js, to name just a few.
Having looked at them, I see that they tackle some of the "special things" among JavaScript's asynchronous programming challenges.
Normally I perceive them to do this for promise resolution:
1. Go to the internal list of promises.
2. Check whether each promise is fulfilled and run all the associated functions (attached via then(onFulfilled, onRejected)).
3. (In some cases we are done here.)
4. (In other cases there will still be "pending" promises.)
Case (4) arises because fulfilling the remaining promises would require the currently running JavaScript code (which is this very promise-resolution code) to stop and allow the JS event loop to run (i.e. asynchronous things like XHR requests or user/UI interaction). To make (4) work, the promise resolution normally schedules a re-call (e.g. via setTimeout/setImmediate) and continues after the event loop has run, by which time some of the "pending" promises may have been settled (rejected/fulfilled).
My worry is that steps 1 and 2 could be running for quite some time, only releasing execution to the event loop when it seems necessary in order to settle some of the "pending" promises. While that is okay in some cases (i.e. on the server/Node.js), it is quite problematic in a browser: even though it would be no problem to release execution to the event loop and keep the UI from blocking, this is not done in the promise implementations I have seen.
My question therefore is:
Do you know a promise implementation (Javascript Promises library) that cares for the aspect:
to make "long-running-code-non-blocking-UI" in browser?
which would mean that the promise resolution would voluntarily release execution back to the event loop, so that CSS animations, user input and mouse interaction get enough attention and there is no "Warning: Unresponsive script" message.
Any compliant promise implementation will not run the then functions synchronously, but rather only on the next tick. Therefore, your worry that "steps 1 and 2 could be running for quite some time" is unfounded.
From the Promises/A+ spec:
onFulfilled or onRejected must not be called until the execution context stack contains only platform code.
Here "platform code" means engine, environment, and promise implementation code. In practice, this requirement ensures that onFulfilled and onRejected execute asynchronously, after the event loop turn in which then is called, and with a fresh stack.
In other words, your formulation under 2) is incorrect. The promises implementation does not "run the associated functions", it schedules them.
This cannot help you--indeed there is no way to help you--if a handler itself is "long-running" code.
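That said, application code can split such work up itself. Here is a sketch (not part of any of the libraries mentioned; all names are illustrative) that processes items in chunks and explicitly yields to the event loop between chunks by resolving a promise from setTimeout:

// Sketch: process a large array in chunks, yielding to the event loop
// between chunks so rendering and user input stay responsive.
function yieldToEventLoop() {
  return new Promise(function (resolve) {
    setTimeout(resolve, 0); // macro-task: lets the browser render and handle input
  });
}

function processInChunks(items, handleItem, chunkSize) {
  var index = 0;
  function nextChunk() {
    var end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      handleItem(items[index]);
    }
    if (index < items.length) {
      return yieldToEventLoop().then(nextChunk);
    }
  }
  return Promise.resolve().then(nextChunk);
}

// Usage: processInChunks(bigArray, doWorkOnOneItem, 500).then(() => console.log('done'));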
I think the solution could be to parse and step through that long-running JavaScript code, for example with https://github.com/NeilFraser/JS-Interpreter.
It will make the code even slower, but you can control the pacing:
// myCode is the long-running JavaScript source (as a string);
// speed controls the pacing (added here, not defined in the original snippet).
const speed = 1;
const myInterpreter = new Interpreter(myCode);

function nextStep() {
  // Execute one step of the interpreted code, then yield to the event loop.
  if (myInterpreter.step()) {
    window.setTimeout(nextStep, 100 / speed);
  }
}
nextStep();
