I receive events on the fly, already in the order in which I want to process them, one after another (also on the fly, so without "storing" them first).
The following example is not going to work, because someEventOccured() can be called multiple times within a very short timespan, so the executions of handleEvent() would overlap in time (which I don't want). Since handleEvent() writes to a file, I cannot have the function executing concurrently with itself, and I need to maintain the order...
async someEventOccured(data: string) {
  await handleEvent(data);
}
I have already tried using .then() and storing the promise, however I have no idea how to wait for the previous .then() to finish...
Your help is very appreciated <3
The general approach is to have a shared promise to queue onto:
let queue = Promise.resolve();

function someEventOccured(data: string) {
  queue = queue.then(() => {
    return handleEvent(data);
  }).catch(err => {
    // ensure `queue` is never rejected or it'll stop processing events
    // …
  });
}
Of course, this does implicitly store the data events in the closures on the promise chain.
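To make the queueing behavior concrete, here is a self-contained sketch of the pattern in action (handleEvent here is a stand-in that records the order in which events were handled; the delays are arbitrary):

```javascript
let queue = Promise.resolve();
const processed = [];

// stand-in for the real handler: finishes after an arbitrary delay
function handleEvent(data) {
  return new Promise(resolve =>
    setTimeout(() => { processed.push(data); resolve(); }, Math.random() * 20));
}

function someEventOccured(data) {
  queue = queue
    .then(() => handleEvent(data))
    .catch(err => console.error(err)); // keep the chain alive on errors
  return queue; // lets a caller await "everything handled up to here"
}

// events arrive in quick succession…
someEventOccured("a");
someEventOccured("b");
const done = someEventOccured("c");

// …but are handled strictly one after another
done.then(() => console.log(processed)); // ["a", "b", "c"]
```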
Related
I am having trouble finding a use for Promises. Wouldn't the two approaches below work exactly the same way? Since the while loop in loopTest() is synchronous, logStatement() wouldn't run until it completes anyway, so how would the 2nd approach be any different? Wouldn't it be pointless to wait for it to resolve()?
1st approach:
let i = 0;

function loopTest() {
  while (i < 10000) {
    console.log(i)
    i++
  }
}
function logStatement() {
console.log("Logging test")
}
loopTest();
logStatement();
2nd approach:
let i = 0;

function loopTest() {
  return new Promise((resolve, reject) => {
    while (i < 10000) {
      console.log(i)
      i++
      if (i === 999) {
        resolve('I AM DONE')
      }
    }
  });
}
function logStatement() {
console.log("Logging test")
}
loopTest().then(logStatement);
Promises don't make anything asynchronous,¹ so you're right, there's no point to using a promise in the code you've shown.
The purpose of promises is to provide a standard, composable means of observing the result of things that are already asynchronous (like ajax calls).
There are at least three massive benefits to having a standardized way to observe the results of asynchronous operations:
We can have standard semantics for consuming individual promises, rather than every API defining its own signature for callback functions. (Does it signal error with an initial parameter that's null on success, like Node.js? Does it call the callback with an object with a success flag? Or...)
We can have standard ways of composing/combining them, such as Promise.all, Promise.race, Promise.allSettled, etc.
We can have syntax to consume them with our usual control structures, which we have now in the form of async functions and await.
But again, throwing a promise at a synchronous process almost never does anything useful.²
¹ One very small caveat there: The handler functions to attach to a promise are always triggered asynchronously, whether the promise is already settled or not.
² Another small caveat: Sometimes, you have a synchronous result you want to include in a composition operation (Promise.all, etc.) with various asynchronous operations. In that case, wrapping the value in a promise that's instantly fulfilled is useful — and in fact, all the standard promise combinators (Promise.all, etc.) do that for you, as does await.
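For instance (a sketch, with the asynchronous operation simulated by a timer; the value 42 stands in for any synchronously computed result):

```javascript
const asyncOp = new Promise(resolve => setTimeout(() => resolve("io result"), 10));
const syncValue = 42; // already available synchronously

// Promise.all wraps the plain value for us, as if we had written Promise.resolve(42)
Promise.all([asyncOp, syncValue]).then(([io, n]) => {
  console.log(io, n); // "io result" 42
});
```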
There's no point in what you are doing, because your function body is just a blocking loop.
To get a benefit from Promises, use them with APIs that perform IO, such as an HTTP request or reading a file from disk.
These APIs all traditionally used callbacks and are now mostly Promise-based.
Any function that uses a Promise-based function should itself also be Promise-based. This is why you see a lot of promises in modern code: a promise only has to be used at one level in a stack for the entire stack to be asynchronous in nature.
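A sketch of that "one level makes the whole stack asynchronous" point; the IO at the bottom is simulated with a timer, and all names here are made up for illustration:

```javascript
// bottom of the stack: something genuinely asynchronous (simulated IO)
function fetchUser(id) {
  return new Promise(resolve =>
    setTimeout(() => resolve({ id, name: "Ada" }), 10));
}

// every caller above it becomes promise-based too
async function getUserName(id) {
  const user = await fetchUser(id); // depends on a promise…
  return user.name;                 // …so it returns a promise itself
}

async function greet(id) {
  return `Hello, ${await getUserName(id)}!`; // and so does this one
}

greet(1).then(msg => console.log(msg)); // "Hello, Ada!"
```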
Is this a better example of how Promises are used? This is all I can think of to make it show use to me:
Version 1
function getData() {
  fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then(data => data.json())
    .then(json => console.log(json))
}

function logInfo() {
  console.log("i am a logger")
}

getData()
logInfo()

// "i am a logger"
// {"test": "json"}
Version 2
function getData() {
  return fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then(data => data.json())
    .then(json => console.log(json))
}

function logInfo() {
  console.log("i am a logger")
}

getData().then(logInfo);

// {"test": "json"}
// "i am a logger"
// waits for the API result to be logged, then logInfo runs and makes its log statement
There are definitely benefits to using Promises, but only in certain scenarios where their usage is viable.
Your example could represent what happens when you retrieve data from an external source synchronously: it blocks the thread, preventing further code from executing until the loop terminates (I explain below exactly why that happens). Wrapping it in a promise gives no different output, in that the thread is still blocked, and when the next message in the queue has to be processed, it gets processed as normal right after the loop ends.
However, an implementation similar to this could achieve a while loop running in a non-blocking manner; just an idea (I don't mean to derail this topic with setInterval's implementation):
let f = () => {
  let tick = Date.now;
  let t = tick();
  let interval = setInterval(() => {
    if (tick() - t >= 3000) {
      console.log("stop");
      clearInterval(interval);
    }
  }, 0);
};

f();
console.log("start");
Basically, the time is checked/handled separately by the browser, and the callback is executed whenever the specified time runs out (while the interval hasn't been cleared), once the call stack becomes empty (so UI work isn't affected) and the currently executing function has finished, or after other functions above it in the stack finish running. I don't know about the performance implications of doing something like this, but I feel it should only be used when necessary, since the callback has to be executed very frequently (with a 0 timeout, although it's not guaranteed to actually be 0).
why it happens
I mainly want to clarify that, while the handler functions will be scheduled to execute asynchronously, every message in the queue has to be processed completely before the next one, and for the duration your while loop executes, no new message can be processed in the event queue. So it would be pointless to involve Promises where the same thing would happen without them.
So basically the answer to:
wouldn't it be pointless in waiting for it to resolve() ?
is yes, it would be pointless in this case.
When using Javascript promises, does the event loop get blocked?
My understanding is that using await & async makes the stack stop until the operation has completed. Does it do this by blocking the stack, or does it act similar to a callback and pass off the process to an API of sorts?
When using Javascript promises, does the event loop get blocked?
No. Promises are only an event notification system. They aren't an operation themselves. They simply respond to being resolved or rejected by calling the appropriate .then() or .catch() handlers and if chained to other promises, they can delay calling those handlers until the promises they are chained to also resolve/reject. As such a single promise doesn't block anything and certainly does not block the event loop.
My understanding is that using await & async makes the stack stop until the operation has completed. Does it do this by blocking the stack, or does it act similar to a callback and pass off the process to an API of sorts?
await is simply syntactic sugar that replaces a .then() handler with slightly simpler syntax. But under the covers the operation is the same: the code that comes after the await is effectively put inside an invisible .then() handler, and there is no blocking of the event loop, just like there is no blocking with a .then() handler.
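To illustrate that equivalence (getData here is a made-up promise-returning function), these two functions behave the same:

```javascript
function getData() {
  return Promise.resolve(42); // stand-in for any promise-returning API
}

// with a .then() handler
function doubleItThen() {
  return getData().then(value => value * 2);
}

// with await: same behavior, just simpler syntax;
// everything after the await acts like an invisible .then() handler
async function doubleItAwait() {
  const value = await getData();
  return value * 2;
}
```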
Note to address one of the comments below:
Now, if you were to construct code that overwhelms the event loop with continually resolving promises (in some sort of infinite loop, as proposed in some comments here), then the event loop will process those continually resolved promises from the microtask queue over and over and will never get a chance to process the macrotasks waiting in the event loop (other types of events). The event loop is still running and still processing microtasks, but if you continually stuff new microtasks (resolved promises) into it, it may never get to the macrotasks. There seems to be some debate about whether one would call this "blocking the event loop" or not. That's just a terminology question; what matters is what is actually happening. In this example of an infinite loop continually resolving a new promise, the event loop keeps processing those resolved promises, and the other events in the event queue never get processed because they never reach the front of the line to get their turn. This is more often referred to as "starvation" than "blocking", but the point is that macrotasks may not get serviced if you are continually and infinitely putting new microtasks in the queue.
This notion of an infinite loop continually resolving a new promise should be avoided in Javascript. It can starve other events from getting a chance to be serviced.
Do Javascript promises block the stack
No, not the stack. The current job will run until completion before the Promise's callback starts executing.
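A tiny demonstration of that ordering (a sketch; the array just records what ran when):

```javascript
const order = [];

Promise.resolve().then(() => order.push("promise callback"));
order.push("current job");

// the synchronous job always finishes before the promise callback runs
setTimeout(() => console.log(order), 0); // ["current job", "promise callback"]
```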
When using Javascript promises, does the event loop get blocked?
Yes it does.
Different environments have different event-loop processing models, so I'll be talking about the one in browsers; but even though Node.js's model is a bit simpler, they actually expose the same behavior.
In a browser, Promises' callbacks (PromiseReactionJob in ES terms), are actually executed in what is called a microtask.
A microtask is a special task that gets queued in the special microtask-queue.
This microtask-queue is visited several times during a single event-loop iteration, in what is called a microtask-checkpoint: one runs every time the JS call stack is empty, for instance after the main task is done, after rendering events like resize are dispatched, after every animation-frame callback, etc.
These microtask-checkpoints are part of the event-loop, and will block it the time they run just like any other task.
What is more, a microtask scheduled from within a microtask-checkpoint will get executed by that same microtask-checkpoint.
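That last point can be seen in isolation with queueMicrotask (a small sketch that also runs in Node.js):

```javascript
const order = [];

setTimeout(() => order.push("macrotask"), 0);

queueMicrotask(() => {
  order.push("microtask 1");
  // scheduled from within a microtask: still drained before the timer fires
  queueMicrotask(() => order.push("microtask 2"));
});

setTimeout(() => console.log(order), 10);
// ["microtask 1", "microtask 2", "macrotask"]
```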
This means that simply using a Promise doesn't make your code let the event loop breathe the way a setTimeout()-scheduled task would. Even though the JS stack has been emptied and the previous task executed entirely before the callback is called, you can still completely lock the event loop, never allowing it to process any other task or even update the rendering:
const log = document.getElementById( "log" );
let now = performance.now();
let i = 0;

const promLoop = () => {
  // only the final result will get painted
  // because the event-loop can never reach the "update the rendering" steps
  log.textContent = i++;
  if( performance.now() - now < 5000 ) {
    // this doesn't let the event-loop loop
    return Promise.resolve().then( promLoop );
  }
  else { i = 0; }
};

const taskLoop = () => {
  log.textContent = i++;
  if( performance.now() - now < 5000 ) {
    // this does let the event-loop loop
    postTask( taskLoop );
  }
  else { i = 0; }
};

document.getElementById( "prom-btn" ).onclick = start( promLoop );
document.getElementById( "task-btn" ).onclick = start( taskLoop );

function start( fn ) {
  return (evt) => {
    i = 0;
    now = performance.now();
    fn();
  };
}

// Posts a "macro-task".
// We could use setTimeout, but that method gets throttled
// to 4ms after 5 recursive calls.
// So instead we use either the incoming postTask API
// or the MessageChannel API, which are not affected
// by this limitation.
function postTask( task ) {
  // Available in Chrome 86+ under the 'Experimental Web Platforms' flag
  if( window.scheduler ) {
    return scheduler.postTask( task, { priority: "user-blocking" } );
  }
  else {
    const channel = postTask.channel ||= new MessageChannel();
    channel.port1
      .addEventListener( "message", () => task(), { once: true } );
    channel.port2.postMessage( "" );
    channel.port1.start();
  }
}
<button id="prom-btn">use promises</button>
<button id="task-btn">use postTask</button>
<pre id="log"></pre>
So beware: using a Promise doesn't help at all with letting the event loop actually loop.
Too often we see code using a batching pattern to avoid blocking the UI that completely fails at its goal, because it assumes Promises will let the event loop loop. For this, keep using setTimeout() as a means of scheduling a task, or use the postTask API if you are in the near future.
My understanding is that using await & async, makes the stack stop until the operation has completed.
Kind of... when awaiting a value, the remainder of the function's execution is added to the callbacks attached to the awaited Promise (which may be a new Promise resolving the non-Promise value).
So the stack is indeed cleared at this point, but the event loop is not blocked at all here; on the contrary, it's been freed to execute anything else until the Promise resolves.
This means that you can very well await a never-resolving promise and still let your browser live correctly.
async function fn() {
  console.log( "will wait a bit" );
  const prom = await new Promise( (res, rej) => {} );
  console.log( "done waiting" ); // never reached: the promise never resolves
}
fn();

onmousemove = () => console.log( "still alive" );
Move your mouse to check whether the page is locked.
An await blocks only the current async function, the event loop continues to run normally. When the promise settles, the execution of the function body is resumed where it stopped.
Every async/await can be transformed in an equivalent .then(…)-callback program, and works just like that from the concurrency perspective. So while a promise is being awaited, other events may fire and arbitrary other code may run.
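A small sketch of that concurrency: while the async function is suspended at its await, an unrelated timer callback gets to run (the array records the order things happened):

```javascript
const order = [];

async function waiter() {
  order.push("before await");
  await new Promise(resolve => setTimeout(resolve, 20)); // suspends waiter()
  order.push("after await"); // resumed here once the promise settles
}

waiter();

// the event loop is free while waiter() is suspended:
setTimeout(() => order.push("other event"), 0);

setTimeout(() => console.log(order), 40);
// ["before await", "other event", "after await"]
```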
As others mentioned above, Promises are just an event notification system, and async/await is the same as .then(). However, be very careful: you can "block" the event loop by executing a blocking operation. Take a look at the following code:
function blocking_operation_inside_promise() {
  return new Promise( (res, rej) => {
    while( true ) console.log(' loop inside promise ');
    res(); // never reached
  });
}

async function init() {
  let await_forever = await blocking_operation_inside_promise();
}

init();
console.log('END');
The END log will never be printed. JS is single-threaded, and that thread is busy right now. You could say the whole thing is "blocked" by the blocking operation. In this particular case the event loop is not blocked per se, but it won't deliver events to your application because the main thread is busy.
JS/Node can be a very useful programming language, very efficient when using non-blocking operations (like network operations). But do not use it to execute very CPU-intensive algorithms. In the browser, consider using Web Workers; on the server side, use Worker Threads, Child Processes, or a Microservice Architecture.
console.log('1')

setTimeout(() => {
  console.log('2')
}, 0)

function three() {
  return new Promise(resolve => {
    setTimeout(() => {
      return new Promise(resolve => resolve('3'))
    }, 0)
  })
}

three().then(result => console.log(result))

console.log('4')
This code snippet outputs 1 4 2
This is the behavior I would expect based on my understanding of javascript's event loop and concurrency model. But it leaves me with some lingering questions.
Before getting to those questions, I'll first break down my understanding of this code snippet.
Why code outputs 1
no explanation needed
Why code outputs 4
the callback that outputs 2 gets loaded into the event queue (aka macro task queue) after 0ms, but doesn't get executed until the main call stack is emptied.
even if three were a promise that resolved immediately, its .then() callback is loaded into the job queue (aka microtask queue) and wouldn't be executed until the main call stack is emptied (regardless of the contents of the event queue)
Why code outputs 2
after console.log(4) the main call stack is empty and javascript looks for the next callback to load on the main stack. It's pretty safe to assume that at this point, some "worker thread" had already put the callback function that outputs 2 onto the macro task queue. This gets loaded onto the stack and 2 is output.
Why code does NOT output 3
This is where it's a little blurry for me. The function three returns a promise that is then-ed in the main thread. Callback functions passed through then are loaded onto the microtask queue and executed before the next task in the macrotask queue. So while you might think it'll run before the callback that logs 2, it's actually theoretically impossible for it to run at all. That's because the Promise is only resolved via the callback function of its setTimeout, and that callback function (because of setTimeout) would only run if the main execution thread (the same thread that's waiting for the promise to resolve) is empty.
Why does this bother me
I'm trying to build a complete theoretical mental model of how javascript handles concurrency. One of the missing pieces in that model is the relationship between network requests, promises, and the event loop. Take the above code snippet, and suppose I replace three's setTimeout with some sort of network request (a very common thing in async web development). Assuming that the network request behaves similarly to setTimeout, in that when the "worker thread" is done, a callback is pushed to the macro task queue, it's hard for me to understand how that callback even gets executed. But this is something that happens literally all the time.
Can someone help me understand? Do I have any missing gaps in my current understanding of js concurrency? Have I made an incorrect assumption? Does any of this actually make any sense? lol
Why code does NOT output 3
In this code:
function three() {
  return new Promise(resolve => {
    setTimeout(() => {
      return new Promise(resolve => resolve('3'))
    }, 0)
  })
}

three().then(result => console.log(result))
You never resolve the first Promise that three() creates. Since that's the one returned from three(), the .then() handler in three().then(...) is never called.
You do resolve the promise created inside the timer, but you're returning that promise only to the timer callback which does nothing.
If you change your code to this:
function three() {
  return new Promise(resolve => {
    setTimeout(() => {
      resolve('3');
    }, 0)
  })
}

three().then(result => console.log(result))
Then, you would see the 3 get output.
So, this doesn't have anything to do with the event loop or how it works. It has to do with not resolving the promise that three() returns so the .then() handler on that promise never gets called.
I'm trying to build a complete theoretical mental model of how javascript handles concurrency. One of the missing pieces in that model is the relationship between network requests, promises, and the event loop. Take the above code snippet, and suppose I replace three's setTimeout with some sort of network request (a very common thing in async web development). Assuming that the network request behaves similarly to setTimeout, in that when the "worker thread" is done, a callback is pushed to the macro task queue, it's hard for me to understand how that callback even gets executed. But this is something that happens literally all the time.
Network requests and promises and timers all go through the event loop. There are very complicated rules about how multiple events in queue at the same time are prioritized relative to one another. .then() handlers are generally prioritized first.
Think of the Javascript interpreter as this simplistic sequence:
Get an event from the event queue.
If nothing is in the event queue, sleep until something is.
Run the callback function associated with the event you pulled from the event queue.
Run that callback function until it returns. (Note: it may not be completely done with its work, because it may have started other asynchronous operations and set up its own callbacks or promises for those. But it has returned from the original callback that started it.)
When that callback returns, go back to the first step above and get the next event.
Remember that network requests, promises, timers and literally ALL asynchronous operations in node.js go through the event queue in this manner.
Why do you assume that a network request would behave like setTimeout?
It is a Promise, whose resolve() would go into the microtask queue.
My code:
async function run() {
  process.nextTick(() => {
    console.log(1);
  });
  await (new Promise(resolve => resolve()).then(() => { console.log(2) }));
  console.log(3);
  process.nextTick(() => {
    console.log(4);
  });
  new Promise(resolve => resolve()).then(() => { console.log(5) });
}

run();
My expected output is the numbers to print 1,2,3,4,5 in order, but instead I get:
1
2
3
5
4
As expected, the first nextTick is evaluated before the first .then callback, because process.nextTick and .then are both deferred to future ticks, and process.nextTick is declared before .then. So 1 and 2 are output in order, as expected.
The code after the await should not run until .then has resolved, and this works as expected: 3 is output in the expected place.
Then essentially we have a repeat of the first part of the code, but this time .then is called before process.nextTick.
This seems like inconsistent behavior. Why does process.nextTick get called before the .then callback the first time around but not the second?
The node.js event queue is not a single queue. It is actually a bunch of different queues and things like process.nextTick() and promise .then() handlers are not handled in the same queues. So, events of different types are not necessarily FIFO.
As such, if you have multiple things that go in the event queue around the same time and you want them served in a specific order, the simplest way to guarantee that order is to write your code to force the order you want, not to try to guess exactly how two things are going to get sequenced that went into the queue around the same time.
It is true that two operations of the exact same type like two process.nextTick() operations or two resolved promise operations will be processed in the order they were put into the event queue. But, operations of different types may not be processed in the order relative to each other that they were put in the event queue because different types of events are processed at different times in the cycle the event loop makes through all the different types of events.
It is probably possible to fully understand exactly how the event loop in node.js works for every type of event and predict exactly how two events that enter the event queue at about the same time will be processed relative to one another, but it is not easy. It is further complicated by the fact that it also depends upon where the event loop is in its current processing when the new events are added to the event queue.
As in my delivery example in my earlier comments, when exactly a new delivery will be processed relative to other deliveries depends upon where the delivery driver is when the new order arrives in the queue. The same can be true of the node.js event system. If a new event is inserted while node.js is processing a timer event, it may have a different relative order to other types of events than if node.js were processing a file I/O completion event when it was inserted. So, because of this significant complication, I don't recommend trying to predict the execution order of asynchronous events of different types that are inserted into the event queue at about the same time.
And, I should add that native promises are plugged directly into the event loop implementation (as their own type of micro task) so a native promise implementation may behave differently in your original code than a non-native promise implementation. Again a reason not to try to forecast exactly how the event loop will schedule different types of events relative to one another.
If the order of processing is important to your code, then use code to enforce a specific completion processing order.
As an example of how it matters what the event queue is doing when events are inserted into the event queue, your code simplified to this:
async function run() {
  process.nextTick(() => {
    console.log(1);
  });
  await Promise.resolve().then(() => { console.log(2) });
  console.log(3);
  process.nextTick(() => {
    console.log(4);
  });
  Promise.resolve().then(() => { console.log(5) });
}

run();
Generates this output:
1
2
3
5
4
But, simply change when the run() is called to be Promise.resolve().then(run) and the order is suddenly different:
async function run() {
  process.nextTick(() => {
    console.log(1);
  });
  await Promise.resolve().then(() => { console.log(2) });
  console.log(3);
  process.nextTick(() => {
    console.log(4);
  });
  Promise.resolve().then(() => { console.log(5) });
}

Promise.resolve().then(run);
Generates this output which is quite different:
2
3
5
1
4
You can see that when the code is started from a resolved promise, then other resolved promises that happen in that code get processed before .nextTick() events which wasn't the case when the code was started from a different point in the event queue processing. This is the part that makes the event queue system very difficult to forecast.
So, if you're trying to guarantee a specific execution order, you have to either use all the same type of events and then they will execute in FIFO order relative to each other or you have to make your code enforce the execution order you want. So, if you really wanted to see this order:
1
2
3
4
5
You could use all promises which would essentially map to this:
async function run() {
  Promise.resolve().then(() => {
    console.log(1);
  });
  Promise.resolve().then(() => {
    console.log(2);
  });
  await Promise.resolve().then(() => {});
  console.log(3);
  Promise.resolve().then(() => {
    console.log(4);
  });
  Promise.resolve().then(() => { console.log(5) });
}

run();
Or, you change the structure of your code so the code makes it always process things in the desired order:
async function run() {
  process.nextTick(async () => {
    console.log(1);
    await Promise.resolve().then(() => { console.log(2) });
    console.log(3);
    process.nextTick(() => {
      console.log(4);
      Promise.resolve().then(() => { console.log(5) });
    });
  });
}

run();
Either of these last two scenarios will generate the output:
1
2
3
4
5
Thanks to the helpful comments that made me realize that not all JavaScript ways of deferring to a later tick are created equal. I had assumed that (new Promise()).then, process.nextTick, and even setTimeout(callback, 0) would all behave exactly the same, but it turns out I can't assume that.
For now I'll leave here that the solution to my problem is simply to not use process.nextTick if it does not work in the expected order.
So I can change my code to (Disclaimer this is not actually good async code in general):
async function run() {
  process.nextTick(() => {
    console.log(1);
  });
  await (new Promise(resolve => resolve()).then(() => { console.log(2) }));
  console.log(3);
  (new Promise(resolve => resolve())).then(() => { console.log(4) });
  (new Promise(resolve => resolve())).then(() => { console.log(5) });
}

run();
Now I'm ensuring that 4 is logged before 5, by making both of them the same type of async call. Making both of them use process.nextTick would also ensure that 4 logs before 5.
async function run() {
  process.nextTick(() => {
    console.log(1);
  });
  await (new Promise(resolve => resolve()).then(() => { console.log(2) }));
  console.log(3);
  process.nextTick(() => {
    console.log(4);
  });
  process.nextTick(() => {
    console.log(5);
  });
}

run();
This is a solution to my problem, but if anyone wants to provide a more direct answer to my question I'll be happy to accept.
I have a long, complicated asynchronous process in TypeScript/JavaScript spread across many libraries and functions that, when it is finally finished receiving and processing all of its data, calls a function processComplete() to signal that it's finished:
processComplete(); // Let the program know we're done processing
Right now, that function looks something like this:
let complete = false;

function processComplete() {
  complete = true;
}
In order to determine whether the process is complete, other code either uses timeouts or process.nextTick and checks the complete variable over and over again in loops. This is complicated and inefficient.
I'd instead like to let various async functions simply use await to wait and be awoken when the process is complete:
// This code will appear in many different places
await /* something regarding completion */;
console.log("We're done!");
If I were programming in Windows in C, I'd use an event synchronization primitive and the code would look something like this:
Event complete;
void processComplete() {
SetEvent(complete);
}
// Elsewhere, repeated in many different places
WaitForSingleObject(complete, INFINITE);
console.log("We're done!");
In JavaScript or TypeScript, rather than setting a boolean complete value to true, what exactly could processComplete do to make wake up any number of functions that are waiting using await? In other words, how can I implement an event synchronization primitive using await and async or Promises?
This pattern is quite close to your code:
const processComplete = args => new Promise(resolve => {
  // ...
  // In the middle of a callback for an async function, etc.:
  resolve(); // instead of `complete = true;`
  // ...
});

// elsewhere
await processComplete(args);
console.log("We're done!");
More info: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise.
It really depends on what you mean by "other code" in this scenario. It sounds like you want to use some variation of the delegation pattern or the observer pattern.
A simple approach is to take advantage of the fact that JavaScript allows you to store an array of functions. Your processComplete() method could do something like this:
function processComplete() {
  arrayOfFunctions.forEach(fn => fn());
}
Elsewhere, in your other code, you could create functions for what needs to be done when the process is complete, and add those functions to the arrayOfFunctions.
If you don't want these different parts of code to be so closely connected, you could set up a completely separate part of your code that functions as a notification center. Then, you would have your other code tell the notification center that it wants to be notified when the process is complete, and your processComplete() method would simply tell the notification center that the process is complete.
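A minimal sketch of that notification-center idea (all names here are illustrative):

```javascript
const notificationCenter = {
  listeners: {},
  subscribe(event, fn) {
    (this.listeners[event] ||= []).push(fn);
  },
  notify(event, data) {
    (this.listeners[event] || []).forEach(fn => fn(data));
  },
};

// other code registers interest without knowing who fires the event…
notificationCenter.subscribe("processComplete", () => console.log("We're done!"));

// …and processComplete() only talks to the notification center
function processComplete() {
  notificationCenter.notify("processComplete");
}

processComplete(); // logs "We're done!"
```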
Another approach is to use promises.
I have a long, complicated asynchronous process in TypeScript/JavaScript spread across many libraries and functions
Then make sure that every bit of the process that is asynchronous returns a promise for its partial result, so that you can chain onto them and compose them together or await them.
When it is finally finished receiving and processing all of its data, calls a function processComplete() to signal that it's finished
It shouldn't. The function that starts the process should return a promise, and when the process is finished it should fulfill that promise.
If you don't want to properly promisify every bit of the whole process because it's too cumbersome, you can just do
function startProcess(…) {
  … // do whatever you need to do
  return new Promise(resolve => {
    processComplete = resolve;
    // don't forget to reject when the process failed!
  });
}
In JavaScript or TypeScript, rather than setting a boolean complete value to true, what exactly could processComplete do to make wake up any number of functions that are waiting using await?
If they are already awaiting the result of the promise, there is nothing else that needs to be done. (The awaited promise internally has such a flag already). It's really just doing
// somewhere:
var resultPromise = startProcess(…);
// elsewhere:
await resultPromise;
… // the process is completed here
You don't even need to fulfill the promise with a useful result if all you need is to synchronise your tasks, but you really should. (If there's no data they are waiting for, what are they waiting for at all?)
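Putting this together, a manually resolved promise behaves very much like the Windows event in the question (a sketch; createEvent is a made-up helper):

```javascript
// a one-shot "event" built from a promise whose resolve function is captured
function createEvent() {
  let signal;
  const wait = new Promise(resolve => { signal = resolve; });
  return { wait, signal };
}

const complete = createEvent();
const woken = [];

// any number of waiters, in many different places:
async function waiter(name) {
  await complete.wait;
  woken.push(name);
  console.log(`${name}: we're done!`);
}
waiter("A");
waiter("B");

// the equivalent of SetEvent(complete): wakes every waiter
complete.signal();
```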