wait for Promise.all in a synchronous function, basically blocking! - javascript

I have a lot of synchronous functions that I want to execute first; they are basic ajax requests that render HTML to the DOM. In order to do this I had to execute all of these synchronous requests one by one. But I somehow want to run these synchronous functions asynchronously, all at the same time, and wait for them to finish, in order to speed things up. This has to happen inside a synchronous function. My understanding is that this doesn't work in JavaScript, but I would like to hear what you guys have to say.
So my attempt was to wrap all of these synchronous requests in asynchronous promises and then do a Promise.all call. I can't wait for the Promise.all().then because the main thread will keep executing the rest of the code after this main synchronous thread/function. So I wonder if there is a way to block the main thread in order to wait for these asynchronous calls.
Here's a short illustration of what I'm talking about:
var syncfunc = () => {
  var getPromise = () => {
    return new Promise((resolve) => {
      var asyncAjaxRequest = async function() {
        doSomeStuff();
        resolve();
      };
      asyncAjaxRequest(); // the request must actually be invoked, or the promise never resolves
    });
  };
  var promises = [getPromise(), getPromise(), getPromise()];
  Promise.all(promises);
  console.log('I want this console.log to execute after all promises have executed doSomeStuff');
  /**
   * Promise.all(promises).then(() => {
   *   // I can't use this, because scripts in other files will execute if I wait like this
   * });
   */
};
I know .then will execute when all the resolves are done, but I basically want to block this synchronous thread while waiting for all the asynchronous ones to finish.
If I could, I would of course change the structure to fit my needs, but the problem, and the reason I'm trying to do this, is that I'm using the Sitevision framework and want to add some content to the DOM before a print module opens the print window. Calling every function synchronously is just not the way to go; it's too slow. I've also tried setting window.print = null to disable the print function and then adding the print function back when the promises resolve, but it simply doesn't work.

You cannot make an asynchronous operation turn into a synchronous one in plain Javascript (without external code). The event-driven JS engine just doesn't work that way.
By definition, an asynchronous operation starts the operation (handing execution off to native code) and then returns back to the interpreter, which then continues to execute the code that follows. The native code will add an event to the JS event queue when it finishes, to allow the interpreter's event loop to service the completion of the asynchronous operation. If you were to create some sort of "block", such as a semi-infinite while loop that would "block" the interpreter from executing more code, you'd end up in a stalemate. The loop that is blocking the interpreter prevents the JS interpreter from ever getting to the point where it can process the event that signals the end of the asynchronous operation. So, you have a loop waiting for something to finish, but the thing it's waiting for can't finish until the loop finishes - stalemate.
So, because of the single threaded event loop nature of the JS interpreter, you can't (purely in Javascript) block waiting for the end of an asynchronous operation.
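To make the stalemate concrete, here's a minimal sketch (my own illustration, not the asker's code) of a busy-wait that can never observe the promise settling:
let done = false;
Promise.resolve().then(() => { done = true; });
// The .then() callback can only run once the current task yields back to
// the event loop, but this loop never yields, so `done` never becomes true.
while (!done) {
  // busy-wait: the event loop is starved, the completion callback never runs
}
console.log('unreachable');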
Pretty much always, the correct design is to refactor the surrounding code/infrastructure to work with an asynchronous operation and asynchronous result (callback or promise).
If this is node.js, there are a couple of horrific hacks that can get you this result, but they block the entire interpreter so are almost never a desired design.
The first option involves writing a custom nodejs plugin (async operations done in native code) that provides a blocking interface that just doesn't return until the operation is done.
The second option involves using the synchronous child_process operations (such as child_process.execFileSync()) to create a blocking child process, run your code in that child process and then continue when that process finishes.
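As a rough illustration of that second option (a hypothetical sketch; worker.js is an assumed script that performs the async work and writes its result to stdout):
const { execFileSync } = require('child_process');

// The parent process blocks here until the child exits. The child is free
// to do asynchronous work internally, since it has its own event loop.
const output = execFileSync('node', ['worker.js'], { encoding: 'utf8' });
console.log('child finished with:', output);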
I would consider both of these pretty bad hacks and pretty much never the desired way to solve such a problem. But I did want to show you what has to be done in order to block for an asynchronous operation (it has to be moved out of Javascript or out of the process).
If you can't figure out how to solve your real problem with non-blocking, asynchronous operations, I'd suggest you post a new question where you describe in detail exactly what the real problem is and we can help you find an asynchronous design that would work for your situation. If you post a link to the new question in a comment here, some of the people engaged here may check in on the new question and attempt to help.

You could use async/await to solve this. This is how you do it:
async function promiseSolver() {
  var getPromise = () => {
    return new Promise((resolve) => {
      var asyncAjaxRequest = async function() {
        doSomeStuff();
        resolve();
      };
      asyncAjaxRequest(); // invoke the request so the promise can resolve
    });
  };
  var promises = [getPromise(), getPromise(), getPromise()];
  await Promise.all(promises);
  console.log('I want this console.log to execute after all promises have executed doSomeStuff');
}
Basically, your code will wait until the .all is completed and then continue with processing. Take into consideration that while the code reads as if it were synchronous, it is non-blocking: the await suspends only promiseSolver itself, not the main thread, and promiseSolver returns a promise.
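One caveat worth spelling out: since promiseSolver returns a promise, any caller that needs to run after the promises settle must await it too, for example:
(async () => {
  await promiseSolver();
  console.log('runs only after all three promises have resolved');
})();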

Related

JS: what is async/await benefit when calling api?

How can async/await be used when calling APIs where the rest of the function below depends on the data you're fetching?
What if the fake API calls here pushed more names to the array? The updateNames function would already have been called before the API calls were done. Wouldn't I want to block the runtime in this case and make it synchronous?
let names = [];

async foo() {
  await Promise.all([
    this.callAPI1(),
    this.callAPI2()
  ]);
  this.updateNames(names);
}
In your example, this.updateNames will be executed after your API calls are resolved because you have the await keyword and it is the same as doing:
Promise.all([call1(), call2()]).then(() => { this.updateNames() })
If you have any doubt you can always try by simulating an API call with a promisified setTimeout and some console.log:
const simulateAsync = () => {
  return new Promise(resolve => {
    setTimeout(() => {
      console.log("resolved");
      resolve();
    }, 100);
  });
};

async function test() {
  await Promise.all([simulateAsync(), simulateAsync()]);
  console.log("after");
}

test();
What if the fake API calls here pushed more names to the array? The updateNames function would be called already before the api calls would be done.
updateNames would not be called until both of the callAPI# functions completed; the await makes the code block (yielding control back to the event loop) until both promises (explicitly created or implicit when the functions are async themselves) complete.
If the functions in question don't return promises, executing synchronously/eagerly, then the await Promise.all([...]) is pointless (there were no promises involved), but there's still no risk of updateNames being called before the callAPI# calls finish; either:
callAPI# returns a promise (in which case await blocks the current task until they complete, returning control to the event loop while it waits), or
callAPI# does not return a promise and all the work is done immediately
and in either case, you can't get to updateNames before they finish.
Wouldn't I want to block the runtime in this case and make it synchronous?
You are blocking the current function call/task with the await. Other tasks scheduled on the event loop can run while the await is still blocked though, and that's the advantage to async processing. Every apparently asynchronous thing in JavaScript is sharing that event loop, and if you completely blocked (rather than awaiting an async call), none of them could run. This means, for example:
setTimeout/setInterval calls don't fire
UI drawing operations don't occur
User input is blocked
All events and observer handling code is deferred
Truly synchronous calls would make all of that (and more) wait until the entirety of foo completed. This is why async code is sometimes referred to as "cooperative multitasking": control is never wrested away from the current task without its consent, like you'd see in multithreaded code; instead, control is handed back to the event loop voluntarily (via await) whenever a task is blocked waiting on asynchronous processing, or when the task completes. When the async task has a result, it waits for the event loop to hand control back to it, then continues where it left off.
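A minimal sketch contrasting the two behaviors (sleep is an illustrative helper, not part of the question's code):
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function cooperative() {
  setTimeout(() => console.log('timer fired'), 10);
  await sleep(100); // yields to the event loop, so "timer fired" prints first
  console.log('await done');
}

function blocking() {
  setTimeout(() => console.log('timer fired'), 10);
  const end = Date.now() + 100;
  while (Date.now() < end) {} // starves the event loop; the timer has to wait
  console.log('loop done'); // prints before "timer fired"
}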

Is non-parallel access to a method in node JS guaranteed?

Javascript is single-threaded, and Node.js uses an asynchronous event-driven design pattern, which means that multiple actions are taken at the same time while executing a program.
With this in mind, I have some pseudocode:
myFunction() // main flow

var httpCallMade = false // a global variable

async myFunction() {
  const someData = await callDB() // LINE 1 network call
  renderMethod() // LINE 2 flow1
}

redisPubSubEventHandler() { // called from a redis subscription, asynchronously, from a background task in the program
  renderMethod() // LINE 3 flow2
}

renderMethod() {
  if (!httpCallMade) {
    httpCallMade = true // set a global flag
    const res = makeHTTPCall() // an asynchronous network call; returns a promise
  } // I want to ensure that this block is "synchronized" and is not accessible by flow1 and flow2 simultaneously!
}
myFunction() is called in the main thread, while redisPubSubEventHandler() is called asynchronously from a background task in the program. Both flows end up calling renderMethod(). The idea is to ensure that makeHTTPCall() (inside renderMethod) is only allowed to be called once.
Is it guaranteed that renderMethod() would never be executed in parallel by LINE 2 and LINE 3 at the same time? My understanding is that as soon as renderMethod() is executed, the event loop will not allow anything else to happen on the server, which guarantees that it is only executed once at a given time (even if it contains a network call without await).
Is this understanding correct?
If not, how do I make synchronize/lock entry to renderMethod?
Javascript is single-threaded. Therefore, unless you are deliberately using threads (eg. worker_threads in node.js) no function in the current thread can be executed by two parallel threads at the same time.
This explains why javascript has no mutex or semaphore capability - because generally it is not needed (note: you can still have race conditions because asynchronous code may be executed in a sequence you did not expect).
There is a general confusion that asynchronous code means parallel code execution (multi-threaded). It can, but most of the time when a system is labeled as asynchronous or non-blocking or event-oriented INSTEAD of multi-threaded, it means that the system is single-threaded.
In this case asynchronous means parallel WAITING, not parallel code execution. Code is always executed sequentially; only, due to the ability to wait in parallel, you may not always know the sequence in which the code is executed.
There are parts of javascript that execute in a separate thread. Modern browsers execute each tab and iframe in its own thread (but each tab or iframe is itself single-threaded). But script cannot cross tabs, windows or iframes. So this is a non-issue. Script may access objects inside iframes, but this is done via an API and the script itself cannot execute in the foreign iframe.
Node.js and some browsers also do DNS queries in a separate thread because there is no standardized cross-platform non-blocking API for DNS queries. But this is C code and not your javascript code. Your only interaction with this kind of multi-threading is when you pass a URL to fetch() or XMLHttpRequest().
Node.js also implements file I/O, zip compression and cryptographic functions in separate threads, but again this is C code, not your javascript code. All results from these separate threads are returned to you asynchronously via the event loop, so by the time your javascript code processes the result we are back to executing sequentially in the main thread.
Finally, both node.js and browsers have worker APIs (web workers for browsers and worker threads for node.js). However, both of these APIs use message passing to transfer data (in node, only a pointer is passed in the message, thus the underlying memory is shared), and this still protects functions from having their variables overwritten by another thread.
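For completeness, a minimal sketch of that message-passing style with node's worker_threads (this file spawns itself as the worker):
const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  // spawn this same file as a worker thread
  const worker = new Worker(__filename);
  worker.on('message', msg => console.log('from worker:', msg));
} else {
  // worker side: data crosses the thread boundary only via messages
  parentPort.postMessage('hello from a separate thread');
}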
In your code, both myFunction() and redisPubSubEventHandler() run in the main thread. It works like this:
1. myFunction() is called; it returns immediately when it encounters the await.
2. A bunch of functions are declared and compiled.
3. We reach the end of your script, just past the closing } of renderMethod().
4. Now that we have reached the end of the script, we enter the event loop...
5. Either the callDB or the redis event completes and our process gets woken up.
6. The event loop figures out which handler to call based on what event happened.
7. Either the await returns and calls renderMethod(), or redisPubSubEventHandler() gets executed and calls renderMethod().
In either case, both your renderMethod() calls will execute on the main thread. Thus it is impossible for renderMethod() to run in parallel.
It is possible for renderMethod() to be half-executed when another call to renderMethod() happens, IF it contains the await keyword. This is because the first call is suspended at the await, allowing the interpreter to call renderMethod() again before the first call completes. But note that even in this case you are only in trouble if you have an await between the if and httpCallMade = true.
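A hypothetical sketch of that failure mode (somethingAsync stands in for any awaited call placed between the check and the assignment):
let httpCallMade = false;

async function racyRenderMethod() {
  if (!httpCallMade) {
    await somethingAsync(); // a second caller can enter the if-block here...
    httpCallMade = true;    // ...because the flag is only set after the await
    makeHTTPCall();
  }
}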
You need to differentiate between synchronous and asynchronous, and single- and multi-threaded.
JavaScript is single-threaded so no two lines of the same execution context can run at the same time.
But JavaScript allows asynchronous code execution (async/await), so the code does not need to run in the order it appears in the source; different parts of the code can be executed interleaved (not overlapped). This could be called "running in parallel", though I think that term is misleading.
event-driven design pattern, which means that multiple actions are taken at the same time while executing a program.
There are certain actions that can happen at the same time, like IO, multiprocessing (WebWorkers), but that is (with respect to JavaScript Code execution) not multi-threaded.
Is it guaranteed that renderMethod() would never be executed in parallel by LINE2 and LINE3 at the same time?
It depends on what you define as "in parallel at the same time".
Parts of the logic you describe in renderMethod() will run interleaved (as you do the request asynchronously), so renderMethod(){ if(!httpCallMade) { could be executed multiple times before you get the response (not the Promise) back from makeHTTPCall, but the code lines will never execute at the same time.
My understanding is that as soon as renderMethod() is executed - event loop will not allow anything else to happen in server - which guarantees that it is only executed once at a given time (even if it had a network call without await).
The problem here is that you somehow need to get the data from your async response.
Therefore you either need to mark your function as async and use const res = await makeHTTPCall(), which would allow code interleaving at the point of the await, or use .then(…) with a callback, which will be executed asynchronously at a later point (after you have left the function).
But from the beginning of the function to the first await or the .then, no interleaving can take place.
So your httpCallMade = true would prevent another makeHTTPCall from taking place before the currently running one is finished, under the assumption that you set httpCallMade back to false only when the request is finished (in the .then callback, or after the await).
// I want to ensure that this method is "synchronized" and is not called by flow1 and flow2 simultaneously!
As soon as you get a result in an asynchronous way, you can't go back to synchronous code execution. So you need a guard like httpCallMade to prevent the logic described in renderMethod from running multiple times interleaved.
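Putting that together, a sketch of the guard (assuming, as described above, that the flag is reset once the request finishes):
let httpCallMade = false;

async function renderMethod() {
  if (httpCallMade) return;
  httpCallMade = true; // set synchronously, before any await
  try {
    const res = await makeHTTPCall(); // interleaving can only happen here
    // ... use res ...
  } finally {
    httpCallMade = false; // allow the next call once this one has finished
  }
}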
Your question really comes down to:
Given this code:
var flag = false;

function f() {
  if (!flag) {
    flag = true;
    console.log("hello");
  }
}
and considering that flag is not modified anywhere else, and many different, asynchronous events may call this function f...:
Can "hello" be printed twice?
The answer is no: if this runs on an ECMAScript compliant JS engine, then the call stack must be empty first before the next job is pulled from an event/job queue. Asynchronous tasks/reactions are pushed on an event queue. They don't execute before the currently executing JavaScript has run to completion, i.e. up until the call stack is empty. So they never interrupt running JavaScript code pre-emptively.
This is true even if these asynchronous tasks/events/jobs are scheduled by other threads, lower-level non-JS code,...etc. They all must wait their turn to be consumed by the JS engine. And this will happen one after the other.
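You can see this run-to-completion behavior directly:
// The promise reaction is queued as a job; it cannot interrupt the
// currently running code, so the synchronous logs always come first.
Promise.resolve().then(() => console.log('job from the queue'));
console.log('sync 1');
console.log('sync 2');
// Output: sync 1, sync 2, job from the queue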
For more information, see the ECMAScript specification on "Job". For instance 8.4 Jobs and Host Operations to Enqueue Jobs:
A Job is an abstract closure with no parameters that initiates an ECMAScript computation when no other ECMAScript computation is currently in progress.
[...]
Only one Job may be actively undergoing evaluation at any point in time.
Once evaluation of a Job starts, it must run to completion before evaluation of any other Job starts.
For example, promises generate such jobs -- See 25.6.1.3.2 Promise Resolve Functions:
When a promise resolve function is called with argument resolution, the following steps are taken:
[...]
Perform HostEnqueuePromiseJob(job.[[Job]], job.[[Realm]]).
It sounds like you want to do something like a debounce, where any event will cause makeHttpCall() to execute, but it should only execute once at a time, and should execute again after the current call if another event occurred while it was executing. So like this:
1. A DB call is made, and makeHttpCall() should execute.
2. While makeHttpCall() is executing, you get a redis pub/sub event that should execute makeHttpCall() again, but that is delayed because it is already executing.
3. Still before the first call is done, another DB call is made and requires makeHttpCall() to execute again. But even though you have received two events, you only need it to be called one more time to update something with the most recent information you have.
4. The first call to makeHttpCall() finishes, but since there have been two events, you need to make a call again.
const makeHttpCall = () => new Promise(resolve => {
  // resolve after 2 seconds
  setTimeout(resolve, 2000);
});

// returns a function to call that will call your function
const createDebouncer = (fn) => {
  let eventCounter = 0;
  let inProgress = false;

  const execute = () => {
    if (inProgress) {
      eventCounter++;
      console.log('execute() called, but call is in progress.');
      console.log(`There are now ${eventCounter} events since last call.`);
      return;
    }
    console.log(`Executing... There have been ${eventCounter} events.`);
    eventCounter = 0;
    inProgress = true;
    fn().then(() => {
      console.log('async function call completed!');
      inProgress = false;
      if (eventCounter > 0) {
        // make another call if there are pending events since the last call
        execute();
      }
    });
  };

  return execute;
};

let debouncer = createDebouncer(makeHttpCall);

document.getElementById('buttonDoEvent').addEventListener('click', () => {
  debouncer();
});
<button id="buttonDoEvent">Do Event</button>

How do Promises change the use of functions

I am having trouble finding a use for Promises. Wouldn't the two approaches below work exactly the same way? Since the while loop in loopTest() is synchronous, the logStatement() function wouldn't run until it's complete anyway, so how would the 2nd approach be any different? Wouldn't it be pointless to wait for it to resolve()?
1st approach:
function loopTest() {
  let i = 0;
  while (i < 10000) {
    console.log(i);
    i++;
  }
}

function logStatement() {
  console.log("Logging test");
}

loopTest();
logStatement();
2nd approach:
function loopTest() {
  return new Promise((resolve, reject) => {
    let i = 0;
    while (i < 10000) {
      console.log(i);
      i++;
      if (i === 999) {
        resolve('I AM DONE');
      }
    }
  });
}

function logStatement() {
  console.log("Logging test");
}

loopTest().then(logStatement);
Promises don't make anything asynchronous,¹ so you're right, there's no point to using a promise in the code you've shown.
The purpose of promises is to provide a standard, composable means of observing the result of things that are already asynchronous (like ajax calls).
There are at least three massive benefits to having a standardized way to observe the results of asynchronous operations:
We can have standard semantics for consuming individual promises, rather than every API defining its own signature for callback functions. (Does it signal error with an initial parameter that's null on success, like Node.js? Does it call the callback with an object with a success flag? Or...)
We can have standard ways of composing/combining them, such as Promise.all, Promise.race, Promise.allSettled, etc.
We can have syntax to consume them with our usual control structures, which we have now in the form of async functions and await.
But again, throwing a promise at a synchronous process almost never does anything useful.²
¹ One very small caveat there: the handler functions you attach to a promise are always triggered asynchronously, whether the promise is already settled or not.
² Another small caveat: Sometimes, you have a synchronous result you want to include in a composition operation (Promise.all, etc.) with various asynchronous operations. In that case, wrapping the value in a promise that's instantly fulfilled is useful — and in fact, all the standard promise combinators (Promise.all, etc.) do that for you, as does await.
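A tiny illustration of that second caveat:
// Promise.all treats the plain value 42 as an already-fulfilled promise,
// so a synchronous result can be mixed into an asynchronous composition.
Promise.all([Promise.resolve('async result'), 42])
  .then(([a, b]) => console.log(a, b)); // "async result" 42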
There's no point in what you are doing, because your function body is just a blocking loop.
To get a benefit from Promises, use them with APIs that do something with IO, such as an HTTP request or reading a file from disk.
These APIs all traditionally used callbacks, and are now mostly Promise-based.
Any function that uses a Promise-based function should itself also be Promise-based. This is why you see a lot of promises in modern code: a promise only has to be used at one level in a stack for the entire stack to be asynchronous in nature.
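A small sketch of that propagation (config.json and its port field are illustrative assumptions):
const fs = require('fs');

// Because readConfig returns a promise, every caller becomes
// promise-based as well: "async all the way up the stack".
function readConfig() {
  return fs.promises.readFile('config.json', 'utf8').then(JSON.parse);
}

async function startServer() {
  const config = await readConfig(); // the caller must await, so it is async too
  console.log('starting on port', config.port);
}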
Is this a better example of how Promises are used? It is all I can think of that makes their usefulness apparent to me:
Version 1
function getData() {
  fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then(data => data.json())
    .then(json => console.log(json));
}

function logInfo() {
  console.log("i am a logger");
}

getData();
logInfo();

// "i am a logger"
// {"test": "json"}
Version 2
function getData() {
  return fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then(data => data.json())
    .then(json => console.log(json));
}

function logInfo() {
  console.log("i am a logger");
}

getData().then(logInfo);

// {"test": "json"}
// "i am a logger"
// waits for the API result to be logged, _then_ logInfo runs and makes its log statement
There are definitely benefits to using Promises, but only in certain scenarios where their usage is viable.
Your example could represent what happens when you retrieve data from an external source synchronously: it blocks the thread, preventing further code from executing until the loop terminates (I explain below exactly why that happens). Wrapping it in a promise gives no different output; the thread is still blocked, and when the next message in the queue has to be processed, it gets processed as normal right after the loop ends.
However, an implementation similar to this could achieve a while loop running in a non-blocking manner; just an idea (I don't mean to derail this topic with setInterval's implementation):
let f = () => {
  let tick = Date.now;
  let t = tick();
  let interval = setInterval(() => {
    if (tick() - t >= 3000) {
      console.log("stop");
      clearInterval(interval);
    }
  }, 0);
};

f();
console.log("start");
Basically, the timing is checked/handled in a separate thread in the browser, and the callback is executed each time the specified time runs out (while the interval hasn't been cleared), once the call stack becomes empty (so UI functions aren't affected) and the currently executing function, plus any functions above it in the stack, have finished. I don't know the performance implications of doing something like this, but I feel it should only be used when necessary, since the callback has to be executed very frequently (with a 0 timeout, although that 0 isn't guaranteed anyway).
Why it happens
I mainly want to clarify that while the handler functions are scheduled to be executed asynchronously, every message in the queue has to be processed completely before the next one, and for the duration your while loop executes, no new message can be processed in the event queue. So it would be pointless to involve Promises where the same thing would happen without them.
So basically the answer to:
wouldn't it be pointless in waiting for it to resolve() ?
is yes, it would be pointless in this case.

Node.js / Express.js - Should I Wrap All My Functions Inside A New Promise?

Express documentation Production best practices: performance and reliability says:
Don’t use synchronous functions
Synchronous functions and methods tie up the executing process until they return. A single call to a synchronous function might return in a few microseconds or milliseconds, however in high-traffic websites, these calls add up and reduce the performance of the app. Avoid their use in production.
So my question is, in the context of node/express, if I have a function that accepts some static value and returns a calculated result (what I would normally consider a "synchronous function"), is it best practice to wrap that function inside a new Promise and resolve the result or does this create any significant unnecessary overhead? For example:
Current:
// inside my index.js
var myArgument = 'some long string';
var myResult = myFunction(myArgument);

function myFunction(myArgument) {
  var thisResult;
  // some calculations
  return thisResult;
}
New (and Improved?)
// inside my index.js
(async function() {
  var myArgument = 'some long string';
  var myResult = await myFunction(myArgument);
})(); // note: the async IIFE must actually be invoked

function myFunction(myArgument) {
  return new Promise((resolve, reject) => {
    var thisResult;
    // some calculations
    if (thisResult) {
      resolve(thisResult);
    } else {
      reject(null);
    }
  });
}
Short answer: Nope.
The documentation is talking about not using the synchronous versions of functions, like readFileSync from the nodeJS filesystem module or bcrypt.compareSync, for example. Synchronous calls block the event loop in nodeJS, so nothing happens while you are waiting for the synchronous call to finish; the whole program is on halt while this one method finishes. This is bad in a single-threaded system like nodeJS.
There is no reason to wrap functions that are just simple calculations or array manipulations in callbacks or promises.
It's just saying that if a library/method offers a synchronous version of a method, try to avoid that version.
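For example (a sketch using node's fs module; the file path is arbitrary):
const fs = require('fs');

// Blocking: the event loop does nothing else until the read completes.
const dataSync = fs.readFileSync('/etc/hosts', 'utf8');

// Non-blocking: the event loop keeps serving other requests meanwhile.
fs.promises.readFile('/etc/hosts', 'utf8')
  .then(data => console.log(data.length));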
Check out: https://nodejs.org/en/docs/guides/blocking-vs-non-blocking/
JavaScript execution in Node.js is single threaded, so concurrency refers to the event loop's capacity to execute JavaScript callback functions after completing other work. Any code that is expected to run in a concurrent manner must allow the event loop to continue running as non-JavaScript operations, like I/O, are occurring.
As an example, let's consider a case where each request to a web server takes 50ms to complete and 45ms of that 50ms is database I/O that can be done asynchronously. Choosing non-blocking asynchronous operations frees up that 45ms per request to handle other requests. This is a significant difference in capacity just by choosing to use non-blocking methods instead of blocking methods.
The event loop is different than models in many other languages where additional threads may be created to handle concurrent work.
Regarding additional overhead from wrapping everything in promises: the answer is still no.
You will experience no difference between
function sum(x, y) {
  return x + y;
}

const ans = sum(1, 2);
console.log(ans); // 3

and

function sum(x, y) {
  return Promise.resolve(x + y); // shorthand for your new Promise
}

sum(1, 2).then(ans => {
  console.log(ans); // 3
});

Implementing event synchronization primitives in JavaScript/TypeScript using async/await/Promises

I have a long, complicated asynchronous process in TypeScript/JavaScript spread across many libraries and functions that, when it is finally finished receiving and processing all of its data, calls a function processComplete() to signal that it's finished:
processComplete(); // Let the program know we're done processing
Right now, that function looks something like this:
let complete = false;

function processComplete() {
  complete = true;
}
In order to determine whether the process is complete, other code either uses timeouts or process.nextTick and checks the complete variable over and over again in loops. This is complicated and inefficient.
I'd instead like to let various async functions simply use await to wait and be awoken when the process is complete:
// This code will appear in many different places
await /* something regarding completion */;
console.log("We're done!");
If I were programming in Windows in C, I'd use an event synchronization primitive and the code would look something like this:
Event complete;

void processComplete() {
  SetEvent(complete);
}

// Elsewhere, repeated in many different places
WaitForSingleObject(complete, INFINITE);
console.log("We're done!");
In JavaScript or TypeScript, rather than setting a boolean complete value to true, what exactly could processComplete do to make wake up any number of functions that are waiting using await? In other words, how can I implement an event synchronization primitive using await and async or Promises?
This pattern is quite close to your code:
const processComplete = args => new Promise(resolve => {
  // ...
  // In the middle of a callback for an async function, etc.:
  resolve(); // instead of `complete = true;`
  // ...
});

// elsewhere
await processComplete(args);
console.log("We're done!");
More info: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise.
It really depends on what you mean by "other code" in this scenario. It sounds like you want to use some variation of the delegation pattern or the observer pattern.
A simple approach is to take advantage of the fact that JavaScript allows you to store an array of functions. Your processComplete() method could do something like this:
function processComplete() {
  arrayOfFunctions.forEach(fn => fn());
}
Elsewhere, in your other code, you could create functions for what needs to be done when the process is complete, and add those functions to the arrayOfFunctions.
If you don't want these different parts of code to be so closely connected, you could set up a completely separate part of your code that functions as a notification center. Then, you would have your other code tell the notification center that it wants to be notified when the process is complete, and your processComplete() method would simply tell the notification center that the process is complete.
Another approach is to use promises.
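For instance, a minimal sketch of an event-like primitive built on a single promise (createEvent is an illustrative name, not a standard API): every waiter awaits the same promise, and set() fulfills it, waking all of them at once.
function createEvent() {
  let set;
  const promise = new Promise(resolve => { set = resolve; });
  return { wait: () => promise, set };
}

const complete = createEvent();

// Elsewhere, repeated in many different places:
(async () => {
  await complete.wait();
  console.log("We're done!");
})();

// When processing finishes:
complete.set();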
I have a long, complicated asynchronous process in TypeScript/JavaScript spread across many libraries and functions
Then make sure that every bit of the process that is asynchronous returns a promise for its partial result, so that you can chain onto them and compose them together or await them.
When it is finally finished receiving and processing all of its data, calls a function processComplete() to signal that it's finished
It shouldn't. The function that starts the process should return a promise, and when the process is finished it should fulfill that promise.
If you don't want to properly promisify every bit of the whole process because it's too cumbersome, you can just do
function startProcess(…) {
  … // do whatever you need to do
  return new Promise(resolve => {
    processComplete = resolve;
    // don't forget to reject when the process failed!
  });
}
In JavaScript or TypeScript, rather than setting a boolean complete value to true, what exactly could processComplete do to make wake up any number of functions that are waiting using await?
If they are already awaiting the result of the promise, there is nothing else that needs to be done. (The awaited promise internally has such a flag already). It's really just doing
// somewhere:
var resultPromise = startProcess(…);
// elsewhere:
await resultPromise;
… // the process is completed here
You don't even need to fulfill the promise with a useful result if all you need is to synchronise your tasks, but you really should. (If there's no data they are waiting for, what are they waiting for at all?)
