I am trying to access the rate variable from outside the function in which it is defined. I have declared a global rate variable but it is "undefined" at the point I need to use it. I've minimized the code to keep it as clean as possible.
var db = firebase.firestore();
var docRef = db.collection("pricing").doc("default");

var rate; // Setting up a global variable to store the value

docRef.get().then(function(doc) {
  console.log("Document data:", doc.data().base); // This works
  rate = doc.data().base;
  console.log("Rate:", rate); // This works
});

console.log("Rate:", rate); // rate is "undefined" here, but the rate variable's value
                            // is accessible if typed via the console?
docRef.get() is asynchronous and returns immediately, before the query is complete. You use that promise to attach a callback function using then that gets invoked some time later. Meanwhile, your code will continue to execute. What you are observing is that the console log is running before rate gets defined in the callback.
You will have to adjust things so that your code that depends on rate being defined must only execute after the callback. This typically means that you write the code inside the callback, or using another callback.
It's strongly advisable to learn how JavaScript promises work, as they are critical to writing effective code.
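For illustration, here is a minimal sketch of the first option, moving the dependent code into the callback (same db and docRef as in the question):

docRef.get().then(function(doc) {
  var rate = doc.data().base;
  // Everything that depends on rate goes here (or in functions called from here),
  // because this is the only point where the value is guaranteed to exist.
  console.log("Rate:", rate);
});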
rate is assigned inside the then block, which only executes once the Promise returned from docRef.get() resolves. Because the then block runs asynchronously, rate has not been assigned yet at the moment the final console.log is called.
It's effectively a race condition: rate is only assigned when the promise resolves, which happens after your console.log has already run.
You could set your code up to run synchronously; then, the console.log will not run until after the data have been returned:
Firebase Firestore retrieve data Synchronously/without callbacks
Note that this is generally not recommended, as it can negatively affect performance and UX. Pay close attention, and you may notice a short lock-up of the interface. Regardless, per your OP, since you are a "newbie," this may help you here for learning and testing.
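For completeness, a hedged sketch of what waiting for the value before logging it can look like with async/await (this pauses only the surrounding async function, not the whole thread; it assumes the same docRef as in the question):

async function getRate() {
  const doc = await docRef.get();   // this function resumes once the document has arrived
  const rate = doc.data().base;
  console.log("Rate:", rate);       // rate is defined here
  return rate;
}

getRate();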
Related
This is a predominantly theoretical question as I guess I would not use the fetched data outside of the .then() method.
I was just checking the behaviour of fetch.then() out of curiosity as I like to test and understand the principles of the languages I learn.
As expected
setTimeout(() => {
  console.log("myGlobVar in setTimeOut");
  console.log(myGlobVar);
}, 1000);
returns the fetched object, provided I use a delay long enough in setTimeOut (about 40ms in this case)
But I'm curious why the
console.log("myGlobVar after for loop");
console.log(myGlobVar); //returns undefined
does not work.
My rationale here was that running the loop long enough would give fetch() enough time to fetch the data and assign it to myGlobVar, which doesn't happen no matter how long the loop runs.
Actually, counterintuitively, for extremely high numbers in the loop (1000000000), even the

setTimeout(() => {
  console.log("myGlobVar in setTimeOut");
  console.log(myGlobVar);
}, 1000);

returns undefined.
<script>
  "use strict"

  let myGlobVar;

  function loadPeaks() {
    console.log("loadPeaks");
    fetch('soundofChengdu.json')
      .then(response => {
        if (!response.ok) {
          throw new Error("HTTP error: " + response.status);
        }
        return response.json();
      })
      .then(peaks => {
        // wavesurfer.load('soundofChengdu.mp3', peaks.data);
        myGlobVar = peaks;
      })
      .catch((e) => {
        console.log({errorIs: e});
        // console.error('error', e);
      });
  }

  loadPeaks();

  console.log("myGlobVar without waiting");
  console.log(myGlobVar); // returns undefined

  setTimeout(() => {
    console.log("myGlobVar in setTimeOut");
    console.log(myGlobVar);
  }, 1000); // returns undefined under +-40 ms, works above.

  let b;
  console.log("loop running");
  for (let a = 0; a < 100000000; a++) {
    b = b + a;
  }

  console.log("myGlobVar after for loop");
  console.log(myGlobVar); // returns undefined
</script>
The first part is quite usual: you blocked the event loop synchronously, so no other task (or even microtask) can be executed in between, and your variable is not set.
The second part about setTimeout() firing before the request's Promise gets resolved is a bit more interesting though, what you've discovered here is the task prioritization system.
All tasks don't have the same priority, and browsers are free to execute one kind of task before another.
For instance here, your browser gives more importance to timer tasks than to network tasks. And for the ones wondering "but fetch().then() is a microtask and should have higher priority", the returned Promise is resolved in a "normal" task (see this answer of mine), and that task is subject to the task prioritization system.
The code after loadPeaks() runs synchronously, meaning it will not wait for the asynchronous work that loadPeaks() starts; it will run to completion before the fetch inside loadPeaks() finishes.
This is the same for the for loop. Regardless of the iteration count, the for loop will finish before the request finishes. You can test this by adding a console.log() statement in your second then() call and see when it is printed in relation to the rest of your log statements.
The then() callbacks, on the other hand, do wait for the request to finish. In your case, fetch() runs first and the then() callbacks run sequentially, each waiting for the previous step to finish before running.
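As a quick illustration of that ordering (a sketch only, using the URL from the question), a log placed in the second then() always prints after the log that follows the loop:

fetch('soundofChengdu.json')
  .then(response => response.json())
  .then(peaks => {
    console.log('inside the second then()');   // printed last
  });

console.log('right after fetch()');            // printed first

for (let a = 0, b = 0; a < 100000000; a++) { b += a; }

console.log('after the for loop');             // printed second, still before the then() callback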
My rationale here was that running the loop long enough would give fetch() enough time to fetch the data and assign it to myGlobVar, which doesn't happen no matter how long the loop runs.
JavaScript is actually strictly single-threaded, unlike most other languages you're probably familiar with. Instead of having threads running in parallel, it uses an Event Loop pattern where each operation runs, pushes follow-up operations onto the queue, and then yields back to the loop. I'll quote from the article I just linked to explain how this affects your code (emphasis mine):
"Run-to-Completion"
Each message is processed completely before any other message is processed.
This offers some nice properties when reasoning about your program, including the fact that whenever a function runs, it cannot be preempted and will run entirely before any other code runs (and can modify data the function manipulates). This differs from C, for instance, where if a function runs in a thread, it may be stopped at any point by the runtime system to run some other code in another thread.
This explains why the code after the for loop never has access to that global variable no matter how high the iteration count is: the event loop is still executing your original function, so none of those promise handlers have had any chance to execute.
@Kaiido's answer for your second question is much better than what I was going to write, so I won't bother. I'll leave the above up just in case a more basic explanation helps.
I couldn't understand the execution order of the two snippets below.
async function main(test)
{
  while (1) console.log(test);
}
main("a");
main("b");
This code above logs infinite "a".
async function main(test)
{
  while (1) await console.log(test);
}
main("a");
main("b");
While this code above logs infinite "a" and "b".
I wanted to understand async/await better, but the behaviour above confused me. How are those log calls handled by the event loop?
An async function runs synchronously until the first await, return, or implicit return (the code falling off the end of the function). That's so it can start the asynchronous process that it will later report the completion of (by settling the promise it returns). So your first example logs a forever because that's done by the synchronous part of the first call to main; execution never reaches the second call to main.
Your second example introduces asynchronousness by adding await (it doesn't matter that what await is awaiting isn't a promise [console.log returns undefined]; it gets wrapped in one). That means that the first call to main runs synchronously until after the first console.log, then stops and returns a promise while it waits for the promise wrapped around undefined to settle.¹ That allows the second call to main to go forward, doing the same thing. At that point, the code making the calls to main is complete and any pending promise callbacks can be executed. The first pending promise callback is for the await in the first call to main, so that call does another loop iteration; the second pending promise is for the await in the second call to main, so it does another loop iteration; and so on, and so on, and so on.
You mentioned "concurrency." The two calls to main in your second example do run concurrently but are not multi-threaded; instead, they each alternately get access to the one main thread briefly until their next await. JavaScript works on a model of one thread per "realm" (loosely, per global environment). So you don't have to worry about threading issues like reordered operations or stale caches like you would in a multi-threaded environment (unless you're using shared memory, but you aren't in that code); there's only one thread.
¹ Technically, the promise is already settled in this case before await starts awaiting it (because it's just a promise wrapped around undefined), but when you attach to a promise, it never triggers your code synchronously, even if the promise is already settled.
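A small sketch of that alternation with a bounded loop instead of while(1) (pure illustration; awaiting null behaves like awaiting an already-resolved promise):

async function main(test) {
  for (let i = 0; i < 3; i++) {
    console.log(test, i);
    await null;   // suspend here; the other call (and any other pending jobs) gets a turn
  }
}

main("a");
main("b");
// logs: a 0, b 0, a 1, b 1, a 2, b 2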
Asynchronous functions in JavaScript are usually used to resolve data that is not readily available. Examples include calling a database and scraping web pages for data.
The code provided endlessly loops because there is no end to the said loop (while). In the context of the code provided, while(1) will always remain true, as there is no condition to state otherwise. In practical use, loops are designed to cease when a condition is met.
Here's a basic example of using a while loop sourced from MDN Docs covering break
let n = 0;
while (n < 3) {
  n++;
}
console.log(n);
// expected output: 3
To better understand asynchronous programming within Javascript, I highly recommend the MDN Docs on async functions.
To answer your question regarding console logging's interaction with await: when JavaScript reaches an await, it suspends that async function, works on completing the awaited action, then resumes on the following line. In the context of the code, JavaScript is doing its job; it all circles back to the lack of a break or exit condition for the loop.
JavaScript is single-threaded, and Node.js uses an asynchronous, event-driven design pattern, which means that multiple actions are taken at the same time while executing a program.
With this in mind, here is some pseudocode:
myFunction() // main flow

var httpCallMade = false // a global variable

async myFunction() {
  const someData = await callDB() // LINE 1 network call
  renderMethod()                  // LINE 2 flow1
}

redisPubSubEventHandler() { // a method that is called from a redis subscription, asynchronously, from a background task in the program
  renderMethod()            // LINE 3 flow2
}

renderMethod() {
  if (!httpCallMade) {
    httpCallMade = true        // set a global flag
    const res = makeHTTPCall() // an asynchronous network call; returns a promise
  } // I want to ensure that this block is "synchronized" and is not accessible by flow1 and flow2 simultaneously!
}
myFunction() is called in the main thread, while redisPubSubEventHandler() is called asynchronously from a background task in the program. Both flows end up calling renderMethod(). The idea is to ensure that makeHTTPCall() (inside renderMethod) is only allowed to be called once.
Is it guaranteed that renderMethod() would never be executed in parallel by LINE2 and LINE3 at the same time? My understanding is that as soon as renderMethod() is executed - event loop will not allow anything else to happen in server - which guarantees that it is only executed once at a given time (even if it had a network call without await).
Is this understanding correct?
If not, how do I make synchronize/lock entry to renderMethod?
Javascript is single-threaded. Therefore, unless you are deliberately using threads (eg. worker_threads in node.js) no function in the current thread can be executed by two parallel threads at the same time.
This explains why javascript has no mutex or semaphore capability - because generally it is not needed (note: you can still have race conditions because asynchronous code may be executed in a sequence you did not expect).
There is a general confusion that asynchronous code means parallel code execution (multi-threaded). It can, but most of the time, when a system is labeled asynchronous, non-blocking, or event-oriented instead of multi-threaded, it means the system is single-threaded.
In this case, asynchronous means parallel waiting, not parallel code execution. Code is always executed sequentially; only, because of the ability to wait in parallel, you may not always know the sequence in which the code is executed.
There are parts of javascript that execute in a separate thread. Modern browsers execute each tab and iframe in its own thread (but each tab or iframe are themselves single-threaded). But script cannot cross tabs, windows or iframes. So this is a non-issue. Script may access objects inside iframes but this is done via an API and the script itself cannot execute in the foreign iframe.
Node.js and some browsers also do DNS queries in a separate thread because there is no standardized cross-platform non-blocking API for DNS queries. But this is C code and not your javascript code. Your only interaction with this kind of multi-threading is when you pass a URL to fetch() or XMLHttpRequest().
Node.js also implement file I/O, zip compression and cryptographic functions in separate threads but again this is C code, not your javascript code. All results from these separate threads are returned to you asynchronously via the event loop so by the time your javascript code process the result we are back to executing sequentially in the main thread.
Finally, both Node.js and browsers have worker APIs (web workers for browsers and worker threads for Node.js). However, both of these APIs use message passing to transfer data (in Node, memory can also be shared explicitly, e.g. via SharedArrayBuffer), and they still protect functions from having their variables overwritten by another thread.
In your code, both myFunction() and redisPubSubEventHandler() run in the main thread. It works like this:
myFunction() is called, it returns immediately when it encounters the await.
a bunch of functions are declared and compiled.
we reach the end of your script:
// I want to ensure that this method is "synchronized" and is not called by flow1 and flow2 simultaneously!
}
<----- we reach here
now that we have reached the end of script we enter the event loop...
either the callDB or the redis event completes, our process gets woken up
the event loop figures out which handler to call based on what event happened
either the await returns and calls renderMethod(), or redisPubSubEventHandler() gets executed and calls renderMethod()
In either case both your renderMethod() calls will execute on the main thread. Thus it is impossible for renderMethod() to run in parallel.
It is possible for renderMethod() to be half executed and another call to renderMethod() happens IF it contains the await keyword. This is because the first call is suspended at the await allowing the interpreter to call renderMethod() again before the first call completes. But note that even in this case you are only in trouble if you have an await between if.. and httpCallMade = true.
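To make that caveat concrete, a hedged sketch of the risky shape (loadConfig is a hypothetical extra async step; the flag and makeHTTPCall come from the question's pseudocode):

async function renderMethod() {
  if (!httpCallMade) {
    const config = await loadConfig();   // hypothetical await between the check and the flag:
                                         // another caller can run here, see httpCallMade === false,
                                         // and pass the check as well
    httpCallMade = true;
    makeHTTPCall();
  }
}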
You need to differentiate between synchronous and asynchronous, and single- and multi-threaded.
JavaScript is single-threaded so no two lines of the same execution context can run at the same time.
But JavaScript allows asynchronous code execution (async/await), so the code in an execution context does not need to run in the order it appears; different parts of the code can be executed interleaved (not overlapped). This could be called "running in parallel", although I think that term is misleading.
event-driven design pattern, which means that multiple actions are taken at the same time while executing a program.
There are certain actions that can happen at the same time, like IO, multiprocessing (WebWorkers), but that is (with respect to JavaScript Code execution) not multi-threaded.
Is it guaranteed that renderMethod() would never be executed in parallel by LINE2 and LINE3 at the same time?
That depends on what you define as "parallel at the same time".
Parts of the logic you describe in renderMethod() will run interleaved (since you make the request asynchronously), so renderMethod(){ if(!httpCallMade) { could be executed multiple times before you get the response (not the Promise) back from makeHTTPCall, but the code lines will never execute at the same time.
My understanding is that as soon as renderMethod() is executed - event loop will not allow anything else to happen in server - which guarantees that it is only executed once at a given time (even if it had a network call without await).
The problem here is that you somehow need to get the data out of your async response.
Therefore you either need to mark your function as async and use const res = await makeHTTPCall(), which allows code interleaving at the point of the await, or use .then(…) with a callback, which will be executed asynchronously at a later point (after you have left the function).
But from the beginning of the function up to the first await or the .then(…), no interleaving can take place.
So your httpCallMade = true would prevent another makeHTTPCall from taking place before the currently running one has finished, under the assumption that you reset httpCallMade to false only when the request is finished (in the .then callback, or after the await).
// I want to ensure that this method is "synchronized" and is not called by flow1 and flow2 simultaneously!
As soon as you get a result in an asynchronous way, you can't go back to synchronous code execution. So you need a guard like httpCallMade to prevent the logic described in renderMethod from running multiple times, interleaved.
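A minimal sketch of that guard, assuming renderMethod is made async and makeHTTPCall returns a promise (names taken from the question's pseudocode):

let httpCallMade = false;

async function renderMethod() {
  if (httpCallMade) return;   // check...
  httpCallMade = true;        // ...and set the flag with no await in between

  try {
    const res = await makeHTTPCall();
    // use res...
  } finally {
    httpCallMade = false;     // only reset once the request is finished, as described above
  }
}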
Your question really comes down to:
Given this code:
var flag = false;

function f() {
  if (!flag) {
    flag = true;
    console.log("hello");
  }
}
and considering that flag is not modified anywhere else, and many different, asynchronous events may call this function f...:
Can "hello" be printed twice?
The answer is no: if this runs on an ECMAScript compliant JS engine, then the call stack must be empty first before the next job is pulled from an event/job queue. Asynchronous tasks/reactions are pushed on an event queue. They don't execute before the currently executing JavaScript has run to completion, i.e. up until the call stack is empty. So they never interrupt running JavaScript code pre-emptively.
This is true even if these asynchronous tasks/events/jobs are scheduled by other threads, lower-level non-JS code,...etc. They all must wait their turn to be consumed by the JS engine. And this will happen one after the other.
For more information, see the ECMAScript specification on "Job". For instance 8.4 Jobs and Host Operations to Enqueue Jobs:
A Job is an abstract closure with no parameters that initiates an ECMAScript computation when no other ECMAScript computation is currently in progress.
[...]
Only one Job may be actively undergoing evaluation at any point in time.
Once evaluation of a Job starts, it must run to completion before evaluation of any other Job starts.
For example, promises generate such jobs -- See 25.6.1.3.2 Promise Resolve Functions:
When a promise resolve function is called with argument resolution, the following steps are taken:
[...]
Perform HostEnqueuePromiseJob(job.[[Job]], job.[[Realm]]).
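A small sketch of that guarantee in action, with setTimeout standing in for "many different asynchronous events":

var flag = false;

function f() {
  if (!flag) {
    flag = true;
    console.log("hello");
  }
}

setTimeout(f, 0);   // two events scheduled for "the same time"
setTimeout(f, 0);
// "hello" is printed exactly once: each timer callback is a separate job,
// runs to completion, and the second one already sees flag === true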
It sounds like you want to do something like a "debounce", where any event will cause makeHttpCall() to execute, but it should only be executing once at a time, and should execute again after the last call if another event has occurred while it was executing. So like this:
DB Call is made, and makeHttpCall() should execute
While makeHttpCall() is executing, you get a redis pub/sub event that should execute makeHttpCall() again, but that is delayed because it is already executing
Still before the first call is done, another DB call is made and requires makeHttpCall() to execute again. But even though you have received two events, you only need to have it called one time to update something with the most recent information you have.
The first call to makeHttpCall() finishes, but since there have been two events, you need to make a call again.
const makeHttpCall = () => new Promise(resolve => {
  // resolve after 2 seconds
  setTimeout(resolve, 2000);
});

// returns a function to call that will call your function
const createDebouncer = (fn) => {
  let eventCounter = 0;
  let inProgress = false;

  const execute = () => {
    if (inProgress) {
      eventCounter++;
      console.log('execute() called, but call is in progress.');
      console.log(`There are now ${eventCounter} events since last call.`);
      return;
    }
    console.log(`Executing... There have been ${eventCounter} events.`);
    eventCounter = 0;
    inProgress = true;
    fn()
      .then(() => {
        console.log('async function call completed!');
        inProgress = false;
        if (eventCounter > 0) {
          // make another call if there are pending events since the last call
          execute();
        }
      });
  }

  return execute;
}

let debouncer = createDebouncer(makeHttpCall);

document.getElementById('buttonDoEvent').addEventListener('click', () => {
  debouncer();
});
<button id="buttonDoEvent">Do Event</button>
I am using web3 version 1.0.0-beta.27, where all accesses to the blockchain are asynchronous. Clearly this opens up the possibility of race conditions, i.e.:
var Web3 = require("web3");

// connect to the ethereum blockchain
var ether_port = 'http://localhost:8545'
var web3 = new Web3(new Web3.providers.HttpProvider(ether_port));

// this is how we set the value; note there is the possibility of race conditions here
var accounts = []
web3.eth.getAccounts().then(function(accts){
  console.log("printing account: ", accts)
  accounts = accts
})

// observe race condition
console.log("assert race condition: ", accounts[0])
The last line above is contrived; it is there to demonstrate that I would like to use accounts after it has been evaluated. That is, eventually I would like to modify/read the blockchain from a front-end express.js web app or even a mobile app. So, in the interest of being rigorous: what are the common tools in node.js to ensure race conditions never occur? Do these tools exist? If not, what are some common practices? I am new to node.js as well.
One idea is to not attempt to directly store the data because code trying to access the data has no idea when it's valid due to the uncertain nature of asynchronous results. So, instead you store the promise and any code that wants access to the data, just uses .then()/.catch() on the promise. This will always work, regardless of the async timing. If the data is already there, the .then() handler will be called quickly. If the data is not yet there, then the caller will be in line to be notified when the data arrives.
let accountDataPromise = web3.eth.getAccounts().then(function(accts){
  console.log("printing account: ", accts)
  return accts;
});

// then, elsewhere in the code
accountDataPromise.then(accts => {
  // use accts here
}).catch(err => {
  // error getting accts data
});
FYI, assigning data from a .then() handler to a higher scoped variable that you want to generally use in other code outside the promise chain is nearly always a sign of troublesome code - don't do it. This is because other code outside the promise chain has no idea when that data will or will not be valid.
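The same consumer can also be written with async/await; a minimal sketch, assuming it lives inside an async function:

async function printFirstAccount() {
  try {
    const accts = await accountDataPromise;   // waits only if the data isn't there yet
    console.log("first account:", accts[0]);
  } catch (err) {
    console.error("error getting accounts:", err);
  }
}

printFirstAccount();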
In a language with threads and locks, it is easy to implement a lazy load by checking the value of a variable; if it is null, lock the next section of code, check the value again, and then load the resource and assign it. This prevents the resource from being loaded multiple times and causes threads after the first to wait for the first thread to complete the action that's needed.
Pseudocode:
if (myvar == null) {
  lock(obj) {
    if (myvar == null) {
      myvar = getData();
    }
  }
}
return myvar;
JavaScript runs in a single thread; however, it still has this type of issue because of asynchronous execution while one call is waiting on a blocking resource. In this Node.js example:
var allRecords;

module.exports = function getAllRecords(callback) {
  if (allRecords) {
    return callback(null, allRecords);
  }
  db.getRecords({}, function(err, records) {
    if (err) {
      return callback(err);
    }
    // Use the existing object if it has already been
    // set by another async request to this function
    allRecords = allRecords || records;
    return callback(null, allRecords);
  });
}
I'm lazy loading all the records from a small DB table the first time this function is called and then returning the in-memory records on subsequent calls.
Problem: If multiple async requests are made to this function at the same time then the table is going to be loaded unnecessarily from the DB multiple times.
In order to solve this I could simulate a locking mechanism by creating a var lock; variable and setting it to true while the table is loading. I would then put the other async calls into a setTimeout() loop and check back on this variable every (say) 1 second until the data was available and then allow them to return.
The problems with that solution are:
It's fragile: what if the first async call throws and never unsets the lock?
How many times do we loop back into the timer before giving up?
How long should the timer be set for? In some environments 1 second might be way too long and inefficient.
Is there a best practice for solving this in JavaScript?
On the first call to the service, initialize an array. Start the fetch operation. Create a Promise, store it in the array.
On subsequent calls, if the data is there, return an already-fulfilled Promise. If not, add another Promise to the array and return that.
When the data arrives, resolve all the waiting Promise objects in the list. (You can throw away the list once the data's there.)
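A sketch of that scheme against the callback-style db.getRecords from the question; note it changes getAllRecords to return a promise instead of taking a callback, and the waiting list is discarded once the data arrives (treat it as an outline, not a drop-in):

var allRecords;       // the cached data, once it exists
var waiting = null;   // resolvers queued while the first fetch is in flight

module.exports = function getAllRecords() {
  if (allRecords) {
    return Promise.resolve(allRecords);   // already-fulfilled promise
  }
  if (waiting) {
    // a fetch is already running: queue another promise and return it
    return new Promise((resolve, reject) => waiting.push({ resolve, reject }));
  }
  waiting = [];
  const first = new Promise((resolve, reject) => waiting.push({ resolve, reject }));
  db.getRecords({}, function(err, records) {
    const pending = waiting;
    waiting = null;   // throw the list away
    if (err) {
      pending.forEach(p => p.reject(err));
      return;
    }
    allRecords = records;
    pending.forEach(p => p.resolve(records));
  });
  return first;
};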
I really like the promise solution in the other answer -- very clever, very interesting. Promises aren't the dominant methodology, so you may need to educate the team. I'm going to go in another direction though.
What you're after is a memoize function -- an in-memory key/value cache of expensive results. JavaScript: The Good Parts has a memoize sample towards the end. Lodash has a memoize function. These assume synchronous processing, so they don't account for your scenario -- which is to say they'd hit the database lots of times until one of the "threads" replied.
The async library also has a memoize function that does exactly what you want. In its innards, it keeps a queue array of callbacks, and once it gets the answer, it both caches it and calls all the callbacks.
If you're into inventing, by all means, use promises. If you'd just like a plug-n-play answer, use async#memoize.
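A hedged sketch of the async#memoize route, assuming the caolan/async package and a callback-style loader like the one in the question:

const async = require('async');

// the expensive, callback-last function: (callback) => callback(err, records)
function loadAllRecords(callback) {
  db.getRecords({}, callback);
}

const getAllRecords = async.memoize(loadAllRecords);

// concurrent callers: the DB is queried once; callbacks that arrive while the
// first call is in flight are queued and all invoked with the cached result
getAllRecords(function (err, records) { /* ... */ });
getAllRecords(function (err, records) { /* ... */ });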