I have a third-party library whose events I'm listening to. I get a chance to modify the data that the library is going to append to the UI. It all works fine as long as that data modification is synchronous. As soon as I involve Ajax callbacks/promises, this stops working. Let me give an example to showcase the problem.
Below is how I'm listening to an event:
d.on('gotResults', function (data) {
    // If I alter data directly it works fine.
    data.title = 'newTitle';
    // Above code alters the text correctly.

    // I want some properties to be grabbed from elsewhere, so I make an Ajax call.
    $.ajax('http://someurl...', { id: data.id }, function (res) {
        data.someProperty = res.thatProperty;
    });
    // Above code doesn't wait for the Ajax call to complete; it just goes away and
    // renders the page without the data change.

    // Yes, I tried promises, but it doesn't help:
    return fetch('http://someurl...').then(function (res) {
        data.someProperty = res.thatProperty;
        return true;
    });
    // Above code also triggers the URL and goes away. Doesn't wait for then() to complete.
});
I cannot change/alter the third-party library. All I can do is listen to the event and alter that data.
Any better solutions? Note that I can't use async/await or generators, because I want this to work in ES5 browsers.
You cannot make a synchronous function wait for an asynchronous response, it's simply not possible by definition. Your options pretty much are:
BAD IDEA: Make a synchronous AJAX request. Again: BAD IDEA. Not only will this block the entire browser, it is also a deprecated practice and should not be used in new code, or indeed ever.
Fetch the asynchronous data first and store it locally, so it's available synchronously when needed. That obviously only works if you have an idea what data you'll be needing ahead of time (see the sketch after this list).
Alter the 3rd party library to add support for asynchronous callbacks, or request that of the vendor.
Find some hackaround where you'll probably let the library work with incomplete data first and then update it when the asynchronous data is available. That obviously depends a lot on the specifics of that library and the task being done.
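For option 2, here is a minimal sketch of the prefetching idea, assuming you know the URL ahead of time and that the library's event only fires after your prefetch has had a chance to complete (both are assumptions, not guarantees):

var prefetched = null;

// Fetch the extra properties up front, before the library needs them.
$.getJSON('http://someurl...', function (res) {
    prefetched = res;
});

d.on('gotResults', function (data) {
    data.title = 'newTitle';
    if (prefetched) {
        // The data is already here, so this assignment is synchronous.
        data.someProperty = prefetched.thatProperty;
    }
    return true;
});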
Does the gotResults callback function really need to return anything other than true? If not, then you could just write regular asynchronous code without this library knowing about it. Let me explain by rewriting your pseudocode:
d.on('gotResults', function (data) {
    // If you alter data directly it works fine.
    data.title = 'newTitle';
    // Above code alters the text correctly.

    // Grab the extra properties with an Ajax call; the library doesn't need to know.
    $.ajax('http://someurl...', { id: data.id }, function (res) {
        data.someProperty = res.thatProperty;
        // EDIT: now it should render properly; trigger a re-render here if needed.
    });

    // The promise variant works the same way:
    fetch('http://someurl...').then(function (res) {
        return res.json(); // fetch resolves with a Response; parse it first
    }).then(function (json) {
        data.someProperty = json.thatProperty;
        // maybe render again here?
    }).catch(function (err) {
        handleError(err); // handle errors so they don't disappear silently
    });

    return true; // this line runs before any of the above asynchronous code, but do we care?
});
For the past two days I have been working with Chrome's asynchronous storage. It works "fine" if you have a callback function (like below):
chrome.storage.sync.get({"disableautoplay": true}, function(e){
    console.log(e.disableautoplay);
});
My problem is that I can't use a function with what I'm doing. I want to just return it, like LocalStorage can. Something like:
var a = chrome.storage.sync.get({"disableautoplay": true});
or
var a = chrome.storage.sync.get({"disableautoplay": true}, function(e){
    return e.disableautoplay;
});
I've tried a million combinations, even declaring a global variable and setting it inside the callback:
var a;
window.onload = function(){
    chrome.storage.sync.get({"disableautoplay": true}, function(e){
        a = e.disableautoplay;
    });
}
Nothing works. It all returns undefined unless the code referencing it is inside the function of the get, and that's useless to me. I just want to be able to return a value as a variable.
Is this even possible?
EDIT: This question is not a duplicate, please allow me to explain why:
1: There are no other posts asking this specifically (I spent two days looking first, just in case).
2: My question is still not answered. Yes, Chrome Storage is asynchronous, and yes, it does not return a value. That's the problem. I'll elaborate below...
I need to be able to get a stored value outside of the chrome.storage.sync.get function. I -cannot- use localStorage, as it is url specific, and the same values cannot be accessed from both the browser_action page of the chrome extension, and the background.js. I cannot store a value with one script and access it with another. They're treated separately.
So my only solution is to use Chrome Storage. There must be some way to get the value of a stored item and reference it outside the get function. I need to check it in an if statement.
Just like how localStorage can do
if(localStorage.getItem("disableautoplay") == true);
There has to be some way to do something along the lines of
if(chrome.storage.sync.get("disableautoplay") == true);
I realize it's not going to be THAT simple, but that's the best way I can explain it.
Every post I see says to do it this way:
chrome.storage.sync.get({"disableautoplay": true}, function(i){
    console.log(i.disableautoplay);
    // But the info is worthless to me inside this function.
});
// I need it outside this function.
Here's a tailored answer to your question. It will still be 90% a long explanation of why you can't get around async, but bear with me; it will help you in general. I promise there is something pertinent to chrome.storage at the end.
Before we even begin, I will reiterate canonical links for this:
After calling chrome.tabs.query, the results are not available
(Chrome specific, excellent answer by RobW, probably easiest to understand)
Why is my variable unaltered after I modify it inside of a function? - Asynchronous code reference (General canonical reference on what you're asking for)
How do I return the response from an asynchronous call?
(an older but no less respected canonical question on asynchronous JS)
You Don't Know JS: Async & Performance (ebook on JS asynchronicity)
So, let's discuss JS asynchronicity.
Section 1: What is it?
First concept to cover is runtime environment. JavaScript is, in a way, embedded in another program that controls its execution flow - in this case, Chrome. All events that happen (timers, clicks, etc.) come from the runtime environment. JavaScript code registers handlers for events, which are remembered by the runtime and are called as appropriate.
Second, it's important to understand that JavaScript is single-threaded. There is a single event loop maintained by the runtime environment; if there is some other code executing when an event happens, that event is put into a queue to be processed when the current code terminates.
Take a look at this code:
var clicks = 0;
someCode();
element.addEventListener("click", function(e) {
    console.log("Oh hey, I'm clicked!");
    clicks += 1;
});
someMoreCode();
So, what is happening here? As this code executes, when the execution reaches .addEventListener, the following happens: the runtime environment is notified that when the event happens (element is clicked), it should call the handler function.
It's important to understand (though in this particular case it's fairly obvious) that the function is not run at this point. It will only run later, when that event happens. The execution continues as soon as the runtime acknowledges 'I will run (or "call back", hence the name "callback") this when that happens.' If someMoreCode() tries to access clicks, it will be 0, not 1.
This is what's called asynchronicity: something that will happen outside the current execution flow.
Section 2: Why is it needed, or why synchronous APIs are dying out?
Now, an important consideration. Suppose that someMoreCode() is actually a very long-running piece of code. What will happen if a click event happened while it's still running?
JavaScript has no concept of interrupts. Runtime will see that there is code executing, and will put the event handler call into the queue. The handler will not execute before someMoreCode() finishes completely.
While a click event handler is extreme in the sense that the click is not guaranteed to occur, this explains why you cannot wait for the result of an asynchronous operation. Here's an example that won't work:
element.addEventListener("click", function(e) {
    console.log("Oh hey, I'm clicked!");
    clicks += 1;
});

while(1) {
    if(clicks > 0) {
        console.log("Oh, hey, we clicked indeed!");
        break;
    }
}
You can click to your heart's content, but the code that would increment clicks is patiently waiting for the (non-terminating) loop to terminate. Oops.
Note that this piece of code doesn't only freeze this piece of code: every single event is no longer handled while we wait, because there is only one event queue / thread. There is only one way in JavaScript to let other handlers do their job: terminate current code, and let the runtime know what to call when something we want occurs.
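To make that concrete, here is a minimal sketch of the correct shape: put the follow-up work inside the handler and let the current code terminate, instead of busy-waiting.

element.addEventListener("click", function(e) {
    clicks += 1;
    // Do the follow-up work here, when the event has actually happened,
    // instead of looping while waiting for `clicks` to change.
    console.log("Oh, hey, we clicked indeed!");
});
// Current code ends here; the runtime is now free to dispatch events.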
This is why asynchronous treatment is applied to another class of calls that:
require the runtime, and not JS, to do something (disk/network access for example)
are guaranteed to terminate (whether in success or failure)
Let's go with a classic example: AJAX calls. Suppose we want to load a file from a URL.
Let's say that on our current connection, the runtime can request, download, and process the file in the form that can be used in JS in 100ms.
On another connection, that's kinda worse, it would take 500ms.
And sometimes the connection is really bad, so runtime will wait for 1000ms and give up with a timeout.
If we were to wait until this completes, we would have a variable, unpredictable, and relatively long delay. Because of how JS waiting works, all other handlers (e.g. UI) would not do their job for this delay, leading to a frozen page.
Sounds familiar? Yes, that's exactly how synchronous XMLHttpRequest works. Instead of a while(1) loop in JS code, it essentially happens in the runtime code - since JavaScript cannot let other code execute while it's waiting.
Yes, this allows for a familiar form of code:
var file = get("http://example.com/cat_video.mp4");
But at a terrible, terrible cost of everything freezing. A cost so terrible that, in fact, the modern browsers consider this deprecated. Here's a discussion on the topic on MDN.
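For reference, this is roughly what that deprecated synchronous form looks like with XMLHttpRequest (shown only to illustrate the point; don't use it in new code):

var xhr = new XMLHttpRequest();
xhr.open("GET", "http://example.com/cat_video.mp4", false); // third argument false = synchronous
xhr.send(); // the whole page is frozen until this call returns
var file = xhr.response; // available "immediately", at the cost described above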
Now let's look at localStorage. It matches the description of "terminating call to the runtime", and yet it is synchronous. Why?
To put it simply: historical reasons (it's a very old specification).
While it's certainly more predictable than a network request, localStorage still needs the following chain:
JS code <-> Runtime <-> Storage DB <-> Cache <-> File storage on disk
It's a complex chain of events, and the whole JS engine needs to be paused for it. This leads to what is considered unacceptable performance.
Now, Chrome APIs are, from the ground up, designed for performance. You can still see some synchronous calls in older APIs like chrome.extension, and there are calls that are handled in JS (and therefore make sense as synchronous), but chrome.storage is (relatively) new.
As such, it embraces the paradigm "I acknowledge your call and will be back with results; meanwhile, do something useful" whenever there's a delay involved in having the runtime do something. There are no synchronous versions of those calls, unlike XMLHttpRequest.
Quoting the docs:
It's [chrome.storage] asynchronous with bulk read and write operations, and therefore faster than the blocking and serial localStorage API.
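As an illustration of that bulk style (the key names here are just examples):

// Read several keys in one call...
chrome.storage.sync.get(["disableautoplay", "volume", "theme"], function(items) {
    console.log(items.disableautoplay, items.volume, items.theme);
});

// ...and write several values in one call as well.
chrome.storage.sync.set({ volume: 0.5, theme: "dark" });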
Section 3: How to embrace asynchronicity?
The classic way to deal with asynchronicity is callback chains.
Suppose you have the following synchronous code:
var result = doSomething();
doSomethingElse(result);
Suppose that, now, doSomething is asynchronous. Then this becomes:
doSomething(function(result) {
    doSomethingElse(result);
});
But what if it's even more complex? Say it was:
function doABunchOfThings() {
    var intermediate = doSomething();
    return doSomethingElse(intermediate);
}

if (doABunchOfThings() == 42) {
    andNowForSomethingCompletelyDifferent();
}
Well, in this case you need to move all of this into the callback; the return must become a call instead.
function doABunchOfThings(callback) {
    doSomething(function(intermediate) {
        callback(doSomethingElse(intermediate));
    });
}

doABunchOfThings(function(result) {
    if (result == 42) {
        andNowForSomethingCompletelyDifferent();
    }
});
Here you have a chain of callbacks: doABunchOfThings calls doSomething immediately, which terminates, but sometime later calls doSomethingElse, the result of which is fed to if through another callback.
Obviously, the layering of this can get messy. Well, nobody said that JavaScript is a good language.. Welcome to Callback Hell.
There are tools to make it more manageable, for example Promises and async/await. I will not discuss them here (running out of space), but they do not change the fundamental "this code will only run later" part.
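Just to show the shape, here is the same example written with Promises (a sketch that assumes doSomething returns a Promise; the fundamental point stands, the code inside then() still only runs later):

function doABunchOfThings() {
    return doSomething().then(function(intermediate) {
        return doSomethingElse(intermediate);
    });
}

doABunchOfThings().then(function(result) {
    if (result == 42) {
        andNowForSomethingCompletelyDifferent();
    }
});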
Section TL;DR: I absolutely must have the storage synchronous, halp!
Sometimes there are legitimate reasons to have a synchronous storage. For instance, webRequest API blocking calls can't wait. Or Callback Hell is going to cost you dearly.
What you can do is have a synchronous cache of the asynchronous chrome.storage. It comes with some costs, but it's not impossible.
Consider:
var storageCache = {};
chrome.storage.sync.get(null, function(data) {
    storageCache = data;
    // Now you have a synchronous snapshot!
});
// Not HERE, though, not until the "inner" code runs
If you can put ALL your initialization code in one function init(), then you have this:
var storageCache = {};
chrome.storage.sync.get(null, function(data) {
    storageCache = data;
    init(); // All your code is contained here, or executes later than this
});
By the time the code in init() executes, and afterwards whenever any event whose handler was assigned in init() fires, storageCache will be populated. You have reduced the asynchronicity to ONE callback.
Of course, this is only a snapshot of what storage looked like at the time get() executed. If you want to maintain coherency with storage, you need to set up updates to storageCache via chrome.storage.onChanged events. Because of the single-event-loop nature of JS, this means the cache will only be updated while your code isn't running, but in many cases that's acceptable.
Similarly, if you want to propagate changes to storageCache to the real storage, just setting storageCache['key'] is not enough. You would need to write a set(key, value) shim that BOTH writes to storageCache and schedules an (asynchronous) chrome.storage.sync.set.
Implementing those is left as an exercise.
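For the curious, a minimal sketch of what those two pieces could look like (illustrative only; error handling omitted):

// Write-through shim: update the synchronous cache AND schedule the real write.
function set(key, value) {
    storageCache[key] = value;
    var items = {};
    items[key] = value;
    chrome.storage.sync.set(items); // asynchronous; completes later
}

// Keep the cache coherent with changes made elsewhere (other pages, sync from another machine).
chrome.storage.onChanged.addListener(function(changes, areaName) {
    if (areaName !== "sync") return;
    for (var key in changes) {
        storageCache[key] = changes[key].newValue;
    }
});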
Make the main function "async" and make a "Promise" in it :)
async function mainFunction() {
    var p = new Promise(function(resolve, reject) {
        chrome.storage.sync.get({"disableautoplay": true}, function(options) {
            resolve(options.disableautoplay);
        });
    });
    const configOut = await p;
    console.log(configOut);
}
Yes, you can achieve that using a promise:
let getFromStorage = keys => new Promise((resolve, reject) =>
    chrome.storage.sync.get(keys, result => resolve(result)));
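Usage then looks like this (a sketch, using the same key as above):

getFromStorage({"disableautoplay": true}).then(result => {
    console.log(result.disableautoplay);
});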
chrome.storage.sync.get has no return value, which explains why you get undefined when calling something like
var a = chrome.storage.sync.get({"disableautoplay": true});
chrome.storage.sync.get is also an asynchronous method, which explains why in the following code a would be undefined unless you access it inside the callback function.
var a;
window.onload = function(){
    chrome.storage.sync.get({"disableautoplay": true}, function(e){
        // #2
        a = e.disableautoplay; // true or false
    });
    // #1
    a; // undefined
}
If you could manage to make this work, you would have created a source of strange bugs. These calls are executed asynchronously, which means that after you issue the call, the rest of your code can execute before the asynchronous function returns. There is no guarantee about the timing, since Chrome is multi-threaded and the get operation may be delayed, e.g. if the disk is busy.
Using your code as an example:
var a;
window.onload = function(){
    chrome.storage.sync.get({"disableautoplay": true}, function(e){
        a = e.disableautoplay;
    });
}

if(a)
    console.log("true!");
else
    console.log("false! Maybe undefined as well. Strange if you know that a is true, right?");
So it will be better if you use something like this:
chrome.storage.sync.get({"disableautoplay": true}, function(e){
    a = e.disableautoplay;
    if(a)
        console.log("true!");
    else
        console.log("false! But maybe undefined as well");
});
If you really want to return this value, then use the Web Storage API (localStorage). It stores only string values, so you have to serialize the value before storing it and parse it after getting it.
// Setting the value
localStorage.setItem('disableautoplay', JSON.stringify(true));
// Getting the value
var a = JSON.parse(localStorage.getItem('disableautoplay'));
var a = await chrome.storage.sync.get({"disableautoplay": true});
This must be inside an async function. For example, if you need to run it at the top level, wrap it:
(async () => {
    var a = await chrome.storage.sync.get({"disableautoplay": true});
})();
I've been using Selenium (with Python bindings and, mostly, through Protractor) for a rather long time, and every time I needed to execute JavaScript code, I've used the execute_script() method. For example, for scrolling the page (Python):
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
Or, for infinite scrolling inside another element (Protractor):
var div = element(by.css('div.table-scroll'));
var lastRow = element(by.css('table#myid tr:last-of-type'));

browser.executeScript("return arguments[0].offsetTop;", lastRow.getWebElement()).then(function (offset) {
    browser.executeScript('arguments[0].scrollTop = arguments[1];', div.getWebElement(), offset).then(function() {
        // assertions
    });
});
Or, for getting a dictionary of all element attributes (Python):
driver.execute_script("""
    var items = {};
    for (var index = 0; index < arguments[0].attributes.length; ++index) {
        items[arguments[0].attributes[index].name] = arguments[0].attributes[index].value;
    }
    return items;
""", element)
But, WebDriver API also has execute_async_script() which I haven't personally used.
What use cases does it cover? When should I use execute_async_script() instead of the regular execute_script()?
The question is selenium-specific, but language-agnostic.
When should I use execute_async_script() instead of the regular execute_script()?
When it comes to checking conditions on the browser side, all checks you can perform with execute_async_script can be performed with execute_script. Even if what you are checking is asynchronous. I know because once upon a time there was a bug with execute_async_script that made my tests fail if the script returned results too quickly. As far as I can tell, the bug is gone now so I've been using execute_async_script but for months beforehand, I used execute_script for tasks where execute_async_script would have been more natural. For instance, performing a check that requires loading a module with RequireJS to perform the check:
driver.execute_script("""
    // Reset in case it's been used already.
    window.__selenium_test_check = undefined;
    require(["foo"], function (foo) {
        window.__selenium_test_check = foo.computeSomething();
    });
""")
result = driver.wait(lambda driver:
    driver.execute_script("return window.__selenium_test_check;"))
The require call is asynchronous. The problem with this though, besides leaking a variable into the global space, is that it multiplies the network requests. Each execute_script call is a network request. The wait method works by polling: it runs the test until the returned value is true. This means one network request per check that wait performs (in the code above).
When you test locally it is not a big deal. If you have to go through the network because you are having the browsers provisioned by a service like Sauce Labs (which I use, so I'm talking from experience), each network request slows down your test suite. So using execute_async_script not only allows writing a test that looks more natural (call a callback, as we normally do with asynchronous code, rather than leak into the global space) but it also helps the performance of your tests.
result = driver.execute_async_script("""
    var done = arguments[0];
    require(["foo"], function (foo) {
        done(foo.computeSomething());
    });
""")
The way I see it now is that if a test is going to hook into asynchronous code on the browser side to get a result, I use execute_async_script. If it is going to do something for which there is no asynchronous method available, I use execute_script.
Here's the reference to the two APIs (well it's Javadoc, but the functions are the same), and here's an excerpt from it that highlights the difference
[executeAsyncScript] Execute an asynchronous piece of JavaScript in the context of the currently selected frame or window. Unlike executing synchronous JavaScript, scripts executed with this method must explicitly signal they are finished by invoking the provided callback. This callback is always injected into the executed function as the last argument.
Basically, execSync blocks further actions being performed by the Selenium browser, while execAsync does not block; it calls a callback when it's done.
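A minimal sketch of the asynchronous form (WebDriverJS/Protractor style; the setTimeout here is just a stand-in for real asynchronous work):

// The last injected argument is the "done" callback.
browser.executeAsyncScript(
    'var done = arguments[arguments.length - 1];' +
    'setTimeout(function () { done("finished"); }, 1000);'
).then(function (result) {
    console.log(result); // "finished", logged only after the callback fired
});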
Since you've worked with protractor, I'll use that as example.
Protractor uses executeAsyncScript in both get and waitForAngular
In waitForAngular, protractor needs to wait until angular announces that all events settled. You can't use executeScript because that needs to return a value at the end (although I guess you can implement a busy loop that polls angular constantly until it's done). The way it works is that protractor provides a callback, which Angular calls once all events settled, and that requires executeAsyncScript. Code here
In get, protractor needs to poll the page until the global window.angular is set by Angular. One way to do it is driver.wait(function() {driver.executeScript('return window.angular')}, 5000), but that way protractor would pound at the browser every few ms. Instead, we do this (simplified):
functions.testForAngular = function(attempts, callback) {
    var check = function(n) {
        if (window.angular) {
            callback('good');
        } else if (n < 1) {
            callback('timedout');
        } else {
            setTimeout(function() { check(n - 1); }, 1000);
        }
    };
    check(attempts);
};
Again, that requires executeAsyncScript because we don't have a return value immediately. Code here
All in all, use executeAsyncScript when you care about a return value in a calling script, but that return value won't be available immediately. This is especially necessary if you can't poll for the result, but must get the result using a callback or promise (which you must translate to callback yourself).
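For instance, translating a browser-side promise into that callback could look like this (a sketch; somePromiseReturningApi is a placeholder for whatever asynchronous API you are hooking into):

browser.executeAsyncScript(
    'var done = arguments[arguments.length - 1];' +
    // Both success and failure end up invoking the injected callback.
    'somePromiseReturningApi().then(done, function (err) { done({ error: String(err) }); });'
).then(function (result) {
    // result is whatever was passed to done()
});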
I recognise this question probably goes to how Javascript functions operate in general (and not just my particular case), but...
I want to know how my document ready function... umm... functions. In stylised form, it's like the following:
$(function () {
    object1.init();
    object2.init();
    object3.init();
    // etc...
});
Will object2.init() only fire when object1.init() returns? Or will they all fire in an asynchronous kind of way?
UPDATE: If they fire sequentially, is there any way I can get them to go simultaneously (which I possibly don't need to do)?
Thanks.
Will object2.init() only fire when object1.init() returns? Or will they all fire in an asynchronous kind of way?
object2.init() will only fire when object1.init() returns.
All the functions are stored in a queue, and get fired one after the other.
There is no way to make them fire simultaneously.
It's not just jQuery; JavaScript itself doesn't support parallel programming.
You can see the jQuery source code:
(Update: I removed the actual code, because the answer became a mini jQuery source code...)
// Handle when the DOM is ready
ready: function(wait) {
...
// Call all callbacks with the given context and arguments
fireWith: function(context, args) {
...
jQuery.Callbacks = function( flags ) {
...
// The deferred used on DOM ready
readyList,
Check those functions and variables in the jQuery source code
It will run sequentially.
If one of the init()'s does an async call, that will obviously be async, but each function (or any line actually) will run in turn.
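A minimal sketch of what that means in practice (assuming object1.init() kicks off an Ajax request internally):

$(function () {
    object1.init(); // starts an Ajax request, then returns immediately
    object2.init(); // runs right after object1.init() returns,
                    // not after object1's Ajax response has arrived
});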
Based on Chrome developer tools and breakpoints, I think I'm dealing with a scope issue I can't figure out. Is it the way I define the function? The script below is in an included JS file, and I want the timeStamp array to be available in other functions without having to call my loadData function every time.
The timeStamp array shows up as undefined once execution leaves the for loop, before it even leaves the function.
var timeStamp = []; // Want this array to be global

function loadData(url) {
    $.getJSON(url, function(json) {
        for (var i = 0; i < json.length; i++) {
            timeStamp.push(json[i].TimeStamp);
        }
        console.log(timeStamp); // returns the values
    });
    console.log(timeStamp); // undefined / not yet populated
}
Thank you for any help.
It looks like the issue is that getJSON is asynchronous. When it executes and finishes and your code continues on, it indicates only the START of the networking operation to retrieve the data. The actual networking operation does not complete until some time later.
When it does complete, the success handler is called (as specified as the second argument to your getJSON() call) and you populate the timeStamp array. ONLY after that success handler has been called is the timeStamp array valid.
As such, you cannot use the timeStamp array in code that immediately follows the getJSON() call (it hasn't been filled in yet). If other code needs the timeStamp array, you should call that code from the success handler or use some other timing mechanism to make sure that the code that uses the timeStamp array doesn't try to use it until AFTER the success handler has been called and the timeStamp array has been populated.
It is possible to make some Ajax calls be synchronous instead of asynchronous, but that is generally a very bad idea because it locks up the browser during the entire networking operation which is very unfriendly to the viewer. It is much better to fix the coding logic to work with asynchronous networking.
A typical design pattern for an ajax call like this is as follows:
function loadData(url) {
    $.getJSON(url, function(json) {
        // this will execute AFTER the ajax networking finishes
        var timeStamp = [];
        for (var i = 0; i < json.length; i++) {
            timeStamp.push(json[i].TimeStamp);
        }
        console.log(timeStamp);
        // now call other functions that need timeStamp data
        myOtherFunc(timeStamp);
    });
    // this will execute when the ajax networking has just been started
    //
    // timeStamp data is NOT valid here because
    // the ajax call has not yet completed.
    // You can only use the ajax data inside the success handler function
    // or in any functions that you call from there
}
And here's another person who doesn't understand basic AJAX...
getJSON is asynchronous. Meaning, code keeps running after the function call and before the successful return of the JSON request.
You can "fix" this by forcing the request to be synchronous with an appropriate flag, but that's a really bad idea for many reasons (the least of which is that you're violating the basic idea of AJAX). The best way is to remember how AJAX works and instead put all your code that should be executed when the AJAX returns, in the right place.