I am taking my first steps in JavaScript and trying to understand how it works.
I've run into a problem with the execution order of my code.
var Parsed = [[]]
var txtFile = new XMLHttpRequest();
alert("Trying to open file!");
txtFile.open("GET", "http://foo/f2/statistics/nServsDistrito.txt", false);
txtFile.onreadystatechange = function() {
    if (txtFile.readyState === 4) { // Makes sure the document is ready to parse.
        if (txtFile.status === 200) { // Makes sure it's found the file.
            alert("File Open");
            allText = txtFile.responseText;
            Parsed = CSVToArray(allText, ",");
        }
    }
};
txtFile.send(null);
alert("Job Done");
The problem is that "Job Done" appears before "File Open".
But the file contains information needed by the code that follows the "Job Done" alert.
I changed the asynchronous flag of the "GET" request, but that didn't work.
What can I do to make the rest of the code wait until the file has been opened and the information retrieved?
Can I use readyState to stall the code while the file is being opened and parsed?
Thanks for the help.
Update: It now works thanks to all.
XMLHttpRequest is an async operation. It doesn't matter whether your file is readily available or even whether any networking is involved: because it is an async operation, it will always complete after the surrounding sequential/synchronous code has run. That's why you have to declare a callback function (onreadystatechange) that will be called when the request comes back with the file contents.
By the explanation above, your code in this example isn't correct. The alert line will be executed immediately, without waiting for the file contents to be ready. The job will only be done when onreadystatechange has finished executing, so you would have to move the alert to the end of onreadystatechange.
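For illustration, here is a sketch of the question's snippet restructured along those lines (CSVToArray and the code that needs Parsed are assumed to exist elsewhere):

var Parsed = [[]];
var txtFile = new XMLHttpRequest();
txtFile.open("GET", "http://foo/f2/statistics/nServsDistrito.txt", true); // true = asynchronous
txtFile.onreadystatechange = function() {
    if (txtFile.readyState === 4 && txtFile.status === 200) {
        Parsed = CSVToArray(txtFile.responseText, ",");
        alert("Job Done"); // moved here, so it only fires once the file is parsed
        // ...continue here with the code that depends on Parsed
    }
};
txtFile.send(null);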
Another very common way to trigger async operations is setTimeout, which forces its callback function to be executed asynchronously.
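A minimal sketch of that ordering (nothing here is specific to files or networking):

setTimeout(function() {
    alert("I run later, asynchronously");
}, 0);
alert("I run first, even with a 0ms delay");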
Edit: You are indeed forcing the request to be synchronous by setting the third parameter to open to false (https://developer.mozilla.org/en-US/docs/DOM/XMLHttpRequest#open()). There are very few cases in which you want a request like that to be synchronous, though. Consider whether you need it to be synchronous, because you will be blocking your whole application or website until the file has been read.
That's because you are using asynchronous functions. When working with async functions you have to use callbacks.
A callback is a function (e.g. function cback()) that you pass as a parameter to another function (e.g. function async()). cback will then be used by async when necessary.
For example, if you are doing IO operations like reading files or executing SQL queries, the callback can be used to handle the data once retrieved:
asyncOperation("SELECT * FROM stackoverflow.unicorns", function(unicorns) {
for(var i=0; i<unicorns.length; i++) {
alert("Unicorn! "+unicorns[i].name);
}
});
The anonymous function we pass to asyncOperation as the second parameter is the "callback", and it will be executed once the query data is ready. But while that operation is being handled, your script is not blocked; this means that if we add this line after the previous code:
alert("We are not blocked muahahaha");
That alert will be shown before the query is completed and unicorns appear.
So, if you want to do something after the async task finishes, add that code inside the callback:
asyncOperation("SELECT * FROM stackoverflow.unicorns", function(unicorns) {
for(var i=0; i<unicorns.length; i++) {
alert("Unicorn! "+unicorns[i].name);
}
//add here your code, so that it's not executed until the query is ready
});
Note: as #radhakrishna pointed out in a comment, the open() function can also work synchronously if you pass false instead of true as the third argument. This way the code works as you were expecting: line after line, in other words, synchronously.
Callbacks can be used for a lot of things, for example:
function handleData(unicorns) {
    // handle data... check if unicorns are purple
}

function queryError(error) {
    alert("Error: " + error);
}

asyncOperation("SELECT * FROM stackoverflow.unicorns", handleData, queryError);
Here we are using two callbacks: one for handling the data and another for when an error occurs (of course, that depends on how asyncOperation works; each async task has its own callbacks).
Related
I'm listening to events from a third-party library. I get a chance to modify the data that the library is going to append to the UI. This works fine as long as the data modification is synchronous; as soon as I involve Ajax callbacks/promises, it stops working. Here is an example to showcase the problem.
Below is how I'm listening to the event:
d.on('gotResults', function (data) {
    // If I alter data directly it works fine.
    data.title = 'newTitle';
    // Above code alters the text correctly.

    // I want some properties to be grabbed from elsewhere, so I make an Ajax call.
    $.ajax('http://someurl...', {data.id}, function (res) {
        data.someProperty = res.thatProperty;
    });
    // Above code doesn't wait for the ajax call to complete; it just goes away and
    // renders the page without the data change.

    // Yes, I tried promises, but it doesn't help:
    return fetch('http://someurl...').then(function (data) {
        data.someProperty = res.thatProperty;
        return true;
    });
    // Above code also triggers the url and goes away. Doesn't wait for then to complete.
});
I cannot change/alter the third-party library. All I can do is listen to the event and alter that data.
Any better solutions? No, I can't use async/await or generators, because I need this to work in ES5 browsers.
You cannot make a synchronous function wait for an asynchronous response, it's simply not possible by definition. Your options pretty much are:
BAD IDEA: Make a synchronous AJAX request. Again: BAD IDEA. Not only will this block the entire browser, it is also a deprecated practice and should not be used in new code, or indeed ever.
Fetch the asynchronous data first and store it locally, so it's available synchronously when needed (a sketch of this follows the list). That obviously only works if you have an idea what data you'll be needing ahead of time.
Alter the 3rd party library to add support for asynchronous callbacks, or request that of the vendor.
Find some hackaround where you'll probably let the library work with incomplete data first and then update it when the asynchronous data is available. That obviously depends a lot on the specifics of that library and the task being done.
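As a rough sketch of the second option, using the names from the question's pseudocode (cachedProperty and the jQuery call here are illustrative assumptions, not the library's API):

var cachedProperty = null;

// Fetch the data up front, before the library fires its event.
$.get('http://someurl...').done(function (res) {
    cachedProperty = res.thatProperty;
});

d.on('gotResults', function (data) {
    data.title = 'newTitle';
    if (cachedProperty !== null) { // only usable if the fetch has already finished
        data.someProperty = cachedProperty;
    }
    return true;
});

If the event fires before the fetch has completed, you are back to option 4: render with incomplete data first and update later.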
Does the gotResults callback function really need to return anything other than true? If not, then you could just write regular asynchronous code without this library knowing about it. Let me explain by rewriting your pseudocode:
d.on('gotResults', function (data) {
    // If I alter data directly it works fine.
    data.title = 'newTitle';
    // Above code alters the text correctly.

    // I want some properties to be grabbed from elsewhere, so I make an Ajax call.
    $.ajax('http://someurl...', {data.id}, function (res) {
        data.someProperty = res.thatProperty;
        // Above code doesn't wait for the ajax call to complete; it just goes away and
        // renders the page without the data change.
        // EDIT: now it should render properly
        // Yes, I tried promises, but it doesn't help:
        return fetch('http://someurl...');
        // Above code also triggers the url and goes away. Doesn't wait for then to complete.
    }).then(function (data) {
        data.someProperty = res.thatProperty;
        // maybe render again here?
    }).catch(function (err) {
        handleError(err); // handle errors so they don't disappear silently
    });
    return true; // this line runs before any of the above asynchronous code, but do we care?
});
For the past two days I have been working with Chrome's asynchronous storage. It works "fine" if you use a callback function (like below):
chrome.storage.sync.get({"disableautoplay": true}, function(e){
console.log(e.disableautoplay);
});
My problem is that I can't use a function with what I'm doing. I want to just return it, like LocalStorage can. Something like:
var a = chrome.storage.sync.get({"disableautoplay": true});
or
var a = chrome.storage.sync.get({"disableautoplay": true}, function(e){
return e.disableautoplay;
});
I've tried a million combinations, even setting a public variable and setting that:
var a;
window.onload = function(){
    chrome.storage.sync.get({"disableautoplay": true}, function(e){
        a = e.disableautoplay;
    });
}
Nothing works. It all returns undefined unless the code referencing it is inside the function of the get, and that's useless to me. I just want to be able to return a value as a variable.
Is this even possible?
EDIT: This question is not a duplicate, please allow me to explain why:
1: There are no other posts asking this specifically (I spent two days looking first, just in case).
2: My question is still not answered. Yes, Chrome Storage is asynchronous, and yes, it does not return a value. That's the problem. I'll elaborate below...
I need to be able to get a stored value outside of the chrome.storage.sync.get function. I cannot use localStorage, as it is URL-specific: the same values cannot be accessed from both the browser_action page of the chrome extension and background.js. I cannot store a value with one script and access it with another; they're treated separately.
So my only solution is to use Chrome Storage. There must be some way to get the value of a stored item and reference it outside the get function. I need to check it in an if statement.
Just like how localStorage can do
if(localStorage.getItem("disableautoplay") == true);
There has to be some way to do something along the lines of
if(chrome.storage.sync.get("disableautoplay") == true);
I realize it's not going to be THAT simple, but that's the best way I can explain it.
Every post I see says to do it this way:
chrome.storage.sync.get({"disableautoplay": true}, function(i){
    console.log(i.disableautoplay);
    // But the info is worthless to me inside this function.
});
// I need it outside this function.
Here's a tailored answer to your question. It will still be 90% long explanation why you can't get around async, but bear with me — it will help you in general. I promise there is something pertinent to chrome.storage in the end.
Before we even begin, I will reiterate canonical links for this:
After calling chrome.tabs.query, the results are not available (Chrome-specific; an excellent answer by RobW, probably the easiest to understand)
Why is my variable unaltered after I modify it inside of a function? - Asynchronous code reference (the general canonical reference on what you're asking for)
How do I return the response from an asynchronous call? (an older but no less respected canonical question on asynchronous JS)
You Don't Know JS: Async & Performance (an ebook on JS asynchronicity)
So, let's discuss JS asynchronicity.
Section 1: What is it?
First concept to cover is runtime environment. JavaScript is, in a way, embedded in another program that controls its execution flow - in this case, Chrome. All events that happen (timers, clicks, etc.) come from the runtime environment. JavaScript code registers handlers for events, which are remembered by the runtime and are called as appropriate.
Second, it's important to understand that JavaScript is single-threaded. There is a single event loop maintained by the runtime environment; if there is some other code executing when an event happens, that event is put into a queue to be processed when the current code terminates.
Take a look at this code:
var clicks = 0;
someCode();
element.addEventListener("click", function(e) {
    console.log("Oh hey, I'm clicked!");
    clicks += 1;
});
someMoreCode();
So, what is happening here? As this code executes, when the execution reaches .addEventListener, the following happens: the runtime environment is notified that when the event happens (element is clicked), it should call the handler function.
It's important to understand (though in this particular case it's fairly obvious) that the function is not run at this point. It will only run later, when that event happens. The execution continues as soon as the runtime acknowledges 'I will run (or "call back", hence the name "callback") this when that happens.' If someMoreCode() tries to access clicks, it will be 0, not 1.
This is what is called asynchronicity, as this is something that will happen outside the current execution flow.
Section 2: Why is it needed, or why synchronous APIs are dying out?
Now, an important consideration. Suppose that someMoreCode() is actually a very long-running piece of code. What will happen if a click event happened while it's still running?
JavaScript has no concept of interrupts. Runtime will see that there is code executing, and will put the event handler call into the queue. The handler will not execute before someMoreCode() finishes completely.
While a click event handler is extreme in the sense that the click is not guaranteed to occur, this explains why you cannot wait for the result of an asynchronous operation. Here's an example that won't work:
element.addEventListener("click", function(e) {
    console.log("Oh hey, I'm clicked!");
    clicks += 1;
});

while(1) {
    if (clicks > 0) {
        console.log("Oh, hey, we clicked indeed!");
        break;
    }
}
You can click to your heart's content, but the code that would increment clicks is patiently waiting for the (non-terminating) loop to terminate. Oops.
Note that this doesn't just freeze this piece of code: every single event is no longer handled while we wait, because there is only one event queue / thread. There is only one way in JavaScript to let other handlers do their job: terminate the current code, and let the runtime know what to call when something we want occurs.
This is why asynchronous treatment is applied to another class of calls that:
require the runtime, and not JS, to do something (disk/network access for example)
are guaranteed to terminate (whether in success or failure)
Let's go with a classic example: AJAX calls. Suppose we want to load a file from a URL.
Let's say that on our current connection, the runtime can request, download, and process the file in the form that can be used in JS in 100ms.
On another connection, that's kinda worse, it would take 500ms.
And sometimes the connection is really bad, so runtime will wait for 1000ms and give up with a timeout.
If we were to wait until this completes, we would have a variable, unpredictable, and relatively long delay. Because of how JS waiting works, all other handlers (e.g. UI) would not do their job for this delay, leading to a frozen page.
Sounds familiar? Yes, that's exactly how synchronous XMLHttpRequest works. Instead of a while(1) loop in JS code, it essentially happens in the runtime code - since JavaScript cannot let other code execute while it's waiting.
Yes, this allows for a familiar form of code:
var file = get("http://example.com/cat_video.mp4");
But at a terrible, terrible cost of everything freezing. A cost so terrible that, in fact, the modern browsers consider this deprecated. Here's a discussion on the topic on MDN.
Now let's look at localStorage. It matches the description of "terminating call to the runtime", and yet it is synchronous. Why?
To put it simply: historical reasons (it's a very old specification).
While it's certainly more predictable than a network request, localStorage still needs the following chain:
JS code <-> Runtime <-> Storage DB <-> Cache <-> File storage on disk
It's a complex chain of events, and the whole JS engine needs to be paused for it. This leads to what is considered unacceptable performance.
Now, Chrome APIs are, from the ground up, designed for performance. You can still see some synchronous calls in older APIs like chrome.extension, and there are calls that are handled in JS (and therefore make sense as synchronous), but chrome.storage is (relatively) new.
As such, it embraces the paradigm "I acknowledge your call and will be back with results; do something useful meanwhile" whenever there's a delay involved in asking the runtime to do something. There are no synchronous versions of those calls, unlike XMLHttpRequest.
Quoting the docs:
It's [chrome.storage] asynchronous with bulk read and write operations, and therefore faster than the blocking and serial localStorage API.
Section 3: How to embrace asynchronicity?
The classic way to deal with asynchronicity are callback chains.
Suppose you have the following synchronous code:
var result = doSomething();
doSomethingElse(result);
Suppose that, now, doSomething is asynchronous. Then this becomes:
doSomething(function(result) {
doSomethingElse(result);
});
But what if it's even more complex? Say it was:
function doABunchOfThings() {
var intermediate = doSomething();
return doSomethingElse(intermediate);
}
if (doABunchOfThings() == 42) {
andNowForSomethingCompletelyDifferent()
}
Well... In this case you need to move all of this into the callback; return must become a call instead.
function doABunchOfThings(callback) {
    doSomething(function(intermediate) {
        callback(doSomethingElse(intermediate));
    });
}

doABunchOfThings(function(result) {
    if (result == 42) {
        andNowForSomethingCompletelyDifferent();
    }
});
Here you have a chain of callbacks: doABunchOfThings calls doSomething immediately, which terminates, but sometime later calls doSomethingElse, the result of which is fed to if through another callback.
Obviously, the layering of this can get messy. Well, nobody said that JavaScript is a good language.. Welcome to Callback Hell.
There are tools to make it more manageable, for example Promises and async/await. I will not discuss them here (running out of space), but they do not change the fundamental "this code will only run later" part.
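For completeness, a sketch of the same chain written with Promises (assuming doSomething is changed to return a Promise; everything else keeps its name from above):

function doABunchOfThings() {
    return doSomething().then(function(intermediate) {
        return doSomethingElse(intermediate);
    });
}

doABunchOfThings().then(function(result) {
    if (result == 42) {
        andNowForSomethingCompletelyDifferent();
    }
});

The nesting is flatter, but the andNowForSomethingCompletelyDifferent call still only runs later, after the asynchronous work completes.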
Section TL;DR: I absolutely must have the storage synchronous, halp!
Sometimes there are legitimate reasons to have a synchronous storage. For instance, webRequest API blocking calls can't wait. Or Callback Hell is going to cost you dearly.
What you can do is have a synchronous cache of the asynchronous chrome.storage. It comes with some costs, but it's not impossible.
Consider:
var storageCache = {};
chrome.storage.sync.get(null, function(data) {
storageCache = data;
// Now you have a synchronous snapshot!
});
// Not HERE, though, not until "inner" code runs
If you can put ALL your initialization code in one function init(), then you have this:
var storageCache = {};
chrome.storage.sync.get(null, function(data) {
storageCache = data;
init(); // All your code is contained here, or executes later than this
});
By the time code in init() executes, and afterwards when any event that was assigned handlers in init() happens, storageCache will be populated. You have reduced the asynchronicity to ONE callback.
Of course, this is only a snapshot of what storage looked like at the time get() executed. If you want to maintain coherency with storage, you need to set up updates to storageCache via chrome.storage.onChanged events. Because of the single-event-loop nature of JS, this means the cache will only be updated while your code isn't running, but in many cases that's acceptable.
Similarly, if you want to propagate changes to storageCache to the real storage, just setting storageCache['key'] is not enough. You would need to write a set(key, value) shim that BOTH writes to storageCache and schedules an (asynchronous) chrome.storage.sync.set.
Implementing those is left as an exercise.
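For illustration only, here is a minimal sketch of such a cache (the set helper is a hypothetical name, not part of the chrome.storage API):

var storageCache = {};

chrome.storage.sync.get(null, function(data) {
    storageCache = data;
    init(); // all code that reads the cache starts here
});

// Keep the cache coherent with changes made elsewhere (other pages, other devices).
chrome.storage.onChanged.addListener(function(changes, areaName) {
    if (areaName === "sync") {
        for (var key in changes) {
            storageCache[key] = changes[key].newValue;
        }
    }
});

// Hypothetical write-through helper: update the cache AND schedule the real write.
function set(key, value) {
    storageCache[key] = value;
    var item = {};
    item[key] = value;
    chrome.storage.sync.set(item);
}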
Make the main function "async" and make a "Promise" in it :)
async function mainFunction() {
    var p = new Promise(function(resolve, reject){
        chrome.storage.sync.get({"disableautoplay": true}, function(options){
            resolve(options.disableautoplay);
        });
    });
    const configOut = await p;
    console.log(configOut);
}
Yes, you can achieve that using promise:
let getFromStorage = keys => new Promise((resolve, reject) =>
    chrome.storage.sync.get(keys, result => resolve(result)));
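It could then be used like this (a sketch; the key name comes from the question):

getFromStorage(["disableautoplay"]).then(result => {
    console.log(result.disableautoplay);
});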
chrome.storage.sync.get has no returned values, which explains why you would get undefined when calling something like
var a = chrome.storage.sync.get({"disableautoplay": true});
chrome.storage.sync.get is also an asynchronous method, which explains why in the following code a would be undefined unless you access it inside the callback function.
var a;
window.onload = function(){
    chrome.storage.sync.get({"disableautoplay": true}, function(e){
        // #2
        a = e.disableautoplay; // true or false
    });
    // #1
    a; // undefined
}
If you could manage to work this out, you would have created a source of strange bugs. Messages are executed asynchronously, which means that when you send a message the rest of your code can execute before the asynchronous function returns. There is no guarantee about timing, since Chrome is multi-threaded and the get operation may be delayed, e.g. if the disk is busy.
Using your code as an example:
var a;
window.onload = function(){
chrome.storage.sync.get({"disableautoplay": true}, function(e){
a = e.disableautoplay;
});
}
if(a)
console.log("true!");
else
console.log("false! Maybe undefined as well. Strange if you know that a is true, right?");
So it will be better if you use something like this:
chrome.storage.sync.get({"disableautoplay": true}, function(e){
a = e.disableautoplay;
if(a)
console.log("true!");
else
console.log("false! But maybe undefined as well");
});
If you really want a returned value, then use the Web Storage API (localStorage). It stores only string values, so you have to serialize the value before storing it and convert it back after getting it.
//Setting the value
localStorage.setItem('disableautoplay', JSON.stringify(true));
//Getting the value
var a = JSON.parse(localStorage.getItem('disableautoplay'));
var a = await chrome.storage.sync.get({"disableautoplay": true});
This should be inside an async function. For example, if you need to run it at the top level, wrap it:
(async () => {
var a = await chrome.storage.sync.get({"disableautoplay": true});
})();
My code is structured as follows:
IF (something) {
..stuff
..Asynchronous Function Call
}
ELSE (something) {
..stuff
..Asynchronous Function Call
}
..more stuff
Let's say the IF condition is met: the code executes 'stuff', then moves on to the Asynchronous Function Call. Will it simply start the call, leave the IF statement, and execute 'more stuff' in the meantime while the Asynchronous Function Call finishes?
OR
Does it wait for the Asynchronous Function Call to finish executing, and then continue with 'more stuff', as a normal IF statement block would?
In the former case, any advice on how to ensure the Asynchronous Function Call has finished before it exits the IF block?
** Note: I've included 'more stuff' inside both Asynchronous Function Calls to ensure the calls are done before moving on, but I feel this is really bad programming, because if I had 50 ELSE IFs I would have to copy-paste that code 50 times instead of just putting it at the end of the IF statement.
Thank you very much for any help provided!
You can approach this easily and less painfully using JavaScript Promises. Have a look to the following links:
http://davidwalsh.name/write-javascript-promises
https://www.promisejs.org/
The basic idea of JavaScript Promises is to allow asynchronous calls to be chained so that they execute in a certain order. Like this:
$.when(GET_PRODUCTS).then(
IF_SUCCESS DO THIS
ELSE DO THAT
).fail(
SHOW MESSAGE
CLEAN EVERYTHING BECAUSE SOMETHING WRONG HAPPENED
).done(
CLEAN EVERYTHING BECAUSE EVERYTHING WENT OKAY
)
With that, you can write code that is more maintainable. It is not easy to grasp at the beginning, but give it a try; it will save you a lot of headaches!
Does it finish waiting for the Asynchronous Function Call to finish executing,
No, that isn't what "asynchronous" means. The whole point is that it doesn't wait. The function will run and finish at some point in the future; the flow of execution continues to the next line immediately.
In regards to your given code, more stuff happens while the asynchronous function is happening. It doesn't wait for the asynchronous function to return a result.
Based on your tag of "node.js", I'm assuming your question is about asynchronous calls on the server. However, you can compare the behavior to a client-side AJAX call.
Say you have this:
var nav = document.getElementById('nav');
function async(params) {
var xhr = new XMLHttpRequest();
// set up your request
xhr.onreadystatechange = function() {
// some conditions, and then on success:
nav.style.color = 'black';
};
xhr.open('GET', 'resource.php'+params, true);
xhr.send(); // send your request
}
if ( /* condition */ ) {
async( /* some parameter */ );
} else {
nav.style.color = 'red';
}
If you were to run the above code, either way, your #nav element's color will be set to red at first, but if the async request comes back with a successful response, your #nav element's color will be black. This is a very trivial and probably impractical example, but it is one that could be tested pretty easily to confirm that yes, async calls will happen asynchronously.
Like others have said, you can use Promises, async.js, step.js, etc. to control flow. You can also use generators if you use the latest version of Node with --harmony enabled.
I promise to show you the right way. First off, asynchronous if conditions are what you're looking for. Secondly, you want a full example: you'll have to modify the URL of the AJAX request and set up some server code to give you responses. I'll provide a baseline PHP file towards the end.
So effectively: what if my if condition takes too long for the JavaScript parser? JavaScript uses promises for this. I'm not going to go all-out crazy; my goal here is to provide a baseline. Towards the end of the script you'll notice either success or failure levels. This script requires two asynchronous if conditions. Additionally, instead of being cheap/static/fragile, I've kept the script element within the head element where it belongs. Lastly, ensure you change the HTTP query and acknowledge it at the server; there is no need to produce redundant files. Browser compatibility is good, except that there is no support for IE11; however, I've only encountered very specific use cases that require that, so if you're considering using this code for a non-technical audience I would highly recommend reconsidering your initial approach to the given problem.
<head>
<script defer="true" type="application/javascript">
//<![CDATA[
function request(method,url)
{
return new Promise(function(resolve, reject)
{
var req = new XMLHttpRequest();
req.open(method,url);
req.withCredentials = true;
req.onerror = function() {reject(Error('Network error.'));};
req.onload = function() {if (req.status == 200) {resolve(req.response);} else {reject(Error(new Object({'response':req.response,'status':req.statusText})));}};
req.send();
});
}
function aysn_if()
{
request('get','https://www.example.com/test.php?t=1').then(function(response)
{
console.log('Success, level one if!', response);
request('get','https://www.example.com/test.php?t=2').then(function(response)
{
console.log('Success, level two if!', response);
},
function(error)
{
console.error('Failed, second level.', error);
});
},
function(error)
{
console.error('Failed, first level.', error);
});
}
document.addEventListener('DOMContentLoaded', function (e) {aysn_if();},false);
//]]>
</script>
</head>
PHP
<?php
header('Access-Control-Allow-Credentials: true');
header('Access-Control-Allow-Origin: '.((isset($_SERVER['HTTP_ORIGIN'])) ? $_SERVER['HTTP_ORIGIN'] : '*'));
//JUST for testing, don't send this stuff outside of test environments!
ksort($_SERVER);
print_r($_SERVER);
?>
Your first option is what happens.
You don't have to copy/paste N times. Just put "more stuff" into a function, and pass that function to all your asynchronous callbacks. The callbacks can just call the "more stuff" function when they are done with their normal processing.
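A runnable sketch of that shape, with setTimeout standing in for the asynchronous call (all names here are placeholders):

function moreStuff() {
    console.log("more stuff - runs only after the async call in whichever branch finished");
}

function asyncCall(value, callback) {
    // stand-in for any asynchronous operation (AJAX, database query, etc.)
    setTimeout(function() {
        console.log("async call finished with " + value);
        callback();
    }, 100);
}

var condition = true;
if (condition) {
    console.log("stuff in the IF branch");
    asyncCall("A", moreStuff);
} else {
    console.log("other stuff in the ELSE branch");
    asyncCall("B", moreStuff);
}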
Based on breakpoints in the Chrome developer tools, I think I'm dealing with a scope issue that I can't figure out. Is it the way I define the function? The script below is in an included JS file, and I want the timeStamp array to be available to other functions without having to call my loadData function every time.
The timeStamp array appears undefined once execution leaves the for loop, before it even leaves the function.
var timeStamp = []; // Want this array to be global

function loadData(url){
    $.getJSON(url, function(json) {
        for (var i = 0; i < json.length; i++){
            timeStamp.push(json[i].TimeStamp);
        }
        console.log(timeStamp); // returns the values
    });
    console.log(timeStamp); // undefined / empty
}
Thank you for any help.
It looks like the issue is that getJSON is asynchronous. When it executes and finishes and your code continues on, it indicates only the START of the networking operation to retrieve the data. The actual networking operation does not complete until some time later.
When it does complete, the success handler is called (as specified as the second argument to your getJSON() call) and you populate the timeStamp array. ONLY after that success handler has been called is the timeStamp array valid.
As such, you cannot use the timeStamp array in code that immediately follows the getJSON() call (it hasn't been filled in yet). If other code needs the timeStamp array, you should call that code from the success handler or use some other timing mechanism to make sure that the code that uses the timeStamp array doesn't try to use it until AFTER the success handler has been called and the timeStamp array has been populated.
It is possible to make some Ajax calls be synchronous instead of asynchronous, but that is generally a very bad idea because it locks up the browser during the entire networking operation which is very unfriendly to the viewer. It is much better to fix the coding logic to work with asynchronous networking.
A typical design pattern for an ajax call like this is as follows:
function loadData (url){
$.getJSON(url, function(json) {
// this will execute AFTER the ajax networking finishes
var timeStamp = [];
for (var i=0;i<json.length;i++) {
timeStamp.push(json[i].TimeStamp);
}
console.log(timeStamp);
// now call other functions that need timeStamp data
myOtherFunc(timeStamp);
});
// this will execute when the ajax networking has just been started
//
// timeStamp data is NOT valid here because
// the ajax call has not yet completed
// You can only use the ajax data inside the success handler function
// or in any functions that you call from there
}
And here's another person who doesn't understand basic AJAX...
getJSON is asynchronous. Meaning, code keeps running after the function call and before the successful return of the JSON request.
You can "fix" this by forcing the request to be synchronous with an appropriate flag, but that's a really bad idea for many reasons (the least of which is that you're violating the basic idea of AJAX). The best way is to remember how AJAX works and instead put all your code that should be executed when the AJAX returns, in the right place.
I have an Ajax call that currently needs to be synchronous. However, while this Ajax call is executing, the browser interface freezes, until the call returns. In cases of timeout, this can freeze the browser for a significant period of time.
Is there any way to get the browser (any browser) to refresh the user interface, but not execute any Javascript? Ideally it would be some command like window.update(), which would let the user interface thread refresh.
If this would be possible, then I could replace the synchronous AJAX call with something like:
obj = do_async_ajax_call();
while (!obj.hasReturned()) {
window.update();
}
// synchronous call can resume
The reason that I can't use setTimeout, or resume a function in the callback, is that the execution flow cannot be interrupted: (there are far too many state variables that all depend on each other, and the long_function() flow would otherwise have to be resumed somehow):
function long_function() {
// lots of code, reads/writes variable 'a', 'b', ...
if (sync_call_is_true()) {
// lots of code, reads/writes variable 'a', 'b', ...
} else {
// lots of code, reads/writes variable 'a', 'b', ...
}
// lots of code, reads/writes variable 'a', 'b', ...
return calculated_value;
}
You need to replace your synchronous request with an asynchronous request and use a callback. An oversimplified example would be:
do_async_ajax_call(function (data, success)
{
if (success)
{
// continue...
}
});
function do_async_ajax_call(callback)
{
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://mysite.com", true);
xhr.onreadystatechange = function ()
{
if (xhr.readyState == 4 && xhr.status == 200)
callback(xhr.responseXML, true);
else if (xhr.readyState == 4)
callback(null, false);
}
xhr.send();
}
This way you're passing an anonymous function as a parameter to the ajax requesting function. When the ajax is complete, the function that was passed is called with the responseXML passed to it. In the meantime, the browser has been free to do its usual thing until the call completes. From here, the rest of your code continues.
Take the rest of the call and put it in the callback that is called when the result comes back. I seriously doubt that this would be completely impossible for you to do; any logic you need after the call can be duplicated in the callback.
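A sketch of that split, reusing the names from the question's long_function (async_call_is_true is a hypothetical asynchronous counterpart of sync_call_is_true that takes a callback):

function long_function(done) {
    // lots of code, reads/writes variable 'a', 'b', ...
    async_call_is_true(function (result_is_true) {
        if (result_is_true) {
            // lots of code, reads/writes variable 'a', 'b', ...
        } else {
            // lots of code, reads/writes variable 'a', 'b', ...
        }
        // lots of code, reads/writes variable 'a', 'b', ...
        done(calculated_value); // the old "return" becomes a call to the continuation
    });
}

long_function(function (calculated_value) {
    // everything that used to run after long_function() returned goes here
});

Because the inner function is a closure inside long_function, it can still read and write 'a', 'b', and the other interdependent state variables.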
Do an asynchronous AJAX fetch, then use setTimeout and do the processing work in chunks (triggered by the callback).
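A sketch of that chunking idea (processItem is a hypothetical per-item function; items would come from the AJAX success callback):

function processInChunks(items, chunkSize, onDone) {
    var i = 0;
    function nextChunk() {
        var end = Math.min(i + chunkSize, items.length);
        for (; i < end; i++) {
            processItem(items[i]); // hypothetical per-item work
        }
        if (i < items.length) {
            setTimeout(nextChunk, 0); // yield so the browser can repaint between chunks
        } else if (onDone) {
            onDone();
        }
    }
    nextChunk();
}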
JavaScript is single-threaded, so by definition you cannot update the UI while you are in a tight loop. However, starting from Firefox 3.5 there is support for multi-threaded JavaScript in the form of Web Workers. Web Workers can't affect the UI of the page, but they will not block UI updates either. Web Workers are also supported by Chrome and Safari.
The problem is that even if you move your AJAX call into a background thread and wait there for it to complete, users will still be able to press buttons and change values in your UI (and as far as I understand, that's what you are trying to avoid). The only thing I can suggest to prevent users from causing any changes is a spinner that blocks the entire UI and does not allow any interaction with the page until the web call returns.
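A sketch of that approach, reusing the do_async_ajax_call shape from the answer above (the overlay element is a hypothetical full-page blocker defined in your markup):

var spinner = document.getElementById('spinner-overlay');

spinner.style.display = 'block'; // block the UI before starting the call
do_async_ajax_call(function (data, success) {
    spinner.style.display = 'none'; // unblock once the call has finished
    // continue with 'data' here
});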