How to run a second JavaScript function only after the first function fully completes? - javascript

I’m creating a Facebook game using HTML and JavaScript, and I’ve just finished building a leaderboard table which lists every player’s name and rank number. This table is populated with data returned from Facebook's game scores API.
This is working perfectly, but I also want to reward players for improving their rank in the table.
This is how I plan to do this:
When the game loads, I run a function called updateTable();, this
populates the leaderboard with the scores and ranks of the players
received from an API call to Facebook's database.
When the player starts to play the game, I store a copy of their rank inside a separate hidden div.
When the game ends, if the player has achieved a new high score, then
it gets entered into the database. After this happens, I run
updateTable(); again to update the leaderboard.
I then run a function called compareRanks();, this compares the
player’s new rank with the rank that I’ve stored in the hidden div.
If the new ranking is a lower number than the stored rank, then they’ve moved up the leaderboard and I reward them 100 coins for every place they move up.
For example:
Player A starts the game and is ranked 5th (so “5” gets stored in a hidden div).
When Player A finishes the game, the leaderboard is updated, and Player A is now ranked 2nd (so the player has jumped 3 places).
To work out the reward, I subtract the new rank from the stored rank (5 - 2 = 3); Player A overtook 3 other players, so their reward will be 3 x 100 gold coins.
The problem I’m having is that when I run compareRanks();, the new rank keeps showing up as the same number as the stored rank, even though I know that the player has improved their rank.
I’m pretty sure this is because the new rank is grabbed before updateTable(); has finished interacting with the database. I tested this by separating the functions: I made compareRanks(); run on the click of a button, completed a game, improved my rank, waited a few seconds after updateTable(); ran, then clicked the button, and the two ranks showed up differently, which is correct. So I think compareRanks(); just needs to wait for updateTable(); to fully complete before it runs.
This is how my functions are laid out:
function updateTable() {
    //code here interacts with the database/makes a call using Facebook's API,
    //and populates the leaderboard table with the returned data
}
On start of a new game, the player’s current rank is stored in the hidden div.
When the game completes updateTable(); is run again, followed by compareRanks();:
function compareRanks() {
    //code here grabs the stored rank from the hidden div
    //code here grabs the newly updated rank and compares the two
}
I’ve read answers about using callbacks, but I couldn’t get them to work. And I’ve tried doing something like this:
function updateTable() {
    //code here interacts with the database/makes a call using Facebook's API,
    //and populates the leaderboard table with the returned data
    compareRanks();
}
But the new rank is still showing up the same as the old rank when compareRanks(); runs. updateTable(); is changing the ranks correctly on the leaderboard when it runs, so I think compareRanks(); is just running before updateTable(); fully completes.
I’d really appreciate any help in fixing this problem, thank you in advance!

A good way of approaching this would be to use JavaScript Promises. They allow you to do async stuff without nesting multiple callback functions.
function first(parameter){
    return new Promise(function(resolve, reject){
        //Do async stuff, maybe some ajax
        //When the async stuff is finished:
        resolve(async_data);
        //Or when something went wrong:
        reject(error);
    });
}
function second(parameter){
    return new Promise(function(resolve, reject){
        //Do async stuff, maybe some ajax
        //When the async stuff is finished:
        resolve(async_data);
        //Or when something went wrong:
        reject(error);
    });
}
//You can then use:
first(data).then(second).then(function(async_data){
    //Here is the point where both functions have finished, one after the other!
}).catch(function(error){
    //Hey, one of my promises was rejected! Maybe I should handle that error :)
});
This comes with a few advantages. You can put as many functions and operations as you want into that chain of .thens without nesting big amounts of callback functions. You can also handle the reject() call by using .catch(). You should consider reading the docs for Promises, as there are many more features that should be interesting for you.
If you don't want to get involved with Promises (they make your code a lot cleaner because they're composable, so you can create very clear chains of promises), you can look into some of the other answers that work with callbacks (not that bad for such a small use case).
Here is a great article about it: Article: JavaScript Promises
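Applied to the functions from the question, the idea might look roughly like this. This is only a sketch: the timeout, the scores array, and the stored rank of 5 are simulated stand-ins, and the real Facebook scores request would go where noted:

```javascript
// Hypothetical sketch: updateTable returns a Promise that resolves once
// the leaderboard data has arrived and the table has been rebuilt.
function updateTable() {
  return new Promise(function (resolve, reject) {
    // Simulated async call; the real Facebook scores request goes here.
    setTimeout(function () {
      var scores = [{ name: "Player A", rank: 2 }]; // pretend API response
      // ...populate the leaderboard table from scores here...
      resolve(scores);
    }, 50);
  });
}

function compareRanks(scores) {
  var storedRank = 5; // in the real game, read from the hidden div
  var newRank = scores[0].rank;
  return (storedRank - newRank) * 100; // 100 coins per place moved up
}

// compareRanks runs only after updateTable has fully resolved:
updateTable()
  .then(compareRanks)
  .then(function (coins) {
    console.log("Reward: " + coins + " coins");
  })
  .catch(function (error) {
    console.error(error);
  });
```

Because compareRanks is passed to .then() as a function reference (not called immediately), it only fires once the promise from updateTable has resolved with the fresh data.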

Basically a callback is a function passed in as a parameter to another function. JavaScript can do this because functions are first-class objects.
Now, because updateTable will call a db/FB API, you need to invoke the callback within the callback for that operation. I don't know the correct syntax for that operation, so my example uses pseudocode.
function updateTable(callback) {
FBAPI.get(something, function (data) {
// do things
callback();
});
}
updateTable(compareRanks);
Note, if compareRanks needs access to the data from the API you would pass in the data to the callback too:
callback(data);
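Putting those pieces together with a simulated API call (fakeFBGet and the rank values below are stand-ins for the real Facebook request, just to show the flow):

```javascript
// Hypothetical stand-in for the real Facebook API call.
function fakeFBGet(query, done) {
  setTimeout(function () {
    done({ rank: 2 }); // pretend response from the scores database
  }, 10);
}

function updateTable(callback) {
  fakeFBGet("scores", function (data) {
    // ...populate the leaderboard table here...
    callback(data); // invoked only after the API has responded
  });
}

function compareRanks(data) {
  var storedRank = 5; // in the real game, read from the hidden div
  console.log("Moved up " + (storedRank - data.rank) + " places");
}

updateTable(compareRanks); // compareRanks waits for the API round-trip
```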

Better to go with JavaScript's built-in Promise object:
The Promise object is used for deferred and asynchronous computations. A Promise represents an operation that hasn't completed yet, but is expected in the future.
new Promise(executor);
new Promise(function(resolve, reject) { ... });
Check this link for more help-
https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/Promise
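A minimal usage sketch of the constructor form shown above (the timeout just simulates async work):

```javascript
// The executor runs immediately; resolve/reject settle the promise later.
var p = new Promise(function (resolve, reject) {
  setTimeout(function () {
    resolve("done"); // or reject(new Error("failed")) on error
  }, 10);
});

p.then(function (value) {
  console.log(value); // runs only once resolve has been called
});
```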

The comment from Aless80 holds the key to your answer, I think. I don't know what the Facebook API looks like, but whenever you interact with a database or web server like that, it is common for the API call to have a callback in which you can handle anything the server might return to you.
For instance, a simple web server I am working on handles requests sent via AJAX calls from a browser. I am using jQuery to do this, but the structure should be roughly the same.
var dataObject = {};
var dataID = "1234";
$.ajax({
type: "GET", // basic http get request
url: "http://www.mywebserver.com/get-data/" + dataID,
crossDomain: true
}).done(function(response){
// this function runs after the webserver has received our GET request and handled it
// response contains the data we got back from mywebserver
dataObject = response;
});
DoSomething(dataObject);
What happens here is that "DoSomething()" will fire before the dataObject contains any data returned by the database! So if we want to do something with the returned data we should call the function in the "callback" of our ajax request, like so:
var dataObject = {};
var dataID = "1234";
$.ajax({
type: "GET", // basic http get request
url: "http://www.mywebserver.com/get-data/" + dataID,
crossDomain: true
}).done(function(response){
//dataObject = response;
//DoSomething(dataObject);
DoSomething(response);
});
The commented out stuff in this example is for clarity, of course needlessly passing around variables is something we'd like to avoid :)
I also highly recommend looking into JavaScript callbacks. They are tricky to grasp at first, but Node.js is basically built upon the concept, so it is well worth becoming familiar with them.

Related

simultaneous runs of two JS webworkers: one gets stuck

I'm working on a closed system web application to aid companies in their everyday online commerce chores. That means on the one hand that it won't be open to the public, on the other: it will have to deal with large amounts of data while maintaining a fluent work experience.
This is why I turned to web workers in JS to run all sorts of database access and data loading in the background.
My understanding is, that not only the main UI/main JS remains uninterrupted but also the different web workers run without hindering each other.
I now have the following setup:
mainJS: function statusCheck which runs on pageload:
function statusCheck() {
    if (typeof(w__statusCheck) == "undefined") {
        var w__statusCheck = new Worker("...statusCheck.js");
        w__statusCheck.postMessage("go");
        w__statusCheck.onmessage = function(e) {
            var message = JSON.parse(e.data);
            if (message.text != undefined) displayMessage(message.text);
        };
    }
}
statusCheck.js which is the worker simply goes like this:
function checkStatus() {
console.log("statusCheck started");
// I will leave standard parts out:
// creating and testing the ajax variable against different browsers
ajaxRequest.onreadystatechange = function() {
if(ajaxRequest.readyState == 4) {
self.postMessage(ajaxRequest.responseText);
var timer;
timer = self.setTimeout(function(){
checkStatus();
}, 1000);
}
}
ajaxRequest.open("GET", "...worker_statusCheck.php", true);
ajaxRequest.send(null);
}
this.onmessage = function(e){
checkStatus();
};
As you can see, this restarts itself every second (for now). The interval might be longer in production.
worker_statusCheck.php simply gets different things from the database and knits them into a JSON object which gives me the system status.
This works beautifully.
Now I have another worker which is supposed to get initiated by a click on a link to effectively call some php to perform actions:
mainJS loadWorker
function loadWorker(url = "") {
    console.log("loadWorker started");
    if (url != "") {
        var uniqueID = "XXX"; // creating a random ID based on timestamp and Math.random()
        if (typeof(window[uniqueID]) == "undefined") {
            var variables = { ajaxURL: url };
            window[uniqueID] = new Worker("....loadWorker.js");
            window[uniqueID].postMessage(JSON.stringify(variables));
            window[uniqueID].onmessage = function(e) {
                var message = JSON.parse(e.data);
                if (message["success"] != undefined) {
                    variables["close"] = "yes";
                    window[uniqueID].postMessage(JSON.stringify(variables));
                }
            };
        }
    }
}
With every click on a certain link this gets called, creates a uniquely named worker, runs it, receives the data and tells the worker to close().
The php again does its thing and writes a progress update in the DB after each step of the lengthy procedure. These progress updates I fetch from the DB with the above repeating statusCheck.
Now, I can see the entries in the DB with timestamp, so I know they get written each at their time.
So, both workers do their job and run reliably. But I have noticed, that whenever I initiate the manual (randomly named) worker the statusCheck actually stops performing. It just gets stuck... I was able to confirm this with console output from both workers. So it's not the main JS that seems stuck, but the statusCheck actually pauses... and resumes when loadWorker is done.
Am I missing something fundamental here? Any insight would be appreciated since I'm new to this concept of web workers.
Thanx :)
Your question lacks the resources to truly figure out what exactly goes wrong. I can confirm that two web workers can operate at the same time, even with synchronous operations. I tested this with both for-loops and synchronous XHR requests.
There are multiple things I would recommend though.
First - unless you're processing the data with some CPU-heavy algorithm, web workers are a waste of time. XHR requests do not block the main thread (unless you explicitly ask them to).
In statusCheck() you declare var w__statusCheck, which makes it a local variable. Therefore it will always be undefined as seen from the outer scope, and the worker might get garbage-collected once no code is running in it.
Do not use XMLHttpRequest.onreadystatechange. Use onload and onerror.
Random unique IDs for variables are almost always wrong. If you need to store the worker reference at all, either give it a reasonable name (e.g. the URL it's supposed to load) or use an incremental id.
Do NOT stringify data that you post to a web worker. It's already done for you by the browser, possibly in a more optimal manner. Converting the data to a string is the single most common mistake people make with web workers.
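For example, instead of stringifying, the object can be posted directly and read from e.data on the other side. structuredClone below demonstrates the copy that postMessage performs internally on its payload (the ajaxURL value here is just a placeholder):

```javascript
// Instead of:
//   window[uniqueID].postMessage(JSON.stringify(variables));
//   ...and var message = JSON.parse(e.data); in the worker...
// post the object itself:
//   window[uniqueID].postMessage(variables);   // e.data is already an object
//
// structuredClone uses the same algorithm postMessage applies to its payload:
var variables = { ajaxURL: "loadWorker.php", close: "no" };
var received = structuredClone(variables);

console.log(received.ajaxURL);        // the object arrives intact
console.log(received !== variables);  // as a copy, not a shared reference
```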
Also, when posting a question, at least make sure the code makes some sense (in your post the curly braces did not match).
Alright.. I figured it out:
I was looking in all the wrong places. It turns out I had initialized my PHP session in all the PHP scripts called by the workers, and my two parallel workers each called one. So the session file was locked by the first PHP script, and the second had to wait until it was released. It was not the workers or the JS being hindered; it was the PHP.
I have now taken the session initialization out of my statusCheck.php and it works like a charm. I will keep it in the scripts that handle the user input responses, because there it actually makes sense: the user clicks the button "compile data XY", which is run by the worker and takes a while. Impatient as he is, he already clicks the next button "show this data"... and thanks to the locked session file I get a sort of neat queue for those actions. :)
I still will take above recommendations to heart and see to it to improve my code. :)

How to run a function when all the data has loaded?

I process thousands of points asynchronously in the ArcGIS JS API. In the main function, I call functions processing individual features, but I need to finalize the processing once all the features are processed. There should be an event for this, though I didn't find any, and I'm afraid it doesn't even exist - it would be hard to state that the last item processed was the last of all. .ajaxStop() should do this, but I don't use jQuery, just Dojo. The closest thing I found in Dojo was Fetch and its OnComplete, but as far as I know that's about fetching data via AJAX, not from another JS function.
The only workaround idea I have now is to count how many features are to be processed and then fire when the output points array reaches the desired length, but I need to know that number first. How do I get it at load time? Tracking the data back to the point where they are read from the server would mean modifying functions I'm not supposed to even know about, which is not possible.
EDIT - some of my code:
addData: function (data) {
dojo.addOnLoad(
this.allData = data,
this._myFunction()
);
},
Some comments:
data is an array of graphics
when I view data in debugger, its count is 2000, then 3000, then 4000...
without dojo.addOnLoad, the count started near zero, now it's around 2000, but still a fraction of the real number
_myFunction() processes all the 2000...3000...4000... graphics in this._allData, and returns wrong results because it needs them all to work correctly
I need to delay execution of _myFunction() until all data load, perhaps by some other event instead of dojo.addOnLoad.
Workarounds I have already thought of:
a) setTimeout()
This is clearly the wrong option - any magic number of milliseconds to wait would fail me if the data contains too many items, and it would needlessly delay even the case of a single point in the array.
b) length-based delay
I could replace the event with something like this:
if(data.length == allDataCount) {
this._myFunction();
}
setTimeout(this._thisFunction, someDelay);
or some other implementation of the same, through a loop or a counter incremented in asynchronously called functions. The problem is how to make sure the allDataCount variable is definitive and not just the number of features loaded so far.
EDIT2: the pointer to deferreds and promises from @tik27 definitely helped me, but the best thing I found on converting synchronous code to a deferred was this simple example. I probably misunderstood something, because it doesn't work any better than the original synchronous code; this.allData still can't be guaranteed to hold all the data. The loading function now looks like this:
addData: function (data) {
var deferred = new Deferred();
this._addDataSync(data, function (error, result) {
if (error) {
deferred.reject(error);
}
else {
deferred.resolve(result);
}
});
deferred.promise.then(this._myFunction());
},
_addDataSync: function (data, callback) {
callback(this.allData = data);
},
I know most use cases of deferred suppose requesting data from some server. But this is the first time where I can work with data without breaking functions I shouldn't change, so tracking the data back to the request is not an option.
addOnLoad is for waiting for the DOM.
If you are waiting for a function to complete before running another function, deferreds/promises are what's used.
I would need more info on your program to give you more specific answers.
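As a rough sketch of that idea using a standard Promise (dojo/Deferred works analogously: resolve it from the load-complete callback, then chain the finalizing step). The timeout and feature array below are simulated; in the real app the resolve would fire from the layer's update-end or equivalent callback:

```javascript
// Hypothetical sketch: loadFeatures resolves only once every feature is in.
function loadFeatures() {
  return new Promise(function (resolve) {
    // Simulated batched loading; in the real app this would be the
    // layer's update-end (or equivalent) callback firing.
    setTimeout(function () {
      var allData = [10, 20, 30, 40]; // pretend all features have arrived
      resolve(allData);
    }, 10);
  });
}

function finalize(features) {
  // Safe to process here: the full array is guaranteed to be present.
  return features.length;
}

loadFeatures().then(finalize).then(function (count) {
  console.log("processed " + count + " features");
});
```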
I sort of solved my problem by delaying the call of my layer's constructor until the map loads completely and the "onUpdateEnd" event triggers. This is probably how it should properly be done, so I'm posting this as an answer and not as an edit of my question. On the other hand, I have no control over other calls of my class, and I would prefer to have another line of defense against incomplete inputs, or at least a way to tell whether I should complain about incomplete data or not, so I'm keeping the answer unaccepted and the question open for more answers.
This didn't work when I reloaded the page, but then I figured out how to properly chain event listeners together, so I can now combine "onUpdateEnd" with extent change or any other event. That's perfectly enough for my needs.

JavaScript techniques to embed nested callback functions

I'm building an app with Node.js. I'm fairly fluent in front-end JavaScript, where asynchronous events rarely get too complex and don't go that deep. But now that I'm using Node, which is all event driven, making a lot of calls to different servers and databases that all rely on each other becomes rather cluttered.
It seems to be commonplace to have a next() function passed as a parameter that gets called once the first event has finished. This works great, but I'm struggling to keep the code readable when I need next functions after next functions.
Let me explain through example.
Lets say I have a route defined like so:
app.use('/fetchData', function(req, res) {
});
So before we can return the data I need to make a few async calls.
First to the database to retrieve login details.
Then using the login details i need to make another call to an external server to login in and retrieve the raw information.
Then third I need to go back to the database to do some checks.
And then finally return the data to the user.
How would you do that? I'm trying it like this but can't get it right, and it doesn't look readable:
app.use('/fetchData', function(req, res) {
//First I create a user object to pass information around to each function
var user = {...};
var third = database.doSomeChecks;
var second = server.externalCall(user, third);
//first
database.getLoginDetails(user, second);
});
Obviously second actually runs the function and sets second to the returned value, but I can't seem to pass the right information through to second.
One Option i thought could be to pass through an array of callbacks and to always call the last function in the array and remove it.
app.use('/fetchData', function(req, res) {
//First I create a user object to pass information around to each function including the req and res object to finally return information
var user = {...};
var third = database.doSomeChecks;
var second = server.externalCall;
//first
database.getLoginDetails(user, [third, second]);
});
What are your techniques? Is the array idea as pointed out above the best solution?
I'd recommend using promises. As a personal preference I like to use bluebird: it's easy to implement, it has very nice performance, and it has some cool features to play with.
With promises it's easier to read the control-flow execution (at least to me); a lot of people complain about callback hell, and promises are one of the possible solutions.
you can do something like this:
from:
var user = {...};
var third = database.doSomeChecks;
var second = server.externalCall(user, third);
to:
var user = {...};
checkDB(query).then(
function(data){
//data checks
return value
}).then(
function(value){
// value returned from the previous promise
return server.externalCall(value);
});
you can take a look at this answer and see how you can deal with nested promises, which are far easier than callbacks.
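Sketching the full /fetchData flow from the question as one chain. The three functions below are stubs standing in for the real database and server calls (their names come from the question, but their bodies and return values are invented for illustration), each returning a promise:

```javascript
// Hypothetical stubs for the three async steps from the question.
function getLoginDetails(user) {
  return Promise.resolve({ user: user, login: "token" }); // 1. database
}
function externalCall(ctx) {
  return Promise.resolve({ ctx: ctx, raw: "data" });      // 2. external server
}
function doSomeChecks(result) {
  return Promise.resolve(result.raw.toUpperCase());       // 3. database checks
}

function fetchData(user) {
  return getLoginDetails(user)
    .then(externalCall)
    .then(doSomeChecks)
    .catch(function (err) {
      console.error("fetch failed:", err); // one handler for any failed step
      throw err;
    });
}

// 4. finally return the data to the user, e.g. res.send(data):
fetchData({ id: 1 }).then(function (data) {
  console.log(data);
});
```

Each .then receives the value resolved by the previous step, so the user object flows through the whole chain without a callback pyramid.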
I hope that helps.

How to develop node.js run-time strategy?

The Node.js approach is event driven, and I was wondering how you would tackle the problem of when to fire off an event.
Let's say we have some actions on a web application: create some data, serve pages, receive data, etc.
How would you lay out these events? In a threaded system the design is rather "simple". You dedicate threads to specific sets of tasks and you go down the road of thread synchronization. While these tasks are in low demand the threads sit idle and do nothing; when they are needed, they run their code. While this road has issues, it's well documented and kind of solved.
I find it hard to wrap my head around the Node.js event way of doing things.
I have 10 requests coming in, but I haven't created any data so I can't serve anything; creating data is a long action, and another 5 clients want to send data. What now?
I've created the following untested code, which is basically a pile of callbacks that get registered and should be executed. There will be some kind of pile manager that runs and decides which code it wants to execute now. All the callbacks created by a callback can be added "naturally" to the event loop. It should also register itself so the event loop can give control back to it. Other things like static content and whatever else can be bound differently.
How can I register a call back to be the last call in the current event loop state?
Is this a good way to solve this issue?
The most important thing to remember when coming from a threaded environment is that in node you don't wait for an action to finish happening, instead you tell it what to do when it is done. To do this you use a callback, this is a variable which contains a function to execute, or a pointer to a function if you like.
For example:
app.get('/details/:id?', function (req, res) {
    var id = req.params.id,
        publish = function (data) {
            res.send(data);
        };
    service.getDetails(id, publish);
});
You can then invoke the publish method from within your get details method once you have created the required data.
getDetail : function (id, callback) {
var data = makeMyData(id);
callback(data)
}
Which will then publish your data back to the response object. Because of the event loop, Node will continue to serve requests to this URL without interrupting the data generation from the first request.
The answer chosen is the most correct; there is but one minor code change, and that is:
Change this function from this:
getDetail : function (id, callback) {
var data = makeMyData(id);
callback(data)
}
To that:
getDetail : function (id, callback) {
var data = makeMyData(id);
setTimeout(callback, 0, data);
}
Update 2019:
In order to comply with community standard I've broken off an update to a new answer.
I've used setTimeout because I wanted to defer the callback to the back of the event loop. Another option I've used was process.nextTick(), this helped to defer the callback to the end of the current event processed.
For example:
getDetail : function (id, callback) {
    var data = makeMyData(id);
    process.nextTick(() => callback(data));
}

Sequential web service call not working

A little (!) bit of background before I can get to the question:
I am implementing a web-based search solution. Technologies used: JavaScript (jQuery), .NET, HTML, etc.
All my web service calls are done through JavaScript (cross-domain WS calls). I have a few sequential web service calls which all have different success callback functions.
What I can't understand is this: when I call those web services individually in separate places they return the proper results, but called sequentially they sometimes work and sometimes don't.
Sample code (this is not giving the expected results all the time):
function submitSearchRequest(_queryString, Stores) {
    if (Stores[1].length > 0) {
        //generate 'searchRequestForArtifact' request object
        getSearchResponse("successcallForArtifact", _searchRequestForArtifact);
    }
    if (Stores[2].length > 0) {
        //generate 'searchRequestForPerson' request object
        getSearchResponse("successcallForPerson", _searchRequestForPerson);
    }
}
function successcallForArtifact(response) {
    //show the results
}
function successcallForPerson(response) {
    //show the results
}
If you need them to run sequentially, you will need to kick off each search only after the previous one has returned. Currently you are making async calls, meaning each gets kicked off and the code simply continues; if the second call happens to be faster, the order will be off. You will either need to make a synchronous call, or enforce the order by calling the second search from the success function of the artifact search.
If you are using jQuery, which it seems you are, you can set the async parameter to false, which will force the order you want, but it will slow the overall performance of your page. See this question.
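A sketch of the chained approach. getSearchResponse here is a simulated stand-in with random delays, just to show that the result order still holds regardless of which call is faster:

```javascript
// Simulated web service call with an unpredictable delay.
function getSearchResponse(request, onSuccess) {
  setTimeout(function () {
    onSuccess("results for " + request);
  }, Math.random() * 20);
}

var order = [];

function submitSearchRequest() {
  getSearchResponse("artifact", function (response) {
    order.push("artifact"); // show the artifact results first...
    getSearchResponse("person", function (response) {
      order.push("person"); // ...then kick off and show the person search
    });
  });
}

submitSearchRequest(); // order is always artifact, then person
```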
