I'm very new to JavaScript programming and was researching ways of dealing with asynchronous functions. I came across a really helpful resource which lists this as an example:
var fs = require('fs')
var myNumber = undefined
function addOne(callback) {
fs.readFile('number.txt', function doneReading(err, fileContents) {
myNumber = parseInt(fileContents)
myNumber++
callback()
})
}
function logMyNumber() {
console.log(myNumber)
}
addOne(logMyNumber)
However, could you achieve the same result by doing this:
var fs = require('fs')
var myNumber = undefined
function addOne() {
fs.readFile('number.txt', function doneReading(err, fileContents) {
myNumber = parseInt(fileContents)
myNumber++
logMyNumber()
})
}
function logMyNumber() {
console.log(myNumber)
}
addOne()
And if you can, what would be the purpose/advantage of using callbacks?
For those interested, the article came from here: https://github.com/maxogden/art-of-node#callbacks
Whether we use a callback depends on the situation: to make things dynamic, or to make sure that a piece of code runs only after another piece has completed. Your current code already demonstrates callbacks.
Your first example clearly shows how we define callbacks.
In computer programming, a callback is a piece of executable code that is passed as an argument to other code, which is expected to call back (execute) the argument at some convenient time. The invocation may be immediate, as in a synchronous callback, or it might happen at a later time, as in an asynchronous callback.
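To illustrate the difference, here is a minimal sketch contrasting the two kinds of invocation (the values and messages are made up for illustration):
// Synchronous callback: forEach invokes the function immediately, once per element
[1, 2, 3].forEach(function (n) {
  console.log('sync callback got', n)
})

// Asynchronous callback: setTimeout invokes the function later,
// after the currently running code has finished
setTimeout(function () {
  console.log('async callback ran after one second')
}, 1000)

console.log('this line runs before the async callback')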
var fs = require('fs')
var myNumber = undefined
You are using a callback here, which gives you the power to run different methods after number.txt has been read successfully:
function addOne(callback) {
fs.readFile('number.txt', function doneReading(err, fileContents) {
myNumber = parseInt(fileContents)
myNumber++
callback()
})
}
In your second example there is no callback; you are calling logMyNumber() directly. What if we need to run other functions, something like:
function logMyNumber() {
console.log(myNumber)
}
function verifyNumber() {
console.log(myNumber)
}
function somethingElse() {
console.log(myNumber)
}
addOne(logMyNumber)
addOne(somethingElse)
addOne(verifyNumber)
The other important use of callbacks in JavaScript is to handle asynchronous tasks. If you noticed, inside your function you are using fs.readFile('number.txt', callback), which is an asynchronous method. Please have a look at the example below:
console.log('start');
fs.readFile('number.txt', function doneReading(err, fileContents) {
// this section will not run until the file has been read completely
// this happens because of the callback
console.log('Reading complete');
})
console.log('End');
Output:
start
End
Reading complete
I hope this will help you.
It all depends on what you are trying to achieve. In the first example, the function addOne has no concept of what the callback parameter does; it just invokes it.
However, in the second case, the addOne function knows it will invoke logMyNumber, and therefore has a tighter coupling and concept of what exactly is going on.
The first example is often favorable in most cases, e.g. if you are splitting functions across multiple files, and don't want them to be tightly intertwined.
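For instance, a minimal sketch of how the decoupled version lets each caller supply its own behaviour; saveMyNumber is a hypothetical second callback, and fs and myNumber come from the example above:
// addOne never needs to know about these functions;
// each caller decides what should happen once the number is ready
function saveMyNumber() {
  fs.writeFile('result.txt', String(myNumber), function (err) {
    if (err) console.error(err)
  })
}

addOne(logMyNumber)   // one module logs the number
addOne(saveMyNumber)  // another module persists it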
Related
I have a function that stores several values from an HTML form, and it must work on its own so that I can store that info in any situation I need (i.e. before inserting into the DB, or before updating info in the DB...).
I need to be able to tell the system to execute this function ('storeValues'), and then execute any other (could be 'createNewClass', 'updateExistingClass'... whatever).
How can I sequence this? I tried here to store the values first and, WHEN DONE, execute another function alerting about a value, but it says "storeValues() is not defined", even though it is defined:
$('.tableClassHeader').on('click', '.createClass', function(){
storeValues().promise().done(function(){
createNewClass();
});
});
function storeValues(){
cl_year = $('.newClassForm').find('select[name=cl_year]').val();
cl_course = $('.newClassForm').find('select[name=cl_course]').val();
}
function createNewClass(){
alert(cl_year);
}
I mean that the storeValues function SHOULD BE a separate function that can be called from any other place. I know this problem could be solved by executing "createNewClass" from inside "storeValues", but there will be times when I need to execute "updateClass" after "storeValues", not "createNewClass".
You can use a callback like this, if your storeValues is not synchronous (unlike in your example):
$('.tableClassHeader').on('click', '.createClass', function(){
storeValues(createNewClass);
});
function storeValues(callback){
cl_year = $('.newClassForm').find('select[name=cl_year]').val();
cl_course = $('.newClassForm').find('select[name=cl_course]').val();
callback();
}
function createNewClass(){
alert(cl_year);
}
If it is synchronous, just calling createNewClass after storeValues is enough.
What this does is:
offers you the ability to pass a function of your choice to storeValues
inside storeValues, it calls the callback function passed as a parameter
If you need to execute your function with a different scope you can use call or apply.
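For instance, a minimal sketch of passing an explicit scope through to the callback (the extra context argument is made up for illustration):
function storeValues(callback, context){
  cl_year = $('.newClassForm').find('select[name=cl_year]').val();
  cl_course = $('.newClassForm').find('select[name=cl_course]').val();
  // invoke the callback with an explicit `this` value
  callback.call(context || null);
}

// `this` inside createNewClass will now be the clicked element
storeValues(createNewClass, this);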
Another way to do this, without callbacks, would be to use:
http://api.jquery.com/promise/
http://api.jquery.com/jQuery.when/
http://api.jquery.com/deferred.promise/
Example as seen here http://jsfiddle.net/47fXF/1/ :
$('.tableClassHeader').on('click', '.createClass', function(){
$.when(storeValues()).then(createNewClass);
});
function storeValues(){
var dfd = new jQuery.Deferred();
setTimeout(function(){
console.log('storing values');
cl_year = $('.newClassForm').find('select[name=cl_year]').val();
cl_course = $('.newClassForm').find('select[name=cl_course]').val();
dfd.resolve();
}, 1000);
return dfd.promise();
}
function createNewClass(){
alert("trololo");
}
Added the setTimeout to simulate asynchronicity.
If your storeValues is making only one ajax request using jQuery, then you can return it directly as shown in the API documentation.
Also make sure to call resolve(), reject() appropriately.
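For instance, a sketch of resolving or rejecting based on a made-up validation rule, so that failures are reported too:
function storeValues(){
  var dfd = new jQuery.Deferred();
  cl_year = $('.newClassForm').find('select[name=cl_year]').val();
  cl_course = $('.newClassForm').find('select[name=cl_course]').val();
  if (cl_year && cl_course) {
    dfd.resolve();                 // success: .done()/.then() handlers run
  } else {
    dfd.reject('missing field');   // failure: .fail() handlers run instead
  }
  return dfd.promise();
}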
Call it like this. It first calls storeValues and then executes the createNewClass function:
$('.tableClassHeader').on('click', '.createClass', function(){
storeValues(function() {
createNewClass();
});
});
function storeValues(callback){
cl_year = $('.newClassForm').find('select[name=cl_year]').val();
cl_course = $('.newClassForm').find('select[name=cl_course]').val();
callback();
}
I know this has been asked a million times, but I'm really trying to break down async JavaScript functions and callbacks and it's just not clicking. I'm looking at Max Ogden's Art of Node example, which is this:
var fs = require('fs')
var myNumber = undefined
function addOne(callback) {
fs.readFile('number.txt', function doneReading(err, fileContents) {
myNumber = parseInt(fileContents)
myNumber++
callback()
})
}
function logMyNumber() {
console.log(myNumber)
}
addOne(logMyNumber)
Breaking this down, I understand that when addOne is invoked, it first kicks off fs.readFile, which may take some time to complete.
What I don't get is, won't the code continue to callback() and execute logMyNumber (before myNumber has been added to) anyhow? What's stopping callback() from running before it should, which is the whole point? Or does callback() not happen until doneReading has happened? Are we supposed to assume that doneReading will be invoked when fs.readFile is "done"?
Thank you all for your patience in helping me with this very common question:)
"Are we supposed to assume that doneReading will be invoked when fs.readFile is "done"?"
You don't have to assume it; you can be pretty sure of it.
You can use logging to see how and in what order your code gets executed.
var fs = require('fs')
console.log("starting script");
console.log("message 1");
function addOne(callback) {
fs.readFile('number.txt', function doneReading(err, fileContents) {
console.log("finished loading the file");
console.log("message 2");
callback()
})
}
console.log("message 3");
// logMyNumber will be called after the file has been read
function logMyNumber() {
console.log("message 4");
}
console.log("message 5");
addOne(logMyNumber)
console.log("message 6");
//______________
A simpler way to understand the asynchronous behavior is to use the familiar timer:
console.log("message 1");
var num = 2;
function something() {
console.log("message 2");
}
function somethingElse() {
console.log("message 3");
}
console.log("message 4");
setTimeout(something, 1000);
console.log("message 5");
setTimeout(somethingElse, 500);
// the messages are logged in the order 1 - 4 - 5 - 3 - 2, not from top to bottom, and this way it's obvious why
// the same reasoning applies when reading a file
This is the way the code will flow:
Your call to addOne(logMyNumber) will be executed.
addOne will read a file, and once the file has been read it will
execute the code in the "doneReading" function, which in turn will call your callback (which is logMyNumber).
See the second argument for fs.readFile? It's a function called doneReading.
fs.readFile will only execute doneReading when it has finished reading the file. When doneReading gets executed, the last line is to call callback(), which is a reference to the logMyNumber function in this case.
fs.readFile will call the given callback, i.e. doneReading in your code, after the reading has finished. This is how Node.js generally works with callbacks: you supply a callback, which is run once the asynchronous operation finishes.
// callback is a parameter to addOne
// callback is a function, but functions are just objects in javascript
// so the addOne function just knows that it has one parameter, not that
// callback is a function
function addOne(callback) {
// callback is now captured in the closure for the doneReading function
fs.readFile('number.txt', function doneReading(err, fileContents) {
myNumber = parseInt(fileContents)
myNumber++
// callback is executed here
// But we are inside the doneReading function
// which is itself a callback to the fs.readFile function
// therefore, it does not get executed until the file has finished reading
callback()
})
}
// similarly, logMyNumber has not been called, it has just been defined
// as a function (object)...
function logMyNumber() {
console.log(myNumber)
}
// ...and passing logMyNumber to addOne here does not execute it
addOne(logMyNumber)
Does that clear it up?
Just by looking at what I've written now, I can see that one is much smaller, so in terms of code golf Option 2 is the better bet, but as far as which is cleaner goes, I prefer Option 1. I would really love the community's input on this.
Option 1
something_async({
success: function(data) {
console.log(data);
},
error: function(error) {
console.log(error);
}
});
Option 2
something_async(function(error,data){
if(error){
console.log(error);
}else{
console.log(data);
}
});
They are not exactly the same. Option 2 will still log the (data), whereas Option 1 will only log data on success. (Edit: At least it was that way before you changed the code)
That said, Option 1 is more readable. Programming is not / should not be a competition to see who can write the fewest lines that do the most things. The goal should always be to create maintainable, extendable (if necessary) code --- in my humble opinion.
Many people will find option #1 easier to read and to maintain - two different callback functions for two different purposes. It is commonly used by Promise libraries, where two arguments are passed. Of course, the question Multiple arguments vs. options object is independent of that (while the options object is useful in jQuery.ajax, it doesn't make sense for promise.then).
However, option #2 is the Node.js convention (see also NodeGuide) and is used in many libraries influenced by it, for example the famous async.js. This convention is debatable, though; the top Google results I found are WekeRoad: NodeJS Callback Conventions and Stackoverflow: What is the suggested callback style for Node.js libraries?.
The reason for the single callback function with an error argument is that it always reminds the developer to handle errors, which is especially important in server-side applications. Many beginners working with client-side ajax functions forget about error handling, for example, and then wonder why the success callback doesn't get invoked. On the other hand, promises with then-chaining are based on the optionality of error callbacks, propagating them to the next level - of course the error still needs to be caught there.
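A minimal sketch of that error-first convention; readConfig and config.json are made up for illustration:
var fs = require('fs');

function readConfig(path, callback) {
  fs.readFile(path, function (err, contents) {
    if (err) return callback(err);           // report the error as the first argument
    callback(null, JSON.parse(contents));    // a null error signals success
  });
}

readConfig('config.json', function (err, config) {
  if (err) {
    // the signature itself reminds you that this branch exists
    console.error('could not load config:', err);
    return;
  }
  console.log(config);
});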
In all honesty, I prefer to take them one step further, into Promises/Futures/Deferreds/etc...
Or (/and) go into a "custom event" queue, using a moderator (or an observer/pub-sub, if there is a good reason for one particular object to be the source of data).
This isn't a 100%-of-the-time thing. Sometimes you just need a single callback. However, if you have multiple views which need to react to a change (in model data, or to visualize user interaction), then a single callback with a bunch of hard-coded results isn't appropriate.
moderator.listen("my-model:timeline_update", myView.update);
moderator.listen("ui:data_request", myModel.request);
button.onclick = function () { moderator.notify("ui:data_request", button.value); }
Things are now much less dependent upon one big callback and you can mix and match and reuse code.
If you want to hide the moderator, you can make it a part of your objects:
var A = function () {
var sys = null,
notify = function (msg, data) {
if (sys && sys.notify) { sys.notify(msg, data); }
},
listen = function (msg, callback) {
if (sys && sys.listen) { sys.listen(msg, callback); }
},
attach = function (messenger) { sys = messenger; };
return {
attach : attach,
notify : notify,
listen : listen
/* ... */
};
},
B = function () { /* ... */ },
shell = Moderator(),
a = A(),
b = B();
a.attach(shell);
b.attach(shell);
a.listen("do something", a.method.bind(a));
b.notify("do something", b.property);
If this looks a little familiar, it's similar behaviour to, say Backbone.js (except that they extend() the behaviour onto objects, and others will bind, where my example has simplified wrappers to show what's going on).
Promises would be the other big win for usable, maintainable and easy-to-read code (as long as people know what a "promise" is -- basically you pass around an object which holds the callback subscriptions).
// using jQuery's "Deferred"
var ImageLoader = function () {
var cache = {},
public_function = function (url) {
if (cache[url]) { return cache[url].promise(); }
var img = new Image(),
loading = $.Deferred(),
promise = loading.promise();
img.onload = function () { loading.resolve(img); };
img.onerror = function () { loading.reject("error"); };
img.src = url;
cache[url] = loading;
return promise;
};
return public_function;
};
// returns promises
var loadImage = ImageLoader(),
myImg = loadImage("//site.com/img.jpg");
myImg.done( lightbox.showImg );
myImg.done( function (img) { console.log(img.width); } );
Or
var blog_comments = [ /* ... */ ],
comments = BlogComments();
blog_comments.forEach(function (comment) {
var el = makeComment(comment.author, comment.text),
img = loadImage(comment.img);
img.done(el.showAvatar);
comments.add(el);
});
All of the cruft there is to show how powerful promises can be.
Look at the .forEach call there.
I'm using Image loading instead of AJAX, because it might seem a little more obvious in this case:
I can load hundreds of blog comments; if the same user makes multiple posts, the image is cached, and if not, I don't have to wait for images to load or write nested callbacks. Images load in any order, but still appear in the right spots.
This is 100% applicable to AJAX calls, as well.
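For instance, a sketch of the same caching-promise pattern wrapped around $.ajax; the /comments endpoint and post id are made up for illustration:
var CommentLoader = function () {
  var cache = {};
  return function (postId) {
    if (!cache[postId]) {
      // $.ajax already returns a promise-like object, so it can be cached directly
      cache[postId] = $.ajax({ url: '/comments', data: { post: postId } });
    }
    return cache[postId];
  };
};

var loadComments = CommentLoader();
loadComments(42).done(function (comments) { console.log(comments.length); });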
Promises have proven to be the way to go as far as async goes, and libraries like Bluebird embrace node-style callbacks (using the (err, value) signature). So it seems beneficial to use node-style callbacks.
But the examples in the question can easily be converted into either format with the functions below (untested):
function mapToNodeStyleCallback(callback) {
return {
success: function(data) {
return callback(null, data)
},
error: function(error) {
return callback(error)
}
}
}
function alterNodeStyleCallback(propertyFuncs) {
return function () {
var args = Array.prototype.slice.call(arguments)
var err = args.shift()
if (err) return propertyFuncs.error.apply(null, [err])
return propertyFuncs.success.apply(null, args)
}
}
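For example, a small sketch of how these adapters might be used (untested, like the functions above); something_async stands for the question's hypothetical function, in its Option 1 (options-object) and Option 2 (node-style) variants respectively:
// Calling an options-style API (Option 1) when all you have is a node-style callback
something_async(mapToNodeStyleCallback(function (err, data) {
  if (err) return console.log(err)
  console.log(data)
}))

// Calling a node-style API (Option 2) when all you have are success/error handlers
var nodeStyleCallback = alterNodeStyleCallback({
  success: function (data) { console.log(data) },
  error: function (error) { console.log(error) }
})
something_async(nodeStyleCallback)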
The event-driven programming model of node.js makes it somewhat tricky to coordinate the program flow.
Simple sequential execution gets turned into nested callbacks, which is easy enough (though a bit convoluted to write down).
But how about parallel execution? Say you have three tasks A,B,C that can run in parallel and when they are done, you want to send their results to task D.
With a fork/join model this would be
fork A
fork B
fork C
join A,B,C, run D
How do I write that in node.js ? Are there any best practices or cookbooks? Do I have to hand-roll a solution every time, or is there some library with helpers for this?
Nothing is truly parallel in node.js since it is single threaded. However, multiple events can be scheduled and run in a sequence you can't determine beforehand. And some things like database access are actually "parallel" in that the database queries themselves are run in separate threads but are re-integrated into the event stream when completed.
So, how do you schedule a callback on multiple event handlers? Well, this is one common technique used in animations in browser side javascript: use a variable to track the completion.
This sounds like a hack, and it is; and it sounds potentially messy, leaving a bunch of global variables around to do the tracking, and in a lesser language it would be. But in JavaScript we can use closures:
function fork (async_calls, shared_callback) {
var counter = async_calls.length;
var callback = function () {
counter --;
if (counter == 0) {
shared_callback()
}
}
for (var i=0;i<async_calls.length;i++) {
async_calls[i](callback);
}
}
// usage:
fork([A,B,C],D);
In the example above we keep the code simple by assuming the async and callback functions require no arguments. You can of course modify the code to pass arguments to the async functions and have the callback function accumulate results and pass them to the shared_callback function.
Additional answer:
Actually, even as is, that fork() function can already pass arguments to the async functions using a closure:
fork([
function(callback){ A(1,2,callback) },
function(callback){ B(1,callback) },
function(callback){ C(1,2,callback) }
],D);
the only thing left to do is to accumulate the results from A,B,C and pass them on to D.
Even more additional answer:
I couldn't resist. Kept thinking about this during breakfast. Here's an implementation of fork() that accumulates results (usually passed as arguments to the callback function):
function fork (async_calls, shared_callback) {
var counter = async_calls.length;
var all_results = [];
function makeCallback (index) {
return function () {
counter --;
var results = [];
// we use the arguments object here because some callbacks
// in Node pass in multiple arguments as result.
for (var i=0;i<arguments.length;i++) {
results.push(arguments[i]);
}
all_results[index] = results;
if (counter == 0) {
shared_callback(all_results);
}
}
}
for (var i=0;i<async_calls.length;i++) {
async_calls[i](makeCallback(i));
}
}
That was easy enough. This makes fork() fairly general purpose and can be used to synchronize multiple non-homogeneous events.
Example usage in Node.js:
// Read 3 files in parallel and process them together:
function A (c){ fs.readFile('file1',c) };
function B (c){ fs.readFile('file2',c) };
function C (c){ fs.readFile('file3',c) };
function D (result) {
file1data = result[0][1];
file2data = result[1][1];
file3data = result[2][1];
// process the files together here
}
fork([A,B,C],D);
Update
This code was written before the existence of libraries like async.js or the various promise-based libraries. I'd like to believe that async.js was inspired by this, but I don't have any proof of it. Anyway, if you're thinking of doing this today, take a look at async.js or promises. Just consider the answer above a good explanation/illustration of how things like async.parallel work.
For completeness sake the following is how you'd do it with async.parallel:
var async = require('async');
async.parallel([A,B,C],D);
Note that async.parallel works almost exactly the same as the fork function we implemented above. The main difference is that it passes an error as the first argument to D and the results as the second argument, as per the Node.js convention.
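A minimal sketch of what D looks like under that convention, reusing the A, B, C readers from the example above:
function D (err, results) {
  if (err) return console.error(err);  // any single failure short-circuits here
  var file1data = results[0];
  var file2data = results[1];
  var file3data = results[2];
  // process the files together here
}

async.parallel([A, B, C], D);  // same call as above, now with an error-first D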
Using promises, we'd write it as follows:
// Assuming A, B & C return a promise instead of accepting a callback
Promise.all([A(), B(), C()]).then(D);
I believe the "async" module now provides this parallel functionality, and it is roughly the same as the fork function above.
The futures module has a submodule called join that I like to use:
Joins asynchronous calls together similar to how pthread_join works for threads.
The readme shows some good examples of using it freestyle, or of using the future submodule with the Promise pattern. Example from the docs:
var Join = require('join')
, join = Join()
, callbackA = join.add()
, callbackB = join.add()
, callbackC = join.add();
function abcComplete(aArgs, bArgs, cArgs) {
console.log(aArgs[1] + bArgs[1] + cArgs[1]);
}
setTimeout(function () {
callbackA(null, 'Hello');
}, 300);
setTimeout(function () {
callbackB(null, 'World');
}, 500);
setTimeout(function () {
callbackC(null, '!');
}, 400);
// this must be called after all
join.when(abcComplete);
A simple solution might be possible here: http://howtonode.org/control-flow-part-ii (scroll to Parallel actions). Another way would be to have A, B, and C all share the same callback function, give that function a global (or at least out-of-the-function) counter, and once all three have called the callback, let it run D. Of course you will have to store the results of A, B, and C somewhere as well.
Another option could be the Step module for Node: https://github.com/creationix/step
You may want to try this tiny library: https://www.npmjs.com/package/parallel-io
In addition to the popular promises and the async library, there is a third, elegant way: using "wiring":
var l = new Wire();
funcA(l.branch('post'));
funcB(l.branch('comments'));
funcC(l.branch('links'));
l.success(function(results) {
// result will be object with results:
// { post: ..., comments: ..., links: ...}
});
https://github.com/garmoshka-mo/mo-wire
I'm fairly new to the callback-style of programming in javascript.
Is there a way to force code to wait until a function call finishes via a callback?
Let me explain.
The following function takes a number and returns a result based upon it.
function get_expensive_thing(n) {
return fetch_from_disk(n);
}
So far, easy enough.
But what do I do when fetch_from_disk instead returns its result via a callback?
Like so:
function get_expensive_thing(n) {
fetch_from_disk(n, function(answer) {
return answer; // Does not work
});
}
The above doesn't work because the return is in the scope of the anonymous function,
rather than the get_expensive_thing function.
There are two possible "solutions", but both are inadequate.
One is to refactor get_expensive_thing so that it, too, answers via a callback:
function get_expensive_thing(n, callback) {
fetch_from_disk(n, function(answer) {
callback(answer);
});
}
The other is to recode fetch_from_disk, but this is not an option.
How can we achieve the desired result
while keeping the desired behaviour of get_expensive_thing
-- i.e., wait until fetch_from_disk calls the callback, then return that answer?
Pretty much, there's no "waiting" in browser JavaScript. It's all about callbacks. Remember that your callbacks can be "closures", which means definitions of functions that "capture" local variables from the context in which they were created.
You'll be a happier person if you embrace this way of doing things.
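For instance, a minimal sketch of embracing the callback style, where the closure captures the local variable label (the names other than fetch_from_disk are made up for illustration):
function get_expensive_thing(n, callback) {
  var label = 'expensive thing #' + n;    // captured by the closure below
  fetch_from_disk(n, function (answer) {
    // instead of returning, hand the result to whoever asked for it
    callback(label, answer);
  });
}

get_expensive_thing(7, function (label, answer) {
  console.log(label, answer);
});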
add in that missing return :)
function get_expensive_thing(n) {
return fetch_from_disk(n, function(answer) {
return answer;
});
}