As an example, suppose I want to fetch a list of files from somewhere, then load the contents of these files and finally display them to the user. In a synchronous model, it would be something like this (pseudocode):
var file_list = fetchFiles(source);
if (!file_list) {
display('failed to fetch list');
} else {
for (file in file_list) { // iteration, not enumeration
var data = loadFile(file);
if (!data) {
display('failed to load: ' + file);
} else {
display(data);
}
}
}
This provides decent feedback to the user, and I can move pieces of code into functions if I deem it necessary. Life is simple.
Now, to crush my dreams: fetchFiles() and loadFile() are actually asynchronous. The easy way out is to transform them into synchronous functions, but that is no good if it means the browser locks up waiting for calls to complete.
How can I handle multiple interdependent and/or layered asynchronous calls without delving deeper and deeper into an endless chain of callbacks, in classic reductio ad spaghettum fashion? Is there a proven paradigm to cleanly handle these while keeping code loosely coupled?
Deferreds are really the way to go here. They capture exactly what you (and a whole lot of async code) want: "go away and do this potentially expensive thing, don't bother me in the meantime, and then do this when you get back."
And you don't need jQuery to use them. An enterprising individual has ported Deferred to underscore, and claims you don't even need underscore to use it.
So your code can look like this:
function fetchFiles(source) {
var dfd = _.Deferred();
// do some kind of thing that takes a long time
doExpensiveThingOne({
source: source,
complete: function(files) {
// this informs the Deferred that it succeeded, and passes
// `files` to all its success ("done") handlers
dfd.resolve(files);
// if you know how to capture an error condition, you can also
// indicate that with dfd.reject(...)
}
});
return dfd;
}
function loadFile(file) {
// same thing!
var dfd = _.Deferred();
doExpensiveThingTwo({
file: file,
complete: function(data) {
dfd.resolve(data);
}
});
return dfd;
}
// and now glue it together
_.when(fetchFiles(source))
    .done(function(files) {
        // forEach iterates array values (not keys) and gives each file
        // its own closure, so the fail handler sees the right `file`
        files.forEach(function(file) {
            _.when(loadFile(file))
                .done(function(data) {
                    display(data);
                })
                .fail(function() {
                    display('failed to load: ' + file);
                });
        });
    })
    .fail(function() {
        display('failed to fetch list');
    });
The setup is a little wordier, but once you've written the code to handle the Deferred's state and stashed it away in a function somewhere, you won't have to worry about it again, and you can rearrange the actual flow of events very easily. For example:
// map each file to the Deferred returned by loadFile
var file_dfds = files.map(loadFile);

// $.when-style helpers take deferreds as separate arguments, so spread the array
_.when.apply(_, file_dfds)
    .done(function() {
        // this will only run if and when ALL the files have successfully
        // loaded! each argument holds one file's resolved data
        var datas = Array.prototype.slice.call(arguments);
    });
Events
Maybe using events is a good idea. It keeps you from creating code trees and decouples your code.
I've used bean as the framework for events.
Example pseudo code:
// async request for files
function fetchFiles(source) {
IO.get(..., function (data, status) {
if(data) {
bean.fire(window, 'fetched_files', data);
} else {
bean.fire(window, 'fetched_files_fail', data, status);
}
});
}
// handler for when we get data
function onFetchedFiles (event, files) {
for (file in files) {
var data = loadFile(file);
if (!data) {
display('failed to load: ' + file);
} else {
display(data);
}
}
}
// handler for failures
function onFetchedFilesFail (event, status) {
display('Failed to fetch list. Reason: ' + status);
}
// subscribe the window to these events
bean.on(window, 'fetched_files', onFetchedFiles);
bean.on(window, 'fetched_files_fail', onFetchedFilesFail);
fetchFiles(source);
Custom events and this kind of event handling are implemented in virtually all popular JS frameworks.
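For instance, in modern browsers you can get much the same decoupling with no library at all, using the DOM's built-in CustomEvent. A minimal sketch, reusing the handlers above (the payload travels in e.detail):

// subscribe
window.addEventListener('fetched_files', function (e) {
    onFetchedFiles(e, e.detail);
});
// publish, e.g. from inside fetchFiles' success branch
window.dispatchEvent(new CustomEvent('fetched_files', { detail: data }));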
Sounds like you need jQuery Deferred. Here is some untested code that might help point you in the right direction:
$.when(fetchFiles(source)).then(function(file_list) {
if (!file_list) {
display('failed to fetch list');
} else {
        file_list.forEach(function(file) { // each iteration gets its own `file` binding
            $.when(loadFile(file)).then(function(data){
                if (!data) {
                    display('failed to load: ' + file);
                } else {
                    display(data);
                }
            });
        });
}
});
I also found another decent post which gives a few use cases for the Deferred object.
If you do not want to use jQuery, what you could use instead are web workers in combination with synchronous requests. Web workers are supported across every major browser with the exception of any Internet Explorer version before 10.
Web Worker browser compatibility
Basically, if you're not entirely certain what a web worker is, think of it as a way for browsers to execute specialized JavaScript on a separate thread without impacting the main thread (caveat: on a single-core CPU, both threads will run in an alternating fashion; luckily, most computers nowadays come equipped with dual-core CPUs). Usually, web workers are reserved for complex computations or some intense processing task.

Just keep in mind that any code within the web worker CANNOT reference the DOM, nor can it reference any global data structures that have not been passed to it. Essentially, web workers run independent of the main thread. Any code that the worker executes should be kept separate from the rest of your JavaScript code base, within its own JS file. Furthermore, if the web workers need specific data in order to work properly, you need to pass that data into them upon starting them up.
Yet another important thing worth noting is that any JS libraries that you need to use to load the files will need to be copied directly into the JavaScript file that the worker will execute. That means these libraries should first be minified (if they haven't been already), then copied and pasted into the top of the file.
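One built-in alternative worth mentioning: a worker can also pull external scripts into its own scope with importScripts(), which may spare you the copy-and-paste step. The path below is only a placeholder:

// inside the worker's JS file; the library path is hypothetical
importScripts('libs/file-loading-library.min.js');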
Anyway, I decided to write up a basic template to show you how to approach this. Check it out below. Feel free to ask questions/criticize/etc.
In the JS file that you want to keep executing on the main thread, add something like the following code in order to invoke the worker.
function startWorker(dataObj)
{
var message = {},
worker;
try
{
worker = new Worker('workers/getFileData.js');
}
catch(error)
{
// Throw error
}
message.data = dataObj;
// all data is communicated to the worker in JSON format
message = JSON.stringify(message);
// This is the function that will handle all data returned by the worker
    worker.onmessage = function(e)
{
display(JSON.parse(e.data));
}
worker.postMessage(message);
}
Then, in a separate file meant for the worker (as you can see in the code above, I named my file getFileData.js), write something like the following...
function fetchFiles(source)
{
// Put your code here
// Keep in mind that any requests made should be synchronous as this should not
// impact the main thread
}
function loadFile(file)
{
// Put your code here
// Keep in mind that any requests made should be synchronous as this should not
// impact the main thread
}
onmessage = function(e)
{
var response = [],
data = JSON.parse(e.data),
file_list = fetchFiles(data.source),
file, fileData;
if (!file_list)
{
response.push('failed to fetch list');
}
else
{
for (file in file_list)
{ // iteration, not enumeration
fileData = loadFile(file);
if (!fileData)
{
response.push('failed to load: ' + file);
}
else
{
response.push(fileData);
}
}
}
response = JSON.stringify(response);
postMessage(response);
close();
}
PS: Also, I dug up another thread which would better help you understand the pros and cons of using synchronous requests in combination with web workers.
Stack Overflow - Web Workers and Synchronous Requests
async is a popular asynchronous flow control library often used with node.js. I've never personally used it in the browser, but apparently it works there as well.
This example would (theoretically) run your two functions, returning an array of all the filenames and their load status. async.map runs in parallel, while waterfall is a series, passing the results of each step on to the next.
I am assuming here that your two async functions accept callbacks. If they do not, I'd require more info as to how they're intended to be used (do they fire off events on completion? etc).
async.waterfall([
function (done) {
fetchFiles(source, function(list) {
if (!list) done('failed to fetch file list');
else done(null, list);
});
// alternatively you could simply fetchFiles(source, done) here, and handle
// the null result in the next function.
},
function (file_list, done) {
        var loadHandler = function (file, cb) {
loadFile(file, function(data) {
if (!data) {
display('failed to load: ' + file);
} else {
display(data);
}
// if any of the callbacks to `map` returned an error, it would halt
// execution and pass that error to the final callback. So we don't pass
// an error here, but rather a tuple of the file and load result.
cb(null, [file, !!data]);
});
};
async.map(file_list, loadHandler, done);
}
], function(err, result) {
if (err) return display(err);
// All files loaded! (or failed to load)
// result would be an array of tuples like [[file, bool file loaded?], ...]
});
waterfall accepts an array of functions and executes them in order, passing the result of each along as the arguments to the next, along with a callback function as the last argument, which you call with either an error, or the resulting data from the function.
You could of course add any number of different async callbacks between or around those two, without having to change the structure of the code at all. waterfall is actually only 1 of 10 different flow control structures, so you have a lot of options (although I almost invariably end up using auto, which allows you to mix parallel and series execution in the same function via a Makefile-like requirements syntax).
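To give a flavor of auto, here is a sketch of the same flow expressed as named tasks with dependencies. Note that the task signature shown here, function (callback, results), matches older versions of async; async 2.x and later flipped the argument order:

async.auto({
    file_list: function (done) {
        fetchFiles(source, function (list) {
            // treat a missing list as an error, halting the chain
            done(list ? null : 'failed to fetch file list', list);
        });
    },
    // runs only once file_list has completed, like a Makefile rule
    loaded: ['file_list', function (done, results) {
        async.map(results.file_list, function (file, cb) {
            loadFile(file, function (data) {
                cb(null, [file, !!data]);
            });
        }, done);
    }]
}, function (err, results) {
    if (err) return display(err);
    // results.loaded is the same array of [file, loaded?] tuples as before
});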
I had this issue with a webapp I'm working on and here's how I solved it (with no libraries).
Step 1: Wrote a very lightweight pubsub implementation. Nothing fancy. Subscribe, Unsubscribe, Publish and Log. Everything (with comments) adds up to 93 lines of JavaScript. 2.7kb before gzip.
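For reference, a minimal pubsub along those lines might look like the sketch below. This is my own sketch, not the actual 93-line implementation; the method names are chosen to match the example that follows:

var pubsub = { notification: (function () {
    var subscribers = {}; // notification name -> { subscriberName: handler }
    return {
        subscribe: function (notification, subscriber, handler) {
            subscribers[notification] = subscribers[notification] || {};
            subscribers[notification][subscriber] = handler;
        },
        unsubscribe: function (notification, subscriber) {
            if (subscribers[notification]) delete subscribers[notification][subscriber];
        },
        publish: function (notification, params, publisher) {
            var subs = subscribers[notification] || {};
            Object.keys(subs).forEach(function (name) {
                // setTimeout so handlers fire asynchronously, off the publisher's stack
                setTimeout(function () {
                    subs[name]({ notificationParams: params, publisher: publisher });
                }, 0);
            });
        }
    };
})() };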
Step 2: Decoupled the process you were trying to accomplish by letting the pubsub implementation do the heavy lifting. Here's an example:
// listen for when files have been fetched and set up what to do when it comes in
pubsub.notification.subscribe(
"processFetchedResults", // notification to subscribe to
"fetchedFilesProcesser", // subscriber
/* what to do when files have been fetched */
function(params) {
var file_list = params.notificationParams.file_list;
for (file in file_list) { // iteration, not enumeration
var data = loadFile(file);
if (!data) {
display('failed to load: ' + file);
} else {
display(data);
}
}
    }
);
// trigger fetch files
function fetchFiles(source) {
// ajax call to source
// on response code 200 publish "processFetchedResults"
// set publish parameters as ajax call response
pubsub.notification.publish("processFetchedResults", ajaxResponse, "fetchFilesFunction");
}
Of course, this is very verbose in the setup, and it glosses over the magic happening behind the scenes.
Here's some technical details:
I'm using setTimeout to handle triggering subscriptions. This way they run in a non-blocking fashion.
The call is effectively decoupled from the processing. You can write a different subscription to the notification "processFetchedResults" and do multiple things once the response comes through (for example logging and processing) while keeping them in very separate, tiny and easily-managed code blocks.
The above code sample doesn't address fallbacks or run proper checks. I'm sure it will require a bit of tooling to get to production standards. Just wanted to show you how possible it is and how library-independent your solution can be.
Cheers!
I'm working on a web application project with Flask+Python on the back-end, and Javascript on the front-end. I'd like to take advantage of some of the more modern (ES6/7) styles of things, such as Promises.
I've currently been writing all my javascript using Jquery 3+. Most of the time I'm making single Ajax requests to the server at a time. I've been specifically writing my Ajax requests using $.post and .done() and .fail(), which I know is already promise-based, or promise-like. Most of my code is in the style of
do function setup stuff and checks
make single ajax request
on success
    good status, run several success code bits
    bad status, run failure code
on failure
    run failure code
I always seem to have to account both for server failures and for cases where the server succeeds but returns the wrong thing, which I usually control with a status argument. I've been looking into the straight Promise syntax with then, catch, resolve, reject, and I have some questions.
Is there any advantage to me switching to this format, from what I currently have, given my simple Ajax requests?
Can it be used to simplify the way I currently write my requests and handle my failure cases?
Here is a simple login example that I have, with a function that is called when a login button is clicked.
$('#loginsubmit').on('click', this, this.login);
// Login function
login() {
const form = $('#loginform').serialize();
$.post(Flask.url_for('index_page.login'), form, 'json')
.done((data)=>{
if (data.result.status < 0) {
// bad submit
this.resetLogin();
} else {
// good submit
if (data.result.message !== ''){
const stat = (data.result.status === 0) ? 'danger' : 'success';
const htmlstr = `<div class='alert alert-${stat}' role='alert'><h4>${data.result.message}</h4></div>`;
$('#loginmessage').html(htmlstr);
}
if (data.result.status === 1){
location.reload(true);
}
}
})
.fail((data)=>{ alert('Bad login attempt'); });
}
And a typical more complex example that I have. In this case, some interactive elements are initialized when a button is toggled on and off.
this.togglediv.on('change', this, this.initDynamic);
// Initialize the Dynamic Interaction upon toggle - makes loading an AJAX request
initDynamic(event) {
let _this = event.data;
if (!_this.togglediv.prop('checked')){
// Turning Off
_this.toggleOff();
} else {
// Turning On
_this.toggleOn();
// check for empty divs
let specempty = _this.graphdiv.is(':empty');
let imageempty = _this.imagediv.is(':empty');
let mapempty = _this.mapdiv.is(':empty');
// send the request if the dynamic divs are empty
if (imageempty) {
// make the form
let keys = ['plateifu', 'toggleon'];
let form = m.utils.buildForm(keys, _this.plateifu, _this.toggleon);
_this.toggleload.show();
$.post(Flask.url_for('galaxy_page.initdynamic'), form, 'json')
.done(function(data) {
let image = data.result.image;
let spaxel = data.result.spectra;
let spectitle = data.result.specmsg;
let maps = data.result.maps;
let mapmsg = data.result.mapmsg;
// Load the Image
_this.initOpenLayers(image);
_this.toggleload.hide();
// Try to load the spaxel
if (data.result.specstatus !== -1) {
_this.loadSpaxel(spaxel, spectitle);
} else {
_this.updateSpecMsg(`Error: ${spectitle}`, data.result.specstatus);
}
// Try to load the Maps
if (data.result.mapstatus !== -1) {
_this.initHeatmap(maps);
} else {
_this.updateMapMsg(`Error: ${mapmsg}`, data.result.mapstatus);
}
})
.fail(function(data) {
_this.updateSpecMsg(`Error: ${data.result.specmsg}`, data.result.specstatus);
_this.updateMapMsg(`Error: ${data.result.mapmsg}`, data.result.mapstatus);
_this.toggleload.hide();
});
}
}
}
I know this is already roughly using promises, but can I make improvements to my code flow by switching to the Promise then catch syntax? As you can see, I end up repeating a lot of the failure case code for real failures and successful failures. Most of my code looks like this, but I've been having a bit of trouble trying to convert these into something that's like
promise_ajax_call
.then(do real success)
.catch(all failure cases)
I always use Bluebird Promises. They have a Promise.resolve function that you can use with ajax. One thing to know about Promises: if you throw an error in a then, it will be caught in a chained catch. One way to clean this up a bit might be something like this (keep in mind, this is pseudo):
Promise.resolve($.ajax(...some properties..))
.then((data)=>{
if(data.result.status < 0){
//throw some error
}
// process the data how you need it
})
    .catch((error)=>{
// either the ajax failed, or you threw an error in your then. either way, it will end up in this catch
});
In my node server I have a variable:
var clicks = 0;
Each time a user clicks in the webapp, a websocket event sends a message. On the server:
clicks++;
if (clicks % 10 == 0) {
saveClicks();
}
function saveClicks() {
var placementData = JSON.stringify({'clicks' : clicks});
fs.writeFile( __dirname + '/clicks.json', placementData, function(err) {
});
}
At what rate do I have to start worrying about overwrites? How would I calculate this math?
(I'm looking at creating a MongoDB json object for each click but I'm curious what a native solution can offer).
From the node.js doc for fs.writeFile():
Note that it is unsafe to use fs.writeFile() multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream() is strongly recommended.
This isn't a math problem where you figure out when a conflict might occur - it's just bad code that gives you the chance of a conflict under circumstances that cannot be predicted. The node.js doc clearly states that this can cause a conflict.
To make sure you don't have a conflict, write the code in a different way so a conflict cannot happen.
If you want to make sure that all writes happen in the proper order of incoming requests, so the last request to arrive is always the one that ends up in the file, then you may need to queue your data as it arrives (so order is preserved) and write to the file in a way that opens it for exclusive access, so no other request can write while a prior request is still writing, handling contention errors appropriately.
This is an issue that databases mostly do for you automatically so it may be one reason to use a database.
Assuming you weren't using clustering and thus do not have multiple processes trying to write to this file and that you just want to make sure the last value sent is the one written to the file by this process, you could do something like this:
var saveClicks = (function() {
var isWriting = false;
var lastData;
return function() {
// always save most recent data here
lastData = JSON.stringify({'clicks' : clicks});
if (!isWriting) {
writeData(lastData);
}
function writeData(data) {
isWriting = true;
lastData = null;
fs.writeFile(__dirname + '/clicks.json', data, function(err) {
isWriting = false;
if (err) {
// decide what to do if an error occurs
}
// if more data arrived while we were writing this, then write it now
if (lastData) {
writeData(lastData);
}
});
}
}
})();
#jfriend00 is definitely right about createWriteStream and already made a point about the database, and everything's pretty much been said, but I would like to emphasize the point about databases, because basically the file-saving approach seems weird to me.
So, use databases.
Not only would this save you from the headache of tracking such things, but it would also significantly speed things up (remember that in node the numerous file read/write operations are interleaved on a single thread, so if one of them takes ages, it can affect the overall performance).
Redis is a perfect solution to store key-value data, so you can store data like clicks per user in a Redis database, which you'll have to get running alongside anyway once you get enough traffic :)
If you're not convinced yet, take a look at this simple benchmark:
Redis:
var async = require('async');
var redis = require("redis"),
client = redis.createClient();
console.time("To Redis");
async.mapLimit(new Array(100000).fill(0), 1, (el, cb) => client.set("./test", 777, cb), () => {
console.timeEnd("To Redis");
});
To Redis: 5410.383ms
fs:
var async = require('async');
var fs = require('fs');
console.time("To file");
async.mapLimit(new Array(100000).fill(0), 1, (el, cb) => fs.writeFile("./test", 777, cb), () => {
console.timeEnd("To file");
});
To file: 20344.749ms
And, by the way, you can safely increase the number of clicks after which the progress is stored (now it's 10) by simply also calling this "click-saver" from the socket's disconnect handler, socket.on('disconnect', ...).
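Something like this, assuming socket is the per-client connection object your websocket library hands you:

socket.on('disconnect', function () {
    saveClicks(); // flush the current count when a client goes away
});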
This is a follow on question for a further issue I've encountered from this earlier question:
nodejs: read from file and store to db, limit maximum concurrent db operations
Problem:
I want to conditionally reschedule some operations for a later time, however this is breaking my method for handling back-pressure.
Detail:
I have a CSV file that I am reading in as a stream, and using transforms to convert to JSON and then asynchronously store each line to a DB.
As lines are processed by the transform, they are placed onto an async queue which is responsible for issuing the DB operations.
E.g.
parser._transform = function(data, encoding, done) {
var tick = this._parseRow(data);
dbQueue.push(tick, function(err, result) {
if (typeof(err) != 'undefined') { console.log(err) }
});
this.push(tick);
done();
}
Back pressure is handled by pausing and resuming the parser when the queue is saturated/empty:
dbQueue.saturated = function() {
parser.pause();
}
dbQueue.empty = function() {
parser.resume();
}
The change I have been trying to make is that when an item is pulled off the queue, it is conditionally rescheduled for some time (100ms) in future:
var dbQueue = async.queue(function(data, callback) {
if (condition) {
// re-schedule operation by adding back to queue 100ms later
setTimeout(function(data, callback) {
dbQueue.push(data, function(err, result){
});
}, 100, data, callback);
} else {
//execute the db store
... ...
}
});
I believe my problem is that now many operations will spend most their time in setTimeout, so the dbQueue will be empty, and the back-pressure on the transform stream is not being handled as desired.
I have tried a few attempts at using counters such as max_ops and running_ops to ensure the stream is paused/resumed, but unsuccessfully.
Is there a more idiomatic way of handling this in node.js?
Since this looks like it's an external condition and not something related to what dbQueue is doing, instead of re-inserting the data into the queue when the condition occurs, I would simply pause dbQueue. For example, let's say your condition is that the database disconnected for some reason and there's an event you can listen to for that. In that case you can just do something similar to what you're doing when dbQueue is saturated/empty:
db.on('disconnect', function() {
dbQueue.pause();
});
db.on('connect', function() {
dbQueue.resume();
});
This is usually a better approach than waiting for some pre-determined timeout. That being said, sometimes waiting for a timeout is the only option. In that case you could do something similar but, instead of waiting for a separate event to trigger the resume(), simply use setTimeout():
db.on('disconnect', function() {
dbQueue.pause();
    setTimeout(function() {
        dbQueue.resume();
    }, 100);
});
Note: If we are really talking about db disconnects here, then you might also want to pause/resume dbQueue if there's a db error in the case that 100ms isn't enough time for the db to re-connect
If you have a more specific condition you're looking for, and you're willing to share what that is, I may be able to give you a better example :)
I just came to this awful situation where I have an array of strings, each representing a possibly existing file (e.g. var files = ['file1', 'file2', 'file3']). I need to loop through these file names and try to see if each exists in the current directory, and if it does, stop looping and forget the rest of the remaining files. So basically I want to find the first existing file of those, and fall back to a hard-coded message if nothing was found.
This is what I currently have:
var found = false;
files.forEach(function(file) {
    if (found) return;
    try {
        // readFileSync takes no callback; it throws if the file can't be read
        fs.readFileSync(path + file);
        found = true;
        continueWithStuff();
    } catch (err) {
        // file doesn't exist (or is unreadable); try the next one
    }
});
if (found === false) {
    // Handle this scenario.
}
This is bad. It's blocking (readFileSync), thus it's slow.
I can't just supply callback methods to fs.readFile; it's not that simple, because I need to take the first found item... and the callbacks may be called in any random order. I think one way would be to have a callback that increases a counter and keeps a list of found/not-found information, and when it reaches the files.length count, it checks through that info and decides what to do next.
This is painful. I do see the performance greatness in evented IO, but this is unacceptable. What choices do I have?
Don't use sync stuff in a normal server environment -- things are single-threaded and this will completely lock things up while it waits for the results of this IO-bound loop. CLI utility = probably fine, server = only okay on startup.
A common library for asynchronous flow control is
https://github.com/caolan/async
async.filter(['file1','file2','file3'], path.exists, function(results){
// results now equals an array of the existing files
});
And if you want to, say, avoid the extra calls to path.exists, then you could pretty easily write a function 'first' that performs the operations until some test succeeds. Similar to https://github.com/caolan/async#until - but you're interested in the output.
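A sketch of such a 'first' helper; the name and signature are mine, not async's:

function first(items, test, done) {
    (function next(i) {
        if (i >= items.length) return done(null); // nothing passed the test
        test(items[i], function (passed) {
            if (passed) return done(items[i]); // stop at the first hit
            next(i + 1);
        });
    })(0);
}

first(files, path.exists, function (file) {
    if (file) continueWithStuff();
    else display('no file found');
});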
The async library is absolutely what you are looking for. It provides pretty much all the types of iteration that you'd want in a nice asynchronous way. You don't have to write your own 'first' function though. Async already provides a 'some' function that does exactly that.
https://github.com/caolan/async#some
async.some(files, path.exists, function(result) {
if (result) {
continueWithStuff();
}
else {
// Handle this scenario
}
});
If you or someone reading this in the future doesn't want to use Async, you can also do your own basic version of 'some.'
function some(arr, func, cb) {
var count = arr.length-1;
(function loop() {
if (count == -1) {
return cb(false);
}
func(arr[count--], function(result) {
if (result) cb(true);
else loop();
});
})();
}
some(files, path.exists, function(found) {
if (found) {
continueWithStuff();
}
else {
// Handle this scenario
}
});
You can do this without third-party libraries by using a recursive function. Pass it the array of filenames and a pointer, initially set to zero. The function should check for the existence of the indicated (by the pointer) file name in the array, and in its callback it should either do the other stuff (if the file exists) or increment the pointer and call itself (if the file doesn't exist).
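A sketch of that approach, reusing the question's display() and continueWithStuff() helpers, with fs.exists standing in for the existence check:

var fs = require('fs');

function tryFile(files, i) {
    if (i >= files.length) {
        display('no file found'); // the hard-coded fallback message
        return;
    }
    fs.exists(path + files[i], function (exists) {
        if (exists) continueWithStuff();
        else tryFile(files, i + 1); // increment the pointer and recurse
    });
}

tryFile(files, 0);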
Use async.waterfall for controlling async calls in node.js. For example, include the async library and use its waterfall call:
var async = require('async');

async.waterfall([
    function(callback) {
        // pass taskFirst's result on to the next step
        callback(null, taskFirst(rootRequest, rootRequestFrom, rootRequestTo, res));
    },
    function(arg1, callback) {
        if (arg1 !== undefined) {
            callback(null, taskSecond(arg1, rootRequest, rootRequestFrom, rootRequestTo, res));
        }
    }
]);
(Edit: removed sync suggestion because it's not a good idea, and we wouldn't want anyone to copy/paste it and use it in production code, would we?)
If you insist on using async stuff, I think a simpler way to implement this than what you described is to do the following:
var path = require('path'), fileCounter = 0;
function existCB(fileExists) {
if (fileExists) {
global.fileExists = fileCounter;
continueWithStuff();
return;
}
fileCounter++;
if (fileCounter >= files.length) {
// none of the files exist, handle stuff
return;
}
path.exists(files[fileCounter], existCB);
}
path.exists(files[0], existCB);
I'm wondering if mutexes/locks are required for data access within Node.js. For example, lets say I've created a simple server. The server provides a couple protocol methods to add to and remove from an internal array. Do I need to protect the internal array with some type of mutex?
I understand Javascript (and thus Node.js) is single threaded. I'm just not clear on how events are handled. Do events interrupt? If that is the case, my app could be in the middle of reading the array, get interrupted to run an event callback which changes the array, and then continue processing the array which has now been changed by the event callback.
Locks and mutexes are indeed necessary sometimes, even if Node.js is single-threaded.
Suppose you have two files that must have the same content and not having the same content is considered an inconsistent state. Now suppose you need to change them without blocking the server. If you do this:
fs.writeFile('file1', 'content', function (error) {
if (error) {
// ...
} else {
fs.writeFile('file2', 'content', function (error) {
if (error) {
// ...
} else {
// ready to continue
}
});
}
});
you fall into an inconsistent state between the two calls, during which another function in the same script may be able to read the two files.
The rwlock module is perfect to handle these cases.
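For example, the two-file update above might look like this with rwlock; a sketch based on the module's writeLock/release API:

var ReadWriteLock = require('rwlock');
var fs = require('fs');

var lock = new ReadWriteLock();

lock.writeLock(function (release) {
    // other readers/writers using the same lock wait until release() is called
    fs.writeFile('file1', 'content', function (error) {
        if (error) { release(); return; }
        fs.writeFile('file2', 'content', function (error) {
            // both files agree again; safe to let others in
            release();
        });
    });
});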
I'm wondering if mutexes/locks are required for data access within Node.js.
Nope! Events are handled the moment there's no other code to run; this means there will be no contention, as only the currently running code has access to that internal array. As a side-effect of node being single-threaded, long computations will block all other events until the computation is done.
I understand Javascript (and thus Node.js) is single threaded. I'm just not clear on how events are handled. Do events interrupt?
Nope, events are not interrupted. For example, if you put a while(true){} into your code, it would stop any other code from being executed, because there is always another iteration of the loop to be run.
If you have a long-running computation, it is a good idea to use process.nextTick, as this will allow it to run when nothing else is running (I'm a little fuzzy on the details, but the example below suggests each scheduled callback does run uninterrupted).
If you have any other questions, feel free to stop into #node.js and ask questions. Also, I asked a couple people to look at this and make sure I'm not totally wrong ;)
var count = 0;
var numIterations = 100;
while(numIterations--) {
process.nextTick(function() {
count = count + 1;
});
}
setTimeout(function() {
console.log(count);
}, 2);
//=> 100
Thanks to AAA_awright of #node.js :)
I was looking for a solution for node mutexes. Mutexes are sometimes necessary - you could be running multiple instances of your node application and may want to assure that only one of them is doing some particular thing. All the solutions I could find were either not cross-process or depended on redis.
So I made my own solution using file locks: https://github.com/Perennials/mutex-node
Mutexes are definitely necessary for a lot of back-end implementations. Consider a class where you need to serialize async execution by constructing a promise chain.
let _ = new WeakMap();
class Foobar {
constructor() {
_.set(this, { pc : Promise.resolve() } );
}
doSomething(x) {
return new Promise( (resolve,reject) => {
_.get(this).pc = _.get(this).pc.then( () => {
                const y = someAsyncWork(x); // someAsyncWork is a hypothetical stand-in for the real async work
resolve(y);
})
})
}
}
How can you be sure that a promise is not left dangling via a race condition? It's frustrating that node hasn't made mutexes native, since JavaScript is so inherently asynchronous and bringing third-party modules into the process space is always a security risk.
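For what it's worth, calls against that sketch serialize like this (hypothetical usage):

const f = new Foobar();
f.doSomething(1).then((y) => console.log(y)); // resolves first
f.doSomething(2).then((y) => console.log(y)); // queued behind the first call's work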