Working on the IndexedDB API, I'm creating many objectStores that belong to the same database, in one transaction, when the user loads a webpage.
In order to do so, I created an object which contains the objectStores to be created, each one with its name, data endpoint, and index.
Then a function iterates over the object and effectively creates the database, the objectStores, and the indexes for each one.
However, of all the objectStores created, only the last member of the object gets populated. Say there are 5 objectStores to be created and populated: all 5 are created, but only the last one is populated.
Clearly it's a problem of overwriting, or some issue related to the JS stack or asynchronicity.
I'd appreciate any help to make the code populate all the objectStores, not just the last one.
My browser is Chrome 56, I fetch data from an API whose response is OK, and I'm coding in vanilla JS. Please keep any help in vanilla JS; there is no way to use any library or framework beyond what the modern Web Platform offers.
Here is the code:
On the HTML side, this is an example of the object:
var datastores = [
  {osName: 'items', osEndpoint: '/api/data/os/1/1', osIndex: 'value'},
  {osName: 'categories', osEndpoint: '/api/data/os/2/1', osIndex: 'idc'}
];
On javascript:
var request = indexedDB.open(DB_NAME, DB_VERSION); // open database.

request.onerror = function (e) { // error callback
  console.error("error: " + e.target.errorCode);
};

request.onupgradeneeded = function (e) { // onupgradeneeded event: creates the database schema (objectStores and indexes) and populates the stores.
  var db = this.result;
  for (var i in datastores) { // loop over the objectStore definitions.
    var objectStore = db.createObjectStore(datastores[i].osName, {keyPath: "id"});
    TB_NAME = datastores[i].osName; // remember each objectStore name.
    objectStore.createIndex(datastores[i].osIndex, datastores[i].osIndex, { unique: false }); // create each index.
    objectStore.transaction.oncomplete = function(e) { // oncomplete event, after creating the objectStores...
      fetchGet(datastores[i].osEndpoint, popTable); // fetch from the designated endpoint and hand the result to popTable.
    };
  }
};
Now the functions, one to fetch the data and one to populate it:
function fetchGet(url, function) { // fetch from API.
  fetch(url, {
    method: 'GET'
  }).then(function(response) {
    return response.json();
  }).then(function(json) {
    popTable(json);
  }).catch(function(err) {
    console.log('error!', err);
  });
}
function popTable(json) {
  var m = 0;
  var tx = db.transaction(TB_NAME, "readwrite");
  tx.oncomplete = function(e) {
    console.log("Completed Transaction " + TB_NAME);
  };
  tx.onerror = function(e) {
    console.error("error: " + e.target.errorCode);
  };
  var txObjectStore = tx.objectStore(TB_NAME);
  for (m in json) {
    var request = txObjectStore.add(json[m]);
    request.onsuccess = function (e) {
      console.log('adding... ');
    };
  }
}
The for (var i in datastores) loop runs synchronously, updating the global TB_NAME variable every time. When the loop finishes, TB_NAME will be holding the name of the last object store.
By the time the asynchronous popTable calls run, TB_NAME will forever be holding the name of the last store, so that's the only one that will update. Try adding logging to popTable to see this.
You'll need to pass the current value of the store name along somehow (e.g. as an argument to fetchGet). Also note that although you pass popTable as a parameter when calling fetchGet you're not actually accepting it as an argument.
...
Specific changes:
Change how you call fetchGet to include the store name:
fetchGet(datastores[i].osEndpoint, popTable, datastores[i].osName);
Change the fetchGet function to accept the args:
function fetchGet(url, func, name) {
And then instead of calling popTable directly, do:
func(json, name);
And then change the definition of popTable to be:
function popTable(json, name) {
... and use name in the transaction.
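Putting those changes together, a minimal sketch could look like the following (it assumes db is saved somewhere reachable by popTable after the open request succeeds). Two extra details: the versionchange transaction is shared by every store created during the upgrade, so its oncomplete is assigned once rather than inside the loop, and forEach gives each iteration its own ds binding, avoiding the same stale-variable trap as i:

request.onupgradeneeded = function (e) {
  var db = e.target.result;
  var upgradeTx = e.target.transaction; // the single versionchange transaction
  datastores.forEach(function(ds) {
    var objectStore = db.createObjectStore(ds.osName, {keyPath: "id"});
    objectStore.createIndex(ds.osIndex, ds.osIndex, { unique: false });
  });
  upgradeTx.oncomplete = function(e) { // fires once, after all stores exist
    datastores.forEach(function(ds) {
      fetchGet(ds.osEndpoint, popTable, ds.osName); // pass the store name along
    });
  };
};

function fetchGet(url, func, name) { // accept the callback and the store name as arguments
  fetch(url, { method: 'GET' })
    .then(function(response) { return response.json(); })
    .then(function(json) { func(json, name); })
    .catch(function(err) { console.log('error!', err); });
}

function popTable(json, name) { // use the name argument instead of the global TB_NAME
  var tx = db.transaction(name, "readwrite"); // db assumed stored globally on open success
  var txObjectStore = tx.objectStore(name);
  for (var m in json) {
    txObjectStore.add(json[m]);
  }
}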
I'm trying to use the Steam Community (steamcommunity) npm package along with the meteorhacks:npm Meteor package to retrieve a user's inventory. My code is as follows:
lib/methods.js:
Meteor.methods({
  getSteamInventory: function(steamId) {
    // Check arguments for validity
    check(steamId, String);
    // Require Steam Community module
    var SteamCommunity = Meteor.npmRequire('steamcommunity');
    var community = new SteamCommunity();
    // Get the inventory (730 = CSGO App ID, 2 = Valve Inventory Context)
    var inventory = Async.runSync(function(done) {
      community.getUserInventory(steamId, 730, 2, true, function(error, inventory, currency) {
        done(error, inventory);
      });
    });
    if (inventory.error) {
      throw new Meteor.Error('steam-error', inventory.error);
    } else {
      return inventory.results;
    }
  }
});
client/views/inventory.js:
Template.Trade.helpers({
  inventory: function() {
    if (Meteor.user() && !Meteor.loggingIn()) {
      var inventory;
      Meteor.call('getSteamInventory', Meteor.user().services.steam.id, function(error, result) {
        if (!error) {
          inventory = result;
        }
      });
      return inventory;
    }
  }
});
When trying to access the results of the call, nothing is displayed on the client or through the console.
I can add console.log(inventory) inside the callback of the community.getUserInventory function and receive the results on the server.
Relevant docs:
https://github.com/meteorhacks/npm
https://github.com/DoctorMcKay/node-steamcommunity/wiki/CSteamUser#getinventoryappid-contextid-tradableonly-callback
You have to use a reactive data source inside your inventory helper. Otherwise, Meteor doesn't know when to rerun it. You could create a ReactiveVar in the template:
Template.Trade.onCreated(function() {
  this.inventory = new ReactiveVar;
});
In the helper, you establish a reactive dependency by getting its value:
Template.Trade.helpers({
  inventory() {
    return Template.instance().inventory.get();
  }
});
Setting the value happens in the Meteor.call callback. You shouldn't call the method inside the helper, by the way. See David Weldon's blog post on common mistakes for details (section Overworked Helpers).
Meteor.call('getSteamInventory', …, function(error, result) {
  if (! error) {
    // Set the `template` variable in the closure of this handler function.
    template.inventory.set(result);
  }
});
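For completeness, one way to wire it up is to issue the call from onCreated, where the template instance is naturally in scope (a sketch; the Steam ID lookup is copied from the question):

Template.Trade.onCreated(function() {
  var template = this; // captured for the Meteor.call handler below
  template.inventory = new ReactiveVar;
  Meteor.call('getSteamInventory', Meteor.user().services.steam.id, function(error, result) {
    if (! error) {
      template.inventory.set(result);
    }
  });
});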
I think the issue here is that you're calling an async function inside your getSteamInventory Meteor method, so it will always try to return the result before you actually have the result from the community.getUserInventory call. Luckily, Meteor has wrapAsync for this case, so your method then simply becomes:
Meteor.methods({
  getSteamInventory: function(steamId) {
    // Check arguments for validity
    check(steamId, String);
    var community = new SteamCommunity();
    var loadInventorySync = Meteor.wrapAsync(community.getUserInventory, community);
    // Pass the arguments through to getUserInventory
    return loadInventorySync(steamId, 730, 2, false);
  }
});
Note: I moved the SteamCommunity = Npm.require('SteamCommunity') assignment to a global var, so that I wouldn't have to declare it on every method call.
You can then just call this method on the client as you have already done in the way chris has outlined.
I'm connecting and making an insert with Node/MongoDB, but due to a scope issue, I can't access the connection from the function. Any idea how to make the 'db' variable global scope?
mongodb.connect("mongodb://localhost:27017/userDB", function(err, db) {
  if (!err) {
    console.log("We are connected");
  } else {
    console.log(err);
  }
});
function RegisterUser(user, pass) {
  var collection = db.collection('users');
  var docs = [{username: user}, {password: pass}];
  collection.insert(docs, {w: 1}, function(err, result) {
    collection.find().toArray(function(err, items) {});
    socket.emit('message', items);
  });
}
/var/www/baseball/app.js:80
    var collection = db.collection('users'); // <-- db is not defined
                     ^
ReferenceError: db is not defined
    at RegisterUser (/var/www/baseball/app.js:80:20)
    at ParseData (/var/www/baseball/app.js:63:6)
In general, only make the connection once, probably as your app starts up; the mongo driver will handle pooling etc., at least so I was told by experts from MongoLabs. Note that this is very different from what you might do in Java, etc. Save the db value returned by connect() somewhere: a global, a property on app, or a commonly used module of yours. Then use it as needed in RegisterUser, DeleteUser, etc.
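A minimal sketch of that idea (the module layout and names here are illustrative, not from the question):

// db.js - connect once at startup, then share the connection.
var mongodb = require('mongodb');

var db = null;

exports.connect = function (url, callback) {
  mongodb.connect(url, function (err, database) {
    if (err) { return callback(err); }
    db = database; // saved for every later caller
    callback(null, db);
  });
};

exports.get = function () {
  return db; // only valid once connect() has completed
};

// app.js
var db = require('./db');

db.connect("mongodb://localhost:27017/userDB", function (err) {
  if (err) { throw err; }
  // safe to serve requests now; handlers can call db.get()
});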
More of a node question really, but probably worth the tag since it gets asked a bit. So you say there is a scoping issue, and you are right: the variable is local to the callback function of the .connect() method and is not visible anywhere else. One way is to dump all your logic inside that callback, so there is no scoping issue, but you probably don't want to do that.
Asking "how do I set a global" is also not really the right approach, at least not directly, since there are generally problems with breaking up the "async" pattern of node. A better approach is some kind of "singleton" instance where you set the connection only once, and which can be "required" in for use in other areas of your application.
Here is one "trivial" approach to demonstrate, but there are many ways to do the same thing:
var async = require('async'),
    mongodb = require('mongodb'),
    MongoClient = mongodb.MongoClient;

var Model = (function() {

  var _db;
  var conlock;

  return {
    getDb: function(callback) {
      var err = null;
      if ( _db == null && conlock == null ) {
        conlock = 1;
        MongoClient.connect('mongodb://localhost/test', function(err, db) {
          _db = db;
          conlock = null; // release the lock
          if (!err) {
            console.log("Connected");
          }
          callback(err, _db);
        });
      } else if ( conlock != null ) {
        // a connection attempt is in flight; poll until it lands or we give up
        var count = 0;
        async.whilst(
          function() { return ( _db == null ) && ( count < 5 ); },
          function(callback) {
            count++;
            setTimeout(callback, 500);
          },
          function(err) {
            if ( count == 5 )
              err = new Error("connect wait exceeded");
            callback(err, _db);
          }
        );
      } else {
        callback(err, _db);
      }
    }
  };

})();

async.parallel(
  [
    function(callback) {
      console.log("call model");
      Model.getDb(function(err, db) {
        if (err) throw err;
        if (db != undefined)
          console.log("db is defined");
        callback();
      });
    },
    function(callback) {
      console.log("call model again");
      Model.getDb(function(err, db) {
        if (err) throw err;
        if (db != undefined)
          console.log("db is defined here as well");
        callback();
      });
    }
  ],
  function(err) {
    Model.getDb(function(err, db) {
      db.close();
    });
  }
);
So our little "Model" object here has a single method, .getDb(), and it maintains a private variable holding the _db connection once it has been established. The basic logic in that method is to see whether _db is defined and, were it not, to establish a connection with the driver. On the connection callback the _db variable is then set.
The other thing here is that the method itself accepts a "callback", and this is how you use it later: either an error or the current connection will be returned.
The last part is just a demonstration of two functions to be implemented in code, where in the first call the connection to the database is made before execution follows into the callback function provided.
The next time we call though, the connection is already set in the private variable, so that data is merely returned and you don't establish a connection again.
There are various ways to implement this sort of thing, but that is the basic logic pattern to follow. There are many other "helper" implementations that wrap the MongoDB driver to make these sorts of things simple, as well as managing connection pools and ensuring the connection is up for you, so it may well be worth looking at these even if you are still insistent on doing all the work yourself from the lower-level driver base.
First of all, you are only going to be registering users once you have a connection, so do whatever work you need to do in there; that is, call RegisterUser from within the connection callback. If you want to use the db object within that function, you will need to pass it in as a parameter:
RegisterUser(db, user, pass);
You may then use db within the function.
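Put together with the question's code, that looks roughly like this (user, pass, and socket are assumed to come from the surrounding code; note the emit also moves inside the find callback, where items actually exists):

mongodb.connect("mongodb://localhost:27017/userDB", function(err, db) {
  if (err) { return console.log(err); }
  console.log("We are connected");
  RegisterUser(db, user, pass); // db is in scope here
});

function RegisterUser(db, user, pass) {
  var collection = db.collection('users');
  var docs = [{username: user}, {password: pass}];
  collection.insert(docs, {w: 1}, function(err, result) {
    collection.find().toArray(function(err, items) {
      socket.emit('message', items); // emit inside the callback, where items exists
    });
  });
}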
I have only recently started developing for node.js, so forgive me if this is a stupid question - I come from Javaland, where objects still live happily, sequentially and synchronously. ;)
I have a key generator object that issues keys for database inserts using a variant of the high-low algorithm. Here's my code:
function KeyGenerator() {
  var nextKey;
  var upperBound;

  this.generateKey = function(table, done) {
    if (nextKey > upperBound) {
      require("../sync/key-series-request").requestKeys(function(err, nextKey, upperBound) {
        if (err) { return done(err); }
        this.nextKey = nextKey;
        this.upperBound = upperBound;
        done(nextKey++);
      });
    } else {
      done(nextKey++);
    }
  }
}
Obviously, when I ask it for a key, I must ensure that it never, ever issues the same key twice. In Java, if I wanted to enable concurrent access, I would make this synchronized.
In node.js, is there any similar concept, or is it unnecessary? I intend to ask the generator for a bunch of keys for a bulk insert using async.parallel. My expectation is that since node is single-threaded, I need not worry about the same key ever being issued more than once, can someone please confirm this is correct?
Obtaining a new series involves an asynchronous database operation, so if I do 20 simultaneous key requests, but the series has only two keys left, won't I end up with 18 requests for a new series? What can I do to avoid that?
UPDATE
This is the code for requestKeys:
exports.requestKeys = function (done) {
  var db = require("../storage/db");
  db.query("select next_key, upper_bound from key_generation where type='issue'", function(err, results) {
    if (err) { done(err); } else {
      if (results.length === 0) {
        // Somehow we lost the "issue" row - this should never have happened
        done(new Error("Could not find 'issue' row in key generation table"));
      } else {
        var nextKey = results[0].next_key;
        var upperBound = results[0].upper_bound;
        db.query("update key_generation set next_key=?, upper_bound=? where type='issue'",
          [ nextKey + KEY_SERIES_WIDTH, upperBound + KEY_SERIES_WIDTH ],
          function (err, results) {
            if (err) { done(err); } else {
              done(null, nextKey, upperBound);
            }
          });
      }
    }
  });
}
UPDATE 2
I should probably mention that consuming a key requires db access even if a new series doesn't have to be requested, because the consumed key will have to be marked as used in the database. The code doesn't reflect this because I ran into trouble before I got around to implementing that part.
UPDATE 3
I think I got it using event emitting:
function KeyGenerator() {
  var nextKey;
  var upperBound;
  var emitter = new events.EventEmitter();
  var requesting = true;

  // Initialize the generator with the stored values
  db.query("select * from key_generation where type='use'", function(err, results) {
    if (err) { throw err; }
    if (results.length === 0) {
      throw new Error("Could not get key generation parameters: Row is missing");
    }
    nextKey = results[0].next_key;
    upperBound = results[0].upper_bound;
    console.log("Setting requesting = false, emitting event");
    requesting = false;
    emitter.emit("KeysAvailable");
  });

  this.generateKey = function(table, done) {
    console.log("generateKey, state is:\n  nextKey: " + nextKey + "\n  upperBound: " + upperBound + "\n  requesting: " + requesting);
    if (nextKey > upperBound) {
      if (!requesting) {
        requesting = true;
        console.log("Requesting new series");
        require("../sync/key-series-request").requestSeries(function(err, newNextKey, newUpperBound) {
          if (err) { return done(err); }
          console.log("New series available:\n  nextKey: " + newNextKey + "\n  upperBound: " + newUpperBound);
          nextKey = newNextKey;
          upperBound = newUpperBound;
          requesting = false;
          emitter.emit("KeysAvailable");
          done(null, nextKey++);
        });
      } else {
        console.log("Key request is already underway, deferring");
        var that = this;
        emitter.once("KeysAvailable", function() {
          console.log("Executing deferred call");
          that.generateKey(table, done);
        });
      }
    } else {
      done(null, nextKey++);
    }
  }
}
I've peppered it with logging outputs, and it does do what I want it to.
As another answer mentions, you will potentially end up with results different from what you want. Taking things in order:
function KeyGenerator() {
  // at first I was thinking you wanted these as 'class' properties
  // and thus would want to precede them with this. rather than declare them as vars,
  // but I think you want them as 'private' member variables of the
  // class instance. That's dandy, you'll just want to do things differently
  // down below
  var nextKey;
  var upperBound;

  this.generateKey = function (table, done) {
    if (nextKey > upperBound) {
      // truncated the require path below for readability.
      // more importantly, renamed the parameters to the function
      require("key-series-request").requestKeys(function(err, nKey, uBound) {
        if (err) { return done(err); }
        // note that thanks to the miracle of closures, you have access to
        // the nextKey and upperBound variables from the enclosing scope,
        // but I needed to rename the parameters or else they would shadow/
        // obscure the variables with the same name.
        nextKey = nKey;
        upperBound = uBound;
        done(nextKey++);
      });
    } else {
      done(nextKey++);
    }
  }
}
Regarding the .requestKeys function, you will need to somehow introduce some kind of synchronization. This isn't actually terrible in one way, because with only one thread of execution you don't need to sweat the challenge of setting your semaphore in a single operation, but it is challenging to deal with multiple callers, because you will want the other callers to effectively (but not really) block, waiting for the first call to requestKeys(), which is going to the DB, to return.
I need to think about this part a bit more. I had a basic solution in mind which involved setting a simple semaphore and queuing the callbacks, but when I was typing it up I realized I was actually introducing a more subtle potential synchronization bug when processing the queued callbacks.
UPDATE:
I was just finishing up one approach as you were writing about your EventEmitter approach, which seems reasonable. See this gist, which illustrates the approach I took. Just run it and you'll see the behavior. It has some console logging to show which calls are getting deferred for a new key block and which can be handled immediately. The primary moving part of the solution is below (note that the keyManager provides the stubbed-out implementation of your require('key-series-request')):
function KeyGenerator(km) {
  this.nextKey = undefined;
  this.upperBound = undefined;
  this.imWorkingOnIt = false;
  this.queuedCallbacks = [];
  this.keyManager = km;

  this.generateKey = function(table, done) {
    if (this.imWorkingOnIt) {
      this.queuedCallbacks.push(done);
      console.log('KG deferred call. Pending CBs: ' + this.queuedCallbacks.length);
      return;
    }

    var self = this;
    if ((typeof(this.nextKey) === 'undefined') || (this.nextKey > this.upperBound)) {
      // set a semaphore & add the callback to the queued callback list
      this.imWorkingOnIt = true;
      this.queuedCallbacks.push(done);
      this.keyManager.requestKeys(function(err, nKey, uBound) {
        if (err) { return done(err); }
        self.nextKey = nKey;
        self.upperBound = uBound;
        var theCallbackList = self.queuedCallbacks;
        self.queuedCallbacks = [];
        self.imWorkingOnIt = false;
        theCallbackList.forEach(function(f) {
          // rather than making the final callback directly,
          // call KeyGenerator.generateKey() with the original
          // callback
          setImmediate(function() { self.generateKey(table, f); });
        });
      });
    } else {
      console.log('KG immediate call', self.nextKey);
      var z = self.nextKey++;
      setImmediate(function() { done(z); });
    }
  }
}
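To watch the deferral behavior without a real database, a stubbed key manager along these lines can stand in for require('key-series-request') (the stub, its delay, and the block size are invented for the demo):

// Hypothetical stub: hands out blocks of 3 keys after a short delay.
var keyManager = {
  nextBlockStart: 1,
  requestKeys: function(done) {
    var self = this;
    setTimeout(function() {
      var lower = self.nextBlockStart;
      self.nextBlockStart += 3;
      done(null, lower, lower + 2); // err, nextKey, upperBound
    }, 100);
  }
};

var kg = new KeyGenerator(keyManager);
for (var i = 0; i < 7; i++) {
  kg.generateKey('some_table', function(key) {
    console.log('got key', key);
  });
}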
If your Node.js code to calculate the next key didn't need to execute an async operation, then you wouldn't run into synchronization issues, because there is only one JavaScript thread executing code. Access to the nextKey/upperBound variables will be done in sequence by only one thread (i.e. request 1 will access them first, then request 2, then request 3, et cetera). In the Java world you will always need synchronization, because multiple threads will be executing even if you didn't make a DB call.
However, in your Node.js code, since you are making an async call to get the nextKey, you could get strange results. There is still only one JavaScript thread executing your code, but it would be possible for request 1 to make the call to the DB, then Node.js might accept request 2 (while request 1 is getting data from the DB) and this second request will also make a request to the DB to get keys. Let's say that request 2 gets data from the DB quicker than request 1 and updates the nextKey/upperBound variables with values 100/150. Once request 1 gets its data (say values 50/100), it will update nextKey/upperBound again. This scenario wouldn't result in duplicate keys, but you might see gaps in your keys (for example, not all keys from 100 to 150 will be used, because request 1 eventually reset the values to 50/100).
This makes me think that you will need a way to sync access, but I am not exactly sure what will be the best way to achieve this.
I'm using mongoose to insert some data into mongodb. The code looks like:
var mongoose = require('mongoose');
mongoose.connect('mongo://localhost/test');
var conn = mongoose.connection;

// insert users
conn.collection('users').insert([{/*user1*/}, {/*user2*/}], function(err, docs) {
  var user1 = docs[0], user2 = docs[1];
  // insert channels
  conn.collection('channels').insert([{userId: user1._id}, {userId: user2._id}], function(err, docs) {
    var channel1 = docs[0], channel2 = docs[1];
    // insert articles
    conn.collection('articles').insert([{userId: user1._id, channelId: channel1._id}, {}], function(err, docs) {
      var article1 = docs[0], article2 = docs[1];
    });
  });
});
You can see there are a lot of nested callbacks there, so I'm trying to use q to refactor it.
I hope the code will look like:
Q.fcall(step1)
  .then(step2)
  .then(step3)
  .then(step4)
  .then(function (value4) {
    // Do something with value4
  }, function (error) {
    // Handle any error from step1 through step4
  })
  .end();
But I don't know how to do it.
You'll want to use Q.nfcall, documented in the README and the Wiki. All Mongoose methods are Node-style. I'll also use .spread instead of manually destructuring .then.
var Q = require('q');
var mongoose = require('mongoose');
mongoose.connect('mongo://localhost/test');
var conn = mongoose.connection;

var users = conn.collection('users');
var channels = conn.collection('channels');
var articles = conn.collection('articles');

function getInsertedArticles() {
  return Q.nfcall(users.insert.bind(users), [{/*user1*/}, {/*user2*/}]).spread(function (user1, user2) {
    return Q.nfcall(channels.insert.bind(channels), [{userId: user1._id}, {userId: user2._id}]).spread(function (channel1, channel2) {
      return Q.nfcall(articles.insert.bind(articles), [{userId: user1._id, channelId: channel1._id}, {}]);
    });
  });
}
getInsertedArticles()
  .spread(function (article1, article2) {
    // you only get here if all three of the above steps succeeded
  })
  .fail(function (error) {
    // you get here if any of the above three steps failed
  });
In practice, you will rarely want to use .spread, since you usually are inserting an array whose size you don't know. In that case the code can look more like the sketch below (which also illustrates Q.nbind).
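A minimal sketch of that variant, reusing the collections from the previous snippet (the mapping logic here is illustrative, not from the original answer):

var insertUsers = Q.nbind(users.insert, users);
var insertChannels = Q.nbind(channels.insert, channels);
var insertArticles = Q.nbind(articles.insert, articles);

function getInsertedArticles() {
  return insertUsers([{/*user1*/}, {/*user2*/}]).then(function (userDocs) {
    return insertChannels(userDocs.map(function (user) {
      return {userId: user._id};
    })).then(function (channelDocs) {
      return insertArticles(channelDocs.map(function (channel, i) {
        return {userId: userDocs[i]._id, channelId: channel._id};
      }));
    });
  });
}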
To compare with the original one is not quite fair, because your original has no error handling. A corrected Node-style version of the original would be like so:
var mongoose = require('mongoose');
mongoose.connect('mongo://localhost/test');
var conn = mongoose.connection;

function getInsertedArticles(cb) {
  // insert users
  conn.collection('users').insert([{/*user1*/}, {/*user2*/}], function(err, docs) {
    if (err) {
      cb(err);
      return;
    }
    var user1 = docs[0], user2 = docs[1];
    // insert channels
    conn.collection('channels').insert([{userId: user1._id}, {userId: user2._id}], function(err, docs) {
      if (err) {
        cb(err);
        return;
      }
      var channel1 = docs[0], channel2 = docs[1];
      // insert articles
      conn.collection('articles').insert([{userId: user1._id, channelId: channel1._id}, {}], function(err, docs) {
        if (err) {
          cb(err);
          return;
        }
        var article1 = docs[0], article2 = docs[1];
        cb(null, [article1, article2]);
      });
    });
  });
}
getInsertedArticles(function (err, articles) {
  if (err) {
    // you get here if any of the three steps failed.
    // `articles` is `undefined`.
  } else {
    // you get here if all three succeeded.
    // `err` is null.
  }
});
With the alternative deferred promise implementation, you may do it as follows:
var mongoose = require('mongoose');
mongoose.connect('mongo://localhost/test');
var conn = mongoose.connection;

// Setup 'pinsert', promise version of the 'insert' method
var promisify = require('deferred').promisify;
mongoose.Collection.prototype.pinsert = promisify(mongoose.Collection.prototype.insert);

var user1, user2;

// insert users
conn.collection('users').pinsert([{/*user1*/}, {/*user2*/}])
  // insert channels
  .then(function (users) {
    user1 = users[0]; user2 = users[1];
    return conn.collection('channels').pinsert([{userId: user1._id}, {userId: user2._id}]);
  })
  // insert articles
  .match(function (channel1, channel2) {
    return conn.collection('articles').pinsert([{userId: user1._id, channelId: channel1._id}, {}]);
  })
  .done(function (articles) {
    // Do something with articles
  }, function (err) {
    // Handle any error that might have occurred on the way
  });
Considering Model.save instead of Collection.insert (quite the same in our case).
You don't need to use Q: you can wrap the save method yourself and return a Mongoose Promise directly.
First create a utility function to wrap the save function. It's not very clean, but something like:
// Utility function (put it in a better place)
var saveInPromise = function (model) {
  var promise = new mongoose.Promise();
  model.save(function (err, result) {
    promise.resolve(err, result);
  });
  return promise;
}
Then you can use it instead of save to chain your promises
var User = mongoose.model('User');
var Channel = mongoose.model('Channel');
var Article = mongoose.model('Article');

// Step 1
var user = new User({data: 'value'});
saveInPromise(user).then(function () {
  // Step 2
  var channel = new Channel({user: user.id});
  return saveInPromise(channel);
}).then(function (channel) {
  // Step 3
  var article = new Article({channel: channel.id});
  return saveInPromise(article);
}, function (err) {
  // A single place to handle your errors
});
I guess that's the kind of simplicity we are looking for, right? Of course the utility function could be implemented with better integration with Mongoose.
Let me know what you think about that.
By the way there is an issue about that exact problem in the Mongoose Github:
Add 'promise' return value to model save operation
I hope it's gonna be solved soon. I think it takes some times because they are thinking of switching from mpromise to Q: See here and then here.
Two years later, this question just popped up in my RSS client ...
Things have moved on somewhat since May 2012 and we might choose to solve this one in a different way now. More specifically, the Javascript community has become "reduce-aware" since the decision to include Array.prototype.reduce (and other Array methods) in ECMAScript5. Array.prototype.reduce was always (and still is) available as a polyfill but was little appreciated by many of us at that time. Those who were running ahead of the curve may demur on this point, of course.
The problem posed in the question appears to be formulaic, with rules as follows:
The objects in the array passed as the first param to conn.collection(table).insert() are built as follows (where N corresponds to the object's index in the array):
[ {}, ... ]
[ {userId:userN._id}, ... ]
[ {userId:userN._id, channelId:channelN._id}, ... ]
table names (in order) are: users, channels, articles.
the corresponding object properties are: user, channel, article (i.e. the table names without the pluralizing 's').
A general pattern, from this article by Taoofcode, for making asynchronous calls in series is:
function workMyCollection(arr) {
  return arr.reduce(function(promise, item) {
    return promise.then(function(result) {
      return doSomethingAsyncWithResult(item, result);
    });
  }, q());
}
With quite light adaptation, this pattern can be made to orchestrate the required sequencing:
function cascadeInsert(tables, n) {
  /*
   * tables: array of unpluralised table names
   * n: number of users to insert.
   * returns promise of completion|error
   */
  var ids = []; // this outer array is available to the inner functions (to be read and written to).
  for (var i = 0; i < n; i++) { ids.push({}); } // initialize the ids array with n plain objects.
  return tables.reduce(function (promise, t, index) {
    return promise.then(function (docs) {
      if (docs) { // skip on the first pass, where nothing has been inserted yet
        var prev = tables[index - 1]; // docs came from the previous table's insert
        for (var i = 0; i < ids.length; i++) {
          if (!docs[i]) throw (new Error(t + ": returned documents list does not match the request")); // or simply `continue;` to be error tolerant (if acceptable server-side).
          ids[i][prev + 'Id'] = docs[i]._id; // progressively add properties to the `ids` objects
        }
      }
      return insert(ids, t + 's');
    });
  }, Q());
}
Lastly, here's the promise-returning worker function, insert() :
function insert(ids, t) {
  /*
   * ids: array of plain objects with properties as defined by the rules
   * t: table name.
   * returns promise of docs
   */
  var dfrd = Q.defer();
  conn.collection(t).insert(ids, function(err, docs) {
    (err) ? dfrd.reject(err) : dfrd.resolve(docs);
  });
  return dfrd.promise;
}
Thus, you can pass the actual table/property names and the number of users to insert as parameters to cascadeInsert.
cascadeInsert(['user', 'channel', 'article'], 2).then(function () {
  // you get here if everything was successful
}).catch(function (err) {
  // you get here if anything failed
});
This works nicely because the tables in the question all have regular plurals (user => users, channel => channels). If any of them were irregular (e.g. stimulus => stimuli, child => children), then we would need to rethink (and probably implement a lookup hash). In any case, the adaptation would be fairly trivial.
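For illustration, such a lookup hash need be no more than this (the names here are invented):

// Hypothetical plural lookup: fall back to the naive "+s" for regular nouns.
var plurals = { stimulus: 'stimuli', child: 'children' };

function pluralize(t) {
  return plurals[t] || (t + 's');
}

// cascadeInsert would then call insert(ids, pluralize(t)) instead of insert(ids, t + 's').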
Today we have mongoose-q as well. A plugin to mongoose that gives you stuff like execQ and saveQ which return Q promises.
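For a flavor of what that gives you, a brief sketch (assuming a registered User model; based on the mongoose-q README):

// Decorate mongoose so models gain Q-returning variants such as saveQ and execQ.
var mongoose = require('mongoose-q')(require('mongoose'));

var User = mongoose.model('User'); // assumes a User schema is registered elsewhere

var user = new User({data: 'value'});
user.saveQ().then(function (saved) {
  // the saved user document (exact shape depends on the mongoose version)
}).catch(function (err) {
  // handle any error
}).done();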