Call getUsers() function from MongoDB driver - javascript

I am currently building an API application with Sails.js that checks the status of, and gets information (such as users) from, various types of databases (e.g. MongoDB, MySQL), depending on user input. Here is a snippet of the code I am working on. The localhost address is just the test database I am connecting to; in the future it will be supplied by the user.
var mp = require('mongodb-promise');
var MongoClient = require('mongodb');

mp.MongoClient.connect("mongodb://#localhost:27017/test")
    .then(function(db) {
        db.getUsers().then(function(users) {
            res.ok(users);
        })
    })
    .fail(function(err) {
        console.log(err);
    })
I am attempting to use promises to handle the async issue. The problem I am having is that it doesn't work. It tells me that Object [object Object] has no method 'getUsers'. I have searched and can't seem to find a solution that works.
If I change the function to the below, I get some data back.
mp.MongoClient.connect("mongodb://#localhost:27017/IMS")
    .then(function(db) {
        db.stats().then(function(stats) {
            return res.ok(stats);
        })
    })
    .fail(function(err) {
        console.log(err);
        dbObject.vipUp = false;
    })
I am not sure what the issue is or how to solve it.

What you are doing here is using the node native driver methods to connect to and inspect the database. There is in fact "no such method" as .getUsers() in this API, nor indeed in any other driver API.
The .getUsers() function is just a "shell helper" that is basically implemented like this:
function (args) {
    var cmdObj = { usersInfo: 1 };
    Object.extend(cmdObj, args);
    var res = this.runCommand(cmdObj);
    if (!res.ok) {
        var authSchemaIncompatibleCode = 69;
        if (res.code == authSchemaIncompatibleCode ||
            (res.code == null && res.errmsg == "no such cmd: usersInfo")) {
            // Working with 2.4 schema user data
            return this.system.users.find({}).toArray();
        }
        throw Error(res.errmsg);
    }
    return res.users;
}
So what you should be able to see here is that this normally wraps a "command" form, or otherwise, for compatibility with MongoDB 2.4, falls back to querying the system.users collection on the current database.
Therefore, instead of calling a method that does not exist, you then need to use the .command() method instead:
mp.MongoClient.connect("mongodb://#localhost:27017/test")
    .then(function(db) {
        db.command({ "usersInfo": 1 }).then(function(users) {
            res.ok(users);
        })
    })
    .fail(function(err) {
        console.log(err);
    })
Or in the case of connecting to a MongoDB 2.4 instance, then fetch from the .collection():
mp.MongoClient.connect("mongodb://#localhost:27017/test")
    .then(function(db) {
        db.collection('system.users').find().toArray().then(function(users) {
            res.ok(users);
        })
    })
    .fail(function(err) {
        console.log(err);
    })
At any rate, you really should be establishing the database connection elsewhere in your application (or re-using the underlying driver connection from another store), and then calling methods on the connection already established. This is always preferable to opening a new connection on every request for the information you want to retrieve.
Also, recent versions of the node native driver support promises right out of the box, so there may be no need to configure anything else, depending on how you intend to use it.
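For reference, here is a minimal sketch of the same lookup using only the native driver's built-in promises. It assumes a 2.x driver, where MongoClient.connect() returns a promise when no callback is given; the connection string is the same placeholder used above:
var MongoClient = require('mongodb').MongoClient;

// MongoClient.connect() returns a promise when no callback is supplied
MongoClient.connect("mongodb://#localhost:27017/test")
    .then(function(db) {
        // run the same command that the getUsers() shell helper wraps
        return db.command({ usersInfo: 1 })
            .then(function(users) {
                db.close();
                return users;
            });
    })
    .then(function(users) {
        res.ok(users);
    })
    .catch(function(err) {
        console.log(err);
    });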

Related

Connecting two promises in a Firebase web app

I am using a Realtime Database to handle some data in a Firebase web app.
In the code hereafter, I want to insert a record in the DB only if fieldOne and fieldTwo aren't going to be duplicated.
dbReference.orderByChild("fieldOne").equalTo(fieldOneVal).once('value')
    .then(function(snapshot) {
        if (snapshot.exists()) {
            alert('This fieldOne has already been used.')
            reject();
        }
    }).then( // What is the correct way to connect this line and the following ??
        dbReference.orderByChild("fieldTwo").equalTo(fieldTwoVal).once('value')
            .then(function(snapshot) {
                if (snapshot.exists()) {
                    alert('This NAME has already been used.')
                    reject();
                }
                // All is now OK.
                .... do the final things here .....
            }).catch(function(error) {
                // An error happened.
                alert('This record cannot be inserted.')
            });
At this point, I am able to tweak the code to make things work the way I wish. But my issue is that I am not doing things the proper way (I know that because of some messages I can see in the console). The comment in my code shows where I need to know the correct way to connect the two parts: what should go between that first .then() and the following query?
For information, the DB looks like this:
MyList
  + -M93j....443cxYYDSN
      fieldOne: "asdc..."
      fieldTwo: "Gkk...."
  + -M94j.........OZS6FL
      fieldOne: "afc..."
      fieldTwo: "SDFSk...."
The following Promise chaining, combined with throwing errors, should do the trick.
dbReference
    .orderByChild('fieldOne')
    .equalTo(fieldOneVal)
    .once('value')
    .then(function (snapshot) {
        if (snapshot.exists()) {
            throw new Error('fieldOneExists');
        }
        return dbReference
            .orderByChild('fieldTwo')
            .equalTo(fieldTwoVal)
            .once('value');
    })
    .then(function (snapshot) {
        if (snapshot.exists()) {
            throw new Error('fieldTwoExists');
        }
        // All is now OK.
        //.... do the final things here .....
    })
    .catch(function (error) {
        if (
            error.message === 'fieldOneExists' ||
            error.message === 'fieldTwoExists'
        ) {
            console.log('This record cannot be inserted');
        } else {
            console.log('other error');
        }
    });
However, it would probably be better to use a Transaction for checking the existence of the two values for the fieldOne and fieldTwo fields.
The problem is that Realtime Database transactions don't work with queries: you need to know the exact location of the data to be modified (or to be checked for existence/non-existence). So you would need to adapt your data model if it turns out that you really need a transaction (which depends on your exact global requirements).
For example, you could create database nodes whose keys are the concatenation of the fieldOne and fieldTwo values and check the existence of such a node in a transaction. But, again, the feasibility of this approach depends on your exact global requirements, which we don't know.
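As a rough illustration of that idea (the fieldPairs node name and the key format are purely hypothetical and would need to match your own data model):
// Hypothetical "index" node whose keys combine the two field values.
var pairKey = fieldOneVal + '_' + fieldTwoVal;
var pairRef = firebase.database().ref('fieldPairs/' + pairKey);

pairRef.transaction(function (current) {
    if (current === null) {
        return true;   // claim the key: no record uses this pair yet
    }
    return;            // abort the transaction: the pair already exists
}).then(function (result) {
    if (result.committed) {
        // safe to write the actual record under MyList here
    } else {
        console.log('This record cannot be inserted');
    }
});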
Try the hasChild() method.
if (snapshot.hasChild('name')) {
    // ... do stuff!
}
Or, create a new node in your Firebase Realtime Database named usernames and add each username to this list. In the future, before inserting a new username, check whether it's already present in this list, for example:
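A minimal sketch of that check (the usernames node and the newUsername variable are hypothetical):
// Hypothetical "usernames" index node: one child per username already taken.
var usernameRef = firebase.database().ref('usernames/' + newUsername);

usernameRef.once('value').then(function (snapshot) {
    if (snapshot.exists()) {
        console.log('This username has already been used.');
        return;
    }
    // Not present yet: record it, then insert the actual record.
    return usernameRef.set(true);
});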

Cloud HTTPS Functions: returning a Promise which is inside a Promise

I'm currently working on an HTTPS Cloud Function using Firebase, consisting in deleting the post my Android user requested.
General idea
The workflow is (the whole code is available at the end of this SO question):
1) Firebase checks the user identity (admin.auth().verifyIdToken);
2) Firestore gets the data of the post that must be deleted (deleteDbEntry.get().then());
3) Cloud Storage prepares itself to delete the file found in the gotten data (.file(filePath).delete());
4) Firestore prepares a batch to delete the post (batch.delete(deleteDbEntry);) and to update the likes/unlikes using the gotten data (batch.update(updateUserLikes, ...));
5) the promise of the file deletion and of the batch commit is executed (return Promise.all([deleteFile, batch_commit])).
Expected behavior
I want to check the user identity. If that succeeds, get the data of the post requested for deletion using Firebase. If that succeeds, I want to execute the Firestore batch plus the Cloud Storage file deletion in the same promise (that's why I use Promise.all([deleteFile, batch_commit]).then()). If the identity check fails, or if the data get fails, or if the batch fails, I want to tell the Android app. Likewise if everything succeeds.
As all of these operations are in a Cloud HTTPS Function, I must return a promise. This promise, I think, would correspond to all of these operations if they are successful, or to an error if at least one is not (?).
Actual behavior
For the moment, I just return the promise of the Firebase user identity check.
My problem & My question
I can't go from the actual behavior to the expected behavior because:
1. It's not very clear in my mind whether I should return the promise corresponding to "all these operations are successful, or at least one is not" in this Cloud HTTPS Function.
2. As these operations are nested (except the Cloud Storage file deletion + Firestore post deletion, which are present in a batch), I can't return something like Promise.all().
My question
Could you please tell me if I'm right (point 1.) and, if not: what should I do? If yes: how could I do it, because of point 2.?
Whole Firebase Cloud HTTPS Function code
Note: I've removed my input data controls to make my code more readable.
exports.deletePost = functions.https.onCall((data, context) => {
    return admin.auth().verifyIdToken(idToken)
        .then(function(decodedToken) {
            const uid = decodedToken.uid;
            const type_of_post = data.type_of_post;
            const the_post = data.the_post;
            const deleteDbEntry = admin_firestore.collection('list_of_' + type_of_post).doc(the_post);
            const promise = deleteDbEntry.get().then(function(doc) {
                const filePath = type_of_post + '/' + uid + '/' + data.stored_image_name;
                const deleteFile = storage.bucket('android-f.appspot.com').file(filePath).delete();
                const batch = admin.firestore().batch();
                batch.delete(deleteDbEntry);
                if (doc.data().number_of_likes > 0) {
                    const updateUserLikes = admin_firestore.collection("users").doc(uid);
                    batch.update(updateUserLikes, "likes", FieldValue.increment(-doc.data().number_of_likes));
                }
                const batch_commit = batch.commit();
                return Promise.all([deleteFile, batch_commit]).then(function() {
                    return 1;
                }).catch(function(error) {
                    console.log(error);
                    throw new functions.https.HttpsError('unknown', 'Unable to delete the post. (2)');
                });
            }).catch(function(error) {
                console.log(error);
                throw new functions.https.HttpsError('unknown', 'Unable to delete the post. (1)');
            });
            return promise;
        }).catch(function(error) {
            console.log(error);
            throw new functions.https.HttpsError('unknown', 'An error occurred while verifying the token.');
        });
});
You should note that you are actually defining a Callable Cloud Function and not an HTTPS one, since you do:
exports.deletePost = functions.https.onCall((data, context) => {..});
One of the advantages of a Callable Cloud Function over an HTTPS one is that it "automatically deserializes the request body and validates auth tokens".
So you can simply get the user uid with context.auth.uid.
Now, regarding the way of "orchestrating" the different calls, IMHO you should just chain the different Promises returned by the asynchronous Firebase methods (the ones of Firestore and the one of Cloud Storage), as follows:
exports.deletePost = functions.https.onCall((data, context) => {
    //....
    const uid = context.auth.uid;
    let number_of_likes;
    const type_of_post = data.type_of_post;
    const the_post = data.the_post;
    const deleteDbEntry = admin_firestore.collection('list_of_' + type_of_post).doc(the_post);

    return deleteDbEntry.get()
        .then(doc => {
            // capture the value here, since doc is out of scope in the next then()
            number_of_likes = doc.data().number_of_likes;
            const filePath = type_of_post + '/' + uid + '/' + data.stored_image_name;
            return storage.bucket('android-f.appspot.com').file(filePath).delete();
        })
        .then(() => {
            const batch = admin.firestore().batch();
            batch.delete(deleteDbEntry);
            if (number_of_likes > 0) {
                const updateUserLikes = admin_firestore.collection("users").doc(uid);
                batch.update(updateUserLikes, "likes", FieldValue.increment(-number_of_likes));
            }
            return batch.commit();
        }).catch(function (error) {
            console.log(error);
            throw new functions.https.HttpsError('....', '.....');
        });
});
I don't think using Promise.all() will bring any interest, in your case, because, as explained here, "if any of the passed-in promises reject, Promise.all asynchronously rejects with the value of the promise that rejected, whether or not the other promises have resolved".
At the time of writing, there is no way to group all of these asynchronous calls to different Firebase services into one atomic operation.
Even if the batched write at the end is atomic, it could happen that the file in Cloud Storage is correctly deleted but that the batched write to Firestore is not executed, for example because there is a problem with the Firestore service.
Also, note that you only need one exception handler at the end of the Promise chain. If you want to differentiate the cause of the exception, so that you can send a different error message to the front-end, you could use the approach presented in this article.
The article shows how to define different custom Error classes (derived from the standard built-in Error object) which are used to check the kind of error in the exception handler.
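A rough sketch of that pattern, with purely illustrative class names and messages (not taken from the article), could look like this:
// Illustrative custom error types derived from the built-in Error object.
class FileDeletionError extends Error {
    constructor(message) {
        super(message);
        this.name = 'FileDeletionError';
    }
}

class BatchWriteError extends Error {
    constructor(message) {
        super(message);
        this.name = 'BatchWriteError';
    }
}

// Each step of the chain throws its own error type, and the single catch()
// at the end inspects the type to decide which message to send back:
// .catch(error => {
//     if (error instanceof FileDeletionError) {
//         throw new functions.https.HttpsError('unknown', 'Could not delete the file.');
//     }
//     if (error instanceof BatchWriteError) {
//         throw new functions.https.HttpsError('unknown', 'Could not update Firestore.');
//     }
//     throw new functions.https.HttpsError('unknown', 'Unexpected error.');
// });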

Writing middleware for event handlers

Use case:
I have to handle several events which require an "available client". So in each event handler I first have to try to get an available client. If there is no client available I'll respond with a "Service unavailable" message. Right now I've implemented that requirement like this:
public constructor(consumer: RpcConsumer) {
    consumer.on('requestA', this.onRequestA);
}

private onRequestA = async (msg: RpcConsumerMessage) => {
    const client: RequestClient = this.getClient(msg);
    if (client == null) {
        return;
    }
    msg.reply(await client.getResponseA());
}

private getClient(msg: RpcConsumerMessage): RequestClient {
    const client: RequestClient = this.clientManager.getClient();
    if (client == null) {
        const err: Error = new Error('Currently there is no client available to process this request');
        msg.reply(undefined, MessageStatus.ServiceUnavailable, err);
        return;
    }
    return client;
}
The problem:
I don't want to check for an available client in every event handler again and again. Instead I thought a middleware would fit this use case perfectly. It would check for an available client and pass on the client instance if there is one. If there is no available client, it would respond with the error message.
The question:
How would I write such a middleware for this case?
Build a curried method for this:
private withClient(cb: (client: RequestClient) => string | Promise<string>) {
    return async (msg: RpcConsumerMessage) => {
        const client: RequestClient = this.clientManager.getClient();
        if (client == null) {
            const err: Error = new Error('Currently there is no client available to process this request');
            msg.reply(undefined, MessageStatus.ServiceUnavailable, err);
            return;
        }
        msg.reply(await cb(client));
    };
}
So you can use it as:
private onRequestA = this.withClient(client => client.getResponseA());
If I understand correctly I don't think you actually NEED middleware, although you might choose to go that route.
You can just have a module that is in charge of finding a client and serving one up if it is available. This would look something like this:
let _client;

module.exports = {
    getClient
};

function getClient() {
    return _client;
}

function setClient() {
    // Your logic to find an available client.
    // Run this whenever a client disconnects (if there is an event for that)
    // or keep retrying until you are connected to a client.
    _client = client; // When you find that client, set it to `_client`. It will be returned every time someone calls getClient().
}
The advantage here is that once you find a client, the module will serve up that same client until you are disconnected from it. The trick then is just making sure that you are always trying to connect to a client when you are disconnected, even when there are no requests. I hope this makes sense.
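For illustration, a handler could then consume that shared module like this (the ./clientStore path and the reply signature simply mirror the snippets above and are otherwise hypothetical):
// Hypothetical usage inside an event handler, re-using the shared client module.
const clientStore = require('./clientStore');

consumer.on('requestA', async (msg) => {
    const client = clientStore.getClient();
    if (client == null) {
        const err = new Error('Currently there is no client available to process this request');
        msg.reply(undefined, MessageStatus.ServiceUnavailable, err);
        return;
    }
    msg.reply(await client.getResponseA());
});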

PouchDB - Lazily fetch and replicate documents

TL;DR: I want a PouchDB db that acts like Ember Data: fetch from the local store first, and if not found, go to the remote. Replicate only that document in both cases.
I have a single document type called Post in my PouchDB/CouchDB servers. I want PouchDB to look at the local store, and if it has the document, return the document and start replicating. If not, go to the remote CouchDB server, fetch the document, store it in the local PouchDB instance, then start replicating only that document. I don't want to replicate the entire DB in this case, only things the user has already fetched.
I could achieve it by writing something like this:
var local = new PouchDB('local');
var remote = new PouchDB('http://localhost:5984/posts');

function getDocument(id) {
    return local.get(id).catch(function(err) {
        if (err.status === 404) {
            return remote.get(id).then(function(doc) {
                return local.put(doc);
            });
        }
        throw err;
    });
}
This doesn't handle the replication issue either, but it's the general direction of what I want to do.
I can write this code myself I guess, but I'm wondering if there's some built-in way to do this.
Unfortunately what you describe doesn't quite exist (at least as a built-in function). You can definitely fall back from local to remote using the code above (which is perfect BTW :)), but local.put() will give you problems, because the local doc will end up with a different _rev than the remote doc, which could mess with replication later on down the line (it would be interpreted as a conflict).
You should be able to use {revs: true} to fetch the doc with its revision history, then insert with {new_edits: false} to properly replicate the missing doc, while preserving revision history (this is what the replicator does under the hood). That would look like this:
var local = new PouchDB('local');
var remote = new PouchDB('http://localhost:5984/posts');

function getDocument(id) {
    return local.get(id).catch(function(err) {
        if (err.status === 404) {
            // revs: true gives us the critical "_revisions" object,
            // which contains the revision history metadata
            return remote.get(id, {revs: true}).then(function(doc) {
                // new_edits: false inserts the doc while preserving revision
                // history, which is equivalent to what replication does
                return local.bulkDocs([doc], {new_edits: false});
            }).then(function () {
                return local.get(id); // finally, return the doc to the user
            });
        }
        throw err;
    });
}
That should work! Let me know if that helps.
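If you also want the "start replicating only that document" part, one possible sketch uses the replication doc_ids filter; the live and retry options and the 'some-post-id' placeholder shown here are just one reasonable configuration, not a requirement:
// After the document is available locally, keep just that one doc in sync.
function replicateDocument(id) {
    return PouchDB.replicate(remote, local, {
        doc_ids: [id],   // restrict replication to this single document
        live: true,      // keep listening for future changes
        retry: true      // back off and retry if the connection drops
    });
}

// Usage: fetch (falling back to remote), then start one-document replication.
getDocument('some-post-id').then(function (doc) {
    replicateDocument(doc._id);
});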

What is the proper way to manage connections to Mongo with MongoJS?

I'm attempting to use MongoJS as a wrapper for the native Mongo driver in Node. I'm modeling the documents in my collection as JavaScript classes with methods like populate(), save(), etc.
In most languages, like C# and Java, I'm used to explicitly connecting and then disconnecting for every query. Most examples only show connecting, but never closing the connection when done. I'm uncertain whether the driver is able to manage this on its own or if I need to do it manually myself. Documentation is sparse.
Here's the relevant code:
User.prototype.populate = function(callback) {
    var that = this;
    this.db = mongo.connect("DuxDB");
    this.db.collection(dbName).findOne({email : that.email}, function(err, doc) {
        if (!err && doc) {
            that.firstName = doc.firstName;
            that.lastName = doc.lastName;
            that.password = doc.password;
        }
        if (typeof(callback) === "function") {
            callback.call(that);
        }
        that.db.close();
    });
};
I'm finding that as soon as I call the close() method on the MongoJS object, I can no longer open a new connection on subsequent calls. However, if I do not call this method, the Node process never terminates once all async calls finish, as if it is waiting to disconnect from Mongo.
What is the proper way to manage connections to Mongo with MongoJS?
You will get better performance from your application if you leave the connection(s) open, rather than disconnecting. Making a TCP connection, and, in the case of MongoDB, discovering the replica set/sharding configuration where appropriate, is relatively expensive compared to the time spent actually processing queries and updates. It is better to "spend" this time once and keep the connection open rather than constantly re-doing this work.
Don't open + close a connection for every query. Open the connection once, and re-use it.
Do something more like this, reusing your db connection for all calls:
User = function(db) {
    this.db = db;
};

User.prototype.populate = function(callback) {
    var that = this;
    this.db.collection(dbName).findOne({email : that.email}, function(err, doc) {
        if (!err && doc) {
            that.firstName = doc.firstName;
            that.lastName = doc.lastName;
            that.password = doc.password;
        }
        if (typeof(callback) === "function") {
            callback.call(that);
        }
    });
};
I believe it actually closes the connection after each request, but it sets {auto_reconnect:true} in the mongodb server config, so it will reopen a new connection whenever one is needed.
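A minimal sketch of that single-shared-connection pattern with mongojs might look like the following; the database and collection names are just placeholders carried over from the question:
// db.js - create the connection once and export it.
var mongojs = require('mongojs');

// mongojs opens the underlying connection lazily and re-uses it for every call.
var db = mongojs('DuxDB', ['users']);

module.exports = db;

// user.js - every model re-uses the same connection.
var db = require('./db');

User.prototype.populate = function(callback) {
    var that = this;
    db.users.findOne({ email: that.email }, function(err, doc) {
        if (!err && doc) {
            that.firstName = doc.firstName;
            that.lastName = doc.lastName;
            that.password = doc.password;
        }
        if (typeof callback === "function") {
            callback.call(that);
        }
        // no db.close() here; the process-wide connection stays open
    });
};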
