How do we know which MongoDB methods have a built-in promise?
eg: "updateOne() , findOne()" these methods have inbuilt promises and and we can access the response using ".then" but for lots of other mongoDB methods lack this feature how can we be sure of which methods dont have inbuilt promise in them?
eg: "find()" has no inbuilt promise so we cannott perform "find().then((response)=>{})" this will give an error.
Whereas "findOne().then((response)=>{})" will work without any issue.
This is inconsistent across the NodeJS MongoDB driver as some methods return more complex objects for manipulating the returned values. For example, .find() returns a FindCursor object which can be used to iterate over results by calling .next() repeatedly.
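For instance, a rough sketch of iterating a cursor directly (assuming an existing db handle and a products collection):
const cursor = db.collection('products').find(); // a FindCursor, not a promise
// hasNext()/next() do return promises, so we can await them one document at a time
while (await cursor.hasNext()) {
  const doc = await cursor.next();
  console.log(doc);
}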
I would recommend referring to the MongoDB documentation for the NodeJS driver (found here; common usage examples are here) fairly frequently. The documentation is reasonably extensive and should help with issues like this.
You can also consider using TypeScript, which I have personally found helpful in cases like this because you can easily tell what type of object a function/method call returns.
There is some inconsistency around the find method in the MongoDB native driver for Node.js. The reason is that find() returns a cursor. What we can do here is convert that cursor to an array using the toArray() method.
The best solution here would be to use async/await over promise chaining.
This would provide us with a cleaner and easier syntax to work with.
E.g., let's say we want to find all the products in the products collection. Below is a function that does exactly that.
const findAll = async () => {
  // find() returns a cursor; toArray() resolves it to an array of documents
  const products = await db.products.find().toArray();
  console.log(products);
  return products;
};
Calling the above function gives us all the products in the products collection. Just by looking at the code, we can see that it reads more cleanly than chaining promises everywhere.
I would like to ask about the difference between Promise.all and Axios.all for making several API requests.
I found that both work well in my code, but I can't easily find the reason why.
Could you please explain?
The Axios documentation on NPM says that Axios.all() is deprecated and you should use Promise.all() in its place. I do not believe there is any intended difference between the two.
In fact, if you look in the current Axios source, you see this:
// Expose all/spread
axios.all = function all(promises) {
  return Promise.all(promises);
};
So, they are identical.
I presume that Axios.all() existed historically because Axios wanted to be able to run in environments that didn't have full native promise support, so it supplied promise functionality that might not otherwise be present.
Since all modern environments contain Promise.all(), Axios.all() is no longer necessary.
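For example, the two calls are interchangeable (the URLs here are just placeholders):
// Deprecated style
axios.all([axios.get('/users'), axios.get('/orders')])
  .then(axios.spread((users, orders) => console.log(users.data, orders.data)));
// Preferred style: plain Promise.all with array destructuring
Promise.all([axios.get('/users'), axios.get('/orders')])
  .then(([users, orders]) => console.log(users.data, orders.data));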
I'm currently reading this excellent MDN tutorial.
There seem to be two main ways to perform queries with Mongoose, one being more 'functional', using chained notation, as in the sketch below.
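Roughly, the two styles look like this (the model and field names are just placeholders):
// Style 1: pass the whole filter object to find()
Athlete.find({ sport: 'Tennis', age: { $gt: 17 } }, (err, athletes) => { /* ... */ });
// Style 2: start from an empty find() and chain query-builder methods
Athlete.find()
  .where('sport').equals('Tennis')
  .where('age').gt(17)
  .limit(5)
  .exec((err, athletes) => { /* ... */ });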
My question is: does this kind of notation, starting with an empty .find(), first pull the whole collection into the app and then filter it through the successive chained calls, or does it compile the query (so the filtering happens on the MongoDB instance) and only run the callback locally?
The dot-based notation looks nicer and cleaner, but if the filter does not get compiled before the request is made, it seems like a bad idea to pull the whole collection into the app for every simple query. Am I right?
If this is the case, are we in one of those situations where we have to favour object composition over function composition (as Eric Elliott said), or is that not the point here?
I'm working with an AWS Lambda function that has a sort of 'map/reduce' feel to it, but the 'map' part of it, the part that makes multiple calls, is async.
Using the Node 6 standard library, is there a dynamic way to get all the results returned to a shared spot?
What I've thought about so far:
async/await is a great abstraction, but to my knowledge it isn't in Node 6, just Node 8.
Array.prototype.reduce takes a callback, but the structure of an HTTP request doesn't seem to qualify, though I may well be wrong.
I've thought about a really suboptimal solution where each callback pushes into a shared queue or array, and a loop after all the requests checks the length of the array. I don't like this solution, though.
Could you guys point me in the right direction or show me some code that would do this?
Bluebird is your friend.
To convert callback functions into promises you can use .promisify() or .fromCallback().
To do map/reduce over an array of promises, you can use .map() and .reduce().
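A minimal sketch, assuming a hypothetical callback-style fetchItem(id, cb) function, an ids array, and the usual Lambda handler callback:
const Promise = require('bluebird');
// fetchItem(id, (err, result) => ...) is a placeholder for your async call
const fetchItemAsync = Promise.promisify(fetchItem);
Promise.map(ids, (id) => fetchItemAsync(id))           // "map": one async call per id
  .reduce((total, result) => total + result.count, 0)  // "reduce": fold the results together
  .then((total) => callback(null, total))
  .catch((err) => callback(err));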
I have been struggling a lot to mock the database in one of my side projects while writing unit tests. Can somebody please help me out here? Below is what the scenario looks like:
Before that, the source code is here - https://github.com/sunilkumarc/track-courier
I have created a model using the sequelize module in Node.js, and I access my db through this model.
I want to mock the db calls when running the tests, for example the findOne method here, which returns a promise (https://github.com/sunilkumarc/track-courier/blob/master/models/parcels.js#L4). Basically, when hitting this particular endpoint I want to skip accessing the db.
Any help is appreciated!
Regards, Sunil
While people have pointed out in the comments to a similar question that it's probably wise to use a test database, I'm going to answer the question directly.
Using Jest, you can do the following to mock individual calls on specific models:
// Build an unsaved instance to use as the mocked return value
const myUser = User.build({
  ...attributes,
});
// Make User.findOne resolve with the prepared instance instead of hitting the db
jest.spyOn(User, 'findOne').mockImplementation((options) => Promise.resolve(myUser));
const mockedUser = await User.findOne({});
In my experience, I agree with the commenters. Mocking specific functions like this is not always that useful, because the mocks may not accurately reflect the return values of a given Sequelize function. For example, User#findOne may return null; if you provide rejectOnEmpty, you will have to build your own logic for handling that, which may differ from the exact logic used in the Sequelize library.
Ultimately your code is likely responsible for correctly handling whatever Sequelize returns, and with that level of integration with your data layer, this is going to be very difficult to mock correctly - not to mention extremely tedious.
Documentation for Model#findAll, where you can see rejectOnEmpty: https://sequelize.org/api/v6/class/src/model.js~model#static-method-findAll
I used to use Sinon.JS for mocking anything; in this case I did it as:
// assume "Users" is our table
sinon.replace(Users, "create", () => console.log(`mocked "Users" table's "create" method`));
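Note that sinon.replace works on the default sandbox, so you can undo the replacement between tests with sinon.restore(), e.g. in an afterEach hook:
// restore the real Users.create after each test
afterEach(() => sinon.restore());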
I am developing an Angular application in which I used the angular.forEach method to iterate over arrays (not over objects). It works similarly to Array.prototype.forEach, so I have some questions about using it.
Which method is preferable according to standards?
Which is better performance-wise? I tested on jsPerf and it gave the same result.
According to the angular.forEach documentation, it does prevent TypeError(s).
Unlike ES262's Array.prototype.forEach, Providing 'undefined' or 'null' values for obj will not throw a TypeError, but rather just return the value provided.
So it pretty much does what a library always does: it wraps up some functionality, like checking for null, undefined, etc., and handles those occurrences so the developer doesn't have to deal with them.
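For instance, a minimal illustration (assuming Angular 1.x is loaded):
var maybeList = null; // e.g. data that has not arrived yet
// angular.forEach: no TypeError, the value is simply returned
angular.forEach(maybeList, function (item) { console.log(item); });
// Array.prototype.forEach: throws when called on null/undefined
Array.prototype.forEach.call(maybeList, function (item) { console.log(item); });
// TypeError: Array.prototype.forEach called on null or undefined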
If you want to handle those TypeErrors yourself, go with Array.prototype.forEach; otherwise use the library function.
Performance-wise, it won't differ.