How to commit and roll back a transaction in Parse Cloud - JavaScript

I'm trying to insert an object into multiple table classes. If an error happens while inserting into any one of the classes, how do I roll back to the previous state, like BEGIN, COMMIT and ROLLBACK transactions in MySQL Server?

Parse does not have a rollback feature, based on their post: https://www.parse.com/questions/saveall-and-rollback--2

I don't think you can do transactions like in MySQL Server, or as if you were using a unit of work in Entity Framework.
The best option I can think of is to use the promise pattern. With promises, your code can be much cleaner than the nested code you get with callbacks. This way you can handle the error cases nicely, though it's still not the same as transactions.
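For illustration, here's a minimal sketch of that promise approach with manual compensation, using the old Parse JS SDK promise API. ClassA and ClassB are hypothetical class names, and the "rollback" is just deleting what was already saved:

var ObjectA = Parse.Object.extend("ClassA"); // hypothetical class
var ObjectB = Parse.Object.extend("ClassB"); // hypothetical class

var a = new ObjectA();
var b = new ObjectB();

a.save().then(function (savedA) {
  // second insert; if it fails, compensate by deleting the first
  return b.save().then(null, function (error) {
    return savedA.destroy().then(function () {
      return Parse.Promise.error(error); // re-raise so the caller still sees the failure
    });
  });
}).then(function () {
  console.log("both objects saved");
}, function (error) {
  console.log("save failed; first insert was undone by hand:", error);
});

Note this is compensation, not a real transaction: a crash between the two saves still leaves the first object behind.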

Related

Do Sequelize promises await Postgres trigger completion before they resolve?

I have a contact postgres table, with a contacted_date column. I recently created a contact_date table, which is meant to store each date a contact was contacted.
Because the codebase frequently refers to the contact table for the most recent contacted date, I haven't migrated all the data out of that table. Instead, contact_date contains each date a given contact was contacted, and a trigger on that table updates contact with the latest date for each contact.
If I use Sequelize to make a change to contact_date, does the promise await the successful completion of the trigger? Or does it only await the successful completion of the change to contact_date?
In most of my code, this isn't a big concern - if I persist a new contact date, then my code already has access to that date, and I can use that date in other calculations if I need it. But I'm running into difficulties with my tests, and it may be because of a race condition. In my tests, I'm creating new contacts with contact dates, and some of my tests are failing, perhaps because the promise is resolving before the trigger has finished doing its work.
Update: This SO post indicates that the trigger is part of the same transaction, implying that the Sequelize promise wouldn't resolve until the trigger had completed, but if someone knows for sure, I'd still like to hear dis/confirmation.
I hope you're using transactions for your queries, without which I feel this would not be possible.
Also, an alternative way to achieve your objective would be to use hooks (basically triggers at the application level) in Sequelize, instead of creating a trigger at the DB level.
Since you can pass the transaction object to hooks, operations performed in hooks are guaranteed to be part of the same transaction that caused the hook to run.
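For illustration, a minimal sketch of that hook approach. The model and column names (Contact, ContactDate, contacted_date) are assumptions based on the question:

const { Sequelize, DataTypes } = require('sequelize');
const sequelize = new Sequelize('postgres://user:pass@localhost:5432/mydb');

// models assumed from the question
const Contact = sequelize.define('contact', {
  contacted_date: DataTypes.DATE,
});
const ContactDate = sequelize.define('contact_date', {
  contactId: DataTypes.INTEGER,
  date: DataTypes.DATE,
});

// application-level "trigger": reuses whatever transaction the insert ran under
ContactDate.afterCreate(async (contactDate, options) => {
  await Contact.update(
    { contacted_date: contactDate.date },
    { where: { id: contactDate.contactId }, transaction: options.transaction }
  );
});

// usage: the hook's update commits (or rolls back) together with the insert
async function recordContact(contactId, date) {
  await sequelize.transaction(async (t) => {
    await ContactDate.create({ contactId, date }, { transaction: t });
  });
}

Awaiting recordContact then guarantees both writes are visible, which also sidesteps the race in the tests.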

How to mock Sequelize model methods in Node.js

I have been struggling a lot to mock the database in one of my side projects while writing unit tests. Can somebody please help me out here? Below is what the scenario looks like:
Before that, the source code is here - https://github.com/sunilkumarc/track-courier
I have created a model using the sequelize module in Node.js, and I access my DB through this model.
I want to mock the DB calls when running the tests. For example, the findOne method here, which returns a promise (https://github.com/sunilkumarc/track-courier/blob/master/models/parcels.js#L4). Basically, when exercising this particular endpoint I want to skip accessing the DB.
Any help is appreciated!
Regards, Sunil
While people have pointed out in the comments to a similar question that it's probably wise to include a test database, I'm going to answer the question directly.
Using Jest, you can do the following to mock individual calls on specific models:
const myUser = User.build({
  ...attributes,
});

jest.spyOn(User, 'findOne').mockImplementation((options) => Promise.resolve(myUser));

const mockedUser = await User.findOne({});
In my experience, I do agree with the commenters. Mocking specific functions like these is not always the most useful, as they may not accurately depict the return values of a specific Sequelize function. For example, User#findOne may return null. If you provide rejectOnEmpty, you will have to build your own logic for handling this, which may differ from the exact logic used in the Sequelize library.
Ultimately your code is likely responsible for handling whatever Sequelize returns correctly, and with the level of integration to your data layer, this is going to be something very difficult to mock correctly - not to mention extremely tedious.
Documentation for Model#findAll, where you can see rejectOnEmpty: https://sequelize.org/api/v6/class/src/model.js~model#static-method-findAll
I used to use Sinon.JS for mocking anything; in this case I did it as:
// assume "Users" is our table
sinon.replace(Users, "create", () => console.log(`mocked "Users" table's "create" method`));
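One caveat worth adding: sinon.replace patches the real model for everything that imports it, so undo it between tests, e.g.:

// in your test teardown, undo all replace/stub/spy calls
afterEach(() => sinon.restore());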

Multiple DBs in Meteor while keeping reactivity

Is there some way to use more than one MongoDB database in Meteor while keeping reactivity/oplog working? I've been reading about it (Post1), (Post2) and still don't see a straightforward way to achieve this. Is it possible? What's the right way? Thank you.
As you say, the default connection isn't really an option as you can only have one DB, and DDP is a bit superfluous when you only need a DB and none of the Meteor stuff. I'd think, therefore, your best approach would be to use the MongoInternals option.
The only thing missing from that option is reactivity; a method of enabling oplog tailing for these additional DB connections is mentioned in this answer. It essentially seems to be a case of passing the oplogUrl when creating the RemoteCollectionDriver, here's the example given in their answer:
var driver = new MongoInternals.RemoteCollectionDriver(
  "mongodb://localhost:27017/db",
  {
    oplogUrl: "mongodb://localhost:27017/local"
  }
);
var collection = new Mongo.Collection("Coll", { _driver: driver });
I'm going to write here what I've discovered until now, but I'm not going to mark the question as answered.
Concepts
What is reactivity?
With reactivity, the data you show to your users is updated in real time. You can enable or disable reactivity per query when you publish a collection, like this:
// reactivity is enabled by default; disable it per query
Meteor.publish('top', function () {
  return Top.find({}, { reactive: false });
});
What is oplog?
The oplog is MongoDB's operations log. When you tell Meteor to use the oplog, reactivity performance is much better, unless you have a really high volume of insert operations, in which case it may be wise to keep it disabled. The oplog can speed up your reactive DB calls by roughly 5x-20x. If you are going to use the oplog, you should optimize your DB calls. You can read more about it here.
What methods exist to connect DBs to Meteor?
Default connection:
Reactivity? Yes. Oplog? Yes. DB limit? One. Description: Meteor creates a default MongoDB database when you run meteor. You can set a different database using environment variables, or just MONGO_URL=mongodb://localhost:27017/db_name meteor.
DDP:
Reactivity? Yes. Oplog? Yes. DB limit? No. Description: You need a Meteor project for every DB. That's about 600MB of memory for every DB. You can read about it here and here.
MongoInternal (SOLUTION: Thanks to carlevans719):
Reactivity? Yes. Oplog? Yes. DB limit? No. Description: You can specify a DB in your subscriptions file like:
var database = new MongoInternals.RemoteCollectionDriver(
  'mongodb://user:password@localhost:27017/meteor',
  { oplogUrl: 'mongodb://localhost:27017/local' }
);
var numberOfDocs = database.open('boxes').find().count();
Last words:
MongoInternal: if you are not going to use the default DB, you have to tell Meteor not to create it. To achieve this, always run Meteor as MONGO_URL=mongodb://localhost:27017 meteor.

Non-Transactional SaveChanges with Breeze.js

Imagine you have a UI that consists of a list of items, each with a checkbox beside them. You can select multiple checkboxes and click a button to perform a bulk operation. The desire is to have as many of the rows be processed as possible. So if one row fails, the other selected rows should not roll back.
To do this with Breeze, would it make more sense to send multiple different saves, or is there a way to handle this scenario out of the box?
Sorry. I am new to Breeze, and have been looking through the docs, samples, and API and can't see any clear indication that this is possible. It appears that each call to SaveChanges is transactional. Or is a Named Save required to achieve this behavior?
Thanks in advance!
There is no simple way to do a non-transactional batch save in Breeze. Your easiest course is to save each change individually. You can fire them off in parallel and wait for all to complete if that's important to you.
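For example, a rough sketch of the save-each-row-individually approach: EntityManager.saveChanges accepts an array of specific entities to save, and Q is the promise library Breeze has traditionally used (the manager and selectedEntities names here are assumptions about your setup):

// one single-entity save per selected row, fired in parallel
var saves = selectedEntities.map(function (entity) {
  return manager.saveChanges([entity]); // each save is its own transaction
});

Q.allSettled(saves).then(function (results) {
  results.forEach(function (result) {
    if (result.state === "fulfilled") {
      console.log("row saved");
    } else {
      console.log("row failed; the others are unaffected:", result.reason);
    }
  });
});

Each failed row leaves its entity in a modified state in the cache, so you can report it and let the user retry just that row.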
However, if you're game for some serious programming, it can be done. Here is the outline of the approach.
How to write a non-transactional batch save in Breeze
The easy part is turning off the transaction on the server.
Take a look at the second parameter of ContextProvider.SaveChanges. It's a Breeze TransactionSettings object. If you "new" one up you'll get this:
public TransactionSettings()
{
    this.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
    this.Timeout = TransactionManager.DefaultTimeout;
    this.TransactionType = TransactionType.None;
}
You can create one with any values you like, but I'm calling attention to TransactionType.None. Pass that in your call to SaveChanges:
[HttpPost]
public SaveResult SaveChanges(JObject saveBundle)
{
    // myTransactionSettings: a TransactionSettings with TransactionType.None, as above
    return _contextProvider.SaveChanges(saveBundle, myTransactionSettings);
}
I believe that will prevent the EFContextProvider from saving transactionally.
Now you have to fix things on the client side. I haven't turned off transactions myself, so I'm not familiar with how errors are caught and transmitted to the client.
The real challenge is on the client. You have to do something when the save fails partially. You have to help the Breeze EntityManager figure out which entities succeeded and which failed and process them appropriately ... or your entity cache will become unstable.
You will have to write a custom Data Service Adapter with some very tricky save processing logic. That is not easy!
I've done it in my Breeze Labs Abstract Rest Adapter, which is undocumented and quite complex. You're welcome to read it and apply its reasoning to your own implementation of the "web api" Data Service Adapter. If you have budget, you might engage IdeaBlade professional services (the makers of Breeze).

Is Mongoose not scalable with document array editing and version control?

I am developing a web application with Node.js and MongoDB/Mongoose. Our most used Model, Record, has many subdocument arrays. Some of these, for instance, include "Comment", "Bookings", and "Subscribers".
In the client side application, whenever the user hits the "delete" button it fires off an AJAX request to the delete route for that specific comment. The problem I am running into is that, when many of these AJAX calls come in at once, Mongoose fails with a "Document not found" error on some (but not all) of the calls.
This only happens when the calls are made rapidly, many at a time. I think this is due to Mongoose's document versioning causing conflicts. Our current process for a delete is:
1. Fetch the document using Record.findById()
2. Remove the subdocument from the appropriate array (using, say, comment.remove())
3. Call record.save()
I have found a solution where I can manually update the collection using Record.findByIdAndUpdate with the $pull operator. However, this means we can't use any of Mongoose's middleware and lose the version control entirely. And the more I think about it, the more I realize situations where this would happen and where I would have to use Mongoose's wrapper functions like findByIdAndUpdate or findAndRemove. The only other solution I can think of would be to put the removal attempt into a while loop and hope it works, which seems like a very poor fix.
Using the Mongoose wrappers doesn't really solve my problem, as it won't allow me to use any sort of middleware or hooks at all, which is basically one of the huge benefits of using Mongoose.
Does this mean that Mongoose is essentially useless for anything with rapid editing, and I might as well just use the native MongoDB driver? Am I misunderstanding Mongoose's limitations?
How could I solve this problem?
Mongoose's versioned document array editing is not scalable for the simple reason that it's not an atomic operation. As a result, the more array edit activity you have, the more likely it is that two edits will collide and you'll suffer the overhead of retry/recovery from that in your code.
For scalable document array manipulation, you have to use update with the atomic array update operators: $pull[All], $push[All], $pop, $addToSet, and $. Of course, you can also use these operators with the atomic findAndModify-based methods of findByIdAndUpdate and findOneAndUpdate if you also need the original or resulting doc.
As you mentioned, the big downside of using update instead of findOne+save is that none of your Mongoose middleware and validation is executed during an update. But I don't see that you have any choice if you want a scalable system. I'd much rather manually duplicate some middleware and validation logic for the update case than have to suffer the scalability penalties of using Mongoose's versioned document array editing. Hey, at least you still get the benefits of Mongoose's schema-based type casting on updates!
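For example, the delete described in the question collapses to one atomic call (a sketch; recordId and commentId are assumed from the question, and handleError is a hypothetical error handler):

// single atomic operation: no findById + save round trip, no version conflicts
Record.findByIdAndUpdate(
  recordId,
  { $pull: { comments: { _id: commentId } } },
  { new: true },
  function (err, updatedRecord) {
    if (err) return handleError(err);
    // updatedRecord reflects the array after the pull
  }
);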
I think, from our own experiences, the answer to your question is "yes". Mongoose is not scalable for rapid array-based updates.
Background
We're experiencing the same issue at HabitRPG. After a recent surge in user growth (bringing our DB to 6GB), we started seeing VersionError for many array-based updates (background on VersionError). ensureIndex({_id:1,__v:1}) helped a bit, but that tapered off as yet more users joined. It would appear to me that Mongoose is indeed not scalable for array-based updates. You can see our whole investigation process here.
Solution
If you can afford to move from an array to an object, do that. E.g., comments: Schema.Types.Array => comments: Schema.Types.Mixed, sorting by post.comments.{ID}.date, or even a manual post.comments.{ID}.position if necessary (see the sketch after this list).
If you're stuck with arrays:
db.collection.ensureIndex({_id:1,__v:1})
Use your methods described above. You won't benefit from hooks and validations, but there are worse things.
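To make the first option concrete, here's a sketch of what edits look like once comments is a Mixed object keyed by ID (Post, postId and commentId are hypothetical names). Each write touches one dotted path atomically, so there's no array versioning to conflict over:

// upsert one comment under its ID
var update = { $set: {} };
update.$set['comments.' + commentId] = { text: 'hello', date: new Date() };
Post.findByIdAndUpdate(postId, update, function (err) { /* handle error */ });

// delete one comment with a single $unset
var removal = { $unset: {} };
removal.$unset['comments.' + commentId] = 1;
Post.findByIdAndUpdate(postId, removal, function (err) { /* handle error */ });

The trade-off: Mixed paths aren't validated by the schema, and you must call markModified if you ever mutate them through document saves.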
I would strongly suggest pulling those arrays out into new collections. For example, a Comments collection where each document has a record ID to indicate where it belongs. This is a much more scalable solution.
You are correct, Mongoose's array operations are not atomic and therefore do not scale well.
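A rough sketch of that shape (field names are illustrative):

var CommentSchema = new mongoose.Schema({
  record: { type: mongoose.Schema.Types.ObjectId, ref: 'Record' }, // owning Record
  author: String,
  body: String,
  date: { type: Date, default: Date.now }
});
var Comment = mongoose.model('Comment', CommentSchema);

// each delete is now a single-document atomic operation, with no __v to conflict on
Comment.remove({ _id: commentId }, function (err) { /* handle error */ });

// and fetching a record's comments stays simple
Comment.find({ record: recordId }).sort('date').exec(callback);

You also keep schema validation and save middleware for creating comments, since those are ordinary document operations.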
I thought of another idea, which I'm not certain about but seems worth offering: soft-delete.
Mongoose is very concerned about array-structure changes because they make future changes ambiguous. But if you were to just tag a comment subdocument with comment.deleted=true then you might be able to do more such operations without encountering conflicts. Then you could have a cron task that goes through and actually removes those comments.
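For instance, the flagging can be a single atomic positional update (a sketch; field names are assumed):

// mark the matching subdocument in place; the array structure is untouched,
// so Mongoose's versioning has nothing to conflict over
Record.update(
  { _id: recordId, 'comments._id': commentId },
  { $set: { 'comments.$.deleted': true } },
  function (err, numAffected) { /* handle error */ }
);

// later, the cron task actually removes the flagged comments
Record.update(
  {},
  { $pull: { comments: { deleted: true } } },
  { multi: true },
  function (err) { /* handle error */ }
);

Your read queries would then need to filter out comments with deleted: true.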
Oh, an additional idea is to use some sort of in-memory cache, so that if a record has been accessed/edited in the last few minutes it's available without having to pull it from the DB, which means that two requests coming in at the same time will be modifying the same object.
Note: I'm not actually sure that either of these are good ideas in general or that they'll solve your problem, so go ahead and edit/comment/downvote if they're bad :)
