No update in the mongoose-cache after change in the collection - javascript

I have a MEAN-stack-based application and recently I was trying to implement a caching mechanism for query results, using mongoose-cache.
mongoose-cache configuration
require('mongoose-cache').install(mongoose, {max: 150, maxAge:1000*60*10});
I have two documents in a sample collection, say
{name:'dale', dep:2},
{name:'john', dep:4}
I run a query with mongoose-cache enabled; maxAge is, say, 10 minutes.
sample.find().cache().exec(function (err, doc) {
// returns 2 documents
});
Next, I insert one more document, say
{name:'rasig', dep:4}, and execute the same query
sample.find().cache().exec(function (err, doc) {
// returns 2 documents instead of 3
});
Since I executed the same query twice within 10 minutes, I got the previous result even though the collection had changed. Is there any way to drop the cached result once there is a change in the collection? If not, can you suggest something else to implement the same.

I am the author of a new Mongoose module called Monc.
With Monc it is quite easy to clean up or purge the whole cache, or even the associated Query objects, simply by using:
sample.find().clean().cache().exec(function (err, doc) {});
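More generally, the behaviour the question asks for (dropping cached results as soon as the collection changes) is the invalidate-on-write pattern: keep a per-collection cache and clear it on every insert/update/remove. A minimal conceptual sketch, independent of mongoose-cache or Monc (all names here are made up for illustration):

```javascript
// Tiny query cache that is invalidated on every write to the collection.
// Conceptual sketch only; not mongoose-cache's actual implementation.
class InvalidatingCache {
  constructor() {
    this.store = new Map();
  }
  get(key, compute) {
    if (!this.store.has(key)) {
      this.store.set(key, compute());
    }
    return this.store.get(key);
  }
  invalidate() {
    // Drop everything cached for this collection after any write.
    this.store.clear();
  }
}

const docs = [{ name: 'dale', dep: 2 }, { name: 'john', dep: 4 }];
const cache = new InvalidatingCache();

const findAll = () => cache.get('find:{}', () => docs.slice());

console.log(findAll().length); // 2 (computed once, then cached)
docs.push({ name: 'rasig', dep: 4 });
cache.invalidate(); // the write path must clear the cache
console.log(findAll().length); // 3 (recomputed after invalidation)
```

The key point is that the write path, not the read path, is responsible for calling invalidate(), which is exactly what a plain TTL-based cache such as mongoose-cache does not do.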

Related

In the application layer (using sequelize or other) abort SQL transaction if data already read (SELECT) has changed (UPDATE)

Is there a way with sequelize to detect if, between my read (SELECT) query and my write (UPDATE) query, another thread/process/transaction came in and changed the data that my SELECT query read?
Here is a contrived example with two functions. It seems that unless I do the yearsRemaining calculation in SQL, I will always have a race condition.
async function function1() {
  let t1 = await sequelize.transaction();
  let hobbitAge = await sequelize.query(`SELECT age FROM hobbits WHERE name='Bilbo'`, { transaction: t1 });
  let yearsRemaining = (131 - hobbitAge);
  await sequelize.query(`UPDATE hobbits SET estimate=${yearsRemaining} WHERE name='Bilbo'`, { transaction: t1 });
  await t1.commit();
}

async function function2() {
  let t2 = await sequelize.transaction();
  await sequelize.query(`UPDATE hobbits SET age='111' WHERE name='Bilbo'`, { transaction: t2 });
  await t2.commit();
}
The race condition is:
Transaction 1 runs first and reads Bilbo's age as 100 from the hobbits table.
Transaction 2 starts running and updates the hobbits table (111), taking a write lock on the rows it's updating.
Transaction 1 waits until transaction 2 is finished writing to hobbits (same row). Once it has, as it is working off the old value read in the SELECT query (in the application layer), it puts a now incorrect value (81) into the hobbits table.
What is the best way of dealing with this in my application (JavaScript)? Can I craft a query to deal with this in sequelize, or check the validity of my SELECT query before calling UPDATE? Or is the only way to work directly in the database (e.g. a pure SQL query or a stored procedure)?
Thanks
What you are looking for is called “optimistic locking”.
You can use Sequelize's facilities for that, which works with a version number that is checked for modifications on update.
The alternative is to use PostgreSQL's REPEATABLE READ transaction isolation level. If such a transaction detects that a row has been modified by a concurrent transaction, it will throw a serialization error. This is probably the more efficient way to do it.
In both cases, you repeat the whole transaction if a concurrent modification is detected.
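The version-number mechanism can be illustrated without a database: each row carries a version, an update succeeds only if the version it read is still current, and the caller retries otherwise. The sketch below is conceptual, not Sequelize's API (in Sequelize you would instead enable the `version: true` model option and catch the resulting lock error on save):

```javascript
// In-memory sketch of optimistic locking: each record carries a version
// number, and a write succeeds only if the version has not moved since
// the read. Names are illustrative, not Sequelize API.
const table = { Bilbo: { age: 100, estimate: null, version: 0 } };

function read(name) {
  // Return a copy including the version it was read at.
  return { ...table[name] };
}

function update(name, changes, expectedVersion) {
  const row = table[name];
  if (row.version !== expectedVersion) {
    return false; // concurrent modification detected; caller must retry
  }
  Object.assign(row, changes);
  row.version += 1;
  return true;
}

function setEstimate(name) {
  // Retry loop: re-read and recompute whenever a concurrent write wins.
  for (;;) {
    const snapshot = read(name);
    const yearsRemaining = 131 - snapshot.age;
    if (update(name, { estimate: yearsRemaining }, snapshot.version)) return;
  }
}

// Simulate the race: another writer bumps the age between read and write.
const snapshot = read('Bilbo');   // reads age 100 at version 0
update('Bilbo', { age: 111 }, 0); // concurrent writer wins; version is now 1
console.log(update('Bilbo', { estimate: 131 - snapshot.age }, snapshot.version)); // false: stale
setEstimate('Bilbo');             // retries with the fresh age of 111
console.log(table.Bilbo.estimate); // 20
```

This is the same shape the REPEATABLE READ approach gives you, except the database detects the conflict and throws a serialization error instead of the application comparing versions.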

Filter collection by lastModified

I need to fetch the sub-set of documents in a Firestore collection modified after some moment. I tried going these ways:
It seems that native filtering can work only with real fields in the stored document; i.e., although the Firestore API internally has DocumentSnapshot.getUpdateTime(), I cannot use this information in my query.
I tried adding my own _lastModifiedAt 'service field' via a server-side Firestore cloud function, but updating _lastModifiedAt causes recursive invocation of the onWrite() function, i.e. it also does not work as needed (the recursion finally stops with Error: quota exceeded (Function invocations : per 100 seconds)).
Are there other ideas how to filter collection by 'lastModifiedTime'?
Here is my 'cloud function' for reference
It would work if I could identify who is modifying the document, i.e. ignore my own updates of the _lastModified field, but I see no way to check for this.
_lastModifiedBy is set to null because of current inability of Firestore to provide auth information (see here)
exports.updateLastModifiedBy = functions.firestore.document('/{collId}/{documentId}').onWrite(event => {
  console.log(event.data.data());
  var lastModified = {
    _lastModifiedBy: null,
    _lastModifiedAt: Date.now()
  };
  return event.data.ref.set(lastModified, {merge: true});
});
I've found a way to prevent recursion while updating '_lastModifiedAt'.
Note: this will not work reliably if the client can also update '_lastModifiedAt'. That does not matter much in my environment, but in the general case I think writing to '_lastModifiedAt' should be allowed only to service accounts.
exports.updateLastModifiedBy = functions.firestore.document('/{collId}/{documentId}').onWrite(event => {
  var doc = event.data.data();
  var prevDoc = event.data.previous.data();
  if (doc && prevDoc && (doc._lastModifiedAt != prevDoc._lastModifiedAt)) {
    // This is my own change; skip it to avoid recursion.
    return 0;
  }
  var lastModified = getLastModified(event);
  return event.data.ref.set(lastModified, {merge: true});
});
Update: Warning: updating lastModified in the onWrite() event causes infinite recursion when trying to delete all documents in the Firebase console. This happens because onWrite() is also triggered for deletes, and writing lastModified into a deleted document actually resurrects it. That document propagates back into the console and is deleted once again, indefinitely (until the web page is closed).
To fix that issue, the above code has to be written separately for onCreate() and onUpdate().
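When splitting into onCreate() and onUpdate(), the recursion guard can live in one small pure helper shared by both handlers. A sketch with a hypothetical helper name, assuming (as in the answer above) that only the function itself writes _lastModifiedAt:

```javascript
// Returns true when the _lastModifiedAt stamp changed between the previous
// and current document, i.e. the write was the function's own stamp update
// and should be skipped to avoid recursion. Helper name is hypothetical.
function isOwnTimestampWrite(doc, prevDoc) {
  if (!doc || !prevDoc) return false; // create or delete, not a re-stamp
  return doc._lastModifiedAt !== prevDoc._lastModifiedAt;
}

// Example: a user edit keeps the old stamp, so it must be re-stamped;
// the function's own follow-up write bumps the stamp, so it is skipped.
const before = { name: 'a', _lastModifiedAt: 1000 };
const after = { name: 'b', _lastModifiedAt: 1000 };   // user edit, same stamp
const stamped = { name: 'b', _lastModifiedAt: 2000 }; // our follow-up write

console.log(isOwnTimestampWrite(after, before));  // false: re-stamp it
console.log(isOwnTimestampWrite(stamped, after)); // true: skip, no recursion
```

An onUpdate() handler would call this at the top and return early when it yields true; onCreate() and onDelete() never need the check, which is what makes the split safe.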
How about letting the client write the timestamp with FieldValue.serverTimestamp() and then validate that the value written is equal to time in security rules?
Also see Mike's answer here for an example: Firestore Security Rules: If timestamp (FieldValue.serverTimestamp) equals now
You could try the following function, which will not update _lastModifiedAt if it has been modified within the last 5 seconds. This should ensure that the function only runs once per update (as long as you don't update more than once in 5 seconds).
exports.updateLastModifiedBy = functions.firestore.document('/{collId}/{documentId}').onWrite(event => {
  console.log(event.data.data());
  // Skip if _lastModifiedAt was set within the last 5 seconds.
  if ((Date.now() - 5000) < event.data.data()._lastModifiedAt) { return null; }
  var lastModified = {
    _lastModifiedBy: null,
    _lastModifiedAt: Date.now()
  };
  return event.data.ref.set(lastModified, {merge: true});
});

Is it possible to fire a delete trigger by adjusting TTL value in CosmosDB?

I have created a pre-delete trigger by using Script Explorer in Azure portal. The below trigger is written in JavaScript:
function markReminderAsPastDue() {
  var collection = getContext().getCollection();
  var request = getContext().getRequest();
  var docToCreate = request.getBody();
  docToCreate["pastDue"] = true;
  docToCreate["id"] = "";
  var accepted = collection.createDocument(collection.getSelfLink(),
    docToCreate,
    function (err, documentCreated) {
      if (err) throw new Error('Error' + err.message);
    });
  if (!accepted) throw new Error("Document creation not accepted");
}
I set the TTL value for each document in the associated collection, so the TTL value is not -1 and documents are deleted automatically once the time expires. If I delete a document manually, the pre-delete trigger is fired. However, when the document is deleted implicitly because of the TTL value, the trigger is not fired. What should I do to fix this problem? Is it possible to make triggers fire on TTL expiry?
As far as I know, there's no callback mechanism of any sort in Azure Cosmos DB for TTL. The TTL enforcement is just a background thread that queries every second for documents that have expired, then deletes them.
Given your needs, I'd suggest mimicking the TTL operation in your application layer, where you can perform whatever extra business logic you need.
You can set an update-time attribute in each document and refresh it on every modification. Then poll from your application layer: iterate through the database periodically, find the documents whose update time has expired, and delete them.
To reduce the pressure on your application layer, you can hand the actual deletion off to a Cosmos DB stored procedure.
Hope it helps.
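The polling idea above can be sketched without the Cosmos SDK: each document stores an update time and a TTL, and a periodic sweep finds the expired ones, runs the "pre-delete" logic (e.g. creating the pastDue copy the trigger used to make), and removes them. All names below are illustrative:

```javascript
// Application-layer mimic of TTL: sweep for expired documents, run the
// pre-delete business logic, then remove them. Conceptual sketch only;
// against Cosmos DB this would be a query plus deletes via the SDK or a
// stored procedure.
function sweepExpired(docs, nowMs, onExpire) {
  const kept = [];
  for (const doc of docs) {
    const expiresAt = doc.updatedAt + doc.ttlSeconds * 1000;
    if (expiresAt <= nowMs) {
      onExpire(doc); // e.g. create a "pastDue" copy, as the trigger did
    } else {
      kept.push(doc);
    }
  }
  return kept;
}

const docs = [
  { id: 'a', updatedAt: 0, ttlSeconds: 60 },
  { id: 'b', updatedAt: 0, ttlSeconds: 3600 },
];
const pastDue = [];
const remaining = sweepExpired(docs, 120 * 1000, d => pastDue.push({ ...d, pastDue: true }));

console.log(remaining.map(d => d.id)); // [ 'b' ]
console.log(pastDue.length);           // 1
```

Running a sweep like this on a timer gives you the hook that Cosmos DB's background TTL thread does not expose.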

mongodb - capped collections with nodejs

I'm trying to set up and update some capped collections in MongoDB using Node.js (using the native MongoDB driver).
My goal is to, upon running app.js, insert documents into a capped collection, and also to update existing documents in a capped collection. Both of these are running on setInterval(), so every few seconds.
My questions:
1. I want to create a collection if the collection does not already exist, but if it does, I want to insert a document into it instead. What is the correct way to check this?
2. With capped collections, should I even be explicitly creating them first before inserting anything into them? Normally you can just insert into a collection without explicitly creating it first, but in this case I need to ensure the collection is capped. Once the capped collection exists I know how to insert new documents into it; the problem is that I need some way to handle the app being used for the first time (on a new server), where the collection doesn't already exist, and I want to do this creation using node without having to jump into the mongo CLI.
3. The trick here is that the collection needs to be capped, so I can do something like: db.createCollection("collectionName", { capped: true, size: 100000, max: 5000 }). That will create the capped collection for me, but every time I run the app it will call createCollection() instead of updating or inserting. If I call createCollection() once the collection already exists, will it completely overwrite the existing collection?
4. An alternative is to turn a collection into a capped one with: db.runCommand({"convertToCapped": "collectionName", size: 100000, max: 5000 }). The problem with this is that node doesn't see runCommand() as a valid function and it errors. Is there something else that I'm meant to be calling to get this to work? It works in the mongo CLI but not within node.
5. What type of query do you use to find the first document in a collection? Again, within the mongo CLI I can use db.collection.find() with some query, but within node it states that find() is not a valid function.
6. How would I use collection.update() to add some new fields to an existing document? Let's say the document is a simple object like {key1: "value", key2: "value"}, but I have an object that contains {key3: "value"}. key3 does not exist in the current document; how would I add it to what currently exists? This is somewhat related to #5 above, in that I'm not sure what to pass in as the query parameter, given that find() doesn't seem to play well with node.
Regarding your questions 1-4 about capped collections and creating them automatically, there are several ways to do this. On the one hand, you could run a script to initialise your database so that the capped collections are available to your client the first time you run it. On the other hand, you could check whether there are any documents in the given collection before inserting. If there are, you just insert your document; if there aren't, you create the capped collection and insert the document in that function's callback. It would work something like this:
var host = "localhost",
    port = 27017,
    dbName = "so";

var MongoClient = require('mongodb').MongoClient,
    Server = require('mongodb').Server;

var mongoclient = new MongoClient(new Server(host, port));
var db = mongoclient.db(dbName);

db.open(function(err, db) {
    if (err) throw err;
    // Capped collection.
    var capped = db.collection('capped');
    // Document to be inserted.
    var document = { "foo": 1, "bar": 1 };
    capped.find().count(function(err, count) {
        if (err) throw err;
        if (count === 0) {
            console.log("Creating collection...");
            db.createCollection("capped",
                { "capped": true,
                  "size": 100000,
                  "max": 5000 },
                function(err, collection) {
                    if (err) throw err;
                    // Insert a document into the newly created collection.
                    console.log("Inserting document...");
                    collection.insert(document, function(err, result) {
                        if (err) throw err;
                    });
                });
        } else {
            // Collection already exists; insert without creating it.
            console.log("Inserting document without creating collection...");
            capped.insert(document, function(err, result) {
                if (err) throw err;
            });
        }
    });
});
Regarding question 5, you can use findOne() to find a document in the collection, though this is not necessarily the first or last. If you want to guarantee the first or last, you can run a find() with a sort() and limit() of 1. Sorting by _id ascending should give you the first document. More information here.
// Sort: 1 for ascending, -1 for descending.
capped.find().sort([["_id", 1]]).limit(1).nextObject(function(err, item) {
    console.log(item);
});
Finally for question 6, you just use the $set operator with the update() method. More information here.
capped.update({ "foo": 1 }, { "$set": { "bar": 2 } }, {}, function(err, result) {
    console.log(result);
});
Note that you can only update documents in place in capped collections, so you cannot insert the extra field you mention. There are other restrictions enumerated here that you might want to be aware of.
[EDIT: Add updating nested fields in last document.]
If you want to update a nested field in the first or last document (use 1 or -1 in the sort, respectively), you can fetch the document, extract the _id, then perform an atomic update on that document. Something like this:
capped.find().sort([["_id", -1]]).limit(1).nextObject(function(err, item) {
    if (err) throw err;
    capped.update({ "_id": item._id },
        { "$set": { "timeCollected": 15, "publicIP.ip": "127.0.0.1" } },
        function(err, result) {
            if (err) throw err;
            console.log(result);
        });
});
Note that even when updating a field that exists in a document in a capped collection, you need to ensure that the new value fits in the space allocated for the document. So, for example, updating a string value from "1" to "127.0.0.1" will not necessarily work.

Meteor Leaderboard example: resetting the scores

I've been trying to do Meteor's leaderboard example, and I'm stuck at the second exercise, resetting the scores. So far, the furthest I've got is this:
// On server startup, create some players if the database is empty.
if (Meteor.isServer) {
  Meteor.startup(function () {
    if (Players.find().count() === 0) {
      var names = ["Ada Lovelace",
                   "Grace Hopper",
                   "Marie Curie",
                   "Carl Friedrich Gauss",
                   "Nikola Tesla",
                   "Claude Shannon"];
      for (var i = 0; i < names.length; i++)
        Players.insert({name: names[i]}, {score: Math.floor(Random.fraction()*10)*5});
    }
  });

  Meteor.methods({
    whymanwhy: function() {
      Players.update({}, {score: Math.floor(Random.fraction()*10)*5});
    }
  });
}
And then to use the whymanwhy method I have a section like this in if(Meteor.isClient)
Template.leaderboard.events({
  'click input#resetscore': function() { Meteor.call("whymanwhy"); }
});
The problem with this is that {} is supposed to select all the documents in the MongoDB collection, but instead it creates a new blank scientist with a random score. Why? {} is supposed to select everything. I tried "_id": { $exists: true }, but I think that's a kludge, and it behaved the same as {} anyway.
Is there a more elegant way to do this? The meteor webpage says:
Make a button that resets everyone's score to a random number. (There
is already code to do this in the server startup code. Can you factor
some of this code out and have it run on both the client and the
server?)
Well, to run this on the client first, instead of calling a server method and having the results pushed back to the client, I would need to explicitly specify the _ids of each document in the collection; otherwise I will run into "Error: Not permitted. Untrusted code may only update documents by ID. [403]". But how can I get those? Or should I just make it easy and use collection.allow()? Or is that the only way?
I think you are missing two things:
you need to pass the option {multi: true} to update, or it will only ever change one record.
if you only want to change some fields of a document, you need to use $set. Otherwise update assumes you are providing the complete new document and replaces the original.
So I think the correct function is:
Players.update({},{$set: {score: Math.floor(Random.fraction()*10)*5}}, {multi:true});
The documentation on this is pretty thorough.
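As for the factoring the exercise hints at, the random-score expression can be pulled into one shared function used both at startup and in the reset code. A sketch of just that helper (Math.random() stands in for Meteor's Random.fraction() so the snippet is self-contained):

```javascript
// Shared helper: a random score that is a multiple of 5 between 0 and 45.
// Math.random() substitutes for Meteor's Random.fraction() in this sketch.
function randomScore() {
  return Math.floor(Math.random() * 10) * 5;
}

// Both the startup code and the reset method can then call it, e.g.:
//   Players.insert({name: names[i], score: randomScore()});
//   Players.update({}, {$set: {score: randomScore()}}, {multi: true});

// Sanity-check the range and granularity of the generated scores.
const sample = Array.from({ length: 100 }, randomScore);
console.log(sample.every(s => s % 5 === 0 && s >= 0 && s <= 45)); // true
```

Defining the helper in shared code (outside the isClient/isServer branches) is what lets the same logic run on both the client and the server, as the exercise suggests.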
