I want to create a ticketing system where a ticket gets cancelled after a given period of time. For deleting after some time, I am going to use MongoDB's TTL index feature. But before it expires, or after the expiry of that particular ticket, I want to retrieve it or save it in a different collection for the future. Is that possible using MongoDB?
In the current version of MongoDB (5.0.8 as I'm writing this), it's not directly supported, but it will be in MongoDB 6.0 (cf. this Jira ticket), and there is a workaround you can use in the meantime (keep reading!).
Let me explain. What you are trying to do is:
Set up a TTL index that will automatically remove the docs in your MongoDB collection once their timestamp field is more than X seconds old.
Set up a Change Stream on this collection with a filter that keeps only delete operations.
In 5.0.8, this change stream event will only contain the _id field of the deleted document, and nothing else, as that's the only information currently available in the oplog.
In 6.0, you will be able to access the previous state of this document (i.e. its last state before being deleted).
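The two steps above can be sketched as the plain objects you would hand to the Node.js driver (collection and field names like `expireAt` are assumptions, not from the question):

```javascript
// TTL index spec: MongoDB removes each doc shortly after its `expireAt`
// date passes (expireAfterSeconds: 0 means "expire at the date itself").
// With the driver: tickets.createIndex(ttlIndexKeys, ttlIndexOptions)
const ttlIndexKeys = { expireAt: 1 };
const ttlIndexOptions = { expireAfterSeconds: 0 };

// Change stream pipeline keeping only delete events.
// With the driver: tickets.watch(deletePipeline)
const deletePipeline = [{ $match: { operationType: 'delete' } }];

// In 5.0.x, a delete event only carries the deleted document's _id:
function deletedId(event) {
  return event.documentKey._id;
}
```

Note that in 5.0.x `deletedId` is all you can extract from the event; the full pre-delete document only becomes available with 6.0's pre-images.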
That being said, there is a workaround that Pavel Duchovny explained in his blog post. You can easily adapt his notification system to achieve your desired behaviour.
I'm building an app using Algolia and Firebase.
To use Algolia, I'm using the following Firebase extension so that every time new data is added, edited, or updated in Firestore, the same record is kept in Algolia.
Everything works well except that it takes around 1-2 minutes to store the record in Algolia (there are currently around 15,000+ records).
I'm currently reading and displaying data straight from Algolia and updating it whenever the data changes. However, it seems unreasonable to make the user wait 1-2 minutes before they can finally see the updated details.
I'm using Algolia because I need more flexible search options + page offset pagination. If I could do that with just firestore, I would happily just read data straight from firestore.
But since that's not possible, can anyone suggest a better option?
I've already tried writing a custom trigger instead of the extension but the speed seems to be the same.
That seems to be the expected behaviour. Algolia's documentation has a dedicated page for that topic:
Why aren't my objects immediately available for search after being added?
When you add or update a record, our servers reply to your request as soon as they understand the operation, but the actual indexing starts a few seconds later, asynchronously.
They also say it may take a few seconds (or minutes) for new documents to become available. You can read more in "How fast is indexing?". It says indexing may be slower on shared clusters, and upgrading to a dedicated cluster may help. That being said, the time taken for indexing does not depend on whether you use the extension, a custom Cloud Function, or your own servers to add data to Algolia.
I am building a job board using Node.js / MongoDB. After a job listing is purchased by a user, it is added to the database and, using a TTL index, it is deleted after 30 days. I'm wondering if there's a way to change a field instead of deleting the entire document? I ask because I would want to give the user the option to "renew" their listing after the expiration period. What would be the best way to approach this?
You can add a field called "renewalDate", initialize it to the create date and then query items that have a renewalDate that is less than 30 days back from now for items to display. Then, to "renew" a listing, you just set the renewalDate to a more current date so it will appear in the query again.
You could then run a periodic task (once a night or once a week) to permanently delete any documents that are old enough that they aren't even eligible for renewal any more. Or you could use the TTL feature to manage this.
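The renewal logic above, modeled in plain JavaScript so it is self-contained (in MongoDB the display query would be something like `find({ renewalDate: { $gt: cutoff } })`; the field and function names here are assumptions):

```javascript
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

// Listings still eligible for display: renewed within the last 30 days.
function visibleListings(listings, now = Date.now()) {
  const cutoff = now - THIRTY_DAYS_MS;
  return listings.filter((l) => l.renewalDate.getTime() > cutoff);
}

// "Renewing" is just bumping renewalDate to the current date, which makes
// the listing match the display query again.
function renew(listing, now = Date.now()) {
  return { ...listing, renewalDate: new Date(now) };
}
```

The nightly cleanup task would then delete documents whose `renewalDate` is older than some larger grace period (say 90 days), past which renewal is no longer offered.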
I have a node.js API that is responsible for 3 things:
Registering a buyer
Getting a buyer with ID
Finding the matching buyer's offer based on some criteria
Details here
Since I'm new to Redis, I started the implementation like this:
JSON.stringify the buyer and store it with SET
Store all of a buyer's offers as a Sorted Set (this is for the third endpoint, which requires the offer with the highest value); this set contains strings that represent the names of hashes
Then, that hash stores strings that represent the names of sets that have certain values and a location which the user will be redirected to after these conditions have been fulfilled (buyer1_devices, buyer1_hours, etc.)
Now, here is the problem:
I need to get GET /route working. As described on the GitHub page I provided, I have 3 parameters: a timestamp, devices, and states. I have to browse through all the sets and get the appropriate location to redirect a user to. The location is stored in a hash, but I have to browse through all the sets. Since this is probably a bad implementation: where did it go wrong, and how should I go about implementing this?
Note that this is a Redis problem, not a Node one. I need instructions on how to go about implementing this in Redis, and then I will be ready to code it in Node.
Thank you in advance
The first rule of Redis: store the data just like you want to read it.
To answer the /route query you need filtering on two attributes of the buyers' offers: state and device. There is more than one way to skin that cat, so here's one: use many Sorted Sets for the offers.
Each such offers Sorted Set key name could look like this: <device>:<state> (so the example from the GitHub page would be added to the key desktop:CA).
To query, use the route's arguments to compose your key's name, then proceed regularly to find the highest-scored offer and resolve the buyer's details in the Hash.
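A minimal sketch of that key scheme (the key format is the point; the buyer hash naming `buyer:<id>` is an assumption):

```javascript
// Compose the Sorted Set key for an offer bucket from the route's
// device and state arguments, e.g. offerKey('desktop', 'CA') -> 'desktop:CA'.
function offerKey(device, state) {
  return `${device}:${state}`;
}

// The corresponding Redis commands (via node-redis or ioredis) would be:
//   store an offer:      ZADD desktop:CA <offerValue> <buyerId>
//   highest-value offer: ZRANGE desktop:CA 0 0 REV   (Redis 6.2+;
//                        ZREVRANGE desktop:CA 0 0 on older versions)
//   resolve the buyer:   HGETALL buyer:<buyerId>
```

Because the key is derived directly from the request parameters, the /route handler never has to scan all sets; it reads exactly one Sorted Set.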
Now go get that job!
I'm developing a client-server real-time program and I need my server to be always up to date. Until now I have implemented a GET request to the server every X seconds that returns all the entities from MongoDB.
Now I have a big amount of entities and I need to GET only the entities which have been updated since the last GET request.
I thought about keeping a sequence in the DB for each entity and checking every X seconds whether the sequence has increased.
But I would prefer a better way.
Is there any way to get only the recent changes from Mongo? Or a nicer architecture?
You can have a last-updated time in the collection. On the client side, you can maintain a last-get time.
In subsequent requests, get all the documents from the collection where the last-updated time is greater than the last-get time. This way you will get only the documents that were updated or inserted since you last fetched the data (i.e. the delta).
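The delta query, modeled in plain JavaScript so it is self-contained; with the MongoDB driver it would be `collection.find({ updatedAt: { $gt: lastGetTime } })` (the `updatedAt` field name is an assumption):

```javascript
// Return only the documents updated or inserted after lastGetTime.
// Date objects compare numerically via their valueOf(), so > works here.
function deltaSince(docs, lastGetTime) {
  return docs.filter((d) => d.updatedAt > lastGetTime);
}
```

After each request the client records the request time as its new last-get time, so the next call only returns what changed in between.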
Edit:
MongoDB stores date objects in UTC. As long as the date on the client side is also maintained in UTC and sent in the subsequent request, this will retrieve the latest updated records.
I am a newbie for MongoDB.
Please help me with the following query:
I am using the oplog to add triggers for MongoDB operations. On insert/update operations, I receive the complete information about all fields added/updated in the collection.
My problem is:
When I do delete operations in MongoDB, the oplog entries received contain ONLY the _id.
Can somebody please point me to some example where I can receive complete information about all fields for the deleted document in the trigger.
Thanks
You have to fetch that document by its ObjectID, which will not be possible on the node you are tailing the oplog from, because by the time you have received the delete operation from the oplog, the document is gone. Which I believe means you have two choices:
Make sure that all deletes are preceded by an update operation which allows you to see the document fields you require prior to deletion (this will make deletes more expensive of course)
Run a secondary with a slave delay and then query that node for the document that has been deleted (either directly or by using tags).
For number 2, the issue is choosing a delay that is long enough to guarantee you can fetch the document and short enough to make sure you get an up-to-date version of it. Unless you add versioning to the document as a check (which gets close to option 1 again; you would likely want to update the version before deleting), this is essentially an optimistic, best-effort solution.
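Option 1 can be sketched like this, assuming the MongoDB Node.js driver (the function name and the `_pendingDelete` marker field are illustrative, not an established API). A replacement-style update writes the full document into the oplog (whereas a `$set` only logs the changed fields), so the tailing process sees the complete last state just before the delete arrives:

```javascript
// Write the full document into the oplog via a replacement update,
// then delete it. The marker field ensures the replace is not a no-op
// (a no-op write would not be logged at all).
async function deleteWithOplogSnapshot(collection, id) {
  const doc = await collection.findOne({ _id: id });
  if (!doc) return false;
  await collection.replaceOne({ _id: id }, { ...doc, _pendingDelete: true });
  await collection.deleteOne({ _id: id });
  return true;
}
```

The tailer can then treat any delete whose preceding update carried `_pendingDelete: true` as a "document X was removed, and here was its final state" event.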