This question has probably been asked before, but I couldn't find what I was looking for.
I have a collection with keys and values like this:
{
  _id: ObjectId("6142f3a47245125aef7ffcc0"),
  addTime: '2021-09-16 14:35:00',
  editTime: '2021-09-16 14:35:00'
}
I want to dump the records from before August 2021. What syntax do I have to use?
I tried in Robo 3T and the mongo shell; before dumping, I ran a find first:
db.collections.find({addTime:{$lte:new Date("2021-09-01")}})
and the result is
Fetched 0 record(s) in 3714ms
You can make use of the $expr operator to temporarily convert the string to a date-time value and then perform your query.
db.collection.find({
  "$expr": {
    "$lte": [
      {
        "$dateFromString": {
          "dateString": "$addTime",
          "format": "%Y-%m-%d %H:%M:%S", // <- your date-time format in the string
          "onNull": new Date("9999-09-01"), // <- default value if the query key doesn't exist (don't modify unless required)
        }
      },
      new Date("2021-09-01"), // <- your date-time query value (can also be an `ISODate`)
    ],
  }
})
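As an aside: since the stored addTime strings use a zero-padded "YYYY-MM-DD HH:MM:SS" format, lexicographic order happens to match chronological order, so a plain string comparison may also work without $expr. This is a sketch, assuming every document uses that exact format:

```javascript
// Mongo shell sketch — compare addTime as a plain string:
//   db.collection.find({ addTime: { $lte: "2021-09-01" } })
// The ordering property this relies on, shown in plain JS:
const addTimes = ["2021-07-31 23:59:59", "2021-09-16 14:35:00"];
const matched = addTimes.filter(t => t <= "2021-09-01"); // lexicographic compare
console.log(matched); // ["2021-07-31 23:59:59"]
```

This only works because the format is fixed-width and zero-padded; mixed formats would break the ordering.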
I have some objects that were returned from my database. When I built my table, these are the types I declared:
festivalId = BIGINT
performanceId = BIGINT
startTime = TIME
endTime = TIME
When I wanted to extract some records with a specific performanceId, my query looked something like this:
SELECT performanceId, to_char(startTime, 'HH24MI') AS startTime, to_char(endTime, 'HH24MI') AS endTime FROM Performance WHERE festivalId = $1
The entire result returned from PostgreSQL was concatenated into an object.
Right now the problem is that the JavaScript in my middleware is reading each performanceId found in each object (meaning each unique performanceId) as a string instead of an integer.
e.g. performanceId 1234567890 is printed out as "1234567890" instead of 1234567890.
Is there any way I can convert the performanceId output to an integer?
I tried to convert it to an integer in my SQL SELECT statement,
SELECT CAST(performanceId AS INTEGER) AS performanceId, to_char(startTime, 'HH24MI') AS startTime, to_char(endTime, 'HH24MI') AS endTime FROM Performance WHERE festivalId = $1
but I realise that doesn't work, as INTEGER and BIGINT accept different value ranges.
I don't know whether my post above was misleading, so I decided to put some dummy output here.
This is the unintended output:
{
  "result": [
    {
      "performanceid": "9999999999",
      "starttime": "0900",
      "endtime": "1200"
    }
  ]
}
This is the expected output:
{
  "result": [
    {
      "performanceid": 9999999999,
      "starttime": "0900",
      "endtime": "1200"
    }
  ]
}
The JS BigInt is fairly new, and it's also not part of the JSON standard.
Accordingly, your output is correct: while a string can contain any number of digits/chars, a JSON document with 9007199254740991n as a value would likely fail all over the place (JSON.stringify, for instance, throws a TypeError on BigInt values).
One way to fix this is to use a JSON.parse reviver function, and return whatever you like when the key is performanceid:
JSON.parse(data, (k, v) => k == 'performanceid' ? BigInt(v) : v);
You are, at this point, in charge of making sure the browser/JS engine is compatible with BigInt, and of eventually providing a fallback if it's not (likely keeping it as a string).
If you are using fetch(data).then(b => b.json()), you should bypass it and use fetch(data).then(b => b.text()).then(data => JSON.parse(data, reviver)) instead, so that your fetch operation produces the desired outcome.
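Putting the pieces together, here is a minimal sketch; the payload shape mirrors the dummy output above, and the fallback keeps the value as a string if BigInt is unavailable:

```javascript
// Parse a JSON payload while keeping performanceid as a BigInt.
const payload =
  '{"result":[{"performanceid":"9999999999","starttime":"0900","endtime":"1200"}]}';

// Reviver: convert only the performanceid key; fall back to the original
// string if BigInt is not available in this engine.
const reviver = (key, value) =>
  key === 'performanceid' && typeof BigInt === 'function' ? BigInt(value) : value;

const parsed = JSON.parse(payload, reviver);
console.log(typeof parsed.result[0].performanceid); // "bigint" (where supported)
```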
I have documents in mongoDB that look like this:
{username:String, paymentYear:Int, paymentMonth:Int}
I would like to find the latest document for a username, that is, the one with the date closest to Date.now(). What is the best way of accomplishing this? Is there a Mongo query I can use, or should I write my own code?
Thanks
This is achievable using MongoDB's $dateFromParts operator in an aggregation pipeline.
db.test.aggregate([
  {
    "$match": { "username": "someUser" } // <- filter to the username you're after
  },
  {
    "$addFields": {
      "tempDate": {
        "$dateFromParts": {
          "year": "$paymentYear",
          "month": "$paymentMonth",
        }
      }
    }
  },
  {
    "$sort": { "tempDate": -1 } // change `-1` to `1` for ascending order
  },
  {
    "$limit": 1 // number of documents to be returned, based on the sort order
  },
])
You can implement this in a $project stage instead of the $addFields stage, based on your needs, for better optimization.
Note: this will only work on MongoDB version 3.6 and above.
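Alternatively, since year and month already sort numerically on their own, a compound sort avoids the date conversion entirely. A sketch, where the username value is a placeholder:

```javascript
// Mongo shell sketch — no $dateFromParts needed, just a compound sort
// (the username value "alice" is a placeholder):
//   db.test.find({ username: "alice" })
//          .sort({ paymentYear: -1, paymentMonth: -1 })
//          .limit(1)
// The equivalent compound ordering, illustrated in plain JS:
const docs = [
  { username: "alice", paymentYear: 2020, paymentMonth: 11 },
  { username: "alice", paymentYear: 2021, paymentMonth: 3 },
  { username: "alice", paymentYear: 2021, paymentMonth: 1 },
];
const latest = [...docs].sort(
  (a, b) => b.paymentYear - a.paymentYear || b.paymentMonth - a.paymentMonth
)[0];
console.log(latest.paymentYear, latest.paymentMonth); // 2021 3
```

This also works on MongoDB versions older than 3.6.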
I'm writing a mongo query to show records based on the current time. However, the query returns 0 records when I try to query against date and time. Below is the query:
let now = momenttz.tz(moment(), tz).toDate();
tmpl.listSelectorFilter('scheduledVisits', {
  $gte: now,
  $lte: moment.utc(today, 'MM/DD/YYYY').endOf('week').toDate()
});
Note: If I set the time to zero hours, it works.
How do I change this query in order to make it work? Any help is appreciated.
Does the data you're querying against have timestamps or a dateTime field to compare? As written, you aren't comparing the records to any time, so there is no way for Mongo to know what to filter by.
I would suggest you do a find instead and compare against the appropriate date field within the record:
example:
db.collection.find({
  $and: [ { "data.date": { $gte: now } }, { "data.date": { $lte: endOfWeek } } ]
})
Also keep in mind that in Mongo land you can't use functions the way you would with moment, hence why I used endOfWeek, a variable you set similarly to now:
let now = momenttz.tz(moment(),tz).toDate();
let endOfWeek = moment.utc(today, 'MM/DD/YYYY').endOf('week').toDate()
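If you'd rather not depend on moment for the upper bound, a rough plain-Date equivalent is sketched below (assuming a Sunday-start week, which matches moment's default locale):

```javascript
// End of the current week (Saturday 23:59:59.999), using only the Date API:
const now = new Date();
const endOfWeek = new Date(now);
endOfWeek.setDate(now.getDate() + (6 - now.getDay())); // advance to Saturday
endOfWeek.setHours(23, 59, 59, 999);
console.log(endOfWeek >= now); // true
```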
Because GraphQL doesn't have a built-in DateTime scalar, I have decided to convert all the fields of type DateTime in my MongoDB collections into integers, representing the DateTime as milliseconds. I have about 8,000+ documents that need to be modified, and I created a script to do the work.
The script was supposed to create a new field, publishedID (an integer scalar), to correspond with the published field. When I ran my script, all the documents were overwritten, leaving only the DateTime field (although, as I intended, in milliseconds); all the other fields, such as title, image, body, and subtitle, including other DateTime fields like modified and created, were deleted.
Below is the script I ran:
db.Post.find().forEach(function(myDoc) {
  let currentDate = new Date(myDoc.published);
  print(currentDate);
  db.Post.update(
    { published: currentDate },
    { publishedID: currentDate.valueOf() }
  );
});
I had hoped the ISO DateTimes previously set in the published field would just be converted to milliseconds, and I got that. But I did not want everything else in the document deleted.
You are missing the $set operator in the update query, which is the root cause of all the fields being deleted from the documents:
db.Post.find().forEach(function(myDoc) {
  let currentDate = new Date(myDoc.published);
  print(currentDate);
  db.Post.update(
    { published: currentDate },
    { $set: { publishedID: currentDate.valueOf() } } // $set: {} added
  );
});
The $set operator updates only the fields it is given and leaves the rest of the document intact; without it, the update document replaces all existing fields (except _id) in the matched documents.
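The replace-vs-merge difference can be sketched in plain JS (an analogy for illustration, not the actual server code):

```javascript
// What MongoDB does with the two update documents, simulated in plain JS:
const original = { _id: 1, title: "Hello", published: "2021-01-01" };

// Without $set, the update document REPLACES the matched document
// (only _id survives):
const replaced = { _id: original._id, publishedID: 1609459200000 };

// With $set, the named fields are merged into the existing document:
const merged = { ...original, publishedID: 1609459200000 };

console.log(Object.keys(replaced)); // [ '_id', 'publishedID' ]
console.log(Object.keys(merged));   // [ '_id', 'title', 'published', 'publishedID' ]
```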
I have a program where I request weather data from a server, process the data, and then save it to an mLab account using mongoose. I'm gathering 10 years of data, but the API I'm requesting the data from only allows about a year at a time to be requested.
I'm using findOneAndUpdate to create/update the document for each weather station, but am having trouble updating the arrays within the data object. (Probably not the best way to describe it...)
For example, here's the model:
const stnDataSchema = new Schema(
  {
    station: { type: String, default: null },
    elevation: { type: String, default: null },
    timeZone: { type: String, default: null },
    dates: {},
    data: {}
  },
  { collection: 'stndata' },
  { runSettersOnQuery: true }
)
where the dates object looks like this:
dates: ["2007-01-01",
"2007-01-02",
"2007-01-03",
"2007-01-04",
"2007-01-05",
"2007-01-06",
"2007-01-07",
"2007-01-08",
"2007-01-09"]
and the data object like this:
"data": [
{
"maxT": [
0,
null,
4.4,
0,
-2.7,
etc.....
What I want to happen is that when I run findOneAndUpdate, it finds the document based on the station, and then appends new maxT values and dates to the respective arrays. I have it working for the dates array, but am running into trouble with the data array, as the elements I'm updating are nested.
I tried this:
const update = {
  $set: { 'station': station, 'elevation': elevation, 'timeZone': timeZone },
  $push: { 'dates': datesTest, 'data.0.maxT': testMaxT }
};
StnData.findOneAndUpdate(query, update, { upsert: true },
  function(err, doc) {
    if (err) {
      console.log("error in updateStation", err)
      throw new Error('error in updateStation')
    }
    else {
      console.log('saved')
    }
  })
but got output like this in mLab:
"data": {
"0": {
"maxT": [
"a",
"b",
The issue is that I get a "0" key instead of an array with one element. I tried 'data[0].maxT', but nothing happens when I do that.
The issue is that the first time I run the data for a station, I want to create a new document with data object of the format in my third code block, and then on subsequent runs, once that document already exists, update the maxT array with new values. Any ideas?
You are getting this output:
"data": {
"0": {
"maxT": [
"a",
"b",
because you are upserting the document. Upserting gets a bit complicated when dealing with arrays of documents.
When updating an array, MongoDB knows that data.0 refers to the first element in the array. However, when inserting, MongoDB can't tell if it's meant to be an array or an object. So it assumes it's an object. So rather than inserting ["val"], it inserts {"0": "val"}.
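The same ambiguity can be seen in plain JS: assigning to key 0 on an object versus an array produces different structures, and during an insert MongoDB defaults to the object interpretation:

```javascript
// Assigning to path "0" when the parent is created as an object vs an array:
const asObject = {};
asObject[0] = { maxT: ["a", "b"] }; // what the upsert created: {"0": {...}}
const asArray = [];
asArray[0] = { maxT: ["a", "b"] }; // what was intended: [{...}]

console.log(JSON.stringify(asObject)); // {"0":{"maxT":["a","b"]}}
console.log(JSON.stringify(asArray));  // [{"maxT":["a","b"]}]
```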
Simplest Solution
Don't use an upsert. Insert a document for each new weather station, then use findOneAndUpdate to push values into the arrays in those documents. As long as you insert the arrays correctly the first time, you will be able to push to them without them turning into objects.
Alternative Simple Solution if data Contains Just One Object
From your question, it looks like you only have one object in data. If that is the case, you could make the maxT array top-level instead of a property of a single document in an array. Then it would behave just like dates.
More Complicated MongoDB 3.6 Solution
If you truly cannot do without upserts, MongoDB 3.6 introduced the filtered positional operator $[<identifier>]. You can use this operator to update specific elements in an array which match a query. Unlike the simple positional operator $, the new $[<identifier>] operator can be used to upsert as long as an exact match is used.
You can read more about this operator here: https://docs.mongodb.com/manual/reference/operator/update/positional-filtered/
So your data objects will need to have a field which can be matched exactly on (say name). An example query would look something like this:
let query = {
  _id: 'idOfDocument',
  data: [{ name: 'subobjectName' }] // need this for an exact match
}
let update = { $push: { 'data.$[el].maxT': testMaxT } }
let options = { upsert: true, arrayFilters: [{ 'el.name': 'subobjectName' }] }
StnData.findOneAndUpdate(query, update, options, callbackFn)
As you can see this adds much more complexity. It would be much easier to forget about trying to do upserts. Just do one insert then update.
Moreover, mLab currently does not support MongoDB 3.6, so this method won't be viable on mLab until 3.6 is supported.