NodeJS calculating difference between timestamps based on specific conditions - javascript

I have an interesting problem.
I have a SQL database with 50,000+ entries containing data that looks as follows when queried with KnexJS:
[
  {
    id: 65532,
    status: 0.00,
    timestamp: 1656124205000,
  },
  {
    id: 65533,
    status: 49.81,
    timestamp: 1656124503000,
  },
  {
    id: 65534,
    status: 0.00,
    timestamp: 1656124909000,
  },
  // ... 60000+ more entries
]
I'm using Knex as the query builder.
I want to be able to find every object where status is 0, then get the following object where status is not 0, and then calculate the time difference for each occurrence.
Usually 0.00 values won't be consecutive, but there's no guarantee of that; otherwise I would have just grabbed the id/index where status is 0.00 and added 1.
Any suggestions on where to get started with this kind of logic?
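A minimal sketch of the pairing logic, assuming the rows have already been fetched into an array sorted by timestamp ascending (field names as in the sample above):

// Pairs each status === 0 entry with the next entry whose status is non-zero
// and records the elapsed time between them. Runs of consecutive zeros are
// measured from the first zero in the run; adjust if each zero should be
// paired individually.
function zeroToNonZeroDurations(rows) {
  const durations = [];
  for (let i = 0; i < rows.length; i++) {
    if (rows[i].status !== 0) continue;
    let j = i + 1;
    while (j < rows.length && rows[j].status === 0) j++;
    if (j < rows.length) {
      durations.push({
        fromId: rows[i].id,
        toId: rows[j].id,
        diffMs: rows[j].timestamp - rows[i].timestamp,
      });
      i = j - 1; // resume scanning after the matched pair
    }
  }
  return durations;
}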

Related

Manipulating MongoDB values for a stock chart

I currently have a simple line chart that is generated from a 'value' and a 'date'. This is pulled straight from my database (MongoDB):
{
  product: 'title',
  value: 123,
  date: ISODate("2022-08-25T06:30:12.713Z")
}
I would like to now change this chart to a stock chart, but instead of a 'value' and 'date' I'll need an 'open', 'close', 'high' and 'low'.
{
  product: 'title',
  open: 123,
  close: 125,
  high: 130,
  low: 120
}
What is the best and most performant way for me to approach this?
a) Mass data manipulation/processing on the front end after receiving the data from the API?
b) A huge aggregation query to calculate and group every price, for every hour that has passed since October?
c) Mass database update retrofitting and calculating all of the values and storing them as the fields I need?
d) Something else I've not considered?
Any advice greatly appreciated.
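If option (b) looks attractive, here is a rough sketch of an hourly OHLC aggregation. The collection name prices is an assumption, the fields follow the sample document above, and $dateTrunc requires MongoDB 5.0+:

// Groups price points into hourly buckets and derives open/close/high/low.
db.prices.aggregate([
  { $sort: { date: 1 } }, // $first/$last depend on document order
  {
    $group: {
      _id: {
        product: '$product',
        hour: { $dateTrunc: { date: '$date', unit: 'hour' } },
      },
      open: { $first: '$value' },
      close: { $last: '$value' },
      high: { $max: '$value' },
      low: { $min: '$value' },
    },
  },
  { $sort: { '_id.hour': 1 } },
]);

Whether to run this on demand, materialize it with $merge, or precompute the fields at write time depends on how often the chart is read versus how often new prices arrive.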

How do I do a MongoDB left join, keeping documents with no match on the right?

I'm a little new to this, so forgive me if this is actually trivial. I'm trying to do a left outer join in MongoDB/NodeJS. I would like all matching documents from the left table, even if they have no match on the right. I have a collection of questions:
{_id: 'question1', text: 'This is a question.'}
{_id: 'question2', text: 'This is another question.'}
And a collection of responses, each tied to a user and a question:
{_id: 'response1', QID: 'question1', UID: 'player1', response: 'This is my answer.'}
Now, I'd like to get a list of questions and a user's response to each one, including ones where there is no recorded response, so what I'd want from the documents above might be...
{_id: 'question1', text: 'This is a question.', response: {_id: 'response1', QID: 'question1', UID: 'player1', response: 'This is my answer'}}
{_id: 'question2', text: 'This is another question.', response: []}
Is there a way to do this in the aggregation pipeline? When I use $lookup to join responses to questions and then $match the UID, the second question disappears because there's no response tied to it (and thus it fails to satisfy the UID match).
Edit: another thing I tried was to use a let/pipeline in the lookup stage:
const month = 11;
const year = 2020;
const UID = 'aaaaaaa';
// Only get responses by UID 'aaaaaaa'
const myResponses = await Question.aggregate([
  { $match: { year, month } },
  {
    $lookup: {
      from: Response.collection.name,
      let: { questionID: '$_id' },
      pipeline: [
        {
          $match: {
            $expr: {
              $and: [{ $eq: ['$$questionID', '$QID'] }, { $eq: ['$UID', UID] }],
            },
          },
        },
      ],
      as: 'Response',
    },
  },
]);
This was close, but for some reason matching the UIDs doesn't seem to work, as it returns an empty array for the Response on every question. Taking out the last $eq condition and just matching $$questionID and $QID gets every response to every question, as I would've expected, but trying to check for equality against a constant isn't working.
Thanks in advance!
Per the docs (https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/), $lookup "Performs a left outer join".
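As a hedged side note, one variation worth trying in the sub-pipeline is to match the constant UID as a plain query predicate next to the $expr, and to double-check that the stored UID values have the same type as the constant (string vs ObjectId), since a type mismatch makes the equality fail silently:

// Sketch of an alternative $lookup sub-pipeline: the join key is compared
// via $expr, while the constant UID is matched as a plain field predicate.
{
  $lookup: {
    from: Response.collection.name,
    let: { questionID: '$_id' },
    pipeline: [
      {
        $match: {
          UID, // constant from the enclosing scope
          $expr: { $eq: ['$QID', '$$questionID'] },
        },
      },
    ],
    as: 'Response',
  },
},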

update multiple different documents in mongodb (nodejs backend) [duplicate]

This question already has answers here:
MongoDB: How to update multiple documents with a single command?
(13 answers)
Closed 3 years ago.
I looked at other questions and I feel mine was different enough to ask.
I am sending a (potentially) large amount of information back to my backend, here is an example data set:
[
  {
    orders: [Array],
    _id: '5c919285bde87b1fc32b7553',
    name: 'Test',
    date: '2019-03-19',
    customerName: 'Amego',
    customerPhone: '9991112222',
    customerStreet: 'Lost Ave',
    customerCity: 'WestZone',
    driver: 'CoolCat',
    driverReq: false, // this is always false when it is ready to print
    isPrinted: false, // < this is important
    deliveryCost: '3',
    total: '38.48',
    taxTotal: '5.00',
    finalTotal: '43.48',
    __v: 0
  },
  {
    orders: [Array],
    _id: '5c919233bde87b1fc32b7552',
    name: 'Test',
    date: '2019-03-19',
    customerName: 'Foo',
    customerPhone: '9991112222',
    customerStreet: 'Found Ave',
    customerCity: 'EastZone',
    driver: 'ChillDog',
    driverReq: false, // this is always false when it is ready to print
    isPrinted: false, // < this is important
    deliveryCost: '3',
    total: '9.99',
    taxTotal: '1.30',
    finalTotal: '11.29',
    __v: 0
  },
  {
    orders: [Array],
    _id: '5c91903b6e0b7f1f4afc5c43',
    name: 'Test',
    date: '2019-03-19',
    customerName: 'Boobert',
    customerPhone: '9991112222',
    customerStreet: 'Narnia',
    customerCity: 'SouthSzone',
    driver: 'SadSeal',
    driverReq: false, // this is always false when it is ready to print
    isPrinted: false, // < this is important
    deliveryCost: '3',
    total: '41.78',
    taxTotal: '5.43',
    finalTotal: '47.21',
    __v: 0
  }
]
My front end can find all the orders with isPrinted: false. I then allow the end user to 'print' all the orders that are prepared, at which point I need to change isPrinted to true so that when I pull up the next batch I won't have reprints.
I was looking at db.test.updateMany({foo: "bar"}, {$set: {isPrinted: true}}), and I currently allow each order to set a new driver, which I update by:
Order.update(
  { _id: mongoose.Types.ObjectId(req.body.id) },
  {
    $set: {
      driver: req.body.driver,
      driverReq: false
    }
  }
)
which is pretty straightforward, as only one order comes back at a time.
I have considered having my front end do a forEach and post each order individually, then update isPrinted one at a time, but that seems quite inefficient. Is there an elegant solution within Mongo for this?
I'm not sure how I would use updateMany considering each _id is unique, unless I grab all the orders that are both driverReq: false and isPrinted: false (because that is the case where they are ready to print).
I found a solution, which was in fact to use updateMany:
Order.updateMany(
  { isPrinted: false, driverReq: false },
  {
    $set: {
      isPrinted: true
    }
  }
)
This covers the special case where both flags are false, which is exactly when isPrinted needs to be flipped to true. But I do wonder if there is a way to iterate over multiple document _ids with ease.
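If the batch to update is identified by specific _ids collected on the front end (rather than by the shared driverReq/isPrinted flags), a sketch using $in keeps it to a single call; orderIds here is an assumed array of id strings posted by the client:

// Mark a specific batch of orders as printed by their _ids.
const orderIds = req.body.orderIds.map(id => mongoose.Types.ObjectId(id));

await Order.updateMany(
  { _id: { $in: orderIds } },
  { $set: { isPrinted: true } }
);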

Split object into multiple ordered arrays based on the identity of one specific value

Not quite sure if that title was the best I could do.
I'm pretty new to JS and keep running into problems ... I hope some of you have the time to give me a pointer or two on this scenario.
I have several objects that look pretty much like this, except that there are 28 instances of every "room" type. I need to split this object into multiple objects, one for each "room" type. In some of my objects there is only one room type, whilst in others there are 3 or 4.
[
  {
    id: 1,
    created: 2018-12-29T13:18:05.788Z,
    room: 'Double Room',
    type: 'Standard',
    price: 500
  },
  {
    id: 29,
    created: 2018-12-29T13:18:05.788Z,
    room: 'Twin Room',
    type: 'Standard',
    price: 500
  },
  {
    id: 58,
    created: 2018-12-29T13:18:05.788Z,
    room: 'Family Room',
    type: 'Standard',
    price: 900
  },
]
Oh, and it's important that the instances don't "lose" their order in the array, since it's date related and needs to be presented in ascending order. And vanilla JS only.
Is array.map() the function I'm looking for to solve this problem? Is it possible to do this without iteration?
My final goal is to create some kind of generic function that can sort this out for all my objects.
And guys: happy holidays!
You could take an object as a hash table for the wanted groups, then iterate the objects and assign each one to its group. If the group does not exist, create a new group with an array.
function groupBy(array, key) {
  var groups = Object.create(null);
  array.forEach(o => (groups[o[key]] = groups[o[key]] || []).push(o));
  return groups;
}

var data = [{ id: 1, created: '2018-12-29T13:18:05.788Z', room: 'Double Room', type: 'Standard', price: 500 }, { id: 29, created: '2018-12-29T13:18:05.788Z', room: 'Twin Room', type: 'Standard', price: 500 }, { id: 58, created: '2018-12-29T13:18:05.788Z', room: 'Family Room', type: 'Standard', price: 900 }],
    groupedByRoom = groupBy(data, 'room');

console.log(groupedByRoom);
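For reference, with the sample data above the result is keyed by room name, and each group's array keeps the original (date-ascending) order, since forEach visits items in array order:

{
  'Double Room': [{ id: 1, created: '2018-12-29T13:18:05.788Z', room: 'Double Room', type: 'Standard', price: 500 }],
  'Twin Room': [{ id: 29, created: '2018-12-29T13:18:05.788Z', room: 'Twin Room', type: 'Standard', price: 500 }],
  'Family Room': [{ id: 58, created: '2018-12-29T13:18:05.788Z', room: 'Family Room', type: 'Standard', price: 900 }]
}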

Prevent Javascript function running out of memory because too many objects

I'm building a web scraper in nodeJS that uses request and cheerio to parse the DOM. While I am using node, I believe this is more of a general javascript question.
tl;dr - creating ~60,000-100,000 objects uses up all of my computer's RAM and I get an out-of-memory error in Node.
Here's how the scraper works. It uses loops within loops; I've never designed anything this complex before, so there might be far better ways to do this.
Loop 1: Creates 10 objects in an array called 'sitesArr'. Each object represents one website to scrape.
var sitesArr = [
  {
    name: 'store name',
    baseURL: 'www.basedomain.com',
    categoryFunct: '(function(){ // do stuff })();',
    gender: 'mens',
    currency: 'USD',
    title_selector: 'h1',
    description_selector: 'p.description'
  },
  // ... x10
]
Loop 2: Loops through 'sitesArr'. For each site it goes to the homepage via 'request' and gets a list of category links, usually 30-70 URLs. Appends these URLs to the current 'sitesArr' object to which they belong, in an array property whose name is 'categories'.
var sitesArr = [
  {
    name: 'store name',
    baseURL: 'www.basedomain.com',
    categoryFunct: '(function(){ // do stuff })();',
    gender: 'mens',
    currency: 'USD',
    title_selector: 'h1',
    description_selector: 'p.description',
    categories: [
      {
        name: 'shoes',
        url: 'www.basedomain.com/shoes'
      },
      {
        name: 'socks',
        url: 'www.basedomain.com/socks'
      } // x 50
    ]
  },
  // ... x10
]
Loop 3: Loops through each 'category'. For each URL it gets a list of product links and puts them in an array. Usually ~300-1000 products per category.
var sitesArr = [
  {
    name: 'store name',
    baseURL: 'www.basedomain.com',
    categoryFunct: '(function(){ // do stuff })();',
    gender: 'mens',
    currency: 'USD',
    title_selector: 'h1',
    description_selector: 'p.description',
    categories: [
      {
        name: 'shoes',
        url: 'www.basedomain.com/shoes',
        products: [
          'www.basedomain.com/shoes/product1.html',
          'www.basedomain.com/shoes/product2.html',
          'www.basedomain.com/shoes/product3.html',
          // x 300
        ]
      }, // x 50
    ]
  },
  // ... x10
]
Loop 4: Loops through each of the 'products' array, goes to each URL and creates an object for each.
var product = {
  infoLink: "www.basedomain.com/shoes/product1.html",
  description: "This is a description for the object",
  title: "Product 1",
  Category: "Shoes",
  imgs: ['http://foo.com/img.jpg', 'http://foo.com/img2.jpg', 'http://foo.com/img3.jpg'],
  price: 60,
  currency: 'USD'
}
Then, for each product object I'm shipping them off to a MongoDB function which does an upsert into my database
THE ISSUE
This all worked just fine, until the process got large. I'm creating about 60,000 product objects every time this script runs, and after a little while all of my computer's RAM is being used up. What's more, after getting about halfway through my process I get the following error in Node:
FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory
I'm very much of the mind that this is a code design issue. Should I be "deleting" the objects once I'm done with them? What's the best way to tackle this?
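One common way to tackle this, sketched below: upsert each product as soon as it is scraped, with bounded concurrency, instead of accumulating all ~60,000 product objects inside sitesArr first. scrapeProduct and upsertProduct are hypothetical placeholders for the existing request/cheerio parsing and the MongoDB upsert function:

// Process product URLs in small batches so only a handful of product
// objects exist in memory at any one time; finished batches become
// garbage-collectable.
async function processProducts(productUrls, batchSize = 10) {
  for (let i = 0; i < productUrls.length; i += batchSize) {
    const batch = productUrls.slice(i, i + batchSize);
    const products = await Promise.all(batch.map(url => scrapeProduct(url)));
    await Promise.all(products.map(product => upsertProduct(product)));
  }
}

The same idea applies one level up: hold category and product URLs only long enough to visit them, rather than appending everything onto the sitesArr objects.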
