"Bulk" Updating with Postgres DB and JS/Knex/Express Question - javascript

I have an update endpoint: when an incoming request contains a site name that matches any site name in my job sites table, I change the status of all matching DB entries to "Pending Transfer" and essentially clear their site location data.
I've been able to make this work with the following:
async function bulkUpdate(req, res) {
  const site = req.body.data;
  const data = await knex('assets')
    // match rows whose JSON location.site equals the incoming site name
    .whereRaw(`location ->> 'site' = ?`, [site.physical_site_name])
    .update({
      status: "Pending Transfer",
      location: {
        site: site.physical_site_name,
        site_loc: { first_octet: site.first_octet, mdc: '', shelf: '', unit: '' } // remove IP address
      }
      // history: ''
    }) // todo: update history as well
    .returning('*')
    .then((results) => results[0]);
  res.status(200).json({ data });
}
I also want to update history (any action we ever take on an object like a job site is stored in a JSON column, basically used as an array).
As you can see, history is commented out, but since this function essentially "sweeps" over all job sites that match the criteria and makes the change, I would also like to "push" an entry onto the existing history column as well. I've done this in other situations by destructuring the existing history data and adding the new entry, etc. But since we are "sweeping" over the data, I'm wondering if there is a way to just push this data onto that array without having to pull each row's history data via destructuring?
The shape of an entry in the history column is like so:
[{"action_date":"\"2022-09-06T22:41:10.232Z\"","action_taken":"Bulk Upload","action_by":"Davi","action_by_id":120,"action_comment":"Initial Upload","action_key":"PRtW2o3OoosRK9oiUUMnByM4V"}]
So ideally I would like to "push" a new object onto this array without losing (or overwriting) the previous data.
I'm newer at this, so thank you for all the help.

I had to convert the column from json to jsonb type, but this did the trick (with the concat operator)...
history: knex.raw(`history || ?::jsonb`, JSON.stringify({ newObj: newObjData }))
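For context, here is a minimal sketch of how that raw expression could sit inside the bulk update above. The new entry's field values are placeholders mirroring the history shape shown earlier, not values from the original code:

const newEntry = {
  action_date: new Date().toISOString(),
  action_taken: "Bulk Transfer",              // placeholder label for this sweep
  action_by: "Davi",                          // placeholder actor
  action_by_id: 120,
  action_comment: "Site cleared for transfer",
  action_key: "PRtW2o3OoosRK9oiUUMnByM4V"     // placeholder key
};

await knex('assets')
  .whereRaw(`location ->> 'site' = ?`, [site.physical_site_name])
  .update({
    status: "Pending Transfer",
    // '||' on jsonb appends the new object to the existing array
    // without reading the old history back into JavaScript first
    history: knex.raw(`history || ?::jsonb`, JSON.stringify(newEntry))
  })
  .returning('*');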

Related

Upsert in prisma without ID creates duplicate records (promise error?)

I need to insert records into a PG database from a JSON file with 500k records.
Since the file is huge, I'm creating a read stream and using JSONStream.parse to send JSON objects through a pipe.
So far so good. This is not the problem, I'm just providing context.
After parsing each object, I need to insert the information using Prisma, but I cannot insert a record if a certain field is already in the table. So the first thing I think of is that I should use an upsert.
The problem is that this field is not a unique key of that table, therefore I cannot use it in the where clause of the prisma upsert.
Then, I did the following:
await prisma.pets
  .findFirst({
    where: { name: nameFromJson },
  })
  .then(async (existing_pet) => {
    if (!existing_pet) {
      await prisma.pets.create({
        data: {
          name: nameFromJson,
          legs: numberOfLegs,
          isAlive: isAlive,
        },
      })
    }
  })
  .catch((error) => {
    throw new Error(error)
  })
My idea is to find a record with the same field first and, when that promise resolves, pass the result on so I can check it; if the record does not exist, then just fire the create.
But I keep getting duplicates in the table.
I'd like to understand what I'm doing wrong. And of course, what would be the right way to proceed in a scenario like this.

Sorting Firebase Data Based on Child Values

I'm trying to sort my Firebase query by the timestamps on each post child. Instead, I'm just getting the data as it's stored in the database, unsorted. I'm using the firebase npm package.
The data is structured as follows:
posts
  -Lsx-tFbXe83gANXP3TD
    timestamp: 1466171493193
  -Lsx-sWzXe83gANWNM3R
    timestamp: 1466171493111
Here is my javascript code that I wrote using: https://firebase.google.com/docs/database/web/lists-of-data
firebase.database()
  .ref("posts")
  .orderByChild("timestamp")
  .on("value", function(snapshot) {
    _this.setState({
      posts: Object.values(snapshot.val()),
      loading: false
    });
  });
Thanks in advance!
The snapshot you get back contains three pieces of information about the child nodes that match your query:
The key
The value
Their relative position to each other
As soon as you call snapshot.val() all information about ordering is lost, since a JSON object can only contain keys and values.
To maintain the order, you'll want to convert the information to an array:
var values = [];
snapshot.forEach(function(child) {
  values.push(child.val());
});
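Putting that together with the listener from the question (a small sketch that keeps the component's _this.setState usage), the array preserves the query's ordering:

firebase.database()
  .ref("posts")
  .orderByChild("timestamp")
  .on("value", function(snapshot) {
    // forEach visits children in query order, so the array keeps the sort
    var posts = [];
    snapshot.forEach(function(child) {
      posts.push(child.val());
    });
    _this.setState({ posts: posts, loading: false });
  });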

Generate a custom unique id and check it does not exist before creating a new user with MySQL and sequelize

I'm trying to create a unique id that is 8 characters long for each new user added to a MySQL database. I am using Sequelize along with Express to create users. I've created my own custom function, idGen(), that simply returns a randomized 8-character string. Using the Express router I can handle/validate all the form data used to create a new user. The issue I am having is that when I generate a new ID, I want to check that the ID does not already exist in the database. So far I have this solution:
Users.findAll().then( data => {
  tableData = data.map(id => id.get('id'));
  while( tableData.includes(uid) ){
    try {
      uid = idGen(8);
    } catch( error ){
      return res.status(400).json( error )
    }
  }
}).then( () => {
  // return the create promise so the next .then receives the new user
  return Users.create({
    id: uid,
    name: req.body.name,
    email: req.body.email
  })
}).then( user => res.json(user) );
This block of code is actually working and saving the new user in the DB, but I am almost certain that this is not the best/right way of doing it. Is anyone able to point me in the right direction and show me a better/proper way to check the randomly generated ID, and re-run idGen (if needed) in a loop, before adding a new user?
Many Thanks!
Instead of finding all the rows and then filtering in JavaScript, why don't you query the database for the generated id right away? (A sketch of that approach follows the pros/cons list below.)
An alternative way I can think of is a probabilistic filter such as a Bloom or cuckoo filter; the false positive rate should be low.
- Load the existing ids into Redis, probably with RedisBloom (https://github.com/RedisBloom/RedisBloom).
- Check each newly generated id against the Bloom filter.
- If it (probably) exists, re-generate the id; if not, insert. There can be false positives, but the rate is low and you can handle them the same way.
Pros:
- No need to hit the database on every check.
- Checking a Bloom filter is probably much faster than a DB query.
- Scaling Redis is easier than scaling the DB.
Cons:
- You need Redis and RedisBloom.
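Going back to the first suggestion, a minimal sketch of querying the database directly with Sequelize might look like the following; idGen and Users come from the question, and the unique-constraint note is an assumption about the table's schema:

// keep generating until the id is not found; findOne only fetches one row
async function generateUniqueId() {
  let uid = idGen(8);
  while (await Users.findOne({ where: { id: uid } })) {
    uid = idGen(8);
  }
  return uid;
}

// usage inside an async route handler
const uid = await generateUniqueId();
// note: two concurrent requests could still race; a unique constraint
// (or primary key) on id lets the database reject the loser
const user = await Users.create({
  id: uid,
  name: req.body.name,
  email: req.body.email,
});
res.json(user);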

firebase simple - data structure questions

I have been reading a little about how to structure your Firebase database, and I understand that you need to split your data into pieces so you don't force the client to download all of the 'users' data.
So in this example you will get all of the users' data when you write
ref.child('users').once('value', function(snap)...
/users/uid
/users/uid/email
/users/uid/messages
/users/uid/widgets
but what if you specifically write the path to the location instead, like
ref.child('users/uid/email').once('value', function(snap)...
Will you still get all the user's data, or only the data in email?
In Firebase, you set the ref to be the reference to your database (the whole database), and you then have methods to walk down through each piece of data in that database object. A good practice is to make the ref point at the root of your database and work from there to reach whatever you need.
// will select the whole db
const firebaseRef = firebase.database().ref();
// will select the whole app object of your db
const firebaseRef = firebase.database().ref().child('app');
// will select the whole users object of your db
const firebaseRef = firebase.database().ref().child('app/users');
So, it is a good practice to set a variable like firebaseRef to be your whole firebase db and then iterate from there.
Now, if you write:
firebaseRef.child(`users/${uid}/email`).once('value').then(snapshot => {
  console.log('User email: ', snapshot.val());
}, e => {
  console.log('Unable to fetch value');
});
Yes, you will get only what you're asking for (just the email value), but you need to use the child method to reach the objects under your Firebase ref.

Mongoose - Updating a referenced Document when saving

If I have a Schema which has an Array of references to another Schema, is there a way I can update both Documents with one endpoint?
This is my Schema:
CompanySchema = new Schema({
  addresses: [{
    type: Schema.Types.ObjectId,
    ref: 'Address'
  }]
});
I want to send a Company with the full Address object to /companies/:id/edit. With this endpoint, I want to edit attributes on Company and Address at the same time.
In Rails you can use something like nested attributes to do one big UPDATE call, and it will update the Company and update or add the Address as well.
Any idea how would you do this in Mongoose?
Cascade saves are not natively supported in Mongoose (issue).
But there are plugins (example: cascading-relations) that implement this behavior on nested populate objects.
Bear in mind that MongoDB is not a fully transactional database, and the "big save" is achieved with multiple insert()/update() calls, so you (or the plugin) have to handle errors and roll back.
Example of cascade save:
company.save()
.then(() => Promise.all(company.addresses.map(address => {
/* update fkeys if needed */
return address.save()
}))
.catch(err => console.error('something went wrong...', err))
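For a fuller picture, here is a hedged sketch of what that cascade could look like inside the /companies/:id/edit endpoint; the request body shape and the index-based matching of addresses are assumptions for illustration, not part of the original answer:

app.put('/companies/:id/edit', async (req, res) => {
  try {
    const company = await Company.findById(req.params.id).populate('addresses');

    // apply incoming company attributes (shape of req.body is assumed)
    company.set(req.body.company);

    // apply incoming attributes to each populated address document
    (req.body.addresses || []).forEach((attrs, i) => {
      if (company.addresses[i]) company.addresses[i].set(attrs);
    });

    // save the company first, then cascade to the addresses
    await company.save();
    await Promise.all(company.addresses.map(address => address.save()));

    res.json(company);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});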
