How to use MongoDB find without a hexadecimal _id - javascript

I am having a lot of trouble finding a record in my database with the _id: "AAE45/0RQfm/VUrywfb1Gw=="
(eg. db.collection.find( {_id: new BinData(3, "AAE45/0RQfm/VUrywfb1Gw==") }) ).
It works fine using a BinData converter in the mongo console, but refuses to work from inside a javascript file (I am using node.js), even though I have installed the BinData npm package and "required" it.
I have also tried the Binary() function, but it keeps telling me it needs to be hexadecimal or a 12-byte binary or something. .hex, .str and .toString() don't work either.
I found this somewhere:
{"$binary": "AAE45/0RQfm/VUrywfb1Gw==", "$type": "03"}
which looks promising, but I have no idea how to implement it.
I hope this makes sense. Any suggestions would be very much appreciated, if anyone has any insight on what process I should follow (eg: convert to binary, then hex, then use ...) that would be fantastic.

You'll have to convert the base64 string to a byte array, then use Binary to create the corresponding mongodb object. Here's some working sample code that inserts a document with the given id in a mongodb collection:
var MongoClient = require('mongodb').MongoClient;
var Binary = require('mongodb').Binary;

MongoClient.connect("mongodb://localhost:27017/example", function (err, db) {
  if (err) { return console.dir(err); }
  var collection = db.collection('test');
  // decode the base64 string into a buffer
  // (Buffer.from replaces the deprecated new Buffer constructor)
  var buf = Buffer.from("AAE45/0RQfm/VUrywfb1Gw==", 'base64');
  // create a mongo 'binary' object w/ subtype 3
  var uuid = new Binary(buf, 3);
  var doc1 = { 'hello': 'foo bar', '_id': uuid };
  collection.insert(doc1, { w: 1 }, function (err, result) { });
});
You might want to ensure you really want to use subtype 3, because it's the old UUID type.
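The same Binary value should work for the original find as well; a minimal sketch, reusing the driver objects from the snippet above:

// look the document up by its binary _id
var queryId = new Binary(Buffer.from("AAE45/0RQfm/VUrywfb1Gw==", 'base64'), 3);
collection.findOne({ _id: queryId }, function (err, doc) {
  if (err) { return console.dir(err); }
  console.log(doc);
});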

Related

using Javascript and node csv-parse to parse csv file into an array

I have a project where I have to process an input CSV file and store it into an array that I can add to, then print it out into a CSV file. I then use the transaction data for the rest of my project, so being able to complete this part is vital, as testing will be performed with other CSV files.
My issue is that while using csv-parse, if I use the following, it shows the csv objects when I run the .js file in my command terminal, so I know it's parsing, but no matter what I do I cannot get the objects to go into my array variable:
console.table(results);
Please can someone give me a hint as to where I've gone wrong:
var fs = require('fs');
var parse = require('csv-parse');

var transactionValues = []; // need an array to hold transactions

// constructor for transactions
function addData(id, accountType, initiatorType, dateTime, transactions) {
  var data = {
    "AccountID": id,
    "AccountType": accountType,
    "InitiatorType": initiatorType,
    "DateTime": dateTime,
    "TransactionValues": transactions
  }
  transactionValues.push(data); // should add a new line
}

var parser = parse({columns: true}, function (err, results) {
  console.table(results);
  addData(results.index[0].AccountID, results.index[0].AccountType, results.index[0].InitiatorType, results.index[0].DateTime, results.index[0].TransactionValue, 0);
}); // attempted to save the objects into the array but no success

fs.createReadStream(__dirname + '/testData/customer-1234567-ledger.csv').pipe(parser);
console.log(transactionValues); // array is empty
I believe results is already a normal array as it comes back from csv-parse. You are trying to access the first element with results.index[0], but it would just be results[0]. Another issue is that fs.createReadStream(...).pipe(...) is asynchronous. That means your console.log will run before it is done parsing. You would need to put any code that has to run after parsing in the callback of your parse function. Something like this:
var parser = parse({columns: true}, function (err, results) {
  console.table(results);
  // loop through each object parsed from the csv
  // (addData only takes five arguments, so the stray trailing 0 is dropped)
  for (const row of results) {
    addData(row.AccountID, row.AccountType, row.InitiatorType, row.DateTime, row.TransactionValue);
  }
  console.log(transactionValues); // this should be populated properly
  /* Do anything that needs to use transactionValues here */
});
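If you would rather not nest everything inside the callback, one option is to wrap the parse in a Promise and wait for the populated results. A rough sketch, assuming the same csv-parse callback API as above:

function parseTransactions(path) {
  return new Promise(function (resolve, reject) {
    var parser = parse({columns: true}, function (err, results) {
      if (err) return reject(err);
      resolve(results);
    });
    fs.createReadStream(path).pipe(parser);
  });
}

parseTransactions(__dirname + '/testData/customer-1234567-ledger.csv').then(function (results) {
  results.forEach(function (row) {
    addData(row.AccountID, row.AccountType, row.InitiatorType, row.DateTime, row.TransactionValue);
  });
  console.log(transactionValues); // populated here too
});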

Using MongoDB Stitch webhook service function, how can I convert the returned "insertedId" to string?

I'm creating MongoDB Stitch http services (using the 3rd Party Services > Incoming Webhooks > Function Editor) for a simple CRUD application and I am trying to insertOne() document and then return the document id as a string.
The insertion works as expected and I await the result and store it in a variable.
const result = await collection.insertOne(payload.query)
const id = result.insertedId
The problem is trying to convert this id to a string. I'm guessing it's due to my lack of understanding of MongoDB Extended JSON and BSON types, etc. I've read all the parts of the documentation that I thought could provide a solution here.
I've tried the following to convert the return value to a string:
id.toString() // logs as a string, but is an object { $oid: '507c7f79bcf86cd7994f6c0e' }
id.valueOf() // logs as a string, but is an object { $oid: '507c7f79bcf86cd7994f6c0e' }
id['$oid'] // undefined
Object.values(id.insertedId) // throws error: {"message":"'values' is not a function"}
id.str // undefined
result.str // undefined
Not sure what else to try and would really appreciate if someone can point me in the right direction here to provide a solution.
Thank you in advance.
Found a solution after noticing this section of the documentation:
MONGODB STITCH > Functions > Reference > JSON & BSON
const result = await collection.insertOne(payload.query)
const id = JSON.stringify(
  JSON.parse(
    EJSON.stringify(result.insertedId)
  ).$oid
).replace(/\W/g, '');
Seems a bit hacky just for getting the id of the document into a string, but it will work for now.
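Since the $oid field produced by EJSON.stringify is already a plain hex string, the round trip can probably be shortened to a single step (an untested sketch, using the same EJSON global that Stitch functions provide):

const result = await collection.insertOne(payload.query)
const id = JSON.parse(EJSON.stringify(result.insertedId)).$oid; // e.g. '507c7f79bcf86cd7994f6c0e'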

Changing an array entry within a json object

I've pulled JSON from a file using NodeJS's fs.createReadStream() and I'm now having difficulty writing data back into the file (parsing and then stringifying as appropriate).
The Discord bot I'm developing deletes text channels then 'recreates' them (with the same title) to clear chat - it grabs the channel IDs dynamically and puts them in a file, until the channels are deleted.
However, the file-writing procedure keeps ending in errors.
This was my first attempt:
let channels_json = fs.createReadStream()
//let channels_json = fs.readFileSync(`${__dirname}\\..\\json\\channels.json`);
let obj = (JSON.parse(channels_json)).channelsToClear;
let i = 0;
obj.forEach(id => {
  i++;
  if (id === originalId) {
    obj[i] = channela.id;
  }
});
obj += (JSON.parse(channels_json)).infoChannel;
obj += "abc";
json = JSON.stringify(obj);
channels_json.write(json);
This was my second:
let id_to_replace = message.guild.channels.get(channels[channel]).id;
//let channels_json = fs.readFileSync(`${__dirname}\\..\\json\\channels.json`);
let obj;
let channels_json = fs.createReadStream(`${__dirname}\\..\\json\\channels.json`, function (err, data) {
  if (err) throw err;
  obj = JSON.parse(data);
  if (obj["channelsToClear"].indexOf(id_to_replace) > -1) {
    obj["channelsToClear"][obj["channelsToClear"].indexOf(id_to_replace)] = channela.id;
    // then replace the json file with new parsed one
    channels_json.writeFile(`${__dirname}\\..\\json\\channels.json`, JSON.stringify(obj), function () {
      console.log("Successfully replaced channels.json contents");
    });
    //channels_json.end();
  }
});
The intended outcome was to update the 'channelsToClear' array within the json file with new channel IDs. The console/Node output varied, all of it having to do with "create channels with an options object" or "Buffer.write" (all irrelevant), and the json file remained unchanged.
You're using streams incorrectly: you can't write back out through a read stream.
For a simple script like this, streaming is probably overkill. Streams are quite a bit more complicated, and while they're worth it for high-efficiency applications, they're not worth it for something that looks like it is going to run relatively quickly.
Use fs.readFileSync and fs.writeFileSync to get and write the data instead.
As far as your actual searching for and replacing the channel goes, I think either approach would work, but assuming there is only ever going to be one replacement, the second approach is probably better.
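A minimal sketch of that approach, assuming the file layout from the question and the id_to_replace / channela.id variables from your second attempt:

const fs = require('fs');
const file = `${__dirname}/../json/channels.json`;

// read and parse the whole file synchronously
const obj = JSON.parse(fs.readFileSync(file, 'utf8'));

// swap the old channel id for the new one, if present
const idx = obj.channelsToClear.indexOf(id_to_replace);
if (idx > -1) {
  obj.channelsToClear[idx] = channela.id;
  // write the updated structure back out
  fs.writeFileSync(file, JSON.stringify(obj, null, 2));
  console.log("Successfully replaced channels.json contents");
}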

Deleting a row based on ID from a local csv file - javascript

I'm not a super experienced coder, so forgive me if the question is rather too simple.
I have a csv with many rows, and one of its columns is 'id'. How can I remove just one row based on the id (i.e. code should search for id and delete that row)?
I've got the following so far (not too helpful, since on one day I may need to remove id 5 and on another I may need to remove id 2...). Thank you so much!
var fs = require('fs')

fs.readFile(filename, 'utf8', function (err, data) {
  if (err) {
    // check and handle err
  }
  var linesExceptFirst = data.split('\n').slice(1).join('\n');
  fs.writeFile(filename, linesExceptFirst);
});
PS: it must be in javascript as the code is running on a nodejs server
You'll need to parse the CSV, which is simple with Array.prototype.map().
Then you'll need to use Array.prototype.filter() to drop the row with the column value you are after.
It is just a couple of lines of code and you are all set:
var fs = require('fs')

// Set this up someplace
var idToSearchFor = 2;

// read the file
fs.readFile('csv.csv', 'utf8', function (err, data) {
  if (err) {
    // check and handle err
  }
  // Get an array of comma separated lines
  let linesExceptFirst = data.split('\n').slice(1);
  // Turn that into a data structure we can parse (array of arrays)
  let linesArr = linesExceptFirst.map(line => line.split(','));
  // Use filter to keep only the rows whose id column doesn't match,
  // effectively deleting the found row, then join them into a string
  // with new lines (Array#toString re-inserts the commas in each row)
  let output = linesArr.filter(line => parseInt(line[0]) !== idToSearchFor).join("\n");
  // Write out new file
  fs.writeFileSync('new.csv', output);
});
Note that I removed the call to .join() so we can operate on the array created from the call to .split(). The rest is commented.
And finally, a working example can be found here: https://repl.it/#randycasburn/Parse-CSV-and-Find-row-by-column-value-ID
EDIT: The code will now return all rows except the found id, in essence deleting the row (per OP's comment request).
EDIT2: Now outputting to a new CSV file per request.
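Note that data.split('\n').slice(1) drops the header row, so new.csv is written without one. A variant that preserves it (a sketch, assuming the first line is a header):

const [header, ...rows] = data.split('\n');
const kept = rows.filter(line => parseInt(line.split(',')[0]) !== idToSearchFor);
fs.writeFileSync('new.csv', [header, ...kept].join('\n'));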

Node.js / MongoDB / Mongoose: Buffer Comparison

First, a little background:
I'm trying to check to see if an image's binary data has already been saved in Mongo. Given the following schema:
var mongoose = require('mongoose')
  , Schema = mongoose.Schema;

var imageSchema = new Schema({
  mime: String,
  bin: { type: Buffer, index: { unique: true } },
  uses: [{ type: Schema.Types.ObjectId }]
});

module.exports = mongoose.model('Image', imageSchema);
...I want to query to see if an image exists; if it does, add a reference that my object is using it and update it. If it doesn't, I want to create (upsert) it.
In the case where the image does not exist, the code below works perfectly. In the case where it does, the code below does not update the existing document and instead adds another Image document to Mongo. I feel like it is probably a comparison issue between the Mongo Binary type and a node Buffer, but I can't figure out how to properly compare them. Please let me know how to update the below! Thanks!
Image.findOneAndUpdate({
  mime: contentType,
  bin: image
}, {
  $pushAll: {
    uses: [ myObject._id ]
  }
}, {
  upsert: true
}, function (err, image) {
  if (err)
    console.log(err);
  // !!!image is created always, never updated!!!
});
Mongoose converts Buffer values destined for storage to mongodb Binary, and it performs the appropriate casts when doing queries.
The expected behavior is also checked in unit tests (including the storage and retrieval of a node.js Buffer).
Are you sure you are passing a node.js Buffer?
In any case I think the best approach to handle the initial problem (check if an image is already in the db) would be storing a strong hash digest (sha1, sha256, ...) of the binary data and check that (using the crypto module).
When querying, as a preliminary test you could also check the binary length to avoid unnecessary computations.
For an example of how to get the digest for your image before storing/querying it:
var crypto = require('crypto');
...
// be sure image is a node.js Buffer
var image_digest = crypto.createHash('sha256');
image_digest.update(image);
image_digest = image_digest.digest('base64');
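That digest could then drive the exists-or-insert check instead of the raw binary. A sketch, assuming a hypothetical bin_sha256 string field added to the schema with a unique index:

Image.findOneAndUpdate({
  bin_sha256: image_digest
}, {
  $push: { uses: myObject._id },
  $setOnInsert: { mime: contentType, bin: image }
}, {
  upsert: true
}, function (err, image) {
  if (err)
    console.log(err);
});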
It is not a good idea to query for your image by the node.js Buffer that contains the image data. You're right that it's probably an issue between the BSON binary data type and a node Buffer, but does your application really require such a comparison?
Instead, I'd add an imageID or slug field to your schema, add an index to this field, and query on it instead of bin in your findOneAndUpdate call:
var imageSchema = new Schema({
  imageID: { type: String, index: { unique: true } },
  mime: String,
  bin: Buffer,
  uses: [{ type: Schema.Types.ObjectId }]
});
The hash does work; another filter I have used is the EXIF data for the image.
As this is structured information, if you have a match on the EXIF data, you could then go to the next step of checking for a match on the hash or file size...
There are heaps of node modules to get the EXIF data nice and easily for your storage :)
example code to get exif data for node
