Deleting a row based on ID from a local csv file - javascript

I'm not a super experienced coder, so forgive me if the question is rather too simple.
I have a csv with many rows, and one of its columns is 'id'. How can I remove just one row based on the id (i.e. code should search for id and delete that row)?
I've got the following so far, but it's not too helpful, since on one day I may need to remove id 5 and on another I may need to remove id 2... Thank you so much!
var fs = require('fs')
fs.readFile(filename, 'utf8', function(err, data)
{
if (err)
{
// check and handle err
}
var linesExceptFirst = data.split('\n').slice(1).join('\n');
fs.writeFile(filename, linesExceptFirst, function(err) { /* check and handle err */ });
});
PS: it must be in JavaScript, as the code is running on a Node.js server.

You'll need to parse the CSV, which is simple with Array.prototype.map().
Then you'll need to use Array.prototype.filter() to find the column value you are after.
It's just a couple of lines of code and you're all set:
var fs = require('fs')
// Set this up someplace
var idToSearchFor = 2;
// read the file
fs.readFile('csv.csv', 'utf8', function(err, data)
{
if (err)
{
// check and handle err
}
// Get an array of comma-separated lines
let linesExceptFirst = data.split('\n').slice(1);
// Turn that into a data structure we can work with (an array of arrays)
let linesArr = linesExceptFirst.map(line => line.split(','));
// Use filter to keep only the lines whose ID doesn't match,
// in effect deleting the found row,
// then join them back into a single string with newlines
let output = linesArr.filter(line => parseInt(line[0], 10) !== idToSearchFor).map(line => line.join(',')).join('\n');
// Write out new file
fs.writeFileSync('new.csv', output);
});
Note that I removed the call to .join() so we can operate on the array created from the call to .split(). The rest is commented.
And finally, a working example can be found here: https://repl.it/#randycasburn/Parse-CSV-and-Find-row-by-column-value-ID
EDIT: The code will now return all rows except the one with the matching id, in essence deleting that row (per the OP's comment request).
EDIT2: Now outputting to new CSV file per request.
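If you also want to keep the header row and overwrite the original file instead of writing new.csv, a minimal variation could look like this (just a sketch; it assumes a simple comma-only CSV with no quoted fields and the id in the first column):
var fs = require('fs');
var idToSearchFor = 2;
fs.readFile('csv.csv', 'utf8', function (err, data) {
  if (err) throw err;
  var lines = data.split('\n');
  var header = lines[0];
  // Keep every data row whose first column is not the id we want to delete
  var kept = lines.slice(1).filter(function (line) {
    return line.trim() !== '' && parseInt(line.split(',')[0], 10) !== idToSearchFor;
  });
  // Write the header plus the remaining rows back over the original file
  fs.writeFileSync('csv.csv', [header].concat(kept).join('\n'));
});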

Related

using Javascript and node csv-parse to parse csv file into an array

I have a project where I have to process an input CSV file and store it in an array that I can add to, then print it back out to a CSV file. I then use the transaction data for the rest of my project, so being able to complete this part is vital, as testing will be performed with other CSV files.
My issue is that while using csv-parse, calling console.table(results); shows the CSV objects when I run the .js file in my terminal, so I know it's parsing, but no matter what I do I cannot get the objects into my array variable.
Can someone please give me a hint as to where I've gone wrong:
var fs = require('fs');
var parse = require('csv-parse');
var transactionValues = []; //Need an array to hold transactions
//constructor for transactions
function addData (id, accountType, initiatorType, dateTime, transactions) {
var data = {
"AccountID" : id,
"AccountType" : accountType,
"InitiatorType" : initiatorType,
"DateTime" : dateTime,
"TransactionValues" : transactions
}
transactionValues.push(data); //should add a new line
}
var parser = parse({columns: true}, function (err, results) {
console.table(results);
addData(results.index[0].AccountID, results.index[0].AccountType, results.index[0].InitiatorType, results.index[0].DateTime, results.index[0].TransactionValue, 0);
}); //attempted to save the objects into the array but no success
fs.createReadStream(__dirname+'/testData/customer-1234567-ledger.csv').pipe(parser)
console.log(transactionValues); // array is empty
I believe results is already a normal array as it comes back from csv-parse. You are trying to access the first element with results.index[0], but it would just be results[0]. Another issue is that fs.createReadStream(...).pipe(...) is asynchronous. That means your console.log will run before it is done parsing. You would need to put any code that has to run after parsing in the callback of your parse function. Something like this:
var parser = parse({columns: true}, function (err, results) {
console.table(results);
for (const row of results) { //loop through each object parsed from the csv
addData(row.AccountID, row.AccountType, row.InitiatorType, row.DateTime, row.TransactionValue, 0);
}
console.log(transactionValues); // this should be populated properly
/* Do anything that needs to use transactionValues here */
});
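If other parts of the program need transactionValues after parsing, one option is to wrap the whole thing in a Promise so callers can wait for it; a rough sketch, assuming the same callback-style parse() and addData() shown above:
function loadTransactions(path) {
  return new Promise(function (resolve, reject) {
    var parser = parse({columns: true}, function (err, results) {
      if (err) return reject(err);
      for (const row of results) {
        addData(row.AccountID, row.AccountType, row.InitiatorType, row.DateTime, row.TransactionValue);
      }
      resolve(transactionValues);
    });
    fs.createReadStream(path).pipe(parser);
  });
}
// Anything that needs the parsed data goes after the promise resolves
loadTransactions(__dirname + '/testData/customer-1234567-ledger.csv')
  .then(rows => console.log(rows))
  .catch(err => console.error(err));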

BigQuery stream to front-end with Express

I'm trying to read a query from BigQuery and stream it to the front-end. In Node.js-land with Express, this would be:
app.get('/endpoint', (req, res) => {
bigQuery.createQueryStream(query).pipe(res);
});
However, createQueryStream() does not produce a plain text/binary Node.js stream; it is an object-mode stream that emits table rows, so piping it straight to the response fails:
(node:21236) UnhandledPromiseRejectionWarning: TypeError [ERR_INVALID_ARG_TYPE]: The first argument must be one of type string or Buffer. Received type object
This is confirmed in the official documentation:
bigquery.createQueryStream(query)
.on('data', function(row) {
// row is a result from your query.
})
So, is there a way to stream BigQuery data to the front-end? I've thought of two potential solutions, but wanted to know if anyone knows a better way:
JSON.stringify() each row and return JSONL instead of plain JSON. This adds a front-end burden to decode it, but makes it fairly easy on both sides.
Move to the REST API and do actual streaming with request, like: request(url, { body: { query, params } }).pipe(res) (or whatever the specific API is; I haven't dug into it yet).
I was confused that a Node.js library that says that it does streaming doesn't work with Node.js native streams, but this seems to be the case.
BigQuery is intended to be used with a wide array of client libraries written for different programming languages, so it does not return Node.js-specific data structures, but rather more general ones that are common to almost any structured language, such as plain objects. To answer your question: yes, there is a way to stream BigQuery data to the front-end, but it comes down to personal choice, because all it entails is converting from one data type to another. The most straightforward way to do this is by calling JSON.stringify(), which you have already mentioned.
I hope that helps.
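As a rough sketch of that newline-delimited JSON (JSONL) approach, something along these lines could work, assuming app is the Express app and bigQuery an initialized @google-cloud/bigquery client as in the question:
app.get('/endpoint', (req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/x-ndjson' });
  bigQuery.createQueryStream(query)
    .on('error', err => {
      console.error(err);
      res.end();
    })
    .on('data', row => {
      // One JSON document per line; the client splits on "\n" and parses each line
      res.write(JSON.stringify(row) + '\n');
    })
    .on('end', () => res.end());
});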
We ended up making an implementation that stitched together the reply from BigQuery into a big JSON array:
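// Note: in the snippet below, bigQuery is an initialized @google-cloud/bigquery client and stringify is presumably JSON.stringify (or a safe wrapper around it); both are defined outside this excerpt.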
exports.stream = (query, params, res) => {
// Light testing for descriptive errors in the parameters
for (let key in params) {
if (typeof params[key] === "number" && isNaN(params[key])) {
throw new TypeError(`The parameter "${key}" should be a number`);
}
}
return new Promise((resolve, reject) => {
let prev = false;
const onData = row => {
try {
// Only handle it when there's a row
if (!row) return;
// There was a previous row written before, so add a comma
if (prev) {
res.write(",");
}
res.write(stringify(row));
prev = true;
} catch (error) {
console.error("Cannot parse row:", error);
// Just ignore it, don't write this frame
}
};
const onEnd = () => {
res.write("]");
res.end();
resolve();
};
res.writeHead(200, { "Content-Type": "application/json" });
res.write("[");
bigQuery
.createQueryStream({ query, params })
.on("error", reject)
.on("data", onData)
.on("end", onEnd);
});
};
It will build a large JSON array by stitching together:
[ // <- First character sent
stringify(row1) // <- First row
, // <- add comma on second row iteration
stringify(row2) // <- Second row
...
stringify(rowN) // <- Last row
] // <- Send the "]" character to close the array
This has the following advantages:
The data is sent as soon as it is available, so the bandwidth needs are lower.
(Depending on the BigQuery implementation) lower memory needs on the server side, since not all of the data is held in memory at once, only small chunks.
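For completeness, because the response is one valid JSON array, a simple front-end consumer can just wait for the whole body; a minimal sketch with fetch (incremental parsing on the client would need a streaming JSON parser, not shown here):
async function loadRows() {
  const res = await fetch('/endpoint');
  const rows = await res.json(); // parses the full stitched array once the stream ends
  console.log(`received ${rows.length} rows`);
  return rows;
}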

Changing an array entry within a json object

I've pulled JSON from a file using Node.js's fs.createReadStream() and I'm now having difficulty writing data back into the file (I'm already parsing and then stringifying as appropriate).
The Discord bot I'm developing deletes text channels, then 'recreates' them (with the same title) to clear chat - it grabs the channel IDs dynamically and puts them in a file until the channels are deleted.
However, the file-writing procedure ends up in errors.
This was my first attempt:
let channels_json = fs.createReadStream()
//let channels_json = fs.readFileSync(`${__dirname}\\..\\json\\channels.json`);
let obj = (JSON.parse(channels_json)).channelsToClear;
let i = 0;
obj.forEach(id => {
i++;
if(id === originalId){
obj[i] = channela.id;
}
});
obj += (JSON.parse(channels_json)).infoChannel;
obj += "abc";
json = JSON.stringify(obj);
channels_json.write(json);
This was my second:
let id_to_replace = message.guild.channels.get(channels[channel]).id;
//let channels_json = fs.readFileSync(`${__dirname}\\..\\json\\channels.json`);
let obj;
let channels_json = fs.createReadStream(`${__dirname}\\..\\json\\channels.json`,function(err,data){
if (err) throw err;
obj = JSON.parse(data);
if (obj["channelsToClear"].indexOf(id_to_replace) > -1) {
obj["channelsToClear"][obj["channelsToClear"].indexOf(id_to_replace)] = channela.id;
//then replace the json file with new parsed one
channels_json.writeFile(`${__dirname}\\..\\json\\channels.json`, JSON.stringify(obj), function(){
console.log("Successfully replaced channels.json contents");
});
//channels_json.end();
}
});
The intended outcome was to update the 'channelsToClear' array within the JSON file with new channel IDs. The console/Node output varied, all of it to do with "create channels with an options object" or "Buffer.write" (all irrelevant), and the JSON file remained unchanged.
You're using Streams incorrectly. You can't write back out through a read stream.
For a simple script-type task, streaming is probably overkill. Streams are quite a bit more complicated and, while worth it for high-efficiency applications, not for something that looks like it is going to be relatively quick.
Use fs.readFileSync and fs.writeFileSync to read and write the data instead.
As for actually searching for and replacing the channel, I think either approach would work, but assuming there is only ever going to be one replacement, the second approach is probably better.
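A rough sketch of that synchronous approach, reusing the channels.json shape and the id_to_replace / channela variables from the question:
const fs = require('fs');
const channelsPath = `${__dirname}/../json/channels.json`;

const obj = JSON.parse(fs.readFileSync(channelsPath, 'utf8'));
const idx = obj.channelsToClear.indexOf(id_to_replace);
if (idx > -1) {
  // Swap the old channel ID for the newly created channel's ID
  obj.channelsToClear[idx] = channela.id;
  fs.writeFileSync(channelsPath, JSON.stringify(obj, null, 2));
  console.log("Successfully replaced channels.json contents");
}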

discord.js: randomise contents of array stored as json file

I'm working on a small Discord bot to try and teach myself a bit of js, but I've run into something I can't really work out or find an answer to.
I have a command that has the bot post a quote at random from an array stored in a separate json file, and this works as intended.
var config = require("./settings.json");
var quotes = config.quotes;
function randomQuote() {
return quotes[Math.floor(Math.random() * quotes.length)];
};
if(message.content.toLowerCase() === prefix + "quote") {
message.channel.send(randomQuote());
}
I'm now trying to do something similar, except rather than the array being a part of the json file, it is the json file. Essentially, the user can tell the bot to save a certain message, and it takes the message author & content and adds them as an entry to the array. It should then do the same as the quote function, randomise the entries in the array and then print one into the chat.
var mess;
let saveMess = JSON.parse(fs.readFileSync("./saveMess.json", "utf8"));
function randomMess() {
return saveMess[Math.floor(Math.random() * saveMess.length)];
};
if(message.content.toLowerCase().startsWith(prefix + "saveMess")) {
mess = message.author + " told me the following:\n" + message.content;
fs.readFile("./saveMess.json", function (err, data) {
var json = JSON.parse(data)
json.push(mess)
fs.writeFile("./saveMess.json", JSON.stringify(json), (err) => {
if (err) console.error(err)
});
})
}
if(message.content.toLowerCase() === prefix + "printMess") {
message.channel.send(randomMess());
}
From my testing, I know the messages are being correctly stored in an array, and asking the bot to print the entire file (i.e. message.channel.send(saveMess)) correctly displays all the saved entries; however, when attempting to use the same randomising function, I get the error "cannot send empty message". Clearly, there's something I'm missing that sets an array contained within a JSON file apart from an array that is the JSON file; does anyone have any ideas on how I can get this working as intended?
For clarification, here's how the two json files look, to demonstrate the difference I'm talking about:
//settings.json - the quote file
{ "token" : "BotTokenHere",
"prefix" : "|",
"quotes" : [
"Quote 1",
"Quote 2",
"Quote 3"
]
}
//savemess.json - the saved messages file
[
"<#user1> told me the following:\n|saveMess test",
"<#user2> told me the following:\n|saveMess test2",
"<#user3> told me the following:\n|saveMess test3"
]
Strictly speaking, a top-level array is valid JSON (see json.org), and JSON.parse will happily return it as an array, so the file format itself is not the problem. The "cannot send empty message" error means randomMess() returned undefined or an empty string at the moment it was called, for example if saveMess was empty when the file was read. Either way, you may find it easier to stick with the structure of the original settings.json and wrap the saved messages in an object property.
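If you do wrap the saved messages in an object, a minimal sketch of the layout and the matching read/append/random code could look like this (the "savedMessages" property name is just an example):
// saveMess.json would then contain: { "savedMessages": [] }
const fs = require("fs");

function loadSaved() {
  return JSON.parse(fs.readFileSync("./saveMess.json", "utf8")).savedMessages;
}

function appendSaved(entry) {
  const json = JSON.parse(fs.readFileSync("./saveMess.json", "utf8"));
  json.savedMessages.push(entry);
  fs.writeFileSync("./saveMess.json", JSON.stringify(json, null, 2));
}

function randomMess() {
  const saved = loadSaved(); // re-read so newly saved messages are included
  return saved[Math.floor(Math.random() * saved.length)];
}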

How to properly use ON DUPLICATE KEY UPDATE in Node.js MySQL

I'm using Node.js to push values to a MySQL table like:
for (var i = 0; i < data.length; i++) {
flattenedData.push([data[i].id, data[i].adult, data[i].backdrop_path, JSON.stringify(data[i].genre_ids), data[i].original_language, data[i].original_title, data[i].overview, data[i].popularity, data[i].poster_path, data[i].release_date, data[i].title, data[i].video, data[i].vote_average, data[i].vote_count]);
//console.log(flattenedData);
}
db.query("INSERT INTO movies (id, adult, backdrop_path, genre_ids, original_language, original_title, overview, popularity, poster_path, release_date, title, video, vote_average, vote_count ) values ?", [flattenedData], function (err, result) {
if (err) {
throw err;
}
else {
console.log('data inserted' + result);
}
});
I want to add ON DUPLICATE KEY UPDATE to the query, but I keep getting syntax errors; can anyone show me the proper way?
I'm going to shorten this to three columns, and assume id is the only unique column.
db.query("INSERT INTO movies (id, adult, backdrop_path) VALUES (?, ?, ?)
ON DUPLICATE KEY UPDATE adult=VALUES(adult), backdrop_path=VALUES(backdrop_path)",
flattenedData, function (err, result) {
...
This means that if the insert results in a duplicate on the primary/unique column (id), then the other columns are copied from the values you tried to insert in the VALUES clause into their respective columns, overwriting whatever values the existing row had.
There's no shortcut for that; you have to spell out all such column assignments.
The parameters argument to query() should be an array: with the single-row form above you'd pass one row's values as a flat array, while with your original bulk form (VALUES ? and an array of rows) the extra square brackets around flattenedData are actually needed so the single ? expands to all of the rows.
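For the bulk form, keeping the same three columns, the combined query could look roughly like this (a sketch that assumes flattenedData is an array of [id, adult, backdrop_path] rows built as in the question):
db.query(
  "INSERT INTO movies (id, adult, backdrop_path) VALUES ? " +
  "ON DUPLICATE KEY UPDATE adult = VALUES(adult), backdrop_path = VALUES(backdrop_path)",
  [flattenedData], // the extra brackets let the single ? expand to all rows
  function (err, result) {
    if (err) throw err;
    console.log('rows inserted/updated: ' + result.affectedRows);
  }
);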
I was confused by how to get this to work in a react/redux app and eventually came to the "correct" method.
My implementation required me to update one field value per record for an arbitrary number of records in a table with 21 fields.
If you are passing data as an array, structure it like [['dataString',666.66],['dataString2',666666.66],['dataString3',666666666.66]] and then make sure you pass this whole thing as an array to the query function. See itemQtyData in my code sample below.
Another thing that tripped me up was the use of brackets around the values replacement string. I didn't need them, even though the examples I looked at showed implementations that did. I also only used a single ? for all the values. So instead of using (?,?) to represent the values in the query, which didn't work, I used ?.
I found it unnecessary to supply all the field names and the corresponding values for the table. MySQL will warn you if fields don't have a default value, but I haven't found this to be an issue in this case.
You can console.log the formatted SQL in the SqlString.format function in the file node_modules/sqlstring/lib/SqlString.js. I found this useful to see exactly why the query wasn't working and to have something that I could plug into MySQL Workbench to mess around with.
Edit: You can also do console.log(connection.query(yourQuery, [someData], callback)) and you get the SQL and lots more when the function executes. That might make more sense than adding console.log calls to the module code.
Hope this helps!
let itemQtyData = order.map(item => {
return [
`${item.id}`,
`${Number(item.products_quantity - Number(item.quantity_to_add))}`
];
});
const updateQtyQuery =`INSERT INTO products (products_id, products_quantity) VALUES ? ON DUPLICATE KEY UPDATE products_quantity=VALUES(products_quantity)`;
connectionOSC.query(
updateQtyQuery,
[itemQtyData],
(error, results, fields) => {
if (error) throw error;
const response = {
statusCode: 200,
headers: {
"Access-Control-Allow-Origin": "*"
},
body: saleId,
response: results,
isBase64Encoded: false
};
context.succeed(response);
});
