Problem generating buffer for nodejs csv file creation - javascript

I am able to generate a CSV file with the data below. I am using the Node.js library "csv-writer", which generates the file quite well. My problem is that I need a way to get back a buffer instead of the file itself, because I need to upload the file to a remote server via SFTP.
How do I go about modifying this piece of code to get a buffer response? Thanks.
...
const csvWriter = createCsvWriter({
  path: 'AuthHistoryReport.csv',
  header: [
    {id: 'NAME', title: 'msg_datetime_date'},
    {id: 'AGE', title: 'msg_datetime'}
  ]
});

var rows = [
  { NAME: "Paul", AGE: 21 },
  { NAME: "Charles", AGE: 28 },
  { NAME: "Teresa", AGE: 27 },
];

csvWriter
  .writeRecords(rows)
  .then(() => {
    console.log('The CSV file was written successfully');
  });
...

Read your own file back with fs.readFile. If you don't specify an encoding, the data passed to the callback is a Buffer, not a string:
fs.readFile('AuthHistoryReport.csv', 'utf8', (err, data) => ... ); // data is a string
fs.readFile('AuthHistoryReport.csv', (err, data) => ... ); // data is a Buffer
Node.js file system docs: fs.readFile

You need to read your created file into a buffer using the built-in fs module:
const fs = require('fs');
const buffer = fs.readFileSync('AuthHistoryReport.csv');
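Putting it together for the SFTP part: below is a minimal sketch that writes the CSV, reads it back as a Buffer, and uploads it. It assumes the ssh2-sftp-client package and placeholder connection details, so adjust it to whatever SFTP client and credentials you actually use.

const fs = require('fs');
const Client = require('ssh2-sftp-client'); // assumed SFTP client package

async function uploadReport() {
  // csv-writer has already written AuthHistoryReport.csv at this point
  const buffer = fs.readFileSync('AuthHistoryReport.csv'); // no encoding, so this is a Buffer

  const sftp = new Client();
  await sftp.connect({ host: 'example.com', username: 'user', password: 'secret' }); // placeholder credentials
  await sftp.put(buffer, '/remote/path/AuthHistoryReport.csv'); // put() accepts a Buffer
  await sftp.end();
}

csvWriter
  .writeRecords(rows)
  .then(uploadReport)
  .then(() => console.log('Report uploaded'))
  .catch(console.error);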

Related

How to format JSON file data by filtering only info needed?

I recently exported all my user data from Firebase and now I want to reformat the JSON file to keep only the relevant fields I need for my data model.
The file I got from Firebase is currently structured like this:
{
  "Users": {
    "00uniqueuserid3": {
      "1UserName": "Pusername",
      "2Password": "password",
      "3Email": "email#gmail.com",
      "4City": "dubai"
    }
  }
}
The issue is that the JSON file has over 5,000 users and I can't possibly reformat them manually. Is there any JavaScript script or tool I can use to reformat all the data in the file? I would like each record formatted like this:
{"id": uniqueid, "name": name, "email": email, "city": city}
You can create a new Node.js project (npm init -y) and install mongodb. Then read the JSON file and reshape it with plain JavaScript before inserting it:
const { MongoClient } = require("mongodb");
const exportedData = require("./myFirebaseExport.json");

async function run() {
  const client = await MongoClient.connect("mongodb://localhost:27017"); // your connection string
  const db = client.db("mydb");

  const data = Object.values(exportedData.Users);
  const parsedData = data.map((d) => {
    // modify the mapping as required; replace with your field names
    return {
      name: d["1UserName"],
      email: d["3Email"],
    };
  });

  await db.collection("users").insertMany(parsedData);
  await client.close();
}

run();
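If you only need the reformatted JSON file rather than a database insert, here is a minimal sketch (field names assumed from the sample above) that also keeps the unique id, which Object.values would otherwise drop:

const fs = require("fs");
const exportedData = require("./myFirebaseExport.json");

// Object.entries keeps the unique user id (the object key) alongside each record
const users = Object.entries(exportedData.Users).map(([id, u]) => ({
  id,
  name: u["1UserName"],
  email: u["3Email"],
  city: u["4City"],
}));

fs.writeFileSync("users-formatted.json", JSON.stringify(users, null, 2));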

How to split a big ODS file without causing memory leaks?

I'm working with a MySQL database and have two types of files to import:
The first one is a CSV file that I can load with
LOAD DATA INFILE 'path-to-csv_file'
The second type of file is ODS (OpenDocument Spreadsheet), which MySQL doesn't support for LOAD DATA INFILE.
My solution was to convert ODS to CSV using the xlsx package, which has an XLSX.readFile command, and then use csv-writer. But when working with large ODS files, my program crashed because it was using too much memory. I searched for solutions and found streams, but the xlsx package doesn't have read streams. After that, I tried to use fs because it has fs.createReadStream, but that module doesn't parse ODS files. As an example, compare what fs.readFile and XLSX.readFile each return:
fs.readFile:
PK♥♦m�IQ�l9�.mimetypeapplication/vnd.oasis.opendocument.spreadsheetPK♥♦m�IQM◄ˋ%�%↑Thumbnails/thumbnail.png�PNG
→
IHDR�♥A�-=♥PLTE►►☼§¶►∟↓*.!/<22/8768:G6AN>AM>BP>MaC:;A?GOE?EFJGJRJQ[TJEQOQ\QJYWYKVeX\dX]p\bkXetaNJgTEe[Wp^Wa_aja\ue\hfgektjqztkeqnpyqlwwvco�jw�j}�v{�q⌂�~�⌂{��t��t��u��z��y��|��{��{��}���o]�od�vj�|v�⌂n�⌂r��{��n��x��~��~������
XLSX.readFile:
J323: { t: 's', v: '79770000', w: '79770000' },
K323: { t: 's', v: '20200115', w: '20200115' },
Working with the XLSX module is easy because I can pick out only the data that I want from the ODS file. Using the JavaScript code below, I extract three columns and put them in an array:
const xlsx = require('xlsx');

let posts = [];
let post = {};

for (let i = 0; i < 1; i++) {
  let filePath = `C:\\Users\\me\\Downloads\\file_users.ODS`;
  let workbook = xlsx.readFile(filePath);
  let worksheet = workbook.Sheets[workbook.SheetNames[0]];
  for (let cell in worksheet) {
    const cellAsString = cell.toString();
    cellAsString[0] === 'A' ? post['ID'] = worksheet[cell].v :
    cellAsString[0] === 'C' ? post['USER NAME'] = worksheet[cell].v : null;
    if (cellAsString[0] === 'J') {
      post['USER EMAIL'] = worksheet[cell].v;
      Object.keys(post).length == 3 ? posts.push(post) : null;
      post = {};
    }
  }
}
...returns:
{
  ID: '1',
  'USER NAME': 'John Paul',
  'USER EMAIL': 'Paul.John12#hotmail.com'
},
{
  ID: '2',
  'USER NAME': 'Julia',
  'USER EMAIL': 'lejulie31312#outlook.com'
},
{
  ID: '3',
  'USER NAME': 'Greg Norton',
  'USER EMAIL': 'thenorton31031#hotmail.com'
},
... 44660 more items
So my problem is working with large ODS files. The output above is from running this script on a 78 MB file, and it already uses about 1,600 MB of RAM. When I try it with 900 MB files, memory reaches the limit (4,000 MB+) and I get the error 'ERR_STRING_TOO_LONG'.
I tried to use the readline package to parse the data, but it needs a stream.
If I have to slice the ODS files into small pieces, how can I read the file without crashing VS Code?
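One way to sidestep parsing the whole ODS in memory (a sketch, not from the original thread, and assuming LibreOffice is installed on the machine) is to convert the ODS to CSV with LibreOffice's headless mode and then stream that CSV line by line with readline, so memory use stays flat regardless of file size:

const { execFileSync } = require('child_process');
const fs = require('fs');
const readline = require('readline');

// Convert the ODS to CSV on disk first (requires a local LibreOffice install)
execFileSync('libreoffice', ['--headless', '--convert-to', 'csv', '--outdir', './out', 'file_users.ods']);

// Stream the CSV one line at a time instead of loading it all into memory
const rl = readline.createInterface({ input: fs.createReadStream('./out/file_users.csv') });

rl.on('line', (line) => {
  // Naive split; use a proper CSV parser if fields can contain commas.
  // Column positions (A, C, J) are assumed from the extraction code above.
  const cols = line.split(',');
  const row = { ID: cols[0], 'USER NAME': cols[2], 'USER EMAIL': cols[9] };
  // insert `row` into MySQL here, ideally in batches
});

rl.on('close', () => console.log('done'));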

How to upload an image to AWS S3 using GraphQL?

I'm uploading a base64 string but GraphQL gets hung. If I slice the string to fewer than 50,000 characters it works. Above 50,000 characters, GraphQL never makes it to the resolve function and does not return an error. With smaller strings it works just fine.
const file = e.target.files[0];
const reader = new FileReader();
reader.readAsDataURL(file);
reader.onloadend = () => {
  const imageArray = reader.result;
  this.context.fetch('/graphql', {
    body: JSON.stringify({
      query: `mutation s3Upload($img: String!) {
        s3Upload(file: $img) {
          logo,
        }
      }`,
      variables: {
        img: imageArray,
      },
    }),
  }).then(response => response.json())
    .then(({ data }) => {
      console.log(data);
    });
}
const s3Upload = {
  type: S3Type,
  args: {
    file: { type: new NonNull(StringType) },
  },
  resolve: (root, args, { user }) => upload(root, args, user),
};

const S3Type = new ObjectType({
  name: 'S3',
  fields: {
    logo: { type: StringType },
  },
});
The correct approach here is to perform an actual S3 upload via a complex type using AWS AppSync. What you illustrate here looks more like you are attempting to save a base64-encoded image as a string to a field in what I can only assume to be a DynamoDB table entry. For this to work, though, you need to modify your mutation so that the file field is not a String!, but an S3ObjectInput.
There are a few moving parts under the hood that you need to have in place before this "just works" (TM). First of all, make sure you have an appropriate input and type for an S3 object defined in your GraphQL schema:
enum Visibility {
  public
  private
}

input S3ObjectInput {
  bucket: String!
  region: String!
  localUri: String
  visibility: Visibility
  key: String
  mimeType: String
}

type S3Object {
  bucket: String!
  region: String!
  key: String!
}
The S3ObjectInput type, of course, is for use when uploading a new file - either by way of creating or updating a model within which said S3 object metadata is embedded. It can be handled in the request resolver of a mutation via the following:
{
  "version": "2017-02-28",
  "operation": "PutItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.input.id),
  },
  #set( $attribs = $util.dynamodb.toMapValues($ctx.args.input) )
  #set( $file = $ctx.args.input.file )
  #set( $attribs.file = $util.dynamodb.toS3Object($file.key, $file.bucket, $file.region, $file.version) )
  "attributeValues": $util.toJson($attribs)
}
This is making the assumption that the S3 file object is a child field of a model attached to a DynamoDB datasource. Note that the call to $utils.dynamodb.toS3Object() sets up the complex S3 object file, which is a field of the model with a type of S3ObjectInput. Setting up the request resolver in this way handles the upload of a file to S3 (when all the credentials are set up correctly - we'll touch on that in a moment), but it doesn't address how to get the S3Object back. This is where a field level resolver attached to a local datasource becomes necessary. In essence, you need to create a local datasource in AppSync and connect it to the model's file field in the schema with the following request and response resolvers:
## Request Resolver ##
{
  "version": "2017-02-28",
  "payload": {}
}
## Response Resolver ##
$util.toJson($util.dynamodb.fromS3ObjectJson($context.source.file))
This resolver simply tells AppSync that we want to take the JSON string that is stored in DynamoDB for the file field of the model and parse it into an S3Object - this way, when you do a query of the model, instead of returning the string stored in the file field, you get an object containing the bucket, region, and key properties that you can use to build a URL to access the S3 Object (either directly via S3 or using a CDN - that's really dependent on your configuration).
Do make sure you have credentials set up for complex objects, however (told you I'd get back to this). I'll use a React example to illustrate this - when defining your AppSync parameters (endpoint, auth, etc.), there is an additional property called complexObjectsCredentials that needs to be defined to tell the client which AWS credentials to use for S3 uploads, e.g.:
const client = new AWSAppSyncClient({
  url: AppSync.graphqlEndpoint,
  region: AppSync.region,
  auth: {
    type: AUTH_TYPE.AWS_IAM,
    credentials: () => Auth.currentCredentials()
  },
  complexObjectsCredentials: () => Auth.currentCredentials(),
});
Assuming all of these things are in place, S3 uploads and downloads via AppSync should work.
AWS AppSync (https://aws.amazon.com/appsync/) provides this through functionality known as "Complex Objects", where you can have types for the S3 object and its input:
type S3Object {
  bucket: String!
  key: String!
  region: String!
}

input S3ObjectInput {
  bucket: String!
  key: String!
  region: String!
  localUri: String
  mimeType: String
}
You could then do something like this to define this object as part of another type:
type UserProfile {
  id: ID!
  name: String
  file: S3Object
}
And then specify a mutation to add it:
type Mutation {
  addUser(id: ID! name: String file: S3ObjectInput): UserProfile!
}
Your client operations would need to specify the appropriate bucket, key (with file extension), region, etc.
More here: https://docs.aws.amazon.com/appsync/latest/devguide/building-a-client-app-react.html#complex-objects
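As a rough sketch of what such a client operation could look like with the AWSAppSyncClient configured earlier (the bucket, region, and key values here are placeholders, not from the answer above):

const gql = require('graphql-tag');

const addUserMutation = gql`
  mutation AddUser($id: ID!, $name: String, $file: S3ObjectInput) {
    addUser(id: $id, name: $name, file: $file) {
      id
      name
      file { bucket key region }
    }
  }
`;

// `file` is a File object from an <input type="file"> change event
async function saveUserWithAvatar(client, id, name, file) {
  return client.mutate({
    mutation: addUserMutation,
    variables: {
      id,
      name,
      file: {
        bucket: 'my-uploads-bucket',                        // placeholder bucket
        region: 'us-east-1',                                // placeholder region
        key: `avatars/${id}.${file.name.split('.').pop()}`, // key should include the file extension
        localUri: file,                                     // the client uploads this local file to S3
        mimeType: file.type,
      },
    },
  });
}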

How to create a CSV file on the server in nodejs

I have the following code which works well, but I'd like it to save and write a CSV file in the folder I'm running from. I'm running the JS in Node. Big thanks!
var jsonexport = require('jsonexport');

var json = [ { uniq_id: [ 'test' ],
               product_url: [ 'http://www.here.com' ],
               manufacturer: [ 'Disney' ] } ];

jsonexport(json, function(err, csv) {
  if (err) return console.log(err);
  console.log(csv);
});
Note: jsonexport is a JSON-to-CSV converter.
Update: you can use something like this.
const fs = require('fs');

jsonexport(json, function(err, csv) {
  if (err) return console.log(err);
  fs.writeFile("/tmp/test.csv", csv, function(err) {
    if (err) return console.log(err);
    console.log("/tmp/test.csv saved");
  });
});
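Since the question asks for the file to land in the folder the script runs from, a small variation of the same callback (just a different destination path) would be:

const fs = require('fs');
const path = require('path');

jsonexport(json, function(err, csv) {
  if (err) return console.log(err);
  // __dirname is the folder this script lives in; use process.cwd() for the folder it was launched from
  fs.writeFile(path.join(__dirname, 'test.csv'), csv, function(err) {
    if (err) return console.log(err);
    console.log('test.csv written next to this script');
  });
});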

Writing text to File in Node.JS

I'm new to Node.js. I have a JSON object which looks like the following:
var results = [
  { key: 'Name 1', value: '1' },
  { key: 'Name 2', value: '25%' },
  { key: 'Name 3', value: 'some string' },
  ...
];
The above object may or may not have different values. Still, I need to get them into a format that looks exactly like the following:
{"Name 1":"1","Name 2":"25%","Name 3":"some string"}
In other words, I'm looping through each key/value pair in results and adding it to a single line. From my understanding, this single-line approach (with double quotes) is called "JSON event" syntax. Regardless, I have to print my JSON object out that way into a text file, and if the text file already exists, I need to append to it.
How do I append to a text file in Node.js?
Thank you!
You can use JSON.stringify to convert a JavaScript object to JSON and fs.appendFile to append the JSON string to a file.
// write all the data to the file
var fs = require('fs');
var str = JSON.stringify(results);

fs.appendFile('file.json', str, function(err) {
  if (err) {
    console.log('there was an error: ', err);
    return;
  }
  console.log('data was appended to file');
});
If you want to add just one item at a time, just do
// Just pick the first element
var fs = require('fs');
var str = JSON.stringify(results[0]);
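Note that JSON.stringify(results) produces an array of {key, value} objects, not the {"Name 1":"1", ...} shape asked for. A small sketch (using the same results variable) that collapses the pairs into one object before appending:

var fs = require('fs');

// Collapse [{key, value}, ...] into a single object: {"Name 1":"1","Name 2":"25%",...}
var line = JSON.stringify(results.reduce(function(acc, item) {
  acc[item.key] = item.value;
  return acc;
}, {}));

// Append the single line (plus a newline) to the file; the file is created if it doesn't exist
fs.appendFile('file.json', line + '\n', function(err) {
  if (err) return console.log('there was an error: ', err);
  console.log('data was appended to file');
});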
