Convert response to JSON/String and Write to a File - javascript

I'm new to JavaScript and Node, so after two days of trying to do this I'm writing this question.
I'm using a library from GitHub (https://github.com/gigobyte/HLTV) and trying to create files from the responses I get from this API, but so far all I've managed is to print the results to the console.
import HLTV from './index'
const fs = require('fs');

function sleep(ms: number): Promise<void> {
  return new Promise(resolve => setTimeout(resolve, ms))
}

sleep(1000)
//HLTV.getPlayerByName({ name: "chrisJ" }).then(res => this.Teste = res );
var Text = HLTV.getMatches().then(data => { console.log(JSON.stringify(data)); })
//var Texto = HLTV.getTeamRanking({ country: 'Brazil' });
//then(data => { console.log(JSON.stringify(data)); })
sleep(3000)
fs.writeFileSync('MyFile.json', Text)
console.log('Scoreboard update!')
Is there any way to convert the response directly and write the string to a file?

You have to do it in the then callback:
HLTV.getMatches().then(data => {
  var txt = JSON.stringify(data);
  fs.writeFile('MyFile.json', txt, function (err) {
    if (err) return console.log(err);
    console.log('Data Saved');
  });
});
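If you prefer async/await, the same thing can be done with the fs promises API (Node 10+); a minimal sketch, assuming HLTV is imported as in the question and untested against the library:

const { writeFile } = require('fs').promises;

async function saveMatches() {
  // getMatches() already returns a promise, so await it instead of sleeping
  const data = await HLTV.getMatches();
  await writeFile('MyFile.json', JSON.stringify(data, null, 2));
  console.log('Scoreboard update!');
}

saveMatches().catch(console.error);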

Related

AWS S3 CreateReadStream in a loop only reads and writes 1 file

I am trying to retrieve multiple files from S3 using a read stream and append them into a single file locally.
Below, the output variable is the single write stream I want to append to with the downloaded S3 file data.
I am looping through days, where the nextDay variable is used for the S3 key. fileservice.s3Handler.getS3Obj returns an S3 object that lets me create a read stream for a single file and append it to the output file.
However, no other files are being read, and nothing shows in the console from the on('data') handler either.
I tried wrapping the read stream in a promise to wait until the read was finished, but I run into the same error. More recently I keep getting this error: "ERR_STREAM_WRITE_AFTER_END"
Not sure what is going wrong here.
async fetchCSV(req, res) {
  const output = fs.createWriteStream(outputPathWithFile, { 'flags': 'a' });
  let nextDay = startDate;
  while (nextDay !== endDate) {
    const s3path = path.join(`${req.params.stationId}`, `${nextDay}.csv`);
    const file = await this.fileService.s3Handler.getS3Obj(s3path);
    await this.completePipe(file, output);
    nextDay = await getTomorrow(nextDay);
  }
}

completePipe(file, output) {
  return new Promise((resolve) => {
    file.createReadStream().on('finish', () => {
      resolve();
    }).on('error', (err) => {
      resolve();
    }).on('data', (data) => {
      console.log(data.toString());
    }).pipe(output);
  })
}
}

getS3Obj(file) {
  return new Promise(async (resolve) => {
    const getParams = {
      Bucket: this.bucket,
      Key: file
    };
    resolve(this.s3.getObject(getParams, (err) => {
      if (err) {
        console.log('Error in getS3 object')
      }
    }));
  })
}
Please help me?
Solved it.
I did a couple of things:
Passed an option to the pipe method:
stream.pipe(output, { end: false })
Instead of creating a separate function for the promise, I just put this code in instead:
await new Promise((resolve) => {
  stream.once('finish', () => {
    resolve();
  });
});
But the end: false option was what made it work; the promise was just a tidy-up.
Yay.
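Putting the two changes together, the loop might look roughly like this (a sketch based on the code above, not tested against S3; note that a readable stream emits 'end' when it has been fully consumed):

async fetchCSV(req, res) {
  // keep one output stream open across all iterations
  const output = fs.createWriteStream(outputPathWithFile, { flags: 'a' });
  let nextDay = startDate;
  while (nextDay !== endDate) {
    const s3path = path.join(`${req.params.stationId}`, `${nextDay}.csv`);
    const file = await this.fileService.s3Handler.getS3Obj(s3path);
    const stream = file.createReadStream();
    // end: false keeps `output` writable for the next file
    stream.pipe(output, { end: false });
    // wait for this file to finish before starting the next one
    await new Promise((resolve, reject) => {
      stream.once('end', resolve);
      stream.once('error', reject);
    });
    nextDay = await getTomorrow(nextDay);
  }
  // close the output stream once every day has been appended
  output.end();
}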

Problem with fs.writeFile in reduce with fetch

I need some help with this helper I'm writing. For some reason, when I use a reduce inside an async readFile callback, writing the results to a file won't advance to the next item of the array. However, if I use console.log instead, it works just fine.
const neatCsv = require('neat-csv');
const fetch = require('node-fetch');
const fs = require('fs');

fs.readFile('./codes.csv', async (err, data) => {
  if (err) { throw err; }
  let baseUrl = 'https://hostname/orders?from=2019-10-21T00:00:00.001Z&to=2019-12-31T23:59:59.000Z&promo=';
  const starterPromise = Promise.resolve(null);
  const promos = await neatCsv(data);
  const logger = (item, result) => console.log(item, result);

  function write (item, result) {
    return new Promise((resolve, reject) => {
      fs.writeFile(`./output/${item.PROMO}.json`, JSON.stringify(result), (err) => {
        if (err) { throw err; }
        console.log(`Wrote file ${item.PROMO}`);
      });
    })
  }

  function asyncFetch(item) {
    console.log(`runTask <---------${item.PROMO}---------`);
    return fetch(`${baseUrl}${item.PROMO}`, { headers: { 'x-apikey': 'xyz' }})
      .then(res => (res.json())
      .then(json => json))
  }

  await promos.reduce(
    (p, item) => p.then(() => asyncFetch(item).then(result => write(item, result))),
    starterPromise
  )
});
The CSV file just has a basic layout like so:
PROMO
12345
56789
98765
...
The goal is to iterate over these, make a REST call to get the JSON results, and write those to a file named after the current promo, then move to the next one, making a new call and saving that one into a different file with its respective code.
In the reduce, if I call logger instead of write, it works fine. Calling write, it just makes the same call over and over, overwriting the same file, and forcing me to kill the process. Please help, I'm losing my mind here...
You might have a better time using async functions everywhere, the fs promises API, and a simple while loop to consume the CSV items. Dry-coded, naturally, since I don't have your CSV or API.
(Your original problem is probably due to the fact that you don't resolve/reject in the write function, but the reduce hell isn't needed either...)
const neatCsv = require("neat-csv");
const fetch = require("node-fetch");
const fsp = require("fs").promises;

const logger = (item, result) => console.log(item, result);
const baseUrl = "https://hostname/orders?from=2019-10-21T00:00:00.001Z&to=2019-12-31T23:59:59.000Z&promo=";

async function asyncFetch(item) {
  console.log(`runTask <---------${item.PROMO}---------`);
  const res = await fetch(`${baseUrl}${item.PROMO}`, { headers: { "x-apikey": "xyz" } });
  const json = await res.json();
  return json;
}

async function write(item, result) {
  await fsp.writeFile(`./output/${item.PROMO}.json`, JSON.stringify(result));
  console.log(`Wrote file ${item.PROMO}`);
}

async function process() {
  const data = await fsp.readFile("./codes.csv");
  const promos = await neatCsv(data);
  while (promos.length) {
    const item = promos.shift();
    const result = await asyncFetch(item);
    await write(item, result);
  }
}

process().then(() => {
  console.log("done!");
});
A version that uses mock data and the JSON Placeholder service works just fine:
const fetch = require("node-fetch");
const fsp = require("fs").promises;

const baseUrl = "https://jsonplaceholder.typicode.com/comments/";

async function asyncFetch(item) {
  console.log(`runTask <---------${item.PROMO}---------`);
  const res = await fetch(`${baseUrl}${item.PROMO}`);
  return await res.json();
}

async function write(item, result) {
  const data = JSON.stringify(result);
  await fsp.writeFile(`./output/${item.PROMO}.json`, data);
  console.log(`Wrote file ${item.PROMO}: ${data}`);
}

async function getItemList() {
  return [
    {PROMO: '193'},
    {PROMO: '197'},
    {PROMO: '256'},
  ];
}

async function process() {
  const promos = await getItemList();
  while (promos.length) {
    const item = promos.shift();
    const result = await asyncFetch(item);
    await write(item, result);
  }
}

process().then(() => {
  console.log("done!");
});
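For completeness, the original reduce-based version also works once write actually resolves and rejects; a minimal sketch of just that change, based on the code in the question:

function write(item, result) {
  return new Promise((resolve, reject) => {
    fs.writeFile(`./output/${item.PROMO}.json`, JSON.stringify(result), (err) => {
      if (err) { return reject(err); }
      console.log(`Wrote file ${item.PROMO}`);
      resolve(); // without this, the chained promise never advances to the next item
    });
  });
}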

Read and write to csv file with Node.js fast-csv library

I may be lacking some in-depth understanding of streams in general, but I would like to know how efficiently what I need can be done.
I want to read a CSV file, make a query to a database (or API) for each row and attach the result to that row, and then write the row with the attached data to a new CSV file. I am using the fast-csv Node library for this.
Here is my implementation:
const fs = require("fs");
const csv = require("fast-csv");

const delay = t => new Promise(resolve => setTimeout(resolve, t));

const asyncFunction = async (row, csvStream) => {
  // Imitate some stuff with database
  await delay(1200);
  row.data = "data";
  csvStream.write(row);
};

const array = [];
const csvStream = csv.format({ headers: true });
const writeStream = fs.createWriteStream("output.csv");

csvStream.pipe(writeStream).on("finish", () => {
  console.log("End of writing");
});

fs.createReadStream("input.csv")
  .pipe(csv.parse({ headers: true }))
  .transform(async function(row, next) {
    array.push(asyncFunction(row, csvStream));
    next();
  })
  .on("finish", function() {
    console.log("finished reading file");
    // Wait for all database requests and writings to be finished to close write stream
    Promise.all(array).then(() => {
      csvStream.end();
      console.log("finished writing file");
    });
  });
In particular, I would like to know whether there are ways to optimize what I am doing here, because I feel I am missing something important about how this library can be used for this type of case.
Regards,
Rokas
I was able to find a solution in the fast-csv issues section. A good person, doug-martin, provided this gist on how you can do this kind of operation efficiently via a Transform stream:
const path = require('path');
const fs = require('fs');
const { Transform } = require('stream');
const csv = require('fast-csv');

class PersistStream extends Transform {
  constructor(args) {
    super({ objectMode: true, ...(args || {}) });
    this.batchSize = 100;
    this.batch = [];
    if (args && args.batchSize) {
      this.batchSize = args.batchSize;
    }
  }

  _transform(record, encoding, callback) {
    this.batch.push(record);
    if (this.shouldSaveBatch) {
      // we have hit our batch size, so process the records as a batch
      this.processRecords()
        // we successfully processed the records, so callback
        .then(() => callback())
        // an error occurred!
        .catch(err => callback(err));
      return;
    }
    // we shouldn't persist yet, so just call callback
    callback();
  }

  _flush(callback) {
    if (this.batch.length) {
      // handle any leftover records that were not persisted because the last batch was too small
      this.processRecords()
        // we successfully processed the records, so callback
        .then(() => callback())
        // an error occurred!
        .catch(err => callback(err));
      return;
    }
    // no records to persist, so just call callback
    callback();
  }

  pushRecords(records) {
    // emit each record for downstream processing
    records.forEach(r => this.push(r));
  }

  get shouldSaveBatch() {
    // this could be any check; for this example it is the record count
    return this.batch.length >= this.batchSize;
  }

  async processRecords() {
    // save the records
    const records = await this.saveBatch();
    // be sure to emit them
    this.pushRecords(records);
    return records;
  }

  async saveBatch() {
    const records = this.batch;
    this.batch = [];
    console.log(`Saving batch [noOfRecords=${records.length}]`);
    // This is where you should save/update/delete the records
    return new Promise(res => {
      setTimeout(() => res(records), 100);
    });
  }
}

const processCsv = ({ file, batchSize }) =>
  new Promise((res, rej) => {
    let recordCount = 0;
    fs.createReadStream(file)
      // catch file read errors
      .on('error', err => rej(err))
      .pipe(csv.parse({ headers: true }))
      // catch any parsing errors
      .on('error', err => rej(err))
      // pipe into our processing stream
      .pipe(new PersistStream({ batchSize }))
      .on('error', err => rej(err))
      .on('data', () => {
        recordCount += 1;
      })
      .on('end', () => res({ event: 'end', recordCount }));
  });

const file = path.resolve(__dirname, `batch_write.csv`);
// process the CSV in batches of 5 records
processCsv({ file, batchSize: 5 })
  .then(({ event, recordCount }) => {
    console.log(`Done Processing [event=${event}] [recordCount=${recordCount}]`);
  })
  .catch(e => {
    console.error(e.stack);
  });
https://gist.github.com/doug-martin/b434a04f164c81da82165f4adcb144ec
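To connect the gist back to the original goal (attach data to each row, then write a new CSV), one option is to do the lookups inside saveBatch and feed the records the stream pushes into fast-csv's formatter. A rough sketch, where lookupData is a hypothetical stand-in for your database or API call and PersistStream is the class from the gist above:

const csvOut = csv.format({ headers: true });
csvOut.pipe(fs.createWriteStream('output.csv'));

// hypothetical enrichment; replace with your real database/API call
const lookupData = async row => ({ ...row, data: 'data' });

class EnrichStream extends PersistStream {
  async saveBatch() {
    const records = this.batch;
    this.batch = [];
    // enrich the whole batch in parallel before it is pushed downstream
    return Promise.all(records.map(lookupData));
  }
}

fs.createReadStream('input.csv')
  .pipe(csv.parse({ headers: true }))
  .pipe(new EnrichStream({ batchSize: 5 }))
  .on('data', row => csvOut.write(row))
  .on('end', () => csvOut.end())
  .on('error', err => console.error(err));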

Firebase Google Cloud Function: createReadStream results in empty file

I am trying to process a video file (stored in Google Firebase Storage) through a Google Cloud Function. I have working code that downloads the entire video file into the Node.js Cloud Function: await bucket.file(filePath).download({ destination: tempFile }).
But the goal is only to read the framerate, so the headers of the video file should suffice. However, createReadStream gives me an empty tempFile. Any advice is much appreciated!
exports.checkFramerate = functions.region('europe-west1').storage.object().onFinalize(async (object, context) => {
  const bucket = admin.storage().bucket(object.bucket); // Bucket class
  const filePath = object.name; // videos/xbEXdMNFb1Blbd9r2E8m/comp_test.mp4
  const fileName = filePath.split('/').pop(); // comp_test.mp4
  const bucketDir = path.dirname(filePath); // videos/xbEXdMNFb1Blbd9r2E8m

  const tempFile = path.join(os.tmpdir(), 'temp.mp4')
  fs.closeSync(fs.openSync(tempFile, 'w'))
  console.log("tempFile size1", fs.statSync(tempFile).size)

  // await bucket.file(filePath).download({ destination: tempFile }); // this works: tempFile size2 = 3180152
  await bucket.file(filePath).createReadStream({ // this does not work: tempFile size2 = 0
      start: 10000,
      end: 20000
    })
    .on('error', function(err) { console.log(err) })
    .pipe(fs.createWriteStream(tempFile));

  console.log("tempFile size2", fs.statSync(tempFile).size)

  mi(tempFile).then(data => {
    console.log("frameRate", data[0].general.frame_rate[0])
    return data[0].general.frame_rate[0];
  }).catch(e => { console.error(e) });
});
I even tried implementing the example from https://googleapis.dev/nodejs/storage/latest/File.html#createReadStream, but to no avail. remoteFile.download works beautifully, but remoteFile.createReadStream gives me empty files...
const remoteFile = bucket.file(filePath);
const localFilename = tempFile;

remoteFile.createReadStream()
  .on('error', function(err) {})
  .on('response', function(response) {})
  .on('end', function() {})
  .pipe(fs.createWriteStream(localFilename));

fs.stat(localFilename, (err, stats) => {
  if (err) { console.log(err) }
  return console.log("stats async", stats.size)
})
As mentioned, a promise should be used. Here is an example of reading a JSON file:
let buf = '';

const loadData = async () => {
  return await new Promise((resolve, reject) => {
    storage.bucket('bucket-name').file('test-config.json')
      .createReadStream()
      .on('error', reject)
      .on('data', function(d) {
        buf += d;
      }).on('end', function() {
        resolve(buf)
      });
  })
}

const data = await loadData()
Your problem is that the stream API isn't promisified. So the await does nothing, your function continues before the stream has been piped, and the file is still zero-length when you stat it the second time.
The download method works just fine because it returns a Promise.
This answer outlines the general approach you need to take. In summary, though, you basically want the section of your code that does the piping to read like this:
const stream = bucket.file(filePath).createReadStream({
    start: 10000,
    end: 20000
  })
  .pipe(fs.createWriteStream(tempFile));

await new Promise((resolve, reject) => {
  stream.on('finish', resolve);
  stream.on('error', reject);
});

console.log("tempFile size2", fs.statSync(tempFile).size)
Your function will then wait until the finish event occurs when the piping is complete and the stream is closed. Obviously you probably want to do something more clever with the error handler too, but this is the general form of what you need.
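On newer Node runtimes (roughly Node 15 and later), the same idea can be written with the promisified pipeline helper from stream/promises, which resolves when writing finishes and rejects if either stream errors. A sketch, assuming the same bucket, filePath and tempFile variables as above:

const { pipeline } = require('stream/promises');

// copies the requested byte range into tempFile and waits for the write to complete
await pipeline(
  bucket.file(filePath).createReadStream({ start: 10000, end: 20000 }),
  fs.createWriteStream(tempFile)
);
console.log("tempFile size2", fs.statSync(tempFile).size);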

Can I use data crawled with Node.js in browser JavaScript?

I'm new to JavaScript and I'm trying to crawl a website with Node.js. I can see the data in the console log, but I want to use the data in another JavaScript file. How can I fetch the data?
The problem is that I've never used Node.js. I do JavaScript, so I know how to write the code, but I don't know how the back end or the server works.
I tried to open it on my localhost, but the Node methods (e.g. require()) didn't work. I found out it's because Node doesn't run in the browser. (See? Very new to JS.)
Should I use a bundler or something?
The steps I thought of were:
somehow send the data as JSON
somehow fetch the JSON data and render it
Here is the crawling code file.
const axios = require("axios");
const cheerio = require("cheerio");
const log = console.log;

const getHtml = async () => {
  try {
    return await axios.get(URL);
  } catch (error) {
    console.error(error);
  }
};

getHtml()
  .then(html => {
    let ulList = [];
    const $ = cheerio.load(html.data);
    const $bodyList = $("div.info-timetable ul").children("li");

    $bodyList.each(function(i, elem) {
      ulList[i] = {
        screen: $(this).find('a').attr('data-screenname'),
        time: $(this).find('a').attr('data-playstarttime')
      };
    });

    const data = ulList.filter(n => n.time);
    return data;
  })
  .then(res => log(res));
Could you please explain what steps I should take? Also, it would be great if I could understand WHY the steps are needed.
Thanks a lot!
You can try writing your data to a JSON file and proceed from there; that's one way. Then you can use the data as an object in any JS file.
const fs = require('fs');

const appendFile = (file, contents) =>
  new Promise((resolve, reject) => {
    fs.appendFile(
      file,
      contents,
      'utf8',
      err => (err ? reject(err) : resolve()),
    );
  });

getHtml()
  .then(html => {
    let ulList = [];
    const $ = cheerio.load(html.data);
    const $bodyList = $("div.info-timetable ul").children("li");

    $bodyList.each(function(i, elem) {
      ulList[i] = {
        screen: $(this).find('a').attr('data-screenname'),
        time: $(this).find('a').attr('data-playstarttime')
      };
    });

    const data = ulList.filter(n => n.time);
    return data;
  })
  .then(res => {
    // stringify the array so the file contains valid JSON rather than "[object Object]"
    return appendFile('./data.json', JSON.stringify(res))
  })
  .then(done => { log('updated data json') });
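If you also need the second step from your list (fetching the JSON from browser-side JavaScript), one minimal approach is to serve the file over HTTP with Node's built-in http module; the port and file name below are just placeholders:

// server.js - serves data.json so a browser page can fetch it
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  fs.readFile('./data.json', (err, contents) => {
    if (err) {
      res.writeHead(500);
      return res.end('could not read data.json');
    }
    res.writeHead(200, {
      'Content-Type': 'application/json',
      // lets a page served from another origin read it during development
      'Access-Control-Allow-Origin': '*'
    });
    res.end(contents);
  });
}).listen(3000, () => console.log('listening on http://localhost:3000'));

In the browser script you would then call fetch('http://localhost:3000').then(r => r.json()) and render the result.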