Unexpected end of JSON input at parse (<anonymous>) - javascript

I'm trying to write to an existing JSON file, and I've noticed that I should read the file, insert the new item, and then write it again. It works most of the time, but sometimes I receive the error "Unexpected end of JSON input at parse ()" and the whole data in the JSON file is deleted.
app.post(APIClonePath, (req, res) => {
    const body = req.body
    let cloneTicket: Ticket = body.ticket
    let ticketID = body.id
    let fs = require('fs')
    // #ts-ignore
    fs.readFile('./data.json', (err, data) => {
        if (err) {
            throw new Error('ERROR in reading JSON')
            // res.send(false)
        }
        let ticketsInJSON = JSON.parse(data)
        let indexOfOriginTicket
        for (let index = 0; index < ticketsInJSON.length; index++) {
            if (ticketsInJSON[index].id.valueOf() === ticketID) {
                indexOfOriginTicket = index
                break
            }
        }
        // #ts-ignore
        ticketsInJSON.splice(indexOfOriginTicket, 0, cloneTicket)
        let writingArray = JSON.stringify(ticketsInJSON)
        writeFile('./data.json', writingArray, function () {})
    })
    res.send(true)
})

1st. Move the fs declaration to the file header with all the other requires.
2nd. You use // #ts-ignore (it should be // @ts-ignore) as if this were a TS file, yet you use require instead of import.
3rd. You answer the user with true before you have even read the file, and you have no precaution against concurrent file access in this code. Imagine more than one person executing this code: one reads the file and starts writing to it while someone else has just started reading the not-yet-saved file - you never know what can happen.
4th. You ignore the callback of writeFile, which can tell you about errors while writing the file.
5th. If you do not find the index, you splice with undefined instead of an index - that can also cause problems.
A sketch pulling these points together follows below.
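A minimal corrected sketch, assuming the same Express setup, the APIClonePath route and the data.json layout from the question: it parses defensively, responds only after the write has finished, and reports errors instead of throwing.

const fs = require('fs');   // moved to the top with the other requires

app.post(APIClonePath, (req, res) => {
    const cloneTicket = req.body.ticket;
    const ticketID = req.body.id;

    fs.readFile('./data.json', 'utf8', (readErr, data) => {
        if (readErr) {
            return res.status(500).send(false);   // report instead of throwing
        }
        let tickets;
        try {
            tickets = JSON.parse(data);
        } catch (parseErr) {
            return res.status(500).send(false);   // file was empty or corrupted
        }
        const index = tickets.findIndex(t => t.id === ticketID);
        if (index === -1) {
            return res.status(404).send(false);   // original ticket not found
        }
        tickets.splice(index, 0, cloneTicket);
        fs.writeFile('./data.json', JSON.stringify(tickets), (writeErr) => {
            if (writeErr) {
                return res.status(500).send(false);
            }
            res.send(true);   // answer only after the write has finished
        });
    });
});

This still does not serialize concurrent requests; for that you would need a write queue or a real database.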

Related

Where does readStream store the file in Node.js?

I read that createReadStream doesn't put the whole file into memory; instead it works with chunks. However, I have a situation where I am simultaneously writing to and reading from a file. The write finishes first, then I delete the file from disk. Somehow, the read stream was able to finish reading the whole file without any error.
Does anyone have an explanation for this? Am I wrong to think that streams don't load the file into memory?
Here's the code for writing to a file
const fs = require('fs');
const file = fs.createWriteStream('./bigFile4.txt');

function write(stream, data) {
    if (!stream.write(data))
        return new Promise(resolve => stream.once('drain', resolve));
    return true;
}

(async () => {
    for (let i = 0; i < 1e6; i++) {
        const res = write(file, 'a');
        if (res instanceof Promise)
            await res;
    }
    write(file, 'success');
})();
For reading I used this:
const file = fs.createReadStream('bigFile4.txt');
file.on('data', (chunk) => {
    console.log(chunk.toString());
});
file.on('end', () => {
    console.log('done');
});
At least on UNIX-type OSes, if you open a file and then remove it, the file data will still be available to read until you close the file.
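A small sketch (assuming a POSIX-style system, with demo.txt as a throwaway file) that shows this behaviour: the directory entry is removed while a descriptor is still open, and the data can still be read through that descriptor.

const fs = require('fs');

fs.writeFileSync('demo.txt', 'still readable after unlink');

const fd = fs.openSync('demo.txt', 'r');   // keep a descriptor open
fs.unlinkSync('demo.txt');                 // remove the directory entry

// The inode (and its data) lives on until the last descriptor is closed.
const buf = Buffer.alloc(64);
const bytes = fs.readSync(fd, buf, 0, buf.length, 0);
console.log(buf.toString('utf8', 0, bytes));   // "still readable after unlink"

fs.closeSync(fd);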

Creating text files from array, using FS

I created an array of strings called schoolArray. I want to be able to create a text file for each school, using fs.
For some reason, I just can't get this to work. The issue seems to be with using the value from schoolArray[0] as a string in the path name. Please take a look at my series of tests. This first code snippet works; I added it just to show that I import fs and create the directory first.
Update - Added schoolArray creation per request
var fs = require('fs');

// Read all schools into array (read from text file)
const schoolFile = "./assets/dispatch/state/schools/county_name.txt";
fileInput = fs.readFileSync(schoolFile, "utf-8");
const schoolArray = fileInput.split("\n");

// Variable for chat logs directory
const chatDir = "./chat-logs";

// Create directory if it doesn't exist
if (!fs.existsSync(chatDir)) {
    fs.mkdirSync(chatDir);
}
The directory is created. Now try to make a file, attempt #1:
var schoolTextFile = chatDir + "/" + schoolArray[0] + ".txt";
fs.writeFileSync(schoolTextFile, "");
Uncaught Error: ENOENT: no such file or directory, open 'C:\Users\PC\Desktop\Node_Webkit_Test\chat-logs\Test School Name.txt'
at Object.fs.openSync (fs.js:653:18)
at Object.fs.writeFileSync (fs.js:1300:33)
Okay, so that doesn't work for some reason. Attempt #2 - I have come to think that the schoolArray[0] value isn't being read as a string, so I tried this:
var schoolTextFile = chatDir + "/" + toString(schoolArray[0]) + ".txt";
fs.writeFileSync(schoolTextFile, "");
There are no errors here, but the output is an undefined object:
Attempt #3 was to simply try a text string instead of using the value from my array. This worked exactly as intended.
var schoolTextFile = chatDir + "/" + "some Text 1234" + ".txt";
fs.writeFileSync(schoolTextFile, "");
Thus, the issue is pinpointed to be with the schoolArray[0] value being entered into the path. It seems silly to even test, but I did this anyway...
var somestring = "some text 1234";
console.log(typeof somestring);
// The log says that this is a string.
console.log(typeof schoolArray[0]);
// The log says that this is a string.
So then, why does one string work here, and the other causes path issues? Thanks in advance!
You must have some forbidden characters in schoolArray, typically \r (the file was probably saved with Windows line endings, so each line ends with \r\n and splitting on \n leaves the \r behind). Try:
const schoolArray = fileInput.split("\n").map(line => line.replace(/\r/g, ''));
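Building on that, here is a minimal sketch of the loop the question is aiming for: one .txt file per school, with the \r stripped. It assumes the same county_name.txt and chat-logs paths as the question.

var fs = require('fs');
var path = require('path');

const fileInput = fs.readFileSync("./assets/dispatch/state/schools/county_name.txt", "utf-8");
// Strip the trailing \r that Windows line endings leave behind
const schoolArray = fileInput.split("\n").map(line => line.replace(/\r/g, '').trim());

const chatDir = "./chat-logs";
if (!fs.existsSync(chatDir)) {
    fs.mkdirSync(chatDir);
}

schoolArray
    .filter(name => name.length > 0)   // skip blank lines
    .forEach(name => {
        fs.writeFileSync(path.join(chatDir, name + ".txt"), "");
    });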

How to import a file where part of the file name changes very frequently?

I am trying to import a file where part of the file name changes very frequently. The current date is part of the file name, so it changes every day.
Below is my code to import the file. Since its name changes,
how do I import the file without changing the file name in my code every day?
var CusInfo = path.join(__dirname, 'CusInfo_2018_05-17.txt');
Here is an example where we use the fs library to read the contents of a directory and search for files containing 'CusInfo'.
import fs from 'fs';

let contents = fs.readdirSync('./foo');
for (let x = 0; x < contents.length; x++) {
    if (contents[x].includes('CusInfo')) {
        // Do more checks or use the file or save to another variable.
    }
}
Now if the file name changes because of the date you can still find the file or files.
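For completeness, one way such a match might then be used; the ./foo directory and the logging are just placeholders, not part of the original answer.

import fs from 'fs';
import path from 'path';

const dir = './foo';
// Pick the first entry whose name contains 'CusInfo', whatever date it carries
const match = fs.readdirSync(dir).find(name => name.includes('CusInfo'));

if (match) {
    const contents = fs.readFileSync(path.join(dir, match), 'utf-8');
    console.log(contents);
} else {
    console.log('No CusInfo file found in ' + dir);
}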
Given that the name of the file you want to read changes every day, you need to describe the same naming rules in your code, if you don't want to hardcode the name.
Since (I'm assuming) you just want today's date to be in the path of the txt file, put that into the file name. You can do that with moment like so:
const moment = require('moment');
const fs = require('fs');

const now = moment();
let fileName = `CusInfo_${now.format('YYYY_MM-DD')}.txt`;
fs.readFileSync(fileName);
Now this might very well throw an error, because there is no guarantee that the txt file has been created yet.
To handle this, use fs.access() before actually reading the file:
fs.access(fileName, fs.constants.F_OK, (err) => {
    if (err) {
        fileName = `CusInfo_${now.add(-1, 'days').format('YYYY_MM-DD')}.txt`;
    }
    fs.readFileSync(fileName);
});
The code above checks whether today's txt file exists and, if not, falls back to yesterday's file name.
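A fully synchronous variant is also possible; the sketch below (readCusInfo is a made-up helper name, under the same naming assumption) uses fs.existsSync so the check and the read are not split across a callback.

const moment = require('moment');
const fs = require('fs');

function readCusInfo() {
    const now = moment();
    let fileName = `CusInfo_${now.format('YYYY_MM-DD')}.txt`;

    // Fall back to yesterday's file if today's has not been created yet
    if (!fs.existsSync(fileName)) {
        fileName = `CusInfo_${now.subtract(1, 'days').format('YYYY_MM-DD')}.txt`;
    }
    return fs.readFileSync(fileName, 'utf-8');
}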

write array to file after looping node js

I have yet to find an answer to my problem in the examples from questions others have asked about this.
I wrote a little web scraper that stores data in one array, and I would like to write it (the array) to a file. I'm having trouble setting things up correctly.
I am using Node.js. Could someone write a sample that takes an array's contents and writes them to a file? Please break it down to basics; I am still new at programming.
Thanks, the code is below.
var content = [];
var request = require('request');
var cheerio = require('cheerio');
var URL = 'http://www.amazon.com';

request(URL, function(error, response, html){
    if (error) {
        console.log('Error:', error);
    }
    if (response.statusCode !== 200) {
        console.log('Invalid Status Code Returned:', response.statusCode);
    }
    //console.log(html);
    var $ = cheerio.load(html);
    $('td').each(function (i, element) {
        var a = $(this).next();
        var trimmed_a = a.text();
        trimmed_a = trimmed_a.trim();
        var str = trimmed_a.replace(/\s\s+/g, "");
        var newStr = str.trim();
        content.push(newStr);
    });
    console.log(content);
});
Simplest possible answer:
var fs = require('fs');

var arr = ['cat', 'dog', 'bird'];
var filename = 'output.txt';
var str = JSON.stringify(arr, null, 4);

fs.writeFile(filename, str, function(err){
    if (err) {
        console.log(err);
    } else {
        console.log('File written!');
    }
});
Here, arr would be your array of data, which you're casting to a string because fs.writeFile expects a string. I used the null, 4 additional arguments to make it pretty-print, so you can read it with a four-space indent.
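To tie this back to the scraper in the question, a sketch of where the write could go: inside the request callback, after the loop has filled content, since request is asynchronous. The output.txt name is just a placeholder.

var fs = require('fs');
var request = require('request');
var cheerio = require('cheerio');

var content = [];
var URL = 'http://www.amazon.com';

request(URL, function(error, response, html){
    if (error) {
        return console.log('Error:', error);
    }
    var $ = cheerio.load(html);
    $('td').each(function () {
        content.push($(this).next().text().trim());
    });

    // Write only after the array has been filled; doing it outside the
    // callback would run before the scrape has finished.
    fs.writeFile('output.txt', JSON.stringify(content, null, 4), function(err){
        if (err) {
            console.log(err);
        } else {
            console.log('File written!');
        }
    });
});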
Hope this helps!
It's not possible to store a real array/object in a file, since its contents are bytes; however, you can store a stringified representation of the object and then parse that same format later, using JSON for example (I think this reference also applies to Node.js):
var json_format = JSON.stringify(content);
So, if you want to get the array back after reading the file's contents:
JSON.parse(json_format)
Remember, JSON has no notion of function declarations; all primitive values are supported except NaN, Infinity and undefined (which is not a value), and special number syntaxes such as exponents (+ | -) are still allowed: JSON. Values that JSON doesn't support are turned into null (in arrays) or dropped (in objects) by JSON.stringify. I'm not sure exactly how this behaves across different platforms, though (I only use browser JS).
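For example:
JSON.stringify([1, NaN, Infinity, undefined]);      // '[1,null,null,null]'
JSON.stringify({ a: undefined, b: function(){} });  // '{}'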
Now, to save/write the file there are currently the asynchronous fs.writeFile and the synchronous fs.writeFileSync. I don't know much about Node.js, though. When using these methods you must include the File System module somewhere, normally like so:
var fs = require('fs');

node.js fs - stream file "backwards" - from bottom to top

Using Node.js, what is the best way to stream a file from a filesystem into Node.js, but reading it backwards, from bottom to top? I have a large file, and there doesn't seem to be much sense in reading from the top if I only want the last 10 lines. Is this possible?
Right now I have this horrible code, where we do a GET request with a browser to view the server logs, and pass a query string parameter to tell the server how many lines at the end of the log file we want to read:
function get(req, res, next) {
    var numOfLinesToRespondWith = req.query.num_lines || 10;
    var fileStream = fs.createReadStream(stderr_path, {encoding: 'utf8'});
    var jsonData = []; //where jsonData gets populated
    var ret = [];

    fileStream.on('data', function processLineOfFileData(chunk) {
        jsonData.push(String(chunk));
    })
    .on('end', function handleEndOfFileData(err) {
        if (err) {
            log.error(colors.bgRed(err));
            res.status(500).send({"error reading from smartconnect_stdout_log": err.toString()});
        }
        else {
            for (var i = 0; i < numOfLinesToRespondWith; i++) {
                ret.push(jsonData.pop());
            }
            res.status(200).send({"smartconnect_stdout_log": ret});
        }
    });
}
The code above reads the whole file and only then adds the requested number of lines to the response. This is bad; is there a better way to do this? Any recommendations will be gladly received.
(one problem with the code above is that it's writing out the last lines of the log but the lines are in reverse order...)
One potential way to do this is:
process.exec('tail -r ' + file_path).pipe(process.stdout);
but that syntax is incorrect - so my question there would be - how do I pipe the result of that command into an array in Node.js and eventually into a JSON HTTP response?
I created a module called fs-backwards-stream that may meet your needs: https://www.npmjs.com/package/fs-backwards-stream
If you need the result parsed by lines rather than byte chunks, you should use the module fs-reverse: https://www.npmjs.com/package/fs-reverse
Both of these modules stream. You could also simply read the last n bytes of a file.
Here is an example using plain node fs APIs and no dependencies: https://gist.github.com/soldair/f250fb497ce592c3694a
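Along the lines of that last suggestion, here is a small sketch (not the linked gist; readLastBytes and ./server.log are made-up names) that reads just the last N bytes of a file with plain fs calls and splits them into lines.

const fs = require('fs');

// Read roughly the last `maxBytes` of `filePath` and return its lines.
function readLastBytes(filePath, maxBytes) {
    const { size } = fs.statSync(filePath);
    const start = Math.max(0, size - maxBytes);
    const length = size - start;

    const fd = fs.openSync(filePath, 'r');
    const buf = Buffer.alloc(length);
    fs.readSync(fd, buf, 0, length, start);
    fs.closeSync(fd);

    return buf.toString('utf8').split('\n');
}

// Last 10 lines (assuming 64 KB is enough to cover them)
const lines = readLastBytes('./server.log', 64 * 1024);
console.log(lines.slice(-10).join('\n'));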
Hope that helps.
One easy way, if you're on a Linux computer, would be to execute the tac command from Node (for example with child_process.exec("tac yourfile.dat")) and pipe its stdout to your write stream.
You could also use slice-file and then reverse the order yourself.
Also, look at what @alexmills said in the comments.
This is the best answer I have got, for now.
The tail command on Mac/UNIX reads files from the end and pipes to stdout (correct me if this is loose language).
var cp = require('child_process');

module.exports = function get(req, res, next) {
    var numOfLinesToRespondWith = req.query.num_lines || 100;

    cp.exec('tail -n ' + numOfLinesToRespondWith + ' ' + stderr_path, function(err, stdout, stderr){
        if (err) {
            log.error(colors.bgRed(err));
            res.status(500).send({"error reading from smartconnect_stderr_log": err.toString()});
        }
        else {
            var data = String(stdout).split('\n');
            res.status(200).send({"stderr_log": data});
        }
    });
}
This seems to work really well. It does, however, run in a separate process, which is expensive in its own way, but probably better than reading an entire 10,000-line log file.
