We are using the winston-daily-rotate-file Node.js module for logging, and we keep the logs in date-wise folders. Whenever the date changes, we use the rotate event listener and move the file into the new date folder via fs.rename. However, after renaming the file in the rotate event callback, the next rotate event receives the old filename, not the renamed one.
Below is my code for the winston rotate transport, which produces logs on a 24-hour basis.
What changes do I need to make to the code below?
var winston = require('winston');
var fs = require('fs');
require('winston-daily-rotate-file'); // registers winston.transports.DailyRotateFile

var installedDate = getTodayDate(); // will give the date when the service was started

let fileTransport = new (winston.transports.DailyRotateFile)({
    filename: getININLogPath(), // will give the path to save log files
    datePattern: 'YYYY-MM-DD',
    zippedArchive: true,
    maxSize: '20k', // size = 20 KB
    maxFiles: '14d'
});
fileTransport.setMaxListeners(30);

fileTransport.on('rotate', (oldFilename, newFilename) => {
    let currentDate = getTodayDate(); // getTodayDate will give the current date
    let newFile = getININLogPath();   // will give the path to save log files
    // compare the old date with the new one, then rename into the new date folder
    if (currentDate != installedDate && process.env.ININ_TRACE_ROOT) {
        installedDate = currentDate;
        fs.rename(newFilename, newFile, (err) => {
            // eslint-disable-next-line no-console
            if (err) { console.log('Failed to move the new file into today\'s folder', err); }
            else { console.log('Successfully renamed file to ', newFile); }
        });
    }
});
But even after the rename (fs.rename(newFilename, newFile, ...)), new logs keep landing in the previous date's folder.
For example, if the installed date is 2022-05-23 and the next day is 2022-05-24, the logs for the 24th also end up in the folder for the 23rd.
Please let me know what change is required here.
Any help is much appreciated.
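One direction worth trying (a sketch, not a confirmed fix): the transport keeps writing to the path it was configured with, so renaming the file winston is actively writing (newFilename) leaves the transport's internal bookkeeping pointing at a stale path. Moving only the finished file (oldFilename) into the dated folder avoids touching the file winston still has open. Here, getDatedPathFor() is a hypothetical helper that maps a rotated filename to its destination inside the date-wise folder:

fileTransport.on('rotate', (oldFilename, newFilename) => {
    if (process.env.ININ_TRACE_ROOT) {
        // getDatedPathFor() is hypothetical: it should return the destination
        // path for oldFilename inside the matching date-wise folder.
        const archived = getDatedPathFor(oldFilename);
        // leave newFilename alone; winston keeps writing to it
        fs.rename(oldFilename, archived, (err) => {
            // eslint-disable-next-line no-console
            if (err) { console.log('Failed to archive rotated log file', err); }
        });
    }
});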
My current setup:
1. Convert canvas to blob.
2. Ask the user for a file path.
3. Save the blob at the location given by the user.
However, I can't get step 3 to work. I'm currently trying to use fs to do the job, but it doesn't really seem to save the file.
Current code:
canvas.toBlob(blob => {
    remote.dialog.showSaveDialog({ defaultPath: "file.png" }).then((canceled, filepath) => {
        if (filepath) { // Using filepath because canceled is always true for some reason
            blob.arrayBuffer().then(arrayBuffer => {
                console.log(arrayBuffer);
                fs.writeFile(filepath, Buffer.from(arrayBuffer), err => {
                    if (err) throw err;
                });
            });
        }
    });
}, "image/png");
Are there any flaws in my code? I tried changing the Buffer to Uint8Array and Int8Array, but they didn't work either.
I solved the problem by replacing the separate canceled/filepath arguments with a single result argument that holds both of them; it turns out I was just using showSaveDialog the wrong way.
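For completeness, here is a corrected sketch of the snippet above (assuming the same remote, fs, and canvas objects are in scope); showSaveDialog resolves with a single object, and its property is filePath with a capital P:

canvas.toBlob(blob => {
    remote.dialog.showSaveDialog({ defaultPath: "file.png" }).then(result => {
        // result is one object: { canceled, filePath }
        if (!result.canceled && result.filePath) {
            blob.arrayBuffer().then(arrayBuffer => {
                fs.writeFile(result.filePath, Buffer.from(arrayBuffer), err => {
                    if (err) throw err;
                });
            });
        }
    });
}, "image/png");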
I'm running a Lambda function which takes an mp4 video and adds a watermark of a png image over the top of it in the bottom right-hand corner (with a 10px margin). It then outputs that video to a temporary location. It keeps failing with error code 1, which isn't very helpful. I'm using a binary version of ffmpeg that is included in the main directory of the code. I know that ffmpeg is set up correctly because I use it the same way in another Lambda function, which works. But adding an overlay fails. Here is the relevant part of my code:
function addWatermark(next) {
    var ffmpeg = child_process.spawn("ffmpeg", [
        "-i", target, // url to stream from
        "-i", watermarkPath,
        "-filter_complex", "overlay=x=W-w-10:y=H-h-10:format=rgb,format=yuv420p",
        "-c:a", "copy",
        "pipe:1"
    ]);
    ffmpeg.on("error", function (err) {
        console.log(err);
    });
    ffmpeg.on("close", function (code) {
        if (code != 0) {
            console.log("child process exited with code " + code); // Always exits here.
        } else {
            console.log("Processing finished !");
        }
        tmpFile.end();
        next(code);
    });
    tmpFile.on("error", function (err) {
        console.log("stream err: ", err);
    });
    ffmpeg.on("end", function () {
        tmpFile.end();
    });
    ffmpeg.stdout.pipe(tmpFile)
        .on("error", function (err) {
            console.log("error while writing: ", err);
        });
}
Can anyone spot what may be wrong?
UPDATE
I've managed to print out some more logs, and I'm getting this error:
[NULL # 0x42923e0] Unable to find a suitable output format for 'pipe:1'
You have to tell ffmpeg what format you want the output in using the -f format option. Run ffmpeg -formats to get the list of supported formats.
From the ffmpeg documentation:
-f fmt (input/output)
Force input or output file format. The format is normally auto detected for input files and guessed from the file extension for output files, so this option is not needed in most cases.
For example, if you want the output as MPEG-4, then your call to ffmpeg should look like this:
var ffmpeg = child_process.spawn("ffmpeg", [
    "-i", target, // url to stream from
    "-i", watermarkPath,
    "-filter_complex", "overlay=x=W-w-10:y=H-h-10:format=rgb,format=yuv420p",
    "-c:a", "copy",
    "-f", "m4v",
    "pipe:1"
]);
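A side note, not from the original answer: m4v is a raw MPEG-4 video stream, so the copied audio track won't end up in it. If an actual MP4 container is needed over the non-seekable pipe, a commonly used variation is to fragment the output, roughly like this:

var ffmpeg = child_process.spawn("ffmpeg", [
    "-i", target,
    "-i", watermarkPath,
    "-filter_complex", "overlay=x=W-w-10:y=H-h-10:format=rgb,format=yuv420p",
    "-c:a", "copy",
    "-movflags", "frag_keyframe+empty_moov", // mp4 muxer normally needs seekable output
    "-f", "mp4",
    "pipe:1"
]);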
I'm trying to write to a text file in Node.js.
I'm doing it the following way:
fs.writeFile("persistence\\announce.txt", string, function (err) {
if (err) {
return console.log("Error writing file: " + err);
}
});
where string is a variable.
This function always starts writing at the beginning of the file, so it overwrites previous content from there.
I have a problem in the following case:
old content:
Hello Stackoverflow
new write:
Hi Stackoverflow
Now the following content will be in the file:
Hi Stackoverflowlow
The new write was shorter than the previous content, so part of the old content is still there.
My question:
What do I need to do, so that the old content of a file will be completely removed before the new write is made?
You can try truncating the file first:
fs.truncate("persistence\\announce.txt", 0, function() {
fs.writeFile("persistence\\announce.txt", string, function (err) {
if (err) {
return console.log("Error writing file: " + err);
}
});
});
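(A general fs note rather than part of this answer: fs.writeFile opens with the 'w' flag by default, which already truncates the file on open. If old content survives, the file is most likely being written some other way, for example through a descriptor opened with 'r+'.)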
Alternatively, rename the old file and then append to the now non-existent path (creating a new file). This way you have a backup on the one hand and a fresh, updated ./config.json on the other:
fs.renameSync('./config.json', './config.json.bak')
fs.appendFileSync('./config.json', text)
(sync version, might throw)
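Since both sync calls can throw, a minimal sketch wrapping them (assuming text holds the new contents):

const fs = require('fs');

try {
    fs.renameSync('./config.json', './config.json.bak'); // keep a backup
    fs.appendFileSync('./config.json', text);            // creates a fresh file
} catch (err) {
    console.log('Failed to rotate config file: ' + err);
}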
I'm trying to implement a routine for Node.js that would allow one to open a file that is being appended to by some other process at this very time, and then return chunks of data immediately as they are appended to the file. It can be thought of as similar to the tail -f UNIX command, but acting immediately as chunks become available instead of polling for changes over time. Alternatively, one can think of it as working with a file the way you work with a socket: expecting on('data') to trigger from time to time until the file is closed explicitly.
In C land, if I were to implement this, I would just open the file, feed its file descriptor to select() (or any alternative function with a similar purpose), and then read chunks as the file descriptor is marked "readable". So, when there is nothing to be read, it won't be readable, and when something is appended to the file, it becomes readable again.
I somewhat expected this kind of behavior from the following code sample:
function readThatFile(filename) {
    const stream = fs.createReadStream(filename, {
        flags: 'r',
        encoding: 'utf8',
        autoClose: false // I thought this would prevent file closing on EOF too
    });
    stream.on('error', function (err) {
        // handle error
    });
    stream.on('open', function (fd) {
        // save fd, so I can close it later
    });
    stream.on('data', function (chunk) {
        // process chunk
        // fs.close() if I no longer need this file
    });
}
However, this code sample just bails out when EOF is encountered, so I can't wait for a new chunk to arrive. Of course, I could reimplement this using fs.open and fs.read, but that somewhat defeats the purpose of Node.js. Alternatively, I could fs.watch() the file for changes, but that won't work over the network, and I don't like the idea of reopening the file all the time instead of just keeping it open.
I've tried to do this:
const fd = fs.openSync(filename, 'r'); // sync for readability's sake
const stream = net.Socket({ fd: fd, readable: true, writable: false });
But I had no luck: net.Socket isn't happy and throws TypeError: Unsupported fd type: FILE.
So, any solutions?
UPD: this isn't possible; my answer below explains why.
I haven't looked into the internals of the read streams for files, but it's possible that they don't support waiting for a file to have more data written to it. However, the fs package definitely supports this with its most basic functionality.
To explain how tailing would work, I've written a somewhat hacky tail function which will read an entire file and invoke a callback for every line (separated by \n only) and then wait for the file to have more lines written to it. Note that a more efficient way of doing this would be to have a fixed-size line buffer and just shuffle bytes into it (with a special case for extremely long lines), rather than modifying JavaScript strings.
var fs = require('fs');

function tail(path, callback) {
    var descriptor, bytes = 0, buffer = Buffer.alloc(256), line = '';
    function parse(err, bytesRead, buffer) {
        if (err) {
            callback(err, null);
            return;
        }
        // Keep track of the bytes we have consumed already.
        bytes += bytesRead;
        // Combine the buffered line with the new string data.
        line += buffer.toString('utf-8', 0, bytesRead);
        var i = 0, j;
        while ((j = line.indexOf('\n', i)) != -1) {
            // Callback with a single line at a time.
            callback(null, line.substring(i, j));
            // Skip the newline character.
            i = j + 1;
        }
        // Only keep the unparsed string contents for next iteration.
        line = line.slice(i);
        // Keep reading in the next tick (avoids CPU hogging).
        process.nextTick(read);
    }
    function read() {
        var stat = fs.fstatSync(descriptor);
        if (stat.size <= bytes) {
            // We're currently at the end of the file. Check again in 500 ms.
            setTimeout(read, 500);
            return;
        }
        fs.read(descriptor, buffer, 0, buffer.length, bytes, parse);
    }
    fs.open(path, 'r', function (err, fd) {
        if (err) {
            callback(err, null);
        } else {
            descriptor = fd;
            read();
        }
    });
    return {
        close: function close(callback) {
            fs.close(descriptor, callback);
        }
    };
}

// This will tail the system log on a Mac.
var t = tail('/var/log/system.log', function (err, line) {
    console.log(err, line);
});

// Unceremoniously close the file handle after one minute.
setTimeout(t.close, 60000);
All that said, you should also try to leverage the NPM community. With some searching, I found the tail-stream package which might do what you want, with streams.
Previous answers have mentioned tail-stream's approach which uses fs.watch, fs.read and fs.stat together to create the effect of streaming the contents of the file. You can see that code in action here.
Another, perhaps hackier, approach might be to just use tail by spawning a child process with it. This of course comes with the limitation that tail must exist on the target platform, but one of Node's strengths is doing asynchronous systems development via spawn; even on Windows, you can run Node in an alternate shell like msysgit or Cygwin to get access to the tail utility.
The code for this:
var spawn = require('child_process').spawn;

var child = spawn('tail', ['-f', 'my.log']);

child.stdout.on('data', function (data) {
    console.log('tail output: ' + data);
});

child.stderr.on('data', function (data) {
    console.log('err data: ' + data);
});
So, it seems people have been looking for an answer to this question for five years already, and there is still no on-topic answer.
In short: you can't. Not just in Node.js; you can't at all.
Long answer: there are a few reasons for this.
First, the POSIX standard specifies select() behavior in this regard as follows:
File descriptors associated with regular files shall always select true for ready to read, ready to write, and error conditions.
So, select() can't help with detecting a write beyond the file end.
With poll() it's similar:
Regular files shall always poll TRUE for reading and writing.
I can't tell for sure with epoll(), since it's not standardized and you'd have to read a quite lengthy implementation, but I would assume it's similar.
Since libuv, which is at the core of the Node.js implementation, uses read(), pread() and preadv() in its uv__fs_read(), none of which block when invoked at the end of a file, a read will always return an empty buffer when EOF is encountered. So, no luck here either.
Summarizing: if such functionality is desired, something must be wrong with your design, and you should revise it.
What you're describing is a FIFO file (First In, First Out), which, as you said, works like a socket.
There's a Node.js module that allows you to work with FIFO files.
I don't know what you want this for, but there are better ways to work with sockets in Node.js. Try socket.io instead.
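As a rough illustration of the FIFO idea, a sketch using only core fs rather than the module mentioned above (it assumes a pipe already created with mkfifo /tmp/my.fifo on a POSIX system):

var fs = require('fs');

// Opening a FIFO for reading waits until a writer connects; after that,
// 'data' fires as chunks are written in, which is the socket-like behavior
// the question asks about.
var stream = fs.createReadStream('/tmp/my.fifo', { encoding: 'utf8' });
stream.on('data', function (chunk) {
    console.log('got: ' + chunk);
});

Then, from another shell, echo hello > /tmp/my.fifo should make the chunk appear.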
You could also have a look at this previous question:
Reading a file in real-time using Node.js
Update 1
I'm not familiar with any module that would do what you want with a regular file instead of a socket-type one. But as you said, you could use tail -f to do the trick:
// filename must exist at the time of running the script
var filename = 'somefile.txt';
var spawn = require('child_process').spawn;
var tail = spawn('tail', ['-f', filename]);

tail.stdout.on('data', function (data) {
    data = data.toString().replace(/^[\s]+/i, '').replace(/[\s]+$/i, '');
    console.log(data);
});
Then from the command line try echo someline > somefile.txt and watch the console.
You might also like to have a look at this: https://github.com/layerssss/node-tailer