Logging in node.js to a file, without another module - javascript

I'm running a node application as a daemon. When debugging the daemon, I need to see the output, so I'd like to redirect stdout and stderr to a file.
I'd expect I can just reassign stdout and stderr like in Python or C:
fs = require('fs');
process.stdout = fs.openSync('/var/log/foo', 'w');
process.stderr = process.stdout;
console.log('hello');
When I run the script directly, "hello" is still printed to the console! And when I run it in the background, I see the output neither on the console (of course) nor in /var/log/foo.
I don't want or need sophisticated logging. I just need to see the builtin messages that node already provides.

The console object grabs a reference to process.stdout and process.stderr when it is first created (you can see this in the source). Re-assigning them later does not affect console.
The usual way to redirect these streams is to launch the process with the streams redirected.
Alternatively, you can overwrite the console methods and make them write to your file instead.
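A minimal sketch of that alternative (the log path is illustrative):
var fs = require('fs');
var util = require('util');
var logStream = fs.createWriteStream('/var/log/foo', { flags: 'a' });

// Route console.log and console.error into the file instead of the TTY.
console.log = function () {
  logStream.write(util.format.apply(null, arguments) + '\n');
};
console.error = console.log;

console.log('hello'); // appended to /var/log/foo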

Similar to this question, you can overwrite process.stdout.write which is called from console.log.
var fs = require('fs');
var oldWrite = process.stdout.write;
process.stdout.write = function (d) {
  fs.appendFileSync('./foo', d);
  oldWrite.apply(this, arguments);
};
console.log('hello');

Related

Trying to log to file on nodejs with 'log' package from npm

I'm not sure how I'm supposed to use this package. I've followed the example code from the docs:
var fs = require('fs')
, Log = require('log')
, log = new Log('debug', fs.createWriteStream('my.log'));
But then what? How do I send actual log info to the file? I want what normally gets logged with console.log() to go to the file.
edit: here's the context I'm using it in, as a minimal example. log.info() works fine outside of the while loop. As it is below, the file is created but has nothing in it.
var fs = require('fs')
var Log = require('log')
var log = new Log('info', fs.createWriteStream('my.log'));
while (true) {
  log.info("testing");
}
// (taken from readme of package)
log.debug('preparing email');
log.info('sending email');
log.error('failed to send email');
These will each log to the file you specified. The function name denotes the log level and is prepended to the data you provide. (As for the while(true) example: the file stays empty because the stream writes asynchronously, and a tight loop never yields to the event loop, so the writes are never flushed; see the sketch below.)
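A minimal sketch of the same logging with the loop replaced by a timer, so the event loop can flush the stream (the interval value is arbitrary):
var fs = require('fs');
var Log = require('log');
var log = new Log('info', fs.createWriteStream('my.log'));

// setInterval yields control back to the event loop between writes,
// letting the underlying write stream flush to disk.
setInterval(function () {
  log.info('testing');
}, 1000);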
You need to use the fs.appendFileSync() method in order to add content synchronously to the log you are creating.
var fs = require('fs')
  , Log = require('log')
  , log = new Log('debug', fs.createWriteStream('test.txt'));
log.debug('test test');
log.debug('test sadasd');
log.debug('test xcvxcv');
log.debug('test ewrewr');
log.debug('test hjgj');
log.debug('test fghfh');
log.debug('test yuiyui');
This package is not designed to capture console.log() statements. As far as I know, every log file entry carries a date/time, a log type, and the related information.
The log package you mentioned is used to create user-defined logs of any level (info, debug, warning, etc.).

writeSync() only writes to the console when console.log is present (node.js)

Take the following snippet:
try {
  fs = require('fs');
  fs.writeSync(0, 'Trying now...');
  fs.writeSync(0, 'worked!\r');
}
catch (error) {}
As is, it will not output to the console, however
try {
  fs = require('fs');
  fs.writeSync(0, 'Trying now...');
  fs.writeSync(0, 'worked!\r');
  console.log();
}
catch (error) {}
Will output "Trying now... worked!" to the console. What exactly is going on here?
You're writing to the file descriptor but not flushing it. Writing a line break (\n instead of \r) causes stdout to flush its buffer, as does a console.log() call.
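To make the difference concrete, a minimal sketch of the working version:
var fs = require('fs');

// fd 0 is stdin, but on an interactive terminal it refers to the same TTY
// as stdout, so writes to it show up on screen.
fs.writeSync(0, 'Trying now... ');
fs.writeSync(0, 'worked!\n'); // '\n' instead of '\r' makes the line appear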
Just realized it was because I was using \r instead of \n

locking a file using lockfile.locksync in node.js

I am using lockfile.lockSync to lock a file in node.js, but I'd like to know the complete mechanism of this utility. So far, every website says that it's a "very polite lock file utility", but none explains its internal mechanism.
Any suggestions?
NODE FS
To understand file options in Node.js, take a look at the file system flags documentation (#fs_file_system_flags) and the Linux open() syscall, which cover options like
O_TRUNC - truncate existing file
O_CREAT - create if not exists
O_WRONLY - access mode, write-only
O_EXCL - ensure that this call creates the file; paired with O_CREAT, it triggers an EEXIST error if the file exists.
But Node's fs doesn't offer file-lock alternatives similar to flock.
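For illustration, a minimal sketch of how these flags combine in an fs flag string (the filename is arbitrary):
const fs = require('fs');

// 'wx' maps to O_WRONLY | O_CREAT | O_EXCL: create the file for writing,
// failing with EEXIST if it already exists.
try {
  const fd = fs.openSync('some.lock', 'wx');
  fs.closeSync(fd);
} catch (err) {
  if (err.code === 'EEXIST') console.log('file already exists');
  else throw err;
}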
NPM LOCKFILE INVESTIGATING
I tried to use the npm library called "lockfile", as there are plenty of copy/pasted examples using it.
Verdict: the "lockfile" npm library in its current implementation (v1.0.4) is incomplete/useless!
From the source:
exports.lockSync = function (path, opts) {
  opts = opts || {}
  opts.req = opts.req || req++
  debug('lockSync', path, opts)
  if (opts.wait || opts.retryWait) {
    throw new Error('opts.wait not supported sync for obvious reasons')
  }
  try {
    var fd = fs.openSync(path, wx)
    locks[path] = fd
    try { fs.closeSync(fd) } catch (er) {}
    debug('locked sync!', path, fd)
    return
    // ...
So those five lines of actual code in the try {} block just openSync in "wx" mode, save the descriptor into locks[path], and closeSync.
If the file already exists, it fails, because of "wx"!
YOU CAN'T LOCK AN EXISTING FILE with the "lockfile" library!
Finally, it registers an exit handler that calls "unlockSync", with this code:
exports.unlockSync = function (path) {
  debug('unlockSync', path)
  // best-effort. unlocking an already-unlocked lock is a noop
  try { fs.unlinkSync(path) } catch (er) {}
  delete locks[path]
}
And yes, it will DELETE your file after the process exits!
These are not file-locking mechanics!
ANY RELIABLE SOLUTION FOR NODEJS FILE LOCK?
I've found that fs-ext works just perfectly! Open two tabs, each running a node process, and see what locking a file means:
tab1
const fs = require("fs");
const {flockSync} = require('fs-ext');
const fd = fs.openSync('1.txt', 'w');
flockSync(fd, 'ex');
tab2
const fs = require("fs");
const {flockSync} = require('fs-ext');
const fd = fs.openSync('1.txt', 'r');
flockSync(fd, 'sh'); // blocks!
In tab2, flockSync(fd, 'sh') will block until flockSync(fd, 'un') is called in tab1!
It really works!
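To complete the picture, a short sketch of the release step in tab1 ('un' is fs-ext's unlock mode):
// later, in tab1: release the exclusive lock so tab2's pending 'sh' call proceeds
flockSync(fd, 'un');
fs.closeSync(fd);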

node.js call external exe and wait for output

I just want to call an external exe from a node.js app. This external exe makes some calculations and returns output that the node.js app needs. But I have no idea how to make the connection between node.js and an external exe. So my questions:
How do I call an external exe-file with specific arguments from within nodejs properly?
And how do I have to transmit the output of the exe to nodejs efficiently?
Node.js shall wait for the output of the external exe, but how does node.js know when the exe has finished its processing? And then how do I deliver the result of the exe back? I don't want to create a temporary text file that the exe writes its output to and node.js simply reads. Is there any way I can return the output of the exe directly to node.js? I don't know how an external exe can deliver its output directly to node.js. BTW: the exe is my own program, so I have full access to it and can make any necessary changes. Any help is welcome...
With the child_process module.
With stdout.
The code will look like this:
var exec = require('child_process').exec;
var result = '';
var child = exec('ping google.com');

child.stdout.on('data', function (data) {
  result += data;
});

child.on('close', function () {
  console.log('done');
  console.log(result);
});
You want to use child_process; you can use exec or spawn, depending on your needs. exec will return a buffer (it's not live); spawn will return a stream (it is live). There are also some occasional quirks between the two, which is why I do the funny thing I do to start npm.
Here's a modified example from a tool I wrote that was trying to run npm install for you:
var spawn = require('child_process').spawn;
var isWin = /^win/.test(process.platform);

// Run through a shell: cmd /c on Windows, sh -c elsewhere. The command is
// passed as a single string so that sh -c treats it as one script.
var child = spawn(isWin ? 'cmd' : 'sh', [isWin ? '/c' : '-c', 'npm install']);

child.stdout.pipe(process.stdout); // I'm logging the output to stdout, but you can pipe it into a text file or an in-memory variable
child.stderr.pipe(process.stderr);

child.on('error', function (err) {
  console.error('run-install', err);
  process.exit(1); // Or whatever you do on error, such as calling your callback or resolving a promise with an error
});

child.on('exit', function (code) {
  if (code !== 0) throw new Error('npm install failed, see npm-debug.log for more details');
  process.exit(0); // Or whatever you do on completion, such as calling your callback or resolving a promise with the data
});
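Since the question is about running your own exe with specific arguments, child_process.execFile may be the closer fit; a minimal sketch (the path and arguments are placeholders):
var execFile = require('child_process').execFile;

// Runs the executable directly (no shell), buffers its stdout, and fires
// the callback once the process has exited.
execFile('C:\\path\\to\\mycalc.exe', ['--input', '42'], function (err, stdout, stderr) {
  if (err) return console.error(err);
  console.log('exe output:', stdout);
});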

How NOT to stop reading file when meeting EOF?

I'm trying to implement a routine for Node.js that would allow one to open a file that is being appended to by some other process at this very time, and then return chunks of data immediately as they are appended to the file. It can be thought of as similar to the tail -f UNIX command, but acting immediately as chunks become available instead of polling for changes over time. Alternatively, one can think of it as working with a file the way you work with a socket, expecting on('data') to trigger from time to time until the file is closed explicitly.
In C land, if I were to implement this, I would just open the file, feed its file descriptor to select() (or any alternative function with a similar purpose), and then just read chunks as the file descriptor is marked "readable". So, when there is nothing to be read, it won't be readable, and when something is appended to the file, it becomes readable again.
I somewhat expected this kind of behavior from the following code sample in JavaScript:
const fs = require('fs');

function readThatFile(filename) {
  const stream = fs.createReadStream(filename, {
    flags: 'r',
    encoding: 'utf8',
    autoClose: false // I thought this would prevent file closing on EOF too
  });
  stream.on('error', function (err) {
    // handle error
  });
  stream.on('open', function (fd) {
    // save fd, so I can close it later
  });
  stream.on('data', function (chunk) {
    // process chunk
    // fs.close() if I no longer need this file
  });
}
However, this code sample just bails out when EOF is encountered, so I can't wait for a new chunk to arrive. Of course, I could reimplement this using fs.open and fs.read, but that somewhat defeats the purpose of Node.js. Alternatively, I could fs.watch() the file for changes, but that won't work over a network, and I don't like the idea of reopening the file all the time instead of just keeping it open.
I've tried to do this:
const fd = fs.openSync(filename, 'r'); // sync for readability's sake
const stream = net.Socket({ fd: fd, readable: true, writable: false });
But I had no luck: net.Socket isn't happy and throws TypeError: Unsupported fd type: FILE.
So, any solutions?
UPD: this isn't possible, my answer explains why.
I haven't looked into the internals of the read streams for files, but it's possible that they don't support waiting for a file to have more data written to it. However, the fs package definitely supports this with its most basic functionality.
To explain how tailing would work, I've written a somewhat hacky tail function which will read an entire file and invoke a callback for every line (separated by \n only) and then wait for the file to have more lines written to it. Note that a more efficient way of doing this would be to have a fixed size line buffer and just shuffle bytes into it (with a special case for extremely long lines), rather than modifying JavaScript strings.
var fs = require('fs');

function tail(path, callback) {
  var descriptor, bytes = 0, buffer = Buffer.alloc(256), line = '';

  function parse(err, bytesRead, buffer) {
    if (err) {
      callback(err, null);
      return;
    }
    // Keep track of the bytes we have consumed already.
    bytes += bytesRead;
    // Combine the buffered line with the new string data.
    line += buffer.toString('utf-8', 0, bytesRead);
    var i = 0, j;
    while ((j = line.indexOf('\n', i)) != -1) {
      // Callback with a single line at a time.
      callback(null, line.substring(i, j));
      // Skip the newline character.
      i = j + 1;
    }
    // Only keep the unparsed string contents for next iteration.
    line = line.substr(i);
    // Keep reading in the next tick (avoids CPU hogging).
    process.nextTick(read);
  }

  function read() {
    var stat = fs.fstatSync(descriptor);
    if (stat.size <= bytes) {
      // We're currently at the end of the file. Check again in 500 ms.
      setTimeout(read, 500);
      return;
    }
    fs.read(descriptor, buffer, 0, buffer.length, bytes, parse);
  }

  fs.open(path, 'r', function (err, fd) {
    if (err) {
      callback(err, null);
    } else {
      descriptor = fd;
      read();
    }
  });

  return {
    close: function close(callback) {
      fs.close(descriptor, callback);
    }
  };
}

// This will tail the system log on a Mac.
var t = tail('/var/log/system.log', function (err, line) {
  console.log(err, line);
});

// Unceremoniously close the file handle after one minute.
setTimeout(t.close, 60000);
All that said, you should also try to leverage the NPM community. With some searching, I found the tail-stream package which might do what you want, with streams.
Previous answers have mentioned tail-stream's approach which uses fs.watch, fs.read and fs.stat together to create the effect of streaming the contents of the file. You can see that code in action here.
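A rough sketch of that watch-and-read pattern (simplified: it assumes the file only grows and ignores races between change events):
const fs = require('fs');

function watchTail(path, onChunk) {
  // Start at the current end of the file and stream any newly appended
  // bytes each time fs.watch reports a change.
  let offset = fs.statSync(path).size;
  return fs.watch(path, () => {
    const size = fs.statSync(path).size;
    if (size <= offset) return; // truncated or nothing new
    fs.createReadStream(path, { start: offset, end: size - 1 }).on('data', onChunk);
    offset = size;
  });
}

// Usage: watchTail('my.log', chunk => process.stdout.write(chunk));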
Another, perhaps hackier, approach might be to just use tail by spawning a child process with it. This of course comes with the limitation that tail must exist on the target platform, but one of node's strengths is using it to do asynchronous systems development via spawn and even on windows, you can execute node in an alternate shell like msysgit or cygwin to get access to the tail utility.
The code for this:
var spawn = require('child_process').spawn;
var child = spawn('tail', ['-f', 'my.log']);

child.stdout.on('data', function (data) {
  console.log('tail output: ' + data);
});

child.stderr.on('data', function (data) {
  console.log('err data: ' + data);
});
So, it seems people have been looking for an answer to this question for five years now, and there is still no on-topic answer.
In short: you can't. Not just in Node.js; you can't at all.
Long answer: there are a few reasons for this.
First, the POSIX standard clarifies select() behavior in this regard as follows:
File descriptors associated with regular files shall always select true for ready to read, ready to write, and error conditions.
So, select() can't help with detecting a write beyond the file end.
With poll() it's similar:
Regular files shall always poll TRUE for reading and writing.
I can't tell for sure with epoll(), since it's not standardized and you would have to read a quite lengthy implementation, but I would assume it's similar.
Since libuv, which is at the core of the Node.js implementation, uses read(), pread() and preadv() in its uv__fs_read(), none of which blocks when invoked at the end of a file, it will always return an empty buffer when EOF is encountered. So, no luck here either.
Summarizing: if such functionality is desired, something must be wrong with your design, and you should revise it.
What you're describing is a FIFO file (an acronym for First In, First Out), which, as you said, works like a socket.
There's a node.js module that allows you to work with fifo files.
I don't know what you want it for, but there are better ways to work with sockets in node.js. Try socket.io instead.
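For what it's worth, plain fs can read from a FIFO too; a minimal sketch, assuming the FIFO already exists (e.g. created with mkfifo /tmp/myfifo on a POSIX system):
const fs = require('fs');

// Opening a FIFO for reading waits for a writer; the stream then emits
// 'data' as chunks arrive and 'end' only when the writer closes its side.
const stream = fs.createReadStream('/tmp/myfifo');
stream.on('data', (chunk) => console.log('got:', chunk.toString()));
stream.on('end', () => console.log('writer closed the pipe'));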
You could also have a look at this previous question:
Reading a file in real-time using Node.js
Update 1
I'm not familiar with any module that would do what you want with a regular file rather than a socket-type one. But, as you said, you could use tail -f to do the trick:
// filename must exist at the time of running the script
var filename = 'somefile.txt';
var spawn = require('child_process').spawn;
var tail = spawn('tail', ['-f', filename]);
tail.stdout.on('data', function (data) {
  data = data.toString().replace(/^[\s]+/i, '').replace(/[\s]+$/i, '');
  console.log(data);
});
Then, from the command line, try echo someline >> somefile.txt (appending, so tail -f picks it up) and watch the console.
You might also want to have a look at this: https://github.com/layerssss/node-tailer
