Node.js streams and data disappearing - javascript

I've been playing with Readable and Transform streams, and I can't solve the mystery of disappearing lines.
Consider a text file in which the lines contain sequential numbers, from 1 to 20000:
$ seq 1 20000 > file.txt
I create a Readable stream and a LineStream (from a library called byline: npm install byline; I'm using version 4.1.1):
var file = (require('fs')).createReadStream('file.txt');
var lines = new (require('byline').LineStream)();
Consider the following code:
setTimeout(function() {
    lines.on('readable', function() {
        var line;
        while (null !== (line = lines.read())) {
            console.log(line);
        }
    });
}, 1500);
setTimeout(function() {
    file.on('readable', function() {
        var chunk;
        while (null !== (chunk = file.read())) {
            lines.write(chunk);
        }
    });
}, 1000);
Notice that it first attaches a listener to the 'readable' event of the file Readable stream, which writes to the lines stream, and only half a second later it attaches a listener to the 'readable' event of the lines stream, which simply prints lines to the console.
If I run this code, it will only print 16384 (which is 2^14) lines and stop. It won't finish the file. However, if I change the 1500ms timeout to 500ms -- effectively swapping the order in which the listeners are attached, it will happily print the whole file.
I've tried playing with highWaterMark, with specifying an amount of bytes to read from the file stream, attaching listeners to other events of the lines stream, all in vain.
What can explain this behavior?
Thanks!

I think this behaviour can be explained with two things:
How you use streams.
How byline works.
What you do is manual piping. The problem with it is that it doesn't respect highWaterMark and forces the whole file to be buffered.
All this causes byline to behave badly. See this: https://github.com/jahewson/node-byline/blob/master/lib/byline.js#L110-L112. It means that byline stops pushing lines when the buffer's length exceeds highWaterMark. But this doesn't make any sense! It doesn't prevent memory usage from growing (the lines are still stored in a special line buffer), yet the stream doesn't know about these lines, and if it ends while in this overflowed state, they are lost forever.
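For contrast, here is a minimal line-splitting Transform (a sketch of the general technique, not byline's actual code) that cannot drop data, because everything it parses is handed to push() and the stream's own buffering takes over from there:
const { Transform } = require('stream');

class SimpleLineStream extends Transform {
    constructor() {
        super({ readableObjectMode: true });
        this.remainder = '';
    }
    _transform(chunk, encoding, done) {
        const pieces = (this.remainder + chunk.toString()).split('\n');
        this.remainder = pieces.pop(); // keep the trailing partial line
        for (const line of pieces) {
            // push() may return false once the readable buffer is past
            // highWaterMark, but the line is still buffered by the
            // stream itself, so nothing is ever discarded.
            this.push(line);
        }
        done();
    }
    _flush(done) {
        if (this.remainder) this.push(this.remainder);
        done();
    }
}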
What you can do:
Use pipe (a sketch follows this list)
Modify highWaterMark: lines._readableState.highWaterMark = Infinity;
Stop using byline
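A minimal sketch of the pipe option, reusing the question's setup; pipe() handles backpressure, so the order in which you attach listeners no longer matters:
var fs = require('fs');
var LineStream = require('byline').LineStream;

var file = fs.createReadStream('file.txt');
var lines = new LineStream();

// pipe() pauses the source whenever `lines` is saturated and resumes
// it once the buffer drains, so no chunk is ever written into an
// already-overflowing stream.
file.pipe(lines);

lines.on('readable', function () {
    var line;
    while (null !== (line = lines.read())) {
        console.log(line.toString());
    }
});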

Related

Change video resolution in loop

I am trying to decrease the resolution of a video to under 500x500. I don't want to change it to exactly 500x500 because that would mess up the video quality. So what I am trying to do is scale the video down to 75% of its size repeatedly in a loop, and that loop would only stop when the video is under 500x500. In theory that would not be hard, but I can't seem to figure it out.
var vidwidth = 501;  //Create variable and put it to 501
var vidheight = 501; //so that it won't go through the If Statement
fs.copyFile(filepath2, './media/media.mp4', (err: any) => { //Copy given file to directory
    console.log('filepath2 was copied to media.mp4'); //Log confirmation (Not appearing for some reason, but file is copied)
})
while (true) {
    getDimensions('./media/media.mp4').then(function (dimensions: any) { //Get dimensions of copied video
        var vidwidth = parseInt(dimensions.width)   //Parse to Int
        var vidheight = parseInt(dimensions.height) //and put in variables
    })
    ffmpeg('./media/media.mp4')      //Call ffmpeg function with copied video path
        .output('./media/media.mp4') //Set output to the same file so we can loop it
        .size('75%')                 //Scale to 75% of the current size
        .on('end', function() {      //Log confirmation on end
            console.log('Finished processing'); //(Not appearing)
        })
        .run();                      //Run function
    if (vidwidth < 500 && vidheight < 500) { //Check if both the width and height are under 500px
        break; //If true, break the loop and continue
    }
}
This is the current code I am using with comments. Basically what happens is it gets stuck in the while loop because the dimensions of the video won't change. Tested with console.log() line. I think that if I can fix the ffmpeg problem somehow it will all be fixed.
I'd appreciate any help :)
PS: This is all made in typescript, then build into js using npx tsc
The problem is that the loop prevents the callbacks from ever being called, because JavaScript runs on one thread (read more about this in this other SO question: Callback of an asynchronous function is never called). One of those callbacks is the callback of then, where the variables vidwidth and vidheight get changed; so the condition that checks whether they're less than 500, and would eventually break the loop, is never true, and the loop keeps running forever. This is not the proper way to deal with asynchronous functions anyway (read more about this in this other SO question: How do I return the response from an asynchronous call?).
By the way, copyFile and the while loop are not necessary at all for this kind of work, you can just use getDimensions to get the dimensions of the video, calculate the desired dimensions based on them and start an ffmpeg task (ffmpeg will handle the creation of the resulting file without altering the input file so no need for copyFile). Like so:
getDimensions(filepath2).then((dimensions: any) => { // get the dimensions of the input file
    let sizeStr = dimensions.width < dimensions.height ? "?x500" : "500x?"; // if width is smaller than height, reduce the height to 500 and calculate width based on that, same goes for the other way around
    ffmpeg(filepath2) // the input is the original video, don't worry 'ffmpeg' won't alter the input file
        .output('./media/media.mp4') // the output file path
        .size(sizeStr) // use the 'sizeStr' string calculated previously (read more about it here: https://github.com/fluent-ffmpeg/node-fluent-ffmpeg#video-frame-size-options)
        .on('end', () => console.log('Finished processing'))
        .run();
});
As simple as that!
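For reference, the same flow reads a little more linearly with async/await. This is a sketch under the same assumption that getDimensions returns a promise; the shrinkVideo wrapper name is mine:
// Hypothetical wrapper around the answer's logic, using async/await.
async function shrinkVideo(inputPath) {
    const dimensions = await getDimensions(inputPath);
    // constrain the larger side to 500 and let ffmpeg derive the other
    const sizeStr = dimensions.width < dimensions.height ? "?x500" : "500x?";
    await new Promise((resolve, reject) => {
        ffmpeg(inputPath)
            .output('./media/media.mp4')
            .size(sizeStr)
            .on('end', resolve)
            .on('error', reject)
            .run();
    });
    console.log('Finished processing');
}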

How to delay function call until its callback will be finished in other places

I'm using child_process to write commands to the console, and then subscribing to the 'data' event to get output from it. The problem is that sometimes the outputs are merged with each other.
let command = spawn('vlc', { shell: true });

writeCommand(cmd, callback) {
    process.stdin.write(`${cmd}\n`);
    this.isBusy = true;
    this.process.stdout.on('data', (d) => {
        callback(d);
    });
}
Function writeCommand is used in several places; how can I delay it from executing until the output from the previous command is finished?
My output can look like this (for the status command, for example):
( audio volume: 230 ) ( state stopped ) >
data events on a stream have zero guarantees that a whole "unit" of output will come together in a single data event. It could easily be broken up into multiple data events. So, this combined with the fact that you are providing multiple inputs which generate multiple outputs means that you need a way to parse both when you have a complete set of output and thus should call the callback with it and also how to delineate the boundaries between sets of output.
You don't show us much of what your output looks like, so we can't offer any concrete suggestions on how to parse it in that way, but common delimiters are double line feeds or things like that. It entirely depends upon what your output naturally does at the end, or, if you control the content the child process creates, what you can insert at the end of the output.
Another work-around for the merged output would be to not send the 2nd command until the 1st one is done (perhaps by using some sort of pending queue). But, you will still need a way to parse the output to know when you actually have the completion of the previous output.
Another problem:
In the code you show, every time you call writeCommand(), you will add yet another listener for the data event. So, when you call it twice to send different commands, you will now have two listeners both listening for the same data and you will be processing the same response twice instead of just once.
let command = spawn('vlc', { shell: true });

writeCommand(cmd, callback) {
    process.stdin.write(`${cmd}\n`);
    this.isBusy = true;
    // every time writeCommand is called, it adds yet another listener
    this.process.stdout.on('data', (d) => {
        callback(d);
    });
}
If you really intend to call this multiple times and multiple commands could be "in flight" at the same time, then you really can't use this coding structure. You will probably need one permanent listener for the data event that is outside this function because you don't want to have more than one listener at the same time and since you've already found that the data from two commands can be merged, even if you separate them, you can't use this structure to capture the data appropriately for the second part of the merged output.
You can use a queuing mechanism to execute the next command after the first one is finished. You can also use a library like https://www.npmjs.com/package/p-limit to do it for you.
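A minimal sketch of such a pending queue, with one permanent data listener. It assumes, based on the sample output above, that a trailing > prompt marks the end of a command's output; that delimiter, and the CommandQueue name, are assumptions to adapt to your actual output:
class CommandQueue {
    constructor(proc) {
        this.proc = proc;
        this.queue = [];
        this.current = null;
        this.output = '';
        // one permanent listener; commands never see each other's data
        proc.stdout.on('data', (d) => {
            this.output += d.toString();
            // assumed delimiter: the '>' prompt from the sample output
            if (this.output.trimEnd().endsWith('>')) {
                const { callback } = this.current;
                const out = this.output;
                this.output = '';
                this.current = null;
                callback(out);
                this.next(); // start the next queued command, if any
            }
        });
    }
    writeCommand(cmd, callback) {
        this.queue.push({ cmd, callback });
        this.next();
    }
    next() {
        if (this.current || this.queue.length === 0) return;
        this.current = this.queue.shift();
        this.proc.stdin.write(`${this.current.cmd}\n`);
    }
}

// usage: const q = new CommandQueue(command);
// q.writeCommand('status', (out) => console.log(out));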

Garbage collection can't keep up with Buffer creation and removal

I have a method that runs every 2 seconds to capture a video stream to canvas and write it to file:
function capture(streamName, callback) {
    var buffer,
        dataURL,
        dataSplit,
        _ctx;
    _ctx = _canvas[streamName].getContext('2d');
    _ctx.drawImage(_video[streamName], 0, 0);
    dataURL = _canvas[streamName].toDataURL('image/png');
    dataSplit = dataURL.split(",")[1];
    buffer = new Buffer(dataSplit, 'base64');
    fs.writeFileSync(directory + streamName + '.png', buffer);
}

setInterval(function() {
    // Called from here
    captureState.capture(activeScreens[currentScreenIndex]);
    gameState.pollForState(processId, activeScreens[currentScreenIndex], function() {
        // do things...
    });
}, 2000);
Assuming _video[streamName] exists as a running <video> and _canvas[streamName] exists as a <canvas>. The method works, it just causes a memory leak.
The issue:
Garbage collection can't keep up with the amount of memory the method uses, and a memory leak ensues.
I have narrowed it down to this line:
buffer = new Buffer(dataSplit, 'base64');
If I comment that out, there is some accumulation of memory (~100MB) but it drops back down every 30s or so.
What I've tried:
Some posts suggested buffer = null; to remove the reference and mark for garbage collection, but that hasn't changed anything.
Any suggestions?
Timeline:
https://i.imgur.com/wH7yFjI.png
https://i.imgur.com/ozFwuxY.png
Allocation Profile:
https://www.dropbox.com/s/zfezp46um6kin7g/Heap-20160929T140250.heaptimeline?dl=0
Just to quantify. After about 30 minutes of run time it sits at 2 GB memory used. This is an Electron (chromium / desktop) app.
SOLVED
Pre-allocating the buffer is what fixed it. This means that in addition to scoping buffer outside of the function, you need to reuse the created buffer with buffer.write. In order to keep proper headers, make sure that you use the encoding parameter of buffer.write.
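A minimal sketch of that fix, as I understand it; the 8 MB buffer size is an assumption, and should be chosen larger than any frame you expect to capture:
// Allocated once, outside capture(), and reused on every call.
// (Buffer.alloc replaces the deprecated new Buffer(size).)
var buffer = Buffer.alloc(8 * 1024 * 1024); // assumed size, adjust to your frames

function capture(streamName) {
    var _ctx = _canvas[streamName].getContext('2d');
    _ctx.drawImage(_video[streamName], 0, 0);
    var dataSplit = _canvas[streamName].toDataURL('image/png').split(',')[1];
    // write() decodes the base64 string into the existing buffer and
    // returns the number of bytes written; saving only that slice keeps
    // the PNG headers and contents intact.
    var bytesWritten = buffer.write(dataSplit, 'base64');
    fs.writeFileSync(directory + streamName + '.png', buffer.slice(0, bytesWritten));
}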
Matt, I am not sure what was not working with the pre-allocated buffers, so I've posted an algorithm of how such pre-allocated buffers could be used. The key thing here is that the buffers are allocated only once; for that reason, there should not be any memory leak.
var buffers = [];
var bsize = 10000;
// allocate buffer pool
for (var i = 0; i < 10; i++) {
    buffers.push({ free: true, buf: new Buffer(bsize) });
}

// sample method that picks one of the buffers into use
function useOneBuffer(data) {
    // find a free buffer
    var theBuf;
    var i = 0;
    while ((typeof theBuf === 'undefined') && i < 10) {
        if (buffers[i].free) {
            theBuf = buffers[i];
        }
        i++;
    }
    if (typeof theBuf === 'undefined') {
        // return or throw... no free buffers left for now
        return;
    }
    theBuf.free = false;
    // start doing whatever you need with the buffer, write data in the needed format to it first
    // BUT do not allocate
    // also, you may want to clear-write the existing data in the buffer, just in case, before reuse or after the use.
    theBuf.buf.write(data);
    // .... continue using
    // don't forget to pass the reference to the buffers member along, because
    // when you are done, you have to mark it as free so that it can be used again
    // theBuf.free = true;
}
Did you try something like this? Where did it fail?
There is no leak of the buffer object in your code.
Any Buffer objects that you no longer retain a reference to in your code will be immediately available for garbage collection.
The problem is caused by the callback and how you use it outside the capture function.
Notice that the GC cannot clean up the buffer, or any other variable, as long as the callback is running.
I have narrowed it down to this line:
buffer = new Buffer(dataSplit, 'base64');
A short solution is not to use Buffer at all: it is not necessary for writing the file to the filesystem, since the file's contents already exist in the base64 portion of the data URI. Also, setInterval does not appear to be cleared; you can keep a reference to the setInterval, then call clearInterval() at the <video> element's ended event.
You can perform the function without declaring any variables. Strip the "data:", MIME type, and ";base64," prefix from the data URI returned by HTMLCanvasElement.prototype.toDataURL(), as described at NodeJS: Saving a base64-encoded image to disk, this Answer at NodeJS write base64 image-file
function capture(streamName, callback) {
    _canvas[streamName].getContext("2d")
        .drawImage(_video[streamName], 0, 0);
    fs.writeFileSync(directory + streamName + ".png"
        , _canvas[streamName].toDataURL("image/png").split(",")[1], "base64");
}

var interval = setInterval(function() {
    // Called from here
    captureState.capture(activeScreens[currentScreenIndex]);
    gameState.pollForState(processId, activeScreens[currentScreenIndex]
        , function() {
            // do things...
        });
}, 2000);

video[/* streamName */].addEventListener("ended", function(e) {
    clearInterval(interval);
});
I was having a similar issue recently with a software app that uses ~500MB of data in ArrayBuffer form. I thought I had a memory leak, but it turns out Chrome was trying to do optimizations on a set of large-ish ArrayBuffers and corresponding operations (each buffer ~60MB in size, plus some slightly larger objects). The CPU usage appeared to never allow the GC to run, or at least that's how it appeared; I have not read any specific spec for when the GC gets scheduled, so I can't prove or disprove that. I had to do two things to resolve my issues:
I had to break the reference to the data in my arrayBuffers and some other large objects.
I had to force Chrome to have downtime, which appeared to give it time to schedule and then run the GC.
After applying those two steps, things ran for me and were garbage collected. Unfortunately, when applying those two things independently from each other, my app kept on crashing (exploding into GB of memory used before doing so). The following would be my thoughts on what I'd try on your code.
The problem with the garbage collector is that you cannot force it to run. So you can have objects that are ready to be freed, but for whatever reason the browser doesn't give the garbage collector the opportunity. Another approach to buffer = null would be to break the reference explicitly with the delete operator -- this is what I did, but in theory ... = null is equivalent. It's important to note that delete cannot be run on any variable declared with var. So something like the following would be my suggestion:
function capture(streamName, callback) {
    this._ctx = _canvas[streamName].getContext('2d');
    this._ctx.drawImage(_video[streamName], 0, 0);
    this.dataURL = _canvas[streamName].toDataURL('image/png');
    this.dataSplit = this.dataURL.split(",")[1];
    this.buffer = new Buffer(this.dataSplit, 'base64');
    fs.writeFileSync(directory + streamName + '.png', this.buffer);
    delete this._ctx;      // because the context with the image used still exists
    delete this.dataURL;   // because the data used in dataSplit exists here
    delete this.dataSplit; // because the data used in buffer exists here
    delete this.buffer;
    // again, ... = null likely would work as well; I used delete
}
Second, the small break. So it appears you've got some intensive processes going on and the system cannot keep up. It's not actually hitting the 2s save mark, because it needs more than 2 seconds per save. There is always a function on the queue for executing the captureState.capture(...) method and it never has time to garbage collect. Some helpful posts on the scheduler and differences between setInterval and setTimeout:
http://javascript.info/tutorial/settimeout-setinterval
http://ejohn.org/blog/how-javascript-timers-work/
If that is for sure the case, why not use setTimeout and simply check that roughly 2 seconds (or more) have passed before executing? In doing that check, you always force your code to wait a set period of time between saves, giving the browser time to schedule/run GC -- something like what follows (100 ms setTimeout around the pollForState):
var MINIMUM_DELAY_BETWEEN_SAVES = 100;
var POLLING_DELAY = 100;
// get the time in ms
var ts = Date.now();

function intervalCheck() {
    // check if 2000 ms have passed
    if (Date.now() - ts > 2000) {
        // reset the timestamp of the last time save was run
        ts = Date.now();
        // Called from here
        captureState.capture(activeScreens[currentScreenIndex]);
        // upon callback, force the system to take a break.
        setTimeout(function() {
            gameState.pollForState(processId, activeScreens[currentScreenIndex], function() {
                // do things...
                // and then schedule the intervalCheck again, but give it some time
                // to potentially garbage collect.
                setTimeout(intervalCheck, MINIMUM_DELAY_BETWEEN_SAVES);
            });
        }, MINIMUM_DELAY_BETWEEN_SAVES);
    } else {
        // reschedule the check in 1/10th of a second,
        // or after whatever may be executing next.
        setTimeout(intervalCheck, POLLING_DELAY);
    }
}

intervalCheck(); // start polling
This means that a capture will happen no more than once every 2 seconds, but will also in some sense trick the browser into having the time to GC and remove any data that was left.
Last thoughts: entertaining a more traditional definition of memory leak, the candidates based on what I see in your code would be activeScreens, _canvas, or _video, which appear to be objects of some sort. It might be worthwhile to explore those if the above doesn't resolve your issue (I wouldn't be able to make any assessment based on what is currently shared).
Hope that helps!
In general, I would recommend using a local map keyed by UUID (or something similar) that will allow you to control your memory when dealing with getImageData and other buffers.
The UUID can be a pre-defined identifier, e.g. "current-image" and "prev-image" if comparing between slides.
E.g
existingBuffers: Record<string, Uint8ClampedArray> = {}
existingBuffers[ptrUid] = ImageData.data (OR something equivalent)
then if you want to override ("current-image") you can (overkill here):
existingBuffers[ptrUid] = new Uint8ClampedArray();
delete existingBuffers[ptrUid]
In addition, you will always be able to check your buffers and make sure they are not going out of control.
Maybe it is a bit old-school, but I found it comfortable.

Node.js fs.writeFile() empties the file

I have an update method which gets called about every 16-40ms, and inside I have this code:
this.fs.writeFile("./data.json", JSON.stringify({
    totalPlayersOnline: this.totalPlayersOnline,
    previousDay: this.previousDay,
    gamesToday: this.gamesToday
}), function (err) {
    if (err) {
        return console.log(err);
    }
});
If the server throws an error, the "data.json" file sometimes becomes empty. How do I prevent that?
Problem
fs.writeFile is not an atomic operation. Here is an example program which I will run strace on:
#!/usr/bin/env node
const { writeFile, } = require('fs');
// nodejs won’t exit until the writeFile callback has been called.
new Promise(function (resolve, reject) {
    writeFile('file.txt', 'content\n', function (err) {
        if (err) {
            reject(err);
        } else {
            resolve();
        }
    });
});
When I ran that under strace -f and tidied up the output to show just the syscalls from the writeFile operation (which spans multiple IO threads, actually), I got:
open("file.txt", O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0666) = 9
pwrite(9, "content\n", 8, 0) = 8
close(9) = 0
As you can see, writeFile completes in three steps.
The file is open()ed. This is an atomic operation that, with the provided flags, either creates an empty file on disk or, if the file exists, truncates it. Truncating is an easy way to make sure that only the content you write ends up in the file: if there were existing data and the file were longer than the data you subsequently write, the extra data would stay. To avoid this, you truncate.
The content is written. Because I wrote such a short string, this is done with a single pwrite() call, but for larger amounts of data I assume it is possible nodejs would only write a chunk at a time.
The handle is closed.
My strace had each of these steps occurring on a different node IO thread. This suggests to me that fs.writeFile() might actually be implemented in terms of fs.open(), fs.write(), and fs.close(). Thus, nodejs does not treat this complex operation like it is atomic at any level—because it isn’t. Therefore, if your node process terminates, even gracefully, without waiting for the operation to complete, the operation could be at any of the steps above. In your case, you are seeing your process exit after writeFile() finishes step 1 but before it completes step 2.
Solution
The common pattern for transactionally replacing a file’s contents with a POSIX layer is to use these steps:
Write the data to a differently named file, fsync() the file (See “When should you fsync?” in “Ensuring data reaches disk”), and then close() it.
rename() (or, on Windows, MoveFileEx() with MOVEFILE_REPLACE_EXISTING) the differently-named file over the one you want to replace.
Using this algorithm, the destination file is either updated or not regardless of when your program terminates. And, even better, journalled (modern) filesystems will ensure that, as long as you fsync() the file in step 1 before proceeding to step 2, the two operations will occur in order. I.e., if your program performs step 1 and then step 2 but you pull the plug, when you boot up you will find the filesystem in one of the following states:
Neither of the two steps completed. The original file is intact (or, if it never existed before, it doesn't exist). The replacement file is either nonexistent (step 1 of the writeFile() algorithm, open(), effectively never succeeded), existent but empty (step 1 of the writeFile() algorithm completed), or existent with some data (step 2 of the writeFile() algorithm partially completed).
The first step completed. The original file is intact (or if it didn’t exist before it still doesn’t exist). The replacement file exists with all of the data you want.
Both steps completed. At the path of the original file, you can now access your replacement data—all of it, not a blank file. The path you wrote the replacement data to in the first step no longer exists.
The code to use this pattern might look like the following:
const { writeFile, rename, } = require('fs');

function writeFileTransactional (path, content, cb) {
    // The replacement file must be in the same directory as the
    // destination because rename() does not work across device
    // boundaries.

    // This simple choice of replacement filename means that this
    // function must never be called concurrently with itself for the
    // same path value. Also, properly guarding against other
    // processes trying to use the same temporary path would make this
    // function more complicated. If that is a concern, a proper
    // temporary file strategy should be used. However, this
    // implementation ensures that any files left behind during an
    // unclean termination will be cleaned up on a future run.
    let temporaryPath = `${path}.new`;
    writeFile(temporaryPath, content, function (err) {
        if (err) {
            return cb(err);
        }
        rename(temporaryPath, path, cb);
    });
};
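The sketch above omits the fsync() step that the algorithm calls for. Here is one way it might be threaded in with the callback API; the writeFileTransactionalDurable name is mine, and close() error handling is simplified for brevity:
const { open, write, fsync, close, rename } = require('fs');

function writeFileTransactionalDurable (path, content, cb) {
    const temporaryPath = `${path}.new`;
    open(temporaryPath, 'w', function (err, fd) {
        if (err) return cb(err);
        write(fd, content, function (err) {
            if (err) return close(fd, () => cb(err));
            // Flush to disk before the rename so that journalled
            // filesystems keep the two operations in order.
            fsync(fd, function (err) {
                if (err) return close(fd, () => cb(err));
                close(fd, function (err) {
                    if (err) return cb(err);
                    rename(temporaryPath, path, cb);
                });
            });
        });
    });
}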
This is basically the same solution you’d use for the same problem in any language/framework.
If the error is caused by bad input (the data you want to write), then make sure the data is as it should be and then do the writeFile.
If the error is caused by a failure of writeFile even though the input is OK, you could check that the function is executed until the file is written. One way is to use the async doWhilst function.
async.doWhilst(
    function (next) {
        // your writeFile call goes here; instead of failing on err,
        // call next() without an error so the loop can try again
        writeFile(next);
    },
    function check_if_file_null() {
        // a test that returns true while the file is still null/empty,
        // which makes doWhilst run the write again
        return fileIsNull();
    },
    function (err) {
        // here the file is not null
    }
);
I didn't run real tests on this; I just noticed, when manually reloading my IDE, that sometimes the file was empty.
What I tried first was the rename method, and I noticed the same problem, but recreating a new file was less desirable (considering file watches, etc.).
My suggestion, and what I'm doing now, is: in your own readFileSync, check whether the file is missing or the data returned is empty, and sleep for 100 milliseconds before giving it another try. I suppose a third try with more delay would really push the sigma up a notch, but I'm currently not going to do it, as the added delay is hopefully an unnecessary negative (I would consider a promise at that point). There are other recovery options you can add, relative to your own code, just in case; "file not found or empty?" is basically a retry by another route.
My custom writeFileSync has an added flag to toggle between using the rename method (with '._new' write sub-file creation) or the normal direct method, as your code's needs may vary. Choosing based on file size is my recommendation.
In this use case the files are small and only updated by one node instance / server at a time. I can see adding a random file name with rename as another option later, to allow multiple machines to write, if needed. Maybe a retry-limit argument as well?
I was also thinking that you could write to a local temp file and then copy to the share target by some means (maybe also rename on the target for a speed increase), and then clean up (unlink the local temp), of course. I guess that idea kind of pushes it toward shell commands, so not better.
Anyway, the main idea here is to read twice if the file is found empty. I'm sure it's safe from being partially written, via nodejs 8+ onto a shared Ubuntu-type NFS mount, right?
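A sketch of that read-twice idea; the helper name, retry count, and Atomics-based sleep are my choices (Node allows Atomics.wait on the main thread, unlike browsers):
const fs = require('fs');

function readFileSyncWithRetry(path, retries = 2, delayMs = 100) {
    for (let attempt = 0; ; attempt++) {
        try {
            const data = fs.readFileSync(path, 'utf8');
            if (data.length > 0) return data; // real content, done
        } catch (err) {
            if (err.code !== 'ENOENT') throw err; // only retry "not found"
        }
        if (attempt >= retries) {
            throw new Error('file missing or empty after retries: ' + path);
        }
        // synchronous sleep, preserving the *Sync calling convention
        Atomics.wait(new Int32Array(new SharedArrayBuffer(4)), 0, 0, delayMs);
    }
}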

How to run an async function for each line of a very large (> 1GB) file in Node.js

Say you have a huge (> 1GB) CSV of record ids:
655453
4930285
493029
4930301
493031
...
And for each id you want to make a REST API call to fetch the record data, transform it locally, and insert it into a local database.
How do you do that with Node.js' Readable Stream?
My question is basically this: How do you read a very large file, line-by-line, run an async function for each line, and [optionally] be able to start reading the file from a specific line?
From the following Quora question I'm starting to learn to use fs.createReadStream:
http://www.quora.com/What-is-the-best-way-to-read-a-file-line-by-line-in-node-js
var fs = require('fs');
var lazy = require('lazy');

var stream = fs.createReadStream(path, {
    flags: 'r',
    encoding: 'utf-8'
});

new lazy(stream).lines.forEach(function(line) {
    var id = line.toString();
    // pause stream
    stream.pause();
    // make async API call...
    makeAPICall(id, function() {
        // then resume to process next id
        stream.resume();
    });
});
But, that pseudocode doesn't work, because the lazy module forces you to read the whole file (as a stream, but there's no pausing). So that approach doesn't seem like it will work.
Another thing is, I would like to be able to start processing this file from a specific line. The reason for this is, processing each id (making the api call, cleaning the data, etc.) can take up to half a second per record, so I don't want to have to start from the beginning of the file each time. The naive approach I'm thinking about using is to just capture the line number of the last id processed, and save that. Then when you parse the file again, you stream through all the ids, line by line, until you find the line number you left off at, and then you do the makeAPICall business. Another naive approach is to write small files (say of 100 ids) and process each file one at a time (small enough dataset to do everything in memory without an IO stream). Is there a better way to do this?
I can see how this gets tricky (and where node-lazy comes in) because the chunk in stream.on('data', function(chunk) {}); may contain only part of a line (if the bufferSize is small, each chunk may be 10 lines but because the id is variable length, it may only be 9.5 lines or whatever). This is why I'm wondering what the best approach is to the above question.
Related to Andrew Андрей Листочкин's answer:
You can use a module like byline to get a separate data event for each line. It's a transform stream around the original filestream, which produces a data event for each chunk. This lets you pause after each line.
byline won't read the entire file into memory like lazy apparently does.
var fs = require('fs');
var byline = require('byline');

var stream = fs.createReadStream('bigFile.txt');
stream.setEncoding('utf8');

// Comment out this line to see what the transform stream changes.
stream = byline.createStream(stream);

// Write each line to the console with a delay.
stream.on('data', function(line) {
    // Pause until we're done processing this line.
    stream.pause();
    setTimeout(() => {
        console.log(line);
        // Resume processing.
        stream.resume();
    }, 200);
});
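To resume from a specific line, the naive bookkeeping described in the question layers naturally on top of this: count lines and skip until you reach the saved position. A sketch, where the lastProcessedLine value and its persistence are hypothetical:
var fs = require('fs');
var byline = require('byline');

var lastProcessedLine = 130000; // hypothetical: loaded from wherever you saved it
var currentLine = 0;

var stream = byline.createStream(fs.createReadStream('bigFile.txt', { encoding: 'utf8' }));

stream.on('data', function (line) {
    currentLine++;
    if (currentLine <= lastProcessedLine) {
        return; // already handled on a previous run; skip cheaply
    }
    stream.pause();
    var thisLine = currentLine;
    makeAPICall(line, function () {
        lastProcessedLine = thisLine; // persist this somewhere durable
        stream.resume();
    });
});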
I guess you don't need to use node-lazy. Here's what I found in Node docs:
Event: data
function (data) { }
The data event emits either a Buffer (by default) or a string if
setEncoding() was used.
So that means that if you call setEncoding() on your stream, then your data event callback will accept a string parameter. Inside this callback you can use the .pause() and .resume() methods.
The pseudo code should look like this:
stream.setEncoding('utf8');

stream.addListener('data', function (line) {
    // pause stream
    stream.pause();
    // make async API call...
    makeAPICall(line, function() {
        // then resume to process next line
        stream.resume();
    });
})
Although the docs don't explicitly specify that stream is read line by line I assume that that's the case for file streams. At least in other languages and platforms text streams work that way and I see no reason for Node streams to differ.
