I have a file which stores many JavaScript objects in JSON form and I need to read the file, create each of the objects, and do something with them (insert them into a db in my case). The JavaScript objects can be represented in one of two formats:
Format A:
[{name: 'thing1'},
....
{name: 'thing999999999'}]
or Format B:
{name: 'thing1'} // <== My choice.
...
{name: 'thing999999999'}
Note that the ... indicates a lot of JSON objects. I am aware I could read the entire file into memory and then use JSON.parse() like this:
fs.readFile(filePath, 'utf-8', function (err, fileContents) {
if (err) throw err;
console.log(JSON.parse(fileContents));
});
However, the file could be really large, so I would prefer to use a stream to accomplish this. The problem I see with a stream is that the file contents could be broken into data chunks at any point, so how can I use JSON.parse() on such objects?
Ideally, each object would be read as a separate data chunk, but I am not sure on how to do that.
var importStream = fs.createReadStream(filePath, {flags: 'r', encoding: 'utf-8'});
importStream.on('data', function(chunk) {
var pleaseBeAJSObject = JSON.parse(chunk);
// insert pleaseBeAJSObject in a database
});
importStream.on('end', function(item) {
console.log("Woot, imported objects into the database!");
});
Note, I wish to prevent reading the entire file into memory. Time efficiency does not matter to me. Yes, I could try to read a number of objects at once and insert them all at once, but that's a performance tweak - I need a way that is guaranteed not to cause a memory overload, no matter how many objects are contained in the file.
I can choose to use Format A or Format B or maybe something else, just please specify in your answer. Thanks!
To process a file line-by-line, you simply need to decouple the reading of the file and the code that acts upon that input. You can accomplish this by buffering your input until you hit a newline. Assuming we have one JSON object per line (basically, format B):
var stream = fs.createReadStream(filePath, {flags: 'r', encoding: 'utf-8'});
var buf = '';
stream.on('data', function(d) {
buf += d.toString(); // when data is read, stash it in a string buffer
pump(); // then process the buffer
});
function pump() {
var pos;
while ((pos = buf.indexOf('\n')) >= 0) { // keep going while there's a newline somewhere in the buffer
if (pos == 0) { // if there's more than one newline in a row, the buffer will now start with a newline
buf = buf.slice(1); // discard it
continue; // so that the next iteration will start with data
}
processLine(buf.slice(0,pos)); // hand off the line
buf = buf.slice(pos+1); // and slice the processed data off the buffer
}
}
function processLine(line) { // here's where we do something with a line
if (line[line.length-1] == '\r') line=line.substr(0,line.length-1); // discard CR (0x0D)
if (line.length > 0) { // ignore empty lines
var obj = JSON.parse(line); // parse the JSON
console.log(obj); // do something with the data here!
}
}
Each time the file stream receives data from the file system, it's stashed in a buffer, and then pump is called.
If there's no newline in the buffer, pump simply returns without doing anything. More data (and potentially a newline) will be added to the buffer the next time the stream gets data, and then we'll have a complete object.
If there is a newline, pump slices off the buffer from the beginning to the newline and hands it off to processLine. It then checks again whether there's another newline in the buffer (the while loop). In this way, we can process all of the lines that were read in the current chunk.
Finally, processLine is called once per input line. If present, it strips off the carriage return character (to avoid issues with line endings – LF vs CRLF), and then calls JSON.parse on the line. At this point, you can do whatever you need to with your object.
Note that JSON.parse is strict about what it accepts as input; you must quote your identifiers and string values with double quotes. In other words, {name:'thing1'} will throw an error; you must use {"name":"thing1"}.
Because no more than a chunk of data will ever be in memory at a time, this will be extremely memory efficient. It will also be extremely fast. A quick test showed I processed 10,000 rows in under 15ms.
Just as I was thinking that it would be fun to write a streaming JSON parser, I also thought that maybe I should do a quick search to see if there's one already available.
Turns out there is.
JSONStream "streaming JSON.parse and stringify"
Since I just found it, I've obviously not used it, so I can't comment on its quality, but I'll be interested to hear if it works.
It does work. Consider the following JavaScript, which uses lodash's _.isString to check what the parser emits (here stream is a readable stream over the JSON file):
var JSONStream = require('JSONStream');
var _ = require('lodash');

stream.pipe(JSONStream.parse('*'))
  .on('data', (d) => {
    console.log(typeof d);
    console.log("isString: " + _.isString(d));
  });
This will log objects as they come in if the stream is an array of objects. Therefore the only thing being buffered is one object at a time.
As of October 2014, you can just do something like the following (using JSONStream) - https://www.npmjs.org/package/JSONStream
var fs = require('fs'),
    JSONStream = require('JSONStream');

var getStream = function () {
    var jsonData = 'myData.json',
        stream = fs.createReadStream(jsonData, { encoding: 'utf8' }),
        parser = JSONStream.parse('*');
    return stream.pipe(parser);
};
getStream().pipe(MyTransformToDoWhateverProcessingAsNeeded).on('error', function (err) {
// handle any errors
});
To demonstrate with a working example:
npm install JSONStream event-stream
data.json:
{
"greeting": "hello world"
}
hello.js:
var fs = require('fs'),
JSONStream = require('JSONStream'),
es = require('event-stream');
var getStream = function () {
var jsonData = 'data.json',
stream = fs.createReadStream(jsonData, { encoding: 'utf8' }),
parser = JSONStream.parse('*');
return stream.pipe(parser);
};
getStream()
.pipe(es.mapSync(function (data) {
console.log(data);
}));
$ node hello.js
// hello world
I had a similar requirement: I needed to read a large JSON file in Node.js, process the data in chunks, call an API and save the results in MongoDB.
inputFile.json is like:
{
"customers":[
{ /*customer data*/},
{ /*customer data*/},
{ /*customer data*/}....
]
}
Now I used JSONStream and event-stream to achieve this synchronously.
var fs = require("fs");
var JSONStream = require("JSONStream");
var es = require("event-stream");

var fileStream = fs.createReadStream(filePath, { encoding: "utf8" });
fileStream.pipe(JSONStream.parse("customers.*")).pipe(
  es.through(
    function write(data) {
      console.log("printing one customer object read from file ::");
      console.log(data);
      this.pause();
      processOneCustomer(data, this);
      return data;
    },
    function end() {
      console.log("stream reading ended");
      this.emit("end");
    }
  )
);

function processOneCustomer(data, stream) {
  // DataModel is the application's own database model (e.g. a Mongoose model)
  DataModel.save(function(err, dataModel) {
    stream.resume(); // resume the file stream once this customer has been saved
  });
}
I realize that you want to avoid reading the whole JSON file into memory if possible; however, if you have the memory available, it may not be a bad idea performance-wise. Using node.js's require() on a JSON file loads the data into memory really fast.
I ran two tests to see what the performance looked like on printing out an attribute from each feature from a 81MB geojson file.
In the 1st test, I read the entire geojson file into memory using var data = require('./geo.json'). That took 3330 milliseconds and then printing out an attribute from each feature took 804 milliseconds for a grand total of 4134 milliseconds. However, it appeared that node.js was using 411MB of memory.
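For reference, the first test amounted to something like the sketch below (the features array and properties field reflect the usual GeoJSON layout, and 'name' is just a hypothetical attribute, not my exact script):

var data = require('./geo.json'); // parses the whole file into memory in one go

// print one attribute from each feature
data.features.forEach(function (feature) {
    console.log(feature.properties.name); // 'name' stands in for whatever attribute you need
});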
In the second test, I used @arcseldon's answer with JSONStream + event-stream. I modified the JSONPath query to select only what I needed. This time the memory never went higher than 82MB; however, the whole thing now took 70 seconds to complete!
I wrote a module that can do this, called BFJ. Specifically, the method bfj.match can be used to break up a large stream into discrete chunks of JSON:
const bfj = require('bfj');
const fs = require('fs');
const stream = fs.createReadStream(filePath);
bfj.match(stream, (key, value, depth) => depth === 0, { ndjson: true })
.on('data', object => {
// do whatever you need to do with object
})
.on('dataError', error => {
// a syntax error was found in the JSON
})
.on('error', error => {
// some kind of operational error occurred
})
.on('end', error => {
// finished processing the stream
});
Here, bfj.match returns a readable, object-mode stream that will receive the parsed data items. It takes 3 arguments:
A readable stream containing the input JSON.
A predicate that indicates which items from the parsed JSON will be pushed to the result stream.
An options object indicating that the input is newline-delimited JSON (this is to process format B from the question, it's not required for format A).
Upon being called, bfj.match will parse JSON from the input stream depth-first, calling the predicate with each value to determine whether or not to push that item to the result stream. The predicate is passed three arguments:
The property key or array index (this will be undefined for top-level items).
The value itself.
The depth of the item in the JSON structure (zero for top-level items).
Of course a more complex predicate can also be used as necessary according to requirements. You can also pass a string or a regular expression instead of a predicate function, if you want to perform simple matches against property keys.
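For instance, if the items of interest all sat under a hypothetical "customers" property, the key-matching form could look roughly like this (a sketch; the regular expression is matched against property keys, so it pushes the value stored under any key named "customers"):

const bfj = require('bfj');
const fs = require('fs');

const stream = fs.createReadStream(filePath);

// push the value of every property whose key matches /customers/
bfj.match(stream, /customers/)
  .on('data', value => {
    // value is the entire matched item
  })
  .on('end', () => {
    // finished processing the stream
  });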
If you have control over the input file, and it's an array of objects, you can solve this more easily. Arrange to output the file with each record on one line, like this:
[
{"key": value},
{"key": value},
...
This is still valid JSON.
Then, use the node.js readline module to process them one line at a time.
var fs = require("fs");
var lineReader = require('readline').createInterface({
input: fs.createReadStream("input.txt")
});
lineReader.on('line', function (line) {
line = line.trim();
if (line.charAt(line.length-1) === ',') {
line = line.substr(0, line.length-1);
}
if (line.charAt(0) === '{') {
processRecord(JSON.parse(line));
}
});
function processRecord(record) {
// Process the records one at a time here!
}
I solved this problem using the split npm module. Pipe your stream into split, and it will "Break up a stream and reassemble it so that each line is a chunk".
Sample code:
var fs = require('fs')
, split = require('split')
;
var stream = fs.createReadStream(filePath, {flags: 'r', encoding: 'utf-8'});
var lineStream = stream.pipe(split());
lineStream.on('data', function(chunk) {
var json = JSON.parse(chunk);
// ...
});
Using @josh3736's answer, but for ES2021 and Node.js 16+ with async/await + Airbnb rules:
import fs from 'node:fs';
const file = 'file.json';
/**
 * @callback itemProcessorCb
 * @param {object} item The current item
 */
/**
 * Process each data chunk in a stream.
 *
 * @param {import('fs').ReadStream} readable The readable stream
 * @param {itemProcessorCb} itemProcessor A function to process each item
 */
async function processChunk(readable, itemProcessor) {
let data = '';
let total = 0;
// eslint-disable-next-line no-restricted-syntax
for await (const chunk of readable) {
// join with last result, remove CR and get lines
const lines = (data + chunk).replace(/\r/g, '').split('\n');
// clear last result
data = '';
// process lines
let line = lines.shift();
const items = [];
while (line) {
// check if isn't a empty line or an array definition
if (line !== '' && !/[\[\]]+/.test(line)) {
try {
// remove the last comma and parse json
const json = JSON.parse(line.replace(/\s?(,)+\s?$/, ''));
items.push(json);
} catch (error) {
// last line gets only a partial line from chunk
// so we add this to join at next loop
data += line;
}
}
// continue
line = lines.shift();
}
total += items.length;
// Process items in parallel
await Promise.all(items.map(itemProcessor));
}
console.log(`${total} items processed.`);
}
// Process each item
async function processItem(item) {
console.log(item);
}
// Init
try {
const readable = fs.createReadStream(file, {
flags: 'r',
encoding: 'utf-8',
});
await processChunk(readable, processItem);
} catch (error) {
console.error(error.message);
}
For a JSON like:
[
{ "name": "A", "active": true },
{ "name": "B", "active": false },
...
]
https.get(url1 , function(response) {
var data = "";
response.on('data', function(chunk) {
data += chunk.toString();
})
.on('end', function() {
console.log(data)
});
});
I think you need to use a database. MongoDB is a good choice in this case because it is JSON compatible.
UPDATE:
You can use the mongoimport tool to import JSON data into MongoDB.
mongoimport --collection collection --file collection.json
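If the file is one big JSON array (format A in the question) rather than one document per line, add the --jsonArray flag:
mongoimport --collection collection --jsonArray --file collection.json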
I need some help understanding how streams work in Node.js.
Let me explain: I need to write a module which calls a UNIX process (with spawn), and I want to redirect the stdout of that process to a Readable Stream.
I want to export that Readable Stream so that another module can read from it.
To do this, I have written a little piece of code:
var spawn = require('child_process').spawn
var Duplex = require('stream').Duplex;
var stream = new Duplex;
var start = function() {
ps = spawn('mycmd', [/*... args ...*/]);
ps.stdout.pipe(stream);
};
exports.stream = stream;
exports.start = start;
But if I use this module, an exception is thrown saying that the stream doesn't implement the _read method.
Can you help me with this problem?
Thanks in advance.
[EDIT] I have tried the solution of creating a Stream object, but that doesn't work. Here is the code:
var spawn = require('child_process').spawn;
var Stream = require('stream');
var ps = null;
var audio = new Stream;
audio.readable = audio.writable = true;
var start = function() {
if(ps == null) {
ps = spawn('mycmd', []);
ps.stdout.pipe(audio);
}
};
var stop = function() {
if(ps) {
ps.kill();
ps = null;
}
};
exports.stream = audio;
exports.start = start;
exports.stop = stop;
But when I try to listen to the stream, I encounter a new error:
_stream_readable.js:583
var written = dest.write(chunk);
^
TypeError: Object #<Stream> has no method 'write'
Most of Node's Stream classes aren't meant to be used directly, but as the base of a custom type:
Note that stream.Duplex is an abstract class designed to be extended with an underlying implementation of the _read(size) and _write(chunk, encoding, callback) methods as you would with a Readable or Writable stream class.
One notable exception is stream.PassThrough, which is a simple echo stream implementation.
var PassThrough = require('stream').PassThrough;
var stream = new PassThrough;
Also note that ps will be a global, making it directly accessible in all other modules.
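Putting that together, a minimal sketch of the question's module built around PassThrough might look like this (mycmd and its arguments are placeholders carried over from the question):

var spawn = require('child_process').spawn;
var PassThrough = require('stream').PassThrough;

var stream = new PassThrough();
var ps = null; // keep a local reference instead of leaking a global

var start = function () {
    ps = spawn('mycmd', [/* ... args ... */]);
    ps.stdout.pipe(stream); // PassThrough implements _read/_write, so piping works
};

exports.stream = stream;
exports.start = start;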
If you simply want to use a plain Stream, then you should do:
var stream = new Stream;
stream.readable = stream.writable = true;
Duplex is meant to be subclassed: methods like _read and _write need to be implemented before you can use it.
[Update]
OK, you have a data source: the process's stdout. You will need a write function; use this:
stream.write = function(data){this.emit('data', data);};
I'm looking for a way to display a JPEG image stored in a byte array. This is a Windows 8 Store App built in Javascript. The byte array is returned from a C# WinRT component. Its type in C# is byte[].
Basically, what I want is an object on which I can call:
URL.createObjectURL(bytearray, {oneTimeOnly: true});
Currently, this generates a runtime error because the array interface is not supported.
Thanks in advance.
I discovered a far easier method.
var blob = new Blob([bytes], {type: "image/jpg"});
var url = URL.createObjectURL(blob, { oneTimeOnly: true });
Blob is actually supported directly by URL.createObjectURL. The only catch is that you have to specify a mime format to identify the buffer format, which in my case is possible.
The current solution I found is quite roundabout. Hence, before giving the solution for bytes to URL, a few comments:
If there is a better way to get a DOM stream/blob object from bytes, try it out.
If you control the WinRT component, check whether you can return a StorageFile object. In that case the code simplifies to:
var file = MSApp.createFileFromStorageFile(storageFile);
var url = URL.createObjectURL(file, { oneTimeOnly: true });
Now, the solution:
var bytes;
var memoryStream;
var streams = Windows.Storage.Streams;
{
// get IBuffer out of bytes
var dataWriter = new streams.DataWriter();
dataWriter.writeBytes(bytes);
var buffer = dataWriter.detachBuffer();
dataWriter.close();
// get IInputStream out of IBuffer
memoryStream = new streams.InMemoryRandomAccessStream();
return memoryStream.writeAsync(buffer);
}).then(function onmemorystreamwritecomplete()
{
var inputStream = memoryStream.getInputStreamAt(0);
// get DOM MSStream from IInputStream
var domMStream = MSApp.createStreamFromInputStream('image/jpg', inputStream);
var url = URL.createObjectURL(domMStream, { oneTimeOnly: true });
memoryStream.close();
test3Img.setAttribute('src', url);
})
I am using a library, ya-csv, that expects either a file or a stream as input, but I have a string.
How do I convert that string into a stream in Node?
As @substack corrected me in #node, the new streams API in Node v0.10 makes this easier:
const Readable = require('stream').Readable;
const s = new Readable();
s._read = () => {}; // redundant? see update below
s.push('your text here');
s.push(null);
… after which you can freely pipe it or otherwise pass it to your intended consumer.
It's not as clean as the resumer one-liner, but it does avoid the extra dependency.
(Update: in v0.10.26 through v9.2.1 so far, a call to push directly from the REPL prompt will crash with a not implemented exception if you didn't set _read. It won't crash inside a function or a script. If inconsistency makes you nervous, include the noop.)
Do not use Jo Liss's resumer answer. It will work in most cases, but in my case it cost me a good 4 or 5 hours of bug hunting. There is no need for third party modules to do this.
NEW ANSWER:
var Readable = require('stream').Readable
var s = new Readable()
s.push('beep') // the string you want
s.push(null) // indicates end-of-file basically - the end of the stream
This should be a fully compliant Readable stream. See here for more info on how to use streams properly.
OLD ANSWER:
Just use the native PassThrough stream:
var stream = require("stream")
var a = new stream.PassThrough()
a.write("your string")
a.end()
a.pipe(process.stdout) // piping will work as normal
/*stream.on('data', function(x) {
// using the 'data' event works too
console.log('data '+x)
})*/
/*setTimeout(function() {
// you can even pipe after the scheduler has had time to do other things
a.pipe(process.stdout)
},100)*/
a.on('end', function() {
console.log('ended') // the end event will be called properly
})
Note that the 'close' event is not emitted (which is not required by the stream interfaces).
From Node 10.17, stream.Readable has a from method to easily create streams from any iterable (which includes array literals):
const { Readable } = require("stream")
const readable = Readable.from(["input string"])
readable.on("data", (chunk) => {
console.log(chunk) // will be called once with `"input string"`
})
Note that at least between 10.17 and 12.3, a string is itself an iterable, so Readable.from("input string") will work, but emit one event per character. Readable.from(["input string"]) will emit one event per item in the array (in this case, one item).
Also note that in later nodes (probably 12.3, since the documentation says the function was changed then), it is no longer necessary to wrap the string in an array.
https://nodejs.org/api/stream.html#stream_stream_readable_from_iterable_options
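To illustrate the per-character versus per-item behaviour described above (a quick sketch; exactly which Node versions split strings into characters is as noted):

const { Readable } = require("stream")

// On affected versions this emits one 'data' event per character: "a", "b", "c"
Readable.from("abc").on("data", (chunk) => console.log(chunk))

// Wrapping the string in an array emits it as a single chunk: "abc"
Readable.from(["abc"]).on("data", (chunk) => console.log(chunk))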
Just create a new instance of the core stream module's Stream class and customize it according to your needs:
var Stream = require('stream');
var stream = new Stream();
stream.pipe = function(dest) {
dest.write('your string');
return dest;
};
stream.pipe(process.stdout); // in this case the terminal, change to ya-csv
or
var Stream = require('stream');
var stream = new Stream();
stream.on('data', function(data) {
process.stdout.write(data); // change process.stdout to ya-csv
});
stream.emit('data', 'this is my string');
Edit: Garth's answer is probably better.
My old answer text is preserved below.
To convert a string to a stream, you can use a paused through stream:
through().pause().queue('your string').end()
Example:
var through = require('through')
// Create a paused stream and buffer some data into it:
var stream = through().pause().queue('your string').end()
// Pass stream around:
callback(null, stream)
// Now that a consumer has attached, remember to resume the stream:
stream.resume()
There's a module for that: https://www.npmjs.com/package/string-to-stream
var str = require('string-to-stream')
str('hi there').pipe(process.stdout) // => 'hi there'
Another solution is passing a read function to the constructor of Readable (see the documentation for stream Readable options):
const Readable = require('stream').Readable;

const s = new Readable({
  read(size) {
    this.push("your string here");
    this.push(null); // signal end-of-stream
  }
});
You can then use s.pipe, for example s.pipe(process.stdout).
In CoffeeScript:
{Readable} = require 'stream'

class StringStream extends Readable
  constructor: (@str) ->
    super()

  _read: (size) ->
    @push @str
    @push null
use it:
new StringStream('text here').pipe(stream1).pipe(stream2)
I got tired of having to re-learn this every six months, so I just published an npm module to abstract away the implementation details:
https://www.npmjs.com/package/streamify-string
This is the core of the module:
const Readable = require('stream').Readable;
const util = require('util');
function Streamify(str, options) {
if (! (this instanceof Streamify)) {
return new Streamify(str, options);
}
Readable.call(this, options);
this.str = str;
}
util.inherits(Streamify, Readable);
Streamify.prototype._read = function (size) {
var chunk = this.str.slice(0, size);
if (chunk) {
this.str = this.str.slice(size);
this.push(chunk);
}
else {
this.push(null);
}
};
module.exports = Streamify;
str is the string that must be passed to the constructor upon invocation, and will be output by the stream as data. options are the typical options that may be passed to a stream, per the documentation.
According to Travis CI, it should be compatible with most versions of node.
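Usage then looks roughly like this (piping to stdout purely as a demonstration; the module can be called without new, as the core shown above handles that):

const Streamify = require('streamify-string');

// wrap a string and pipe it like any other readable stream
Streamify('hi there').pipe(process.stdout);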
Here's a tidy solution in TypeScript:
import { Readable } from 'stream'
class ReadableString extends Readable {
private sent = false
constructor(
private str: string
) {
super();
}
_read() {
if (!this.sent) {
this.push(Buffer.from(this.str));
this.sent = true
}
else {
this.push(null)
}
}
}
const stringStream = new ReadableString('string to be streamed...')
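It can then be consumed like any other readable stream, for example:

stringStream.pipe(process.stdout); // prints 'string to be streamed...'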
In Node.js, you can create a readable stream in a few ways:
SOLUTION 1
You can do it with fs module. The function fs.createReadStream() allows you to open up a readable stream and all you have to do is pass the path of the file to start streaming in.
const fs = require('fs');
const readable_stream = fs.createReadStream('file_path');
SOLUTION 2
If you don't want to create a file, you can create an in-memory stream and do something with it (for example, upload it somewhere). You can do this with the stream module: import Readable from the stream module and create a readable stream. When creating the object, you can also implement the read() method, which is used to read the data out of the internal buffer. If no data is available to be read, null is returned. The optional size argument specifies a specific number of bytes to read; if it is not specified, all of the data contained in the internal buffer will be returned.
const Readable = require('stream').Readable;
const readable_stream = new Readable({
read(size) {
// ...
}
});
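For instance, a filled-in sketch where read() pushes a hard-coded string once and then signals the end of the stream (the string itself is just a placeholder):

const Readable = require('stream').Readable;

const readable_stream = new Readable({
  read(size) {
    // push data into the internal buffer, then signal end-of-stream
    this.push('some in-memory data');
    this.push(null);
  }
});

readable_stream.pipe(process.stdout);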
SOLUTION 3
When you are fetching something over the network, it can be fetched as a stream (for example, fetching a PDF document from some API):
const axios = require('axios');
const response = await axios({
  method: 'get',
  url: 'pdf_resource_url',
  responseType: 'stream'
});
const readable_stream = response.data;
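The resulting stream can then be consumed like any other readable, for example written to disk (document.pdf is just a placeholder filename):

const fs = require('fs');
readable_stream.pipe(fs.createWriteStream('document.pdf'));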
SOLUTION 4
Third-party packages can support creating streams as a feature. That is the case with the aws-sdk package that is commonly used for uploading files to S3:
const file = s3.getObject(params).createReadStream();
JavaScript is duck-typed, so if you just copy a readable stream's API, it'll work just fine. In fact, you can probably skip most of those methods or just leave them as stubs; all you'll need to implement is what the library uses. You can use Node's pre-built EventEmitter class to deal with events, too, so you don't have to implement addListener and such yourself.
Here's how you might implement it in CoffeeScript:
class StringStream extends require('events').EventEmitter
  constructor: (@string) -> super()

  readable: true
  writable: false

  setEncoding: -> throw 'not implemented'
  pause: -> # nothing to do
  resume: -> # nothing to do
  destroy: -> # nothing to do
  pipe: -> throw 'not implemented'

  send: ->
    @emit 'data', @string
    @emit 'end'
Then you could use it like so:
stream = new StringStream someString
doSomethingWith stream
stream.send()