Pass Bytes to NodeJS Addon - javascript

I want to create a stream from the wav file and pass it to my Node.js addon:
var readableStream = fs.createReadStream('random_file.wav');
readableStream.on('data', function(chunk) {
    var chunck_to_binary = chunk.toString('what??'); // binary??
    var obj1 = addon.store(chunck_to_binary);
    console.log('chunck');
    console.log(obj1.caract_count);
    //console.log(typeof data);
});
Then, when the whole file has been passed, I want to hand the bytes back to Node.js, just to be sure that the whole process is correct, and create a copy:
readableStream.on('end', function() {
    console.log("loaded");
    var data_copy = addon.return_bytes();
    fs.writeFile('copy.wav', data_copy, function (err) {
        if (err) return console.log(err);
        console.log('done!!');
    });
});
In the addon I implemented something like this:
void store_values(const FunctionCallbackInfo<v8::Value>& args) {
    Isolate* isolate = Isolate::GetCurrent();
    HandleScope scope(isolate);
    if (args.Length() < 1) {
        isolate->ThrowException(Exception::TypeError(
            String::NewFromUtf8(isolate, "Wrong number of arguments")));
        return;
    }
    v8::String::Utf8Value param1(args[0]->ToString());
    std::string aux = std::string(*param1); // JS ----> C++
    // file_in_memory is global
    file_in_memory = file_in_memory + aux;
    // (...) return byte size of file_in_memory
}
So far I have not succeeded. What could be wrong? I think the problem is how I am passing the information to the addon. Any suggestions?

chunk is a Buffer, which is good because it's basically just a chunk of bytes. Just pass it to your add-on as-is:
var obj1 = addon.store(chunk);
In your add-on, you can convert it to a std::string like so (this copies the raw bytes directly, whereas String::Utf8Value re-encodes them as UTF-8 and corrupts binary data):
std::string aux(node::Buffer::Data(args[0]), node::Buffer::Length(args[0]));
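Putting it together on the JS side, a minimal sketch (assuming the addon exposes store and return_bytes as in the question, and that return_bytes hands back a Buffer):

var fs = require('fs');
var addon = require('./build/Release/addon'); // hypothetical path to the compiled addon

var readableStream = fs.createReadStream('random_file.wav');
readableStream.on('data', function(chunk) {
    addon.store(chunk); // chunk is already a Buffer of raw bytes; no toString() needed
});
readableStream.on('end', function() {
    // write the accumulated bytes back out as a copy to verify the round trip
    fs.writeFile('copy.wav', addon.return_bytes(), function (err) {
        if (err) return console.log(err);
        console.log('done!!');
    });
});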

Related

How do I pass a file/blob from JavaScript to emscripten/WebAssembly (C++)?

I'm writing a WebExtension that uses C++ code compiled with emscripten. The WebExtension downloads files which I want to process inside the C++ code. I'm aware of the File System API and I think I read most of it, but I can't get it to work: making a downloaded file accessible inside emscripten.
This is the relevant JavaScript part of my WebExtension:
// Download a file
let blob = await fetch('https://stackoverflow.com/favicon.ico').then(response => {
    if (!response.ok) {
        return null;
    }
    return response.blob();
});

// Setup FS API
FS.mkdir('/working');
FS.mount(IDBFS, {}, '/working');
FS.syncfs(true, function(err) {
    if (err) {
        console.error('Error: ' + err);
    }
});

// Store the file "somehow"
let filename = 'favicon.ico';
// ???

// Call WebAssembly/C++ to process the file
Module.processFile(filename);
The directory is created, as can be seen when inspecting the browser's Web Storage. If I understand the File System API correctly, I have to "somehow" write my data to a file inside /working. Then I should be able to call a function of my C++ code (from JavaScript) and open that file as if there were a directory called 'working' at the root, containing the file. The call of the C++ function works (I can print the provided filename).
But how do I add the file (currently a blob) to that directory?
C++ code:
#include "emscripten/bind.h"

using namespace emscripten;

std::string processFile(std::string filename)
{
    // open and process the file
}

EMSCRIPTEN_BINDINGS(my_module)
{
    function("processFile", &processFile);
}
It turned out that I was mixing some things up while trying different methods, and I was also misinterpreting my debugging tools. So the easiest way to accomplish this task (without using IDBFS) is:
JS:
// Download a file
let blob = await fetch('https://stackoverflow.com/favicon.ico').then(response => {
    if (!response.ok) {
        return null;
    }
    return response.blob();
});

// Convert blob to Uint8Array (more abstract: ArrayBufferView)
let data = new Uint8Array(await blob.arrayBuffer());

// Store the file
let filename = 'favicon.ico';
let stream = FS.open(filename, 'w+');
FS.write(stream, data, 0, data.length, 0);
FS.close(stream);

// Call WebAssembly/C++ to process the file
console.log(Module.processFile(filename));
C++:
#include "emscripten/bind.h"
#include <fstream>

using namespace emscripten;

std::string processFile(std::string filename)
{
    std::fstream fs;
    fs.open(filename, std::fstream::in | std::fstream::binary);
    if (fs) {
        fs.close();
        return "File '" + filename + "' exists!";
    } else {
        return "File '" + filename + "' does NOT exist!";
    }
}

EMSCRIPTEN_BINDINGS(my_module)
{
    function("processFile", &processFile);
}
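To double-check from the JS side that the bytes actually landed in MEMFS, you can read the file back out; a small sketch (FS.readFile returns a Uint8Array by default):

// Read the file back out of MEMFS and compare sizes
let written = FS.readFile(filename); // Uint8Array
console.log('stored ' + written.length + ' bytes, expected ' + data.length);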
If you want to do it with IDBFS, you can do it like this:
// Download a file
let blob = await fetch('https://stackoverflow.com/favicon.ico').then(response => {
    if (!response.ok) {
        return null;
    }
    return response.blob();
});

// Convert blob to Uint8Array (more abstract: ArrayBufferView)
let data = new Uint8Array(await blob.arrayBuffer());

// Setup FS API
FS.mkdir('/persist');
FS.mount(IDBFS, {}, '/persist');

// Load persistent files (sync from IDBFS to MEMFS), will do nothing on first run
FS.syncfs(true, function(err) {
    if (err) {
        console.error('Error: ' + err);
    }
});
FS.chdir('/persist');

// Store the file
let filename = 'favicon.ico';
let stream = FS.open(filename, 'w+');
FS.write(stream, data, 0, data.length, 0);
FS.close(stream);

// Persist the changes (sync from MEMFS to IDBFS)
FS.syncfs(false, function(err) {
    if (err) {
        console.error('Error: ' + err);
    }
});
// NOW you will be able to see the file in your browser's IndexedDB section of the web storage inspector!

// Call WebAssembly/C++ to process the file
console.log(Module.processFile(filename));
Notes:
When using FS.chdir() in the JS world to change the directory, this also changes the working directory in the C++ world. So keep that in mind when working with relative paths.
When working with IDBFS instead of MEMFS, you are actually still working with MEMFS and just have the opportunity to sync data from or to IDBFS on demand; all your work is still done against MEMFS. I would consider IDBFS an add-on to MEMFS. I didn't read that directly from the docs.
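Since FS.syncfs is callback-based and the example above has two sync points, a small Promise wrapper can make the ordering explicit. A sketch (the helper name is my own):

// Hypothetical helper: wrap FS.syncfs in a Promise so the two sync
// points can be sequenced with await instead of nested callbacks.
function syncFS(populate) {
    return new Promise((resolve, reject) => {
        FS.syncfs(populate, err => err ? reject(err) : resolve());
    });
}

// usage: load persisted files, write, then persist the changes
await syncFS(true);   // IDBFS -> MEMFS
// ... FS.open / FS.write / FS.close ...
await syncFS(false);  // MEMFS -> IDBFS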

Node JS - How to write stream big json data into json array file?

I am having difficulty writing JSON data into a JSON file using the stream module.
I learned about this from several blog tutorials; one of them is this page.
Let's say I am working with big JSON data in a JSON file. I think it is not possible to hold all the JSON objects in memory, so I decided to do it using the stream module.
Here is the code I have so far:
writeStream.js
var Writable = require('stream').Writable,
    util = require('util');

var WriteStream = function() {
    Writable.call(this, {
        objectMode: true
    });
};
util.inherits(WriteStream, Writable);

WriteStream.prototype._write = function(chunk, encoding, callback) {
    console.log('write : ' + JSON.stringify(chunk));
    callback();
};

module.exports = WriteStream;
readStream.js
var data = require('./test_data.json'),
    Readable = require('stream').Readable,
    util = require('util');

var ReadStream = function() {
    Readable.call(this, {
        objectMode: true
    });
    this.data = data;
    this.curIndex = 0;
};
util.inherits(ReadStream, Readable);

ReadStream.prototype._read = function() {
    if (this.curIndex === this.data.length) {
        return this.push(null);
    }
    var data = this.data[this.curIndex++];
    console.log('read : ' + JSON.stringify(data));
    this.push(data);
};

module.exports = ReadStream;
Called with this code:
var ReadStream = require('./readStream.js'),
    WriteStream = require('./writeStream.js');

var rs = new ReadStream();
var ws = new WriteStream();
rs.pipe(ws);
Problem: I want to write it into a different file. How is that possible?
Can you please help me?
If you are looking for a solution to just write the data from your ReadStream into a different file, you can try fs.createWriteStream. It returns a writable stream that your ReadStream can be piped into.
You will have to make a minor change in your readStream.js. You are currently pushing objects, making it an object stream, while a write stream expects either a String or a Buffer unless it is started in object mode. So you can do one of the following:
Start the write stream in object mode (a minimal sketch of this option follows the list below). More info here.
Push a String or Buffer in your read stream, since the writable stream internally calls writable.write, which expects either a String or a Buffer. More info here.
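If you go with the first option, a minimal sketch could look like this (JsonFileWriter is a name of my own; it keeps the object stream and does the stringifying inside _write):

var fs = require('fs'),
    Writable = require('stream').Writable,
    util = require('util');

// Object-mode writable that stringifies each object before
// appending it to an underlying file stream.
var JsonFileWriter = function(path) {
    Writable.call(this, { objectMode: true });
    this.out = fs.createWriteStream(path);
};
util.inherits(JsonFileWriter, Writable);

JsonFileWriter.prototype._write = function(chunk, encoding, callback) {
    this.out.write(JSON.stringify(chunk) + '\n', callback);
};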
If we follow the second option as an example, then your readStream.js should look like this:
var data = require('./test_data.json'),
    Readable = require('stream').Readable,
    util = require('util');

var ReadStream = function() {
    Readable.call(this, {
        objectMode: true
    });
    this.data = data;
    this.curIndex = 0;
};
util.inherits(ReadStream, Readable);

ReadStream.prototype._read = function() {
    if (this.curIndex === this.data.length) {
        return this.push(null);
    }
    var data = this.data[this.curIndex++];
    console.log('read : ' + JSON.stringify(data));
    this.push(JSON.stringify(data));
};

module.exports = ReadStream;
You can call the above using the following code:
var ReadStream = require('./readStream.js');
const fs = require('fs');

var rs = new ReadStream();
const file = fs.createWriteStream('/path/to/output/file');
rs.pipe(file);
This will write the data from test_data.json to the output file.
Also, as a good practice and to reliably detect write errors, add a listener for the 'error' event. For the above code, you can add the following:
file.on('error', function(err) {
    console.log("err:", err);
});
Hope this helps.

How to write a file from an ArrayBuffer in JS

I am trying to write a file uploader for the Meteor framework.
The principle is to split the file on the client, from an ArrayBuffer, into small packets of 4096 bytes that are sent to the server through a Meteor.method.
The simplified code below is the client part that sends a chunk to the server; it is repeated until offset reaches data.byteLength:
// data is an ArrayBuffer
var total = data.byteLength;
var offset = 0;

var upload = function() {
    var length = 4096; // chunk size
    // adjust the last chunk size
    if (offset + length > total) {
        length = total - offset;
    }
    // I am using Uint8Array to create the chunk
    // because it can be passed to the Meteor.method natively
    var chunk = new Uint8Array(data, offset, length);
    if (offset < total) {
        // Send the chunk to the server and tell it what file to append to
        Meteor.call('uploadFileData', fileId, chunk, function (err, length) {
            if (!err) {
                offset += length;
                upload();
            }
        });
    }
};
upload(); // start uploading
The simplified code below is the server part that receives the chunk and writes it to the file system:
var fs = Npm.require('fs');
var Future = Npm.require('fibers/future');

Meteor.methods({
    uploadFileData: function(fileId, chunk) {
        var fut = new Future();
        var path = '/uploads/' + fileId;
        // I tried that with no success
        chunk = String.fromCharCode.apply(null, chunk);
        // how to write the chunk that is an Uint8Array to the disk?
        fs.appendFile(path, chunk, 'binary', function (err) {
            if (err) {
                fut.throw(err);
            } else {
                fut.return(chunk.length);
            }
        });
        return fut.wait();
    }
});
I failed to write a valid file to disk. The file is actually saved, but I cannot open it: when I view the content in a text editor, it is similar to the original file (a jpg for example), but some characters are different. I think it could be an encoding problem, since the file size is not the same, but I don't know how to fix it...
Saving the file was as easy as creating a Buffer from the Uint8Array object:
// chunk is the Uint8Array object
fs.appendFile(path, Buffer.from(chunk), function (err) {
    if (err) {
        fut.throw(err);
    } else {
        fut.return(chunk.length);
    }
});
Building on Karl.S's answer, this worked for me, outside of any framework:
fs.appendFileSync(outfile, Buffer.from(arrayBuffer));
Just wanted to add that in newer Meteor versions you can avoid some callback hell with async/await. Await will also throw and push the error up to the client:
const fs = require('fs').promises; // the promise-based API, so await actually waits

Meteor.methods({
    uploadFileData: async function(file_id, chunk) {
        var path = 'somepath/' + file_id; // be careful with this, make sure to sanitize file_id
        await fs.appendFile(path, Buffer.from(chunk)); // new Buffer() is deprecated
        return chunk.length;
    }
});
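On the sanitizing comment: a minimal sketch of one way to block path traversal (the helper is my own, not part of the original answer):

const path = require('path');

// Hypothetical guard: path.basename drops any directory components,
// so an id like '../../etc/passwd' cannot escape the upload directory.
function safeUploadPath(file_id) {
    return path.join('somepath', path.basename(String(file_id)));
}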

How can I get a buffer for a file (image) from CollectionFS

I'm trying to insert an image into a PDF I'm creating server-side with PDFKit. I'm using cfs:dropbox to store my files. Before, when I was using cfs:filesystem, it was easy to add the images to my PDFs because they were right there. Now that they're stored remotely, I'm not sure how to add them, since PDFKit does not support adding images with just the URL. It will, however, accept a buffer. How can I get a buffer from my CollectionFS files?
So far I have something like this:
var portrait = Portraits.findOne('vS2yFy4gxXdjTtz5d');
var readStream = portrait.createReadStream('portraits');
I tried getting the buffer two ways so far:
First using dataMan, but the last command never comes back:
var dataMan = new DataMan.ReadStream(readStream, portrait.type());
var buffer = Meteor.wrapAsync(Function.prototype.bind(dataMan.getBuffer, dataMan))();
Second, buffering the stream manually:
var buffer = new Buffer(0);
readStream.on('readable', function() {
    buffer = Buffer.concat([buffer, readStream.read()]);
});
readStream.on('end', function() {
    console.log(buffer.toString('base64'));
});
That never seems to come back either. I double-checked my doc to make sure it was there and it has a valid url and the image appears when I put the url in my browser. Am I missing something?
I had to do something similar and since there's no answer to this question, here is how I do it:
// take a cfs file and return a base64 string
var getBase64Data = function(file, callback) {
    // callback has the form function (err, res) {}
    var readStream = file.createReadStream();
    var buffer = [];
    readStream.on('data', function(chunk) {
        buffer.push(chunk);
    });
    readStream.on('error', function(err) {
        callback(err, null);
    });
    readStream.on('end', function() {
        // concatenate all collected chunks into a single Buffer
        callback(null, Buffer.concat(buffer).toString('base64'));
    });
};
// wrap it to make it sync
var getBase64DataSync = Meteor.wrapAsync(getBase64Data);

// get a cfs file
var file = Files.findOne();

// get the base64 string
var base64str = getBase64DataSync(file);

// get the buffer from the string (new Buffer() is deprecated)
var buffer = Buffer.from(base64str, 'base64');
Hope it'll help!
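If you only need the Buffer (as for PDFKit), you can also skip the base64 round-trip and hand the concatenated Buffer straight to the callback; a sketch along the same lines:

// take a cfs file and return a Buffer directly (no base64 detour)
var getBufferData = function(file, callback) {
    var readStream = file.createReadStream();
    var chunks = [];
    readStream.on('data', function(chunk) {
        chunks.push(chunk);
    });
    readStream.on('error', function(err) {
        callback(err, null);
    });
    readStream.on('end', function() {
        callback(null, Buffer.concat(chunks));
    });
};
var getBufferDataSync = Meteor.wrapAsync(getBufferData);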

Create plugin gulp with stream

I created a plugin that collects JSON data into a JSON file.
But I don't understand how to send my JSON object down the pipe instead of writing the file directly inside my plugin.
I want to use my plugin with this syntax:
gulp.task('js-hash', function() {
    // Get all js in redis
    gulp.src('./build/js/**/*.js')
        .pipe(getHashFile('/build/js/'))
        .pipe(gulp.dest('./build/js/hash.json'));
});
And not like this:
gulp.task('js-hash', function() {
    // Get all js in redis
    gulp.src('./build/js/**/*.js')
        .pipe(getHashFile('./build/js/hash.json', '/build/js/'));
});
This is my plugin:
var through = require('through2');
var gutil = require('gulp-util');
var crypto = require('crypto');
var fs = require('fs');
var PluginError = gutil.PluginError;

// Consts
const PLUGIN_NAME = 'get-hash-file';

var json = {};

function getHashFile(filename, basename) {
    if (!filename) {
        throw PluginError(PLUGIN_NAME, "Missing filename !");
    }
    // Creating a stream through which each file will pass
    var stream = through.obj(function (file, enc, callback) {
        if (file.isNull()) {
            this.push(file); // Do nothing if no contents
            return callback();
        }
        if (file.isBuffer()) {
            var hash = crypto.createHash('sha256').update(String(file.contents)).digest('hex');
            json[file.path.replace(file.cwd + basename, '')] = hash;
            return callback();
        }
        if (file.isStream()) {
            this.emit('error', new PluginError(PLUGIN_NAME, 'Stream not supported!'));
            return callback();
        }
    }).on('finish', function () {
        fs.writeFile(filename, JSON.stringify(json), function(err) {
            if (err) {
                throw err;
            }
        });
    });

    // returning the file stream
    return stream;
}

// Exporting the plugin main function
module.exports = getHashFile;
Any ideas?
Nothing prevents you from doing this... besides not respecting the plugin guidelines!
Users actually assume a plugin will stream files and that they can pipe them to other plugins.
If I get your code right, you're trying to generate a file that contains all the SHA hashes of the inbound files. Why not let users take this file and pipe it to other plugins? You'd be surprised what people could do; a sketch of that approach follows below.
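As a sketch of what that could look like (assuming the vinyl package for gulp's virtual file object; everything else follows your original code), the plugin can collect the hashes and push a brand-new hash.json file downstream in through2's flush callback, leaving the destination to the user:

var through = require('through2');
var Vinyl = require('vinyl'); // gulp's virtual file object
var crypto = require('crypto');
var path = require('path');

function getHashFile(basename) {
    var json = {};
    return through.obj(function (file, enc, callback) {
        if (file.isBuffer()) {
            var hash = crypto.createHash('sha256').update(file.contents).digest('hex');
            json[file.path.replace(file.cwd + basename, '')] = hash;
        }
        callback(); // swallow the inbound files
    }, function (callback) {
        // flush: emit hash.json downstream instead of writing it ourselves
        this.push(new Vinyl({
            cwd: process.cwd(),
            base: process.cwd(),
            path: path.join(process.cwd(), 'hash.json'),
            contents: Buffer.from(JSON.stringify(json))
        }));
        callback();
    });
}

With that, gulp.dest receives a directory, so the task would end with .pipe(gulp.dest('./build/js/')) and the file lands at build/js/hash.json.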
While this question looks a bit opinion-based, you could definitely focus on how to deal with files that don't belong to the main stream of files. Issues like this can be found in many plugins; for example, the gulp-uglify authors are wondering how they can add source maps without mixing JS and source maps downstream.
