How to pass an array of objects from JavaScript to WebAssembly

I know it is possible to pass arrays of integers to WebAssembly using something like this:
const bin = ...; // WebAssembly binary, I assume below that it imports a memory from module "imports", field "memory".
const module = new WebAssembly.Module(bin);
const memory = new WebAssembly.Memory({ initial: 2 }); // Size is in pages.
const instance = new WebAssembly.Instance(module, { imports: { memory: memory } });
const arrayBuffer = memory.buffer;
const buffer = new Uint8Array(arrayBuffer);
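To be concrete, passing the integers typically means copying them into that shared memory and handing the called function an offset and a length. A minimal sketch building on the snippet above; the exported sum(ptr, len) function is hypothetical:
const values = [1, 2, 3, 4];
const ptr = 0; // for this sketch, write at the start of the memory

// Copy the integers into the Wasm memory; sum(ptr, len) is assumed to read
// `len` bytes starting at byte offset `ptr` of the imported memory.
buffer.set(values, ptr);
const total = instance.exports.sum(ptr, values.length);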
I have read a lot of documentation and some questions that look like what I was looking for:
How to pass an array of objects to WebAssembly and convert it to a vector of structs with wasm-bindgen?
Pass a JavaScript array as argument to a WebAssembly function
https://becominghuman.ai/passing-and-returning-webassembly-array-parameters-a0f572c65d97
https://rob-blackbourn.github.io/blog/webassembly/wasm/array/arrays/javascript/c/2020/06/07/wasm-arrays.html
And yet none of those answered my question.
Here is a small example of AssemblyScript that describes the kind of functions I would like to use:
class Dummy {
  constructor() {}
}

export function getDummy(): Dummy {
  return new Dummy();
}

export function workWithDummy(dummies: Dummy[] = []): void {
  // do something
}
And in the JavaScript code:
const fs = require('fs');
const {resolve} = require('../utils/utils.js');

const env = {
  abort: (message, filename, line, column) => {
    throw new Error(`${message} in ${filename} at ${line}:${column}`);
  }
};

module.exports = fs.promises.readFile(resolve('parser.wasm')).then(buffer => {
  return WebAssembly.instantiate(buffer, {env: env}).then(wasmModule => {
    const module = wasmModule.instance.exports;
    module.workWithDummy([module.getDummy()]); // won't work
  });
});
I am running this code in Node.js 18.1.0.
To sum up, my question is: how do I make this line work?
module.workWithDummy([module.getDummy()]); // won't work

Related

How to find if an Azure File exists in Node.js

I'm using Azure File Storage, with an Express.js backend that renders the contents stored there.
I am writing the code based on https://learn.microsoft.com/en-us/javascript/api/@azure/storage-file-share/shareserviceclient?view=azure-node-latest
const { ShareServiceClient, StorageSharedKeyCredential } = require("@azure/storage-file-share");

const account = "<account>";
const accountKey = "<accountkey>";

const credential = new StorageSharedKeyCredential(account, accountKey);
const serviceClient = new ShareServiceClient(
  `https://${account}.file.core.windows.net`,
  credential
);
const shareName = "<share name>";
const fileName = "<file name>";
// [Node.js only] A helper method used to read a Node.js readable stream into a Buffer
async function streamToBuffer(readableStream) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    readableStream.on("data", (data) => {
      chunks.push(data instanceof Buffer ? data : Buffer.from(data));
    });
    readableStream.on("end", () => {
      resolve(Buffer.concat(chunks));
    });
    readableStream.on("error", reject);
  });
}
And you can view the contents through:
const downloadFileResponse = await fileClient.download();
const output = (await streamToBuffer(downloadFileResponse.readableStreamBody)).toString();
Thing is, I only want to check whether the file exists, not spend time downloading the entire file. How could I do this?
I looked at https://learn.microsoft.com/en-us/javascript/api/@azure/storage-file-share/shareserviceclient?view=azure-node-latest to see if the file client class has what I want, but it doesn't seem to have methods useful for this.
If you are using the @azure/storage-file-share (version 12.x) Node package, there's an exists method on ShareFileClient. You can use it to find out whether a file exists. Something like:
const fileExists = await fileClient.exists(); // returns true or false
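In context, wiring that up from the serviceClient defined in the question could look like this (a sketch; shareName and fileName are the placeholders from the question):
const shareClient = serviceClient.getShareClient(shareName);
const fileClient = shareClient.rootDirectoryClient.getFileClient(fileName);

// Checks existence via a metadata call; no download of the file body.
const fileExists = await fileClient.exists();
console.log(`${fileName} ${fileExists ? "exists" : "does not exist"}`);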

Writing large CSV to JS file using Node FS

I have a large CSV file of postcode data (~1.1GB), I am trying to filter out the data I need and then write an array of values to a JS file.
The issue is that I'm always using too much memory and receiving this error:
Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
I have tried increasing the memory with node --max-old-space-size=4096 fileName.js, but I still hit my memory limit; it just takes longer!
Here is my code that writes the JS file:
const csvFilePath = "./data/postcodes.csv";
const csv = require("csvtojson");
const fs = require("fs");

csv()
  .fromFile(csvFilePath)
  .then((jsonArray) => {
    const inUsePostcodes = jsonArray.filter((x) => x["In Use?"] === "Yes").map((y) => y.Postcode);
    fs.writeFileSync("postcodes.js", inUsePostcodes);
  });
Here is a sample of postcodes.csv:
Postcode,In Use?,Latitude,Longitude,Easting,Northing,Grid Ref,County,District,Ward,District Code,Ward Code,Country,County Code,Constituency,Introduced,Terminated,Parish,National Park,Population,Households,Built up area,Built up sub-division,Lower layer super output area,Rural/urban,Region,Altitude,London zone,LSOA Code,Local authority,MSOA Code,Middle layer super output area,Parish Code,Census output area,Constituency Code,Index of Multiple Deprivation,Quality,User Type,Last updated,Nearest station,Distance to station,Postcode area,Postcode district,Police force,Water company,Plus Code,Average Income
AB1 0AA,No,57.101474,-2.242851,385386,801193,NJ853011,"","Aberdeen City","Lower Deeside",S12000033,S13002843,Scotland,S99999999,"Aberdeen South",1980-01-01,1996-06-01,"","",,,"","","Cults, Bieldside and Milltimber West - 02","Accessible small town",,46,,S01006514,,S02001237,"Cults, Bieldside and Milltimber West",,S00090303,S14000002,6808,1,0,2020-02-19,"Portlethen",8.31408,AB,AB1,"Scotland","Scottish Water",9C9V4Q24+HV,
AB1 0AB,No,57.102554,-2.246308,385177,801314,NJ851013,"","Aberdeen City","Lower Deeside",S12000033,S13002843,Scotland,S99999999,"Aberdeen South",1980-01-01,1996-06-01,"","",,,"","","Cults, Bieldside and Milltimber West - 02","Accessible small town",,61,,S01006514,,S02001237,"Cults, Bieldside and Milltimber West",,S00090303,S14000002,6808,1,0,2020-02-19,"Portlethen",8.55457,AB,AB1,"Scotland","Scottish Water",9C9V4Q33+2F,
AB1 0AD,No,57.100556,-2.248342,385053,801092,NJ850010,"","Aberdeen City","Lower Deeside",S12000033,S13002843,Scotland,S99999999,"Aberdeen South",1980-01-01,1996-06-01,"","",,,"","","Cults, Bieldside and Milltimber West - 02","Accessible small town",,45,,S01006514,,S02001237,"Cults, Bieldside and Milltimber West",,S00090399,S14000002,6808,1,0,2020-02-19,"Portlethen",8.54352,AB,AB1,"Scotland","Scottish Water",9C9V4Q22+6M,
How can I write to the JS file from this CSV, without hitting my memory limit?
You need a streaming CSV parser that parses the input and emits one line at a time, so you can stream the result out to a file.
Here's one way to do it using the csv-reader module:
const fs = require('fs');
const csvReader = require('csv-reader');
const { Transform } = require('stream');

let first = true;
const myTransform = new Transform({
  readableObjectMode: true,
  writableObjectMode: true,
  transform(obj, encoding, callback) {
    let data = JSON.stringify(obj);
    if (first) {
      // beginning of transformed data
      this.push("[");
      first = false;
    } else {
      data = "," + data; // add comma separator if not first object
    }
    this.push(data);
    callback();
  },
  flush(callback) {
    // end of transformed data
    this.push("]");
    callback();
  }
});

// All of these arguments are optional.
const options = {
  skipEmptyLines: true,
  asObject: true, // convert data to object
  parseNumbers: true,
  parseBooleans: true,
  trim: true
};

const csvStream = new csvReader(options);
const readStream = fs.createReadStream('example.csv', 'utf8');
const writeStream = fs.createWriteStream('example.json', { autoClose: false });

readStream.on('error', err => {
  console.log(err);
  csvStream.destroy(err);
})
.pipe(csvStream)
.pipe(myTransform)
.pipe(writeStream)
.on('error', err => {
  console.error(err);
})
.on('finish', () => {
  console.log('done');
});
The issue is that the csvtojson module tries to store the entire massive JSON array in memory!
I found a different solution using the csv-parser module, which parses one row at a time instead of the whole CSV.
Here is my solution:
const csv = require('csv-parser');
const fs = require('fs');

const stream = fs.createWriteStream("postcodes.js", { flags: 'a' });
let first = false;

fs.createReadStream('./data/postcodes.csv')
  .pipe(csv())
  .on('data', (row) => {
    if (row["In Use?"] === "Yes") {
      if (!first) {
        first = true;
        stream.write(`const postcodes = ["${row.Postcode}"`);
      } else {
        stream.write(`,\n"${row.Postcode}"`);
      }
    }
  })
  .on('end', () => {
    stream.write("];");
    console.log('CSV file successfully processed');
  });
It's not very pretty, writing out strings like const postcodes = to produce JavaScript, but it performs the desired function.
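If emitting JavaScript by hand bothers you, the same streaming loop can write a plain JSON array instead (a sketch; postcodes.json is a hypothetical output name), which can then be loaded with require('./postcodes.json'):
const csv = require('csv-parser');
const fs = require('fs');

const stream = fs.createWriteStream("postcodes.json");
let first = true;

fs.createReadStream('./data/postcodes.csv')
  .pipe(csv())
  .on('data', (row) => {
    if (row["In Use?"] === "Yes") {
      // Open the array on the first match; separate later entries with commas.
      stream.write(first ? `["${row.Postcode}"` : `,\n"${row.Postcode}"`);
      first = false;
    }
  })
  .on('end', () => {
    stream.write("]");
    stream.end();
  });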

FS file watcher, get changes

I want to implement a file system watcher using Node.js that watches a particular JSON file for changes.
I would then like to find out what changed inside the file.
Here's one way:
Load the current file contents and parse it to an object, keeping it in-memory.
Watch for file changes, using fs.watch.
On change, load the new file contents as an object.
Perform an object diff between the current object and the new object, e.g. using the deep-diff package.
Set current object as new object.
Repeat on change.
Here's an example:
const fs = require('fs')
const diff = require('deep-diff')

const filepath = './foo.json'

const getCurrent = () => JSON.parse(fs.readFileSync(filepath, {
  encoding: 'utf8'
}))

let currObj = getCurrent()

fs.watch(filepath, { encoding: 'buffer' }, (eventType, filename) => {
  if (eventType !== 'change') return
  const newObj = getCurrent()
  const differences = diff(currObj, newObj)
  console.log(differences)
  // { kind: 'N' } for new key additions
  // { kind: 'E' } for edits
  // { kind: 'D' } for deletions
  currObj = newObj
})
Note that I'm using fs.readFileSync here for brevity; you'd be better off using fs.readFile, which is non-blocking.
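For example, a non-blocking variant could look like this (a sketch; the watch callback becomes async):
const { promises: fsp } = require('fs')

const getCurrentAsync = () =>
  fsp.readFile(filepath, { encoding: 'utf8' }).then(JSON.parse)

fs.watch(filepath, { encoding: 'buffer' }, async (eventType) => {
  if (eventType !== 'change') return
  const newObj = await getCurrentAsync()
  const differences = diff(currObj, newObj)
  console.log(differences)
  currObj = newObj
})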

Creating a new JSON import instance

So I have an object in JSON format: object.json.
It's used in the method getAlbum() on every HTTP call, but if I make multiple requests the same JSON object gets reused, because it is required once at the top of the file.
How can I get a fresh instance of the JSON every time?
It has a lot of fields and depth, so I can't just create a new Object();
const albumReportJSON = require('./album.report.json');

const getAlbum = async (ctx) => {
  const value = albumReportJSON;
  // ... processing
}
Note that require caches modules, so requiring the JSON inside your function would still hand back the same shared object on every call. To get a fresh copy per request, keep the require at the top and deep-clone the object inside the function:
const albumReportJSON = require('./album.report.json');

const getAlbum = async (ctx) => {
  // Deep-clone so changes made during processing don't leak between requests
  const value = JSON.parse(JSON.stringify(albumReportJSON));
  // ... processing
}
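On Node.js 17+, the built-in structuredClone is an alternative way to take the deep copy:
const value = structuredClone(albumReportJSON); // fresh deep copy per request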

Using buffers & streams to search for a string in multiple text files and find its line number in Node.js

Please help me search for a string across multiple files; I need to print the line number of that particular string along with the file name, using the buffer and streams concepts in Node.js.
For example: there are 5 text files, and the string "hello" appears on the 10th and 15th lines of the 3rd file, and on the 50th line of the 5th file. I need to print the 3rd file's name together with the line numbers where "hello" was found, and likewise for the 5th file.
Please help me write this program using buffers in Node.js.
const readline = require("readline");
const fs = require("fs");

// Start methods implementation
const beginSearch = (readStream, filePath, queries) => {
  let lineCount = 0;
  let matches = new Map();
  queries.forEach(query => matches.set(query, []));
  return new Promise((resolve, reject) => {
    readStream.on("line", line => {
      lineCount++;
      for (const query of matches.keys()) {
        if (searchForTerm(line, query))
          matches.set(query, [...matches.get(query), lineCount]);
      }
    });
    readStream.on("close", () => resolve({
      filePath,
      matches
    }));
  });
};

const searchForTerm = (line, query) => line.match(query);

const createLineInterfaces = filePaths =>
  filePaths.map(filePath => {
    const readStream = readline.createInterface({
      input: fs.createReadStream(filePath),
      crlfDelay: Infinity
    });
    return {
      filePath,
      readStream
    };
  });
// End methods implementation

// Start main function
const filesToSearch = ["sample.txt", "sample2.txt"];
const queriesToSearch = ["hello"];

let searchProms = createLineInterfaces(filesToSearch).map(
  ({ readStream, filePath }) => beginSearch(readStream, filePath, queriesToSearch)
);

Promise.all(searchProms).then(searchResults =>
  searchResults.forEach(result => console.log(result))
);
// End main function
A brief explanation:
I am using the readline module to split each file into lines; keep in mind the whole implementation works with streams. I attach a listener to the line event and search each line for each query. The search method is a simple regexp match; you could use a fuzzy search method if you want. The matched line numbers are saved in a Map whose keys are the queries and whose values are arrays of the line numbers where each query was found.
I am assuming that you are familiar with the stream concept and know about ES6 features.
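For reference, each resolved result printed at the end has this shape (the line numbers here are illustrative):
{ filePath: 'sample.txt', matches: Map(1) { 'hello' => [ 10, 15 ] } }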
