Recently I have been trying to use the Web Workers interface to experiment with threads in JavaScript.
I am trying to implement contains with web workers, following these steps:
Split the initial array to pieces of equal size
Create a web worker for each piece that runs .contains on that piece
Return true as soon as the value is found in any of the pieces, without waiting for all workers to finish.
Here is what I tried:
var MAX_VALUE = 100000000;
var integerArray = Array.from({length: 40000000}, () => Math.floor(Math.random() * MAX_VALUE));
var t0 = performance.now();
console.log(integerArray.includes(1));
var t1 = performance.now();
console.log("Call to doSomething took " + (t1 - t0) + " milliseconds.");
var promises = [];
var chunks = [];
while(integerArray.length) {
chunks.push(integerArray.splice(0,10000000));
}
t0 = performance.now();
chunks.forEach(function(element) {
promises.push(createWorker(element));
});
function createWorker(arrayChunk) {
return new Promise(function(resolve) {
var v = new Worker(getScriptPath(function(){
self.addEventListener('message', function(e) {
var value = e.data.includes(1);
self.postMessage(value);
}, false);
}));
v.postMessage(arrayChunk);
v.onmessage = function(event){
resolve(event.data);
};
});
}
firstTrue(promises).then(function(data) {
// `data` has the results, compute the final solution
var t1 = performance.now();
console.log("Call to doSomething took " + (t1 - t0) + " milliseconds.");
});
function firstTrue(promises) {
const newPromises = promises.map(p => new Promise(
(resolve, reject) => p.then(v => v && resolve(true), reject)
));
newPromises.push(Promise.all(promises).then(() => false));
return Promise.race(newPromises);
}
// As a worker normally takes another JavaScript file to execute, we convert the function into an object URL: http://stackoverflow.com/a/16799132/2576706
function getScriptPath(foo) {
  // extract the function body and serve it from a Blob URL that the Worker can load
  var body = foo.toString().match(/^\s*function\s*\(\s*\)\s*\{(([\s\S](?!\}$))*[\s\S])/)[1];
  return window.URL.createObjectURL(new Blob([body], { type: 'text/javascript' }));
}
On every browser and CPU I have tried, this is extremely slow compared to simply running contains on the initial array.
Why is this so slow?
What is wrong with the code above?
References
Waiting for several workers to finish
Wait for the first true returned by promises
Edit: The issue is not specific to .contains(); it applies to other array functions too, e.g. .indexOf(), .map(), .forEach(), etc. Why does splitting the work between web workers take so much longer?
This is a bit of a contrived example, so it's hard to help optimize for what you're trying to do specifically, but one easily-overlooked and fixable slow path is copying data to the web worker. If possible, you can use ArrayBuffers and SharedArrayBuffers to transfer data to and from web workers quickly.
You can use the second argument of the postMessage function to transfer ownership of an ArrayBuffer to the web worker. It's important to note that the buffer will no longer be usable by the main thread until it is transferred back by the web worker. SharedArrayBuffers do not have this limitation and can be read by many workers at once, but they aren't necessarily supported in all browsers due to a security concern (see MDN for more details).
For example
const arr = new Float64Array(new ArrayBuffer(40000000 * 8));
console.time('posting');
ww.postMessage(arr, [ arr.buffer ]);
console.timeEnd('posting');
takes ~0.1ms to run while
const arr = new Array(40000000).fill(0);
console.time('posting');
ww.postMessage(arr); // a plain array is not transferable, so it gets structured-cloned
console.timeEnd('posting');
takes ~10000ms to run. This is JUST to transfer the data in the message, not to run the worker logic itself.
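If multiple workers need to read the same data, a SharedArrayBuffer (where the browser allows it) avoids even the transfer step. A minimal sketch, again assuming ww is an existing worker:

const shared = new SharedArrayBuffer(40000000 * 8);
const sharedView = new Float64Array(shared); // the main thread's view onto the memory
ww.postMessage(shared); // no transfer list: every worker sees the same memory, nothing is copied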
You can read more on the postMessage transferList argument here and transferable types here. Note that the way your example does its timing comparison includes the web worker creation time as well, but hopefully this gives a better idea of where a lot of that time is going and how it can be worked around.
You're doing a lot more work between t0 and t1 compared to a simple contains. These extra steps include:
converting function -> string -> regex -> blob -> object URL
calling new worker -> parses object URL -> JS engine interprets code
sending web worker data -> serialized on main thread -> deserialized in worker (likely an in-memory struct that's actually copied, so not super slow)
You're better off creating the thread first, then continuously handing it data. It may not be faster but it won't lock up your UI.
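For instance, a minimal sketch of a long-lived worker that you keep feeding data (the file name searchWorker.js and the message shape are assumptions):

const worker = new Worker('searchWorker.js'); // created once, up front
worker.onmessage = e => console.log('found:', e.data);
// later, hand it data repeatedly; no per-search worker startup cost
for (const chunk of chunks) {
  worker.postMessage({ chunk, needle: 1 });
}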
Also, if you're repeatedly searching through the array, may I suggest converting it into a map where the key is the array value and the value is the index.
e.g.
the array ['apple', 'coconut', 'kiwi'] would be converted to { apple: 0, coconut: 1, kiwi: 2 }
Searching through the map happens in amortized constant time (fast), whereas searching the array is a linear scan (slow as hell for large sets).
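A minimal sketch of that conversion, reusing integerArray from the question (a Map or Set would do just as well):

const lookup = {};
integerArray.forEach((value, index) => {
  lookup[value] = index; // for duplicate values, the last occurrence wins
});
// membership is now a constant-time hash lookup instead of a linear scan:
console.log(lookup[1] !== undefined);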
Related
I would like to implement fd_read from the WASI API by waiting for the user to type some text in an HTML input field, and then continuing with the rest of the WASI calls. I.e., with something like:
fd_read = (fd, iovs, iovsLen, nread) => {
// only care about 'stdin'
if(fd !== STDIN)
return WASI_ERRNO_BADF;
const encoder = new TextEncoder();
const view = new DataView(memory.buffer);
view.setUint32(nread, 0, true);
// create a UInt8Array for each buffer
const buffers = Array.from({ length: iovsLen }, (_, i) => {
const ptr = iovs + i * 8;
const buf = view.getUint32(ptr, true);
const bufLen = view.getUint32(ptr + 4, true);
return new Uint8Array(memory.buffer, buf, bufLen);
});
// get input for each buffer
buffers.forEach(buf => {
const input = waitForUserInput();
buf.set(encoder.encode(input));
view.setUint32(nread, view.getUint32(nread, true) + input.length, true);
});
return WASI_ESUCCESS;
}
The implementation works if the variable input is provided. For example, setting const input = "1\n" passes that string to a scanf call in my C program, and it reads in a value of 1.
However, I'm struggling to "stop" the JavaScript execution while waiting for the input to be provided. I understand that JavaScript is event-driven and can't be "paused" in the traditional sense, but trying to provide the input as a callback/Promise has the problem of the function still executing, causing nothing to get passed to stdin:
buffers.forEach(buf => {
let input;
waitForUserInput().then(value => {
input = value;
});
buf.set(encoder.encode(input));
view.setUint32(nread, view.getUint32(nread, true) + input.length, true);
});
Since input is still waiting to be set, nothing gets encoded in the buffer and stdin just reads a 0.
Is there a way to wait for the input with async/await, or maybe a "hack-y" solution with setTimeout? I know that window.prompt() would stop the execution, but I want the input to be a part of the page. Looking for vanilla JavaScript solutions.
You want to connect asynchronous JavaScript APIs to synchronous WebAssembly APIs. This is a common problem for which WebAssembly itself doesn't yet have a built-in solution, but there are some at the tooling level. In particular, you might want to take a look at Asyncify - I've written a detailed post on how it helps solve these use cases and how to use it here: https://web.dev/asyncify/
Particularly for WASI, the post also showcases a demo that connects fd_read and other synchronous operations to async APIs from File System Access. You can find the live demo at https://wasi.rreverser.com/ and its code at https://github.com/GoogleChromeLabs/wasi-fs-access.
For example, here is an implementation of the fd_read function you're interested in, that uses async-await to wait for asynchronous API: https://github.com/GoogleChromeLabs/wasi-fs-access/blob/4c2d29fdfe79abb9b48bd44e296c2019f55d0eec/src/bindings.ts#L449-L461
You should be able to adapt the same approach, the Asyncify tooling, and potentially even the same code to your example using setTimeout or input events.
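For the input side, here is a hedged sketch of what waitForUserInput from your example could look like once fd_read is allowed to await it (the element id stdin is an assumption):

function waitForUserInput() {
  return new Promise(resolve => {
    const el = document.getElementById('stdin'); // assumed input element
    const onKeydown = e => {
      if (e.key === 'Enter') {
        el.removeEventListener('keydown', onKeydown);
        resolve(el.value + '\n'); // trailing newline so scanf-style reads terminate
      }
    };
    el.addEventListener('keydown', onKeydown);
  });
}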
In the non-blocking event loop of JavaScript, is it safe to read and then alter a variable? What happens if two processes want to change a variable nearly at the same time?
Example A:
Process 1: Get variable A (it is 100)
Process 2: Get variable A (it is 100)
Process 1: Add 1 (it is 101)
Process 2: Add 1 (it is 101)
Result: Variable A is 101 instead of 102
Here is a simplified example, using an Express route. Let's say the route gets called 1000 times per second:
let counter = 0;
const getCounter = () => {
return counter;
};
const setCounter = (newValue) => {
counter = newValue;
};
app.get('/counter', (req, res) => {
const currentValue = getCounter();
const newValue = currentValue + 1;
setCounter(newValue);
});
Example B:
What if we do something more complex like Array.findIndex() and then Array.splice()? Could it be that the found index has become outdated because another event-process already altered the array?
Process A findIndex (it is 12000)
Process B findIndex (it is 34000)
Process A splice index 12000
Process B splice index 34000
Result: Process B removed the wrong index, should have removed 33999 instead
const veryLargeArray = [
// ...
];
app.get('/remove', (req, res) => {
const id = req.query.id;
const i = veryLargeArray.findIndex(val => val.id === id);
veryLargeArray.splice(i, 1);
});
Example C:
What if we add an async operation into Example B?
const veryLargeArray = [
// ...
];
app.get('/remove', (req, res) => {
const id = req.query.id;
const i = veryLargeArray.findIndex(val => val.id === id);
someAsyncFunction().then(() => {
veryLargeArray.splice(i, 1);
});
});
It was kind of hard to find the right words to describe this question. Please feel free to update the title.
As per @ThisIsNoZaku's link, JavaScript has a 'run to completion' principle:
Each message is processed completely before any other message is processed.
This offers some nice properties when reasoning about your program, including the fact that whenever a function runs, it cannot be pre-empted and will run entirely before any other code runs (and can modify data the function manipulates). This differs from C, for instance, where if a function runs in a thread, it may be stopped at any point by the runtime system to run some other code in another thread.
A downside of this model is that if a message takes too long to complete, the web application is unable to process user interactions like click or scroll. The browser mitigates this with the "a script is taking too long to run" dialog. A good practice to follow is to make message processing short and if possible cut down one message into several messages.
Further reading: https://developer.mozilla.org/en-US/docs/Web/JavaScript/EventLoop
So, for:
Example A: This works perfectly fine as a sitecounter.
Example B: This works perfectly fine as well, but if many requests happen at the same time then the last request submitted will be waiting quite some time.
Example C: If another call to /remove is sent before someAsyncFunction finishes, then it is entirely possible that your array will be invalid. The way to resolve this would be to move the index finding into the .then clause of the async function, as sketched below.
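Here is that sketch, reusing the names from Example C (sending a response is an addition so the request completes):

app.get('/remove', (req, res) => {
  const id = req.query.id;
  someAsyncFunction().then(() => {
    // look up the index only after the async work finishes, so no other
    // handler can have shifted the array between findIndex and splice
    const i = veryLargeArray.findIndex(val => val.id === id);
    if (i !== -1) veryLargeArray.splice(i, 1);
    res.sendStatus(200);
  });
});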
IMO, at the cost of latency, this solves a lot of potentially painful concurrency problems. If you must optimise the speed of your requests, then my advice would be to look into different architectures (additional caching, etc).
I have a set of API endpoints in Express. One of them receives a request and starts a long running process that blocks other incoming Express requests.
My goal is to make this process non-blocking. To better understand the inner logic of the Node event loop and how to do this properly, I want to replace the long-running function with a dummy long-running blocking function that starts when I send a request to its endpoint.
I suppose that different ways of making the dummy function blocking could cause Node to manage the blocking differently.
So, my question is - how can I make a basic blocking process as a function that would run infinitely?
You can use node-webworker-threads.
var Worker = require('webworker-threads').Worker;

// spawn five workers, each computing Fibonacci numbers off the main thread
for (var i = 0; i < 5; i++) {
  var worker = new Worker(workerBody);
  worker.onmessage = onResult;
  worker.postMessage(Math.ceil(Math.random() * 30));
}

// keep the event loop spinning so the process stays responsive
(function spin() {
  setImmediate(spin);
})();

function workerBody() {
  function fibo(n) {
    return n > 1 ? fibo(n - 1) + fibo(n - 2) : 1;
  }
  this.onmessage = function (event) {
    postMessage(fibo(event.data));
  };
}

function onResult(event) {
  console.log("[" + this.thread.id + "] " + event.data);
  this.postMessage(Math.ceil(Math.random() * 30));
}
https://github.com/audreyt/node-webworker-threads
So, my question is - how can I make a basic blocking process as a function that would run infinitely?
function block() {
// not sure why you need that though
while(true);
}
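If an endless loop is too blunt (block() can never return), a time-boxed variant blocks just as completely, but only for a fixed duration:

function blockFor(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // busy-wait: the event loop can process nothing else until this returns
  }
}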
I suppose, that different ways of making the dummy function blocking could cause Node manage these blockings differently.
Not really. I can't think of a "special way" to block the engine differently.
My goal to make this process non-blocking.
If it is really that long-running, you should offload it to another thread.
There are shortcut ways to do a quick fix if it's a one-time thing; you can use an npm module that does the job.
But the right way to do it is to set up a common design pattern called 'work queues'. You will need a queuing mechanism, like RabbitMQ, ZeroMQ, etc. Whenever you get a computation-heavy task, instead of doing it in the same thread, you send it to the queue with the relevant id values. A separate Node process, commonly called a 'worker' process, listens for new items on the queue and processes them as they arrive. This is the worker queue pattern, and you can read up on it here:
https://www.rabbitmq.com/tutorials/tutorial-one-javascript.html
I would strongly advise you to learn this pattern as you would come across many tasks that would require this kind of mechanism. Also with this in place you can scale both your node servers and your workers independently.
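As a rough sketch of the worker side, using the amqplib client from the RabbitMQ tutorial linked above (the queue name 'tasks' and doHeavyComputation are placeholders):

const amqp = require('amqplib');

async function startWorker() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks', { durable: true });
  ch.prefetch(1); // take one task at a time per worker process
  ch.consume('tasks', msg => {
    const task = JSON.parse(msg.content.toString());
    doHeavyComputation(task); // the computation-heavy work
    ch.ack(msg); // acknowledge only after the work is done
  });
}

startWorker();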
I am not sure what exactly your 'long processing' is, but in general you can approach this kind of problem in two different ways.
Option 1:
Use the webworker-threads module as @serkan pointed out. The usual 'thread' limitations apply in this scenario. You will need to communicate with the worker via messages.
This method is preferable only when the logic is too complicated to be broken down into smaller independent problems (explained in option 2). Depending on the complexity, you should also consider whether native code would better serve the purpose.
Option 2:
Break down the problem into smaller problems. Solve a part of the problem, schedule the next part to be executed later, and yield to let NodeJS process other events.
For example, consider the following example for calculating the factorial of a number.
Sync way:
function factorial(inputNum) {
let result = 1;
while(inputNum) {
result = result * inputNum;
inputNum--;
}
return result;
}
Async way:
function factorial(inputNum) {
  return new Promise(resolve => {
    let result = 1;
    const calcFactOneLevel = () => {
      result = result * inputNum;
      inputNum--;
      if (inputNum) {
        // schedule the next multiplication on a later tick and yield
        return process.nextTick(calcFactOneLevel);
      }
      resolve(result);
    };
    calcFactOneLevel();
  });
}
The code in the second example will not block the Node process. You can send the response when the returned promise resolves.
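For example, wiring it into an Express route (the route path and query parameter are assumptions):

app.get('/factorial', (req, res) => {
  factorial(Number(req.query.n)).then(result => {
    // other requests were serviced between the process.nextTick steps
    res.send(String(result));
  });
});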
PROBLEM: I am trying to speed up my IndexedDB searches by using multiple web workers and therefore executing multiple read transactions simultaneously, but it's not really working, and my CPU only gets to around 30-35% utilization. I have a 4-core processor and was hoping that spawning 4 web workers would dramatically reduce the search time.
I am using Firefox 53 with a WebExtension; other browsers are not an option.
DATABASE: I have a data store with about 250,000 records, each with about 30 keys, some of them containing paragraphs of text.
TASK: Perform a string search on a given key to find matching values. Currently, this takes about 90 seconds to do on a single thread. Adding an additional worker reduces that time to about 75 seconds. More workers than that have no noticeable effect. An acceptable time to me would be under 10 seconds (somewhat comparable to an SQL database).
CURRENT STRATEGY: Spawn a worker for each processor, and create a Promise that resolves when the worker sends a message. In each worker, open the database, divide the records up evenly, and search for the string. I do that by starting on the first record if you're the first worker, second record for the second, etc. Then advance by the number of workers. So the first worker checks records 1, 5, 9, etc. Second worker checks 2, 6, 10, etc. Of course, I could also have the first worker check 1-50, second worker check 51-100, etc. (but obviously thousands each).
Using getAll() on a single thread took almost double the time and 4GB of memory. Splitting that into 4 ranges significantly reduces the time down to a total of about 40 seconds after merging the results (the 40 seconds varies wildly every time I run the script).
Any ideas on how I can make this work, or other suggestions for significantly speeding up the search?
background.js:
var key = whatever, val = something; // placeholders: the index name and the search value
var proc = navigator.hardwareConcurrency; // Number of processors
var wPromise = []; // Array of promises (one for each worker)
var workers = [];
/* Create a worker for each processor */
for (var pos = 0; pos < proc; pos++) {
workers[pos] = new Worker("js/dbQuery.js");
wPromise.push(
new Promise( resolve => workers[pos].onmessage = resolve )
);
workers[pos].postMessage({key:key, val:val, pos:pos, proc:proc});
}
return Promise.all(wPromise); // Do something once all the workers have finished
dbQuery.js:
onmessage = e => {
var data = e.data;
var req = indexedDB.open("Blah", 1);
req.onsuccess = e => {
var keyArr = [];
var db = e.currentTarget.result;
db.transaction("Blah").objectStore("Blah").index(data.key).openKeyCursor().onsuccess = e => {
var cursor = e.target.result;
if (cursor) {
if (data.pos) {
cursor.advance(data.pos); // Start searching at a position based on which web worker
data.pos = false;
}
else {
if (cursor.key.includes(data.val)) {
keyArr.push(cursor.primaryKey); // Store key if value is a match
}
cursor.advance(data.proc); // Advance position based on number of processors
}
}
else {
db.close();
postMessage(keyArr);
close();
}
}
}
}
Any ideas on how I can make this work, or other suggestions for significantly speeding up the search?
You can substitute Promise.race() for Promise.all() to return a resolved Promise once a match is found, instead of waiting for all of the Promises passed to Promise.all() to be resolved.
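A sketch of how that could look with your wPromise array, assuming each worker posts an array of matching keys (possibly empty), as in dbQuery.js:

function firstMatch(promises) {
  const matches = promises.map(p =>
    new Promise(resolve => p.then(e => { if (e.data.length) resolve(e.data); }))
  );
  // fall back to an empty result if every worker finishes without a match
  matches.push(Promise.all(promises).then(() => []));
  return Promise.race(matches);
}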
When I run the following code 9999999+ times, Node returns with:
FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory
Aborted (core dumped)
What's the best solution to get around this issue, other than increasing the max alloc size or passing command-line arguments?
I'd like to improve the code quality rather than hack a solution.
The following is the main bulk of recursion within the application.
The application is a load testing tool.
a.prototype.createClients = function(){
for(var i = 0; i < 999999999; i++){
this.recursiveRequest();
}
}
a.prototype.recursiveRequest = function(){
var self = this;
self.hrtime = process.hrtime();
if(!this.halt){
self.reqMade++;
this.http.get(this.options, function(resp){
resp.on('data', function(){})
.on("connection", function(){
})
.on("end", function(){
self.onSuccess();
});
})
.on("error", function(e){
self.onError();
});
}
}
a.prototype.onSuccess = function(){
var elapsed = process.hrtime(this.hrtime),
ms = elapsed[0] * 1000000 + elapsed[1] / 1000
this.times.push(ms);
this.successful++;
this.recursiveRequest();
}
Looks like you should really be using a queue instead of recursive calls. async.queue offers a fantastic mechanism for processing asynchronous queues. You should also consider using the request module to make your http client connections simpler.
var async = require('async');
var request = require('request');
var load_test_url = 'http://www.testdomain.com/';
var parallel_requests = 1000;
function requestOne(task, callback) {
request.get(task.url, function(err, connection, body) {
if(err) return callback(err);
q.push({url:load_test_url});
callback();
});
}
var q = async.queue(requestOne, parallel_requests);
for(var i = 0; i < parallel_requests; i++){
q.push({url:load_test_url});
}
You can set the parallel_requests variable according to how many simultaneous requests you want to hit the test server with.
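You can also attach a drain callback to find out when the queue has emptied (assignment style as in async v2; async v3 uses q.drain(fn) instead):

q.drain = function() {
  console.log('all queued requests have completed');
};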
You are launching 1 billion "clients" in parallel, and having each of them perform an http get request recursively in an endless recursion.
Few remarks:
while your question mentions 10 million clients, your code creates 1 billion clients.
You should replace the for loop with a recursive function to get rid of the out-of-memory error.
Something along these lines:
a.prototype.createClients = function(i){
if (i < 999999999) {
this.recursiveRequest();
this.createClients(i+1);
}
}
Then, you probably want to include some delay between the client creations, or between the calls to recursiveRequest. Use setTimeout.
You should have a way to stop the recursion (onSuccess and recursiveRequest keep calling each other).
A flow-control library like the async Node.js module may help.
10 million is very large... Assuming that the stack supports any number of calls, it should work, but you are likely asking the JavaScript interpreter to load 10 million x quite a bit of memory... and the result is Out of Memory.
Also, I personally do not see why you'd want to have so many requests at the same time (testing a heavy load on a server?). One way to optimize is to NOT create "floating functions", which you are doing a lot. "Floating functions" use their own set of memory on each instantiation.
this.http.get(this.options, function(resp){ ... });
                            ^^^^^^^^^^^^^^
                            +--- allocates memory x 10 million
Here the function(resp)... declaration allocates more memory on each call. What you want to do is:
// either in global scope:
function r(resp) {...}
this.http.get(this.options, r ...);
// or as a static member:
a.r = function(resp) {...};
this.http.get(this.options, a.r ...);
At least you'll save on all that function memory. That goes for all the functions you declare within the r function too, of course. Especially if they are quite large.
If you want to use the this pointer (make r a prototype function) then you can do that:
a.prototype.r = function(resp) {...};
// note that we have to have a small function to use 'that'... probably not a good idea
var that = this;
this.http.get(this.options, function(){that.r();});
To avoid the that reference, you may use an instance saved in a global. That defeats the use of an object as such though:
a.instance = new a;
// r() is static, but can access the object as follow:
a.r = function(resp) { a.instance.<func>(); }
Using the instance you can access the object's functions from the static r function. That could be the actual implementation which could make full use of the this reference:
a.r = function(resp) { a.instance.r_impl(); }
According to a comment by Daniel, your problem is that you misuse a for() loop to count the total number of requests you want to send. This means you can apply a very simple fix to your code, as follows:
a.prototype.createClients = function(){
this.recursiveRequest();
};
a.prototype.recursiveRequest = function(){
var self = this;
self.hrtime = process.hrtime();
if(!this.halt && this.successful < 10000000){
...
The recursion alone is enough to run the test any number of times.
What you do is never quit, though. You have a halt variable, but it does not look like you ever set it to true. However, to test 10 million times, you want to check the number of requests you have already sent.
My "fix" supposes that onError() stops (is not recursive). You could also change the code to make use of the halt flag, as in:
a.prototype.onSuccess = function(){
var elapsed = process.hrtime(this.hrtime),
ms = elapsed[0] * 1000000 + elapsed[1] / 1000
this.times.push(ms);
this.successful++;
if(this.successful >= 10000000)
{
this.halt = true;
}
this.recursiveRequest();
}
Note here that you will be pushing ms into the times buffer 10 million times. That's a big table! You may want to keep a running total instead and compute the average at the end:
this.time += ms;
// at the end:
this.average_time = this.time / this.successful;