I'm developing a JavaScript application in which I need to implement a fixed 64 KB memory block. The block could be anything: an object, an array, a buffer, I don't know what. It should behave like a 64 KB physical memory chip that I can address and store data in. Addresses will be 16 bits, and each location holds 8 bits of data. How can I implement this? Can you recommend any npm packages?
I think most browsers support Uint8Array nowadays:
const buffer = new Uint8Array(65536);
let index = 123;
buffer[index] = 42;
console.log( buffer[index] );
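If you want it to feel more like a memory chip, here is a minimal sketch (the MemoryChip class and its read/write methods are my own illustration, not an npm package) that masks addresses to 16 bits, the way a real address bus would:
class MemoryChip {
    constructor() {
        this.memory = new Uint8Array(65536); // 2^16 locations, 8 bits each
    }
    read(addr) {
        // Mask the address to the 16-bit range, like a 16-line address bus.
        return this.memory[addr & 0xFFFF];
    }
    write(addr, value) {
        // Uint8Array already stores values modulo 256, so each cell is 8 bits.
        this.memory[addr & 0xFFFF] = value;
    }
}

const chip = new MemoryChip();
chip.write(0x1234, 0xAB);
console.log(chip.read(0x1234)); // 171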
I have a question. The following code has a memory leak:
let inMemoryCache = {};
app.get("/hello", (req, resp) => {
    // Every request adds a new key that is never removed.
    const unixTimeStamp = Date.now(); // assuming a Unix timestamp key was intended
    inMemoryCache[unixTimeStamp] = { "foo": "bar" };
    resp.json({});
});
Doesn't it? The size of the inMemoryCache object will keep increasing with each request until it hits the ceiling and the heap explodes.
What, then, is the best way to implement in-memory caches?
Manage the size of your cache somehow. For example, you could keep a fixed number of entries, and once the cache has reached its maximum size and a new entry comes in, delete the oldest (or least frequently used, or least recently used, or some other selection criterion).
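Here, for instance, is a minimal least-recently-used sketch (the LRUCache class and its size limit are illustrative, not a library API); it relies on Map remembering insertion order:
class LRUCache {
    constructor(maxEntries = 1000) {
        this.maxEntries = maxEntries;
        this.map = new Map();
    }
    get(key) {
        if (!this.map.has(key)) return undefined;
        const value = this.map.get(key);
        // Re-insert so this entry becomes the most recently used.
        this.map.delete(key);
        this.map.set(key, value);
        return value;
    }
    set(key, value) {
        if (this.map.has(key)) {
            this.map.delete(key);
        } else if (this.map.size >= this.maxEntries) {
            // Evict the least recently used entry (first in insertion order).
            this.map.delete(this.map.keys().next().value);
        }
        this.map.set(key, value);
    }
}

const inMemoryCache = new LRUCache(10000); // bounded, unlike the plain object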
Caching is harder than it seems :-)
I'm trying to understand how the conversion of C code to WebAssembly and the JavaScript interop work in the background, and I'm having problems getting a simple string from a function parameter.
My program is a simple Hello World, and I'm trying to "emulate" a printf/puts.
More or less the C equivalent I want to build:
#include <stdio.h>

int main() {
    puts("Hello World\n");
}
You can see a working example here.
My best idea currently is to read 16-bit chunks of memory at a time (since wasm seems to allocate them in 16-bit intervals) and check for the null termination.
function get_string(memory, addr) {
    var length = 0;
    while (true) {
        // Scan the next 16-byte window for the null terminator.
        let buffer = new Uint8Array(memory.buffer, addr + length, 16);
        let term = buffer.indexOf(0);
        length += term == -1 ? 16 : term;
        if (term != -1) break;
    }
    const strBuf = new Uint8Array(memory.buffer, addr, length);
    return new TextDecoder().decode(strBuf);
}
But this seems really clumsy. Is there a better way to read a string from memory if you only know the start address?
And is it really necessary that I only read 16-bit chunks at a time?
I couldn't find any information on whether creating a typed array over the memory counts as accessing the whole memory, or whether that only happens when I try to get the data out of the array.
WebAssembly allocates memory in 64 KiB pages. Maybe this is where the 16-bit thing came from, because 16 bits can address 64 KiB. However, this is irrelevant to the task at hand: the WebAssembly memory is just a contiguous address space, so there isn't much difference between the memory object and an ArrayBuffer of the given size, if there's any difference at all.
The 16-byte window at a time isn't necessary either (somewhere along the way, 16 bits became 16 bytes).
You can simply create a view of the rest of the buffer, without any performance penalty, in the following way:
function get_string(memory, addr) {
    // Creating a typed-array view copies nothing; it just describes a region.
    let buffer = new Uint8Array(memory.buffer, addr, memory.buffer.byteLength - addr);
    // Find the null terminator (this assumes the string actually has one).
    let term = buffer.indexOf(0);
    return new TextDecoder().decode(buffer.subarray(0, term));
}
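For reference, a quick way to try it out (illustrative only; the memory contents here are written by hand rather than by a compiled module):
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page
// Write a null-terminated string at offset 0, as a C module would.
new Uint8Array(memory.buffer).set(new TextEncoder().encode("Hello World\0"));
console.log(get_string(memory, 0)); // "Hello World"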
I'm using xxHash to create hashes from element ids; I just don't want to show the real ids on the website. I wrote a script to test whether it can produce the same hash for different inputs:
const _ = require('lodash');
const XXH = require('xxhashjs');
let hashes = []
let uniq_hashes = []
for(let i = 0; i < 1000000; i++){
var h = XXH.h32(i.toString(), 0xABCD).toString(16)
hashes.push(h)
}
uniq_hashes = _.uniq(hashes)
console.log(hashes.length, uniq_hashes.length);
The log from the script is 1000000 999989, so some hashes were the same. Is this how xxHash is supposed to work?
Also, the first colliding pair of inputs is '1987' and '395360'.
If I need really unique hashes (nothing cryptographic), what should I use?
By the birthday paradox you should start seeing collisions once the number of inputs approaches √(2^32) = 2^16; a rough estimate is 10^6 / 2^16 ≈ 15, so 11 collisions seems about right. (Note: the math is grossly simplified; see the Birthday problem for the proper analysis.)
To reduce the number of collisions, increase the hash size and use a cryptographic hash such as SHA-256. Cryptographic hash functions are designed to avoid collisions.
You should use a hash with a larger digest. Even a 32-bit chunk of a secure cryptographic hash will produce collisions eventually.
Since you are using Node.js and want something faster than cryptographic hashes, try MetroHash128, murmur128, or CityHash128. There's also CityHash256 if you want to go completely overboard. These should be very fast thanks to their C++ bindings, and the chance of a random collision becomes astronomically small.
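If you'd rather avoid native dependencies, a 128-bit slice of SHA-256 from Node's built-in crypto module gives the same practical collision resistance (a sketch; shortHash is my name for it, not one of the packages above):
const crypto = require('crypto');

// Take the first 128 bits (32 hex chars) of SHA-256 as a short id.
function shortHash(input) {
    return crypto.createHash('sha256').update(String(input)).digest('hex').slice(0, 32);
}

console.log(shortHash(1987) === shortHash(395360)); // false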
I am using this bit of code to reformat some large Ajax responseText into good binary data. It works, albeit slowly.
The data that I am working with can be as large as 8-10 MB.
I need this code to be absolutely efficient. How would loop unrolling or Duff's device be applied to this code while still keeping my binary data intact, or does anyone see anything else that could be changed to help increase its speed?
var ff = [];
var mx = text.length;
var scc= String.fromCharCode;
for (var z = 0; z < mx; z++) {
ff[z] = scc(text.charCodeAt(z) & 255);
}
var b = ff.join("");
this.fp=b;
return b;
Thanks
Pat
Your time hog isn't the loop. It's this: ff[z] = scc(text.charCodeAt(z) & 255); Are you incrementally growing ff? That will be a pig, guaranteed.
If you just run it under the debugger and pause it, I bet you will see it in the process of growing ff. Pre-allocate.
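Something along these lines (a sketch of the pre-allocation idea; toBinaryString is my name for it, not part of the original code):
function toBinaryString(text) {
    var mx = text.length;
    var ff = new Array(mx); // allocated once up front, never grown
    var scc = String.fromCharCode;
    for (var z = 0; z < mx; z++) {
        ff[z] = scc(text.charCodeAt(z) & 255);
    }
    return ff.join("");
}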
Convert the data to a JSON array on the server. 8-10 MB will take a long time to process even with a native JSON engine. I'm not sure why a JS application needs 8-10 MB of data in it anyway. If you are downloading it to the client's device, convert it to a format they expect and just link to it; they can download and process it themselves.
I need to encode and decode IEEE 754 floats and doubles from binary in node.js to parse a network protocol.
Are there any existing libraries that do this, or do I have to read the spec and implement it myself? Or should I write a C module to do it?
Note that as of node 0.6 this functionality is included in the core library, so that is the new best way to do it.
See http://nodejs.org/docs/latest/api/buffer.html for details.
If you are reading/writing binary data structures you might consider using a friendly wrapper around this functionality to make things easier to read and maintain. Plug follows: https://github.com/dobesv/node-binstruct
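For example (a quick sketch using today's Buffer API; note that Buffer.alloc arrived later than Node 0.6, which used the Buffer constructor instead):
const buf = Buffer.alloc(8);

buf.writeFloatBE(5.0, 0);         // encode a 32-bit float, big-endian
console.log(buf.readFloatBE(0));  // 5

buf.writeDoubleLE(Math.PI, 0);    // encode a 64-bit double, little-endian
console.log(buf.readDoubleLE(0)); // 3.141592653589793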
I ported a C++ (made with GNU GMP) converter with float128 support to Emscripten so that it would run in the browser: https://github.com/ysangkok/ieee-754
Emscripten produces JavaScript that will run on Node.js too. You will get the float representation as a string of bits, though; I don't know if that's what you want.
In modern JavaScript (ECMAScript 2015) you can use ArrayBuffer and Float32Array/Float64Array. I solved it like this:
// 0x40a00000 is "5" in float/IEEE-754 32bit.
// You can check this here: https://www.h-schmidt.net/FloatConverter/IEEE754.html
// MSB (Most significant byte) is at highest index
const bytes = [0x00, 0x00, 0xa0, 0x40];
// The buffer is like a raw view into memory.
const buffer = new ArrayBuffer(bytes.length);
// The Uint8Array uses the buffer as its memory.
// This way we can store data byte by byte
const byteArray = new Uint8Array(buffer);
for (let i = 0; i < bytes.length; i++) {
byteArray[i] = bytes[i];
}
// float array uses the same buffer as memory location
const floatArray = new Float32Array(buffer);
// floatValue is a "number", because a number in javascript is a
// double (IEEE-754 # 64bit) => it can hold f32 values
const floatValue = floatArray[0];
// prints out "5"
console.log(`${JSON.stringify(bytes)} as f32 is ${floatValue}`);
// double / f64
// const doubleArray = new Float64Array(buffer);
// const doubleValue = doubleArray[0];
PS: This works in Node.js as well as in Chrome, Firefox, and Edge.
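If you need a specific byte order regardless of the platform, DataView lets you say so explicitly (a small sketch):
const dv = new DataView(new ArrayBuffer(8));

dv.setFloat32(0, 5.0, false);         // false = big-endian
console.log(dv.getFloat32(0, false)); // 5

dv.setFloat64(0, Math.PI, true);      // true = little-endian
console.log(dv.getFloat64(0, true));  // 3.141592653589793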