Getting bits from a JavaScript Uint8ClampedArray

I have a node server running, and in the server I generate and store into Redis a bunch of bits representing colours on a canvas. Every four stored bits represent a 4-bit colour (e.g. if I store 0010011010111111, then the colours I'm interested in are 0010, 0110, 1011, and 1111).
var byteArr = new ArrayBuffer(360000);
redis_client.set("board", byteArr);
redis_client.setbit("board", 360000, 0);
// There is some garbage at the beginning of the stored value, so zeroing them out.
for(var i = 0; i < 160; i++){
  redis_client.setbit("board", i, 0);
}
When a client connects to a server, I grab this string from Redis, and send it through a Websocket:
wss.on('connection', function(ws) {
  redis_client.get("board", function (error, result) {
    var initial_send = {"initial_send":true, "board":result};
    ws.send(JSON.stringify(initial_send));
  });
});
On the client side, I read the board like so:
socket.onmessage = function (event) {
  var o = JSON.parse(event.data);
  board = o["board"];
  var clampedBoard = new Uint8ClampedArray(board.length);
  for(var i = 0; i < board.length; i++){
    clampedBoard = board[i];
  }
}
At this point the length of the board is 45000. I believe this is because in JavaScript the smallest TypedArray unit is one byte, so since my initial ArrayBuffer was 360000 in size, when I receive it on the client it is of size 360000/8.
This is where I'm having issues. At this point clampedBoard[0] should give me the first 8 bits (the first two colours I care about), and I can do clampedBoard[0]>>4 and clampedBoard[0]&15 to get those two colours, which I can then look up in a map where the keys are 0000, 0001, etc.
But that isn't what is happening.
Here's what I've tried, and what I know:
Printing values out on the client side: console.log(clampedBoard[0]) gives back a []-looking null/undefined character in Chrome's console.
On the server side, when initializing byteArr and clearing the first 160 bits to 0, I manually set the first 8 bits to '00111111', which is the binary representation of the ASCII character '?'.
When console.logging clampedBoard[0] on the client side, I get the same [] null/undefined character, but when console.logging board[0], it prints out a '?'. I'm not sure why this is so.
And when attempting to look up in my map of colours by doing clampedBoard[0]>>4, it always defaults to the key in the dictionary that represents 0, even though it should be 0011.
If there is any more information I can provide, please let me know.

First of all, you can do:
board = o["board"];
var clampedBoard = new Uint8ClampedArray(board);
So you won't need the value initialization loop.
Also, the way you were passing it to the object, the "stringification" will convert it to a string instead of an array. In order to get an array, you'll need to create a Node.js Buffer out of the ArrayBuffer, then convert it into a native array, like so:
var view = Buffer.from(result);
var initial_send = {"initial_send":true, "board":[...view]};
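With the board arriving as a plain array of bytes, the client can then build the typed array directly and pull out the two 4-bit colours per byte. A minimal sketch (variable names are illustrative):
socket.onmessage = function (event) {
  var o = JSON.parse(event.data);
  var clampedBoard = new Uint8ClampedArray(o["board"]);
  // Each byte holds two colours: the high nibble and the low nibble.
  var firstColour = clampedBoard[0] >> 4;  // e.g. 0b0011 -> 3
  var secondColour = clampedBoard[0] & 15;
};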

Related

WebAssembly: Correct way to get a string from a parameter (with memory address) in JavaScript

I'm trying to understand how the conversion of C code to WebAssembly and the JavaScript interop works in the background. And I'm having problems getting a simple string from a function parameter.
My program is a simple Hello World, and I'm trying to "emulate" a printf/puts.
More or less the C equivalent I want to build:
int main() {
puts("Hello World\n");
}
You can see a working example here.
My best idea currently is to read 16-bit chunks of memory at a time (since wasm seems to allocate it in 16-bit intervals) and check for the null termination.
function get_string(memory, addr) {
  var length = 0;
  while (true) {
    // Advance by the length found so far so each window looks at new bytes.
    let buffer = new Uint8Array(memory.buffer, addr + length, 16);
    let term = buffer.indexOf(0);
    length += term == -1 ? 16 : term;
    if (term != -1) break;
  }
  const strBuf = new Uint8Array(memory.buffer, addr, length);
  return new TextDecoder().decode(strBuf);
}
But this seems really clumsy. Is there a better way to read a string from the memory if you only know the start address?
And is it really necessary that I only read 16bit chunks at a time?
I couldn't find any information on whether creating a typed array over the memory counts as accessing the whole memory, or whether that only happens when I try to get the data from the array.
WebAssembly allocates memory in 64k pages. Maybe this is where the 16-bit thing came from, because 16 bits can address 64 kbytes. However, this is irrelevant to the task at hand: the WebAssembly memory is just a contiguous address space, and there isn't much difference between the memory object and an ArrayBuffer of the given size, if there's any at all.
The 16-byte window at a time isn't necessary either (somehow 16 bits became 16 bytes).
You can do it simply without any performance penalty and create a view of the rest of the buffer in the following way:
function get_string(memory, addr) {
  let buffer = new Uint8Array(memory.buffer, addr, memory.buffer.byteLength - addr);
  let term = buffer.indexOf(0);
  return new TextDecoder().decode(buffer.subarray(0, term));
}
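As a usage sketch, get_string could back an imported puts. This assumes the module exports its memory as memory, imports env.puts, and exports a main to call; all of that depends on how the module was actually built:
let memory; // filled in once the module has been instantiated

const imports = {
  env: {
    puts: function (addr) {
      console.log(get_string(memory, addr));
    }
  }
};

WebAssembly.instantiate(wasmBytes, imports).then(function (result) {
  memory = result.instance.exports.memory; // assumes the module exports its memory
  result.instance.exports.main();          // assumes main is exported
});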

Create typed array from part of a binary buffer Javascript

I have a binary buffer, the first half contains data meant to be read as an int using the Uint32 view. The second half is meant to be read as a char using the Uint8 view.
However the problem is the length of the char data is never guaranteed to be divisible by 4.
So if the length of the int data is 7, and the length of the char data is 5 then when I go to make the arrays I get a response like this:
var uint8Array = new Uint8Array(buffer);
var uint32Array = new Uint32Array(buffer);
console.log(uint8Array.length); // 33 (it's 33 because 7*4 + 5 = 33)
console.log(uint32Array.length); // Error: Array Buffer out of bounds
So as you can see, I can't create the Uint32 array because the buffer's length isn't divisible by the size of an int. However, I only need the first half of the data in that Uint32 array.
Is there any way to fix this problem without creating a new buffer? For performance reasons I was hoping to read the same data in memory just using different views, rather than separating the buffer (which would mean multiple downloads, as I get this buffer from an XHR request).
I tried to do this:
var uint8Array= new Uint8Array(buffer);
var padding = uint8Array.length + (4 - uint8Array%4);
var uint32Array = new Uint32Array(buffer- padding);
But that just makes uint32Array be undefined.
If you want to initialize a Uint32Array from the largest aligned segment of a given array buffer, you can do this:
var byteLength = buffer.byteLength;
var alignment = Uint32Array.BYTES_PER_ELEMENT;
var alignedLength = byteLength - (byteLength % alignment);
var alignedBuffer = buffer.slice(0, alignedLength);
var uint32Array = new Uint32Array(alignedBuffer);
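Note that buffer.slice copies the aligned prefix into a new buffer. If the goal is to avoid any copy at all, the Uint32Array constructor also accepts an explicit element count, so you can keep a view over the original buffer. This sketch assumes you know how many 32-bit values the first half holds (7 in the example above):
var intCount = 7; // number of 32-bit values at the start, known from the data layout
var uint32View = new Uint32Array(buffer, 0, intCount); // a view, no new buffer
var uint8View = new Uint8Array(buffer, intCount * 4);  // the trailing char data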

How to implement a robust hash table like v8

Looking to learn how to implement a hash table in a decent way in JavaScript.
I would like for it to be able to:
Efficiently resolve collisions,
Be space efficient, and
Be unbounded in size (at least in principle, like v8 objects are, up to the size of the system memory).
From my research and help from SO, there are many ways to resolve collisions in hash tables. The way v8 does it is Quadratic probing:
hash-table.h
The wikipedia algorithm implementing quadratic probing in JavaScript looks something like this:
var i = 0
var SIZE = 10000
var key = getKey(arbitraryString)
var hash = key % SIZE
if (hashtable[hash]) {
  while (i < SIZE) {
    i++
    hash = (key + i * i) % SIZE
    if (!hashtable[hash]) break
    if (i == SIZE) throw new Error('Hashtable full.')
  }
  hashtable[hash] = key
} else {
  hashtable[hash] = key
}
The elements that are missing from the wikipedia entry are:
How to compute the hash, i.e. getKey(arbitraryString). I'm hoping to learn how v8 does this (not necessarily an exact replica, just along the same lines). Not being proficient in C, it looks to me like the key is an object and the hash is a 32-bit integer. Not sure if lookup-cache.h is important.
How to make it dynamic so the SIZE constraint can be removed.
Where to store the final hash, and how to compute it more than once.
V8 allows you to specify your own "Shape" object to use in the hash table:
// The hash table class is parameterized with a Shape.
// Shape must be a class with the following interface:
// class ExampleShape {
// public:
// // Tells whether key matches other.
// static bool IsMatch(Key key, Object* other);
// // Returns the hash value for key.
// static uint32_t Hash(Isolate* isolate, Key key);
// // Returns the hash value for object.
// static uint32_t HashForObject(Isolate* isolate, Object* object);
// // Convert key to an object.
// static inline Handle<Object> AsHandle(Isolate* isolate, Key key);
// // The prefix size indicates number of elements in the beginning
// // of the backing storage.
// static const int kPrefixSize = ..;
// // The Element size indicates number of elements per entry.
// static const int kEntrySize = ..;
// // Indicates whether IsMatch can deal with other being the_hole (a
// // deleted entry).
// static const bool kNeedsHoleCheck = ..;
// };
But I'm not sure what the key is, or how they convert that key to the hash so that keys are evenly distributed and the hash function isn't just a hello-world example.
The question is, how to implement a quick hash table like V8 that can efficiently resolve collisions and is unbounded in size. It doesn't have to be exactly like V8 but have the features outlined above.
In terms of space efficiency, a naive approach would do var array = new Array(10000), which would eat up a bunch of memory until it was filled out. Not sure how v8 handles it, but if you do var x = {} a bunch of times, it doesn't allocate a bunch of memory for unused keys, somehow it is dynamic.
I'm stuck here essentially:
var m = require('node-murmurhash')
function HashTable() {
  this.array = new Array(10000)
}
HashTable.prototype.key = function(value){
  // not sure if the key is actually this, or
  // the final result computed from the .set function,
  // and if so, how to store that.
  return m(value)
}
HashTable.prototype.set = function(value){
  var key = this.key(value)
  var array = this.array
  // not sure how to get rid of this constraint.
  var SIZE = 10000
  var hash = key % SIZE
  var i = 0
  if (array[hash]) {
    while (i < SIZE) {
      i++
      hash = (key + i * i) % SIZE
      if (!array[hash]) break
      if (i == SIZE) throw new Error('Hashtable full.')
    }
    array[hash] = value
  } else {
    array[hash] = value
  }
}
HashTable.prototype.get = function(index){
  return this.array[index]
}
This is a very broad question, and I'm not sure what exactly you want an answer to. ("How to implement ...?" sounds like you just want someone to do your work for you. Please be more specific.)
How to compute the hash
Any hash function will do. I've pointed out V8's implementation in the other question you've asked; but you really have a lot of freedom here.
Not sure if the lookup-cache.h is important.
Nope, it's unrelated.
How to make it dynamic so the SIZE constraint can be removed.
Store the table's current size as a variable, keep track of the number of elements in your hash table, and grow the table when the percentage of used slots exceeds a given threshold (you have a space-time tradeoff there: lower load factors like 50% give fewer collisions but use more memory, higher factors like 80% use less memory but hit more slow cases). I'd start with a capacity that's an estimate of "minimum number of entries you'll likely need", and grow in steps of 2x (e.g. 32 -> 64 -> 128 -> etc.).
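As a rough sketch (not V8's actual code), the growth step might look like the following; it assumes the HashTable from the question is given capacity and count fields and that set probes using this.capacity instead of a hard-coded SIZE:
HashTable.prototype.maybeGrow = function () {
  // Grow once more than ~70% of the slots are in use (the load factor threshold).
  if (this.count / this.capacity <= 0.7) return;
  var oldArray = this.array;
  this.capacity *= 2;            // 32 -> 64 -> 128 -> ...
  this.array = new Array(this.capacity);
  this.count = 0;
  for (var i = 0; i < oldArray.length; i++) {
    // Re-insert every existing value so it lands in its slot for the new size.
    if (oldArray[i] !== undefined) this.set(oldArray[i]);
  }
};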
Where to store the final hash,
That one's difficult: in JavaScript, you can't store additional properties on strings (or primitives in general). You could use a Map (or object) on the side, but if you're going to do that anyway, then you might as well use that as the hash table, and not bother implementing your own thing on top.
and how to compute it more than once.
That one's easy: invoke your hashing function again ;-)
I just want a function getUniqueString(string)
How about this:
var table = new Map();
var max = 0;
function getUniqueString(string) {
  var unique = table.get(string);
  if (unique === undefined) {
    unique = (++max).toString();
    table.set(string, unique);
  }
  return unique;
}
For nicer encapsulation, you could define an object that has table and max as properties.
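For instance, one way that encapsulation might look (just a sketch; the constructor name is illustrative):
function UniqueStringTable() {
  this.table = new Map();
  this.max = 0;
}
UniqueStringTable.prototype.getUniqueString = function (string) {
  var unique = this.table.get(string);
  if (unique === undefined) {
    unique = (++this.max).toString();
    this.table.set(string, unique);
  }
  return unique;
};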

Reading signed 16 bit data in Javascript

I have been banging my head to solve this:
I received raw data from an embedded device. From the documentation, the way to read it into a single value is:
Every two bytes of data can be combined into a single raw wave value. Its value is a signed 16-bit integer that ranges from -32768 to 32767. The first byte represents the high-order byte of the two's-complement value, while the second byte represents the low-order byte. To reconstruct the full raw wave value, simply shift the first byte left by 8 bits and bitwise-OR it with the second byte.
short raw = (Value[0]<<8) | Value[1];
One of the two bytes that I received is "ef". When I use the bitwise operation above, the result does not seem right, as I noticed I never get a single negative value (it's ECG data, so negative values are normal). I believe doing this in JavaScript is not straightforward.
The way I did it was like this:
var raw = "ef"; // just to show one. Actual one is an array of this 2 bytes but in string.
var value = raw.charAt(0) << 8 | raw.charAt(1)
Please advise. Thanks!
EDIT:
I also did like this:
let first = new Int8Array(len); // len is the length of the raw data array
let second = new Int8Array(len);
let values = new Int16Array(len) // to hold the converted value
for(var i=0; i<len; i++)
{
  // arr is the array that contains every two "characters"
  first[i] = arr[i].charAt(0);
  second[i] = arr[i].charAt(1);
  values[i] = first[i] << 8 | second[i];
}
But still every result is positive, no negatives. Can someone verify whether I am doing this correctly, just in case the values actually are all positive :p
It's two's complement: check the top bit of the high byte - byte[high]>>7. If it's 0, do byte[high]<<8 | byte[low]. If it's 1, do -((byte[high]^0xff)<<8 | byte[low]^0xff) - 1. See https://en.wikipedia.org/wiki/Two%27s_complement for an explanation.
Also check out https://developer.mozilla.org/en-US/docs/Web/JavaScript/Typed_arrays. It has Int16 arrays, which are what you want. It might be a ton faster.
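For example, a small sketch of that typed-array route, assuming the incoming data is available as a Uint8Array of raw bytes rather than a string (variable names are illustrative):
// bytes: a Uint8Array holding the raw byte pairs, high-order byte first.
var view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
var values = new Int16Array(bytes.length / 2);
for (var i = 0; i < values.length; i++) {
  // getInt16 reads big-endian by default, matching "high byte first",
  // and returns a signed value, so negatives come out correctly.
  values[i] = view.getInt16(i * 2);
}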
You can use the fact that a JavaScript string is already made of 16-bit units and then make the value signed.
Also, instead of reading 8 bits at a time, just read one unsigned 16-bit value using charCodeAt and let an Int16Array reinterpret it as signed:
var raw = "\u00ef"; // original example
var buf = new Int16Array(1);
buf[0] = raw.charCodeAt(0); // buf[0] now holds a typed 16-bit integer
console.log(buf[0]); // 239; for "\uffef" it prints -17
For the first byte, take the two's complement first, then shift by 8 bits:
let x = raw.charCodeAt(0); // character code of the first character
Then flip x for the one's complement and add 1 for the two's complement, and finally do:
var value = x << 8 | bytevalueof(raw.charCodeAt(1))
This is a question about the raw wave data coming from a Neurosky Mindwave Mobile EEG headset.
You should find three values in the buffer when you read from the device. Perform this operation on the second two to get a correct reading:
var b = reader.buffer(3);
var raw = b[1]*256 + b[2];
if (raw >= 32768) {
  raw = raw - 65536;
}

Presimplify topojson from command line

As far as I understand, topojson.presimplify(JSON) in D3 adds a Z coordinate to each point of the input TopoJSON shape based on its significance, which then allows it to be used for dynamic simplification as in http://bl.ocks.org/mbostock/6245977
This topojson.presimplify() call takes quite a long time to execute on complicated maps, especially in Firefox, where it makes the browser unresponsive for a few seconds.
Can it be baked directly into the topojson file via the command line as it is done with projections:
topojson --projection 'd3.geo.mercator().translate([0,0]).scale(1)' -o cartesian.topo.json spherical.topo.json
I found a workaround which is not quite as simple as I wanted, but it still achieves the same result.
After topojson.presimplify(data) is called, data already holds the pre-simplified geometry with the added Z-axis values.
Then I convert it to a JSON string and manually copy it to a new file with JSON.stringify(data).
However, this conversion to a JSON string has a problem with Infinity values, which often occur for Z and are converted to null by JSON.stringify. Also, when there is a value for the Z coordinate it is usually too precise, and writing all the decimal places takes too much space.
For that reason, before converting data to a JSON string, I trim the numbers:
// Simplifying the map
topojson.presimplify(data);
// Changing Infinity values to 0, limiting decimal points
var arcs = data.arcs;
for (var i1 = arcs.length; i1--;) {
  var arc = arcs[i1];
  for (var i2 = arc.length; i2--;) {
    var v = arc[i2][2];
    if (v === Infinity) arc[i2][2] = 0;
    else {
      arc[i2][2] = Math.round(v * 1e9) / 1e9;
    }
  }
}
This makes Infinity values appear as exactly 0, and other values are trimmed to 9 decimal places, which is enough for the dynamic simplification to work properly.
Since such a string is too long to conveniently print and copy into a new JSON file, it is much easier to store it in the browser's localStorage:
localStorage.setItem(<object name>, JSON.stringify(data))
Then in Safari or Chrome, open the developer console and under Resources -> Local Storage -> <Website URL> the stored object can be found, copied, and pasted into a text editor.
Usually it is pasted as a <key> <value> pair, so the key needs to be removed from the beginning of the pasted string so that it starts with {.
Since Infinity values have been converted to 0, the dynamic simplification function should take this into account, so that points with Z = 0 are treated as Z = Infinity and are always plotted regardless of the simplification area:
point: function(x, y, z) {
  if (z === 0 || z >= simplificationArea) {
    this.stream.point(x, y);
  }
}
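If the goal is just to avoid doing the pre-simplification in the browser, roughly the same workaround can be scripted with Node and written straight to a file. This is only a sketch: it assumes a topojson build that exposes presimplify the way it is used above (newer releases put it in the separate topojson-simplify package, where it may return the presimplified topology instead of mutating in place), and the file names are illustrative:
var fs = require('fs');
var topojson = require('topojson'); // assumed to expose presimplify as used above

var data = JSON.parse(fs.readFileSync('cartesian.topo.json', 'utf8'));

// Same steps as above: pre-simplify, replace Infinity, trim the Z values.
topojson.presimplify(data);
data.arcs.forEach(function (arc) {
  arc.forEach(function (point) {
    point[2] = point[2] === Infinity ? 0 : Math.round(point[2] * 1e9) / 1e9;
  });
});

fs.writeFileSync('presimplified.topo.json', JSON.stringify(data));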
