I have been banging my head to solve this:
I received raw data from an embedded device. From the documentation, the way to read it into a single value is:
Every two bytes of data can be combined into a single raw wave value. Its value is a signed 16-bit integer that ranges from -32768 to 32767. The first byte of the value represents the high-order byte of the two's-complement value, while the second byte represents the low-order byte. To reconstruct the full raw wave value, simply shift the first byte left by 8 bits and bitwise-OR it with the second byte.
short raw = (Value[0]<<8) | Value[1];
One of the two bytes that I received is "ef". When I use the bitwise operation above, the result does not seem right: I noticed I never get a single negative value (it's ECG data, so negative values are normal). I believe doing this in JavaScript is not straightforward.
The way I did it was like this:
var raw = "ef"; // just to show one. Actual one is an array of this 2 bytes but in string.
var value = raw.charAt(0) << 8 | raw.charAt(1)
Please advise. Thanks!
EDIT:
I also tried it like this:
let first = new Int8Array(len);   // len is the length of the raw data array
let second = new Int8Array(len);
let values = new Int16Array(len); // to hold the converted values
for (var i = 0; i < len; i++) {
    // arr is the array that contains every two "characters"
    first[i] = arr[i].charAt(0);
    second[i] = arr[i].charAt(1);
    values[i] = first[i] << 8 | second[i];
}
But still all the results are positive, no negatives. Can someone verify whether I am doing this correctly, just in case the values really are all positive :p
It's two's complement: check the top bit of the high byte, i.e. byte[high] >> 7. If it's 0, do byte[high] << 8 | byte[low]. If it's 1, do -((byte[high] ^ 0xff) << 8 | (byte[low] ^ 0xff)) - 1. See https://en.wikipedia.org/wiki/Two%27s_complement for an explanation.
Also check out https://developer.mozilla.org/en-US/docs/Web/JavaScript/Typed_arrays. It has Int16 arrays, which are what you want. It might also be a ton faster.
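A minimal sketch combining both suggestions, assuming each element of the received array is a two-character string whose character codes are the high and low bytes (the helper name toSigned16 is mine, not from the original code):
function toSigned16(pair) {
    var high = pair.charCodeAt(0) & 0xff; // high-order byte
    var low = pair.charCodeAt(1) & 0xff;  // low-order byte
    var unsigned = (high << 8) | low;     // 0 .. 65535
    // two's complement: anything with the top bit set is negative
    return unsigned >= 0x8000 ? unsigned - 0x10000 : unsigned;
}
console.log(toSigned16("\u00ef\u0000")); // -4352, i.e. 0xef00 read as a signed 16-bit value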
You can use the property that JavaScript strings are already 16-bit, and then make the value signed.
Also, instead of reading 8 bits at a time, just read one unsigned 16-bit value using charCodeAt.
var raw = "\u00ef"; //original example
var buf = new Int16Array(1);
buf[0] = raw.charCodeAt(0); //now in the buf[0] is typed 16 bit integer
//returns 239, for \uffef returns -17
var raw = "\uffef"; //original example
var buf = new Int16Array(1);
buf[0] = raw.charCodeAt(0); //now in the buf[0] is typed 16 bit integer
console.log(buf[0])
For the first byte, take the two's complement first, then shift left by 8 bits:
let x = raw.charCodeAt(0); // character code of the first character
Then flip x for one's complement, add 1 for two's complement, and finally do:
var value = x << 8 | bytevalueof(raw.charCodeAt(1))
This is a question about the raw wave data coming from a Neurosky Mindwave Mobile EEG headset.
You should find three values in the buffer when you read from the device. Perform this operation on the second two to get a correct reading:
var b = reader.buffer(3);
var raw = b[1] * 256 + b[2];
if (raw >= 32768) {
    raw = raw - 65536;
}
Related
I have a Node server running, and in the server I generate and store into Redis a bunch of bits representing colours on a canvas. Every four bits of the stored data represent a 4-bit colour. (For example, if I store 0010011010111111, then the colours I'm interested in are 0010, 0110, 1011, and 1111.)
var byteArr = new ArrayBuffer(360000);
redis_client.set("board", byteArr);
redis_client.setbit("board", 360000, 0);
// There is some garbage at the beginning of the stored value, so zeroing them out.
for (var i = 0; i < 160; i++) {
    redis_client.setbit("board", i, 0);
}
When a client connects to a server, I grab this string from Redis, and send it through a Websocket:
wss.on('connection', function(ws) {
    redis_client.get("board", function (error, result) {
        var initial_send = {"initial_send": true, "board": result};
        ws.send(JSON.stringify(initial_send));
    });
});
On the client side, I read the board like so:
socket.onmessage = function (event) {
    var o = JSON.parse(event.data);
    board = o["board"];
    var clampedBoard = new Uint8ClampedArray(board.length);
    for (var i = 0; i < board.length; i++) {
        clampedBoard = board[i];
    }
}
At this point, the length of the board is 45000. I believe this is because in JavaScript the smallest TypedArray constructor only allows 1-byte units, so because my initial ArrayBuffer was 360000 in size, when I receive it on the client it is of size 360000/8.
This is where I'm having issues. At this point, clampedBoard[0] should give me the first 8 bits (the first two colours I care about), and I can do clampedBoard[0] >> 4 and clampedBoard[0] & 15 to get those two colours, which I can then look up in a map where the keys are 0000, 0001, etc.
But that isn't what is happening.
Here's what I've tried, and what I know:
Printing values out on the client side: console.log(clampedBoard[0]) gives back a []-looking null/undefined character in Chrome's console.
On the server side, when initializing byteArr and clearing the first 160 bits to 0, I manually set the first 8 bits to '00111111', which is the binary representation of the ASCII character '?'.
When console.logging clampedBoard[0] on the client side, I get the same [] null/undefined character, but when console.logging board[0], it prints out a '?'. I'm not sure why this is so.
And when attempting to look up my map of colours by doing clampedBoard[0] >> 4, it always defaults to the key in the dictionary that represents 0, even though it should be 0011.
If there is any more information I can provide, please let me know.
First of all, you can do:
board = o["board"];
var clampedBoard = new Uint8ClampedArray(board);
So you won't need the value initialization loop.
Also, the way you were passing it to the object, the "stringification" will convert it to a string instead of an array. In order to get an array, you'll need to create a Node.js Buffer out of the ArrayBuffer, then convert it into a native array, like so:
var view = Buffer.from(result);
var initial_send = {"initial_send":true, "board":[...view]};
I have a binary buffer; the first half contains data meant to be read as ints using a Uint32 view, and the second half is meant to be read as chars using a Uint8 view.
However, the problem is that the length of the char data is never guaranteed to be divisible by 4.
So if the length of the int data is 7 (Uint32 values) and the length of the char data is 5 bytes, then when I go to make the arrays I get a response like this:
var uint8Array = new Uint8Array(buffer);
var uint32Array = new Uint32Array(buffer);
console.log(uint8Array.length);  // 33 (it's 33 because 7*4 + 5 = 33)
console.log(uint32Array.length); // Error: ArrayBuffer out of bounds
So as you can see, I can't create the Uint32Array because the entire buffer isn't divisible by the size of an int. However, I only need the first half of the data in that Uint32Array.
Is there any way to fix this problem without creating a new buffer? For performance reasons I was hoping to read the same data in memory just using different views, rather than separating the buffer (which would mean multiple downloads, as I get this buffer from an XHR request).
I tried to do this:
var uint8Array= new Uint8Array(buffer);
var padding = uint8Array.length + (4 - uint8Array%4);
var uint32Array = new Uint32Array(buffer- padding);
But that just makes uint32Array be undefined.
If you want to initialize a Uint32Array from the largest aligned segment of a given array buffer, you can do this:
var byteLength = buffer.byteLength;
var alignment = Uint32Array.BYTES_PER_ELEMENT;
var alignedLength = byteLength - (byteLength % alignment);
var alignedBuffer = buffer.slice(0, alignedLength);
var uint32Array = new Uint32Array(alignedBuffer);
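If copying is a concern (the question asks to avoid creating a new buffer, and slice does copy), a view over just the aligned prefix can also be created by passing an explicit element count to the constructor. A sketch, assuming the Uint32 data sits at the front of the buffer as described:
var elementCount = Math.floor(buffer.byteLength / Uint32Array.BYTES_PER_ELEMENT);
var uint32Array = new Uint32Array(buffer, 0, elementCount); // shares the same memory, no copy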
I have two very simple functions in python that calculate the size of a struct (C struct, numpy struct, etc) given the range of numbers you want to store. So, if you wanted to store numbers from 0 to 8389798, 8389798 would be the value you feed the function:
def ss(value):
    nbits = 8
    while value > 2**nbits:
        nbits += 8
    return nbits * value

def mss(value):
    total_size = 0
    max_bits = [(0, 0)]  # (bits for 1 row in struct, max rows in struct)
    while value > 2 ** max_bits[-1][0]:
        total_size += max_bits[-1][0] * max_bits[-1][1]
        value -= max_bits[-1][1]
        new_struct_bits = max_bits[-1][0] + 8
        max_bits.append((new_struct_bits, 2**new_struct_bits))
    total_size += max_bits[-1][0] * value
    # print max_bits
    return total_size
ss is for a single struct where you need as many bytes in the first row to store "1" as you would in the last row to store "8389798". However, this is not as space efficient as breaking your struct up into structs of 1 byte, 2 bytes, 3 bytes, etc, up until N bytes needed to store your value. This is what mss calculates.
So now I want to see how much more efficient mss is over ss for a range of values, that range being 1 to, say, 100 billion. That's much too much data to save and plot, and it's totally unnecessary to do so in the first place. Far better to take the plot window and, for every value of x that has a pixel in that window, calculate the value of y (which is ss(x) - mss(x)).
This sort of interactive graph is really the only way I can think of to look at the relationship between mss and ss. Does anyone know how I should plot such a graph? I'm willing to use a JavaScript solution, because I can rewrite the Python for that, as well as "solutions" like Excel, R, or Wolfram, if they offer a way to do interactive/generated graphs.
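For what it's worth, the per-pixel sampling described above is straightforward to sketch in JavaScript, assuming ss and mss have been ported to functions with the same behaviour as the Python versions (the names samplePoints, xMin, xMax and widthInPixels are illustrative, not part of any library):
function samplePoints(xMin, xMax, widthInPixels) {
    var points = [];
    for (var px = 0; px < widthInPixels; px++) {
        // map each pixel column of the plot window back to an x value
        var x = Math.round(xMin + (px / (widthInPixels - 1)) * (xMax - xMin));
        points.push({ x: x, y: ss(x) - mss(x) }); // y = ss(x) - mss(x), as described above
    }
    return points; // hand these to whatever charting library redraws the window on pan/zoom
}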
I am looking for a portable algorithm for creating a hashCode for binary data. None of the binary data is very long -- I am Avro-encoding keys for use in kafka.KeyedMessages -- we're probably talking anywhere from 2 to 100 bytes in length, but most of the keys are in the 4 to 8 byte range.
So far, my best solution is to convert the data to a hex string, and then do a hashCode of that. I'm able to make that work in both Scala and JavaScript. Assuming I have defined b: Array[Byte], the Scala looks like this:
b.map("%02X" format _).mkString.hashCode
It's a little more elaborate in JavaScript -- luckily someone already ported the basic hashCode algorithm to JavaScript -- but the point is that by creating a hex string to represent the binary data, I can ensure the hashing algorithm works off the same inputs.
On the other hand, I have to create an object twice the size of the original just to create the hashCode. Luckily most of my data is tiny, but still -- there has to be a better way to do this.
Instead of padding the data as its hex value, I presume you could just coerce the binary data into a String so the String has the same number of bytes as the binary data. It would be all garbled, more control characters than printable characters, but it would be a string nonetheless. Do you run into portability issues though? Endian-ness, Unicode, etc.
Incidentally, if you got this far reading and don't already know this -- you can't just do:
val b: Array[Byte] = ...
b.hashCode
Luckily I already knew that before I started, because I ran into that one early on.
Update
Based on the first answer given, it appears at first blush that java.util.Arrays.hashCode(Array[Byte]) would do the trick. However, if you follow the javadoc trail, you'll see that this is the algorithm behind it, which is based on the algorithm for List combined with the algorithm for byte.
int hashCode = 1;
for (byte e : list) hashCode = 31*hashCode + (e==null ? 0 : e.intValue());
As you can see, all it's doing is creating a Long representing the value. At a certain point, the number gets too big and it wraps around. This is not very portable. I can get it to work for JavaScript, but you have to import the npm module long. If you do, it looks like this:
function bufferHashCode(buffer) {
    const Long = require('long');
    var hashCode = new Long(1);
    for (var value of buffer.values()) { hashCode = hashCode.multiply(31).add(value); }
    return hashCode;
}
bufferHashCode(new Buffer([1,2,3]));
// hashCode = Long { low: 30817, high: 0, unsigned: false }
And you do get the same results when the data wraps around, sort of, though I'm not sure why. In Scala:
java.util.Arrays.hashCode(Array[Byte](1,2,3,4,5,6,7,8,9,10))
// res30: Int = -975991962
Note that the result is an Int. In JavaScript:
bufferHashCode(new Buffer([1,2,3,4,5,6,7,8,9,10]));
// hashCode = Long { low: -975991962, high: 197407, unsigned: false }
So I have to take the low bytes and ignore the high, but otherwise I get the same results.
This functionality is already available in the Java standard library; look at the Arrays.hashCode() method.
Because your binary data are Array[Byte], here is how you can verify it works:
println(java.util.Arrays.hashCode(Array[Byte](1,2,3))) // prints 30817
println(java.util.Arrays.hashCode(Array[Byte](1,2,3))) // prints 30817
println(java.util.Arrays.hashCode(Array[Byte](2,2,3))) // prints 31778
Update: It is not true that the Java implementation boxes the bytes. Of course, there is conversion to int, but there's no way around that. This is the Java implementation:
public static int hashCode(byte a[]) {
    if (a == null) return 0;
    int result = 1;
    for (byte element : a) result = 31 * result + element;
    return result;
}
Update 2
If what you need is a JavaScript implementation that gives the same results as a Scala/Java implementation, then you can extend the algorithm by, e.g., taking only the rightmost 31 bits:
def hashCode(a: Array[Byte]): Int = {
  if (a == null) {
    0
  } else {
    var hash = 1
    var i: Int = 0
    while (i < a.length) {
      hash = 31 * hash + a(i)
      hash = hash & Int.MaxValue // taking only the rightmost 31 bits
      i += 1
    }
    hash
  }
}
and JavaScript:
var hashCode = function(arr) {
  if (arr == null) return 0;
  var hash = 1;
  for (var i = 0; i < arr.length; i++) {
    hash = hash * 31 + arr[i];
    hash = hash % 0x80000000; // taking only the rightmost 31 bits in integer representation
  }
  return hash;
};
Why do the two implementations produce the same results? In Java, integer overflow behaves as if the addition were performed without loss of precision and then the bits higher than 32 were thrown away, and & Int.MaxValue throws away the 32nd bit. In JavaScript, there is no loss of precision for integers up to 2^53, which is a limit the expression 31 * hash + a(i) never exceeds. % 0x80000000 then behaves as taking the rightmost 31 bits. The case without overflows is obvious.
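As a quick sanity check (no overflow is involved for such a short input, so the masking never kicks in), the JavaScript version above reproduces the small example from earlier:
console.log(hashCode([1, 2, 3])); // 30817, same as java.util.Arrays.hashCode(Array[Byte](1,2,3))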
This is the meat of the algorithm used in the Java library:
int result = 1;
for (byte element : a) result = 31 * result + element;
You comment:
this algorithm isn't very portable
Incorrect. If we are talking about Java, then provided that we all agree on the type of the result, the algorithm is 100% portable.
Yes the computation overflows, but it overflows exactly the same way on all valid implementations of the Java language. A Java int is specified to be 32 bits signed two's complement, and the behavior of the operators when overflow occurs is well-defined ... and the same for all implementations. (The same goes for long ... though the size is different, obviously.)
I'm not an expert, but my understanding is that Scala's numeric types have the same properties as Java's. JavaScript is different, being based on IEEE 754 double-precision floating point. However, with care you should be able to code the Java algorithm portably in JavaScript. (I think @Mifeet's version is wrong ...)
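For completeness, one way to reproduce Java's exact 32-bit wrap-around semantics in JavaScript is Math.imul together with | 0, which force 32-bit signed integer arithmetic. A sketch (the function name is mine, and it assumes the input bytes are already signed values in the -128..127 range, like a Java byte):
function javaArrayHashCode(bytes) {
    var hash = 1;
    for (var i = 0; i < bytes.length; i++) {
        // Math.imul multiplies as 32-bit ints; | 0 truncates the sum back to a signed 32-bit int
        hash = (Math.imul(31, hash) + bytes[i]) | 0;
    }
    return hash;
}
console.log(javaArrayHashCode([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])); // -975991962, matching the Scala result above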
I have an ArrayBuffer that I convert to a Uint8Array so that I can use traditional array access with square brackets and gather a subarray. Now that I have the correct set of 4 bytes that describe the 32-bit (little endian) floating point number, I don't seem to have an easy way to convert to the floating point value:
var startIndex = 2;
var buffer = new Uint8Array(data)
buffer.subarray(startIndex, startIndex + 4);
var myNumber = ?uint8ArrayToFloat(buffer);
console.log(myNumber);
I'm new to JavaScript and am still looking around different docs...
You can use DataView.getFloat32. First, you would create the DataView from the original ArrayBuffer (or from the Uint8Array's underlying buffer). getFloat32 takes an optional parameter that allows you to specify the endianness of the data you are reading.
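A sketch of what that could look like for the snippet in the question (assuming data is the original ArrayBuffer; passing true asks for little-endian interpretation):
var startIndex = 2;
var view = new DataView(data);                    // wrap the original ArrayBuffer
var myNumber = view.getFloat32(startIndex, true); // read 4 bytes at startIndex as a little-endian float
console.log(myNumber);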