I have an ArrayBuffer that I convert to a Uint8Array so that I can use traditional array access with square brackets and gather a subarray. Now that I have the correct set of 4 bytes that describe the 32-bit (little endian) floating point number, I don't seem to have an easy way to convert to the floating point value:
var startIndex = 2;
var buffer = new Uint8Array(data);
var bytes = buffer.subarray(startIndex, startIndex + 4);
var myNumber = ?uint8ArrayToFloat(bytes); // what goes here?
console.log(myNumber);
I'm new to JavaScript and am still looking around different docs...
You can use DataView.getFloat32. First, you would create the DataView from the original ArrayBuffer (or from the Uint8Array's underlying buffer). getFloat32 takes an optional parameter that allows you to specify the endianness of the data you are reading.
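For example, a minimal sketch in terms of the question's variables (assuming data is the original ArrayBuffer and the four bytes start at offset 2):
var startIndex = 2;
var view = new DataView(data);
// the second argument (true) selects little-endian byte order
var myNumber = view.getFloat32(startIndex, true);
console.log(myNumber);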
Related
I have a binary buffer; the first half contains data meant to be read as ints using a Uint32 view, and the second half is meant to be read as chars using a Uint8 view.
However the problem is the length of the char data is never guaranteed to be divisible by 4.
So if the length of the int data is 7, and the length of the char data is 5 then when I go to make the arrays I get a response like this:
var uint8Array = new Uint8Array(buffer);
var uint32Array = new Uint32Array(buffer);
console.log(uint8Array.length); // 33 (it's 33 because 7*4 + 5 = 33)
console.log(uint32Array.length); // Error: ArrayBuffer out of bounds
So as you can see, I can't create the Uint32 array because the length of the entire buffer isn't divisible by the size of an int. However, I only need the first half of the data in that Uint32 array.
Is there any way to fix this problem without creating a new buffer? For performance reasons I was hoping to read the same data in memory just using different views, rather than splitting the buffer (which would mean multiple downloads, as I get this buffer from an XHR request).
I tried to do this:
var uint8Array = new Uint8Array(buffer);
var padding = uint8Array.length + (4 - uint8Array.length % 4);
var uint32Array = new Uint32Array(buffer - padding);
But that just makes uint32Array be undefined.
If you want to initialize a Uint32Array from the largest aligned segment of a given array buffer, you can do this:
var byteLength = buffer.byteLength;
var alignment = Uint32Array.BYTES_PER_ELEMENT;
var alignedLength = byteLength - (byteLength % alignment);
var alignedBuffer = buffer.slice(0, alignedLength);
var uint32Array = new Uint32Array(alignedBuffer);
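If you want to avoid the copy that slice makes (the question asked for no new buffer), a possible variant is the three-argument Uint32Array constructor, which creates a view over the existing buffer at a given byte offset and element count; a sketch:
// no copy: a view over the same memory; the third argument is a length in elements
var elementCount = alignedLength / alignment;
var uint32View = new Uint32Array(buffer, 0, elementCount);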
I have been banging my head trying to solve this:
I received raw data from an embedded device. From the documentation, the way to read it into a single value is:
Every two bytes of data can be combined into a single raw wave value. Its value is a signed 16-bit integer that ranges from -32768 to 32767. The first byte of the Value represents the high-order byte of the two's-complement value, while the second byte represents the low-order byte. To reconstruct the full raw wave value, simply shift the first byte left by 8 bits, and bitwise-OR it with the second byte.
short raw = (Value[0]<<8) | Value[1];
One of the 2-byte values that I received is "ef". When I used the bitwise operation above, the result does not seem right, as I noticed I never get a single negative value (it's ECG data, negative values are normal). I believe doing this in JavaScript is not straightforward.
The way I did it was like this:
var raw = "ef"; // just to show one. Actual one is an array of this 2 bytes but in string.
var value = raw.charAt(0) << 8 | raw.charAt(1)
Please advise. Thanks!
EDIT:
I also tried it like this:
let first = new Int8Array(len);  // len is the length of the raw data array
let second = new Int8Array(len);
let values = new Int16Array(len); // to hold the converted values
for (var i = 0; i < len; i++) {
    // arr is the array that contains every two "characters"
    first[i] = arr[i].charAt(0);
    second[i] = arr[i].charAt(1);
    values[i] = first[i] << 8 | second[i];
}
But still all the results are positive, no negatives. Can someone verify whether I am doing this correctly, just in case the values really are all positive :p
It's two's complement: check the top bit of the high byte - byte[high]>>7. If it's 0, do byte[high]<<8 | byte[low]. If it's 1, do -((byte[high]^0xff)<<8 | byte[low]^0xff) - 1. See https://en.wikipedia.org/wiki/Two%27s_complement for an explanation.
Also check out https://developer.mozilla.org/en-US/docs/Web/JavaScript/Typed_arrays. It has Int16 arrays, which are what you want. It might be a ton faster.
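A minimal sketch of the Int16Array idea, assuming high and low hold the two byte values (0-255) read from the device; assigning into an Int16Array element truncates the value to 16 bits, so it reads back signed:
var values = new Int16Array(1);
values[0] = (high << 8) | low; // stored modulo 2^16, read back as signed
console.log(values[0]);        // e.g. high = 0xFF, low = 0xEF prints -17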
You can use the fact that the string is already made of 16-bit units and then make the value signed.
Also, instead of reading 8 bits at a time, just read one unsigned 16-bit value using charCodeAt.
var raw = "\u00ef"; //original example
var buf = new Int16Array(1);
buf[0] = raw.charCodeAt(0); //now in the buf[0] is typed 16 bit integer
//returns 239, for \uffef returns -17
var raw = "\uffef"; //original example
var buf = new Int16Array(1);
buf[0] = raw.charCodeAt(0); //now in the buf[0] is typed 16 bit integer
console.log(buf[0])
For the first byte, take the two's complement first, then shift by 8 bits:
let x = raw.charCodeAt(0); // character code of the first character
Then flip x for the one's complement and add 1 for the two's complement, and finally do:
var value = x << 8 | bytevalueof(raw.charCodeAt(1))
This is a question about the raw wave data coming from a Neurosky Mindwave Mobile EEG headset.
You should find three values in the buffer when you read from the device. Perform this operation on the second two to get a correct reading:
var b = reader.buffer(3);
var raw = b[1]*256 + b[2];
if (raw >= 32768) {
    raw = raw - 65536;
}
I am implementing something based on pathetic documentation with practically no information.
The only example given is this:
(7F02AAF7)H => (F7AA027F)H = -139853185
Even if I convert 7F02AAF7 to F7AA027F, the output of parseInt('F7AA027F', 16) is still different from what I am expecting.
I did some Google searching and found this website: http://www.scadacore.com/field-tools/programming-calculators/online-hex-converter/
There, when you input 7F02AAF7, it is converted to the wanted number under the INT32 - Little Endian (DCBA) system. I tried this as a search term but had no luck.
Can you please tell me what exactly I am supposed to do here, and is there any Node.js library that can help me with this?
You could adapt the excellent answer of T.J. Crowder and use DataView#setUint8 to store the given bytes, then DataView#getInt32 with the littleEndian indicator to read the value back.
var data = '7F02AAF7'.match(/../g);
// Create a buffer
var buf = new ArrayBuffer(4);
// Create a data view of it
var view = new DataView(buf);
// set bytes
data.forEach(function (b, i) {
    view.setUint8(i, parseInt(b, 16));
});
// get an int32 with little endian
var num = view.getInt32(0, 1);
console.log(num);
Node can do that pretty easily using buffers:
Buffer.from('7F02AAF7', 'hex').readInt32LE()
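For example, with the hex string from the question (a small sketch; recent Node versions default the offset to 0, it is written explicitly here):
const num = Buffer.from('7F02AAF7', 'hex').readInt32LE(0);
console.log(num); // -139853185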
JavaScript integers are usually stored as a Number value:
4.3.19
primitive value corresponding to a double-precision 64-bit binary format IEEE 754 value
So the result of parseInt is a float where the value losslessly fits into the fraction part of the float (52 bits of capacity). parseInt also doesn't parse it as two's-complement notation.
If you want to force anything that you read into 32 bits, the easiest way is to have it automatically converted to 32 bits by applying a binary operation. I would suggest:
parseInt("F7AA027F", 16) | 0
The binary OR (|) with 0 is essentially a no-op, but it converts any number to a 32-bit integer. This trick is often used to convert numbers to 32 bits in order to make calculations on them faster.
Also, this is portable.
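A sketch of the full round trip for the question's input, assuming the string 7F02AAF7 holds the bytes in little-endian order, so the byte pairs have to be reversed before parsing:
var hex = '7F02AAF7';
var swapped = hex.match(/../g).reverse().join(''); // 'F7AA027F'
console.log(parseInt(swapped, 16) | 0);            // -139853185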
In my case, I am trying to send accelerometer data from Arduino to Pi.
The raw data I am reading is this: [0x0, 0x0, 0x10, 0xBA].
If you lack knowledge about the topic, as I did, use the scadacore.com website to find out what your data should correspond to. In my case it is Float - Little Endian (DCBA), which outputs -0.03126526. Now we know what kind of conversion we need.
Then, check the available functions for your language. In my case, the Node.js Buffer library offers the buf.readFloatLE([offset]) function, which is the one I need.
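A minimal sketch of that in Node.js, using the bytes from above:
const buf = Buffer.from([0x00, 0x00, 0x10, 0xBA]);
console.log(buf.readFloatLE(0)); // the four bytes interpreted as a little-endian float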
I need to write a hardware emulator in JavaScript. The hardware has its own floating-point format, so I do lots of conversion between JS numbers and that format, which is slow. My idea is to use a JavaScript TypedArray Float32, since it gives me direct access to the bytes forming the float32 value, which is not so far from the desired format, so the conversion would be much faster this way (only some shifts etc., using a Uint8 view of the Float32).
However, I am not sure how portable a solution this would be. Various documents about TypedArrays state that float32 is like "the native C format on that hardware" or similar. But can I expect the exact binary format of float32 to be the same on all platforms running some browser/JS engine? I can guess that endianness may be a problem, but I can deal with that if at least there are no other differences. As far as I can tell, the format used seems to be IEEE 754, but could other formats be used to implement float32 in JS on some (well, at least not so exotic ...) platforms?
I could test at least x86 and Apple A7 CPUs, and they seem to behave exactly the same, which is good, but also odd, as I thought the byte order of these CPUs was different (maybe not the floating-point format, at least?). However, checking just two platforms/OSes/browsers/whatever is far from establishing a global truth ...
The Float32Array will internally represent the values based on the endianness of the host system, typically little-endian.
And yes, the format is IEEE 754 (it has been around since the 1980s, and its variations mostly concern the width, i.e. 64-bit, 80-bit and so on). All numbers (Number) in JavaScript are internally represented as 64-bit IEEE 754. For typed arrays, both 32-bit and 64-bit IEEE 754 are of course available.
PowerPC and 68k CPUs use big-endian (the way it ought to be! :) ). The so-called network order is also big-endian, and many platform-independent file formats are stored in big-endian byte order (in particular audio and graphics formats). Most mainstream computers use little-endian CPUs such as the x86. So in cases with a combination of these you very likely have to deal with different byte orders.
To deal with endianness, you can use a DataView instead of a Float32Array.
For example:
var buffer = new ArrayBuffer(10240); // some raw byte buffer
var view = new DataView(buffer); // flexible view supporting endianness
Now you can read and write at any position in the buffer with endianness in mind (a DataView also allows reading/writing from/to non-aligned positions, i.e. you can write a Float32 value at byte offset 3 if you need to. You cannot do this with Float32Array/Uint32Array/Int16Array etc.).
The browser will internally convert to the correct order - you just provide the value as-is:
view.setFloat32(pos, 0.5); // big-endian
view.setFloat32(pos, 0.5, false); // big-endian
view.setFloat32(pos, 0.5, true); // little-endian
And likewise when reading:
var n = view.getFloat32(pos); // big-endian
var n = view.getFloat32(pos, false); // big-endian
var n = view.getFloat32(pos, true); // little-endian
Tip: You can use a native Float32Array internally and later read/write it with the desired endianness. This tends to be speedier, but it requires a conversion via DataView at the end if the resulting buffer's endianness differs from the host system's:
var f32 = new Float32Array(buffer); // or use a size
f32[0] = 0.5;
Then to make sure you have big-endian representation:
var view = new DataView(f32.buffer);
var msbVal = view.getFloat32(0); // reads the 32-bit value back in big-endian
Hope that gave some input! Just throw questions at me if you want me to elaborate on some part.
I need to encode and decode IEEE 754 floats and doubles from binary in node.js to parse a network protocol.
Are there any existing libraries that do this, or do I have to read the spec and implement it myself? Or should I write a C module to do it?
Note that as of node 0.6 this functionality is included in the core library, so that is the new best way to do it.
See http://nodejs.org/docs/latest/api/buffer.html for details.
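For example, a minimal sketch of the Buffer float helpers (shown with the modern Buffer.alloc API; older code used new Buffer(4)):
const buf = Buffer.alloc(4);      // 4 bytes for an IEEE 754 single
buf.writeFloatLE(0.5, 0);         // encode, little-endian
console.log(buf.readFloatLE(0));  // 0.5
// doubles work the same way via writeDoubleLE/readDoubleLE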
If you are reading/writing binary data structures you might consider using a friendly wrapper around this functionality to make things easier to read and maintain. Plug follows: https://github.com/dobesv/node-binstruct
I ported a C++ converter (made with GNU GMP) with float128 support to Emscripten so that it runs in the browser: https://github.com/ysangkok/ieee-754
Emscripten produces JavaScript that will run on Node.js too. You will get the float representation as a string of bits, though; I don't know if that's what you want.
In modern JavaScript (ECMAScript 2015) you can use ArrayBuffer and Float32Array/Float64Array. I solved it like this:
// 0x40a00000 is "5" in float/IEEE-754 32bit.
// You can check this here: https://www.h-schmidt.net/FloatConverter/IEEE754.html
// MSB (most significant byte) is at the highest index, i.e. little-endian byte order
// (the typical host order used by Float32Array)
const bytes = [0x00, 0x00, 0xa0, 0x40];
// The buffer is like a raw view into memory.
const buffer = new ArrayBuffer(bytes.length);
// The Uint8Array uses the buffer as its memory.
// This way we can store data byte by byte
const byteArray = new Uint8Array(buffer);
for (let i = 0; i < bytes.length; i++) {
byteArray[i] = bytes[i];
}
// float array uses the same buffer as memory location
const floatArray = new Float32Array(buffer);
// floatValue is a "number", because a number in JavaScript is a
// double (IEEE 754, 64-bit) => it can hold f32 values
const floatValue = floatArray[0];
// prints out "5"
console.log(`${JSON.stringify(bytes)} as f32 is ${floatValue}`);
// double / f64
// const doubleArray = new Float64Array(buffer);
// const doubleValue = doubleArray[0];
PS: This works in Node.js as well as in Chrome, Firefox, and Edge.