Encoding and decoding IEEE 754 floats in JavaScript - javascript

I need to encode and decode IEEE 754 floats and doubles from binary in node.js to parse a network protocol.
Are there any existing libraries that do this, or do I have to read the spec and implement it myself? Or should I write a C module to do it?

Note that as of node 0.6 this functionality is included in the core library, so that is the new best way to do it.
See http://nodejs.org/docs/latest/api/buffer.html for details.
If you are reading/writing binary data structures you might consider using a friendly wrapper around this functionality to make things easier to read and maintain. Plug follows: https://github.com/dobesv/node-binstruct
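For quick reference, a minimal sketch using the Buffer float/double read and write methods (method names as in the Node.js Buffer documentation; very old Node versions constructed buffers differently):
const buf = Buffer.alloc(4);
buf.writeFloatBE(5.0, 0);          // encode a 32-bit float, big-endian
console.log(buf);                  // <Buffer 40 a0 00 00>
console.log(buf.readFloatBE(0));   // 5

const dbuf = Buffer.alloc(8);
dbuf.writeDoubleLE(Math.PI, 0);    // encode a 64-bit double, little-endian
console.log(dbuf.readDoubleLE(0)); // 3.141592653589793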

I ported a C++ converter (built with GNU GMP) with float128 support to Emscripten so that it runs in the browser: https://github.com/ysangkok/ieee-754
Emscripten produces JavaScript that also runs on Node.js. You will get the float representation as a string of bits, though; I don't know if that's what you want.

In modern JavaScript (ECMAScript 2015) you can use ArrayBuffer and Float32Array/Float64Array. I solved it like this:
// 0x40a00000 is "5" in float/IEEE-754 32bit.
// You can check this here: https://www.h-schmidt.net/FloatConverter/IEEE754.html
// MSB (Most significant byte) is at highest index
const bytes = [0x00, 0x00, 0xa0, 0x40];
// The buffer is like a raw view into memory.
const buffer = new ArrayBuffer(bytes.length);
// The Uint8Array uses the buffer as its memory.
// This way we can store data byte by byte
const byteArray = new Uint8Array(buffer);
for (let i = 0; i < bytes.length; i++) {
  byteArray[i] = bytes[i];
}
// float array uses the same buffer as memory location
const floatArray = new Float32Array(buffer);
// floatValue is a "number", because a number in JavaScript is a
// double (IEEE-754, 64-bit) => it can hold f32 values
const floatValue = floatArray[0];
// prints out "5"
console.log(`${JSON.stringify(bytes)} as f32 is ${floatValue}`);
// double / f64
// const doubleArray = new Float64Array(buffer);
// const doubleValue = doubleArray[0];
PS: This works not only in Node.js but also in Chrome, Firefox, and Edge.
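Note that the Float32Array view above follows the host byte order (little-endian on most platforms). If you need a fixed byte order regardless of platform, a DataView sketch of the same conversion could look like this:
const bytes = [0x00, 0x00, 0xa0, 0x40];                   // 0x40a00000, little-endian
const view = new DataView(new Uint8Array(bytes).buffer);
// The second argument selects little-endian; DataView defaults to big-endian.
console.log(view.getFloat32(0, true));                    // 5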

Related

WebAssembly: Correct way to get a string from a parameter (with memory address) in JavaScript

I'm trying to understand how the conversion of C code to WebAssembly and the JavaScript interop works in the background. And I'm having problems getting a simple string from a function parameter.
My program is a simple Hello World, and I'm trying to "emulate" a printf/puts.
More or less the C equivalent I want to build:
int main() {
puts("Hello World\n");
}
You can see a working example here.
My best idea currently is to read 16-bit chunks of memory at a time (since wasm seems to allocate them in 16-bit intervals) and check for the null termination.
function get_string(memory, addr) {
  var length = 0;
  while (true) {
    // scan the next 16-byte window for the terminator
    let buffer = new Uint8Array(memory.buffer, addr + length, 16);
    let term = buffer.indexOf(0);
    length += term == -1 ? 16 : term;
    if (term != -1) break;
  }
  const strBuf = new Uint8Array(memory.buffer, addr, length);
  return new TextDecoder().decode(strBuf);
}
But this seems really clumsy. Is there a better way to read a string from the memory if you only know the start address?
And is it really necessary that I only read 16-bit chunks at a time?
I couldn't find any information on whether creating a typed array over the memory counts as accessing the whole memory, or whether that only happens when I actually read data from the array.
WebAssembly allocates memory in 64k pages. Maybe this is where the 16-bit thing came from, because 16 bits can address 64 kbytes. However, this is irrelevant to the task at hand: since WebAssembly memory is just a contiguous address space, there isn't much difference between the memory object and an ArrayBuffer of the given size, if there's any at all.
The 16-byte window at a time isn't necessary either (somehow 16 bits became 16 bytes).
You can do it simply, without any performance penalty, by creating a view over the rest of the buffer in the following way:
function get_string(memory, addr) {
  let buffer = new Uint8Array(memory.buffer, addr, memory.buffer.byteLength - addr);
  let term = buffer.indexOf(0);
  return new TextDecoder().decode(buffer.subarray(0, term));
}
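A hypothetical usage sketch (hello.wasm and the get_message export are made-up names; any export that returns a pointer to a NUL-terminated string in the exported memory works the same way):
WebAssembly.instantiateStreaming(fetch('hello.wasm')).then(({ instance }) => {
  const ptr = instance.exports.get_message();              // assumed export returning a pointer
  console.log(get_string(instance.exports.memory, ptr));   // e.g. "Hello World"
});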

How do I implement a memory block in Javascript?

I'm developing a JavaScript application for which I need to implement a fixed memory block of 64k. This block can be anything: an object, an array, a buffer, I don't know what. It should work like a 64k physical memory chip which I can address and store data in. Addresses will be 16 bits and the data in each location is 8 bits. How can I implement it? Can you recommend any npm packages?
I think most browsers support Uint8Array nowadays:
const buffer = new Uint8Array(65536);
let index = 123;
buffer[index] = 42;
console.log( buffer[index] );
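If you want the 16-bit address / 8-bit data contract to be explicit, a small wrapper along these lines might help (the class and method names are illustrative, not from any npm package):
class Memory64k {
  constructor() {
    this.bytes = new Uint8Array(65536);        // 64k cells, 8 bits each
  }
  read(addr) {
    return this.bytes[addr & 0xFFFF];          // mask the address to 16 bits
  }
  write(addr, data) {
    this.bytes[addr & 0xFFFF] = data & 0xFF;   // Uint8Array truncates to 8 bits anyway
  }
}

const mem = new Memory64k();
mem.write(0xBEEF, 42);
console.log(mem.read(0xBEEF)); // 42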

Hex String to INT32 - Little Endian (DCBA Format) Javascript

I'm implementing something based on poor documentation with hardly any information.
The example is just this:
(7F02AAF7)H => (F7AA027F)H = -139853185
Even if I convert 7F02AAF7 to F7AA027F, the output of parseInt('F7AA027F', 16) is still different from what I am expecting.
I did some Google searching and found this website: http://www.scadacore.com/field-tools/programming-calculators/online-hex-converter/
Here, when you input 7F02AAF7 it is processed into the wanted number under the INT32 - Little Endian (DCBA) system. I tried that as a search term but had no luck.
Can you please tell me what exactly I am supposed to do here, and is there any node.js library which can help me with this?
You could adapt the excellent answer of T.J. Crowder and use DataView#setUint8 for the given bytes, together with DataView#getInt32 and its littleEndian flag.
var data = '7F02AAF7'.match(/../g);
// Create a buffer
var buf = new ArrayBuffer(4);
// Create a data view of it
var view = new DataView(buf);
// set bytes
data.forEach(function (b, i) {
  view.setUint8(i, parseInt(b, 16));
});
// get an int32 with little endian
var num = view.getInt32(0, 1);
console.log(num);
Node can do that pretty easily using buffers:
Buffer.from('7F02AAF7', 'hex').readInt32LE()
JavaScript integers are usually stored as a Number value:
4.3.19
primitive value corresponding to a double-precision 64-bit binary format IEEE 754 value
So the result of parseInt is a float in which the value fits losslessly into the fraction part of the float (52 bits of capacity). parseInt also doesn't parse it as two's-complement notation.
If you want to force anything that you read into 32 bits, the easiest way is to let it be converted to 32 bits automatically by applying a binary operation. I would suggest:
parseInt("F7AA027F", 16) | 0
The binary OR (|) with 0 is essentially a no-op, but it converts any integer to 32 bits. This trick is often used to convert numbers to 32 bits in order to make calculations on them faster.
Also, this is portable.
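Putting the two steps together (reverse the byte order of the hex string, then force 32-bit), a quick check against the documented example:
const hex = '7F02AAF7';
const swapped = hex.match(/../g).reverse().join('');   // 'F7AA027F'
console.log(parseInt(swapped, 16) | 0);                // -139853185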
In my case, I am trying to send accelerometer data from an Arduino to a Pi.
The raw data I am reading is [0x0, 0x0, 0x10, 0xBA].
If you lack knowledge about the topic, as I did, use the scadacore.com website to find out what your data should correspond to. In my case it is Float - Little Endian (DCBA), which outputs -0.03126526. Now we know what kind of conversion we need.
Then check the available functions based on the language. In my case, the Node.js Buffer library offers the buf.readFloatLE([offset]) function, which is the one I need.
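A minimal sketch of that call on the raw bytes above (it prints whatever those four bytes decode to as a little-endian 32-bit float):
const buf = Buffer.from([0x00, 0x00, 0x10, 0xBA]);
console.log(buf.readFloatLE(0));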

Node.js and 64-bit varints

I'm in the process of writing a Node.js based application which talks via TCP to a C++ based server. The server speaks a binary protocol, quite similar to Protocol Buffers, but not exactly the same.
One data type the server returns is an unsigned 64-bit integer (uint64_t), serialized as a varint, where the most significant bit is used to indicate whether the next byte is also part of the int.
I am currently unable to parse this out in JavaScript due to the 32-bit limitation on bitwise operations, and also the fact that JS doesn't do 64-bit ints natively. Does anyone have any suggestions on how I could do this?
My varint reading code is very similar to that shown here: https://github.com/chrisdickinson/varint/blob/master/decode.js
I thought I could use node-bignum to represent the number, but I'm unsure how to turn a Buffer consisting of varint bytes into this.
Cheers,
Nathan
I simply took the existing varint read module and modified it to yield a Bignum object instead of a regular number:
var Bignum = require('bignum');

module.exports = read;

var MSB = 0x80
  , REST = 0x7F;

function read(buf, offset) {
  var res = Bignum(0)
    , offset = offset || 0
    , counter = offset
    , b
    , shift = 0
    , l = buf.length;

  do {
    if (counter >= l) {
      read.bytesRead = 0;
      return undefined;
    }
    b = buf[counter++];
    res = res.add(Bignum(b & REST).shiftLeft(shift));
    shift += 7;
  } while (b >= MSB);

  read.bytes = counter - offset;
  return res;
}
Use it exactly the same way as you would have used the original decode module.
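For example, a usage sketch (assuming the module above is saved as decode.js and the bignum package is installed; 300 encodes as the varint bytes 0xAC 0x02):
var read = require('./decode');
var big = read(Buffer.from([0xAC, 0x02]));
console.log(big.toString()); // '300'
console.log(read.bytes);     // 2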

About the binary format of JavaScript TypedArray float32

I need to write a hardware emulator in JavaScript. It has its own floating-point format, so I do lots of conversion between JS numbers and that format, which is slow. I had the idea of using the JavaScript TypedArray float32, since it gives me direct access to the bytes forming the float32 value, which is not so far from the desired format, so the conversion would be much faster this way (only some shifts, etc., using a Uint8 view of the Float32).
However, I am not sure how portable a solution it would be. Various documents about TypedArrays state that float32 is like "the native C format on that hardware" or some such. But can I expect that the exact binary format of float32 is the same on all platforms running a browser/JS? I can guess that endianness can be a problem, but I can deal with that, at least if there are no other differences. As far as I can tell, the format used seems to be IEEE 754, but could other formats be used to implement float32 in JS on some (well, at least not so exotic ...) platforms?
I could test at least x86 and Apple A7 CPUs, and it seems they are the very same, which is good, but also odd, as I thought the byte order of these CPUs was different (maybe not the floating-point format, at least?). However, it's far from being a global truth just checking two platforms/OSes/browsers/whatever ...
The Float32Array will internally represent the values based on the endianness of the host system, typically little-endian.
And yes, the format is IEEE 754 (it has been around since before FPUs came along, and its variations deal more with the width, i.e. 64-bit, 80-bit and so on). All numbers (Number) in JavaScript are internally represented as 64-bit IEEE 754. For typed arrays, both 32-bit and 64-bit IEEE 754 are of course available.
PowerPC and 68k CPUs use big-endian (the way it ought to be! :) ). The so-called network order is also big-endian, and many platform-independent file formats are stored in big-endian byte order (in particular audio and graphics). Most mainstream computers use little-endian CPUs such as the x86. So in cases with a combination of these, you very likely have to deal with different byte orders.
To deal with endianness you can use a DataView instead of a Float32Array.
For example:
var buffer = new ArrayBuffer(10240); // some raw byte buffer
var view = new DataView(buffer); // flexible view supporting endianness
Now you can read and write to any position in the buffer with endianness in mind (DataView also allows reading/writing from/to non-aligned positions, i.e. you can write a Float32 value to position 3 if you need to; you cannot do this with Float32Array/Uint32Array/Int16Array etc.).
The browser will internally convert to the correct order - you just provide the value as-is:
view.setFloat32(pos, 0.5);        // big-endian
view.setFloat32(pos, 0.5, false); // big-endian
view.setFloat32(pos, 0.5, true);  // little-endian
And likewise when reading:
var n = view.getFloat32(pos);        // big-endian
var n = view.getFloat32(pos, false); // big-endian
var n = view.getFloat32(pos, true);  // little-endian
Tip: You can use a native Float32Array internally and later read/write it with a specific endianness via a DataView. This tends to be speedier, but it requires a conversion step at the end if the resulting buffer's endianness differs from that of the host system:
var f32 = new Float32Array(buffer); // or use a size
f32[0] = 0.5;
Then to make sure you have big-endian representation:
var view = new DataView(f32.buffer);
var msbVal = view.getFloat32(0); // reads the 32-bit value back, interpreting the bytes as big-endian
Hope that gave some inputs! Just throw me questions about it if you want me to elaborate on some part.
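One extra sketch that may be useful with the tip above (not part of the original answer): detecting the host byte order, which is what determines how a Float32Array lays out its bytes:
var hostIsLittleEndian = new Uint8Array(new Uint16Array([1]).buffer)[0] === 1;
console.log(hostIsLittleEndian ? 'little-endian' : 'big-endian');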
