Converting from a Uint8Array to an ArrayBuffer prepends 32 bytes? - javascript

Consider an input bytes that is a 400-byte Uint8Array. The following code converts it first to an ArrayBuffer via the .buffer property and then to a Float32Array:
static bytesToFloatArray(bytes) {
    let buffer = bytes.buffer; // Get the ArrayBuffer from the Uint8Array.
    let floats = new Float32Array(buffer);
    return floats;
}
The surprise is that the conversion to the ArrayBuffer prepends 32 bytes, which shows up as 8 extra 0 float values at the beginning of the resulting Float32Array.
Why does the .buffer property add the 32 bytes, and how can that be avoided (or corrected)?

Why does the .buffer property add the 32 bytes?
It didn't. The buffer had 432 bytes in the first place, even before the Uint8Array was created on it. The difference comes from the typed array using an offset and/or a length which essentially restrict the view to a slice of the buffer.
And how can that be avoided (or corrected)?
Use the same offset and adjusted length:
function bytesToFloatArray(bytes) {
    // Honour the view's offset, and express its length in floats rather than bytes.
    return new Float32Array(bytes.buffer, bytes.byteOffset,
                            bytes.byteLength / Float32Array.BYTES_PER_ELEMENT);
}
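A quick way to see the difference (a sketch, assuming a view that starts 32 bytes into a 432-byte buffer, as in the question; a fresh ArrayBuffer is zero-filled, hence the leading zeros):
const raw = new ArrayBuffer(432);
const bytes = new Uint8Array(raw, 32, 400); // a view with a 32-byte offset
console.log(new Float32Array(bytes.buffer).length); // 108 (432 / 4), 8 zero floats up front
console.log(bytesToFloatArray(bytes).length);       // 100 (400 / 4), as expected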

@Bergi's answer is correct; I did not even look at the buffer the Uint8Array was created from.
I was not able to reproduce the behavior you saw in the Chrome console, but you can always resort to a DataView to have fine-grained control, something like this:
(beware of the endianness; I did not test the code below, I might have made a mistake in the byte order)
let test8 = new Uint8Array(400);
test8.forEach((d, i, a) => a[i] = 0xff * Math.random() | 0);

function c8to32(uint8, endian = false) { // big-endian by default
    var cpBuff = new ArrayBuffer(uint8.length),
        view = new DataView(cpBuff);
    for (let i = 0, v; i < uint8.length; i += 4) {
        if (!endian) { // big
            v = uint8[i] << 24
                | uint8[i + 1] << 16
                | uint8[i + 2] << 8
                | uint8[i + 3];
        } else { // little
            v = uint8[i]
                | uint8[i + 1] << 8
                | uint8[i + 2] << 16
                | uint8[i + 3] << 24;
        }
        // setUint32 writes back the raw 32-bit pattern assembled above;
        // setFloat32 would re-encode the integer value as IEEE 754 and
        // destroy the original bytes.
        view.setUint32(i, v, endian);
    }
    return cpBuff;
}
document.getElementById("resultbig").textContent = c8to32(test8).byteLength;
document.getElementById("resultlittle").textContent = c8to32(test8, true).byteLength;
<div id="resultbig">test</div>
<div id="resultlittle">test</div>
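If the goal is just to read floats with a chosen endianness (rather than build a swapped copy of the buffer), DataView can also do that directly, with no copying; a minimal sketch, again untested against your data:
function bytesToFloats(bytes, littleEndian = false) {
    const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
    const floats = new Float32Array(bytes.byteLength / Float32Array.BYTES_PER_ELEMENT);
    for (let i = 0; i < floats.length; i++) {
        floats[i] = view.getFloat32(i * 4, littleEndian); // big-endian by default
    }
    return floats;
}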

Related

How to calculate the number of bytes a 32-bit number will take when chunked into bytes

So we have this to break a 32 bit integer into 8-bit chunks:
var chunks = [
(num & 0xff000000) >> 24,
(num & 0x00ff0000) >> 16,
(num & 0x0000ff00) >> 8,
(num & 0x000000ff)
]
How can you tell how many chunks it will be before computing the chunks? Basically I would like to know if it will be 1, 2, 3, or 4 bytes before I chunk it into the array. Some bit trick or something, on the 32-bit integer.
function countBytes(num) {
// ???
}
There are several approaches I can think of, depending on your preference and/or codebase style.
The first one uses more analytical maths than the other and can perform a little worse than the bitwise one below:
// We will need a logarithm with base 16 since you are working with hexadecimals
const BASE_16 = Math.log(16);
const log16 = (num) => Math.log(num) / BASE_16;
// This is a function that gives you the number of non-zero chunks you get out
const getNumChunks = (num) => {
    // Flooring the base-16 logarithm and adding 1 gives the number of hex
    // digits in the number (beware of floating-point error for values that
    // are exact powers of 16).
    const numNonZeroes = Math.floor(log16(num)) + 1;
    // We need to divide that by 2 since each chunk (byte) is two hex digits
    const numChunks = Math.ceil(numNonZeroes / 2);
    return numChunks;
}
The second one is strictly bitwise:
const getNumChunks = (num) => {
    num = num >>> 0; // treat the input as an unsigned 32-bit integer
    let probe = 0xff;
    let numChunks = 1;
    // widen the all-ones probe one byte at a time until it covers num;
    // the >>> 0 keeps the probe unsigned once its top bit gets set
    while (num > probe) {
        probe = ((probe << 8) | 0xff) >>> 0;
        numChunks++;
    }
    return numChunks;
}
Or this one-liner making use of Math.clz32 (count leading zero bits) to determine how many bytes of a 32-bit unsigned int are being utilized...
function numOfBytes(x) {
    return x === 0 ? 1 : (((31 - Math.clz32(x)) / 8) + 1) | 0;
}
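A quick sanity check (assuming numOfBytes and either getNumChunks definition above are in scope; both counts on each line should read 1, 1, 2, 3, 4):
[0x01, 0xff, 0x0100, 0xffffff, 0x01000000].forEach((n) => {
    console.log(n.toString(16), numOfBytes(n), getNumChunks(n));
});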

understandment: byte order in protocol specification (gzip)

I'm trying to understand the gzip specification (http://www.zlib.org/rfc-gzip.html), especially section 2, Overall conventions:
Bytes stored within a computer do not have a "bit order", since they are always treated as a unit. However, a byte considered as an integer between 0 and 255 does have a most- and least-significant bit, and since we write numbers with the most-significant digit on the left, we also write bytes with the most-significant bit on the left. In the diagrams below, we number the bits of a byte so that bit 0 is the least-significant bit, i.e., the bits are numbered:
+--------+
|76543210|
+--------+
This document does not address the issue of the order in which bits of a byte are transmitted on a bit-sequential medium, since the data format described here is byte- rather than bit-oriented.
Within a computer, a number may occupy multiple bytes. All multi-byte numbers in the format described here are stored with the least-significant byte first (at the lower memory address). For example, the decimal number 520 is stored as:
0 1
+--------+--------+
|00001000|00000010|
+--------+--------+
^ ^
| |
| + more significant byte = 2 x 256
+ less significant byte = 8
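As a quick check of this little-endian layout in Node.js (Buffer.writeUInt16LE is standard Buffer API; the snippet is purely illustrative):
const b = Buffer.alloc(2);
b.writeUInt16LE(520, 0);
console.log(b[0], b[1]); // 8 2, matching the diagram above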
The problem I have is that I'm not sure how to calculate the length of the FEXTRA header:
+---+---+=================================+
| XLEN |...XLEN bytes of "extra field"...| (more-->)
+---+---+=================================+
If I have one (sub)field with a string of 1600 bytes (characters), then my complete FEXTRA length should be 1600 (payload) + 2 (SI1 & SI2, the subfield ID), right?
But the length bytes are set to 73 & 3 and I am not sure why.
Can someone clarify how I can calculate the complete FEXTRA length from the two length bytes?
I'm using Node.js for the operations on the .tgz/.gz file.
Demo code:
const fs = require("fs");
//const bitwise = require("bitwise");

// http://www.zlib.org/rfc-gzip.html
// http://www.onicos.com/staff/iz/formats/gzip.html
// https://de.wikipedia.org/wiki/Gzip
// https://dev.to/somedood/bitmasks-a-very-esoteric-and-impractical-way-of-managing-booleans-1hlf
// https://www.npmjs.com/package/bitwise
// https://stackoverflow.com/questions/1436438/how-do-you-set-clear-and-toggle-a-single-bit-in-javascript

fs.readFile("./test.gz", (err, bytes) => {
    if (err) {
        console.log(err);
        process.exit(100);
    }

    console.log("bytes: %d", bytes.length);

    let header = bytes.slice(0, 10);
    let flags = header[3];
    let eFlags = header[8];
    let OS = header[9];

    console.log("Is gzip file:", header[0] === 31 && header[1] === 139);
    console.log("compress method:", header[2] === 8 ? "deflate" : "other");
    console.log("M-Date: %d%d%d%d", bytes[4], bytes[5], bytes[6], bytes[7]);
    console.log("OS", OS);
    console.log("flags", flags);
    console.log();

    // | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
    // +---+---+---+---+---+---+---+---+---+---+
    // |ID1|ID2|CM |FLG|     MTIME     |XFL|OS | (more-->)
    // +---+---+---+---+---+---+---+---+---+---+
    //
    // |10 |11 |
    // +---+---+=================================+
    // | XLEN |...XLEN bytes of "extra field"...| (more-->)
    // +---+---+=================================+
    // (if FLG.FEXTRA set)
    //
    // bit 0   FTEXT
    // bit 1   FHCRC
    // bit 2   FEXTRA
    // bit 3   FNAME
    // bit 4   FCOMMENT
    // bit 5   reserved
    // bit 6   reserved
    // bit 7   reserved

    // bitwise operations on the header flags
    const FLAG_RESERVED_3 = (bytes[3] >> 7) & 1;
    const FLAG_RESERVED_2 = (bytes[3] >> 6) & 1;
    const FLAG_RESERVED_1 = (bytes[3] >> 5) & 1;
    const FLAG_COMMENT = (bytes[3] >> 4) & 1;
    const FLAG_NAME = (bytes[3] >> 3) & 1;
    const FLAG_EXTRA = (bytes[3] >> 2) & 1;
    const FLAG_CRC = (bytes[3] >> 1) & 1;
    const FLAG_TEXT = (bytes[3] >> 0) & 1;

    console.log("FLAG_RESERVED_3", FLAG_RESERVED_3);
    console.log("FLAG_RESERVED_2", FLAG_RESERVED_2);
    console.log("FLAG_RESERVED_1", FLAG_RESERVED_1);
    console.log("FLAG_COMMENT", FLAG_COMMENT);
    console.log("FLAG_NAME", FLAG_NAME);
    console.log("FLAG_EXTRA", FLAG_EXTRA);
    console.log("FLAG_CRC", FLAG_CRC);
    console.log("FLAG_TEXT", FLAG_TEXT);
    console.log();

    if (FLAG_EXTRA) {
        let len1 = bytes[10];
        let len2 = bytes[11];
        console.log("Extra header length", len1, len2);
    }
});
EDIT 2:
After reading it over and over and over again, I think I got it:
if (FLAG_EXTRA) {
    let len1 = bytes[10];
    let len2 = bytes[11];
    console.log("Extra header length", len1 + (len2 * 256));
}
len1 (byte 0) is a number up to 255; len2 is a multiplier for 256:
len2 * 256 + len1 = FEXTRA header length.
Can someone correct me if I'm wrong?!
Thank you guys!
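For reference, Node's Buffer has this little-endian 16-bit read built in, so the hand-rolled sum above can be cross-checked against it (a sketch reusing the bytes Buffer from the demo code):
if (FLAG_EXTRA) {
    // readUInt16LE(10) computes bytes[10] + bytes[11] * 256
    console.log("Extra header length", bytes.readUInt16LE(10));
}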

BigInteger to a Uint8Array of bytes

I need to get the bytes of a big integer in JavaScript.
I've tried a couple of big integer libraries, but the one that actually offered this function wouldn't work.
I am not quite sure how to implement this myself, given a string containing a large number, which is generally what the libraries give access to.
Is there a library that works and allows to do this?
Or is it actually not hard, and I am just missing something?
I was googling for a quick and elegant solution to this problem in JavaScript, but all I found was a method of conversion based on an intermediate hex string, which is suboptimal for sure, and that code also didn't work for me, unfortunately. So I implemented my own code and wanted to post it as an answer to my own question, but found this one.
Explanation
First of all, I will answer to the opposite question, since it is more illustrative.
Reading BigInteger from a bytes array
What is an array of bytes to us? It is a number in the base-256 numeral system, which we want to convert to the more convenient base-10 (decimal) system.
For instance, let's take an array of bytes
[AA][BB][CC][DD] (1 byte is 8 bits or 2 hexadecimal digits).
Depending on the side we start from (see https://en.wikipedia.org/wiki/Endianness), we can read it as:
(AA*1 + BB*256 + CC*256^2 + DD*256^3) in little-endian
or (DD*1 + CC*256 + BB*256^2 + AA*256^3) in big-endian.
Let's use little-endian here. So, our number encoded by the array [AA][BB][CC][DD] is:
AA + BB*256 + CC*256^2 + DD*256^3
= 170 + 187*256 + 204*65536 + 221*16777216
= 170 + 47872 + 13369344 + 3707764736
= 3721182122
Writing BigInteger to a bytes array
For writing a number into an array of bytes we have to perform the opposite operation, i.e. given a number in the decimal system, find all of its digits in the base-256 numeral system. Let's take the same number: 3721182122
To find its least significant byte (https://en.wikipedia.org/wiki/Bit_numbering#Least_significant_byte), we just divide it by 256. The remainder is that byte; the quotient carries the higher digits, so we divide the quotient by 256 again, and so on, until the quotient reaches 0:
3721182122 = 14535867*256 + 170
14535867 = 56780*256 + 187
56780 = 221*256 + 204
221 = 0*256 + 221
So, the result is [170][187][204][221] in decimal, [AA][BB][CC][DD] in hex.
Solution in JavaScript
Now, here is this algorithm encoded in Node.js with the big-integer library.
const BigInteger = require('big-integer');

const zero = BigInteger(0);
const one = BigInteger(1);
const n256 = BigInteger(256);

function fromLittleEndian(bytes) {
    let result = zero;
    let base = one;
    bytes.forEach(function (byte) {
        result = result.add(base.multiply(BigInteger(byte)));
        base = base.multiply(n256);
    });
    return result;
}

function fromBigEndian(bytes) {
    return fromLittleEndian(bytes.reverse());
}

function toLittleEndian(bigNumber) {
    let result = new Uint8Array(32);
    let i = 0;
    while (bigNumber.greater(zero)) {
        result[i] = bigNumber.mod(n256);
        bigNumber = bigNumber.divide(n256);
        i += 1;
    }
    return result;
}

function toBigEndian(bigNumber) {
    return toLittleEndian(bigNumber).reverse();
}

console.log('Reading BigInteger from an array of bytes');
let bigInt = fromLittleEndian(new Uint8Array([170, 187, 204, 221]));
console.log(bigInt.toString());

console.log('Writing BigInteger to an array of bytes');
let bytes = toLittleEndian(bigInt);
console.log(bytes);
Benchmark
I have written a small benchmark for this approach. Anybody is welcome to modify it for their own conversion method and compare it with mine.
https://repl.it/repls/EvenSturdyEquipment
Set "i" to be your BigInt's value. You can see the bytes by looking at "a" after running this:
i=11111n;n=1500;a=new Uint8Array(n);while(i>0){a[--n]=Number(i&255n);i>>=8n}
You can also extract the BigInt back out from the Uint8Array:
a.reduce((p,c)=>BigInt(p)*256n+BigInt(c))
I've got a version that works with BigInt that's supported by the browser:
const big0 = BigInt(0)
const big1 = BigInt(1)
const big8 = BigInt(8)
function bigToUint8Array(big: bigint): Uint8Array {
    if (big < big0) {
        // two's complement: round the bit width up to a whole byte,
        // then add 2^bits to fold the negative value into that range
        const bits: bigint = (BigInt(big.toString(2).length) / big8 + big1) * big8
        const prefix1: bigint = big1 << bits
        big += prefix1
    }
    let hex = big.toString(16)
    if (hex.length % 2) {
        hex = '0' + hex
    }
    const len = hex.length / 2
    const u8 = new Uint8Array(len)
    let i = 0
    let j = 0
    while (i < len) {
        u8[i] = parseInt(hex.slice(j, j + 2), 16)
        i += 1
        j += 2
    }
    return u8
}
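For instance, feeding in the number from the worked example earlier should give back its big-endian bytes:
console.log(bigToUint8Array(BigInt('3721182122'))) // Uint8Array [221, 204, 187, 170], i.e. DD CC BB AA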
I've got a BigDecimal implementation that works with sending & receiving bytes as an arbitrary-precision big decimal: https://jackieli.dev/posts/bigint-to-uint8array/

How to convert last 4 bytes in an array to an integer?

If I have a Uint8Array in JavaScript, how would I get the last four bytes and then convert them to an int? Using C# I would do something like this:
int count = BitConverter.ToInt32(array, array.Length - 4);
Is there an equivalent way to do this using JavaScript?
Access the underlying ArrayBuffer and create a new TypedArray with a slice of its bytes:
var u8 = new Uint8Array([1,2,3,4,5,6]); // original array
var u32bytes = u8.buffer.slice(-4); // last four bytes as a new `ArrayBuffer`
var uint = new Uint32Array(u32bytes)[0];
If the TypedArray does not cover the entire buffer, you need to be a little trickier, but not much:
var startbyte = u8.byteOffset + u8.byteLength - Uint32Array.BYTES_PER_ELEMENT;
var u32bytes = u8.buffer.slice(startbyte, startbyte + Uint32Array.BYTES_PER_ELEMENT);
This works in both cases.
If the bytes you want fall on the alignment boundary of your underlying buffer for the datatype (e.g., you want the 32-bit value of bytes 4-8 of the underlying buffer), you can avoid copying the bytes with slice() and just supply a byte offset to the view constructor, as in @Bergi's answer.
Below is a very-lightly-tested function that should get the scalar value of any offset you want. It will avoid copying if possible.
function InvalidArgument(msg) {
    this.message = msg || null;
}

function scalarValue(buf_or_view, byteOffset, type) {
    var buffer, bufslice, view, sliceLength = type.BYTES_PER_ELEMENT;
    if (buf_or_view instanceof ArrayBuffer) {
        buffer = buf_or_view;
        if (byteOffset < 0) {
            // negative offsets count back from the end of the buffer
            byteOffset = buffer.byteLength + byteOffset;
        }
    } else if (buf_or_view.buffer instanceof ArrayBuffer) {
        view = buf_or_view;
        buffer = view.buffer;
        if (byteOffset < 0) {
            byteOffset = view.byteOffset + view.byteLength + byteOffset;
        } else {
            byteOffset = view.byteOffset + byteOffset;
        }
        // byteOffset is now absolute within the underlying buffer
        return scalarValue(buffer, byteOffset, type);
    } else {
        throw new InvalidArgument('buf_or_view must be ArrayBuffer or have a .buffer property');
    }
    // assert buffer instanceof ArrayBuffer
    // assert byteOffset >= 0
    // assert byteOffset relative to entire buffer
    try {
        // try in-place first
        // only works if byteOffset % type.BYTES_PER_ELEMENT === 0
        return (new type(buffer, byteOffset, 1))[0];
    } catch (e) {
        // if this doesn't work, we need to copy the bytes (slice them out)
        bufslice = buffer.slice(byteOffset, byteOffset + sliceLength);
        return (new type(bufslice, 0, 1))[0];
    }
}
You would use it like this:
// positive or negative byte offset
// relative to beginning or end *of a view*
100992003 === scalarValue(u8, -4, Uint32Array)
// positive or negative byte offset
// relative to the beginning or end *of a buffer*
100992003 === scalarValue(u8.buffer, -4, Uint32Array)
Do you have an example? I think this would do it:
var result = ((array[array.length - 1]) |
(array[array.length - 2] << 8) |
(array[array.length - 3] << 16) |
(array[array.length - 4] << 24));
Nowadays if you can live with IE 11+ / Chrome 49+ / Firefox 50+, then you can use DataView to make your life almost as easy as in C#:
var u8array = new Uint8Array([0xFF, 0xFF, 0xFF, 0xFF]); // -1
var view = new DataView(u8array.buffer);
console.log("result: " + view.getInt32(0)); // big-endian read by default
Test it here: https://jsfiddle.net/3udtek18/1/
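getInt32 reads big-endian by default; pass true as the second argument for little-endian. A quick illustration:
var le = new DataView(new Uint8Array([1, 0, 0, 0]).buffer);
console.log(le.getInt32(0));       // 16777216 (big-endian)
console.log(le.getInt32(0, true)); // 1 (little-endian)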
A little inelegant, but you can do it manually based on the endianness.
Little endian:
var count = 0;
// assuming the array has at least four elements
for (var i = array.length - 1; i >= array.length - 4; i--) {
    count = (count << 8) + array[i]; // parentheses matter: + binds tighter than <<
}
Big endian:
var count = 0;
// assuming the array has at least four elements
for (var i = array.length - 4; i <= array.length - 1; i++) {
    count = (count << 8) + array[i]; // parentheses matter: + binds tighter than <<
}
This can be extended to other data lengths
Edit: Thanks to David for pointing out my typos
I'm surprised that the other answers don't use the native Buffer object, which provides a lot of this tooling in a simple, native library. It's possible that this library isn't widely used for bit packing/unpacking simply because people don't think to check here; it took me a while to find it, too. But it's the right tool for bit packing/unpacking in Node.js/JavaScript/TypeScript.
You can use it like so:
// Create a simple array with 5 elements. We'll take the last 4; you should
// expect the end value to be 1, because this is a little-endian array with
// all zeros other than the 1 in the lowest byte.
const array = [0, 1, 0, 0, 0];
// get the last 4 elements of your array and convert them to a Buffer
const buffer = Buffer.from(array.slice(-4));
// Use the native Buffer type to read the object as an (U) unsigned (LE) little-endian 32 (32-bit) integer
const value = buffer.readUInt32LE(0);
Or, more concisely:
const value = Buffer.from(array.slice(-4)).readUInt32LE();
It should be more efficient to just create a Uint32Array view on the same ArrayBuffer and access the 32-bit number directly (note that a Uint32Array view's byte offset must be a multiple of 4):
var uint8array = new Uint8Array([1, 2, 3, 4, 5, 6, 7, 8]);
var uint32array = new Uint32Array(
    uint8array.buffer,
    uint8array.byteOffset + uint8array.byteLength - 4,
    1 // one 32-bit element (4 bytes)
);
return uint32array[0];
var a = new Uint8Array(6); // the constructor requires new
a.set([1, 2, 8, 0, 0, 1]);
var i1 = a[a.length - 4];
var i2 = a[a.length - 3];
var i3 = a[a.length - 2];
var i4 = a[a.length - 1];
console.log(i1 << 24 | i2 << 16 | i3 << 8 | i4);
It's a shame there are no built-in ways to do this.
I needed to read variables of varying sizes, so based on Imortenson's answer I've written this little function, where p is the read position and s is the number of bytes to read:
function readUInt(arr, p, s) {
    var r = 0;
    for (var i = s - 1; i >= 0; i--) {
        r |= arr[p + i] << (i * 8); // little-endian assembly
    }
    return r >>> 0; // force an unsigned result
}

var value = readUInt(arr, arr.length - 4, 4);

Most efficient way to store large arrays of integers in localStorage with Javascript

*"Efficient" here basically means in terms of smaller size (to reduce the IO waiting time), and speedy retrieval/deserialization times. Storing times are not as important.
I have to store a couple of dozen arrays of integers, each with 1800 values in the range 0-50, in the browser's localStorage -- that is, as a string.
Obviously, the simplest method is to just JSON.stringify it; however, that adds a lot of unnecessary information, considering that the range of the data is well known. An average size for one of these arrays is then ~5500 bytes.
Here are some other methods I've tried (resultant size and time to deserialize 1000 times are given at the end):
zero-padding the numbers so each was 2 characters long, eg:
[5, 27, 7, 38] ==> "05270738"
base 50 encoding it:
[5, 11, 7, 38] ==> "5b7C"
just using the value as a character code (adding 32 to avoid the weird control characters at the start):
[5, 11, 7, 38] ==> "%+'F" (String.fromCharCode(37), String.fromCharCode(43) ...)
Here are my results:
size Chrome 18 Firefox 11
-------------------------------------------------
JSON.stringify 5286 60ms 99ms
zero-padded 3600 354ms 703ms
base 50 1800 315ms 400ms
charCodes 1800 21ms 178ms
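For reference, the charCodes row corresponds to something like this minimal sketch (using the offset of 32 described above; the names are mine):
var encodeChars = function(arr) {
    return String.fromCharCode.apply(null, arr.map(function(n) { return n + 32; }));
};
var decodeChars = function(str) {
    return str.split('').map(function(c) { return c.charCodeAt(0) - 32; });
};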
My question is if there is an even better method I haven't yet considered?
Update
MДΓΓБДLL suggested using compression on the data, so I combined this LZW implementation with the base 50 and charCode data. I also tested aroth's code (packing 4 integers into 3 bytes). I got these results:
size Chrome 18 Firefox 11
-------------------------------------------------
LZW base 50 1103 494ms 999ms
LZW charCodes 1103 194ms 882ms
bitpacking 1350 2395ms 331ms
If your range is 0-50, then you can pack 4 numbers into 3 bytes (6 bits per number). This would allow you to store 1800 numbers using ~1350 bytes. This code should do it:
window._firstChar = 48;

window.decodeArray = function(encodedText) {
    var result = [];
    var temp = [];
    for (var index = 0; index < encodedText.length; index += 3) {
        // skipping bounds checking because the encoded text is assumed to be valid
        var firstChar = encodedText.charAt(index).charCodeAt() - _firstChar;
        var secondChar = encodedText.charAt(index + 1).charCodeAt() - _firstChar;
        var thirdChar = encodedText.charAt(index + 2).charCodeAt() - _firstChar;
        temp.push((firstChar >> 2) & 0x3F); // 6 bits, 'a'
        temp.push(((firstChar & 0x03) << 4) | ((secondChar >> 4) & 0xF)); // 2 bits + 4 bits, 'b'
        temp.push(((secondChar & 0x0F) << 2) | ((thirdChar >> 6) & 0x3)); // 4 bits + 2 bits, 'c'
        temp.push(thirdChar & 0x3F); // 6 bits, 'd'
    }
    // filter out 'padding' numbers, if present; this is an extremely inefficient way to do it
    for (var index = 0; index < temp.length; index++) {
        if (temp[index] != 63) {
            result.push(temp[index]);
        }
    }
    return result;
};

window.encodeArray = function(array) {
    var encodedData = [];
    for (var index = 0; index < array.length; index += 4) {
        var num1 = array[index];
        // 63 is used as padding; it can never be a real value since the data is 0-50
        var num2 = index + 1 < array.length ? array[index + 1] : 63;
        var num3 = index + 2 < array.length ? array[index + 2] : 63;
        var num4 = index + 3 < array.length ? array[index + 3] : 63;
        encodeSet(num1, num2, num3, num4, encodedData);
    }
    return encodedData.join(''); // decodeArray expects a string
};

window.encodeSet = function(a, b, c, d, outArray) {
    // we can encode 4 numbers in 3 bytes
    var firstChar = ((a & 0x3F) << 2) | ((b >> 4) & 0x03); // 6 bits for 'a', 2 from 'b'
    var secondChar = ((b & 0x0F) << 4) | ((c >> 2) & 0x0F); // remaining 4 bits from 'b', 4 from 'c'
    var thirdChar = ((c & 0x03) << 6) | (d & 0x3F); // remaining 2 bits from 'c', 6 bits for 'd'
    // add _firstChar so that all values map to a printable character
    outArray.push(String.fromCharCode(firstChar + _firstChar));
    outArray.push(String.fromCharCode(secondChar + _firstChar));
    outArray.push(String.fromCharCode(thirdChar + _firstChar));
};
Here's a quick example: http://jsfiddle.net/NWyBx/1
Note that storage size can likely be further reduced by applying gzip compression to the resulting string.
Alternately, if the ordering of your numbers is not significant, then you can simply do a bucket-sort using 51 buckets (assuming 0-50 includes both 0 and 50 as valid numbers) and store the counts for each bucket instead of the numbers themselves. That would likely give you better compression and efficiency than any other approach.
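A minimal sketch of that bucket-count idea (assuming order is truly irrelevant and every value is an integer in 0-50; the names are mine):
function toCounts(arr) {
    var counts = new Array(51).fill(0); // one bucket per value 0-50
    arr.forEach(function(n) { counts[n]++; });
    return counts;
}
function fromCounts(counts) {
    var out = [];
    counts.forEach(function(c, n) {
        for (var i = 0; i < c; i++) out.push(n);
    });
    return out;
}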
Assuming (as in your test) that compression takes more time than the size reduction saves you, your char encoding is the smallest you'll get without bitshifting. You're currently using one byte for each number, but if they're guaranteed to be small enough you could put two numbers in each byte (see the sketch below). That would probably be an over-optimization, unless this is a very hot piece of your code.
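Packing two per byte only works if every value fits in 4 bits (0-15), which the stated 0-50 range does not; the sketch below is purely to illustrate the idea:
var packPair = function(a, b) { return (a << 4) | b; }; // a, b must each be 0-15
var unpackPair = function(p) { return [(p >> 4) & 0x0F, p & 0x0F]; };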
You might want to consider using Uint8Array or ArrayBuffer. This blog post shows how it's done. Copying its logic, here's an example, assuming you have an existing Uint8Array named arr.
function arrayBufferToBinaryString(buffer, cb) {
    // Blob replaces the deprecated BlobBuilder API used in the original post
    var blob = new Blob([buffer]);
    var reader = new FileReader();
    reader.onload = function (e) {
        cb(reader.result);
    };
    reader.readAsBinaryString(blob);
}

arrayBufferToBinaryString(arr.buffer, function(s) {
    // do something with s
});
