I have a Uint8Array with an offset that I receive from another function. This contains the data I need, but there is a bit of other stuff at the start of the buffer backing this typed array.
The actual data consists of 32-bit integers, and I'd like to have it in an Int32Array. Converting it doesn't seem to be straightforward, though; I'm currently doing it manually, the following way:
var outputBuffer = new ArrayBuffer(data.length);
var inputByteArray = new Uint8Array(outputBuffer);
// Copy the relevant bytes one by one into the new, zero-offset buffer.
for (var i = 0; i < data.length; ++i) {
    inputByteArray[i] = data[i];
}
var outputInt32Array = new Int32Array(outputBuffer);
The straightforward way of just creating a new Int32Array and passing the source Uint8Array doesn't work:
var outputInt32Array = new Int32Array(data) // data is the Uint8Array with offset
This results in an Int32Array with one element per source byte: each byte is widened to its own 32-bit element, rather than every 4 bytes being interpreted as one integer.
Trying it by passing in the offset also doesn't work; I get the error "RangeError: start offset of Int32Array should be a multiple of 4":
var outputInt32Array = new Int32Array(data.buffer, data.byteOffset, length)
Is manually copying each byte the only way to get an Int32Array out of a Uint8Array with an offset?
No, you don't need to copy the bytes from one array to the other manually. Using new Int32Array(data.buffer, …) is the best approach, but if the offset isn't a multiple of 4 you will need a second, properly aligned buffer. Even then you don't have to copy byte by byte; you can use the slice method:
var outputInt32Array = new Int32Array(data.buffer.slice(data.byteOffset), 0, length);
If you need to access Int32s on the same buffer as data, you can also use a DataView.
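If it helps, here is a minimal sketch of that DataView route, assuming the integers are stored little-endian (flip the flag if they are big-endian):
// Read 32-bit integers starting at data's (possibly unaligned) byte offset, without copying.
var view = new DataView(data.buffer, data.byteOffset, data.byteLength);
var values = [];
for (var i = 0; i + 4 <= data.byteLength; i += 4) {
    values.push(view.getInt32(i, true)); // true = little-endian (an assumption here)
}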
Related
I am trying to understand ArrayBuffer.
It seems that if we use Uint8Array directly, we get everything we need (a typed array with a fixed element type).
We can do something like this:
var a = new Uint8Array(2);
a[0] = 10;
a[1] = 20;
We can even call console.log(a.buffer), which means we still have a buffer, suggesting that the Uint8Array itself is a buffer.
So why do we need the ArrayBuffer class at all? What's the actual use case?
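For context, a small sketch of what the separation allows: one ArrayBuffer can back several typed views at once, which a single Uint8Array by itself doesn't express (the names here are just illustrative):
var raw = new ArrayBuffer(4);         // 4 bytes of raw memory, no element type
var asBytes = new Uint8Array(raw);    // view the same memory as 4 individual bytes
var asInts = new Int32Array(raw);     // ...or as a single 32-bit integer
asBytes[0] = 0xff;
console.log(asInts[0]);               // the byte write is visible through the other view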
I have an ArrayBuffer object that I need to convert to a String and then to JSON, but I can't get the value of the [[Int8Array]] out of the object, even though it is clearly there.
I've tried all variations on this, but they all return undefined
console.log(result);//Returns the array buffer
//Following methods all return undefined?
console.log(result["[[Int8Array]]"]);
console.log(result[[[Int8Array]]]);
console.log(result[[["Int8Array"]]]);
console.log(result[Int8Array]);
console.log(result["Int8Array"]);
How can I get all the Int8Array or Uint8Array values that are clearly available in the object?
You can use TextDecoder, which accepts an ArrayBuffer (as well as a Uint8Array) directly, so you don't have to deal with Uint8Arrays yourself:
var str = new TextDecoder().decode(arrayBuffer);
var json = JSON.parse(str);
If you want to get straight to JSON:
var json = await new Response(arrayBuffer).json();
You need to instantiate a new Uint8Array over the buffer to read its values; you can't access them directly through your ArrayBuffer instance.
var buf = new ArrayBuffer(8);
var uint8view = new Uint8Array(buf); // a byte-level view over the buffer
console.log(uint8view);
JSFiddle : https://jsfiddle.net/v8m7pjqb/
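If the end goal is JSON, one approach (a sketch, assuming result is the ArrayBuffer in question) is to copy the view into a plain array first:
var bytes = new Uint8Array(result);   // view the ArrayBuffer's contents as bytes
var plain = Array.from(bytes);        // a plain number array, safe to serialize
var json = JSON.stringify(plain);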
How do I convert an array of bytes into an ArrayBuffer in Nashorn? I am trying to insert binary data into a pure JavaScript environment (i.e., it doesn't have access to Java.from or Java.to) and so would like to create an instance out of an array of bytes.
Looks like I was going about this the wrong way. It made more sense to convert it into a Uint8Array, since what I'm sending in is an array of bytes.
I created the following function:
function byteToUint8Array(byteArray) {
var uint8Array = new Uint8Array(byteArray.length);
for(var i = 0; i < uint8Array.length; i++) {
uint8Array[i] = byteArray[i];
}
return uint8Array;
}
This will convert an array of bytes (so byteArray is actually of type byte[]) into a Uint8Array.
I think you're right about using a Uint8Array, but this code might be preferable:
function byteToUint8Array(byteArray) {
var uint8Array = new Uint8Array(byteArray.length);
uint8Array.set(Java.from(byteArray));
return uint8Array;
}
Also, if you really need an ArrayBuffer you can use uint8Array.buffer.
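A quick usage sketch, assuming Java interop is available at the call site (the byte values are arbitrary); either version of byteToUint8Array above should accept the byte[]:
var ByteArray = Java.type("byte[]");
var bytes = new ByteArray(4);
bytes[0] = 1; bytes[1] = 2; bytes[2] = 3; bytes[3] = 4;
var u8 = byteToUint8Array(bytes);
console.log(u8.buffer.byteLength); // 4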
I have a 4-byte ArrayBuffer and I assigned a number at index 0 using a DataView. When I read the value back through the DataView it is correct, but it is not correct when I read it through a typed array. Can anyone help with this? Here is the code:
var buffer = new ArrayBuffer(4);
var dataview = new DataView(buffer);
dataview.setUint32(0,5000);
var unint32 = new Uint32Array(buffer);
console.log(unint32[0]); //2282946560 instead of 5000
console.log(dataview.getUint32(0)); //shows correctly 5000
That's because you're using the wrong endianness with setUint32: the DataView methods write big-endian by default, while the typed array reads the bytes in your hardware's native order, which is little-endian on most machines. You need to tell the DataView to store a little-endian representation.
You can see this more easily using hexadecimal values. Now, 5000 === 0x1388, while 2282946560 = 0x88130000. Can you see the pattern here?
Try this instead:
dataview.setUint32(0, 5000, true);
var unint32 = new Uint32Array(buffer);
console.log(unint32[0]); // 5000, yay!
As apsillers pointed out, if you're going to use dataview to retrieve the value too, you'll also have to use dataview.getUint32(0, true).
As a last word, if you just need to work with Uint32 numbers, forget about the DataView and use the typed array right away:
var buffer = new ArrayBuffer(4);
var unint32 = new Uint32Array(buffer);
unint32[0] = 5000;
You'll never get it wrong with this.
On the other hand, if you need to fill an ArrayBuffer with raw binary data, you may want to check the endianness of your hardware with this little trick.
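One common version of that check (not necessarily the exact snippet behind the link) looks like this:
// true on little-endian hardware: the low byte of the 32-bit value 1 comes first in memory
var isLittleEndian = new Uint8Array(new Uint32Array([1]).buffer)[0] === 1;
console.log(isLittleEndian);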
What is the preferable way of appending/combining ArrayBuffers?
I'm receiving and parsing network packets with a variety of data structures. Incoming messages are read into ArrayBuffers. If a partial packet arrives I need to store it and wait for the next message before re-attempting to parse it.
Currently I'm doing something like this:
function appendBuffer( buffer1, buffer2 ) {
var tmp = new Uint8Array( buffer1.byteLength + buffer2.byteLength );
tmp.set( new Uint8Array( buffer1 ), 0 );
tmp.set( new Uint8Array( buffer2 ), buffer1.byteLength );
return tmp.buffer;
}
Obviously you can't get around having to create a new buffer, since ArrayBuffers are of fixed length, but is it necessary to initialize typed arrays? Upon arrival I just want to be able to treat the buffers as buffers; types and structures are of no concern.
Why not use a Blob? (I realize it might not have been available at the time.)
Just create a Blob with your data, like var blob = new Blob([array1, array2, string, ...]), and turn it back into an ArrayBuffer (if needed) using a FileReader (see this).
Check this: What's the difference between BlobBuilder and the new Blob constructor?
And this: MDN Blob API
EDIT:
I wanted to compare the efficiency of these two methods (Blobs, and the method used in the question) and created a JSPerf : http://jsperf.com/appending-arraybuffers
Seems like using Blobs is slower (in fact, I guess it's the use of FileReader to read the Blob that takes the most time). So now you know ;)
Maybe it would be more efficient when there are more than 2 ArrayBuffers (like reconstructing a file from its chunks).
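In newer environments you can also skip the FileReader; a minimal sketch, assuming Blob.prototype.arrayBuffer() is available (it wasn't when this answer was written):
// Let the Blob do the concatenation, then read the combined bytes back out.
async function appendBuffers(buffer1, buffer2) {
    var blob = new Blob([buffer1, buffer2]);
    return await blob.arrayBuffer();
}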
function concat (views: ArrayBufferView[]) {
    // First pass: total size in bytes of all the views.
    let length = 0
    for (const v of views)
        length += v.byteLength

    // Second pass: copy each view's bytes into one contiguous Uint8Array.
    let buf = new Uint8Array(length)
    let offset = 0
    for (const v of views) {
        const uint8view = new Uint8Array(v.buffer, v.byteOffset, v.byteLength)
        buf.set(uint8view, offset)
        offset += uint8view.byteLength
    }
    return buf
}
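A quick usage sketch (the sample values are arbitrary):
// Views from different buffers, and of different element types, can be combined.
const a = new Uint8Array([1, 2, 3, 4])
const b = new Int32Array([5])
const combined = concat([a, b])
console.log(combined.byteLength) // 8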
It seems you've already concluded that there is no way around creating a new array buffer. However, for performance's sake, it could be beneficial to append the contents of the buffer to a standard array object, then create a new array buffer or typed array from that.
var data = [];

function receive_buffer(buffer) {
    // View the incoming ArrayBuffer as bytes so it can be indexed.
    var bytes = new Uint8Array(buffer);
    var i, len = data.length;

    for (i = 0; i < bytes.length; i++)
        data[len + i] = bytes[i];

    if (buffer_stream_done())
        callback(new Uint8Array(data));
}
Most JavaScript engines will already have some space set aside for dynamically allocated memory. This method will use that space instead of creating numerous new memory allocations, which can be a performance killer inside the operating system kernel. On top of that, you'll also shave off a few function calls.
A second, more involved option would be to allocate the memory beforehand.
If you know the maximum size of any data stream then you could create an array buffer of that size, fill it up (partially if necessary) then empty it when done.
Finally, if performance is your primary goal, and you know the maximum packet size (instead of the entire stream) then start out with a handful of array buffers of that size. As you fill up your pre-allocated memory, create new buffers between network calls -- asynchronously if possible.
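A minimal sketch of the pre-allocation idea, with hypothetical names (MAX_STREAM_BYTES stands in for whatever upper bound you actually know):
var MAX_STREAM_BYTES = 64 * 1024;
var pool = new Uint8Array(MAX_STREAM_BYTES);
var written = 0;

function receive_chunk(chunk) {       // chunk is a Uint8Array
    pool.set(chunk, written);         // copy into the pre-allocated space
    written += chunk.byteLength;
}

function finished_stream() {
    return pool.subarray(0, written); // a view over only the bytes actually received
}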
You could always use DataView (http://www.khronos.org/registry/typedarray/specs/latest/#8) rather than a specific typed array, but as has been mentioned in the comments to your question, you can't actually do much with ArrayBuffer on its own.