I'm not sure if I understand the concept of Uint8Array correctly, but I'm trying to take any given data (image/png, jpg, gif, text/html, json, js, css, less, or any other type, including binary octet streams) and create a Uint8Array from it.
So, for any given data, how can I convert it to make this possible?
var value = new Uint8Array([2, 4, 6, 8]);
Obviously the numbers in that array are hardcoded random values, but the idea is that I want that part to be the data I'm trying to convert.
If you can get the data as a Base64 string, MDN's article on Base64 encoding and decoding can help. In short, use its base64DecToArr function:
var myArray = base64DecToArr("QmFzZSA2NCDigJQgTW96aWxsYSBEZXZlbG9wZXIgTmV0d29yaw=="); // "Base 64 \u2014 Mozilla Developer Network"
var myBuffer = myArray.buffer; // the underlying ArrayBuffer of the same bytes
alert(myBuffer.byteLength);
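If the data isn't Base64 to begin with, you usually don't need to build the array by hand: most browser APIs can hand you the raw bytes directly. A minimal sketch, assuming the data is reachable by URL or available as a File/Blob (the URL and the file variable here are placeholders):
// From a URL: works for png, jpg, html, json, js, css or raw binary alike
fetch('/some/resource').then(function (response) {
  return response.arrayBuffer();
}).then(function (buffer) {
  var bytes = new Uint8Array(buffer); // same shape as the hardcoded example above
  console.log(bytes.length, bytes[0]);
});

// From a File/Blob, e.g. out of an <input type="file">:
var reader = new FileReader();
reader.onload = function () {
  var bytes = new Uint8Array(reader.result); // reader.result is an ArrayBuffer here
};
reader.readAsArrayBuffer(file); // "file" is assumed to come from the input element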
I am using JavaScript to convert a string to a JSON object. Here is my code:
const json = '{"result":true, "count":42,"groupID": 80000000000000809}';
const obj = JSON.parse(json);
console.log(obj);
The result is:
Object { result: true, count: 42, groupID: 80000000000000820 }
but the required output is:
{ result: true, count: 42, groupID: 80000000000000809 }
Why is the groupID value changing during the conversion? Please help me resolve this.
Thanks for your responses. I have found a solution for this:
const json = '{"result":true, "count":42,"groupID": 80000000000000000809}';
const obj = JSON.parse(json.replace(/("[^"]*"\s*:\s*)(\d{17,})/g, '$1"$2"'));
console.log(obj);
In the code above I used the string replace method to quote the big integer as a string before parsing, so JSON.parse keeps it intact.
The integer you are trying to parse is too large. JavaScript's Number type can only represent integers exactly up to 53 bits, so the maximum safe integer value is ±9,007,199,254,740,991. Any larger and you'll need to use a BigInt. See MDN's BigInt documentation for how to work with large integers in JavaScript.
I would recommend changing groupID in your API response to a string rather than an integer.
If you can't do this, then you could also try using a JSON parsing library that can handle BigInt types, e.g. json-bigint.
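For example, with json-bigint the parse might look like this (a sketch; the useNativeBigInt option name is from the json-bigint README as I recall it, so double-check against the current docs):
const JSONbig = require('json-bigint')({ useNativeBigInt: true });
const obj = JSONbig.parse('{"result":true, "count":42, "groupID": 80000000000000809}');
console.log(obj.groupID.toString()); // "80000000000000809", preserved exactly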
You can always check Number.MAX_SAFE_INTEGER (note: a property, not a function). Let's compare it with the number you're working with:
9007199254740991
80000000000000826
Your number appears to be over the maximum integer that JavaScript can represent exactly. I tested with a few other numbers, and they have similar problems. From the documentation:
The Number.MAX_SAFE_INTEGER constant represents the maximum safe integer in JavaScript (2^53 - 1).
For larger integers, consider using BigInt. (Source: MDN Web Docs: Number.MAX_SAFE_INTEGER)
You can always use a string, though, since it is likely that you'll actually be getting this JSON from a web server rather than defining it locally:
const json = '{"result":true, "count":42,"groupID": "80000000000000809"}';
const obj = JSON.parse(json);
console.log(obj);
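If you later need to do arithmetic on the ID, the quoted string converts losslessly to a native BigInt:
const id = BigInt(obj.groupID); // 80000000000000809n
console.log((id + 1n).toString()); // "80000000000000810"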
Take the following snippet:
const arr = [1.1, 2.2, 3.3]
const arrBuffer = (Float32Array.from(arr)).buffer
How would one cast this ArrayBuffer to a SharedArrayBuffer?
const sharedArrBuffer = ...?
Note that both ArrayBuffer and SharedArrayBuffer are raw backing memory that you only interact with through a view such as a typed array (like Float32Array, in your example). Array buffers represent memory allocations and can't be "cast", only viewed through a typed array.
If you have one typed array already, and need to copy it into a new SharedArrayBuffer, you can do that with set:
// Create a shared float array big enough for 256 floats
let sharedFloats = new Float32Array(new SharedArrayBuffer(1024));
// Copy floats from an existing array into this buffer;
// existingArray can be a typed array or a plain JavaScript array
let existingArray = [1.1, 2.2, 3.3]; // e.g. the array from the question
sharedFloats.set(existingArray, 0);
(In general, you can have a single array buffer and interact with it through multiple "typed lenses", so you can effectively cast an array buffer into different types, like Float32 and Uint8. But you can't cast an ArrayBuffer to a SharedArrayBuffer; you'll need to copy its contents.)
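To make the "typed lenses" point concrete, here is a minimal sketch: one buffer, two views over the same eight bytes:
const lensBuffer = new ArrayBuffer(8);          // 8 raw bytes
const asFloats = new Float32Array(lensBuffer);  // viewed as 2 floats
const asBytes = new Uint8Array(lensBuffer);     // viewed as 8 bytes
asFloats[0] = 1.5;
console.log(asBytes); // the IEEE-754 byte pattern of 1.5 shows up in the byte view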
I read a file using FileReader.readAsArrayBuffer and then do something like this:
var compressedData = pako.gzip(new Uint8Array(this.result));
var blob1 = new Blob([compressedData]); // size = 1455338 bytes
var blob2 = new Blob(compressedData); // size = 3761329 bytes
As an example: if result has 4194304 bytes, it will be 1455338 bytes after compression. But for some reason the Uint8Array needs to be wrapped in an array. Why is this?
Cf. the documentation for the Blob constructor:
https://developer.mozilla.org/en-US/docs/Web/API/Blob/Blob
[the first argument] is an Array of ArrayBuffer, ArrayBufferView, Blob, DOMString objects, or a mix of any of such objects, that will be put inside the Blob. DOMStrings are encoded as UTF-8.
I'm not sure how it works under the hood, but basically the constructor expects an array of things it will pack into the BLOB. So, in the first case, you're constructing a BLOB of a single part (i.e. your ArrayBuffer), whereas in the second you're constructing it from 1455338 parts (i.e. each byte separately).
Since the documentation says the Blob parts can only be buffers, blobs, or strings, it probably ends up converting each of the byte values inside your typed array into UTF-8 strings, which means instead of using 1 byte per number, it uses 1 byte per decimal digit (the ratio of the two result sizes seems to support this, since single byte values are 1-3 digits long and the larger Blob is about 2.5 times the size of the smaller). Not only is that wasteful, I'm pretty sure it also renders your gzipped data unusable.
So, bottom line is, the first version is the correct way to go.
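A quick size check makes the difference visible (a small sketch; the 5 comes from each byte being stringified when the typed array is passed unwrapped):
var bytes = new Uint8Array([1, 22, 77]);
console.log(new Blob([bytes]).size); // 3: the three raw bytes
console.log(new Blob(bytes).size);   // 5: the text "1" + "22" + "77"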
Unfortunately, the MDN article is almost wrong here, and at best misleading.
From the specs:
The Blob() constructor can be invoked with the parameters below:
A blobParts sequence
which takes any number of the following types of elements, and in any order:
BufferSource elements.
Blob elements.
USVString elements.
... [BlobPropertyBag, none of our business here]
So a sequence here can be a lot of things, from an Array to a Set, by way of a multi-dimensional Array.
Then the algorithm is to traverse this sequence until it finds one of the three types of elements above.
So what happens in your case is that a TypedArray can itself be converted to a sequence. This means that when you pass it directly as the parameter, the constructor will not look at its ArrayBuffer; the algorithm will traverse its content and pick up the values (here, 8-bit numbers converted to strings), which is probably not what you expected.
On the other hand, when you wrap your Uint8Array in an Array, the algorithm finds the BufferSource your Uint8Array points to, so it will use that instead (binary data, and probably what you want).
var arr = new Uint8Array(25);
arr.fill(255);
var nowrap = new Blob(arr);    // traversed as a sequence of numbers
var wrapped = new Blob([arr]); // seen as a single BufferSource
test(nowrap, 'no wrap');
test(wrapped, 'wrapped');

function test(blob, msg) {
  var reader = new FileReader();
  reader.onload = e => console.log(msg, reader.result);
  reader.readAsText(blob);
}
I have a C# application that converts a double array to a byte array and sends it to a Node.js server, where it is converted to a Buffer (as convention seems to recommend). I want to convert this buffer back into an array of the numbers originally stored in the double array. I've had a look at other questions, but they either aren't applicable or just don't work ([...buf], Array.prototype.slice.call(buf, 0), etc.).
Essentially I have a var buf which contains the data; I want this to be an array of integers. Is there any way I can do this?
Thank you.
First, you need to know WHAT kind of numbers are in the array; I'll assume they are 32-bit integers. Then create an encapsulating typed array around the buffer:
// @type {ArrayBuffer}
var myBuffer = // get the buffer from C#
// Interpret the byte buffer as a 32-bit int array
var myTypedArray = new Int32Array(myBuffer);
// And if you really want a standard JS array:
var normalArray = [];
// Push all numbers from the typed array into the plain array
normalArray.push.apply(normalArray, myTypedArray);
Note that things might get more complicated if the C# array is big-endian, but I assume it's not. According to this answer, you should be fine.
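One extra caveat for Node specifically: the buf you get there is usually a Buffer, a Uint8Array view that may sit at a non-zero offset inside a pooled ArrayBuffer, so pass the offset and length explicitly. A sketch with a stand-in buffer (assuming the offset is 4-byte aligned and the bytes are in the machine's native byte order):
// Hypothetical stand-in for the bytes received from C#
const buf = Buffer.from([1, 0, 0, 0, 2, 0, 0, 0]);
const ints = new Int32Array(buf.buffer, buf.byteOffset, buf.byteLength / 4);
console.log(Array.from(ints)); // [1, 2] on a little-endian machine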
I managed to do this with a DataView and used that to iterate over the buffer, something I'd tried before that for some reason didn't work, but does now.
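For reference, a DataView version along those lines might look like this (a sketch: bufferToDoubles is a hypothetical name, and it assumes the C# side wrote little-endian 64-bit doubles, the default on x86):
function bufferToDoubles(buf) {
  // Respect the Buffer's offset into its backing ArrayBuffer
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  const result = [];
  for (let offset = 0; offset + 8 <= buf.byteLength; offset += 8) {
    result.push(view.getFloat64(offset, true)); // true = little-endian
  }
  return result;
}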
I am using the jsmodbus npm library to retrieve 16-bit registers using its readHoldingRegisters() function.
// array of values returned by jsmodbus readHoldingRegisters for two 16-bit registers
const data = [17008, 28672]
I am using an ArrayBuffer and DataView to set and get the data in the required format:
const buffer = new ArrayBuffer(4)
const view = new DataView(buffer)
I understand that the data returned from registers always comes as 16-bit integers, each split over two 8-bit bytes, so should I write the two register values into the view as two consecutive 16-bit ints, even though the data may later be retrieved from the view as an Int16 or Float32? Is this correct?
Secondly, if I expect to retrieve the data as a signed Int16 or Float32, is it necessary to set the high word as signed and the low word as unsigned, like so:
view.setInt16(0, data[0])
view.setUint16(2, data[1])
And third: notwithstanding the need to set proper endianness, does it even matter which method you use when writing the data into the view? The order of bytes and bits in the view isn't affected by which setter you use, only by how you retrieve the data back out.
Certainly, it doesn't appear to:
view.setInt16(0, data[0])
view.setInt16(2, data[1]) // notice: setInt16
val = view.getFloat32(0)
// val = 60.109375

view.setInt16(0, data[0])
view.setUint16(2, data[1]) // notice: set*U*int16
val = view.getFloat32(0)
// val = 60.109375
Sanity check appreciated!
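For what it's worth, here is how I'd sanity-check it: for any value that fits in 16 bits, setInt16 and setUint16 store the identical bit pattern, and signedness only matters when reading the data back. A sketch with the register values above (big-endian word order assumed, which is the DataView default):
const checkBuffer = new ArrayBuffer(4);
const checkView = new DataView(checkBuffer);
checkView.setUint16(0, 17008); // high word at byte offset 0
checkView.setUint16(2, 28672); // low word at byte offset 2
console.log(checkView.getFloat32(0)); // 60.109375
// setInt16 would have stored the same bytes here, so the same float comes back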