I want to fetch some binary-encoded data and split it into two objects: half of it is a Uint32Array and half a BigInt64Array. How would I do the split without copying? The docs say the constructor makes a copy.
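One way to avoid the copy is to keep the fetched bytes in a single ArrayBuffer and create two typed-array views over it: the `(buffer, byteOffset, length)` constructor forms are views over the same memory, not copies. A minimal sketch, assuming (hypothetically) that the first half of the buffer holds the uint32 data and the second half the int64 data, with the int64 half starting at an 8-byte-aligned offset:

```javascript
// Stand-in for the fetched data; in practice: const buffer = await resp.arrayBuffer();
const buffer = new ArrayBuffer(32);
const half = buffer.byteLength / 2; // assumed layout: uint32s first, int64s second

// These constructors create views over the same memory -- no bytes are copied.
const u32 = new Uint32Array(buffer, 0, half / Uint32Array.BYTES_PER_ELEMENT);
const i64 = new BigInt64Array(buffer, half, half / BigInt64Array.BYTES_PER_ELEMENT);
// Note: BigInt64Array's byteOffset must be a multiple of 8, or the constructor throws.

// A write through one view is visible through the shared buffer: proof there is no copy.
u32[0] = 0x12345678;
```

Copying happens when you pass a typed array (rather than an ArrayBuffer) to another typed-array constructor, which is likely the case the docs describe.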
Related
I need to convert a base64 string of an image to a Uint8Array to feed into a machine learning model. I've been attempting to convert the base64 to a Uint8Array with this Node.js code:
new Uint8Array(Buffer.from(base64String, 'base64'));
Where base64String is a base64-encoded image. I've attached the base64 in this gist.
Since the image is 100x100, I would expect the Uint8Array to have 30,000 values (3 values per pixel due to RGB, and 10,000 total pixels). This is also what my machine learning model expects. However, after running that code, the Uint8Array only has 3889 values according to uintarray.length. I've tried multiple methods in different environments to get the Uint8Array and it's always 3889.
When I print the Uint8Array, I see a list of 3889 numbers from 0-255. When I convert the Uint8Array back to base64 through Buffer.from(uintarray).toString('base64'), it returns the correct base64.
Why are there only 3889 values in my Uint8Array instead of 30,000? How can I get the array in the format that I want, with three values for each pixel and 10,000 pixels?
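Note that base64 decoding yields the bytes of the encoded file itself, so the Uint8Array's length is determined entirely by the base64 string, never by the image dimensions. A quick sanity check (the helper name is mine):

```javascript
// Decoded byte count of a base64 string: 3 bytes per 4 chars, minus '=' padding.
function decodedLength(b64) {
  const padding = (b64.match(/=+$/) || [''])[0].length;
  return (b64.length / 4) * 3 - padding;
}

const bytes = new Uint8Array(Buffer.from('SGVsbG8=', 'base64'));
console.log(bytes.length === decodedLength('SGVsbG8=')); // true
```

If those 3889 bytes are a compressed format such as JPEG or PNG (which the much-smaller-than-30,000 size suggests, though I can't see the gist to confirm), you would need an image decoder to expand them into raw RGB pixel values; base64 decoding alone cannot produce them.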
I am streaming a file from a gRPC backend to my JavaScript client. The stream sends a series of Uint8Array objects. I want to combine this series of Uint8Arrays into one single ArrayBuffer (i.e. the original file's ArrayBuffer). How can I do this?
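A minimal sketch, assuming the chunks have already been collected into an array in the order the stream delivered them:

```javascript
// Concatenate a series of Uint8Array chunks into one ArrayBuffer.
function concatToArrayBuffer(chunks) {
  const total = chunks.reduce((sum, c) => sum + c.byteLength, 0);
  const out = new Uint8Array(total); // one allocation for the whole file
  let offset = 0;
  for (const chunk of chunks) {
    out.set(chunk, offset); // copy each chunk into place
    offset += chunk.byteLength;
  }
  return out.buffer; // the combined file's ArrayBuffer
}

const buf = concatToArrayBuffer([new Uint8Array([1, 2]), new Uint8Array([3, 4, 5])]);
```

Summing the lengths first means a single allocation instead of repeatedly growing a buffer as chunks arrive.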
I'm pushing the limits of HTML5 here.
I have this problem: I have a JavaScript array of a billion doubles (or ints), in any case A LOT of numbers. I want to store this in HTML5 localStorage.
You may say, hey, just use JSON.stringify. BUT JSON.stringify produces a huge 200MB string, because each number (0.03910319, for example) is stored as a string, taking a byte per character instead of just a few bytes for the whole number.
I was thinking about base64 encoding the numbers in the array first, and then applying JSON.stringify?
Or is it better, for example, to JSON.stringify and then gzip or use some compression function?
Come up with your creative ideas to encode/decode a JavaScript array of A BILLION ints/doubles efficiently into a localStorage variable.
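One concrete variant of the base64 idea, sketched in Node (in a browser you would base64 the bytes with btoa instead of Buffer): pack the numbers into a Float32Array so each one costs exactly 4 bytes, then encode the raw bytes.

```javascript
// Pack numbers as 4-byte floats and base64 the raw bytes. This is lossy for
// doubles; use Float64Array for full precision at 8 bytes per number.
function packFloats(numbers) {
  const f32 = Float32Array.from(numbers);
  return Buffer.from(f32.buffer).toString('base64');
}

function unpackFloats(b64) {
  const bytes = Uint8Array.from(Buffer.from(b64, 'base64')); // fresh buffer, offset 0
  return new Float32Array(bytes.buffer);
}

const packed = packFloats([0.5, -2, 3.25]);   // 3 numbers -> 12 bytes -> 16 base64 chars
const roundTripped = unpackFloats(packed);
```

Even packed this tightly, a billion 4-byte floats is roughly 4 GB, far beyond typical localStorage quotas, so in practice this only becomes workable with IndexedDB.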
TensorFlowJS
I looked at TensorFlowJS; my array is basically a 1-D tensor. TensorFlow has some storage capabilities for models... Maybe that is a feasible solution.
For anyone who is also dealing with this problem:
I used a Float32Array (a JavaScript typed array) for my data.
A Float32Array is easily stored in IndexedDB using https://github.com/localForage/localForage
I am sending the document as a byte array to an API that modifies the file and returns a new file as a byte array.
Is it possible to replace document that I am working with currently, with the document received from the API?
If you can convert the byte array to a base64 string, then you can use the Body.insertFileFromBase64 method (with the Replace option) to do what you want. There are lots of StackOverflow answers about converting byte arrays to base64 string. For more about the Body.insertFileFromBase64 method, see Body.
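The byte-array-to-base64 step mentioned above can be as small as this in Node (in a browser add-in you would build a binary string and call btoa instead; the helper name is mine):

```javascript
// Convert a byte array (number[] or Uint8Array) to a base64 string.
function bytesToBase64(bytes) {
  return Buffer.from(bytes).toString('base64');
}

const b64 = bytesToBase64([72, 105]); // the bytes of "Hi"
```

The resulting string can then be passed to Body.insertFileFromBase64 with the Replace insert location to swap in the document returned by the API.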
I'm trying to decode a base64 string representing an image stored in a db.
I tried many libraries and solutions provided on SO, but I'm still unable to decode the image correctly. In particular, using the following code:
var img = new Buffer(b64, 'base64').toString('ascii');
I get a similar binary representation, except for the first bytes.
This is the initial part of the base64 string:
/9j/4RxVRXhpZgAASUkqAAgAAAANADIBAgAUAAAAqgAAACWIBAABAAAAiwYAABABAgAIAAAAvgAA
Here are the first 50 bytes of the original image:
ffd8ffe11c5545786966000049492a00080000000d003201020014000000aa00000025880400010000008b06000010010200
And here are the first 50 bytes of the string I get with javascript:
7f587f611c5545786966000049492a00080000000d0032010200140000002a00000025080400010000000b06000010010200
As you can see, the two strings are identical except for the first 3 bytes and a few bytes in the middle.
Can somebody help me understand why this is happening and how to solve it? Thanks
The problem is that you're trying to convert binary data to ASCII, which, more likely than not, will mean loss of data, since ASCII only covers the values 0x00-0x7F. When the conversion takes place, the high bit of every byte above 0x7F is stripped (0xFF becomes 0x7F, 0xD8 becomes 0x58, and so on), which is exactly the corruption in your output.
If you do this instead, you can see the data matches your first 50 bytes of the original image:
console.log(Buffer.from(b64, 'base64').toString('hex'));
But if you want to keep the binary data intact, just keep it as a Buffer instance without calling .toString(), as many functions that work with binary data can deal with Buffers (e.g. fs core module).