Is there a way to read binary data in JavaScript? - javascript

I would like to inject binary data into an object in JavaScript. Is there a way to do this?
e.g.
var binObj = new BinaryObject('101010100101011');
Something to that effect. Any help would be great.

You can use parseInt:
var bin = parseInt('10101010', 2);
The second argument (the radix) is the base of the input.
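For example (plain parseInt, nothing else assumed):
parseInt('10101010', 2); // 170
parseInt('ff', 16);      // 255
parseInt('777', 8);      // 511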

There's this binary ajax library that is explained here and there's also another binary parser library that can handle more data types.
You could also look at Google Gears which has a binary Blob object or take a look at making a javascript wrapper for Flash which provides a native ByteArray implementation.
Or... you can sit and wait and hope that all these things become standard :)

On all recent browsers you can do:
xhr.overrideMimeType('text/plain; charset=x-user-defined');
And retrieve the response as a string. To get the byte value at a given position you then do
data.charCodeAt(pos) & 0xff;
On the nightly builds of Firefox and Chrome you can retrieve the value as an ArrayBuffer
xhr.responseType = "arraybuffer";
The result is then accessible there
xhr.mozResponseArrayBuffer // Firefox
xhr.response // Chrome
Then you can apply a TypedArray (e.g. Int32Array) or a DataView on the buffer to read the result.
In order to make this process easier, I made a jQuery Patch to support the binary type and a DataView Wrapper that uses the latest available reading feature of the browser.
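A rough end-to-end sketch combining both approaches (the URL is a placeholder, and the feature check is only illustrative):
var xhr = new XMLHttpRequest();
xhr.open('GET', '/some/binary/resource', true); // placeholder URL

if ('responseType' in xhr) {
    // Modern path: ask for an ArrayBuffer and wrap it in a DataView
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
        var view = new DataView(xhr.response); // older Firefox nightlies: xhr.mozResponseArrayBuffer
        var firstByte = view.getUint8(0);
    };
} else {
    // Legacy path: force a raw string and mask each char code down to a byte
    xhr.overrideMimeType('text/plain; charset=x-user-defined');
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            var data = xhr.responseText;
            var firstByte = data.charCodeAt(0) & 0xff;
        }
    };
}
xhr.send();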

JavaScript has very little support for raw binary data. In general, it's best to live within this restriction. However, there's a trick I'm considering trying for a project of mine that involves manipulating huge bitmaps to do set operations in an OLAP database. This won't work in IE.
The basic idea is this: coerce the binary data into a PNG before sending it to JavaScript. For example, a bitmap might be a black-and-white PNG, with black being 100% transparent. Then use Canvas operations to do bitwise data manipulation.
The HTML5 Canvas includes a pixel array type, which gives access to the bytes of an image. Canvas also supports compositing operations such as XOR, and lighten and darken should be able to do AND and OR. These operations are likely to be well optimized in any browser that supports them, probably using the GPU.
If anyone tries this, please let me know how well it works.
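A minimal sketch of the pixel-access half of that idea (the image URL is a placeholder, the image must be same-origin, and the compositing step is only hinted at):
var img = new Image();
img.onload = function () {
    var canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    // data is a flat array of RGBA bytes, four per pixel
    var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
    var alphaOfFirstPixel = pixels[3];
    // bitwise combinations would use compositing, e.g. ctx.globalCompositeOperation = 'xor'
};
img.src = 'bitmap.png'; // placeholder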

This goes the other way around (decimal to binary). The pow and logarithm helpers below could just as well be replaced by methods of the Math class. I don't know if it is the fastest way, but it's about as fast as the Windows Calculator in its "Programmer" view.
AlertFormatedBin();

// Print 31 as an 8-bit binary string, grouped in blocks of four digits.
function AlertFormatedBin()
{
    var vals = decToBinArr(31, 8);
    var i;
    var s = "";
    // Print any leading digits that do not fill a whole group of four...
    var mod = vals.length % 4;
    for (i = 0; i < mod; i++)
    {
        s += vals[i];
    }
    if (i > 0)
        s += " ";
    // ...then the remaining digits in groups of four.
    var j = i;
    for (; i < vals.length; i++)
    {
        s += vals[i];
        if (i - j !== 0 && (i + 1 - j) % 4 === 0 && i + 1 < vals.length)
        {
            s += " ";
        }
    }
    alert(s);
}

// Convert a non-negative decimal number to an array of binary digits; for
// values greater than 1 the result is left-padded with zeros to minSize digits.
function decToBinArr(dec, minSize)
{
    var mod = dec % 2;
    var r = [];
    if (dec > 1)
    {
        dec -= mod;
        // Index of the highest bit that could be set.
        var bd = log2RoundedDown(dec);
        if (minSize && minSize - 1 > bd)
            bd = minSize - 1;
        var i;
        for (i = bd; i > 0; i--)
        {
            var nxt = pow(2, i);
            if (dec >= nxt)
            {
                r[i] = 1;
                dec -= nxt;
            }
            else
            {
                r[i] = 0;
            }
        }
    }
    r[0] = mod;
    r.reverse();
    return r;
}

// Binary logarithm rounded down. (The original called this squareRootRoundedDown,
// but what it computes is floor(log2(dec)).)
function log2RoundedDown(dec)
{
    if (dec < 2)
        return 0;
    var x = 2;
    var i;
    for (i = 1; true; i++)
    {
        if (x >= dec)
        {
            i = (x === dec) ? i : i - 1;
            break;
        }
        x = x * 2;
    }
    return i;
}

// Integer exponentiation. (The original returned 0 for exp == 0; it should be 1.)
function pow(b, exp)
{
    if (exp === 0)
        return 1;
    var r = b;
    for (var i = 1; i < exp; i++)
        r = r * b;
    return r;
}

In the near future you will be able to use ArrayBuffers and File API Blobs.
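For reference, the basic typed-array usage over an ArrayBuffer looks like this (standard API, nothing project-specific):
var buf = new ArrayBuffer(4);     // 4 bytes of raw memory
var bytes = new Uint8Array(buf);  // view the buffer as unsigned bytes
bytes[0] = 0xAA;                  // 10101010
bytes[1] = 0x55;                  // 01010101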

As @Zippy pointed out in a comment, the more recent (late 2016) solutions include:
DataView (Standard; a short sketch follows this list)
jDataView (polyfill/extension of DataView)
jBinary (built on jDataView)
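For example, a minimal DataView sketch over an ArrayBuffer (the values are arbitrary):
var buf = new ArrayBuffer(8);
var view = new DataView(buf);
view.setUint32(0, 0xDEADBEEF);          // big-endian by default
view.setUint16(4, 0xCAFE, true);        // little-endian when the flag is true
view.getUint8(0).toString(16);          // "de"
view.getUint16(4, true).toString(16);   // "cafe"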

JavaScript doesn't provide a mechanism to load an object in any form other than simple strings.
The closest you can do is serialize the object to a string, optionally encrypt/compress it, send it to the browser, then decrypt/decompress if necessary, check it for sanity, eval() and pray().
Instead of using eval (which is not exactly safe), you can use your own format (or XML or JSON, for which there are plenty of libraries) and parse it yourself.
As a side note, if you want this for obfuscation, it is too easy to circumvent once the browser has the usable data (after decrypting/decompressing).
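For instance, a JSON payload parsed without eval (the payload shape here is just an example):
var payload = '{"width":8,"bits":[1,0,1,0,1,0,1,0]}';
var obj = JSON.parse(payload);  // safer alternative to eval()
obj.bits.join('');              // "10101010"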

Percent encoding can unescape strings into a direct 1<->1 representation of any binary blob, and it is also portable across browsers:
unescape("%uFFFF%uFFFF%uFFFF");
Most browser exploits use this technique for embedding shellcode into HTML pages; it works very well for creating arbitrary binary streams.
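A small sketch of that technique (the byte values are arbitrary):
var binString = unescape('%41%FF%00%7F');  // one character per byte
binString.length;                          // 4
binString.charCodeAt(1);                   // 255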

jBinary "makes it easy to create, load, parse, modify and save complex binary files and data structures in both browser and Node.js."
I haven't used it, but it's what I found when asking the same question asked here...

Welcome to everyone who found this older post on Google. I figured out a solution that works in Chrome as of 2019, so hopefully this is just some added feature or something most people missed.
You can use a 0b prefix on your number. It won't quite keep the binary representation, but you can easily convert it to an integer for storage. For example, you could store the binary number 1010 like this:
var binNum = 0b1010 //Stores as an integer, which would be 10
If you're curious, it also works for hexadecimal with the 0x prefix:
var binNum = 0x1010 //Stores as an integer, which would be 4112
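To display the stored integer in binary (or hex) again, Number's toString with a radix does the reverse; this is standard behaviour, nothing extra is needed:
var binNum = 0b1010;
binNum.toString(2);   // "1010"
binNum.toString(16);  // "a"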

Related

How can I store a very very large number in JavaScript?

What if I want to store a very, very large number and then display it? For example, the factorial of 200.
How can I do this using JavaScript?
I tried the normal way and the result is null or Infinity.
function fact(input) {
    if (input == 0) {
        return 1;
    }
    return input * fact(input - 1);
}
var result = fact(171);
console.log(result);
It seems JavaScript can only compute factorials up to 170 before the result overflows to Infinity, yet online big-number calculators seem able to do it.
The BigInt numeric type is coming to JavaScript; the proposal is at Stage 3 of the ECMAScript standardization process and it is already supported by major browsers.
You can use either the BigInt constructor or a numeric literal with an n appended at the end of the number.
In older environments you can use a polyfill.
function fact(input) {
    if (input == 0n) {
        return 1n;
    }
    return input * fact(input - 1n);
}
const result = fact(171n);
console.log(String(result));
Try a JavaScript-based BigInteger library. There are many to choose from, but I recommend this one: https://github.com/peterolson/BigInteger.js
Example:
var num = bigInt("9187239176928376598273465972639458726934756929837450")
.plus("78634075162394756297465927364597263489756289346592");

node.js change in concatenation?

I'm trying to debug some code that another programmer left for me to maintain. I've just attempted to upgrade from Node.js 5 to Node.js 8, and some of my database queries are now coming back with key-not-found errors.
We're using Couchbase for the database, and our document keys are "encrypted" for security. So we may have a key that starts like "User_myemail#gmail.com", but we encrypt it using the following method:
function _GetScrambledKey(dbKey)
{
    // select the encryption key based on the db key content
    var eKeyIndex = CalculateEncryptionKeyIndex(dbKey, eKeys.length);
    var sha = CalculateSHA512(dbKey + eKeyIndex);
    return sha;
}

function CalculateEncryptionKeyIndex(str, max)
{
    var hashBuf = CalculateSHA1(str);
    var count = 0;
    for (var i = 0; i < hashBuf.length; i++)
    {
        count += hashBuf[i];
        count = count % max;
    }
    return count;
}
We then query couchbase for the document with
cb.get("ECB_"+encryptedKey, opts, callback);
In Node 5 this worked, but in Node 8 some documents come back fine and others come back as missing. I output "ECB_"+encryptedKey as an int array and the results have only confused me more: they differ between Node 5 and Node 8, but only by one character right in the middle of the array.
Outputting the encryptedKey as an int array on both versions shows this
188,106,14,227,211,70,94,97,63,130,78,246,155,65,6,148,62,215,47,230,211,109,35,99,21,60,178,74,195,13,233,253,187,142,213,213,104,58,168,60,225,148,25,101,155,91,122,77,2,99,102,235,26,71,157,99,6,47,162,152,58,181,21,175
Then outputting the concatenated string, in the same way, shows slightly different results
This is the node8 output
Node8 key: 69,67,66,95,65533,106,14,65533,65533,70,94,97,63,65533,78,65533,65533,65,6,65533,62,65533,47,65533,65533,109,35,99,21,60,65533,74,65533,13,65533,65533,65533,65533,65533,65533,104,58,65533,60,65533,25,101,65533,91,122,77,2,99,102,65533,26,71,65533,99,6,47,65533,65533,58,65533,21,65533
And this is the node5 output
Node5 key: 69,67,66,95,65533,106,14,65533,65533,70,94,97,63,65533,78,65533,65533,65,6,65533,62,65533,47,65533,65533,109,35,99,21,60,65533,74,65533,13,65533,65533,65533,65533,65533,65533,104,58,65533,60,65533,65533,25,101,65533,91,122,77,2,99,102,65533,26,71,65533,99,6,47,65533,65533,58,65533,21,65533
I had to run it through a diff tool to see the difference
Comparing that to the original pre-append array, it looks like the 225 has simply been dropped in Node 8. Is 225 significant? I can't see how that would be possible unless it's a bug. Does anyone have any ideas?
Looks like this was a change in V8 5.5: https://github.com/nodejs/node/issues/21278
A lot of the issues you are facing, including the concatenation, can be cleaned up using newer features from ES6 that are available in Node 8.
In general, you should avoid doing string concatenation with the + operator and use template literals instead. In your case, you should replace "ECB_"+encryptedKey with `ECB_${encryptedKey}`.
Additionally, if you want to output the integer values of this concatenated string, you are better off using .join, the spread operator (...), and the Buffer class from Node, as follows:
let encKey = `ECB_${encryptedKey}`;
let tmpBuff = Buffer.from(encKey);
let buffArrVals = [...tmpBuff];
console.log(buffArrVals.join(','));
Also, if you can help it, you really should avoid using var inside function blocks as in your sample code. var declarations are hoisted and scoped to the enclosing function rather than the block they appear in, which is seldom what you intend. From Node 6 onward the recommendation is to use let or const so that variables stay scoped to the block in which they are declared.
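If the underlying problem is that raw SHA bytes are being mangled by implicit string conversion, one further option (assuming CalculateSHA512 returns a Node Buffer, which the question does not show) is to keep the key as a Buffer and only convert it once, deliberately, e.g. to hex. This is a sketch of the idea, not the original code, and changing the key format would of course not match keys already stored:
// Hypothetical: sha512Buf stands in for the Buffer returned by CalculateSHA512
const sha512Buf = Buffer.from('bc6a0ee3d346', 'hex');  // truncated example bytes
const keyBuf = Buffer.concat([Buffer.from('ECB_'), sha512Buf]);
const keyHexString = 'ECB_' + sha512Buf.toString('hex');  // lossless string form
console.log([...keyBuf].join(','));  // raw byte values, no 65533 replacement characters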

Is binary hashing possible with CryptoJS?

I want to create an HOTP client using JavaScript, similar to SpeakEasy.
The above library is intended for server-side JavaScript and uses Node.js.
I want to do the same thing in front-end JavaScript in a browser, but I haven't been able to get CryptoJS to achieve this behavior.
var key = "abc";
var counter = "123";
// create an octet array from the counter
var octet_array = new Array(8);
var counter_temp = counter;
for (var i = 0; i < 8; i++) {
var i_from_right = 7 - i;
// mask 255 over number to get last 8
octet_array[i_from_right] = counter_temp & 255;
// shift 8 and get ready to loop over the next batch of 8
counter_temp = counter_temp >> 8;
}
// There is no such class called as Buffer on Browsers (its node js)
var counter_buffer = new Buffer(octet_array);
var hash = CryptoJS.HmacSHA1(key,counter_buffer);
document.write("hex value "+ hash);
document.write("hash value "+ CryptoJS.enc.Hex.stringify(hash));
I know this is possible on a native platform like Java (Android) or Objective-C (iOS).
Here is the corresponding HOTP implementation in Objective-C, but I doubt it's possible on a web-based front end.
Also, I highly doubt that such a thing is secure in a browser, because the JavaScript is viewable in any browser. Any input or suggestions would be useful. I am doing this for a POC, and I am curious whether anyone has used HOTP on a web-based platform.
No language supports binary data strings directly in source code. You need to encode the binary data into some format such as Hex or Base64 and let CryptoJS decode it into its own internal binary format, which you can then pass to the various CryptoJS functions:
var wordArrayFromUtf = CryptoJS.enc.Utf8.parse("test");
var wordArrayFromHex = CryptoJS.enc.Hex.parse("74657374"); // "test"
var wordArrayFromB64 = CryptoJS.enc.Base64.parse("dGVzdA=="); // "test"
Other functions are:
wordArrayFromHex.toString(CryptoJS.enc.Utf8) // "test"
CryptoJS.enc.Utf8.stringify(wordArrayFromB64) // "test"
If you pass a string into a CryptoJS function (not the ones shown here), it will be assumed to be a UTF-8 encoded string. If you don't want that, you need to decode it yourself.
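Applied to the question's HOTP counter, that might look roughly like this (a sketch only: the key handling and the 8-byte big-endian counter encoding are assumptions, and note that CryptoJS.HmacSHA1 takes the message first and the key second):
var key = CryptoJS.enc.Utf8.parse("abc");  // or Hex/Base64 parse if the key is binary
var counter = 123;

// encode the counter as an 8-byte big-endian hex string, then decode it to a WordArray
var counterHex = ('0000000000000000' + counter.toString(16)).slice(-16);
var counterWords = CryptoJS.enc.Hex.parse(counterHex);

var hash = CryptoJS.HmacSHA1(counterWords, key);  // message first, key second
document.write("hex value " + CryptoJS.enc.Hex.stringify(hash));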
The code at http://caligatio.github.io/jsSHA/ works fine for SHA-512.
Drop in the .js files, then look at their test/test.html at line 515. It might look like a string to you, but it is binary hex.
So their input is binary, make no mistake. Don't get hung up on the fact that it is sitting in a big string.

What is the fastest way to read and parse a file of numerical ASCII pairs in Node.js?

I'm using Node.js to read and parse a file of pairs encoding numbers. I have a file like this:
1561 0506
1204 900
6060 44
And I want to read it as an array, like this:
[[1561,0506],[1204,900],[6060,44]]
For that, I am using a readStream, reading the file as chunks and using native string functions to do the parsing:
fileStream.on("data",function(chunk){
var newLineIndex;
file = file + chunk;
while ((newLineIndex = file.indexOf("\n")) !== -1){
var spaceIndex = file.indexOf(" ");
edges.push([
Number(file.slice(0,spaceIndex)),
Number(file.slice(spaceIndex+1,newLineIndex))]);
file = file.slice(newLineIndex+1);
};
});
That took way too much time, though (4 s for the file I need, on my machine). I see some possible reasons:
use of strings;
use of Number;
a dynamic array of arrays.
I've rewritten the algorithm without the built-in string functions, using plain loops instead, and to my surprise it became much slower! Is there any way to make it faster?
Caveat: I have not tested the performance of this solution, but it's complete so should be easy to try.
How about using the liner implementation based on the notes in this question?
Using the liner:
var fs = require('fs')
var liner = require('./liner')
var source = fs.createReadStream('mypathhere')
source.pipe(liner)
liner.on('readable', function () {
    var line
    while (line = liner.read()) {
        var parts = line.split(" ");
        edges.push([Number(parts[0]), Number(parts[1])]);
    }
})
As you can see, I also moved the edges array to be an inline constant-sized array separate from the split parts, which I'm guessing would speed up allocation. You could even try using indexOf(" ") instead of split(" "), as sketched below.
Beyond this you could instrument the code to identify any further bottlenecks.
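For what it's worth, the indexOf variant of the same loop might look like this (an untested sketch, reusing the liner and edges variables from above):
liner.on('readable', function () {
    var line
    while (line = liner.read()) {
        var spaceIndex = line.indexOf(" ")
        edges.push([
            Number(line.slice(0, spaceIndex)),
            Number(line.slice(spaceIndex + 1))
        ])
    }
})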

Appending ArrayBuffers

What is the preferable way of appending/combining ArrayBuffers?
I'm receiving and parsing network packets with a variety of data structures. Incoming messages are read into ArrayBuffers. If a partial packet arrives I need to store it and wait for the next message before re-attempting to parse it.
Currently I'm doing something like this:
function appendBuffer( buffer1, buffer2 ) {
    var tmp = new Uint8Array( buffer1.byteLength + buffer2.byteLength );
    tmp.set( new Uint8Array( buffer1 ), 0 );
    tmp.set( new Uint8Array( buffer2 ), buffer1.byteLength );
    return tmp.buffer;
}
Obviously you can't get around having to create a new buffer, since ArrayBuffers have a fixed length, but is it necessary to initialize typed arrays? Upon arrival I just want to be able to treat the buffers as buffers; types and structures are of no concern.
Why not use a Blob? (I realize it might not have been available at that time.)
Just create a Blob with your data, like var blob = new Blob([array1, array2, string, ...]), and turn it back into an ArrayBuffer (if needed) using a FileReader (see this).
Check this : What's the difference between BlobBuilder and the new Blob constructor?
And this : MDN Blob API
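A short sketch of that round trip (buffer1 and buffer2 stand for the partial ArrayBuffers from the question):
var blob = new Blob([buffer1, buffer2]);
var reader = new FileReader();
reader.onload = function () {
    var combined = reader.result;  // a single ArrayBuffer with both buffers back to back
    // re-attempt parsing the packet here
};
reader.readAsArrayBuffer(blob);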
EDIT:
I wanted to compare the efficiency of these two methods (Blobs, and the method used in the question) and created a JSPerf: http://jsperf.com/appending-arraybuffers
It seems that using Blobs is slower (in fact, I guess it's the use of FileReader to read the Blob that takes most of the time). So now you know ;)
Maybe it would be more efficient when there are more than two ArrayBuffers (like reconstructing a file from its chunks).
function concat (views: ArrayBufferView[]) {
    let length = 0
    for (const v of views)
        length += v.byteLength

    let buf = new Uint8Array(length)
    let offset = 0
    for (const v of views) {
        const uint8view = new Uint8Array(v.buffer, v.byteOffset, v.byteLength)
        buf.set(uint8view, offset)
        offset += uint8view.byteLength
    }
    return buf
}
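Usage looks something like this (the views can be any mix of typed arrays or DataViews; take .buffer on the result if you need a plain ArrayBuffer):
const combined = concat([new Uint8Array([1, 2]), new Uint8Array([3, 4, 5])])
// combined is Uint8Array [1, 2, 3, 4, 5]; combined.buffer is the underlying ArrayBuffer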
It seems you've already concluded that there is no way around creating a new array buffer. However, for performance's sake, it could be beneficial to append the contents of the buffer to a standard array object, then create a new array buffer or typed array from that.
var data = [];

function receive_buffer(buffer) {
    var i, len = data.length;
    for (i = 0; i < buffer.length; i++)
        data[len + i] = buffer[i];
    if (buffer_stream_done())
        callback(new Uint8Array(data));
}
Most JavaScript engines will already have some space set aside for dynamically allocated memory. This method will use that space instead of forcing numerous new memory allocations, which can be a performance killer inside the operating system kernel. On top of that you'll also shave off a few function calls.
A second, more involved option would be to allocate the memory beforehand.
If you know the maximum size of any data stream then you could create an array buffer of that size, fill it up (partially if necessary) then empty it when done.
Finally, if performance is your primary goal, and you know the maximum packet size (rather than the size of the entire stream), then start out with a handful of array buffers of that size. As you fill up your pre-allocated memory, create new buffers between network calls, asynchronously if possible.
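A rough sketch of the pre-allocation idea (MAX_STREAM_SIZE and the stream_done/callback hooks are placeholders):
var MAX_STREAM_SIZE = 1024 * 1024;              // placeholder upper bound
var scratch = new Uint8Array(MAX_STREAM_SIZE);  // allocated once, reused per stream
var offset = 0;

function receive_chunk(chunk) {                 // chunk is a Uint8Array
    scratch.set(chunk, offset);
    offset += chunk.byteLength;
    if (stream_done()) {                        // placeholder
        callback(scratch.subarray(0, offset));  // view over the filled portion, no copy
        offset = 0;                             // "empty" the buffer for the next stream
    }
}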
You could always use DataView (http://www.khronos.org/registry/typedarray/specs/latest/#8) rather than a specific typed array, but as has been mentioned in the comments to your question, you can't actually do much with ArrayBuffer on its own.
