Why do digest() and digest('hex') result in different outputs?

I have two pieces of code.
1ST ONE
const { createHash } = require('crypto');
const hash1 = (data) => createHash('sha256').update(data).digest('hex');
var a1 = hash1("A");
var b1 = hash1("B");
console.log(hash1(a1+b1));
2ND ONE
const { createHash } = require('crypto');
const hash2 = (data) => createHash('sha256').update(data).digest();
var a2 = hash2("A");
var b2 = hash2("B");
console.log(hash2(Buffer.concat([a2,b2])).toString('hex'));
Why do they print different results?
digest('hex') and digest() are the same digest, just in different formats, so why do I get different results in the console? Is it the + operator when I concatenate the hex strings versus when I concatenate the buffers? Why?

When you call hash.digest() with no argument, you get back a Buffer containing the 32 raw bytes of the SHA-256 digest.
When you specify hex as the encoding, each of those bytes is instead rendered as exactly 2 hexadecimal characters, so you get a 64-character string.
Calling .toString('hex') on the Buffer returned by digest() produces the same hex string as calling digest('hex') in the first place.
So, even though the hex representation is the same in each case, the data you feed into the outer hash is different: a1+b1 is a 128-character string, while Buffer.concat([a2,b2]) is 64 raw bytes, and hashing different input bytes gives different output. i.e.:
hash.digest() != hash.digest('hex'), but
hash.digest().toString('hex') == hash.digest('hex').
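A quick way to see this in Node (a minimal sketch; it just hashes the "A" from the question):
const { createHash } = require('crypto');
const raw = createHash('sha256').update('A').digest();       // Buffer of 32 raw bytes
const hex = createHash('sha256').update('A').digest('hex');  // string of 64 hex characters
console.log(raw.toString('hex') === hex);  // true - same digest, different representation
console.log(raw.length, hex.length);       // 32 64 - so feeding them back in means hashing different data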

digest('hex') and digest() are technically different.
Please try this code
var crypto = require('crypto');
const hash1 = (data) => crypto.createHash('sha256').update(data).digest('hex');
var a1 = hash1("A");
var b1 = hash1("B");
//console.log(a1)
//console.log(b1)
console.log(hash1(a1+b1));
const hash2 = (data) => crypto.createHash('sha256').update(data).digest();
var a2 = hash2("A");
var b2 = hash2("B");
//console.log(a2.toString("hex"))
//console.log(b2.toString("hex"))
console.log(hash2(a2.toString("hex") + b2.toString("hex") ).toString("hex"));
In the first snippet you are concatenating two hex strings and passing them to hash1.
In the second snippet from the question you are concatenating two raw Buffers and passing them to hash2, so the bytes being hashed differ. The code above converts the Buffers to hex strings first, which makes both print the same value.
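Conversely, a minimal sketch of the same fix in the other direction: turn the hex strings back into raw bytes before hashing, and the first approach matches the Buffer-based one from the question.
var a1buf = Buffer.from(a1, 'hex');  // a1 and b1 are the hex digests from hash1 above
var b1buf = Buffer.from(b1, 'hex');
console.log(crypto.createHash('sha256').update(Buffer.concat([a1buf, b1buf])).digest('hex'));
// prints the same value as hash2(Buffer.concat([a2, b2])).toString('hex') in the question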

Related

Converting time to hexadecimal then to string using JavaScript

I am trying to convert the present time to hexadecimal and then to a regular string variable.
For some reason I can only seem to produce output in double quotes, such as "result", or an object. I am using id tags to identify each div, which contain different messages. They are used like this: id="somename-hexnumber". The code is sent from the browser to a Node.js server, and the id is split into two parts: the first section is the person's name, "-" is the split key, and the hexadecimal part is just the div number, so it is easy to find and delete if needed. The code I have so far is small, but I am out of ideas now.
var thisRandom = Date.now();
const encodedString = thisRandom.toString(16);
var encoded = JSON.stringify(encodedString);
var tIDs = json.name+'-'+encoded;
var output = $('<div class="container" id="'+tIDs+'" onclick="DelComment(this.id, urank)"><span class="block"><div class="block-text"><p><strong><'+json.name+'></strong> '+json.data+'</p></div></span></div>');
When a hexadecimal number is produced, I want the output to be something like 16FE67A334 and not "16FE67A334" or an object.
Do you want this?
Demo: https://codepen.io/gmkhussain/pen/QWEdOBW
The code below will convert the time/number value x to hexadecimal.
var thisRandom = Date.now();
function timeToHexFunc(x) {
  if (x < 0) {
    x = 0xFFFFFFFF + x + 1;
  }
  return x.toString(16).toUpperCase();
}
console.log(timeToHexFunc(thisRandom));
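As for the quotes in the question: thisRandom.toString(16) already returns a plain string, and the surrounding double quotes come from JSON.stringify, so you can drop that step and use the value directly (a minimal sketch; json.name comes from the question's code):
var thisRandom = Date.now();
var encoded = timeToHexFunc(thisRandom);  // an uppercase hex string such as 16FE67A334, no quotes
var tIDs = json.name + '-' + encoded;     // no JSON.stringify needed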

How to deal with big numeric values from response in javascript?

I have to read a JSON response which contains values greater than MAX_SAFE_INTEGER, without loss of precision. For example, I have the value
value1 = 232333433534534634534411
and I have to process that value and convert it to
value2 = +232333433534534634534411.0000
without any loss of precision?
This can't be done with standard JSON.parse:
JSON.parse('{"prop": 232333433534534634534411}')
// {prop: 2.3233343353453462e+23}
If you have control over the API producing the JSON
You can use a string instead, then create a BigInt out of it (assuming legacy browser support isn't required).
const result = JSON.parse('{"prop": "232333433534534634534411"}')
const int = BigInt(result.prop)
// 232333433534534634534411n
If you need to perform decimal arithmetic on this result with a precision of 4 decimal places, you can multiply it by 10000n, for example:
const tenThousandths = int * 10000n // 2323334335345346345344110000n
const sum = tenThousandths + 55000n + 21n * 2n // 2323334335345346345344165042n
const fractionalPart = sum % 10000n // 5042n
const wholePart = sum / 10000n // 232333433534534634534416n (floor division)
const reStringified = `${wholePart}.${fractionalPart}` // "232333433534534634534416.5042"
const newJson = JSON.stringify({prop: reStringified})
// '{"prop":"232333433534534634534416.5042"}'
For legacy browser support, you could do similar with a library such as BigInteger.js.
If you don't control the API
In this case, you'll need a custom JSON parsing library, such as lossless JSON.
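A minimal sketch of that route, assuming the lossless-json package's parse() returns large numbers as LosslessNumber objects whose toString() preserves the original digits (check the library's README for the exact API):
var LosslessJSON = require('lossless-json');  // assumed require-style usage of the lossless-json package
var result = LosslessJSON.parse('{"prop": 232333433534534634534411}');
var int = BigInt(result.prop.toString());     // 232333433534534634534411n, no precision lost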
You could split by dot and treat the digits after the dot.
const convert = string => {
  const values = string.split('.');
  values[1] = (values[1] || '').padEnd(4, '0').slice(0, 4);
  return values.join('.');
};
console.log(convert('232333433534534634534411'));
console.log(convert('232333433534534634534411.12345'));

Working with memory to fetch string yields incorrect result

I am following the solutions from here:
How can I return a JavaScript string from a WebAssembly function
and here:
How to return a string (or similar) from Rust in WebAssembly?
However, when reading from memory I am not getting the desired results.
AssemblyScript file, helloWorldModule.ts:
export function getMessageLocation(): string {
return "Hello World";
}
index.html:
<script>
  fetch("helloWorldModule.wasm").then(response =>
    response.arrayBuffer()
  ).then(bytes =>
    WebAssembly.instantiate(bytes, {imports: {}})
  ).then(results => {
    var linearMemory = results.instance.exports.memory;
    var offset = results.instance.exports.getMessageLocation();
    var stringBuffer = new Uint8Array(linearMemory.buffer, offset, 11);
    let str = '';
    for (let i = 0; i < stringBuffer.length; i++) {
      str += String.fromCharCode(stringBuffer[i]);
    }
    debugger;
  });
</script>
This returns an offset of 32, and finally yields a string that starts too early and has spaces between each letter of "Hello World".
However, if I change the array to an Int16Array and add 8 to the offset (which was 32), making an offset of 40, like so:
<script>
  fetch("helloWorldModule.wasm").then(response =>
    response.arrayBuffer()
  ).then(bytes =>
    WebAssembly.instantiate(bytes, {imports: {}})
  ).then(results => {
    var linearMemory = results.instance.exports.memory;
    var offset = results.instance.exports.getMessageLocation();
    var stringBuffer = new Int16Array(linearMemory.buffer, offset+8, 11);
    let str = '';
    for (let i = 0; i < stringBuffer.length; i++) {
      str += String.fromCharCode(stringBuffer[i]);
    }
    debugger;
  });
</script>
Then we get the correct result.
Why does the first set of code not work like it's supposed to in the links I provided? Why do I need to change it to an Int16Array to get rid of the space between "H" and "e", for example? Why do I need to add 8 bytes to the offset?
In summary, what on earth is going on here?
Edit: another clue is that if I use a TextDecoder on the Uint8Array, decoding it as UTF-16 looks more correct than decoding it as UTF-8.
AssemblyScript uses UTF-16: https://github.com/AssemblyScript/assemblyscript/issues/43
Additionally, AssemblyScript stores the length of the string in the first 32 or 64 bits.
That's why my code behaves differently. The examples in the links at the top of this post were for C++ and Rust, which do string encoding differently.
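Putting that together, a minimal sketch of reading the string under that layout. The 32-bit length header and the 4-byte data offset are assumptions; as noted above, the exact header size differs between AssemblyScript versions (the snippet in the question needed offset + 8):
var linearMemory = results.instance.exports.memory;
var ptr = results.instance.exports.getMessageLocation();
// assumed layout: a 32-bit length (in UTF-16 code units), followed by the character data
var length = new Uint32Array(linearMemory.buffer, ptr, 1)[0];
var codeUnits = new Uint16Array(linearMemory.buffer, ptr + 4, length);
var str = String.fromCharCode.apply(null, codeUnits);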

CryptoJS splitting word array in two

I'm not too handy with byte conversions, so I want to make sure I'm not doing anything dangerous.
I'm simply generating a 512 bit key using CryptoJS pbkdf2.
I then want to split this key in half to generate two 256 bit keys.
generateKeyPair = function(input, salt) {
  var output = CryptoJS.PBKDF2(input, salt, { keySize: 512/32 });
  var firstHalf = _.clone(output);
  var secondHalf = _.clone(output);
  var sigBytes = output.sigBytes/2;
  firstHalf.words = output.words.slice(0, 10);
  secondHalf.words = output.words.slice(10, 20);
  firstHalf.sigBytes = sigBytes;
  secondHalf.sigBytes = sigBytes;
  return [firstHalf.toString(), secondHalf.toString()];
}
The output I get for generateKeyPair("hello", "world") is:
["798ef2617367d80daeacf8b457af7903eebf6d1f384c9fed762b14186036e912",
"0a9782aa773bdafcd9cd259e95381ac9ab26d026fe6a3375a93dc6b2a69e7ac3"]
The underscore here is from lodash. Does this look right?
Your solution seems fine. I recently solved this problem just by splitting the hex string in half. My example takes a key in WordArray format and returns each half in WordArray format.
function splitKey(key) {
  const keyString = key.toString()
  const firstHalf = keyString.slice(0, keyString.length/2)
  const secondHalf = keyString.slice(keyString.length/2, keyString.length)
  return [CryptoJS.enc.Hex.parse(firstHalf), CryptoJS.enc.Hex.parse(secondHalf)]
}
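A minimal usage sketch with the parameters from the question:
var output = CryptoJS.PBKDF2("hello", "world", { keySize: 512/32 });
var halves = splitKey(output);
console.log(halves[0].toString(), halves[1].toString());  // two 64-character hex strings, 256 bits each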

JavaScript: reading 3 bytes Buffer as an integer

Let's say I have a hex data stream, which I want to divide into 3-byte blocks that I need to read as integers.
For example: given a hex string 01be638119704d4b9a I need to read the first three bytes 01be63 and read it as integer 114275. This is what I got:
var sample = '01be638119704d4b9a';
var buffer = new Buffer(sample, 'hex');
var bufferChunk = buffer.slice(0, 3);
var decimal = bufferChunk.readUInt32BE(0);
readUInt32BE works perfectly for 4-byte data, but here I obviously get:
RangeError: index out of range
at checkOffset (buffer.js:494:11)
at Buffer.readUInt32BE (buffer.js:568:5)
How do I read 3 bytes as an integer correctly?
If you are using node.js v0.12+ or io.js, there is buffer.readUIntBE() which allows a variable number of bytes:
var decimal = buffer.readUIntBE(0, 3);
(Note that it's readUIntBE for Big Endian and readUIntLE for Little Endian).
Otherwise if you're on an older version of node, you will have to do it manually (check bounds first of course):
var decimal = (buffer[0] << 16) + (buffer[1] << 8) + buffer[2];
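To walk the whole stream in 3-byte blocks, a minimal sketch using readUIntBE (it assumes the input length is a multiple of 3; Buffer.from is the modern replacement for new Buffer):
var sample = '01be638119704d4b9a';
var buffer = Buffer.from(sample, 'hex');
var values = [];
for (var i = 0; i + 3 <= buffer.length; i += 3) {
  values.push(buffer.readUIntBE(i, 3));  // read 3 bytes, big-endian
}
console.log(values);  // [ 114275, 8460656, 5065626 ]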
I'm using this; if someone knows of something wrong with it, please advise:
const integer = parseInt(buffer.toString("hex"), 16)
You should convert the three bytes to four bytes:
function three(sample) {
  var buffer = Buffer.from(sample, 'hex');  // Buffer.from replaces the deprecated new Buffer
  var buf = Buffer.alloc(1);                // a single zero byte to prepend
  return Buffer.concat([buf, buffer.slice(0, 3)]).readUInt32BE(0);
}
You can try this function.
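For example, with the sample string from the question:
console.log(three('01be638119704d4b9a'));  // 114275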
