How do you read and understand this bitshifting code? - javascript

So I found some "bit buffer" code in JavaScript which should help me out on my journey to, well, write a bit buffer (only I'm doing it in a different language).
I took the key parts of the code and pasted it here (for LittleEndian only):
function LittleEndianView(size) {
  Object.defineProperty(this, 'native', {
    value: new Uint8Array(size)
  })
}

LittleEndianView.prototype.get = function(bits, offset) {
  let available = (this.native.length * 8 - offset)
  if (bits > available) {
    throw new Error('Range error')
  }
  let value = 0
  let i = 0
  // why loop through like this?
  while (i < bits) {
    // remaining bits
    const remaining = bits - i
    const bitOffset = offset & 7
    const currentByte = this.native[offset >> 3]
    const read = Math.min(remaining, 8 - bitOffset)
    const a = 0xFF << read
    const mask = ~a
    const b = currentByte >> bitOffset
    const readBits = b & mask
    const c = readBits << i
    value = value | c
    offset += read
    i += read
  }
  return value >>> 0
}
LittleEndianView.prototype.set = function(value, bits, offset) {
  const available = (this.native.length * 8 - offset)
  if (bits > available) {
    throw new Error('Range error')
  }
  let i = 0
  while (i < bits) {
    const remaining = bits - i
    const bitOffset = offset & 7
    const byteOffset = offset >> 3
    const finished = Math.min(remaining, 8 - bitOffset)
    const mask = ~(0xFF << finished)
    const writeBits = value & mask
    value >>= finished
    const destMask = ~(mask << bitOffset)
    const byte = this.native[byteOffset]
    this.native[byteOffset] = (byte & destMask) | (writeBits << bitOffset)
    offset += finished
    i += finished
  }
}
I would like help writing the comments, i.e. what each piece means. It seems there are a lot of bit manipulation tricks that I am not aware of. For example, if I search Google for "bitshift right" or "bitshift left", I get the obvious answer of shifting right or left. But why did they do the bitshift right there, with that number? Why did they negate that number? Why did they & or | that number?
I am trying to go through this code line by line and comment it, but I am struggling to write the comments because I don't know why they apply the bitshift operations when they do.
So my main question is, how do you go about reading this bitshifting code and know why the bitshift is performed? I am going to want to be reading how md5 and other way more complex hash algorithms and image processing algorithms and other "bit" algorithms work, but first I think I need to know when to apply the bitshift operators, because just knowing how they work doesn't help much.
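To make it concrete, here is what I get when I plug small made-up numbers into one iteration of the get loop (the values and comments are mine, not from the original code):
const currentByte = 0b10110110     // the byte being read
const bitOffset = 3                // the 3 lowest bits were consumed earlier
const read = 4                     // take 4 more bits this iteration

const a = 0xFF << read             // ...111111110000: zeros in the low `read` positions
const mask = ~a                    // ones in the low `read` positions
const b = currentByte >> bitOffset // 0b10110: the already-consumed bits are shifted away
const readBits = b & mask          // 0b0110: only the `read` new bits survive the &
console.log(readBits)              // 6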
As a deeper question, perhaps this could be answered here... If you are given bit manipulation code without comments or good variable names, how do you figure out what it's doing?
Sidenote, is there a book or something that lists all the common bit tricks? I have seen this and others, but they are all very short.

Related

Best approach to solve error calc with large integers without any library

I participated in a programming contest last week. I used JavaScript to solve the problems, but I ran into an error when working with big integers. First, this is the code:
const solve03 = (n) => {
  n++;
  const times = Math.floor(n / 4);
  return n - 2 * times;
};
console.log(solve03(87123641123172368));
console.log(solve03(81239812739128371));
The outputs with JS are:
43561820561586184
40619906369564184
I tested the same code with Python (which supports large integers):
def solve03(n):
    n += 1
    times = n // 4
    return n - 2 * times
print(solve03(87123641123172368))
print(solve03(81239812739128371))
And the outputs are:
43561820561586185
40619906369564186
I need a way to rewrite the code in JS to fix the calculation error. Also, I know there are many libraries that support big-integer operations, but the contest doesn't allow them.
Check out this snippet using BigInt! It will handle the large integers well 👍
const $ = str => document.querySelector(str);

$("input").addEventListener("keyup", e => {
  let aBigInt = BigInt(e.target.value);
  aBigInt++;
  const times = aBigInt / BigInt(4); //always returns floored
  const result = aBigInt - BigInt(2) * times;
  $("div").innerText = result;
});
<input type="number">
<div></div>
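For a contest setting without the DOM, the same idea applied directly to solve03 might look like this (just a sketch of the snippet above; the inputs must be BigInt literals or strings, since plain number literals have already lost precision by the time BigInt sees them):
const solve03 = (n) => { // n is expected to be a BigInt
  n++;
  const times = n / 4n;  // BigInt division truncates, which acts as floor for positive values
  return n - 2n * times;
};

console.log(solve03(87123641123172368n)); // 43561820561586185n
console.log(solve03(81239812739128371n)); // 40619906369564186n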

How to store a 32-bit integer in an arraybuffer?

I seem to not be understanding the Uint32Array. According to what I've read about the Uint8Array I could just feed it a number inside an array (Uint8Array([16]) or Uint8Array([96,56])) and the results are exactly that. However, when I try the same thing for a larger number like Uint32Array([21640]), it seems to truncate it. Where 21640 should equal 5488 in hex, I only get 88. How does this actually work?
Edit: Elaborations
I am also attempting to concatenate several ArrayBuffers together. If I'm not mistaken readAsArrayBuffer produces an Uint8Array, and I am trying to append to that some 32-bit numbers using https://gist.github.com/72lions/4528834
There is so much information and examples on Uint8Array and what little there was on Uint32Array makes me think that one of these 32 would store a value as if it was 4 of the 8.
The largest value of an unsigned 8-bit number is 255. Larger numbers are truncated; in a Uint8Array they wrap around modulo 256. If you want to store a 32-bit number in an 8-bit array, try something like this.
var number = 21640;
var byte1 = 0xff & number;
var byte2 = 0xff & (number >> 8);
var byte3 = 0xff & (number >> 16);
var byte4 = 0xff & (number >> 24);
var arr1 = new Uint8Array([byte1, byte2, byte3, byte4]);
Just reverse the order of the bytes when you create the array depending on if you want little or big endian.
Here is a working example showing 5488 in console
var bigNumber = new Uint32Array([21640]);
console.log(bigNumber[0].toString(16));
Since you've added more to the question: if you want to convert the bytes back into a 32-bit value,
var byte1 = 0x88;
var byte2 = 0x54;
var byte3 = 0;
var byte4 = 0;
var bigValue = (byte4 << 24) | (byte3 << 16) | (byte2 << 8) | (byte1);
console.log(bigValue);
Although you will need to factor in endianness.
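If it helps, a DataView makes the byte order explicit rather than depending on the platform layout of a Uint32Array; here is a small sketch using the same 21640 example (variable names are mine):
const buffer = new ArrayBuffer(4);
const view = new DataView(buffer);

// write 21640 (0x5488) with the little-endian flag set: the bytes become 88 54 00 00
view.setUint32(0, 21640, true);
console.log(new Uint8Array(buffer)); // Uint8Array [ 136, 84, 0, 0 ]

// read it back, again specifying little-endian
console.log(view.getUint32(0, true).toString(16)); // "5488"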

Given an ArrayBuffer of known size, get string of hexadecimal pairs [duplicate]

I've got a Javascript ArrayBuffer that I would like to be converted into a hex string.
Anyone knows of a function that I can call or a pre written function already out there?
I have only been able to find arraybuffer to string functions, but I want the hexdump of the array buffer instead.
function buf2hex(buffer) { // buffer is an ArrayBuffer
  return [...new Uint8Array(buffer)]
      .map(x => x.toString(16).padStart(2, '0'))
      .join('');
}

// EXAMPLE:
const buffer = new Uint8Array([ 4, 8, 12, 16 ]).buffer;
console.log(buf2hex(buffer)); // = 04080c10
This function works in four steps:
Converts the buffer into an array.
For each x in the array, it converts that element to a hex string (e.g., 12 becomes c).
Then it takes that hex string and left pads it with zeros (e.g., c becomes 0c).
Finally, it takes all of the hex values and joins them into a single string.
Below is another longer implementation that is a little easier to understand, but essentially does the same thing:
function buf2hex(buffer) { // buffer is an ArrayBuffer
  // create a byte array (Uint8Array) that we can use to read the array buffer
  const byteArray = new Uint8Array(buffer);

  // for each element, we want to get its two-digit hexadecimal representation
  const hexParts = [];
  for (let i = 0; i < byteArray.length; i++) {
    // convert value to hexadecimal
    const hex = byteArray[i].toString(16);

    // pad with zeros to length 2
    const paddedHex = ('00' + hex).slice(-2);

    // push to array
    hexParts.push(paddedHex);
  }

  // join all the hex values of the elements into a single string
  return hexParts.join('');
}

// EXAMPLE:
const buffer = new Uint8Array([ 4, 8, 12, 16 ]).buffer;
console.log(buf2hex(buffer)); // = 04080c10
Here is a sweet ES6 solution, using padStart and avoiding the quite confusing prototype-call-based solution of the accepted answer. It is actually faster as well.
function bufferToHex (buffer) {
  return [...new Uint8Array (buffer)]
      .map (b => b.toString (16).padStart (2, "0"))
      .join ("");
}
How this works:
An Array is created from a Uint8Array holding the buffer data. This is so we can modify the array to hold string values later.
All the Array items are mapped to their hex codes and padded with 0 characters.
The array is joined into a full string.
Here are several methods for encoding an ArrayBuffer to hex, in order of speed. All methods were tested in Firefox initially, but afterwards I went and tested in Chrome (V8). In Chrome the methods were mostly in the same order, but there were slight differences; the important thing is that #1 is the fastest method in all environments by a huge margin.
If you want to see how slow the currently selected answer is, you can go ahead and scroll to the bottom of this list.
TL;DR
Method #1 (just below this) is the fastest method I tested for encoding to a hex string. If, for some very good reason, you need to support IE, you may need to replace the .padStart call with the .slice trick used in method #6 when precomputing the hex octets to make sure every octet is 2 characters.
1. Precomputed Hex Octets w/ for Loop (Fastest/Baseline)
This approach computes the 2-character hex octets for every possible value of an unsigned byte: [0, 255], and then just maps each value in the ArrayBuffer through the array of octet strings. Credit to Aaron Watters for the original answer using this method.
NOTE: as mentioned by Cref, you may get a performance boost in V8 (Chromium/Chrome/Edge/Brave/etc.) by using the loop to just concatenate hex octets into one big string as you go and then returning the string after the loop. V8 seems to optimize string concatenation very well, while Firefox performed better with building up an array and then .joining it into a string at the end, as I did in the code below. That would probably be a micro-optimization subject to change with the whims of optimizing JS compilers, though.
const byteToHex = [];
for (let n = 0; n <= 0xff; ++n)
{
  const hexOctet = n.toString(16).padStart(2, "0");
  byteToHex.push(hexOctet);
}

function hex(arrayBuffer)
{
  const buff = new Uint8Array(arrayBuffer);
  const hexOctets = []; // new Array(buff.length) is even faster (preallocates necessary array size), then use hexOctets[i] instead of .push()
  for (let i = 0; i < buff.length; ++i)
    hexOctets.push(byteToHex[buff[i]]);
  return hexOctets.join("");
}
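As an aside, if you need the IE-friendly variant mentioned in the TL;DR, the precompute loop can use the .slice trick instead of padStart (a sketch of that substitution, not part of the measured code):
const byteToHex = [];
for (let n = 0; n <= 0xff; ++n)
{
  // same ("0" + ...).slice(-2) trick as method #6, but only run 256 times at startup
  byteToHex.push(("0" + n.toString(16)).slice(-2));
}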
2. Precomputed Hex Octets w/ Array.map (~30% slower)
Same as the above method, where we precompute an array in which the value for each index is the hex string for the index's value, but we use a hack where we call the Array prototype's map() method with the buffer. This is a more functional approach, but if you really want speed you will always use for loops rather than ES6 array methods, as all modern JS engines optimize them much better.
IMPORTANT: You cannot use new Uint8Array(arrayBuffer).map(...). Although Uint8Array implements the ArrayLike interface, its map method will return another Uint8Array which cannot contain strings (hex octets in our case), hence the Array prototype hack.
function hex(arrayBuffer)
{
  return Array.prototype.map.call(
      new Uint8Array(arrayBuffer),
      n => byteToHex[n]
  ).join("");
}
3. Precomputed ASCII Character Codes (~230% slower)
Well this was a disappointing experiment. I wrote up this function because I thought it would be even faster than Aaron's precomputed hex octets--boy was I wrong LOL. While Aaron maps entire bytes to their corresponding 2-character hex codes, this solution uses bitshifting to get the hex character for the first 4 bits in each byte and then the one for the last 4 and uses String.fromCharCode(). Honestly I think String.fromCharCode() must just be poorly optimized, since it is not used by very many people and is low on browser vendors' lists of priorities.
const asciiCodes = new Uint8Array(
    Array.prototype.map.call(
        "0123456789abcdef",
        char => char.charCodeAt()
    )
);

function hex(arrayBuffer)
{
  const buff = new Uint8Array(arrayBuffer);
  const charCodes = new Uint8Array(buff.length * 2);
  for (let i = 0; i < buff.length; ++i)
  {
    charCodes[i * 2] = asciiCodes[buff[i] >>> 4];
    charCodes[i * 2 + 1] = asciiCodes[buff[i] & 0xf];
  }
  return String.fromCharCode(...charCodes);
}
4. Array.prototype.map() w/ padStart() (~290% slower)
This method maps an array of bytes using the Number.toString() method to get the hex and then pads the octet with a "0" if necessary via the String.padStart() method.
IMPORTANT: String.padStart() is a relatively new standard, so you should not use this or method #5 if you are planning on supporting browsers older than 2017 or so, or Internet Explorer. TBH if your users are still using IE you should probably just go to their houses at this point and install Chrome/Firefox. Do us all a favor. :^D
function hex(arrayBuffer)
{
  return Array.prototype.map.call(
      new Uint8Array(arrayBuffer),
      n => n.toString(16).padStart(2, "0")
  ).join("");
}
5. Array.from().map() w/ padStart() (~370% slower)
This is the same as #4 but instead of the Array prototype hack, we create an actual number array from the Uint8Array and call map() on that directly. We pay in speed though.
function hex(arrayBuffer)
{
  return Array.from(new Uint8Array(arrayBuffer))
      .map(n => n.toString(16).padStart(2, "0"))
      .join("");
}
6. Array.prototype.map() w/ slice() (~450% slower)
This is the selected answer, do not use this unless you are a typical web developer and performance makes you uneasy (answer #1 is supported by just as many browsers).
function hex(arrayBuffer)
{
  return Array.prototype.map.call(
      new Uint8Array(arrayBuffer),
      n => ("0" + n.toString(16)).slice(-2)
  ).join("");
}
Lesson #1
Precomputing stuff can be a very effective memory-for-speed tradeoff sometimes. In theory, the array of precomputed hex octets can be stored in just 1024 bytes (256 possible byte values ⨉ 2 characters/value ⨉ 2 bytes/character for the UTF-16 string representation used by most/all browsers), which is nothing in a modern computer. Realistically there are some more bytes in there used for storing the array and string lengths and maybe type information since this is JavaScript, but the memory usage is still negligible for a massive performance improvement.
Lesson #2
Help out the optimizing compiler. The browser's JavaScript compiler regularly attempts to understand your code and break it down to the fastest possible machine code for your CPU to execute. Because JavaScript is a very dynamic language, this can be hard to do and sometimes the browser just gives up and leaves all sorts of type checks and worse under-the-hood because it can't be sure that x will indeed be a string or number, and vice versa. Using modern functional programming additions like the .map method of the built-in Array class can cause headaches for the browser because callback functions can capture outside variables and do all sorts of other things that often hurt performance. For-loops are well-studied and relatively simple constructs, so the browser developers have incorporated all sorts of tricks for the compiler to optimize your JavaScript for-loops. Keep it simple.
Here is another solution which is, on Chrome (and probably node too) about 3x faster than the other suggestions using map and toString:
function bufferToHex(buffer) {
  var s = '', h = '0123456789ABCDEF';
  (new Uint8Array(buffer)).forEach((v) => { s += h[v >> 4] + h[v & 15]; });
  return s;
}
Additional bonus: you can easily choose uppercase/lowercase output.
See bench here: http://jsben.ch/Vjx2V
The simplest way to convert an ArrayBuffer to hex (in Node.js, where Buffer is available):
const buffer = new Uint8Array([ 4, 8, 12, 16 ]);
console.log(Buffer.from(buffer).toString("hex")); // = 04080c10
uint8array.reduce((a, b) => a + b.toString(16).padStart(2, '0'), '')
Surprisingly, it is important to use reduce instead of map. This is because map on typed arrays returns another typed array, so each mapped hex string would be coerced back into a uint8 instead of being kept as a string.
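To illustrate why (a quick demo added here, not part of the original answer):
const bytes = new Uint8Array([4, 8, 12, 16]);

// map() on a typed array yields another Uint8Array, so the hex strings are
// coerced back to numbers ("0c" even becomes NaN and is stored as 0):
console.log(bytes.map(b => b.toString(16).padStart(2, '0'))); // Uint8Array [ 4, 8, 0, 10 ]

// reduce() builds an ordinary string instead:
console.log(bytes.reduce((a, b) => a + b.toString(16).padStart(2, '0'), '')); // "04080c10"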
The following solution uses precomputed lookup tables for both forward and backward conversion.
// look up tables
var to_hex_array = [];
var to_byte_map = {};
for (var ord = 0; ord <= 0xff; ord++) {
  var s = ord.toString(16);
  if (s.length < 2) {
    s = "0" + s;
  }
  to_hex_array.push(s);
  to_byte_map[s] = ord;
}

// converter using lookups
function bufferToHex2(buffer) {
  var hex_array = [];
  //(new Uint8Array(buffer)).forEach((v) => { hex_array.push(to_hex_array[v]) });
  for (var i = 0; i < buffer.length; i++) {
    hex_array.push(to_hex_array[buffer[i]]);
  }
  return hex_array.join('')
}

// reverse conversion using lookups
function hexToBuffer(s) {
  var length2 = s.length;
  if ((length2 % 2) != 0) {
    throw "hex string must have length a multiple of 2";
  }
  var length = length2 / 2;
  var result = new Uint8Array(length);
  for (var i = 0; i < length; i++) {
    var i2 = i * 2;
    var b = s.substring(i2, i2 + 2);
    result[i] = to_byte_map[b];
  }
  return result;
}
This solution is faster than the winner of the previous benchmark:
http://jsben.ch/owCk5 tested in both Chrome and Firefox on a Mac laptop. Also see the benchmark code for a test validation function.
[edit: I changed the forEach to a for loop and now it's even faster.]
This one's inspired by Sam Claus' #1 which is indeed the fastest method on here. Still, I've found that using plain string concatenation instead of using an array as a string buffer is even faster! At least it is on Chrome. (which is V8 which is almost every browser these days and NodeJS)
const len = 0x100, byteToHex = new Array(len), char = String.fromCharCode;
let n = 0;
for (; n < 0x0a; ++n) byteToHex[n] = '0' + n;
for (; n < 0x10; ++n) byteToHex[n] = '0' + char(n + 87);
for (; n < len; ++n) byteToHex[n] = n.toString(16);

function byteArrayToHex(byteArray) {
  const l = byteArray.length;
  let hex = '';
  for (let i = 0; i < l; ++i) hex += byteToHex[byteArray[i] % len];
  return hex;
}

function bufferToHex(arrayBuffer) {
  return byteArrayToHex(new Uint8Array(arrayBuffer));
}
I use this to hexdump ArrayBuffers the same way that Node dumps Buffers.
function pad(n: string, width: number, z = '0') {
  return n.length >= width ? n : new Array(width - n.length + 1).join(z) + n;
}

function hexdump(buf: ArrayBuffer) {
  let view = new Uint8Array(buf);
  let hex = Array.from(view).map(v => pad(v.toString(16), 2));
  return `<Buffer ${hex.join(" ")}>`;
}
Example (with transpiled js version):
const buffer = new Uint8Array([ 4, 8, 12, 16 ]).buffer;
console.log(hexdump(buffer)); // <Buffer 04 08 0c 10>
In Node, we can use Buffer.from(uint8array).toString("hex").
If you found this and need to encode/decode even faster, and potentially with less memory, than the answers already provided, then this might work for you.
It leverages TextEncoder, which is present in every browser (https://caniuse.com/textencoder) and in Node.js, to build the resulting hex string and to get the hex char codes.
In Node.js you should use the option already provided, like so:
function nodeEncode(arr: Uint8Array) {
  return Buffer.from(arr).toString('hex');
}

function nodeDecode(hexString: string) {
  return Uint8Array.from(Buffer.from(hexString, 'hex'));
}
But in the browser environment you can use the TextEncoder like so:
const nibbleIntegerToHexCharCode = new TextEncoder().encode("0123456789abcdef");

function uint8ArrayToHexString(input: Uint8Array) {
  const output = new Uint8Array(input.length * 2);
  for (let i = 0; i < input.length; i++) {
    const v = input[i];
    output[i * 2 + 0] = nibbleIntegerToHexCharCode[(v & 0xf0) >> 4];
    output[i * 2 + 1] = nibbleIntegerToHexCharCode[(v & 0x0f)];
  }
  return new TextDecoder().decode(output);
}

const charCodeToNibbleInteger = new Uint8Array(0xff + 1);
for (let i = 0; i < charCodeToNibbleInteger.length; i++)
  charCodeToNibbleInteger[i] = nibbleIntegerToHexCharCode.findIndex(v => v == i);

function hexStringToUInt8Array(input: string) {
  const encodedInput = new TextEncoder().encode(input);
  const output = new Uint8Array(encodedInput.length / 2);
  for (let i = 0; i < output.length; i++) {
    const upper = charCodeToNibbleInteger[encodedInput[i * 2 + 0]] << 4;
    const lower = charCodeToNibbleInteger[encodedInput[i * 2 + 1]];
    output[i] = upper + lower;
  }
  return output;
}
The outputs of the hex function, the nodeEncode function, and the uint8ArrayToHexString function are identical, but the time to compute them differs.
For a 22 MB Uint8Array:
nodeEncode takes 70 ms
uint8ArrayToHexString takes 280 ms
hex takes 1400 ms
There might also be a stark difference in the amount of memory used.

Implementing the LLL algorithm as described on Wikipedia, but getting into serious issues

I am not sure whether my issue is related to programming or to the concept of the LLL algorithm as described on Wikipedia.
I decided to implement the LLL algorithm as it is written on Wikipedia (step by step, line by line) to actually learn the algorithm and make sure it truly works, but I am getting unexpected or invalid results.
So, I used JavaScript and Node.js to implement it, and this is the git repository with the complete code.
Long story short, the value of k goes out of range; for example, when we have only 3 vectors (the array size is 3, so the maximum index would be 2), k becomes 3, which is nonsense.
My code is a step-by-step (line-by-line) implementation of the algorithm described on Wikipedia, and all I did was implement it. So I don't know what the issue is.
// ** important
// {b} set of vectors are denoted by this.matrix_before
// {b*} set of vectors are denoted by this.matrix_after
calculate_LLL() {
  this.matrix_after = new gs(this.matrix_before, false).matrix; // initialize after vectors: perform Gram-Schmidt, but do not normalize
  var flag = false; // invariant
  var k = 1;
  while (k <= this.dimensions && !flag) {
    for (var j = k - 1; j >= 0; j--) {
      if (Math.abs(this.mu(k, j)) > 0.5) {
        var to_subtract = tools.multiply(Math.round(this.mu(k, j)), this.matrix_before[j], this.dimensions);
        this.matrix_before[k] = tools.subtract(this.matrix_before[k], to_subtract, this.dimensions);
        this.matrix_after = new gs(this.matrix_before, false).matrix; // update after vectors: perform Gram-Schmidt, but do not normalize
      }
    }
    if (tools.dot_product(this.matrix_after[k], this.matrix_after[k], this.dimensions) >= (this.delta - Math.pow(this.mu(k, k - 1), 2)) * tools.dot_product(this.matrix_after[k - 1], this.matrix_after[k - 1], this.dimensions)) {
      if (k + 1 >= this.dimensions) { // invariant: there is some issue, something is wrong
        flag = true; // invariant is broken
        console.log("something bad happened ! (1)");
      }
      k++;
      // console.log("if; k, j");
      // console.log(k + ", " + j);
    } else {
      var temp_matrix = this.matrix_before[k];
      this.matrix_before[k] = this.matrix_before[k - 1];
      this.matrix_before[k - 1] = temp_matrix;
      this.matrix_after = new gs(this.matrix_before, false).matrix; // update after vectors: perform Gram-Schmidt, but do not normalize
      if (k === Math.max(k - 1, 1) || k >= this.dimensions || Math.max(k - 1, 1) >= this.dimensions) { // invariant: there is some issue, something is wrong
        flag = true; // invariant is broken
        console.log("something bad happened ! (2)");
      }
      k = Math.max(k - 1, 1);
      // console.log("else; k, j");
      // console.log(k + ", " + j);
    }
    console.log(this.matrix_before);
    console.log("\n");
  } // I added this flag variable to prevent getting exceptions and terminate the loop gracefully
  console.log("final: ");
  console.log(this.matrix_before);
}

// mu is calculated as described on Wikipedia
// mu(i, j) = <b_i, b*_j> / <b*_j, b*_j>
mu(i, j) {
  var top = tools.dot_product(this.matrix_before[i], this.matrix_after[j], this.dimensions);
  var bottom = tools.dot_product(this.matrix_after[j], this.matrix_after[j], this.dimensions);
  return top / bottom;
}
Here is the screenshot of the algorithm that is on Wikipedia:
Update #1: I added more comments to the code to clarify the question hoping that someone would help.
Just in case you are wondering about an already available implementation, you can type LatticeReduce[{{0,1},{2,0}}] into Wolfram Alpha to see how this code is supposed to behave.
Update #2: I cleaned up the code more and added a validate function to make sure the Gram-Schmidt code is working correctly, but the code still fails and the value of k exceeds the number of dimensions (or number of vectors), which doesn't make sense.
The algorithm description in Wikipedia uses rather odd notation -- the vectors are numbered 0..n (rather than, say, 0..n-1 or 1..n), so the total number of vectors is n+1.
The code you've posted here treats this.dimensions as if it corresponds to n in the Wikipedia description. Nothing wrong with that so far.
However, the constructor in the full source file on GitHub sets this.dimensions = matrix[0].length. Two things about this look wrong. The first is that surely matrix[0].length is more like m (the dimension of the space) than n (the number of vectors, minus 1 for unclear reasons). The second is that if it's meant to be n then you need to subtract 1 because the number of vectors is n+1, not n.
So if you want to use this.dimensions to mean n, I think you need to initialize it as matrix.length-1. With the square matrix in your test case, using matrix[0].length-1 would work, but I think the code will then break when you feed in a non-square matrix. The name dimensions is kinda misleading, too; maybe just n to match the Wikipedia description?
Or you could call it something like nVectors, let it equal matrix.length, and change the rest of the code appropriately, which just means an adjustment in the termination condition for the main loop.
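A minimal sketch of that second option, using hypothetical names on top of the constructor from the question's repository (gs, tools, and the rest are assumed unchanged):
class LLL {
  constructor(matrix, delta = 0.75) {
    this.matrix_before = matrix;
    this.nVectors = matrix.length; // number of input vectors, not matrix[0].length
    this.delta = delta;
  }

  calculate_LLL() {
    // same body as in the question, except the loop bound now refers to vectors:
    // while (k <= this.nVectors - 1 && !flag) { ... }
    // so k ranges over vector indices 1 .. nVectors - 1 and can never run past the array
  }
}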

Convert A Large Integer To a Hex String In Javascript

I need to find a way to convert a large number into a hex string in JavaScript. Straight off the bat, I tried myBigNumber.toString(16) but if myBigNumber has a very large value (e.g. 1298925419114529174706173) then myBigNumber.toString(16) will return an erroneous result, which is just brilliant. I tried writing my own function as follows:
function toHex(integer) {
  var result = '';
  while (integer) {
    result = (integer % 16).toString(16) + result;
    integer = Math.floor(integer / 16);
  }
  return result;
}
However, large numbers modulo 16 all return 0 (I think this fundamental issue is what is causing the problem with toString). I also tried replacing (integer % 16) with (integer - 16 * Math.floor(integer / 16)), but that had the same issue.
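A quick way to see the precision limit involved (the exact results below assume IEEE-754 doubles, which is what JavaScript numbers are):
console.log(Number.MAX_SAFE_INTEGER);                              // 9007199254740991 (2^53 - 1)
console.log(1298925419114529174706173 > Number.MAX_SAFE_INTEGER);  // true
// the literal is roughly 2^80, so it is rounded to a multiple of 2^28 when parsed,
// which is why every % 16 along the way comes out as 0
console.log(1298925419114529174706173 % 16);                       // 0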
I have also looked at the Big Integer Javascript library but that is a huge plugin for one, hopefully relatively straightforward problem.
Any thoughts as to how I can get a valid result? Maybe some sort of divide and conquer approach? I am really rather stuck here.
Assuming you have your integer stored as a decimal string like '1298925419114529174706173':
function dec2hex(str){ // .toString(16) only works up to 2^53
  var dec = str.toString().split(''), sum = [], hex = [], i, s
  while(dec.length){
    s = 1 * dec.shift()
    for(i = 0; s || i < sum.length; i++){
      s += (sum[i] || 0) * 10
      sum[i] = s % 16
      s = (s - sum[i]) / 16
    }
  }
  while(sum.length){
    hex.push(sum.pop().toString(16))
  }
  return hex.join('')
}
The numbers in question are above JavaScript's largest safe integer. However, you can work with such large numbers as strings, and there are some plugins which can help you do this. An example which is particularly useful in this circumstance is hex2dec.
The approach I took was to use the bignumber.js library and create a BigNumber passing in the value as a string then just use toString to convert to hex:
const BigNumber = require('bignumber.js');
const lrgIntStr = '1298925419114529174706173';
const bn = new BigNumber(lrgIntStr);
const hex = bn.toString(16);
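As an aside, in environments with native BigInt (the same feature used in the large-integer answer further up), no library is needed at all; a sketch under that assumption:
const lrgIntStr = '1298925419114529174706173';
const hex = BigInt(lrgIntStr).toString(16); // exact conversion, no 2^53 limit
console.log(hex); // should match what dec2hex above produces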
