Mistake in this JavaScript algorithm converting Roman numerals to integers? - javascript

I apologise in advance, as the probability that I have done something stupid is 1; it's been just a few minutes since I started problem solving with code. That being said, please don't mind, I know I'm dumb.
Problem:
Roman numerals are represented by seven different symbols: I, V, X, L, C, D and M.
(Symbol : Value) =
(I : 1)
(V : 5)
(X : 10)
(L : 50)
(C : 100)
(D : 500)
(M : 1000)
For example, 2 is written as II in Roman numeral, just two one's added together. 12 is written as XII, which is simply X + II. The number 27 is written as XXVII, which is XX + V + II.
Roman numerals are usually written largest to smallest from left to right. However, the numeral for four is not IIII. Instead, the number four is written as IV. Because the one is before the five we subtract it making four. The same principle applies to the number nine, which is written as IX. There are six instances where subtraction is used:
I can be placed before V (5) and X (10) to make 4 and 9.
X can be placed before L (50) and C (100) to make 40 and 90.
C can be placed before D (500) and M (1000) to make 400 and 900.
Given a roman numeral, convert it to an integer.
Example 1:
Input: s = "III"
Output: 3
Explanation: III = 3.
Example 2:
Input: s = "LVIII"
Output: 58
Explanation: L = 50, V = 5, III = 3.
Example 3:
Input: s = "MCMXCIV"
Output: 1994
Explanation: M = 1000, CM = 900, XC = 90 and IV = 4.
My solution:
let old;
var romanToInt = function(s) {
    let dict = {I: 1, V: 5, X: 10, L: 50, C: 100, D: 500, M: 1000};
    let x = 0;
    for (let letter of s) {
        x = dict[letter] + x;
        if (old == "I" && letter == "V" || old == "I" && letter == "x") { x = x - 2 * old; };
        if (old == "X" && letter == "L" || old == "X" && letter == "C") { x = x - 2 * old; };
        if (old == "C" && letter == "D" || old == "C" && letter == "M") { x = x - 2 * old; };
        old = letter;
    };
    return x;
};
console.log(romanToInt("VIII"))
Can you please tell me why this isn't working?
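For what it's worth, a minimal corrected sketch of the snippet above: the check letter == "x" tests a lowercase letter where the numeral is an uppercase "X", and 2*old multiplies the previous letter itself (a string, which yields NaN) instead of its value dict[old]. Declaring old inside the function also keeps state from leaking between calls.
var romanToInt = function(s) {
    let old;
    let dict = {I: 1, V: 5, X: 10, L: 50, C: 100, D: 500, M: 1000};
    let x = 0;
    for (let letter of s) {
        x = dict[letter] + x;
        // Subtract twice the previous letter's VALUE, and compare against uppercase "X"
        if (old == "I" && (letter == "V" || letter == "X")) { x = x - 2 * dict[old]; }
        if (old == "X" && (letter == "L" || letter == "C")) { x = x - 2 * dict[old]; }
        if (old == "C" && (letter == "D" || letter == "M")) { x = x - 2 * dict[old]; }
        old = letter;
    }
    return x;
};
console.log(romanToInt("MCMXCIV")); // 1994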

Would you be up for a different solution? Since the logic of Roman numerals follows a consistent pattern with a single look-back, this is much easier to write as a loop with a look-back condition rather than a sum of multiple conditions on the string value. Let me explain.
Since all string values are associated with a numeric value, you can just convert each character in the string to an integer and then add them up.
To handle the subtraction (thank you for explaining this; I honestly had no idea it worked like this), you can look back each time you convert a character to a numeral and check whether the previous numeral is lower than your current value. That tells you a subtraction is needed. Below, I solve this by keeping the original lower value, let's say 10, and when the current character's value is greater, at 50, I take 50 and subtract the previous value 10 twice: once to cancel the 10 that was already added, and once more to produce the overall subtraction from 50 (10 + 50 - 10 - 10 = 40).
var data = {I:1,V:5,X:10,L:50,C:100,D:500,M:1000};
var input = 'MCMXCIV'
console.log(
input.split('').map((v,i,e) => data[e[i-1]] < data[v] ? data[v]-(2*data[e[i-1]]): data[v]).reduce((c,v) => c+v)
)
A commented version to explain the code
input.split('').map( //split the roman numeral string into an array of characters and then "map" over each one
// v = current roman numeral, i = current index of array, e = all elements of array
(v,i,e) =>
data[e[i-1]] < data[v] // if previous roman numeral value is less than the current ...
? data[v]-(2*data[e[i-1]]) // subtract the current value by 2 times the previous lesser value
: data[v] // convert roman numeral to integer
).reduce((c,v) => c+v) // reduce the array of values into a single integer

Update
Here's something I just learned:
1. The letters I, X, and C can be repeated up to three times in succession.
2. If a lower-value digit is written to the left of a higher-value digit, it is subtracted.
3. If a lower-value digit is written to the right of a higher-value digit, it is added.
4. Only I, X, and C can be used as subtractive numerals.
In the OP there's a ton of flow-control statements (ifs), and I believe they were an attempt to satisfy rules 2 and 4.
Solution
Add c: -100, x: -10, and i: -1 key/values to dict.
Create 3 RegExp objects -- each pattern covers rules 2 and 4.
Before you begin to convert the given string, use .replace() to rewrite every left-placed C, X, and I:
...// c, x, i added to dict
c: -100,
x: -10,
i: -1
};
const rgxC = new RegExp(/(C)([DM])/, 'g');
const rgxX = new RegExp(/(X)([LCDM])/, 'g');
const rgxI = new RegExp(/(I)([VXLCDM])/, 'g');
roman = roman.replace(rgxC, 'c$2').replace(rgxX, 'x$2').replace(rgxI, 'i$2');
Example A - Modified OP with solution
Example B - A terse function fromRoman(string)
Example C - An inverse function toRoman(number). I barely read the question, brain farted, and wrote this, DERP! 😩
Example A
const romanToInt = roman => {
const dict = {
I: 1,
V: 5,
X: 10,
L: 50,
C: 100,
D: 500,
M: 1000,
c: -100,
x: -10,
i: -1
};
const rgxC = new RegExp(/(C)([DM])/, 'g');
const rgxX = new RegExp(/(X)([LCDM])/, 'g');
const rgxI = new RegExp(/(I)([VXLCDM])/, 'g');
roman = roman.replace(rgxC, 'c$2').replace(rgxX, 'x$2').replace(rgxI, 'i$2');
let sum = 0;
for (let char of roman) {
sum = dict[char] + sum;
}
return sum;
};
console.log(romanToInt('MLIX'));
console.log(romanToInt('MCDIX'));
Example B
/**
* Converts a given Roman numeral to a number
* @param {String} roman - Roman numeral to convert
* @return {Number} a Number
*/
function fromRoman(roman) {
// Object of Roman numbers and decimal equivalents
const R = {
M: 1000,
D: 500,
C: 100,
L: 50,
X: 10,
V: 5,
I: 1,
i: -1,
x: -10,
c: -100
};
const rgxC = new RegExp(/(C)([DM])/, 'g');
const rgxX = new RegExp(/(X)([LCDM])/, 'g');
const rgxI = new RegExp(/(I)([VXLCDM])/, 'g');
roman = roman.replace(rgxC, 'c$2').replace(rgxX, 'x$2').replace(rgxI, 'i$2');
/*
Split the Roman numeral into an array of characters
then use each char as the key of object R to get the
number value and add each number and return the sum.
*/
return roman.split('').reduce((sum, r) => sum + R[r], 0);
}
console.log(fromRoman('MCMLXXXVI'));
console.log(fromRoman('LXCIV'));
Example C
/**
* Helper function divides 1st param by the second
* param to get the quotient and remainder
* @param {Number} dividend to divide into
* @param {Number} divisor to divide by
* @return {Array} [0] = quotient [1] = remainder
*/
const div = (dividend, divisor) =>
[Math.floor(dividend / divisor), (dividend % divisor)];
/**
* Converts a given number to a Roman numeral
* #param {Number} int an integer to convert
* #return {String} a Roman numeral
*/
function toRoman(int) {
// A 2D array of Roman numbers and decimal equivalents
const R = [
['M', 1000],
['D', 500],
['C', 100],
['L', 50],
['X', 10],
['V', 5],
['I', 1]
];
/*
Returns a 2D array of only the destructured sub-arrays whose value
at R[?][1] is less than or equal to the given number. Basically
determining what we can divide with and get a whole number.
*/
const maxD = R.filter(([r, n]) => int >= n);
/*
The outer loop destructures the filtered 2D array on each
iteration, the function div() divides the given number by
maxD[?][1] returning an array called "qr". "qr" contains the
quotient and remainder.
Then in the inner loop maxD[?][0] is added to the empty array
(rArray) as many times as qr[0] (quotient) and qr[1] (remainder)
becomes the new int to divide into.
*/
let roman = [];
for (const [rom, num] of maxD) {
let qr = div(int, num);
int = qr[1];
for (let q = 0; q < qr[0]; q++) {
roman.push(rom);
}
}
/*
Convert the array into a string and correct any 9s or 4s
(in every position, e.g. VIIII -> IX, LXXXX -> XC) to satisfy rule 1.
*/
return roman.join('')
.replace('DCCCC', 'CM').replace('CCCC', 'CD')
.replace('LXXXX', 'XC').replace('XXXX', 'XL')
.replace('VIIII', 'IX').replace('IIII', 'IV');
};
console.log(toRoman(204));
console.log(toRoman(6089));

var romanToInt = function(s) {
    let Roman = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000};
    let count = 0;
    for (var i = 0; i < s.length; i++) {
        // A smaller value before a larger one (IV, IX, XL, XC, CD, CM) is subtracted
        if (i + 1 < s.length && Roman[s[i]] < Roman[s[i + 1]]) {
            count -= Roman[s[i]];
        } else {
            count += Roman[s[i]];
        }
    }
    return count;
}


I wonder what this function really does .split(' ').map(x => +x);

I saw this line of code in a correction in a coding game
const tC = readline().split(' ').map(x => +x);
I wonder what it does, because when I log it, it renders the same thing as this one:
const tC = readline().split(' ').map(x => x);
but the rest of the code didn't work
Context:
/** Temperatures (easy) https://www.codingame.com/training/easy/temperatures
* Solving this puzzle validates that the loop concept is understood and that
* you can compare a list of values.
* This puzzle is also a playground to experiment the concept of lambdas in
* different programming languages. It's also an opportunity to discover
* functional programming.
*
* Statement:
* Your program must analyze records of temperatures to find the closest to
* zero.
*
* Story:
* It's freezing cold out there! Will you be able to find the temperature
* closest to zero in a set of temperatures readings?
**/
const N = +readline();
const tC = readline().split(' ').map(x => +x);
let min = Infinity;
for (let i in tC) {
(Math.abs(tC[i]) < Math.abs(min) || tC[i] === -min && tC[i] > 0) && (min = tC[i]);
}
print(min || 0);
Thanks a lot
The .map(x => +x) converts all items in the array to numbers and returns a new array with those converted values.
If you change it to .map(x => x), the values are left untouched and you just create a copy of the original array. So the strings remain strings, which will break the code if numbers are expected.
I personally would avoid the +x syntax and use the more verbose Number(x), and write either .map(x => Number(x)) or .map(Number).
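A quick illustration (sample values are mine):
const parts = "1 -2 -8 4 5".split(' '); // ["1", "-2", "-8", "4", "5"]
console.log(parts.map(x => +x));        // [1, -2, -8, 4, 5]
console.log(parts.map(Number));         // same result, arguably more readable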
According to this site, below are the inputs the program should receive:
Line 1: N, the number of temperatures to analyze
Line 2: A string with the N temperatures expressed as integers ranging from -273 to 5526
Let me provide line by line comments with respect to the game rules
// Line 1: reads number temperature inputs. + converts to number
const N = +readline();
// Line 2: reads string of temperatures.
// tC contains an array of temperatures of length N in numbers. + converts to number
const tC = readline().split(' ').map(x => +x);
let min = Infinity;
// iterate over tC array
for (let i in tC) {
// If two numbers are equally close to zero, positive integer has to be considered closest to zero
// set min = current iterating number if it matches above condition
(Math.abs(tC[i]) < Math.abs(min) || tC[i] === -min && tC[i] > 0) && (min = tC[i]);
}
print(min || 0);
Here is a working demo in JavaScript, modified to make it understandable for beginners.
// Line 1: reads number temperature inputs. + converts to number
// const N = +readline(); SAMPLE ALTERNATIVE
const N = +"5";
// Line 2: reads string of temperatures.
// tC contains an array of temperatures of length N in numbers. + converts to number
// const tC = readline().split(' ').map(x => +x); SAMPLE ALTERNATIVE
const tC = "1 -2 -8 4 5".split(' ').map(x => +x);
let min = Infinity;
// iterate over tC array
for (let i in tC) {
// If two numbers are equally close to zero, positive integer has to be considered closest to zero
// set min = current iterating number if it matches above condition
(Math.abs(tC[i]) < Math.abs(min) || tC[i] === -min && tC[i] > 0) && (min = tC[i]);
}
console.log(min || 0);
function readLine(){
return "123456"
}
var result = readLine().split("").map(x => +x)
console.log(result)
readLine().split("") // splits the string into an array as follows ["1", "2", "3", "4", "5", "6"]
.map(x => +x) // map method returns a new array which will take each item and gives a new array , here number changing from string to numbers as follows [1, 2, 3, 4, 5, 6] since +x is used
// the above is written in es6, which can be re written in es5 as follows
readLine().split("").map(function(x) {
return +x
})
// Note
In es6 if we have a single thing to pass we can avoid the function(x) to x
also we can remove the {} [curly braces and return too]
{return +x} to +x
ES2015
readLine().split("").map(function(x) {
return +x
})
ES2016
readLine().split("").map(x => +x);

Javascript pick winner from random numbers with percentage [duplicate]

I'm trying to devise a (good) way to choose a random number from a range of possible numbers where each number in the range is given a weight. To put it simply: given the range of numbers (0,1,2) choose a number where 0 has an 80% probability of being selected, 1 has a 10% chance and 2 has a 10% chance.
It's been about 8 years since my college stats class, so you can imagine the proper formula for this escapes me at the moment.
Here's the 'cheap and dirty' method that I came up with. This solution uses ColdFusion. Yours may use whatever language you'd like. I'm a programmer, I think I can handle porting it. Ultimately my solution needs to be in Groovy - I wrote this one in ColdFusion because it's easy to quickly write/test in CF.
public function weightedRandom( Struct options ) {
var tempArr = [];
for( var o in arguments.options )
{
var weight = arguments.options[ o ] * 10;
for ( var i = 1; i<= weight; i++ )
{
arrayAppend( tempArr, o );
}
}
return tempArr[ randRange( 1, arrayLen( tempArr ) ) ];
}
// test it
opts = { 0=.8, 1=.1, 2=.1 };
for( x = 1; x<=10; x++ )
{
writeDump( weightedRandom( opts ) );
}
I'm looking for better solutions, please suggest improvements or alternatives.
Rejection sampling (such as in your solution) is the first thing that comes to mind: you build a lookup table with elements populated according to their weight distribution, then pick a random location in the table and return it. As an implementation choice, I would make a higher-order function which takes a spec and returns a function which returns values based on the distribution in the spec; this way you avoid having to build the table for each call. The downsides are that the algorithmic performance of building the table is linear in the number of items, and there could potentially be a lot of memory usage for large specs (or those with members with very small or precise weights, e.g. {0:0.99999, 1:0.00001}). The upside is that picking a value takes constant time, which might be desirable if performance is critical. In JavaScript:
function weightedRand(spec) {
var i, j, table=[];
for (i in spec) {
// The constant 10 below should be computed based on the
// weights in the spec for a correct and optimal table size.
// E.g. the spec {0:0.999, 1:0.001} will break this impl.
for (j=0; j<spec[i]*10; j++) {
table.push(i);
}
}
return function() {
return table[Math.floor(Math.random() * table.length)];
}
}
var rand012 = weightedRand({0:0.8, 1:0.1, 2:0.1});
rand012(); // random in distribution...
Another strategy is to pick a random number in [0,1) and iterate over the weight specification summing the weights; if the random number is less than the sum, return the associated value. Of course, this assumes that the weights sum to one. This solution has no up-front cost but has average algorithmic performance linear in the number of entries in the spec. For example, in JavaScript:
function weightedRand2(spec) {
var i, sum=0, r=Math.random();
for (i in spec) {
sum += spec[i];
if (r <= sum) return i;
}
}
weightedRand2({0:0.8, 1:0.1, 2:0.1}); // random in distribution...
Generate a random number R between 0 and 1.
If R in [0, 0.1) -> 1
If R in [0.1, 0.2) -> 2
If R in [0.2, 1] -> 3
If you can't directly get a number between 0 and 1, generate a number in a range that will produce as much precision as you want. For example, if you have the weights for
(1, 83.7%) and (2, 16.3%), roll a number from 1 to 1000. 1-837 is a 1. 838-1000 is 2.
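A minimal sketch of that integer-range trick (function name is mine):
function pick837() {
    const roll = Math.floor(Math.random() * 1000) + 1; // integer from 1 to 1000
    return roll <= 837 ? 1 : 2; // 1-837 -> 1 (83.7%), 838-1000 -> 2 (16.3%)
}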
I use the following
function weightedRandom(min, max) {
return Math.round(max / (Math.random() * max + min));
}
This is my go-to "weighted" random, where I use an inverse function of x (where x is a random number between min and max) to generate a weighted result, where the minimum is the heaviest element and the maximum the lightest (with the least chance of being the result).
So basically, using weightedRandom(1, 5) means the chances of getting a 1 are higher than a 2 which are higher than a 3, which are higher than a 4, which are higher than a 5.
Might not be useful for your use case but probably useful for people googling this same question.
After 100 iterations, it gave me:
==================
| Result | Times |
==================
| 1 | 55 |
| 2 | 28 |
| 3 | 8 |
| 4 | 7 |
| 5 | 2 |
==================
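A quick tally loop like this one reproduces a distribution of that shape (my sketch, using the weightedRandom function above):
const counts = {1: 0, 2: 0, 3: 0, 4: 0, 5: 0};
for (let i = 0; i < 100; i++) {
    counts[weightedRandom(1, 5)]++; // every result falls in 1..5
}
console.log(counts); // e.g. {1: 55, 2: 28, 3: 8, 4: 7, 5: 2}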
Here are 3 solutions in JavaScript, since I'm not sure which language you want it in. Depending on your needs, one of the first two might work, but the third one is probably the easiest to implement with large sets of numbers.
function randomSimple(){
return [0,0,0,0,0,0,0,0,1,2][Math.floor(Math.random()*10)];
}
function randomCase(){
var n=Math.floor(Math.random()*100)
// switch(true) matches the first case whose condition evaluates to true
switch(true){
case n<80:
return 0;
case n<90:
return 1;
case n<100:
return 2;
}
}
function randomLoop(weight,num){
var n=Math.floor(Math.random()*100),amt=0;
for(var i=0;i<weight.length;i++){
//amt+=weight[i]; *alternative method
//if(n<amt){
if(n<weight[i]){
return num[i];
}
}
}
weight=[80,90,100];
//weight=[80,10,10]; *alternative method
num=[0,1,2]
8 years late but here's my solution in 4 lines.
1) Prepare an array for the probability mass function, such that
pmf[array_index] = P(X = array_index):
var pmf = [0.8, 0.1, 0.1]
2) Prepare an array for the corresponding cumulative distribution function, such that
cdf[array_index] = F(X = array_index):
var cdf = pmf.map((sum => value => sum += value)(0))
// [0.8, 0.9, 1]
3a) Generate a random number.
3b) Get an array of the CDF elements that are less than or equal to this number.
3c) Return its length:
var r = Math.random()
cdf.filter(el => r >= el).length
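Putting the three steps together as one function (my composition of the lines above; it assumes the pmf sums to 1):
function samplePmf(pmf) {
    const cdf = pmf.map((sum => value => sum += value)(0)); // running sum -> CDF
    const r = Math.random();
    return cdf.filter(el => r >= el).length; // number of CDF entries at or below r
}
console.log(samplePmf([0.8, 0.1, 0.1])); // 0 about 80% of the time, 1 and 2 about 10% each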
This is more or less a generalized version of what @trinithis wrote, in Java. I did it with ints rather than floats to avoid messy rounding errors.
static class Weighting {
int value;
int weighting;
public Weighting(int v, int w) {
this.value = v;
this.weighting = w;
}
}
public static int weightedRandom(List<Weighting> weightingOptions) {
//determine sum of all weightings
int total = 0;
for (Weighting w : weightingOptions) {
total += w.weighting;
}
//select a random value between 0 and our total
int random = new Random().nextInt(total);
//loop thru our weightings until we arrive at the correct one
int current = 0;
for (Weighting w : weightingOptions) {
current += w.weighting;
if (random < current)
return w.value;
}
//shouldn't happen.
return -1;
}
public static void main(String[] args) {
List<Weighting> weightings = new ArrayList<Weighting>();
weightings.add(new Weighting(0, 8));
weightings.add(new Weighting(1, 1));
weightings.add(new Weighting(2, 1));
for (int i = 0; i < 100; i++) {
System.out.println(weightedRandom(weightings));
}
}
How about
int[] numbers = { 0, 0, 0, 0, 0, 0, 0, 0, 1, 2 };
then you can randomly select from numbers and 0 will have an 80% chance, 1 10%, and 2 10%
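The same idea in JavaScript (a quick sketch):
const numbers = [0, 0, 0, 0, 0, 0, 0, 0, 1, 2]; // each value repeated by its weight
const pick = numbers[Math.floor(Math.random() * numbers.length)];
console.log(pick); // 0 with 80% probability, 1 or 2 with 10% each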
This one is in Mathematica, but it's easy to port to another language; I use it in my games, and it can handle decimal weights:
weights = {0.5, 1, 2}; (* The weights *)
weights = N@weights/Total@weights; (* Normalize the weights so that the list's sum is always 1 *)
min = 0; (* The first min value should be 0 *)
max = weights[[1]]; (* The first max value should be the first element of the newly created weights list. Note that in Mathematica the first element has index 1, not 0 *)
random = RandomReal[]; (* Generate a random float from 0 to 1 *)
For[i = 1, i <= Length@weights, i++,
 If[random >= min && random < max,
  Print["Chosen index number: " <> ToString@i]
 ];
 min += weights[[i]];
 If[i == Length@weights,
  max = 1,
  max += weights[[i + 1]]
 ]
]
(From here on I'm treating the list's first element as index 0.) The idea behind this is that, with a normalized list weights, the chance of returning index n is weights[n], so the distance between min and max at step n should be weights[n]. The total distance from the minimum min (which we set to 0) to the maximum max is the sum of the list weights.
The good thing about this is that you don't append to any array or nest for loops, which considerably reduces execution time.
Here is the code in C#, without needing to normalize the weights list and with some code removed:
int WeightedRandom(List<float> weights) {
float total = 0f;
foreach (float weight in weights) {
total += weight;
}
float max = weights [0],
random = Random.Range(0f, total);
for (int index = 0; index < weights.Count; index++) {
if (random < max) {
return index;
} else if (index == weights.Count - 1) {
return weights.Count-1;
}
max += weights[index+1];
}
return -1;
}
I suggest using a continuous check of the probability against the remainder of the random number.
This function first sets the return value to the last possible index, and iterates until the remainder of the random value is smaller than the current probability.
The probabilities have to sum to one.
function getRandomIndexByProbability(probabilities) {
var r = Math.random(),
index = probabilities.length - 1;
probabilities.some(function (probability, i) {
if (r < probability) {
index = i;
return true;
}
r -= probability;
});
return index;
}
var i,
probabilities = [0.8, 0.1, 0.1],
count = probabilities.map(function () { return 0; });
for (i = 0; i < 1e6; i++) {
count[getRandomIndexByProbability(probabilities)]++;
}
console.log(count);
Thanks all, this was a helpful thread. I encapsulated it into a convenience function (TypeScript). Tests below (sinon, jest). It could definitely be a bit tighter, but hopefully it's readable.
export type WeightedOptions = {
[option: string]: number;
};
// Pass in an object like { a: 10, b: 4, c: 400 } and it'll return either "a", "b", or "c", factoring in their respective
// weight. So in this example, "c" is likely to be returned 400 times out of 414
export const getRandomWeightedValue = (options: WeightedOptions) => {
const keys = Object.keys(options);
const totalSum = keys.reduce((acc, item) => acc + options[item], 0);
let runningTotal = 0;
const cumulativeValues = keys.map((key) => {
const relativeValue = options[key]/totalSum;
const cv = {
key,
value: relativeValue + runningTotal
};
runningTotal += relativeValue;
return cv;
});
const r = Math.random();
return cumulativeValues.find(({ key, value }) => r <= value)!.key;
};
Tests:
describe('getRandomWeightedValue', () => {
// Out of 1, the relative and cumulative values for these are:
// a: 0.1666 -> 0.16666
// b: 0.3333 -> 0.5
// c: 0.5 -> 1
const values = { a: 10, b: 20, c: 30 };
it('returns appropriate values for particular random value', () => {
// any random number under 0.166666 should return "a"
const stub1 = sinon.stub(Math, 'random').returns(0);
const result1 = randomUtils.getRandomWeightedValue(values);
expect(result1).toEqual('a');
stub1.restore();
const stub2 = sinon.stub(Math, 'random').returns(0.1666);
const result2 = randomUtils.getRandomWeightedValue(values);
expect(result2).toEqual('a');
stub2.restore();
// any random number between 0.166666 and 0.5 should return "b"
const stub3 = sinon.stub(Math, 'random').returns(0.17);
const result3 = randomUtils.getRandomWeightedValue(values);
expect(result3).toEqual('b');
stub3.restore();
const stub4 = sinon.stub(Math, 'random').returns(0.3333);
const result4 = randomUtils.getRandomWeightedValue(values);
expect(result4).toEqual('b');
stub4.restore();
const stub5 = sinon.stub(Math, 'random').returns(0.5);
const result5 = randomUtils.getRandomWeightedValue(values);
expect(result5).toEqual('b');
stub5.restore();
// any random number above 0.5 should return "c"
const stub6 = sinon.stub(Math, 'random').returns(0.500001);
const result6 = randomUtils.getRandomWeightedValue(values);
expect(result6).toEqual('c');
stub6.restore();
const stub7 = sinon.stub(Math, 'random').returns(1);
const result7 = randomUtils.getRandomWeightedValue(values);
expect(result7).toEqual('c');
stub7.restore();
});
});
Shortest solution in modern JavaScript
Note: all weights need to be integers
function weightedRandom(items){
let table = Object.entries(items)
.flatMap(([item, weight]) => Array(weight).fill(item)) // repeat each key "weight" times
return table[Math.floor(Math.random() * table.length)]
}
const key = weightedRandom({
"key1": 1,
"key2": 4,
"key3": 8
}) // returns e.g. "key1"
Here are the inputs and ratios: 0 (80%), 1 (10%), 2 (10%).
Let's draw them out so it's easy to visualize:
                  0                          1        2
-------------------------------------________+++++++++
Add up the total weight and call it TR, for total ratio: 100 in this case.
Now randomly pick a number from 0 to TR (0 to 100 in this case), 100 being the weights' total. Call it RN, for random number.
So now we have TR as the total weight and RN as a random number between 0 and TR.
Imagine we picked a random number from 0 to 100, say 21; that's effectively 21%.
How do we convert/match this to our input numbers?
Loop over each weight (80, 10, 10) and keep the running sum of the weights we have already visited.
The moment the running sum of the weights is greater than the random number RN (21 in this case), we stop the loop and return that element's position.
double sum = 0;
int position = -1;
for (double weight : weights) {
    position++;
    sum = sum + weight;
    if (sum > 21) // (80 > 21) so break on the first pass
        break;
}
// position will be 0, so we return array[0] --> 0
Let's say the random number (between 0 and 100) is 83. Let's do it again:
double sum = 0;
int position = -1;
for (double weight : weights) {
    position++;
    sum = sum + weight;
    if (sum > 83) // (90 > 83) so break
        break;
}
// we did two passes in the loop, so position is 1 and we return array[1] --> 1
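The same walk in JavaScript, since that's the thread's language (my sketch; the names are mine):
function weightedPick(weights, values) {
    const total = weights.reduce((a, b) => a + b, 0); // TR: total ratio
    const rn = Math.random() * total;                 // RN: random number in [0, total)
    let sum = 0;
    for (let i = 0; i < weights.length; i++) {
        sum += weights[i];
        if (rn < sum) return values[i]; // stop as soon as the running sum passes RN
    }
}
console.log(weightedPick([80, 10, 10], [0, 1, 2]));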
I have a slot machine and I used the code below to generate random numbers. In probabilitiesSlotMachine the keys are the outputs of the slot machine, and the values represent their weights.
const probabilitiesSlotMachine = [{0 : 1000}, {1 : 100}, {2 : 50}, {3 : 30}, {4 : 20}, {5 : 10}, {6 : 5}, {7 : 4}, {8 : 2}, {9 : 1}]
var allSlotMachineResults = []
probabilitiesSlotMachine.forEach(function(obj, index){
for (var key in obj){
for (var loop = 0; loop < obj[key]; loop ++){
allSlotMachineResults.push(key)
}
}
});
Now to generate a random output, I use this code:
const random = allSlotMachineResults[Math.floor(Math.random() * allSlotMachineResults.length)]
Enjoy the O(1) (constant time) solution for your problem.
If the input set is small, it can easily be implemented with hardcoded ranges.
const number = Math.floor(Math.random() * 100); // Generate a random integer from 0 to 99
let element;
if (number >= 0 && number <= 79) {
/*
In the range of 0 to 99, every number has equal probability
of occurring. Therefore, if you gather 80 numbers (0 to 79) and
make a "sub-group" of them, then their probabilities will get added.
Hence, what you get is an 80% chance that the number will fall in this
range.
So, quite naturally, there is 80% probability that this code will run.
Now, manually choose / assign element of your array to this variable.
*/
element = 0;
}
else if (number >= 80 && number <= 89) {
// 10% chance that this code runs.
element = 1;
}
else if (number >= 90 && number <= 99) {
// 10% chance that this code runs.
element = 2;
}


How to create a function that converts a Number to a Bijective Hexavigesimal?

Maybe I am just not good enough at math, but I am having a problem converting a number into purely alphabetical bijective hexavigesimal, just like Microsoft Excel/OpenOffice Calc do it.
Here is a version of my code, but it did not give me the output I needed:
var toHexvg = function(a){
var x='';
var let="_abcdefghijklmnopqrstuvwxyz";
var len=let.length;
var b=a;
var cnt=0;
var y = Array();
do{
a=(a-(a%len))/len;
cnt++;
}while(a!=0)
a=b;
var vnt=0;
do{
b+=Math.pow((len),vnt)*Math.floor(a/Math.pow((len),vnt+1));
vnt++;
}while(vnt!=cnt)
var c=b;
do{
y.unshift( c%len );
c=(c-(c%len))/len;
}while(c!=0)
for(var i in y)x+=let[y[i]];
return x;
}
The best output my efforts can get is: a b c d ... y z ba bb bc (though not with the actual code above). The intended output is supposed to be a b c ... y z aa ab ac ... zz aaa aab aac ... zzzzz aaaaaa aaaaab; you get the picture.
Basically, my problem is more about doing the "math" rather than the function. Ultimately my question is: how do I do the math of hexavigesimal conversion, up to a [supposed] infinity, just like Microsoft Excel?
And if possible, a source code. Thank you in advance.
Okay, here's my attempt, assuming you want the sequence to be start with "a" (representing 0) and going:
a, b, c, ..., y, z, aa, ab, ac, ..., zy, zz, aaa, aab, ...
This works and hopefully makes some sense. The funky line is there because it mathematically makes more sense for 0 to be represented by the empty string and then "a" would be 1, etc.
alpha = "abcdefghijklmnopqrstuvwxyz";
function hex(a) {
// First figure out how many digits there are.
a += 1; // This line is funky
c = 0;
var x = 1;
while (a >= x) {
c++;
a -= x;
x *= 26;
}
// Now you can do normal base conversion.
var s = "";
for (var i = 0; i < c; i++) {
s = alpha.charAt(a % 26) + s;
a = Math.floor(a/26);
}
return s;
}
However, if you're planning to simply print them out in order, there are far more efficient methods. For example, using recursion and/or prefixes and stuff.
Although @user826788 has already posted working code (which is even a third quicker), I'll post my own work that I did before finding the posts here (as I didn't know the word "hexavigesimal"). However, it also includes the function for the other direction. Note that I use a = 1, as I use it to convert the starting list element from
aa) first
ab) second
to
<ol type="a" start="27">
<li>first</li>
<li>second</li>
</ol>
:
function linum2int(input) {
input = input.replace(/[^A-Za-z]/, '');
output = 0;
for (i = 0; i < input.length; i++) {
output = output * 26 + parseInt(input.substr(i, 1), 26 + 10) - 9;
}
console.log('linum', output);
return output;
}
function int2linum(input) {
var zeros = 0;
var next = input;
var generation = 0;
while (next >= 27) {
next = (next - 1) / 26 - (next - 1) % 26 / 26;
zeros += next * Math.pow(27, generation);
generation++;
}
output = (input + zeros).toString(27).replace(/./g, function ($0) {
return '_abcdefghijklmnopqrstuvwxyz'.charAt(parseInt($0, 27));
});
return output;
}
linum2int("aa"); // 27
int2linum(27); // "aa"
You could accomplish this with recursion, like this:
const toBijective = n => (n > 26 ? toBijective(Math.floor((n - 1) / 26)) : "") + ((n % 26 || 26) + 9).toString(36);
// Parsing is not recursive
const parseBijective = str => str.split("").reverse().reduce((acc, x, i) => acc + ((parseInt(x, 36) - 9) * (26 ** i)), 0);
toBijective(1) // "a"
toBijective(27) // "aa"
toBijective(703) // "aaa"
toBijective(18279) // "aaaa"
toBijective(127341046141) // "overflow"
parseBijective("Overflow") // 127341046141
I don't understand how to work it out from a formula, but I fooled around with it for a while and came up with the following algorithm to literally count up to the requested column number:
var getAlpha = (function() {
var alphas = [null, "a"],
highest = [1];
return function(decNum) {
if (alphas[decNum])
return alphas[decNum];
var d,
next,
carry,
i = alphas.length;
for(; i <= decNum; i++) {
next = "";
carry = true;
for(d = 0; d < highest.length; d++){
if (carry) {
if (highest[d] === 26) {
highest[d] = 1;
} else {
highest[d]++;
carry = false;
}
}
next = String.fromCharCode(
highest[d] + 96)
+ next;
}
if (carry) {
highest.push(1);
next = "a" + next;
}
alphas[i] = next;
}
return alphas[decNum];
};
})();
alert(getAlpha(27)); // "aa"
alert(getAlpha(100000)); // "eqxd"
Demo: http://jsfiddle.net/6SE2f/1/
The highest array holds the current highest number with an array element per "digit" (element 0 is the least significant "digit").
When I started the above it seemed a good idea to cache each value once calculated, to save time if the same value was requested again, but in practice (with Chrome) it only took about 3 seconds to calculate the 1,000,000th value (bdwgn) and about 20 seconds to calculate the 10,000,000th value (uvxxk). With the caching removed it took about 14 seconds to the 10,000,000th value.
Just finished writing this code earlier tonight, and I found this question while on a quest to figure out what to name the damn thing. Here it is (in case anybody feels like using it):
/**
* Convert an integer to bijective hexavigesimal notation (alphabetic base-26).
*
* @param {Number} int - A positive integer above zero
* @return {String} The number's value expressed in uppercased bijective base-26
*/
function bijectiveBase26(int){
const sequence = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const length = sequence.length;
if(int <= 0) return int;
if(int <= length) return sequence[int - 1];
let index = (int % length) || length;
let result = [sequence[index - 1]];
while((int = Math.floor((int - 1) / length)) > 0){
index = (int % length) || length;
result.push(sequence[index - 1]);
}
return result.reverse().join("")
}
I had to solve this same problem today for work. My solution is written in Elixir and uses recursion, but I explain the thinking in plain English.
Here are some example transformations:
0 -> "A", 1 -> "B", 2 -> "C", 3 -> "D", ..
25 -> "Z", 26 -> "AA", 27 -> "AB", ...
At first glance it might seem like a normal 26-base counting system
but unfortunately it is not so simple.
The "problem" becomes clear when you realize:
A = 0
AA = 26
This is at odds with a normal counting system, where "0" does not behave
as "1" when it is in a decimal place other than the unit.
To understand the algorithm, consider a simpler but equivalent base-2 system:
A = 0
B = 1
AA = 2
AB = 3
BA = 4
BB = 5
AAA = 6
In a normal binary counting system we can determine the "value" of decimal places by
taking increasing powers of 2 (1, 2, 4, 8, 16) and the value of a binary number is
calculated by multiplying each digit by that digit place's value.
e.g. 10101 = 1 * (2 ^ 4) + 0 * (2 ^ 3) + 1 * (2 ^ 2) + 0 * (2 ^ 1) + 1 * (2 ^ 0) = 21
In our more complicated AB system, we can see by inspection that the decimal place values are:
1, 2, 6, 14, 30, 62
The pattern reveals itself to be (previous_unit_place_value + 1) * 2.
As such, to get the next lower unit place value, we divide by 2 and subtract 1.
This can be extended to a base-26 system. Simply divide by 26 and subtract 1.
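The recurrence is easy to check with a few lines of JavaScript (my sketch; the first two place values are the seeds):
// next_place_value = (previous_place_value + 1) * 2
const placeValues = [1, 2];
while (placeValues.length < 6) {
    placeValues.push((placeValues[placeValues.length - 1] + 1) * 2);
}
console.log(placeValues); // [1, 2, 6, 14, 30, 62]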
Now a formula for transforming a normal base-10 number to this special base-26 is apparent. Say the input is x.
1. Create an accumulator list l.
2. If x is less than 26, set l = [x | l] and go to step 5. Otherwise, continue.
3. Divide x by 26. The floored result is d and the remainder is r.
4. Push the remainder as head onto the accumulator list, i.e. l = [r | l], then go to step 2 with (d - 1) as input, i.e. x = d - 1.
5. Convert all elements of l to their corresponding chars: 0 -> A, etc.
So, finally, here is my answer, written in Elixir:
defmodule BijectiveHexavigesimal do
def to_az_string(number, base \\ 26) do
number
|> to_list(base)
|> Enum.map(&to_char/1)
|> to_string()
end
def to_09_integer(string, base \\ 26) do
string
|> String.to_charlist()
|> Enum.reverse()
|> Enum.reduce({0, nil}, fn
char, {_total, nil} ->
{to_integer(char), 1}
char, {total, previous_place_value} ->
char_value = to_integer(char + 1)
place_value = previous_place_value * base
new_total = total + char_value * place_value
{new_total, place_value}
end)
|> elem(0)
end
def to_list(number, base, acc \\ []) do
if number < base do
[number | acc]
else
to_list(div(number, base) - 1, base, [rem(number, base) | acc])
end
end
defp to_char(x), do: x + 65
# assumed helper, missing from the original post: the inverse of to_char/1
defp to_integer(x), do: x - 65
end
You use it simply as BijectiveHexavigesimal.to_az_string(420). It also accepts an optional "base" arg.
I know the OP asked about Javascript but I wanted to provide an Elixir solution for posterity.
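Since the OP asked about JavaScript, here is the same algorithm sketched in JS (my translation of the steps above, using the zero-based mapping 0 -> "A", 26 -> "AA"):
function toAzString(n, base = 26) {
    const digits = [];
    while (n >= base) {
        digits.unshift(n % base);     // push the remainder
        n = Math.floor(n / base) - 1; // continue with (d - 1)
    }
    digits.unshift(n); // n is now a single "digit"
    return digits.map(d => String.fromCharCode(d + 65)).join("");
}
console.log(toAzString(0));  // "A"
console.log(toAzString(25)); // "Z"
console.log(toAzString(26)); // "AA"
console.log(toAzString(27)); // "AB"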
I have published these functions in an npm package here:
https://www.npmjs.com/package/@gkucmierz/utils
It converts bijective numeration to a number both ways (a BigInt version is also included):
https://github.com/gkucmierz/utils/blob/main/src/bijective-numeration.mjs

How to convert decimal to hexadecimal in JavaScript

How do you convert decimal values to their hexadecimal equivalent in JavaScript?
Convert a number to a hexadecimal string with:
hexString = yourNumber.toString(16);
And reverse the process with:
yourNumber = parseInt(hexString, 16);
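For example:
const hexString = (255).toString(16); // "ff"
const backAgain = parseInt("ff", 16); // 255
console.log(hexString, backAgain);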
If you need to handle things like bit fields or 32-bit colors, then you need to deal with signed numbers. The JavaScript function toString(16) will return a negative hexadecimal number which is usually not what you want. This function does some crazy addition to make it a positive number.
function decimalToHexString(number)
{
if (number < 0)
{
number = 0xFFFFFFFF + number + 1;
}
return number.toString(16).toUpperCase();
}
console.log(decimalToHexString(27));
console.log(decimalToHexString(48.6));
The code below will convert the decimal value d to hexadecimal. It also allows you to add padding to the hexadecimal result. So 0 will become 00 by default.
function decimalToHex(d, padding) {
var hex = Number(d).toString(16);
padding = typeof (padding) === "undefined" || padding === null ? padding = 2 : padding;
while (hex.length < padding) {
hex = "0" + hex;
}
return hex;
}
function toHex(d) {
return ("0"+(Number(d).toString(16))).slice(-2).toUpperCase()
}
For completeness, if you want the two's-complement hexadecimal representation of a negative number, you can use the zero-fill-right shift >>> operator. For instance:
> (-1).toString(16)
"-1"
> ((-2)>>>0).toString(16)
"fffffffe"
There is however one limitation: JavaScript bitwise operators treat their operands as a sequence of 32 bits, that is, you get the 32-bits two's complement.
With padding:
function dec2hex(i) {
return (i+0x10000).toString(16).substr(-4).toUpperCase();
}
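A couple of sample calls:
console.log(dec2hex(255));  // "00FF"
console.log(dec2hex(4095)); // "0FFF"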
The accepted answer did not take into account hexadecimal codes returned as a single digit. This is easily adjusted by:
function numHex(s)
{
var a = s.toString(16);
if ((a.length % 2) > 0) {
a = "0" + a;
}
return a;
}
and
function strHex(s)
{
var a = "";
for (var i=0; i<s.length; i++) {
a = a + numHex(s.charCodeAt(i));
}
return a;
}
I believe the above answers have been posted numerous times by others in one form or another. I wrap these in a toHex() function like so:
function toHex(s)
{
var re = new RegExp(/^\s*(\+|-)?((\d+(\.\d+)?)|(\.\d+))\s*$/);
if (re.test(s)) {
return '#' + strHex( s.toString());
}
else {
return 'A' + strHex(s);
}
}
Note that the numeric regular expression came from 10+ Useful JavaScript Regular Expression Functions to improve your web applications efficiency.
Update: After testing this thing several times I found an error (double quotes in the RegExp), so I fixed that. HOWEVER! After quite a bit of testing and having read the post by almaz - I realized I could not get negative numbers to work.
Further, I did some reading up on this, and since all JavaScript numbers are stored as 64-bit words no matter what, I tried modifying the numHex code to get the 64-bit word. But it turns out you can not do that. If you put "3.14159265" AS A NUMBER into a variable, all you will be able to get is the "3", because the fractional portion is only accessible by repeatedly multiplying the number by ten (i.e. 10.0). To put that another way: the hexadecimal value 0xF causes the floating-point value to be translated into an integer before it is ANDed, which removes everything behind the period, rather than taking the value as a whole (i.e. 3.14159265) and ANDing the floating-point value against 0xF.
So the best thing to do, in this case, is to convert the 3.14159265 into a string and then just convert the string. Because of the above, it also makes it easy to convert negative numbers, because the minus sign just becomes 0x2D on the front of the value.
So what I did, on determining that the variable contains a number, was just convert it to a string and convert the string. This means that on the server side you will need to unhex the incoming string and then determine whether the incoming information is numeric. You can do that easily by just adding a "#" to the front of numbers and an "A" to the front of a character string coming back. See the toHex() function.
Have fun!
After another year and a lot of thinking, I decided that the "toHex" function (and I also have a "fromHex" function) really needed to be revamped. The whole question was "How can I do this more efficiently?" I decided that a to/from hexadecimal function should not care if something is a fractional part but at the same time it should ensure that fractional parts are included in the string.
So then the question became, "How do you know you are working with a hexadecimal string?". The answer is simple. Use the standard pre-string information that is already recognized around the world.
In other words - use "0x". So now my toHex function looks to see if that is already there and if it is - it just returns the string that was sent to it. Otherwise, it converts the string, number, whatever. Here is the revised toHex function:
/////////////////////////////////////////////////////////////////////////////
// toHex(). Convert an ASCII string to hexadecimal.
/////////////////////////////////////////////////////////////////////////////
function toHex(s)
{
if (s.substr(0,2).toLowerCase() == "0x") {
return s;
}
var l = "0123456789ABCDEF";
var o = "";
if (typeof s != "string") {
s = s.toString();
}
for (var i=0; i<s.length; i++) {
var c = s.charCodeAt(i);
o = o + l.substr((c>>4),1) + l.substr((c & 0x0f),1);
}
return "0x" + o;
}
This is a very fast function that takes into account single digits, floating point numbers, and even checks to see if the person is sending a hex value over to be hexed again. It only uses four function calls and only two of those are in the loop. To un-hex the values you use:
/////////////////////////////////////////////////////////////////////////////
// fromHex(). Convert a hex string to ASCII text.
/////////////////////////////////////////////////////////////////////////////
function fromHex(s)
{
var start = 0;
var o = "";
if (s.substr(0,2).toLowerCase() == "0x") {
start = 2;
}
if (typeof s != "string") {
s = s.toString();
}
for (var i=start; i<s.length; i+=2) {
var c = s.substr(i, 2);
o = o + String.fromCharCode(parseInt(c, 16));
}
return o;
}
Like the toHex() function, the fromHex() function first looks for the "0x" and translates the incoming information into a string if it isn't already a string. I don't know how it wouldn't be a string - but just in case - I check. The function then goes through, grabbing two characters at a time and translating them into ASCII characters. If you want it to translate Unicode, you will need to change the loop to go four (4) characters at a time. But then you also need to ensure that the string is NOT divisible by four; if it is, then it is a standard hexadecimal string. (Remember the string has "0x" on the front of it.)
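A quick round-trip check, assuming the two functions exactly as written above:
var hexed = toHex(-3.14159265);
console.log(hexed);          // "0x2D332E3134313539323635"
console.log(fromHex(hexed)); // "-3.14159265"
console.log(toHex(hexed));   // Already "0x"-prefixed, so returned unchanged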
A simple test script to show that -3.14159265, when converted to a string, is still -3.14159265.
<?php
echo <<<EOD
<html>
<head><title>Test</title>
<script>
var a = -3.14159265;
alert( "A = " + a );
var b = a.toString();
alert( "B = " + b );
</script>
</head>
<body>
</body>
</html>
EOD;
?>
Because of how JavaScript handles the toString() function, all of the problems that were causing trouble before can be eliminated. Now all strings and numbers can be converted easily. Further, such things as objects will cause an error to be generated by JavaScript itself. I believe this is about as good as it gets. The only improvement left would be for W3C to include toHex() and fromHex() functions in JavaScript itself.
Without the loop:
function decimalToHex(d) {
var hex = Number(d).toString(16);
hex = "000000".substr(0, 6 - hex.length) + hex;
return hex;
}
// Or "#000000".substr(0, 7 - hex.length) + hex;
// Or whatever
// *Thanks to MSDN
Also, isn't it better not to use loop tests that have to be re-evaluated on every iteration?
For example, instead of:
for (var i = 0; i < hex.length; i++){}
have
for (var i = 0, j = hex.length; i < j; i++){}
Combining some of these good ideas for an RGB-value-to-hexadecimal function (add the # elsewhere for HTML/CSS):
function rgb2hex(r,g,b) {
if (g !== undefined)
return Number(0x1000000 + r*0x10000 + g*0x100 + b).toString(16).substring(1);
else
return Number(0x1000000 + r[0]*0x10000 + r[1]*0x100 + r[2]).toString(16).substring(1);
}
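For example, both accepted call styles produce the same string:
rgb2hex(255, 99, 71);   // "ff6347"
rgb2hex([255, 99, 71]); // "ff6347" - the same value from an array argument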
Constrained/padded to a set number of characters:
function decimalToHex(decimal, chars) {
return (decimal + Math.pow(16, chars)).toString(16).slice(-chars).toUpperCase();
}
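A few sample calls, showing the zero-padding and the silent truncation when the value does not fit the requested width:
decimalToHex(255, 2);  // "FF"
decimalToHex(255, 6);  // "0000FF"
decimalToHex(4096, 2); // "00" - too large for two characters, so it is truncated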
For anyone interested, here's a JSFiddle comparing most of the answers given to this question.
And here's the method I ended up going with:
function decToHex(dec) {
return (dec + Math.pow(16, 6)).toString(16).substr(-6)
}
Also, bear in mind that if you're looking to convert from decimal to hex for use in CSS as a color data type, you might instead prefer to extract the RGB values from the decimal and use rgb().
For example (JSFiddle):
let c = 4210330 // your color in decimal format
let rgb = [(c & 0xff0000) >> 16, (c & 0x00ff00) >> 8, (c & 0x0000ff)]
// Vanilla JS:
document.getElementById('some-element').style.color = 'rgb(' + rgb + ')'
// jQuery:
$('#some-element').css('color', 'rgb(' + rgb + ')')
This sets #some-element's CSS color property to rgb(64, 62, 154).
var number = 3200;
var hexString = number.toString(16);
The 16 is the radix, and there are 16 possible digit values in hexadecimal :-)
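The same radix argument works for other bases as well:
var number = 3200;
number.toString(16); // "c80" - hexadecimal
number.toString(8);  // "6200" - octal
number.toString(2);  // "110010000000" - binary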
// Note: only handles values in the range 0..65535; anything larger falls through to "0000"
function dec2hex(i)
{
var result = "0000";
if (i >= 0 && i <= 15) { result = "000" + i.toString(16); }
else if (i >= 16 && i <= 255) { result = "00" + i.toString(16); }
else if (i >= 256 && i <= 4095) { result = "0" + i.toString(16); }
else if (i >= 4096 && i <= 65535) { result = i.toString(16); }
return result
}
If you want to convert a number to a hexadecimal representation of an RGBA color value, I've found this to be the most useful combination of several tips from here:
function toHexString(n) {
if(n < 0) {
n = 0xFFFFFFFF + n + 1;
}
return "0x" + ("00000000" + n.toString(16).toUpperCase()).substr(-8);
}
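For instance:
toHexString(0x12345678); // "0x12345678"
toHexString(255);        // "0x000000FF"
toHexString(-1);         // "0xFFFFFFFF" - negatives wrap to their 32-bit form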
AFAIK comment 57807 is wrong and should be something like:
var hex = Number(d).toString(16);
instead of
var hex = parseInt(d, 16);
function decimalToHex(d, padding) {
var hex = Number(d).toString(16);
padding = (typeof padding === "undefined" || padding === null) ? 2 : padding;
while (hex.length < padding) {
hex = "0" + hex;
}
return hex;
}
And if the number is negative?
Here is my version.
function hexdec(hex_string) {
    // Make sure the "0X"/"0x" prefix is present
    if (hex_string.charAt(1) != 'X' && hex_string.charAt(1) != 'x') {
        hex_string = '0X' + hex_string;
    }
    // First hex digit below 8 means positive; otherwise treat it as a signed 32-bit value
    hex_string = (hex_string.charAt(2) < 8) ? hex_string - 0x00000000
                                            : hex_string - 0xFFFFFFFF - 1;
    return parseInt(hex_string, 10);
}
As the accepted answer states, the easiest way to convert from decimal to hexadecimal is var hex = dec.toString(16). However, you may prefer to add a number conversion first, because calling .toString(16) on a string like "12" just returns "12" rather than converting it.
// Avoids a hard-to-track-down bug by returning `c` instead of `12`
(+"12").toString(16);
To reverse the process you may also use the solution below, as it is even shorter.
var dec = +("0x" + hex);
It seems to be slower in Google Chrome and Firefox, but is significantly faster in Opera. See http://jsperf.com/hex-to-dec.
I'm doing conversion to hex string in a pretty large loop, so I tried several techniques in order to find the fastest one. My requirements were to have a fixed-length string as a result, and encode negative values properly (-1 => ff..f).
Simple .toString(16) didn't work for me since I needed negative values to be properly encoded. The following code is the quickest I've tested so far on 1-2 byte values (note that symbols defines the number of output symbols you want to get, that is for 4-byte integer it should be equal to 8):
var hex = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'];
function getHexRepresentation(num, symbols) {
var result = '';
while (symbols--) {
result = hex[num & 0xF] + result;
num >>= 4;
}
return result;
}
It performs faster than .toString(16) on 1-2 byte numbers and slower on larger numbers (when symbols >= 6), but still should outperform methods that encode negative values properly.
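For example:
getHexRepresentation(255, 2);  // "ff"
getHexRepresentation(4660, 4); // "1234"
getHexRepresentation(-1, 4);   // "ffff" - negative values are encoded properly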
Converting hex color numbers to hex color strings:
A simple solution with toString and ES6 padStart for converting hex color numbers to hex color strings.
const string = `#${color.toString(16).padStart(6, '0')}`;
For example:
0x000000 will become #000000
0xFFFFFF will become #FFFFFF
I wasn't able to find a brutally clean/simple decimal to hexadecimal conversion that didn't involve a mess of functions and arrays ... so I had to make this for myself.
function DecToHex(decimal) { // Data (decimal)
    var length = -1;         // Base string length
    var string = '';         // Source 'string'
    var characters = [ '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F' ]; // Character array
    do { // Grab each nibble in reverse order because JavaScript has no unsigned left shift
        string += characters[decimal & 0xF]; // Mask the low nibble, get that character
        ++length;                            // Increment to length of string
    } while (decimal >>>= 4);                // For the next character shift right 4 bits, or break on 0
    decimal += 'x';                          // decimal is now 0; appending 'x' gives the hex prefix string '0x'
    do
        decimal += string[length];
    while (length--);                        // Flip the string forwards, after the prefixed '0x'
    return (decimal);                        // return (hexadecimal);
}
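A couple of sample calls:
DecToHex(3678); // "0xE5E"
DecToHex(0);    // "0x0"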
/* Original: */
D = 3678; // Data (decimal)
C = 0xF; // Check
A = D; // Accumulate
B = -1; // Base string length
S = ''; // Source 'string'
H = '0x'; // Destination 'string'
do {
++B;
A &= C;
switch(A) {
case 0xA: A='A'
break;
case 0xB: A='B'
break;
case 0xC: A='C'
break;
case 0xD: A='D'
break;
case 0xE: A='E'
break;
case 0xF: A='F'
break;
}
S += A;
D >>>= 0x04;
A = D;
} while(D)
do
H += S[B];
while (B--)
S = B = A = C = D; // Zero out variables
alert(H); // H: holds hexadecimal equivalent
You can do something like this in ECMAScript 6:
const toHex = num => (num).toString(16).toUpperCase();
If you are looking to convert large integers, i.e. numbers greater than Number.MAX_SAFE_INTEGER (9007199254740991), then you can use the following code:
const hugeNumber = "9007199254740991873839" // Make sure its in String
const hexOfHugeNumber = BigInt(hugeNumber).toString(16);
console.log(hexOfHugeNumber)
To sum it all up:
function toHex(i, pad) {
if (typeof(pad) === 'undefined' || pad === null) {
pad = 2;
}
var strToParse = i.toString(16);
while (strToParse.length < pad) {
strToParse = "0" + strToParse;
}
var finalVal = parseInt(strToParse, 16);
if ( finalVal < 0 ) {
finalVal = 0xFFFFFFFF + finalVal + 1;
}
return finalVal;
}
However, if you don't need to convert it back to an integer at the end (i.e. for colors), then just making sure the values aren't negative should suffice.
I haven't found a clear answer that uses two's complement (negative numbers included) without checking whether the value is negative or positive. For that, I show my solution for one byte:
((0xFF + number +1) & 0x0FF).toString(16);
You can use this expression for any number of bytes; you only need to add F's in the respective places. For example, for two bytes:
((0xFFFF + number +1) & 0x0FFFF).toString(16);
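For example, for one and two bytes:
((0xFF + (-1) + 1) & 0x0FF).toString(16);     // "ff" - one byte
((0xFF + 127 + 1) & 0x0FF).toString(16);      // "7f"
((0xFFFF + (-1) + 1) & 0x0FFFF).toString(16); // "ffff" - two bytes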
If you want to cast an array of integers to a hexadecimal string:
var s = "";
for(var i = 0; i < arrayNumber.length; ++i) {
s += ((0xFF + arrayNumber[i] +1) & 0x0FF).toString(16);
}
In case you're looking to convert to a 'full' JavaScript or CSS representation, you can use something like:
numToHex = function(num) {
var r=((0xff0000&num)>>16).toString(16),
g=((0x00ff00&num)>>8).toString(16),
b=(0x0000ff&num).toString(16);
if (r.length==1) { r = '0'+r; }
if (g.length==1) { g = '0'+g; }
if (b.length==1) { b = '0'+b; }
return '0x'+r+g+b; // ('#' instead of '0x' for CSS)
};
var dec = 5974678;
console.log( numToHex(dec) ); // 0x5b2a96
This is based on Prestaul and Tod's solutions. However, this is a generalisation that accounts for a varying variable size (e.g. parsing a signed value from a microcontroller serial log).
function decimalToPaddedHexString(number, bitsize)
{
let byteCount = Math.ceil(bitsize/8);
let maxBinValue = Math.pow(2, bitsize)-1;
/* In node.js this function fails for bitsize above 32bits */
if (bitsize > 32)
throw "number above maximum value";
/* Convert negative numbers to unsigned form */
if (number < 0)
number = maxBinValue + number + 1;
return "0x"+(number >>> 0).toString(16).toUpperCase().padStart(byteCount*2, '0');
}
Test script:
for (let n = 0 ; n < 64 ; n++ ) {
let s=decimalToPaddedHexString(-1, n);
console.log(`decimalToPaddedHexString(-1,${(n+"").padStart(2)}) = ${s.padStart(10)} = ${("0b"+parseInt(s).toString(2)).padStart(34)}`);
}
Test results:
decimalToPaddedHexString(-1, 0) = 0x0 = 0b0
decimalToPaddedHexString(-1, 1) = 0x01 = 0b1
decimalToPaddedHexString(-1, 2) = 0x03 = 0b11
decimalToPaddedHexString(-1, 3) = 0x07 = 0b111
decimalToPaddedHexString(-1, 4) = 0x0F = 0b1111
decimalToPaddedHexString(-1, 5) = 0x1F = 0b11111
decimalToPaddedHexString(-1, 6) = 0x3F = 0b111111
decimalToPaddedHexString(-1, 7) = 0x7F = 0b1111111
decimalToPaddedHexString(-1, 8) = 0xFF = 0b11111111
decimalToPaddedHexString(-1, 9) = 0x01FF = 0b111111111
decimalToPaddedHexString(-1,10) = 0x03FF = 0b1111111111
decimalToPaddedHexString(-1,11) = 0x07FF = 0b11111111111
decimalToPaddedHexString(-1,12) = 0x0FFF = 0b111111111111
decimalToPaddedHexString(-1,13) = 0x1FFF = 0b1111111111111
decimalToPaddedHexString(-1,14) = 0x3FFF = 0b11111111111111
decimalToPaddedHexString(-1,15) = 0x7FFF = 0b111111111111111
decimalToPaddedHexString(-1,16) = 0xFFFF = 0b1111111111111111
decimalToPaddedHexString(-1,17) = 0x01FFFF = 0b11111111111111111
decimalToPaddedHexString(-1,18) = 0x03FFFF = 0b111111111111111111
decimalToPaddedHexString(-1,19) = 0x07FFFF = 0b1111111111111111111
decimalToPaddedHexString(-1,20) = 0x0FFFFF = 0b11111111111111111111
decimalToPaddedHexString(-1,21) = 0x1FFFFF = 0b111111111111111111111
decimalToPaddedHexString(-1,22) = 0x3FFFFF = 0b1111111111111111111111
decimalToPaddedHexString(-1,23) = 0x7FFFFF = 0b11111111111111111111111
decimalToPaddedHexString(-1,24) = 0xFFFFFF = 0b111111111111111111111111
decimalToPaddedHexString(-1,25) = 0x01FFFFFF = 0b1111111111111111111111111
decimalToPaddedHexString(-1,26) = 0x03FFFFFF = 0b11111111111111111111111111
decimalToPaddedHexString(-1,27) = 0x07FFFFFF = 0b111111111111111111111111111
decimalToPaddedHexString(-1,28) = 0x0FFFFFFF = 0b1111111111111111111111111111
decimalToPaddedHexString(-1,29) = 0x1FFFFFFF = 0b11111111111111111111111111111
decimalToPaddedHexString(-1,30) = 0x3FFFFFFF = 0b111111111111111111111111111111
decimalToPaddedHexString(-1,31) = 0x7FFFFFFF = 0b1111111111111111111111111111111
decimalToPaddedHexString(-1,32) = 0xFFFFFFFF = 0b11111111111111111111111111111111
Thrown: 'number above maximum value'
Note: JavaScript's bitwise operators (including the >>> used above) only operate on 32-bit integers, which is why the conversion fails above a 32-bit bitsize.
rgb(255, 255, 255) // returns FFFFFF
rgb(255, 255, 300) // returns FFFFFF
rgb(0,0,0) // returns 000000
rgb(148, 0, 211) // returns 9400D3
function rgb(...values){
return values.reduce((acc, cur) => {
let val = cur >= 255 ? 'ff' : cur <= 0 ? '00' : Number(cur).toString(16);
return acc + (val.length === 1 ? '0'+val : val);
}, '').toUpperCase();
}
Arbitrary precision
This solution takes a decimal string as input and returns a hex string. Decimal fractions are supported. The algorithm:
split the number into sign (s), integer part (i) and fractional part (f), e.g. for -123.75 we have s=true, i=123, f=75
integer part to hex:
if i='0', stop
get the modulo: m=i%16 (in arbitrary precision)
convert m to a hex digit and put it in the result string
for the next step calculate the integer part i=i/16 (in arbitrary precision)
fractional part:
count the fractional digits n
multiply k=f*16 (in arbitrary precision)
split k into a right part with n digits, which becomes the new f, and a left part with the rest of the digits, which becomes d
convert d to hex and add it to the result
finish when the number of result fractional digits is enough
// #param decStr - string with a decimal number (sign and fraction allowed)
// #param fracDigits - number of fractional hex digits to produce
function dec2HexArbitrary(decStr, fracDigits=0) {
// Helper: divide arbitrary precision number by js number
// #param decStr - string with non-negative integer
// #param divisor - positive integer
function arbDivision(decStr, divisor)
{
// algorithm https://www.geeksforgeeks.org/divide-large-number-represented-string/
let ans='';
let idx = 0;
let temp = +decStr[idx];
while (temp < divisor) temp = temp * 10 + +decStr[++idx];
while (decStr.length > idx) {
ans += (temp / divisor)|0 ;
temp = (temp % divisor) * 10 + +decStr[++idx];
}
if (ans.length == 0) return "0";
return ans;
}
// Helper: calc module of arbitrary precision number
// #param decStr - string with non-negative integer
// #param mod - positive integer
function arbMod(decStr, mod) {
// algorithm https://www.geeksforgeeks.org/how-to-compute-mod-of-a-big-number/
let res = 0;
for (let i = 0; i < decStr.length; i++)
res = (res * 10 + +decStr[i]) % mod;
return res;
}
// Helper: multiply arbitrary precision integer by js number
// #param decStr - string with non-negative integer
// #param mult - positive integer
function arbMultiply(decStr, mult) {
let r='';
let m=0;
for (let i = decStr.length-1; i >=0 ; i--) {
let n = m+mult*(+decStr[i]);
r= (i ? n%10 : n) + r
m= n/10|0;
}
return r;
}
// dec2hex algorithm starts here
let h= '0123456789abcdef'; // hex 'alphabet'
let m= decStr.match(/-?(.*?)\.(.*)?/) || decStr.match(/-?(.*)/); // separate sign, integer, fractional
let i= m[1].replace(/^0+/,'').replace(/^$/,'0'); // integer part (without sign and leading zeros)
let f= (m[2]||'0').replace(/0+$/,'').replace(/^$/,'0'); // fractional part (without last zeros)
let s= decStr[0]=='-'; // sign
let r=''; // result
if(i=='0') r='0';
while(i!='0') { // integer part
r=h[arbMod(i,16)]+r;
i=arbDivision(i,16);
}
if(fracDigits) r+=".";
let n = f.length;
for(let j=0; j<fracDigits; j++) { // frac part
let k= arbMultiply(f,16);
f = k.slice(-n);
let d= k.slice(0,k.length-n);
r+= d.length ? h[+d] : '0';
}
return (s?'-':'')+r;
}
// -----------
// TESTS
// -----------
let tests = [
["0",2],
["000",2],
["123",0],
["-123",0],
["00.000",2],
["255.75",5],
["-255.75",5],
["127.999",32],
];
console.log('Input Standard Arbitrary');
tests.forEach(t=> {
let nonArb = (+t[0]).toString(16).padEnd(17,' ');
let arb = dec2HexArbitrary(t[0],t[1]);
console.log(t[0].padEnd(10,' '), nonArb, arb);
});
// Long Example (40 digits after dot)
let example = "123456789012345678901234567890.09876543210987654321"
console.log(`\nLong Example:`);
console.log('dec:',example);
console.log('hex: ',dec2HexArbitrary(example,40));
The problem is basically how many padding zeros to expect.
If you expect the strings 01 and 11 from the numbers 1 and 17, it's better to use Buffer as a bridge: the number is turned into bytes, and the hex is then just an output format of those bytes. The byte organization is well controlled by Buffer functions like writeUInt32BE, writeInt16LE, etc.
import { Buffer } from 'buffer';
function toHex(n) { // 4byte
const buff = Buffer.alloc(4);
buff.writeInt32BE(n);
return buff.toString('hex');
}
> toHex(1)
'00000001'
> toHex(17)
'00000011'
> toHex(-1)
'ffffffff'
> toHex(-1212)
'fffffb44'
> toHex(1212)
'000004bc'
Here's my solution:
hex = function(number) {
return '0x' + Math.abs(number).toString(16);
}
The question says: "How to convert decimal to hexadecimal in JavaScript". While the question does not specify that the hexadecimal string should begin with a 0x prefix, anybody who writes code should know that 0x is added to hexadecimal codes to distinguish them from programmatic identifiers and other numbers (1234 could be hexadecimal, decimal, or even octal).
Therefore, to correctly answer this question, for the purpose of script-writing, you must add the 0x prefix.
The Math.abs(N) function converts negatives to positives, and as a bonus, it doesn't look like somebody ran it through a wood-chipper.
The answer I wanted would have had a field-width specifier, so we could, for example, show 8/16/32/64-bit values the way you would see them listed in a hexadecimal editing application. That is the actual, correct answer.
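As a rough sketch of what that field-width version might look like (hexWidth and its width parameter are names I made up, not from any answer above):
function hexWidth(number, width) {
    return '0x' + Math.abs(number).toString(16).padStart(width, '0');
}
hexWidth(255, 8); // "0x000000ff" - a 32-bit style listing
hexWidth(255, 4); // "0x00ff" - a 16-bit style listing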