This is the javascript to generate a random hex color:
'#'+(Math.random()*0xFFFFFF<<0).toString(16);
could anyone talk me through it?
I understand how Math.random works (as well as the toString at the end), but I don't understand the syntax after that. Questions I have:
How can Math.random() multiplied by F output a number?
What does the <<0 mean?
What does the parameter of 16 on toString mean? (does it mean no more than 16 letters?)
Would really appreciate any help with this.
Thanks,
Raph
It looks like you picked this up on codegolf.
How can Math.random() multiplied by F output a number?
It is not multiplied by F. 0xFFFFFF is converted to 16777215, as 0xFFFFFF is just the hexadecimal way of writing 16777215.
What does the <<0 mean?
<< is a bitshift operator. <<0 shifts all bits 0 places to the left (filler: 0), which by itself does nothing; the useful side effect is that bitwise operators work on integers, so the decimal places are dropped.
What does the parameter of 16 on toString mean? (does it mean no more than 16 letters?)
The 16 is the parameter for the numeral system. (2 is binary, 8 is octal, 10 is decimal/normal, 16 is hexadecimal, etc.).
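To see how the pieces fit together, here is the same expression broken into steps, assuming for the sake of the example that Math.random() returned 0.3:
var r = Math.random();            // assume 0.3 for this walkthrough
var scaled = r * 0xFFFFFF;        // 0.3 * 16777215 = 5033164.5
var truncated = scaled << 0;      // 5033164 - the decimal places are dropped
var hex = truncated.toString(16); // "4ccccc"
console.log('#' + hex);           // "#4ccccc"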
The best way to generate a random HEX color:
It consists of two functions:
The first one picks a random hex digit,
and the second one builds an array of hex values and joins them into the color string.
// Returns one possible value of the HEX digits
function randomHex() {
  var hexNumbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 'A', 'B', 'C', 'D', 'E', 'F'];
  // pick a random item of the array
  return hexNumbers[Math.floor(Math.random() * hexNumbers.length)];
}

// Generates a random HEX color
function hexGenerator() {
  var hexValue = ['#'];
  for (var i = 0; i < 6; i += 1) {
    hexValue.push(randomHex());
  }
  return hexValue.join('');
}

// print out the random HEX color
document.write(hexGenerator());
good luck
I have a few scientists for clients and they have some problems with how toPrecision is rounding in JavaScript. They want everything rounded to a MAX of 3 sig figs which works most of the time but let me give a few examples of what they want:
Lab value to rounded value
123.5 to 124
1234 to 1230
12.0 to 12.0
0.003 to 0.003
So in other words, round things with more than 3 sig figs down to 3. If something has 1 or 2 sig figs, DON'T append a zero (as that implies the lab was more accurate than they really were), but also in the case of 12.0 DON'T remove the zero (as that implies the lab was less accurate than they really were).
Using toPrecision works for all examples given except for the 12.0 to 12.0 example:
var nums = [123.5, 1234, 12.0, 0.003]
var out = nums.map(num => parseFloat(num.toPrecision(3)))
// => [124, 1230, 12, 0.003]
It rounds numbers with more than 3 sig figs to 3, but if you use a number with a .0 or .00 on the end it fails. The reason for this is that the JavaScript engine equates 1.00 to 1, and 12.0 to 12, so the problem is actually not toPrecision, but rather JavaScript itself.
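You can see this directly in the console (a small demo of my own, not from the original answer):
console.log(12.0 === 12);            // true - there is no separate 12.0
console.log((12.0).toString());      // "12" - the trailing zero is already gone
console.log((12.0).toPrecision(3));  // "12.0" as a string, but parseFloat() turns it back into 12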
To work around this, what you can do is input numbers as strings, and use toPrecision if there isn't a decimal zero, otherwise operate on the string itself:
var nums = ['123.5', '1234', '12.0', '0.003', '1.000', '1236.00'];
var out = nums.map(str => {
  if (/\.0+$/.test(str)) { // test if it ends with .0 or .00, etc.
    // use the alternative string method:
    var zeros = str.match(/[0]+$/)[0].length;        // count the number of trailing zeros
    var sigfigs = parseFloat(str).toString().length; // number of other sig figs
    var zerosNeeded = 3 - sigfigs;
    if (zerosNeeded < 0) {
      return parseFloat(parseFloat(str).toPrecision(3)).toFixed();
    } else {
      return str.substring(0, sigfigs + 1 + zerosNeeded); // +1 for the decimal point
    }
  } else {
    return parseFloat(parseFloat(str).toPrecision(3)).toString();
  }
});
// => ["124", "1230", "12.0", "0.003", "1.00", "1240"]
This works. However, since the result has to stay in string format, if you need to do further floating-point work on these values I'd recommend using a different language such as Python. Anyway, I hope this helps!
All you need to do is to write a custom parser.
See this example:
const data = [123.5, 1234, 12.0, 0.003, 100.0, 1.0];
data.forEach(n => {
  const result = customToPrecision(n, 3);
  console.log(`${n} -> ${result}`);
});
function customToPrecision(number, precision){
  let result = number.toPrecision(precision);
  // Check if the original number is a float
  if (number % 1 !== 0){
    result = result
      .replace(/0+$/, '')    // Remove trailing zeros
      .replace(/\.$/, '.0'); // Keep one zero after an incomplete decimal
  } else if (result.includes('e')) {
    // Integers wide enough to trigger exponential notation
    // (e.g. 1234 -> "1.23e+3") are converted back to plain notation.
    result = parseFloat(result).toString();
  }
  return result;
}
I'm new to JavaScript and I use Node-Red to read and write from a database.
I receive from the database an object that contains the status of 8 digital inputs.
Each input is represented as a bit.
I'm looking for a method to combine the bits into a byte.
This is the object that I receive from the database:
array[1]
  0: object
    idx: 10
    ts: "2018-11-21T06:12:45.000Z"
    in_0: 1
    in_1: 1
    in_2: 1
    in_3: 1
    in_4: 1
    in_5: 1
    in_6: 1
    in_7: 1
in_x represents the input position.
As output I would like to receive a byte that represents the combination of the single bits.
For example:
in0: 0,
in1: 1,
in2: 0,
in3: 0,
in4: 0,
in5: 1,
in6: 0,
in7: 0,
The output byte would be 00100001 in binary, which converted to decimal is 33.
Any suggestions?
Thanks in advance.
The following code works as you requested*:
var output =
  arr[0].in_0 +
  (arr[0].in_1 << 1) +
  (arr[0].in_2 << 2) +
  (arr[0].in_3 << 3) +
  (arr[0].in_4 << 4) +
  (arr[0].in_5 << 5) +
  (arr[0].in_6 << 6) +
  (arr[0].in_7 << 7);
This code assumes that each variable can only be a 1 or a 0. Anything else will result in nonsense.
I have used the left bit shift operator (<<) to obtain the power of two for each bit that is set.
You have specified that in_7 is the Most Significant Bit. If it is actually the Least Significant Bit, reverse the order of the in_x variables.
*The result is not a byte, but it does contain the number that I think you're expecting.
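If you'd rather not write out all eight terms, the same thing can be done in a loop. This is just a sketch of mine based on the answer above (the helper name bitsToByte is my own, and it assumes the same object shape as in the question):
function bitsToByte(obj) {
  var result = 0;
  for (var i = 0; i < 8; i++) {
    result += (obj['in_' + i] || 0) << i; // in_0 is the least significant bit
  }
  return result;
}

// the example from the question (bits 1 and 5 set): 2 + 32 = 34
console.log(bitsToByte({ in_0: 0, in_1: 1, in_2: 0, in_3: 0, in_4: 0, in_5: 1, in_6: 0, in_7: 0 }));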
I have a weird requirement,
My destination only supports one integer, but I want to send two integers to it and later get them back from a response.
for example,
allowed input:
{
'task': 2
}
I have a subtask kind of logic on my side, but my target is not aware of this. So, without letting the target know, can I somehow pack two integers and decode them back later?
Can this be achieved with hexadecimal?
You can combine any two numbers and get both numbers back using their product (a * b) as long as a * (a * b) + b < Number.MAX_SAFE_INTEGER
Here's a demo snippet:
(() => {
  document.addEventListener("click", handleStuff);
  // formula: combined = (a * (a * b)) + b, where c = a * b
  // as long as combined < 9007199254740991
  const combine = (a, b) => ({
    a: a,
    b: b,
    get c() { return this.a * this.b; },
    get combined() { return this.a * this.c + this.b; },
    get unraveled() {
      return [
        Math.floor(this.combined / this.c),
        this.combined % this.c ];
    }
  });
  const log = txt => document.querySelector("pre").textContent = txt;
  let numbers = combine(
    +document.querySelector("#num1").value,
    +document.querySelector("#num2").value );

  function handleStuff(evt) {
    if (evt.target.nodeName.toLowerCase() === "button") {
      if (evt.target.id === "combine") {
        numbers = combine(
          +document.querySelector("#num1").value,
          +document.querySelector("#num2").value );
        if (numbers.combined > Number.MAX_SAFE_INTEGER) {
          log(`${numbers.combined} too large, unraveled will be unreliable`);
        } else {
          log(`Combined ${numbers.a} and ${numbers.b} to ${numbers.combined}`);
        }
      } else {
        log(`${numbers.combined} unraveled to ${numbers.unraveled}`);
      }
    }
  }
})();
input[type=number] {width: 100px;}
<p>
<input type="number" id="num1"
value="12315" min="1"> first number
</p>
<p>
<input type="number" id="num2"
value="231091" min="1"> second number
</p>
<p>
<button id="combine">combine</button>
<button id="unravel">unravel</button>
</p>
<pre id="result"></pre>
Note: #RallFriedl inspired this answer
JSFiddle
Yes, you can, assuming your two integers don't contain more information than the one integer can handle.
Let's assume your tasks and sub tasks are in the range 1..255. Then you can encode
combined = (task * 256) + subtask
And decode
task = combined / 256
subtask = combined % 256
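In JavaScript the division in the decode step needs to be an integer division, so a minimal sketch of the idea looks like this (the example values are my own):
var task = 2, subtask = 7;                    // example values, both in the range 1..255

// encode
var combined = (task * 256) + subtask;        // 519

// decode
var decodedTask = Math.floor(combined / 256); // 2
var decodedSubtask = combined % 256;          // 7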
First of all, you don't have to convert an integer to hexadecimal to do this. An integer is a value, and decimal, hexadecimal or binary is a representation to visualize that value. So all you need is integer arithmetic to achieve your goal.
According to this answer the maximum allowed integer number in javascript would be 9007199254740991. If you write this down in binary you'll get 53 ones, which means there are 53 bits available to store within an integer. Now you can split up this into two or more smaller ranges as you need.
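You can verify the 53-bit claim directly (a quick check of my own):
console.log(Number.MAX_SAFE_INTEGER);                    // 9007199254740991
console.log(Number.MAX_SAFE_INTEGER.toString(2));        // a string of 53 ones
console.log(Number.MAX_SAFE_INTEGER.toString(2).length); // 53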
For example, let's say you need to store three numbers: the first is always lower than 4,294,967,296 (32 bits), the second always lower than 65,536 (16 bits) and the third always lower than 32 (5 bits). If you sum up the bit widths of these three values, you'll get 53 bits, which means it fits perfectly.
To pack all these values into one, all you need is to move each of them to the right bit position within the integer. In my example I'd like to put the 32-bit number at the lowest position, then the 16-bit number, and the 5-bit number at the highest position:
var max32bitValue = 3832905829; // binary: 1110 0100 0111 0101 1000 0000 0110 0101
var max16bitValue = 47313;      // binary: 1011 1000 1101 0001
var max5bitValue = 17;          // binary: 1 0001

// JavaScript's bitwise operators only work on 32-bit values, so the values are
// moved into position by multiplying with powers of two instead of using <<.
var packedValue = max32bitValue       // Position is at bit 0, so no movement needed.
  + max16bitValue * Math.pow(2, 32)   // Move it up next to the first number.
  + max5bitValue * Math.pow(2, 48);   // Move it up next to the second number (32 + 16).
This single integer value can now be stored, because it is a perfectly valid JavaScript integer value, but for us it holds three values.
To get all three values out of the packed value, we have to pick the correct bits out of it. This involves two steps: first remove all unneeded bits on the lower side (dividing, the equivalent of a right shift), then remove all unneeded bits on the higher side (taking the remainder, the equivalent of masking):
var max32bitValueRead = packedValue % Math.pow(2, 32); // No bits on the lower side, just cut off the higher ones.
var max16bitValueRead = Math.floor(packedValue / Math.pow(2, 32)) % Math.pow(2, 16); // Remove the first 32 bits and cut everything above 16 bits.
var max5bitValueRead = Math.floor(packedValue / Math.pow(2, 48)); // Remove the first 48 bits (32 + 16). No higher bits there, so no cut needed.
I hope this helps you understand how to put multiple integer values into one, as long as the ranges of these values don't exceed the available bits. Depending on your needs you could pack two 26-bit values, or split the range differently, like one 32-bit value and one 21-bit value, or a 48-bit value and a 5-bit value. Just be sure what the maximum value for each one could be and set the widths accordingly (maybe add one to three bits, just to be safe).
I wouldn't suggest using hexadecimal if you cannot send two sequential numbers. Try converting to an ASCII character and then back. So if you wanted to send:
{ 'task': 21 }
You could set the 21 to a character like:
var a = 55;
var b = String.fromCharCode(a);
var send2 = { 'task': b };
And to convert it back:
var res = { 'task': b };
var original = res.task.charCodeAt();
Let's say I have two items in an array, e.g:
["a", "b"]
Now let's say I have a function called random that chooses a random item from this array, e.g:
function random() {
// do awesome random stuff here...
return random_choice;
}
How can I get the random function to return "a" 80% of the time and "b" 20% of the time?
I'm not really sure what this is called, but for example if I ran console.log(random()); 10 times the result should look a little something like this:
>>> "a"
>>> "a"
>>> "a"
>>> "a"
>>> "a"
>>> "a"
>>> "a"
>>> "a"
>>> "b"
>>> "b"
"a" get's returned 8/10 times and "b" gets returned 2/10 times.
NOTE: the "results" above are just an example, I understand that they won't always be that perfect and they don't have to be.
Quickest answer would be:
var result = Math.random() >= 0.2 ? "a" : "b";
Generalized solution for any number of values
function biasedRandomSelection(values, probabilities) {
  // generate random number (zero to one)
  var rand = Math.random();
  // cumulative probability, starting at 0
  var cumulativeProb = 0;
  // loop through the raw `probabilities` array
  for (var i = 0; i < probabilities.length; i++) {
    // increment cumulative probability
    cumulativeProb += probabilities[i];
    // test for `rand` being less than the cumulative probability;
    // when true, return the corresponding value.
    if (rand < cumulativeProb) return values[i];
  }
}
The knack here is to test against a rolling "cumulative probability", derived from the raw probabilities.
Sample calls:
biasedRandomSelection(['a', 'b'], [0.8, 0.2]); // as per the question
biasedRandomSelection(['a', 'b'], [0.2, 0.8]); // reversed probabilities
biasedRandomSelection(['a', 'b', 'c', 'd', 'e'], [0.4, 0.1, 0.2, 0.2, 0.1]); // larger range of values/probabilities
Demo
As written, biasedRandomSelection() performs no range checking.
A safer version (sketched below) would check that:
values and probabilities are the same length
the sum of the probabilities is 1.
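A possible shape for those checks, reusing the biasedRandomSelection() function above (this wrapper and its name are my own sketch, not part of the original answer):
function safeBiasedRandomSelection(values, probabilities) {
  if (values.length !== probabilities.length) {
    throw new Error("values and probabilities must have the same length");
  }
  var total = probabilities.reduce(function (sum, p) { return sum + p; }, 0);
  if (Math.abs(total - 1) > 1e-9) {
    throw new Error("probabilities must sum to 1");
  }
  return biasedRandomSelection(values, probabilities);
}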
Math.random() returns a number in [0; 1). Just use p < 0.2 / p < 0.8 to get a biased result instead of the unbiased p < 0.5.
If you want the first N outcomes to be deterministic, then you can use a simple counter i++ < N.
A bit extended:
var array="a".repeat(8)+"b".repeat(2);
var random=()=>array[Math.floor(Math.random()*array.length)];
alert(random());
This also works for more than two results and all different kinds of probability.
Note that array is not really an array but rather a lookup string.
http://jsbin.com/besapetidi/edit?console
Is it possible to get the integers that, being powers of two, add up to a given value?
Example:
129 resolves to [1, 128]
77 resolves to [1, 4, 8, 64]
I already thought about using Math.log and also doing a foreach with a bitwise comparison. Is there any more beautiful solution?
The easiest way is to use a single bit value, starting with 1, and shift that bit 'left' until its value is greater than the value to check, comparing it bitwise with the value at each step. The bits that are set can be stored in an array.
function GetBits(value) {
  var b = 1;
  var res = [];
  while (b <= value) {
    if (b & value) res.push(b);
    b <<= 1;
  }
  return res;
}
console.log(GetBits(129));
console.log(GetBits(77));
console.log(GetBits(255));
Since shifting the bit can be seen as a power of 2, you can push the current bit value directly into the result array.
Example
You can adapt solutions from other languages to javascript. In this SO question you'll find some ways of solving the problem using Java (you can choose the one you find more elegant).
decomposing a value into powers of two
I adapted one of those answers to JavaScript and came up with this code:
var powers = [], power = 0, n = 129; // Gives [1,128] as output.
while (n != 0) {
  if ((n & 1) != 0) {
    powers.push(1 << power);
  }
  ++power;
  n >>>= 1;
}
console.log(powers);
Fiddle
Find the largest power of two contained in the number.
Subtract it from the number and add it to the list.
Decrement the exponent and check if the new power of two is less than or equal to the number.
If it is, subtract it from the number and add it to the list.
Otherwise go to step 3.
Exit when your number reaches 0. (A sketch of these steps follows below.)
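A minimal sketch of those steps (my own implementation of the description above; the function name powersOfTwo is assumed):
function powersOfTwo(n) {
  var result = [];
  // find the largest power of two contained in n
  var power = 1;
  while (power * 2 <= n) power *= 2;
  // repeatedly subtract while stepping the power down
  while (n > 0) {
    if (power <= n) {
      n -= power;
      result.push(power);
    }
    power /= 2;
  }
  return result.reverse(); // smallest first, e.g. [1, 4, 8, 64] for 77
}

console.log(powersOfTwo(129)); // [1, 128]
console.log(powersOfTwo(77));  // [1, 4, 8, 64]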
I am thinking of creating a list of all power-of-2 numbers <= your number, then using an addition-subtraction algorithm to find the correct group of numbers.
For example number 77:
the group of factors is {1, 2, 4, 8, 16, 32, 64} (64 is the greatest power of 2 less than or equal to 77)
The algorithm then continuously subtracts the greatest number from the group that is less than or equal to your remaining number, until you get zero.
77-64 = 13 ==> [64]
13-8 = 7 ==> [8]
7-4 = 3 ==> [4]
3-2 = 1 ==> [2]
1-1 = 0 ==> [1]
Hope you understand my algorithm, pardon my bad English.
function getBits(val, factor) {
  factor = factor || 1;
  if (val) {
    return (val % 2 ? [factor] : []).concat(getBits(val >> 1, factor * 2));
  }
  return [];
}
alert(getBits(77));