Can you get powers of 10 faster than O(log n)? - javascript

I know that exponentiation is O(log n) or worse in most cases, but I'm getting lost trying to understand how the numbers themselves are represented. Take JavaScript, for example, because it has several native number formats:
100000 === 1E5 && 100000 === 0303240
>>> true
Internally, don't they all end up being stored in memory and manipulated as binary values? If so, is the machine able to store the decimal and scientific-notation representations as fast as it does the octal?
And thus, would you expect +("1E" + n) to be faster than Math.pow(10, n)?
Mostly this question is about how 1E(n) works, but in trying to think about the answer myself I became more curious about how the number is parsed and stored in the first place. I would appreciate any explanation you can offer.

I don't think string manipulation could be faster: concatenation alone creates a new object (memory allocation, more work for the GC), while Math.pow usually comes down to a single machine instruction.
Moreover, some modern JS VMs do hotspot optimisation, producing machine code from JavaScript. There is a chance of that happening for Math.pow, but it is nearly impossible for the string magic.
If you are 100% sure that Math.pow is slow in your application (I just cannot believe it), you could use an array lookup, which should be as fast as possible: [1, 10, 100, 1000, 10000, ...][n]. The array would be relatively small and the lookup is O(1).
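A minimal sketch of that lookup idea (names are illustrative; the bound of 308 reflects the largest finite power of ten a double can represent):
const POW10 = [];
for (let i = 0; i <= 308; i++) POW10[i] = Math.pow(10, i);

function pow10(n) {
    // table hit for the common case, Math.pow as the fallback
    return n >= 0 && n <= 308 ? POW10[n] : Math.pow(10, n);
}
console.log(pow10(5)); // 100000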

but I'm getting lost trying to understand how the numbers themselves are represented. Take JavaScript, for example, because it has several native number formats:
Internally, don't they all end up being stored in memory and manipulated as binary values?
Yep. In JavaScript there is only one number type, a 64-bit IEEE 754 float, therefore:
1 === 1.0
http://www.hunlock.com/blogs/The_Complete_Javascript_Number_Reference
If so, is the machine able to store the decimal and scientific-notation representations as fast as it does the octal?
Yes again, because there is only one type. (Maybe there is a minute difference, but it should be negligible.)
However, for this specific case there is a limit on the numbers that can be represented (Number.MAX_VALUE is roughly 1.8e308), so the runtime is O(308) = O(1); all larger numbers are represented as +/-Infinity.
And thus, would you expect +("1E" + n) to be faster than Math.pow(10, n)?
Not quite! A literal like 1E100 is faster than Math.pow(10, n).
However, +("1E" + n) is slower than Math.pow(10, n);
Not because of string and memory allocation, but because the JS interpreter has to parse the string and convert it into a number, and that is slower than the native Math.pow(num,num) operation.
jsperf test

I ran a jsperf on the options.
var sum = 0;
for (var i = 0; i < 20; ++i) {
    sum += +("1E" + i);
}
is slow because of string concatenation.
var sum = 0;
for (var i = 0; i < 20; ++i) {
    sum += Math.pow(10, i);
}
is therefore faster, since it operates on numbers only.
var sum = 0;
sum += 1e0;
sum += 1e1;
...
sum += 1e19;
is fastest, but likely only because 1eX with a constant exponent is a precomputed value.
To get the best performance, you might want to precompute the answers yourself.

Math.pow doesn't distinguish between numbers, so it is just as slow for every number, provided the interpreter doesn't optimize for integers. It likely allocates just a few floats to run. I am ignoring parsing time.
"1E" + n will allocate 2-3 string objects, which can carry quite a substantial memory overhead, destroy the intermediates, and reparse the result as a number. That is unlikely to be faster than pow. I am again ignoring the parse time.

Related

Why is array.includes an order of magnitude faster than set.has in javascript?

Well, having grown up in C++, I am always conscious of which algorithm fits what. So when I noticed the application starting to behave sluggishly on mobile phones, I immediately started looking at the data structures and how they are represented.
I am noticing a very strange effect: Array.includes is an order of magnitude faster than Set.has, even though Set.has has much more potential to be optimized for lookup: that's the whole idea of using a set.
My initialize code is (this code is outside the timing for the tests):
function shuffle(a) {
    for (let i = a.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [a[i], a[j]] = [a[j], a[i]];
    }
}

const arr = [];
for (let i = 0; i < 1000; i += 1) {
    arr.push(i);
}
shuffle(arr);
const prebuildset = new Set(arr);
And the tests are:
(new Set(arr)).has(-1); //20.0 kOps/s
arr.includes(-1); //632 kOps/s
(new Set(arr)).has(0); //20.0 kOps/s
arr.includes(0); //720 kOps/s
prebuildset.has(-1); //76.7 kOps/s
prebuildset.has(0); //107 kOps/s
Tested with chrome 73.0.3683.103 on Ubuntu 18.04 using https://jsperf.com/set-array-has-test/1
I can kind of expect the versions that create a set on the fly to be slower than directly testing an array for inclusion. (Though I'd wonder why Chrome doesn't JIT-optimize the array away; I've also tested with a literal array, and a literal vs a variable doesn't matter at all for speed.)
However even prebuild sets are an order of magnitude slower than an array-inclusion test: even for the most negative case (entry not inside the array).
Why is this? What kind of black magic is happening?
EDIT: I've updated the tests to shuffle the input so as not to skew too much toward an early exit for array.includes(). While no longer 10 times as slow, it is still many times slower, which is very relevant and outside what I expected.
I'll start by stating that I'm not an expert on JavaScript engine implementations and performance optimization, but in general you should not trust this kind of test to give you a reliable assessment of performance.
Time complexity of the underlying algorithm only becomes a meaningful factor over very (very) large numbers, and as a rule of thumb, 1000 is certainly not such a large number, especially for a simple array of integer values.
Over a small amount of millisecond-timed operations, you are going to have many other things happening in the engine at a similar time scale that will throw your measurements off wildly. Optimizations, unexpected overheads, and so on.
As an example, I edited your tests by simply increasing the size of the array to 100,000. The results on my poor old laptop look like this:
arr.includes(-1); //3,323 Ops/s
arr.includes(0); //6,132 Ops/s
prebuildset.has(-1); //41,923,084 Ops/s
prebuildset.has(0); //39,613,278 Ops/s
Which is, clearly, extremely different from your results. My point is, don't try to measure microperformance for small tasks. Use the data structure that makes the most sense for your project, keep your code clean and reasonable, and if you need to scale, prepare accordingly.
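If you want to reproduce this without jsperf, a minimal timing harness along these lines (assuming an environment with performance.now(), i.e. any modern browser or recent Node) shows the crossover at larger sizes:
const n = 100000;
const bigArr = [];
for (let i = 0; i < n; i += 1) bigArr.push(i);
const bigSet = new Set(bigArr);

function time(label, fn, iterations = 1000) {
    let hits = 0;
    const start = performance.now();
    for (let i = 0; i < iterations; i++) {
        if (fn()) hits += 1; // keep the result live so the call isn't optimized away
    }
    console.log(label, (performance.now() - start).toFixed(2), 'ms');
}

time('bigArr.includes(-1)', () => bigArr.includes(-1)); // linear scan per call
time('bigSet.has(-1)', () => bigSet.has(-1));           // hash lookup per call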

Efficient computation of n choose k in Node.js

I have some performance sensitive code on a Node.js server that needs to count combinations. From this SO answer, I used this simple recursive function for computing n choose k:
function choose(n, k) {
    if (k === 0) return 1;
    return (n * choose(n - 1, k - 1)) / k;
}
Then since we all know iteration is almost always faster than recursion, I wrote this function based on the multiplicative formula:
function choosei(n, k) {
    var result = 1;
    for (var i = 1; i <= k; i++) {
        result *= (n + 1 - i) / i;
    }
    return result;
}
I ran a few benchmarks on my machine. Here are the results of just one of them:
Recursive x 178,836 ops/sec ±7.03% (60 runs sampled)
Iterative x 550,284 ops/sec ±5.10% (51 runs sampled)
Fastest is Iterative
The results consistently showed that the iterative method is indeed about 3 to 4 times faster than the recursive method in Node.js (at least on my machine).
This is probably fast enough for my needs, but is there any way to make it faster? My code has to call this function very frequently, sometimes with fairly large values of n and k, so the faster the better.
EDIT
After running a few more tests with le_m's and Mike's solutions, it turns out that while both are significantly faster than the iterative method I proposed, Mike's method using Pascal's triangle appears to be slightly faster than le_m's log table method.
Recursive x 189,036 ops/sec ±8.83% (58 runs sampled)
Iterative x 538,655 ops/sec ±6.08% (51 runs sampled)
LogLUT x 14,048,513 ops/sec ±9.03% (50 runs sampled)
PascalsLUT x 26,538,429 ops/sec ±5.83% (62 runs sampled)
Fastest is PascalsLUT
The logarithmic look up method has been around 26-28 times faster than the iterative method in my tests, and the method using Pascal's triangle has been about 1.3 to 1.8 times faster than the logarithmic look up method.
Note that I followed le_m's suggestion of pre-computing the logarithms with higher precision using mathjs, then converted them back to regular JavaScript Numbers (which are always double-precision 64 bit floats).
Never compute factorials, they grow too quickly. Instead, compute the result you want. In this case, you want the binomial coefficients, which have an incredibly simple geometric construction: you can build Pascal's triangle as you need it, and do it using plain arithmetic.
Start with [1] and [1,1]. The next row has [1] at the start, [1+1] in the middle, and [1] at the end: [1,2,1]. Next row: [1] at the start, the sum of the first two terms in spot 2, the sum of the next two terms in spot three, and [1] at the end: [1,3,3,1]. Next row: [1], then 1+3=4, then 3+3=6, then 3+1=4, then [1] at the end, and so on and so on. As you can see, no factorials, logarithms, or even multiplications: just super fast addition with clean integer numbers. So simple, you can build a massive lookup table by hand.
And you should.
Never compute in code what you can compute by hand and just include as constants for immediate lookup; in this case, writing out the table for up to something around n=20 is absolutely trivial, and you can then just use that as your "starting LUT" and probably never even access the high rows.
But, if you do need them, or more, then because you can't build an infinite lookup table you compromise: you start with a pre-specified LUT, and a function that can "fill it up" to some term you need that's not in it yet:
// step 1: a basic LUT with a few steps of Pascal's triangle
const binomials = [
    [1],
    [1, 1],
    [1, 2, 1],
    [1, 3, 3, 1],
    [1, 4, 6, 4, 1],
    [1, 5, 10, 10, 5, 1],
    [1, 6, 15, 20, 15, 6, 1],
    [1, 7, 21, 35, 35, 21, 7, 1],
    [1, 8, 28, 56, 70, 56, 28, 8, 1],
    // ...
];

// step 2: a function that builds out the LUT if it needs to.
module.exports = function binomial(n, k) {
    while (n >= binomials.length) {
        let s = binomials.length;
        let nextRow = [];
        nextRow[0] = 1;
        for (let i = 1, prev = s - 1; i < s; i++) {
            nextRow[i] = binomials[prev][i - 1] + binomials[prev][i];
        }
        nextRow[s] = 1;
        binomials.push(nextRow);
    }
    return binomials[n][k];
};
Since this is an array of ints, the memory footprint is tiny. For a lot of work involving binomials, we realistically don't even need more than two bytes per integer, making this a minuscule lookup table: we don't need more than 2 bytes until you need binomials higher than n=19, and the full lookup table up to n=19 takes up a measly 380 bytes. This is nothing compared to the rest of your program. Even if we allow for 32 bit ints, we can get up to n=35 in a mere 2380 bytes.
So the lookup is fast: O(1) for previously computed values, (n*(n+1))/2 steps if we have no LUT at all (in big-O notation that would be O(n²), but big-O notation is almost never the right complexity measure), and somewhere in between for terms we need that aren't in the LUT yet. Run some benchmarks for your application, which will tell you how big your initial LUT should be, simply hard-code that (seriously, these are constants, they are exactly the kind of values that should be hard-coded), and keep the generator around just in case.
However, do remember that you're in JavaScript land, and you are constrained by the JavaScript numerical type: integers only go up to 2^53, beyond that the integer property (every n has a distinct m=n+1 such that m-n=1) is not guaranteed. This should hardly ever be a problem, though: once we hit that limit, we're dealing with binomial coefficients that you should never even be using.
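If you want to see where that kicks in, a small hypothetical check can walk the triangle until a coefficient exceeds Number.MAX_SAFE_INTEGER (2^53 - 1):
let row = [1];
let n = 0;
while (row.every(v => v <= Number.MAX_SAFE_INTEGER)) {
    const next = [1];
    for (let i = 1; i < row.length; i++) next[i] = row[i - 1] + row[i];
    next.push(1);
    row = next;
    n += 1;
}
console.log('first row with an unsafe coefficient: n =', n);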
The following algorithm has a run-time complexity of O(1) given a linear look-up table of log-factorials with space-complexity O(n).
Limiting n and k to the range [0, 1000] makes sense since binomial(1000, 500) is already dangerously close to Number.MAX_VALUE. We would thus need a look-up table of size 1000.
On a modern JavaScript engine, a compact array of n numbers has a size of n * 8 bytes. A full look-up table would thus require 8 kilobytes of memory. If we limit our input to the range [0, 100], the table would only occupy 800 bytes.
var logf = [0, 0, 0.6931471805599453, 1.791759469228055, 3.1780538303479458, 4.787491742782046, 6.579251212010101, 8.525161361065415, 10.60460290274525, 12.801827480081469, 15.104412573075516, 17.502307845873887, 19.987214495661885, 22.552163853123425, 25.19122118273868, 27.89927138384089, 30.671860106080672, 33.50507345013689, 36.39544520803305, 39.339884187199495, 42.335616460753485, 45.38013889847691, 48.47118135183523, 51.60667556776438, 54.78472939811232, 58.00360522298052, 61.261701761002, 64.55753862700634, 67.88974313718154, 71.25703896716801, 74.65823634883016, 78.0922235533153, 81.55795945611504, 85.05446701758152, 88.58082754219768, 92.1361756036871, 95.7196945421432, 99.33061245478743, 102.96819861451381, 106.63176026064346, 110.32063971475739, 114.0342117814617, 117.77188139974507, 121.53308151543864, 125.3172711493569, 129.12393363912722, 132.95257503561632, 136.80272263732635, 140.67392364823425, 144.5657439463449, 148.47776695177302, 152.40959258449735, 156.3608363030788, 160.3311282166309, 164.32011226319517, 168.32744544842765, 172.3527971391628, 176.39584840699735, 180.45629141754378, 184.53382886144948, 188.6281734236716, 192.7390472878449, 196.86618167289, 201.00931639928152, 205.1681994826412, 209.34258675253685, 213.53224149456327, 217.73693411395422, 221.95644181913033, 226.1905483237276, 230.43904356577696, 234.70172344281826, 238.97838956183432, 243.2688490029827, 247.57291409618688, 251.8904022097232, 256.22113555000954, 260.5649409718632, 264.9216497985528, 269.2910976510198, 273.6731242856937, 278.0675734403661, 282.4742926876304, 286.893133295427, 291.3239500942703, 295.76660135076065, 300.22094864701415, 304.6868567656687, 309.1641935801469, 313.65282994987905, 318.1526396202093, 322.66349912672615, 327.1852877037752, 331.7178871969285, 336.26118197919845, 340.815058870799, 345.37940706226686, 349.95411804077025, 354.5390855194408, 359.1342053695754, 363.73937555556347];
function binomial(n, k) {
    return Math.exp(logf[n] - logf[n - k] - logf[k]);
}
console.log(binomial(5, 3));
Explanation
Starting with the original iterative algorithm, we first replace the product with a sum of logarithms:
function binomial(n, k) {
    var logresult = 0;
    for (var i = 1; i <= k; i++) {
        logresult += Math.log(n + 1 - i) - Math.log(i);
    }
    return Math.exp(logresult);
}
Our loop now sums over k terms. If we rearrange the sum, we can easily see that we sum over consecutive logarithms log(1) + log(2) + ... + log(k) etc. which we can replace by a sum_of_logs(k) which is actually identical to log(k!). Precomputing these values and storing them in our lookup-table logf then leads to the above one-liner algorithm.
Computing the look-up table:
I recommend precomputing the lookup-table with higher precision and converting the resulting elements to 64-bit floats. If you do not need that little bit of additional precision or want to run this code on the client side, use this:
var size = 1000, logf = new Array(size);
logf[0] = 0;
for (var i = 1; i <= size; ++i) logf[i] = logf[i-1] + Math.log(i);
Numerical precision:
By using log-factorials, we avoid precision problems inherent to storing raw factorials.
We could even use Stirling's approximation for log(n!) instead of a lookup table and still get 12 significant figures for the above computation, in both run-time and space complexity O(1):
function logf(n) {
    return n === 0 ? 0 : (n + .5) * Math.log(n) - n + 0.9189385332046728 + 0.08333333333333333 / n - 0.002777777777777778 * Math.pow(n, -3);
}

function binomial(n, k) {
    return Math.exp(logf(n) - logf(n - k) - logf(k));
}
console.log(binomial(1000, 500)); // 2.7028824094539536e+299
Using Pascal's triangle is a fast method for calculating n choose k.
The fastest method I know of would make use of the results from "On the Complexity of Calculating Factorials": just calculate all three factorials, then perform the two division operations, each with complexity M(n log n).
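For completeness, here is a minimal BigInt sketch of the naive three-factorials route (exact integer results, but without the fast factorial algorithm from that paper):
function factorial(n) {
    let result = 1n;
    for (let i = 2n, limit = BigInt(n); i <= limit; i++) result *= i;
    return result;
}

function binomialExact(n, k) {
    // the division is exact because C(n, k) is always an integer
    return factorial(n) / (factorial(k) * factorial(n - k));
}

console.log(binomialExact(1000, 500)); // an exact integer with ~300 digits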

Portable hashCode implementation for binary data

I am looking for a portable algorithm for creating a hashCode for binary data. None of the binary data is very long -- I am Avro-encoding keys for use in kafka.KeyedMessages -- we're probably talking anywhere from 2 to 100 bytes in length, but most of the keys are in the 4 to 8 byte range.
So far, my best solution is to convert the data to a hex string, and then do a hashCode of that. I'm able to make that work in both Scala and JavaScript. Assuming I have defined b: Array[Byte], the Scala looks like this:
b.map("%02X" format _).mkString.hashCode
It's a little more elaborate in JavaScript -- luckily someone already ported the basic hashCode algorithm to JavaScript -- but the point is that by creating a hex string to represent the binary data, I can ensure the hashing algorithm works off the same inputs.
On the other hand, I have to create an object twice the size of the original just to create the hashCode. Luckily most of my data is tiny, but still -- there has to be a better way to do this.
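For reference, the JavaScript side can be sketched like this (illustrative names; it assumes unsigned byte values such as those from a Buffer, and uses Math.imul with |0 to reproduce Java's 32-bit String.hashCode wrap-around):
function hexHashCode(bytes) {
    const hex = Array.from(bytes, b => b.toString(16).padStart(2, '0').toUpperCase()).join('');
    let h = 0;
    for (let i = 0; i < hex.length; i++) {
        h = (Math.imul(h, 31) + hex.charCodeAt(i)) | 0; // Java String.hashCode step
    }
    return h;
}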
Instead of padding the data as its hex value, I presume you could just coerce the binary data into a String so the String has the same number of bytes as the binary data. It would be all garbled, more control characters than printable characters, but it would be a string nonetheless. Do you run into portability issues though? Endian-ness, Unicode, etc.
Incidentally, if you got this far reading and don't already know this -- you can't just do:
val b: Array[Byte] = ...
b.hashCode
Luckily I already knew that before I started, because I ran into that one early on.
Update
Based on the first answer given, it appears at first blush that java.util.Arrays.hashCode(Array[Byte]) would do the trick. However, if you follow the javadoc trail, you'll see that this is the algorithm behind it, which is based on the algorithm for List combined with the algorithm for byte.
int hashCode = 1;
for (byte e : list) hashCode = 31*hashCode + (e==null ? 0 : e.intValue());
As you can see, all it's doing is creating a Long representing the value. At a certain point, the number gets too big and it wraps around. This is not very portable. I can get it to work for JavaScript, but you have to import the npm module long. If you do, it looks like this:
function bufferHashCode(buffer) {
    const Long = require('long');
    var hashCode = new Long(1);
    for (var value of buffer.values()) {
        hashCode = hashCode.multiply(31).add(value);
    }
    return hashCode;
}

bufferHashCode(new Buffer([1, 2, 3]));
// hashCode = Long { low: 30817, high: 0, unsigned: false }
And you do get the same results when the data wraps around, sort of, though I'm not sure why. In Scala:
java.util.Arrays.hashCode(Array[Byte](1,2,3,4,5,6,7,8,9,10))
// res30: Int = -975991962
Note that the result is an Int. In JavaScript:
bufferHashCode(new Buffer([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]));
// hashCode = Long { low: -975991962, high: 197407, unsigned: false }
So I have to take the low bytes and ignore the high, but otherwise I get the same results.
This functionality is already available in the Java standard library; look at the Arrays.hashCode() method.
Because your binary data is an Array[Byte], here is how you can verify that it works:
println(java.util.Arrays.hashCode(Array[Byte](1,2,3))) // prints 30817
println(java.util.Arrays.hashCode(Array[Byte](1,2,3))) // prints 30817
println(java.util.Arrays.hashCode(Array[Byte](2,2,3))) // prints 31778
Update: It is not true that the Java implementation boxes the bytes. Of course, there is conversion to int, but there's no way around that. This is the Java implementation:
public static int hashCode(byte a[]) {
    if (a == null)
        return 0;

    int result = 1;
    for (byte element : a)
        result = 31 * result + element;

    return result;
}
Update 2
If what you need is a JavaScript implementation that gives the same results as a Scala/Java implementation, then you can adapt the algorithm by, e.g., taking only the rightmost 31 bits:
def hashCode(a: Array[Byte]): Int = {
  if (a == null) {
    0
  } else {
    var hash = 1
    var i: Int = 0
    while (i < a.length) {
      hash = 31 * hash + a(i)
      hash = hash & Int.MaxValue // taking only the rightmost 31 bits
      i += 1
    }
    hash
  }
}
and JavaScript:
var hashCode = function(arr) {
    if (arr == null) return 0;
    var hash = 1;
    for (var i = 0; i < arr.length; i++) {
        hash = hash * 31 + arr[i];
        hash = hash % 0x80000000; // taking only the rightmost 31 bits in integer representation
    }
    return hash;
};
Why do the two implementations produce the same results? In Java, integer overflow behaves as if the addition were performed without loss of precision and the bits above the 32nd then thrown away, and & Int.MaxValue discards the 32nd bit. In JavaScript, there is no loss of precision for integers up to 2^53, a limit the expression 31 * hash + a(i) never exceeds. % 0x80000000 then behaves as taking the rightmost 31 bits. The case without overflow is obvious.
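As a quick hypothetical check, both versions reproduce the unmodified java.util.Arrays.hashCode result for small inputs where no overflow occurs:
hashCode([1, 2, 3]); // 30817, the same value Arrays.hashCode printed above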
This is the meat of the algorithm used in the Java library:
int result = 1;
for (byte element : a) result = 31 * result + element;
You comment:
this algorithm isn't very portable
Incorrect. If we are talking about Java, then provided that we all agree on the type of the result, then the algorithm is 100% portable.
Yes the computation overflows, but it overflows exactly the same way on all valid implementations of the Java language. A Java int is specified to be 32 bits signed two's complement, and the behavior of the operators when overflow occurs is well-defined ... and the same for all implementations. (The same goes for long ... though the size is different, obviously.)
I'm not an expert, but my understanding is that Scala's numeric types have the same properties as Java's. JavaScript is different, being based on IEEE 754 double-precision floating point. However, with care you should be able to code the Java algorithm portably in JavaScript. (I think @Mifeet's version is wrong ...)
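One hedged way to do that: Math.imul gives JavaScript an exact 32-bit signed multiply, so the Java algorithm can be reproduced bit-for-bit without masking to 31 bits (a sketch, with the byte sign-extension Java performs made explicit):
function javaArraysHashCode(bytes) {
    if (bytes == null) return 0;
    var result = 1;
    for (var v of bytes) {
        var b = (v << 24) >> 24; // sign-extend 0..255 to Java's signed byte range
        result = (Math.imul(result, 31) + b) | 0; // wrap to a signed 32-bit int
    }
    return result;
}

javaArraysHashCode([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]); // -975991962, matching the Scala result above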

How to generate random numbers in a very large range via javascript?

I was using this function for a long time and was happy with it. You've probably seen it millions of times. It is even in the example section of the MDN documentation for Math.random()!
function random(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}
However when I called it on really large range it performed really poorly. Here are some results:
for(var i=0;i<100;i++) { console.log(random(0, 34359738368)) }
34064924616
6800671568
30945277424
2591785504
16404206304
29609031808
14821448928
10712020504
26471102024
21454653384
33180253592
28189739360
27189739528
1159593656
24058421888
13727549496
21995862272
20907450968
28767901872
8055552544
2856286816
28137132160
22775692392
21141911808
16418994064
28151646560
19928528408
11100796192
24022825648
17873139800
10310184976
7425284936
27043756016
2521657024
2864339728
8080550424
8812058632
8867252312
18571554760
19600873680
33687248280
14707542936
28864740112
26338252144
7877957776
28207487968
2268429496
14461565136
28062983608
5637084472
29651319832
31910601904
19776200528
16996597392
2478335752
4751145704
24803500872
21899551216
23144535632
19854787112
8490486080
14932659320
8625736560
11379900040
32357265704
33852039680
2826278800
4648275784
27363699728
14164020752
22279817656
25238815424
16569505656
30065335928
9904863008
26944796040
23179908064
19887944032
27944730648
16242926184
6518696400
25727832240
7496221976
19014687568
5685988776
34324757344
12538943128
21639530152
9532790800
25800487608
34329978920
10871183016
23748271688
23826614456
11774681408
667541072
1316689640
4539806456
2323113432
7782744448
Hardly random at all. All numbers are even.
My question is this: What is the CANONICAL way (if any) to overcome this problem? I have the impression that the above random function is the go-to function for random numbers in range. Thanks in advance.
The WebCrypto API (supported in draft form by all the major browsers) provides cryptographically random numbers.
/* assuming that window.crypto.getRandomValues is available */
var array = new Uint32Array(10);
window.crypto.getRandomValues(array);

console.log("Your lucky numbers:");
for (var i = 0; i < array.length; i++) {
    console.log(array[i]);
}
W3C standard
https://www.w3.org/TR/WebCryptoAPI/
Example from here.
https://developer.mozilla.org/en-US/docs/Web/API/RandomSource/getRandomValues
The answer in general is don't use Math.random. It gets the job done, but it's not especially good. On top of that, any number in Javascript greater than 0xffffffffUL isn't represented by integer values--it's an IEEE 754 value with a behavior noted on the MDN site: "Note that as numbers in JavaScript are IEEE 754 floating point numbers with round-to-nearest-even behavior...."
And that's what you're seeing.
If you want larger random numbers, then you'll probably have to get something like Mersenne Twister or Blum-Blum-Shub 32-bit random integer values and multiply them. That will eliminate the rounding-off problem.
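A hedged sketch of that idea, assuming crypto.getRandomValues is available: build a 53-bit integer (the full integer precision of a double) from two 32-bit random words, then scale it into the range. Note the final scaling step is slightly non-uniform; rejection sampling would fix that.
function random53(min, max) {
    var words = new Uint32Array(2);
    crypto.getRandomValues(words);
    // 21 high bits + 32 low bits = 53 random bits
    var r = (words[0] >>> 11) * 0x100000000 + words[1];
    return min + Math.floor((r / Math.pow(2, 53)) * (max - min + 1));
}

console.log(random53(0, 34359738368)); // both odd and even values occur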
That's weird! Well, you know there is no such thing as truly random when it comes to computers; there is always an algorithm used. So you found a number that causes evens for this particular algorithm.
I tried it out, and it isn't necessarily caused by large numbers; more likely it's some kind of factorization of the number. Just try another number, even larger if you like, and you should get output that isn't all even. E.g. 134359738368, which is even larger, doesn't output all odd or all even numbers.

JavaScript 'var' Data/Object Sizes

Does JavaScript optimize the size of variables stored in memory? For instance, will a variable that has a boolean value take up less space than one that has an integer value?
Basically, will the following array:
var array = new Array(8192);
for (var i = 0; i < array.length; i++)
    array[i] = true;
be any smaller in the computer's memory than:
var array = new Array(8192);
for (var i = 0; i < array.length; i++)
    array[i] = 9;
Short answer: Yes.
Booleans generally (and it will depend on the user agent and implementation) will take up 4 bytes, while integers will take up 8.
Check out this other StackOverflow question to see how some others managed to measure memory footprints in JS: JavaScript object size
Edit: Section 8.5 of the ECMAScript Spec states the following:
The Number type has exactly 18437736874454810627 values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic
... so all numbers should, regardless of implementation, be 8 bytes.
Well, JS has only one number type, which is a 64-bit float. Each character in a string is 16 bits (source: Douglas Crockford's JavaScript: The Good Parts). Handling of bools is thus probably implementation-specific; if I remember correctly, the V8 engine handles a boolean internally as a C bool.
