Backstory
I was playing around with computing large powers of 2. My method splits the large number into an array of smaller chunks and multiplies each chunk to get the next value. I initially chose a maximum chunk size of 1e15. Then I decided to see how performance changed if I used the new array buffers, which meant reducing the maximum chunk size to 1e9. Something weird happened: I got a performance boost, not from using the array buffers, but from using the smaller integers. That doesn't make sense to me, since the larger the chunk, the fewer times the function has to cycle through the array.
Code
var i = 3;
function run() {
    // Time 50,000 doublings with a chunk limit of 10^i, then increase i.
    loop(50000, Math.pow(10, i), i++);
    if (i < 16) setTimeout(run, 100);
}
run();

function loop(pin, lim, id) {
    var pow = 110503;
    pow = pin || pow;
    var l = pow - 1, val = [2];
    var t1, t2;
    //console.time(id);
    t1 = new Date();
    while (l--) {
        val = multiply(val, lim);
    }
    //console.timeEnd(id);
    t2 = new Date();
    console.log(id, ' ', t2 - t1);
}

function multiply(a, lim) {
    // Double each chunk, carrying overflow into the next chunk to the left.
    var l = a.length, val = 0, carry = 0;
    while (l--) {
        val = a[l] * 2 + carry;
        carry = 0;
        if (val > lim - 1) {
            var b = val % lim;
            carry = (val - b) / lim;
            val = b;
        }
        a[l] = val;
    }
    if (carry > 0) { a.unshift(carry); }
    return a;
}
Results
IE10
Digits (i)  Time (ms)
3           5539
4           4213
5           3329
6           2720
7           2341
8           2153
9           1948
10          1699
11          1508
12          1401
13          1309
14          1208
15          1133

Chrome
Digits (i)  Time (ms)
3           5962
4           4385
5           3851
6           3242
7           2533
8           2207
9           1940
10          1794
11          1542
12          1604
13          1560
14          1414
15          1331

Firefox
Digits (i)  Time (ms)
3           3651
4           2732
5           2279
6           1853
7           1615
8           1408
9           1256
10          2375
11          2034
12          1874
13          1723
14          1600
15          1504
Question
As you can see, Firefox outperforms both IE10 and Chrome up until the numbers are 9 digits long, and then its time takes a sudden jump. So why does it do this? I guess it may have something to do with switching away from numbers that can be stored in 32 bits. I suppose 32-bit numbers are more efficient to work with, so the engine uses them for smaller values and switches to a larger numeric type when it needs to. But if that is true, why does Firefox never catch up to Chrome's and IE's performance, and would switching really cause that much of a performance penalty?
Both SpiderMonkey (Firefox) and V8 (Chrome) use 32-bit integers when they can, switching to doubles (not larger integers) when the numbers no longer fit into a 32-bit int. But note that "can" is determined heuristically, because there are costs to mixing doubles and ints due to having to convert back and forth, so it's possible that V8 is not deciding to specialize to ints here.
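As a rough illustration of the value ranges involved (this only checks the numbers themselves; it is not how the engines decide internally), a value survives the ToInt32 round-trip (x | 0) === x only if it fits in a signed 32-bit integer:

// Rough check: does a value fit in a signed 32-bit integer?
function fitsInt32(x) {
    return (x | 0) === x;
}
// With lim = 10^9 the largest intermediate value in multiply() is just under 2e9:
console.log(fitsInt32(2 * Math.pow(10, 9) - 1)); // true, below 2^31 - 1
// With lim = 10^15 the intermediate values are far outside the 32-bit range:
console.log(fitsInt32(2 * Math.pow(10, 15)));    // false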
Edit: deleted the part about needing the source, since it's all right there in the original post!
Related
I'm using the following to record the frame rate of an application:
let _lastCalledTime;
let _fps;
let _frame = 0;
let _csv = 'Frame,Timestamp,FPS';

const _refreshLoop = () =>
    window.requestAnimationFrame((timestamp) => {
        if (!_lastCalledTime) {
            _lastCalledTime = timestamp;
            _fps = 0;
        } else {
            const delta = (timestamp - _lastCalledTime) / 1000;
            _lastCalledTime = timestamp;
            _fps = 1 / delta;
        }
        _csv += `\n${_frame++},${timestamp},${_fps}`;
        _refreshLoop();
    });

_refreshLoop();
Which is a variation of some code I found here: https://www.growingwiththeweb.com/2017/12/fast-simple-js-fps-counter.html
Every time a frame is rendered, the elapsed time since the last frame is calculated using the timestamp parameter passed to the callback. This is used to calculate the FPS, and the values are logged as a CSV.
I have a MacBook and a Raspberry Pi 3, both running at 60 fps, and I want to calculate the performance of the application. The MacBook reports a very precise value and, once stable, reports a value very close to 60 fps:
Frame  Timestamp (ms)  FPS
0      188.835         0
1      238.833         20.000800032001283
2      255.499         60.00240009600385
3      272.165         60.002400096003754
4      338.829         15.000600024000963
5      405.493         15.000600024000963
6      422.159         60.00240009600385
7      438.825         60.00240009600385
8      455.765         59.03187721369541
9      472.431         60.00240009600385
10     489.097         60.00240009600385
11     505.763         60.00240009600385
12     522.429         60.00240009600385
13     539.095         60.002400096003655
14     555.761         60.00240009600405
The Raspberry Pi has a less-precise reading for timestamp (1 ms) leading to a stable frame rate of 62.5/58.8 fps:
Frame  Timestamp (ms)  FPS
0      1303            0
1      1394            10.989010989010989
2      1411            58.8235294117647
3      1428            58.8235294117647
4      1444            62.5
5      1461            58.8235294117647
6      1477            62.5
7      1689            4.716981132075472
8      2321            1.5822784810126582
9      2443            8.19672131147541
10     2455            83.33333333333333
11     2487            31.25
12     2505            55.55555555555556
13     2521            62.5
14     2537            62.5
The bit that is confusing me is that the Raspberry Pi sometimes reports intervals of less than 16 ms, suggesting frame rates of much more than 60 fps, e.g.:
Frame  Timestamp (ms)  FPS
106    4378            40.00
107    4380            500.00
108    4397            58.82
109    4412            66.67
110    4428            62.50
111    4450            45.45
112    4462            83.33
113    4478            62.50
So my question is: how can this be? My initial thought was that multiple callbacks might be called for the same frame, but in that case they would receive the same value for timestamp (per the spec). My two other suspicions are that either timestamp is very inaccurate, or requestAnimationFrame() is not actually locked to the display's refresh rate and sometimes executes faster.
No, requestAnimationFrame (rAF) is not forced to be locked to the display refresh rate. A simple reason is that there may very well be no actual "display" at all, e.g. in a headless browser, yet that browser still needs rAF to fire at some interval.
You don't specify which browsers you are testing this on, but Chrome and Firefox tie rAF to the V-Sync signal when there is one. I'm not sure what they do with adaptive-sync monitors (like G-Sync), though. Also worth noting: in both browsers, the first call to rAF from a "non-animated" document is actually scheduled to fire as soon as possible.
Then in WebKit browsers, they don't look at the monitor at all and instead use a simple timer to try to reach 60 FPS regardless of the actual display rate. (Note that this is true only for rAF; CSS animations are synced to the monitor.)
And this is all in agreement with the specs... which leave some leeway to the user-agent as to when it should update the rendering:
A browsing context has a rendering opportunity if the user agent is currently able to present the contents of the browsing context to the user, accounting for hardware refresh rate constraints and user agent throttling for performance reasons, but considering content presentable even if it's outside the viewport.
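If the goal is a readable FPS number rather than raw per-frame deltas, one option (just a sketch on top of your existing loop, not part of the original code) is to average the deltas over a sliding window so one unusually short or long interval doesn't spike the reading:

const _deltas = [];   // recent frame-to-frame deltas, in seconds
const WINDOW = 30;    // frames to average over (arbitrary choice)

function recordDelta(delta) {
    _deltas.push(delta);
    if (_deltas.length > WINDOW) _deltas.shift();
}

function averageFps() {
    if (_deltas.length === 0) return 0;
    const total = _deltas.reduce((acc, d) => acc + d, 0);
    return _deltas.length / total; // frames divided by elapsed seconds
}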
I am trying to implement a protocol which uses a checksum calculation that I am unable to reproduce.
The specification says the checksum should be a "7 bit, 2’s complement sum of command and message field (m.s.b. = 0)".
As I understand it, that should be possible to calculate with:
const data = [0x04, 0x00, 0x10, 0x10, 0x18, 0x57, 0x05]
let sum = 0x00
for (let value of data) {
    sum += value
}
const chk = 256 - sum // OR (~sum + 1) & 0xFF
console.log('0x' + chk.toString(16).padStart(2, '0'))
See, https://repl.it/repls/UntidySpotlessInternalcommand.
However, the result I get is 0x68, while the example I have says it should be 0x78.
Am I misunderstanding something in terms of calculating 2's complement sum?
The example is taken from a successfully executed command which is seen in a console window provided by the manufacturer.
Breaks down into:
SOM 10 02
CMD 04 (CONNECTED)
DATA 00 10 10 18 57
BTC 05
CHK 78
EOM 10 03
You should contact the manufacturer. Even using a programming calculator and making sure to use only 7 bits, the checksum comes out to 0x68. I'm not entirely sure your calculation is correct; as per another comment, it might not be 7-bit. But the result for the numbers you provided fits in 7 bits anyway, so for the example you gave it shouldn't matter (it might matter for other data, though). Definitely contact the company, because the correct checksum does seem to be 0x68.
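For what it's worth, here is a minimal sketch with the sum explicitly masked to 7 bits (my reading of "m.s.b. = 0"; this is an assumption, not the manufacturer's documented algorithm). It still produces 0x68 for the example data, not 0x78:

const data = [0x04, 0x00, 0x10, 0x10, 0x18, 0x57, 0x05];
const sum = data.reduce((acc, v) => acc + v, 0);
const chk = (-sum) & 0x7F; // 2's complement of the sum, kept to 7 bits
console.log('0x' + chk.toString(16).padStart(2, '0')); // prints "0x68"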
Today I have a hard problem:
Let's say I have a JS function which returns a number each time it is called.
Let's also say I have the numbers from 1 to 100 in an array, arranged into columns and rows:
1  2  3  4  5  6
7  8  9  10 11 12
13 14 15 16 17 18
19 20 21 ...
Each time I call the function it returns a random number from 1 to 100, and I need to check whether that number falls in the left 3 columns (e.g. 1, 2, 3 or 7, 8, 9) or in the right 3 columns, since the numbers are laid out 6 per row.
Because the number range is not fixed, I don't know what math or solution detects this case.
Does anyone know?
To get a column number (or, more precisely, "left 3 or right 3"), subtract 1 and apply the % (modulus) operator: (value - 1) % 6. The result will be 0..2 (the left columns) or 3..5 (the right columns).
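A minimal sketch of that check (the function name is just for illustration):

// Classify a 1-based value laid out 6 per row into the left or right half.
function isLeftHalf(value) {
    return (value - 1) % 6 < 3; // columns 0..2 are left, 3..5 are right
}

console.log(isLeftHalf(1));  // true  (column 0)
console.log(isLeftHalf(9));  // true  (column 2)
console.log(isLeftHalf(10)); // false (column 3)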
I have a working app that needs to store up to 4 matrixes of integer data per record. I'm not sure how to get there with Titanium and SQLite.
A record will contain at least 1 but up to 4 matrixes of integers:
The matrix size is variable, each matrix consists of:
1 - 20 rows with 3 columns per row
OR
1 - 20 rows with 6 columns per row
The matrix structure will be identical for each record, i.e. three 3x20 matrixes in a record or four 6x10 matrixes in a record. At this point my app starts, allows the user to choose the matrix parameters, then accepts the data entry to fill in the matrix values. The matrixes are actually a JS array of arrays. How can I store an array of arrays and read it back in when I need to?
Edit: Let me see if I can clarify...
The app I'm working on is a scorecard for archery tournaments, similar in concept to a scorecard in golf. In archery you shoot for a set number of ends, with a set number of arrows shot per end. The app asks for the number of ends (up to 20) and the number of arrows shot per end (3 or 6). After each shot the archer enters the score (an integer value). So for argument's sake, say we're scoring three ends with three arrows per end. We might see something like this:
arrow scores
8 8 9 (end 1)
7 9 10 (end 2)
9 9 10 (end 3)
There's my matrix that I need to save for this individual record. However, the next tournament I need to score may have a different number of ends and arrows:
arrow scores
7 8 9 10 10 10 (end 1)
10 9 9 7 8 10 (end 2)
9 6 6 6 9 9 (end 3)
7 8 6 7 8 8 (end 4)
10 10 9 8 8 8 (end 5)
Let's simplify and say that I want to store the scorecard for one archer per record. I already have my data entry and score tabulation working. I just don't understand the best way to store matrices as illustrated above.
I would recommend against storing an array of arrays. Generally speaking, it's not terribly efficient to build code abstractions that are meaningful for reasoning about matrices when you make an array of arrays a firm concept. The only exception I've seen is MATLAB/Octave.
I've always found I end up with simpler code when I flatten the data. With a flat array you'll have to manage the indexing yourself. A few helper functions make that simple to reason about.
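For example, a minimal sketch of such helpers (the names are illustrative, not from the original app):

// A rows x cols matrix stored as a single flat array.
function makeMatrix(rows, cols) {
    var data = [];
    for (var i = 0; i < rows * cols; i++) data.push(0);
    return { rows: rows, cols: cols, data: data };
}
function getCell(m, row, col) {
    return m.data[row * m.cols + col];
}
function setCell(m, row, col, value) {
    m.data[row * m.cols + col] = value;
}

// e.g. a card with 3 ends of 3 arrows each:
var card = makeMatrix(3, 3);
setCell(card, 0, 2, 9);           // end 1, third arrow scored 9
console.log(getCell(card, 0, 2)); // 9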
I'm not given much info to go on, but I'd assume that putting the data into two different tables will make things simpler.
CREATE TABLE mat3x20 (i1j1, i1j2, i1j3 ....
CREATE TABLE mat6x10 (i1j1, i1j2, i1j3 ....
Otherwise you end up with some weird behavior flag in the data, which makes it less obvious what the code is supposed to be doing somewhere in the call stack.
Say I have this:
// different things you can do
var CAN_EAT = 1,
    CAN_SLEEP = 2,
    CAN_PLAY = 4,
    CAN_DANCE = 8,
    CAN_SWIM = 16,
    CAN_RUN = 32,
    CAN_JUMP = 64,
    CAN_FLY = 128,
    CAN_KILL = 256,
    CAN_BE_JESUS = Math.pow(2, 70);
// the permissions that I have
var MY_PERMS = CAN_EAT | CAN_SLEEP | CAN_PLAY | CAN_BE_JESUS;
// can I eat?
if(MY_PERMS & CAN_EAT) alert('You can eat!'); /* RUNS */
// can I sleep?
if(MY_PERMS & CAN_SLEEP) alert('You can sleep!'); /* RUNS */
// can I play?
if(MY_PERMS & CAN_PLAY) alert('You can play!'); /* RUNS */
// can I be jesus?
if(MY_PERMS & CAN_BE_JESUS) alert('You can be jesus!'); /* WONT RUN */
Then if I run it, it will print out that I can eat, sleep, and play. It will not print out that I can be jesus, because that number is 2^70. If I make the number 2^31 then it will work (I'm on a 64-bit machine but must have been running Chrome in 32-bit mode when I ran the above example).
I face this problem in PHP all the time as well when dealing with bitwise operators. Often I can work the scenario I'm in so that having a maximum of 31 or 63 things in my list isn't a big deal, but sometimes I need many more than that. Is there any way around this limitation? Bitwise operators are so speedy and convenient.
Well, the problem with this is apparently, as you suspected, the width of the integer in JavaScript. Numbers in JS can go up to 2^53, so you can have 53 different bits. On 64-bit machines, PHP goes all the way up to 2^63 - 1, so you get 62 bits.
If you need more, you should re-think your design - could you perhaps divide the flags into 2 (or more) groups, where each group has its own meaning (like food-related actions, other actions, anything else, etc.)?
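If you do need more than 32 flags, one sketch of the "divide into groups" idea (the names and the group size of 31 are my own illustration, not a standard API) is to store an array of integers and route each flag index to its group:

// Store flags in groups of 31 bits so every bitwise op stays in 32-bit range.
function makePerms(groups) {
    var perms = [];
    for (var i = 0; i < groups; i++) perms.push(0);
    return perms;
}
function grant(perms, flagIndex) {
    perms[Math.floor(flagIndex / 31)] |= (1 << (flagIndex % 31));
}
function has(perms, flagIndex) {
    return (perms[Math.floor(flagIndex / 31)] & (1 << (flagIndex % 31))) !== 0;
}

var myPerms = makePerms(3);
grant(myPerms, 0);             // e.g. CAN_EAT
grant(myPerms, 70);            // a flag well past the 32-bit limit
console.log(has(myPerms, 70)); // true
console.log(has(myPerms, 1));  // false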
You can read more about it in the ECMAScript Language Specification (the standard JavaScript is based on):
Some ECMAScript operators deal only with integers in the range -2^31 through 2^31 - 1, inclusive, or in the range 0 through 2^32 - 1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2^32 integer values. See the descriptions of the ToInt32 and ToUint32 operators in 9.5 and 9.6, respectively.