WebKit profiler - JavaScript

What are the 'self' and 'total' columns? The 'total' column does not add up to 100% (it's much higher), while 'self' appears to. I suspect self is non-cumulative and total is cumulative. So if methodA calls methodB, which calls methodC, in self I'd see the % for each method individually, whereas in total methodA would show the total of all three methods, methodB the total of two, and so on.
Is this correct?

Suppose you have this program:
main() calls A() calls B() calls C(), and C hangs in a loop for 10 seconds.
The CPU-profiler would say something like this:
total time: 10 sec
routine   self%   inclusive%
main        0        100
A           0        100
B           0        100
C         100        100
The self time of C would be 10 seconds, 100%. The self time of the others would be essentially zero.
The total (inclusive) time of every one of them would be 10 seconds or 100%. You don't add those up.
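For instance, here is a minimal sketch of that call chain (the busy loop just stands in for real work; the 10-second figure is only illustrative):
function C() {
  // Burn roughly 10 seconds of CPU.
  var end = Date.now() + 10000;
  while (Date.now() < end) { /* spin */ }
}
function B() { C(); }
function A() { B(); }
function main() { A(); }
main();
// Self time lands almost entirely in C; main, A and B each report ~100%
// total (inclusive) time because C runs inside their stack frames.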
On the other hand, suppose C spends its 10 seconds doing I/O.
Then the CPU-only profiler would say something like this:
total time: 0 sec
routine   self%   inclusive%
main        ?         ?
A           ?         ?
B           ?         ?
C           ?         ?
because the only actual CPU time it uses is so short that basically no samples hit it, so to get the percents it is dividing by zero.
OTOH if the samples were on wall-clock time, you would get the first output.
A better type of profiler is one that samples the call stack on wall-clock time, reports inclusive time as a percent of total, and does so at the line-of-code level, not just per function. That's useful because it's a direct measure of how much could be saved if the line were executed less, and almost no problem can hide from it. Examples of such profilers are Zoom and LTProf, and I'm told OProfile can do it. There's also a simple method that works with any language and requires only a debugger.
Here's a discussion of the issues.

Related

Firebase functions - interval inside of an interval, to reduce invocations

The cost of using Firebase Cloud Functions grows with the number of invocations.
I was wondering whether that means I can reduce the cost by running my code x times inside a single function invocation. I've tested this, and it turned out that the invocation count was indeed reduced.
This is what I mean:
exports.functionName = functions.region("europe-west2").pubsub.schedule('every 1 minutes')
  .onRun((context) => { // counts as an invocation
    console.log("Running...")
    var timesRun = 0;
    var interval = setInterval(() => {
      timesRun += 1;
      if (timesRun === 6) {
        console.log("Stopping interval...")
        clearInterval(interval);
      }
      console.log("Executing...")
      // code... for example fetching json
    }, 10000); // doesn't count as an invocation
  });
With this, I can run my code 5 times a minute, while the official invocation is equal to 1.
Is this really more efficient, or am I missing something?
With Cloud Functions you pay for both invocation count and CPU/memory usage duration. So while your approach reduces the number of invocations, it increases the amount of time you're using the CPU/memory.
Which one comes out cheaper should be a matter of putting the data into the pricing calculator.
Note that while the calculator shows only full seconds, you're actually billed for compute time per 100ms:
Compute time is measured in 100ms increments, rounded up to the nearest increment. For example, a function executing for 260ms would be billed as 300ms.
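As a rough illustration of that rounding rule only (a sketch; the actual per-unit rates come from the pricing calculator, and billedMs is just a made-up helper name):
// Round a function's running time up to the nearest 100ms billing increment.
function billedMs(actualMs) {
  return Math.ceil(actualMs / 100) * 100;
}

console.log(billedMs(260));   // 300 -- the example from the docs
console.log(billedMs(61234)); // 61300 -- roughly a one-minute run like the one above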
According to the documentation, your function times out after 1 minute by default, and this can be extended up to 9 minutes.
Whatever you do within that time counts as a single execution.
If you start interacting with other services like Firestore, those operations are billed separately.

Milliseconds of setInterval() in the Chrome console

Can anyone please tell me what these numbers are? They are increasing very fast. Is that the number of times the function executes?
var time = setInterval(function() {
  var b = document.getElementsByTagName('a')[22].innerHTML;
  if (b == "name") {
    document.getElementsByTagName('a')[22].click();
    clearInterval(time);
  } else {
    console.log("sript started");
  }
}, 10);
That number is how many times the console.log("sript started") message has been triggered. Chrome automatically groups consecutive identical log messages rather than writing each one out on a new line. This makes it easier to see previous messages that would otherwise scroll off the top of the console too quickly.
In your case, the interval's callback function triggers the log message every 10 milliseconds, so it increments that count very quickly, roughly 100 times a second.
EDIT: In a comment on another answer you asked why setting the interval value to 10000000000 caused the interval to go extremely quickly, rather than once every ~115 days.
This is because the number exceeds the maximum value a signed 32-bit integer can hold, approximately 2.1 billion (2,147,483,647). Once it exceeds that amount, it "wraps" around into the negative numbers. When setInterval() receives a negative number for the delay, it clamps the value up to the minimum (about 4 milliseconds), so the interval fires as quickly as the browser allows. I say "about" because there is no guarantee it will run that quickly on slower hardware.
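As a rough illustration of the 32-bit wraparound (a sketch only; exactly which delay an oversized value ends up as depends on the browser), you can force the same signed 32-bit conversion with a bitwise operator:
console.log(2147483647 | 0); // 2147483647 -- still fits in a signed 32-bit integer
console.log(3000000000 | 0); // -1294967296 -- wrapped past the limit into the negatives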
It's the number of times the console.log() output has occurred: log it once and it shows 1, log it twice and it shows 2.

Measure CPU Load Difference Between Four Similar Javascript Functions

Why this is important to me
I have a site where I need to have a countdown timer running to show people how much time they have left to complete an action.
This timer will run for days and probably just use MomentJS to say something like "in 4 days" from MomentJS's to() function.
However, when there's an hour left to go I'm going to switch over to a minutes countdown; eventually, when the minutes get low enough, I'm going to bring in a seconds timer. When we're down to the very last few minutes I'm going to display milliseconds as well.
The problem
Pretty much there are two main techniques to animate the countdown timer.
setInterval()
requestAnimationFrame()
Well, right away I noticed that the requestAnimationFrame() method was much smoother to the eye; it works great, especially when I'm displaying milliseconds. However, it wasn't long before I noticed that my poor computer started to get a little warm. All of this "animation" is causing quite a load on my CPU. I tried CPU monitors and looked around at ways to see how much load this puts on my CPU, but overall I can't really find a tool that gives me a clear graph of the CPU load my little countdown timer is causing.
So, I decided to find a way to limit the FPS and see if that would reduce my problem. And yes, it does. If you use setTimeout() in tandem with requestAnimationFrame() you can set a waiting time before you call your next function.
Which raises the question, if you're using setTimeout() - Why don't you just use setInterval() and forget about the extra optimization that requestAnimationFrame() gives you?
I did some looking around and found another method which simply checks to see if the right amount of interval time has passed since the last time requestAnimationFrame() called your function. I made some optimizations to how this code works and ended up with one of the two functions I'm trying to measure below.
In the end, I'd really like a clearer way to measure this - the Activity Monitor on my Mac is hardly a reliable tool for an accurate reading - and I can't find a way to measure just the code I'm running.
Chrome has some more tools, the profiler, and the timeline - which are both very helpful, but they don't give me the metric I'm looking for - CPU load.
The Code:
Here are four code snippets, which do exactly the same thing - all of them use:
MomentJS
CountdownJS
jQuery
The code is 100% identical, the only difference is how I am limiting the FPS of the animation.
I'd like to find a way to measure (as precisely as possible) the difference between the four functions in how much CPU load they are taking. And then I'd like to then change the FPS around to see if I can find an acceptable load for my application and then I can find the sweet spot - the right amount of FPS, during the different timer stages.
Technique 1 - setTimeout()
var now = moment(); // new Date().getTime();
var then = moment().add(60, 'seconds'); // new Date(now + 60 * 1000);
$(".now").text(moment(now).format('h:mm:ss a'));
$(".then").text(moment(then).format('h:mm:ss a'));
$(".duration").text(moment(now).to(then));
(function timerLoop() {
  setTimeout(function() {
    $(".difference").text(moment().to(then));
    $(".countdown").text(countdown(then, null, countdown.YEARS | countdown.MONTHS | countdown.DAYS | countdown.HOURS | countdown.MINUTES | countdown.SECONDS | countdown.MILLISECONDS).toString());
    requestAnimationFrame(timerLoop);
  }, 1000/30);
})();
<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.13.0/moment.min.js" type="text/javascript"></script>
<script src="https://cdn.rawgit.com/mckamey/countdownjs/master/countdown.min.js" type="text/javascript"></script>
<script src="https://code.jquery.com/jquery-3.0.0.min.js" type="text/javascript"></script>
<div>
The time is now: <span class="now"></span>, a timer will go off <span class="duration"></span> at <span class="then"></span>
</div>
<div>The timer is set to go off <span class="difference"></span></div>
<div class="countdown"></div>
Technique 2 - Delta between intervals
var now = moment(); // new Date().getTime();
var then = moment().add(60, 'seconds'); // new Date(now + 60 * 1000);
$(".now").text(moment(now).format('h:mm:ss a'));
$(".then").text(moment(then).format('h:mm:ss a'));
$(".duration").text(moment(now).to(then));
var fps = 30;
var interval = 1000/fps;
var performanceTime = performance.now();
var performanceDelta;
(function updateCountdown(time) {
  performanceDelta = time - performanceTime;
  if (performanceDelta > interval) {
    performanceTime = time - (performanceDelta % interval);
    $(".difference").text(moment().to(then));
    $(".countdown").text(countdown(then, null, countdown.YEARS | countdown.MONTHS | countdown.DAYS | countdown.HOURS | countdown.MINUTES | countdown.SECONDS | countdown.MILLISECONDS).toString());
  }
  requestAnimationFrame(updateCountdown);
})();
<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.13.0/moment.min.js" type="text/javascript"></script>
<script src="https://cdn.rawgit.com/mckamey/countdownjs/master/countdown.min.js" type="text/javascript"></script>
<script src="https://code.jquery.com/jquery-3.0.0.min.js" type="text/javascript"></script>
<div>
The time is now: <span class="now"></span>, a timer will go off <span class="duration"></span> at <span class="then"></span>
</div>
<div>The timer is set to go off <span class="difference"></span></div>
<div class="countdown"></div>
Technique 3 - setInterval()
var now = moment(); // new Date().getTime();
var then = moment().add(60, 'seconds'); // new Date(now + 60 * 1000);
$(".now").text(moment(now).format('h:mm:ss a'));
$(".then").text(moment(then).format('h:mm:ss a'));
$(".duration").text(moment(now).to(then));
var fps = 30;
var interval = 1000/fps;
setInterval(function updateCountdown() {
  $(".difference").text(moment().to(then));
  $(".countdown").text(countdown(then, null, countdown.YEARS | countdown.MONTHS | countdown.DAYS | countdown.HOURS | countdown.MINUTES | countdown.SECONDS | countdown.MILLISECONDS).toString());
}, interval);
<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.13.0/moment.min.js" type="text/javascript"></script>
<script src="https://cdn.rawgit.com/mckamey/countdownjs/master/countdown.min.js" type="text/javascript"></script>
<script src="https://code.jquery.com/jquery-3.0.0.min.js" type="text/javascript"></script>
<div>
The time is now: <span class="now"></span>, a timer will go off <span class="duration"></span> at <span class="then"></span>
</div>
<div>The timer is set to go off <span class="difference"></span></div>
<div class="countdown"></div>
It would also be interesting to see a completely un-throttled version like so:
Technique 4 - No throttle
var now = moment(); // new Date().getTime();
var then = moment().add(60, 'seconds'); // new Date(now + 60 * 1000);
$(".now").text(moment(now).format('h:mm:ss a'));
$(".then").text(moment(then).format('h:mm:ss a'));
$(".duration").text(moment(now).to(then));
(function timerLoop() {
  $(".difference").text(moment().to(then));
  $(".countdown").text(countdown(then, null, countdown.YEARS | countdown.MONTHS | countdown.DAYS | countdown.HOURS | countdown.MINUTES | countdown.SECONDS | countdown.MILLISECONDS).toString());
  requestAnimationFrame(timerLoop);
})();
// CountdownJS: http://countdownjs.org/
// MomentJS: http://momentjs.com/
// jQuery: https://jquery.com/
// Rawgit: http://rawgit.com/
// Light reading about the requestAnimationFrame pattern:
// http://www.paulirish.com/2011/requestanimationframe-for-smart-animating/
// https://css-tricks.com/using-requestanimationframe/
<script src="https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.13.0/moment.min.js" type="text/javascript"></script>
<script src="https://cdn.rawgit.com/mckamey/countdownjs/master/countdown.min.js" type="text/javascript"></script>
<script src="https://code.jquery.com/jquery-3.0.0.min.js" type="text/javascript"></script>
<div>
The time is now: <span class="now"></span>, a timer will go off <span class="duration"></span> at <span class="then"></span>
</div>
<div>The timer is set to go off <span class="difference"></span></div>
<div class="countdown"></div>
Simply put: how does one measure the CPU load difference between four similar JavaScript functions?
Does anyone already know which of these is going to be more performant? (I know that is not really a word)
Answering my own question:
The Short Answer:
Worst Performance
Clearly setInterval() is the worst solution, because setInterval() still runs while you are not on the tab, wasting CPU and therefore battery life.
Best Animation (Microseconds)
Clearly the Delta Interval Math method is the smoothest and most accurate way to calculate interval time. When you combine this algorithm with the accuracy of frame times from performance.now(), you can achieve animation-frame results accurate to the microsecond.
(And yes, requestAnimationFrame() passes a performance.now()-style timestamp as the first argument to its callback.)
And yes folks, I really mean microseconds. That's 1s/1000ms/1000µs.
Go ahead, test it now. Open up your console and type: performance.now() and you'll get a number that looks like 2132438.165 - those are milliseconds since the page's time origin (roughly when the page started loading).
(Which is extra cool because µ is a greek character +10 nerd points)
Best CPU Performance (Milliseconds)
Combine requestAnimationFrame() (which allows your animation to sleep when you switch tabs) with setTimeout() which can throttle the FPS of your animation to any desired millisecond interval.
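Distilled down to just that throttling pattern, it looks roughly like this (a sketch; drawFrame() is a stand-in for whatever DOM updates you actually do):
var fps = 30;

function drawFrame() {
  // Stand-in for the real work (updating the countdown text, etc.).
  console.log("frame at " + performance.now().toFixed(1) + " ms");
}

(function throttledLoop() {
  setTimeout(function () {
    drawFrame();
    requestAnimationFrame(throttledLoop); // sleeps while the tab is hidden
  }, 1000 / fps);
})();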
Keep in mind, however, that the difference between this method and the Delta Interval Math method is very, very small. I don't even have a way to quantify how small it is; from what I can tell it might be one fourth to one eighth more efficient. But you lose a lot of smoothness for it. Your choice.
The Long Answer:
I'm still looking forward to a better way to do this, perhaps in a way that allows me to compare data between different functions.
Until then, I was able to generate these images using Google's JavaScript CPU profiler.
Listing in the order in which I believe they are performant, but with titles that match the original question:
Technique 3 - setInterval()
Technique 1 - setTimeout()
Technique 2 - Delta between intervals
Technique 4 - No throttle
Visual Analysis
Well, from the looks of it, I'd rank the different functions in this order of performance:
setInterval() function.
setTimeout() wrapping the requestAnimationFrame() call.
Delta Interval Math inside the recursive function called by requestAnimationFrame()
No FPS Throttle simply looping with requestAnimationFrame()
Conclusion: [Edited: 06/25/2016]
The setInterval() function is better for CPU usage than any requestAnimationFrame() pattern I have found, but ONLY WHILE YOU ARE ON THE TAB.
So if CPU cost or battery life is the larger concern for your application, then go with requestAnimationFrame() and throttle the FPS using either:
The setTimeout() method, which seems to be a little less work for the CPU (best of both worlds), but is admittedly less smooth than the method below,
or
The Delta Interval Math method, which seems to give a slightly smoother animation. It uses the technique outlined in this question/answer: comparing the time from performance.now() with the time reported by requestAnimationFrame() (which is just the current performance.now() value provided as the first argument to the callback function). It's the most accurate algorithm I've seen for timing animation frames; you're actually calculating down to the millionth of a second, or a microsecond (1s/1000ms/1000µs). There is not a HUGE difference in CPU load between this technique and setTimeout(), but there IS a difference (see the attached images).
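Distilled, that delta pattern looks roughly like this (again a sketch, with drawFrame() standing in for the real work):
var fps = 30;
var interval = 1000 / fps;
var last = performance.now();

function drawFrame() {
  // Stand-in for the real work (updating the countdown text, etc.).
  console.log("frame at " + performance.now().toFixed(1) + " ms");
}

(function deltaLoop(now) {
  var delta = now - last;
  if (delta > interval) {
    last = now - (delta % interval); // carry the remainder so frames stay on beat
    drawFrame();
  }
  requestAnimationFrame(deltaLoop);
})(performance.now());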
Why is requestAnimationFrame() the bomb-digity? Because when you're not on that tab, it gets paused by the browser, so your animations are "waiting" for you to come back. And using performance.now() you're getting microsecond (µs) accuracy on your animation.
It is said that requestAnimationFrame() is an "optimized" way to make sure your animation works alongside the browser's other repainting, so that your animation "fits" with what the browser is doing; it also comes with a 60 FPS callback cost.
Steps I took to generate these photos:
Created individual blank HTML files with nothing but absolutely what I needed for the tests.
Rebooted my computer, opened only my chrome browser to an about:blank tab.
Opened each "test benchmark" HTML file I previously created individually and one at a time.
At precisely 50 seconds into the countdown timer I clicked start on Google's Javascript CPU Profiler and precisely 10 seconds later I clicked stop (I used a special clicking macro built into my gamer-mouse's special driver software that works pretty well - Once I click the button it clicks start and 10 seconds later another click on stop - good enough for these photos)
Saved the profile to my computer
Loaded each of the profiles into the profiler tool on the about:blank tab.
Resized the window to frame the data well
Screenshot [shift] + [cmd] + [4] and then you press space on the keyboard to frame the screenshot exactly around the window.
Unresolved Problem:
I still don't have a way to see what the CPU % usage is. This graph leaves me wanting something more detailed. The other Google JavaScript CPU profiler screens are also a little vague in this sense. Maybe I just don't know how to use the tool.
If anyone knows of a better benchmark for testing this, please send an answer over and I'll review it. I hope it's better than this mess. And thanks in advance.
The main issue here is having a way to quantify hard data from one CPU profile to another and compare them. This is what I'm missing the most.
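One rough, unofficial way to get a comparable number (a sketch only: it measures how much main-thread time the callback itself consumes, not true CPU load, and the names are mine):
var busyTime = 0;
var startTime = performance.now();

(function measuredLoop() {
  var frameStart = performance.now();

  // ...the countdown work being measured goes here...

  busyTime += performance.now() - frameStart;
  requestAnimationFrame(measuredLoop);
})();

// After ~10 seconds, log the fraction of wall-clock time spent in the callback.
setTimeout(function () {
  var elapsed = performance.now() - startTime;
  console.log("Approx. main-thread busy: " + (100 * busyTime / elapsed).toFixed(2) + "%");
}, 10000);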

What is the reason JavaScript setTimeout is so inaccurate?

I got this code over here:
var date = new Date();
setTimeout(function(e) {
  var currentDate = new Date();
  if (currentDate - date >= 1000) {
    console.log(currentDate, date);
    console.log(currentDate - date);
  } else {
    console.log("It was less than a second!");
    console.log(currentDate - date);
  }
}, 1000);
On my computer, it always executes correctly, with 1000 in the console output. Interestingly, on another computer running the same code, the timeout callback starts in less than a second and the difference currentDate - date is between 980 and 998.
I know the existence of libraries that solve this inaccuracy (for example, Tock).
Basically, my question is: what are the reasons setTimeout does not fire at exactly the given delay? Could it be that the computer is too slow and the browser automatically tries to adapt to the slowness, firing the event early?
PS: Here is a screenshot of the code and the results executed in the Chrome JavaScript console:
It's not supposed to be particularly accurate. There are a number of factors limiting how soon the browser can execute the code; quoting from MDN:
In addition to "clamping", the timeout can also fire later when the page (or the OS/browser itself) is busy with other tasks.
In other words, the way that setTimeout is usually implemented, it is just meant to execute after a given delay, and once the browser's thread is free to execute it.
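A quick way to see the "thread must be free" part (a minimal sketch; the exact number will vary by machine and browser):
var start = Date.now();

setTimeout(function () {
  // Logs roughly 500, not 100, because the timer can only fire
  // once the busy loop below has released the main thread.
  console.log("fired after ~" + (Date.now() - start) + " ms");
}, 100);

// Block the main thread for about 500 ms.
while (Date.now() - start < 500) { /* busy-wait */ }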
However, different browsers may implement it in different ways. Here are some tests I did:
var date = new Date();
setTimeout(function(e) {
  var currentDate = new Date();
  console.log(currentDate - date);
}, 1000);

// Browser    Test1   Test2   Test3   Test4
// Chrome       998    1014     998     998
// Firefox     1000    1001    1047    1000
// IE 11       1006    1013    1007    1005
Perhaps the < 1000 times from Chrome could be attributed to inaccuracy in the Date type, or perhaps Chrome uses a different strategy for deciding when to execute the code; maybe it's trying to fit it into the nearest time slot, even if the timeout delay hasn't completed yet.
In short, you shouldn't use setTimeout if you expect reliable, consistent, millisecond-scale timing.
In general, computer programs are highly unreliable when trying to execute things with higher precision than about 50 ms. The reason for this is that even on an octa-core, hyperthreaded processor the OS is usually juggling several hundred processes and threads, sometimes thousands or more. The OS makes all that multitasking work by scheduling each of them to get a slice of CPU time one after another, meaning they get "a few milliseconds of time at most to do their thing".
Implicitly, this means that if you set a timeout for 1000 ms, the chances are far from small that the browser process won't even be running at that point in time, so it's perfectly normal for the browser not to notice until 1005, 1010 or even 1050 milliseconds that it should be executing the given callback.
Usually this is not a problem, it happens, and it's rarely of utmost importance. If it is, all operating systems supply kernel level timers that are far more precise than 1 ms, and allow a developer to execute code at precisely the correct point in time. JavaScript however, as a heavily sandboxed environment, doesn't have access to kernel objects like that, and browsers refrain from using them since it could theoretically allow someone to attack the OS stability from inside a web page, by carefully constructing code that starves other threads by swamping it with a lot of dangerous timers.
As for why the test yields 980 I'm not sure - that would depend on exactly which browser you're using and which JavaScript engine. I can however fully understand if the browser just manually corrects a bit downwards for system load and/or speed, ensuring that "on average the delay is still about the correct time" - it would make a lot of sense from the sandboxing principle to just approximate the amount of time required without potentially burdening the rest of the system.
Someone please correct me if I am misinterpreting this information:
According to a post from John Resig regarding the inaccuracy of performance tests across platforms (emphasis mine)
With the system times constantly being rounded down to the last queried time (each about 15 ms apart) the quality of performance results is seriously compromised.
So there is up to a 15 ms fudge on either end when comparing to the system time.
I had a similar experience.
I was using something like this:
var iMillSecondsTillNextWholeSecond = (1000 - (new Date().getTime() % 1000));

setTimeout(function () {
  CountDownClock(ElementID, RelativeTime);
}, iMillSecondsTillNextWholeSecond); // Wait until the next whole second to start.
I noticed it would skip a second every couple of seconds; sometimes it would go longer.
However, I'd still catch it skipping after 10 or 20 seconds, and it just looked rickety.
I thought, "Maybe the timeout is too slow, or waiting for something else?"
Then I realized, "Maybe it's too fast, and the timers the browser is managing are off by a few milliseconds?"
After adding +1 millisecond to my variable I only saw it skip once.
I ended up adding +50 ms, just to be on the safe side.
var iMillSecondsTillNextWholeSecond = (1000 - (new Date().getTime() % 1000) + 50);
I know, it's a bit hacky, but my Timer is running smooth now. :)
JavaScript has a way of dealing with exact time frames. Here's one approach: save Date.now() when you start waiting, create an interval with a short update period, and calculate the difference between the dates on each tick.
Example:
const startDate = Date.now()

const intervalId = setInterval(() => {
  const currentDate = Date.now()
  if (currentDate - startDate >= 1000) { // at least a second has passed
    clearInterval(intervalId)
    return
  }
  // it was not a second yet
}, 50)

Get Math.round to round up to the next minute even if over by a second

Related to my previous question: "javascript math.round and math.floor work fine in IE and Opera but not Chrome, Safari or Firefox. Getting NaN".
I have an updated fiddle here: http://jsfiddle.net/8VmPm/
The reason behind this script may seem a little cloudy so I'll try to explain. CDRs (Call detail records) from a certain network operator come in seconds for voice records and bytes for data records. I'm writing a "calculator" to convert seconds to minutes and bytes to megabytes.
I also took it a step further and added some example plans in there to check for overage.
My problem (which is simply cosmetic, but I am OCD like that) is in the first calculation (Math.round(minusage / 60)) (lines 22 and 23 in fiddle). I need it to round up to the next minute even if the value entered in the Usage Used field is over by 1 second.
Example:
In the Minute Which Plan? dropdown, pick Plan A (500 Minutes)
In the Usage Used field, enter in 30001 (which is 500 minutes and 1 second)
Click the Calculate button
Results in the Usage Summary field will be:
500 minutes used. -1 of 500 minutes remaining
Currently, it will not say "501 minutes used" until the value entered in the Usage Used field is 30030 or greater (for Plan A).
I need it to go up to the next minute even if it's over by just 1 second (otherwise it'll confuse the non-techies who will be using it).
Any help with this matter would be greatly appreciated!
Take a look at Math.ceil(...). It is a JavaScript method for rounding up to the nearest integer. Here is a link to the MDN page.
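For example, a minimal sketch using the 30001-second case from the question (the variable names are illustrative, not taken from the fiddle):
var secondsUsed = 30001;                       // 500 minutes and 1 second
var planMinutes = 500;                         // Plan A
var minutesUsed = Math.ceil(secondsUsed / 60); // rounds up to 501

console.log(minutesUsed + " minutes used. " +
  (planMinutes - minutesUsed) + " of " + planMinutes + " minutes remaining");
// "501 minutes used. -1 of 500 minutes remaining"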
