Chrome web inspector: CPU profiler - JavaScript

My program takes about 20 seconds to load (it renders a lot of SVG objects). I am trying to profile it using the Chrome web inspector. It shows a total of 19.16 seconds, but the breakdown doesn't quite add up. I have tried the bottom-up and top-down views, sorted in different combinations, but still cannot identify the bottleneck. I can see the data arriving from the server within a few milliseconds, but it takes a long time to render.
Also, in the percentage view the total is 98%, yet every remaining entry is individually less than 0.05%, which doesn't seem right.
In my previous question I asked how to show the number of calls and the average time per call. I suspect a recursive call may be the cause, but then the total time should reflect that.
How can I identify the function that causes this delay? Any help appreciated.

You can use this function to log time differences between calls:
var timeVal = new Date().getTime();
var log = function (name) {
    // Wall-clock time for the log line.
    var str = new Date().toLocaleTimeString();
    // Milliseconds elapsed since the previous call to log().
    var newTime = new Date().getTime();
    str += " (" + (newTime - timeVal) + "ms)";
    timeVal = newTime;
    console.log(str, name);
};
Use it like this:
log("prepare for something")
// do something
log("something happened");
Output:
16:57:46 (2496ms) prepare for something
16:57:46 (130ms) something happened
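If the goal is just to bracket a specific phase (for example the SVG rendering), the built-in console.time()/console.timeEnd() pair gives a similar measurement without the manual bookkeeping. A minimal sketch (the label is arbitrary):
console.time("render svg");
// ... code that renders the SVG objects ...
console.timeEnd("render svg"); // logs something like: render svg: 1234.567ms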

Related

How can I time a "Loading Screen" page on Jmeter/Selenium

After I upload a file to the website, the website shows a loading screen. How long the loading screen stays up depends on the size of the file. I would like to measure how long the loading screen was visible. I am a beginner in JMeter and programming, so I do not know if there is a much better approach than the one I currently have.
Here is what I got so far.
var node = implicitFind(pkg.By.xpath("//div[id]")); // xpath of the loading screen
var increment = 1;
while (node != null) {
    if (increment == 1)
        var before = new Date().getTime(); // gets current time of test
    increment++;
}
var after = new Date().getTime();
WDS.log.info('------- Time taken for loading screen = ' + (after - before) + ' ms');
/*
 * The reason why I added an increment was so the before time is recorded only on the
 * first loop iteration rather than on every loop. The loop ends when the xpath no longer
 * exists, which is when the after time is recorded.
 */
The issue with this code is that JMeter never breaks out of the loop, even when the condition is false. The xpath targets text that appears when the loading screen shows up. Please help if there is a better way or if there is a flaw in my current code. Thanks y'all!
It's hard to say what's wrong without seeing your implicitFind function code. You might want to revisit it and check for element visibility or invisibility, i.e. use the WebElement.isDisplayed() function.
More information: The WebDriver Sampler: Your Top 10 Questions Answered
long startTime = System.currentTimeMillis();
driver.get("http://infoall.org");
new WebDriverWait(driver, 10).until(ExpectedConditions.presenceOfElementLocated(By.id("Calculate")));
long endTime = System.currentTimeMillis();
long totalTime = endTime - startTime;
System.out.println("Total Page Load Time: " + totalTime + " milliseconds");
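Translated to the question's JMeter WebDriver Sampler setup, a minimal sketch would wait for the overlay to become invisible instead of polling a stale element reference. The //div[@id='loading'] locator and the 60-second timeout are assumptions, adjust them to your page:
var conditions = org.openqa.selenium.support.ui.ExpectedConditions;
var locator = pkg.By.xpath("//div[@id='loading']"); // hypothetical locator for the loading overlay
var before = java.lang.System.currentTimeMillis();
// Block until the loading overlay disappears (give up after 60 seconds).
new org.openqa.selenium.support.ui.WebDriverWait(WDS.browser, 60)
    .until(conditions.invisibilityOfElementLocated(locator));
var after = java.lang.System.currentTimeMillis();
WDS.log.info('------- Time taken for loading screen = ' + (after - before) + ' ms');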

Measuring page load timings using JavaScript

I have created a script in JavaScript that is injected into our Ext JS application during automated browser testing. The script measures the amount of time taken to load the data in our grids.
Specifically, the script polls each grid, looks to see if there is a first row or a 'no data' message, and once all grids have satisfied this condition the script records the value between Date.now() and performance.timing.fetchStart, and treats this as the time the page took to load.
This script works more or less as expected; however, when compared with human-measured timings (Google stopwatch ftw), the time reported by this test is consistently around 300 milliseconds longer than the stopwatch reading.
My questions are these:
Is there a hole in this logic that would lead to incorrect results?
Are there any alternative, more accurate ways to achieve this measurement?
The script is as follows:
function loadPoll() {
    var i, duration,
        dataRow = '.firstRow', noDataRow = '.noData',
        grids = ['.grid1', '.grid2', '.grid3', '.grid4', '.grid5', '.grid6', '.grid7'];
    for (i = 0; i < grids.length; ++i) {
        var data = grids[i] + ' ' + dataRow,
            noData = grids[i] + ' ' + noDataRow;
        // If this grid has neither a first row nor a 'no data' message yet, poll again in 100 ms.
        if (!(document.querySelector(data) || document.querySelector(noData))) {
            window.setTimeout(loadPoll, 100);
            return;
        }
    }
    // All grids have rendered: record the time elapsed since the fetch started.
    duration = Date.now() - performance.timing.fetchStart;
    window.loadTime = duration;
}
loadPoll();
Some considerations:
Although I am aware that human response time can be slow, I am sure that the 300 millisecond inconsistency is not introduced by the human factor of using Google stopwatch.
Looking at the code it might appear that the polling of multiple elements could lead to the 300 ms inconsistency; however, when I change the number of elements being monitored from 7 to 1, there still appears to be a 300 ms surplus in the time reported by the automated test.
Our automated tests are executed in a framework controlled by Selenium and Protractor.
Thanks in advance if you are able to provide any insight into this!
If you use performance.now() the time should be accurate to 5 microseconds. According to MDN:
The performance.now() method returns a DOMHighResTimeStamp, measured in milliseconds, accurate to five thousandths of a millisecond (5 microseconds).
The returned value represents the time elapsed since the time origin (the PerformanceTiming.navigationStart property).
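Applied to the question's snippet, a minimal sketch of the final measurement (note that performance.now() is relative to navigationStart rather than fetchStart, so the two can differ slightly, more if redirects are involved):
// Inside loadPoll(), once every grid has rendered:
duration = performance.now(); // already relative to the time origin, no subtraction needed
window.loadTime = duration;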
If I were you I would revise the approach to how the time is actually measured. Rather than measuring the time of each loadPoll() call, you can measure how many calls you can perform in a given period of time. In other words, count the number of function iterations over a longer period, e.g. 1000 milliseconds. Here's how this can be done:
var timeout = 1000;
var startTime = new Date().getTime();
var elapsedTime = 0;
// Call loadPoll() repeatedly until the timeout expires, counting iterations.
for (var iterations = 0; elapsedTime < timeout; iterations++) {
    loadPoll();
    elapsedTime = new Date().getTime() - startTime;
}
// Output the number of achieved iterations.
console.log(iterations);
This approach will give you more consistent and accurate time estimates. Faster systems will simply achieve a greater number of iterations. Keep in mind that setInterval()/setTimeout() are not perfectly precise and for really small interval timers these functions may give you invalid results due to garbage collection, demands from events and many other things that can run in parallel while your code is being executed.

JavaScript keydown timing

I am working on a very time-sensitive application that uses key presses for user input. As I am talking milliseconds here, I went ahead and tried a version like this:
function start() {
    //stim.style.display = "block";
    rt_start = new Date().getTime();
    response_allowed = 1;
}

function end() {
    var t = rt_end - rt_start;
    //stim.style.display = "none";
    log.innerHTML = t;
    i++;
    if (i < iterations) {
        setTimeout(start, 1000);
    }
}

var rt_start;
var rt_end;
var iterations = 100;
var i = 0;
var response_allowed = 0;
var stim;
var log;

$(document).ready(function() {
    document.onkeydown = function(e) {
        if (response_allowed == 1) {
            rt_end = new Date().getTime();
            response_allowed = 0;
            end();
        }
    };
    stim = document.getElementById('stim');
    log = document.getElementById('log');
    start();
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<div id="log"></div>
<img src="https://www.gravatar.com/avatar/cfefd93404e6b0eb3cde02b4b6df4e2b?s=128&d=identicon&r=PG&f=1" id="stim" />
And it works fine, usually reporting sub-5 ms timings (just holding down a key). But as soon as I modify the code to display the image (by uncommenting the two lines), this slows down a lot, to about 30 ms.
Can someone point me in the direction of why exactly this is the case and how to possibly avoid this additional delay?
Thanks
I would recommend using a DOMHighResTimeStamp where available (with a polyfill for browsers that don't provide it).
It's a high-resolution timestamp (designed with accurate measurement in mind) to be used (e.g.) with the Navigation Timing and Web Performance APIs (search for this in the Mozilla Developer Network, as I can't share more than two links within a single post).
The quick way to get a DOMHighResTimeStamp - much like you do with var ts = new Date().getTime(); to get a regular millisecond timestamp - is:
var ts = performance.now();
As I said above, take a look at the Web Performance API at MDN. It will be very helpful if your application is really time-sensitive.
EDIT:
About your snippet: it seems to me that if you hold a key down, you will always be limited to the resolution of the keydown event (which fires repeatedly, but not every millisecond). You can easily see this behavior if you hold a character key down in a text editor and check how many times per second the character is written. This, I guess, is controlled by an OS setting.
You are also limited to the "drift" associated with setTimeout/setInterval. You see, setTimeout queues something for execution after a given delay, but it does not guarantee timely execution. It's a "best effort" scenario and, if the browser is busy doing something, it will drift significantly. Meaning: if you use a setTimeout to re-enable a response_allowed variable after 1 second, you can expect it to re-enable it after "about" (but not exactly) 1 second.
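As a sketch, the Date-based timestamps in the question could be swapped for the high-resolution clock (variable names as in the snippet above):
// In start():
rt_start = performance.now();
// In the keydown handler:
rt_end = performance.now();
// end() then computes a difference with sub-millisecond resolution:
var t = rt_end - rt_start;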

JavaScript countdown drift

I have been coding a penny auction site and running into a problem with the countdowns. The starting time seems to be a little different on different machines (usually a discrepancy of about a second, but sometimes 2 or 3), which obviously is a big issue for bidders. I'm thinking a big part of the answer is simply network lag, but (a) are there other factors involved? and (b) is there a way to correct for network lag somehow?
I've tried hitting the server via an Ajax call every second, and that works well enough (though there's always a bit of lag) but I'd rather not have to do that because it'll be hard on the server.
JavaScript development is not my forte, so I'd appreciate any tips and feedback!
Here's my code, as generated on the server:
jQuery(document).ready(function() {
    var aid = " . $aid . ";
    var loadTime = Math.floor(jQuery.now() / 1000);
    //alert(loadTime);
    serverTime = " . time() . ";
    var clockDiff = loadTime - serverTime;
    var diff;
    auctionExpirationValue" . $aid . " = " . $expiry . ";
    var newServerTime = setInterval(function() {
        diff = window['auctionExpirationValue' + aid] - Math.floor(jQuery.now() / 1000) + clockDiff;
        diff_string = parse_countdown(diff);
        jQuery('#auction-expiry').html(diff_string);
    }, 1000);
});
The clockDiff variable is there to account for any clock differences between the user's machine and the server. Obviously, if one machine is ahead or behind, the user would see different values in the countdown.
As you can see, the code loops every second (or more or less every second, I understand it's not exact), calculates the difference between now and the auction expiry (compensating with clockDiff), formats it, and displays it. Pretty simple. The auctionExpirationValue*** global variable is used to store the auction expiry time locally as a timestamp.
My client has also informed me that on his iPad the countdown would sometimes drift a little, in addition to the original discrepancy. What's the explanation there?
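As a sketch of the network-lag correction asked about in (b), the offset can be estimated by assuming the server produced its timestamp roughly half-way through the request's round trip. The /servertime endpoint returning a Unix timestamp in seconds is a hypothetical stand-in for whatever the server exposes:
// Hypothetical: GET /servertime returns the server's Unix timestamp in seconds.
function estimateClockDiff(callback) {
    var requestStart = jQuery.now();
    jQuery.get('/servertime', function(serverTime) {
        var roundTrip = jQuery.now() - requestStart;
        // Assume the server timestamp corresponds to the middle of the round trip.
        var clientAtServerTime = Math.floor((requestStart + roundTrip / 2) / 1000);
        callback(clientAtServerTime - parseInt(serverTime, 10)); // drop-in replacement for clockDiff
    });
}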

JavaScript anti-flood spam protection?

I was wondering if it were possible to implement some kind of crude JavaScript anti-flood protection.
My code receives events from a server through AJAX, but sometimes these events can be quite frequent (they're not governed by me).
I have attempted to come up with a method of combating this, and I've written a small script: http://jsfiddle.net/Ry5k9/
var puts = {};
function receiverFunction(id, text) {
    // Reset the bookkeeping whenever a new id shows up.
    if (!puts[id]) {
        puts = {};
        puts[id] = {};
    }
    puts[id].start = puts[id].start || new Date();
    var count = puts[id].count = puts[id].count + 1 || 0;
    var time = (new Date() - puts[id].start) * 0.001; // seconds since the first message from this id
    $("text").set("text", (count / time.toFixed()).toString() + " lines/second");
    doSomethingWithTextIfNotSpam(text);
}
which I think could prove effective against these kinds of attacks, but I'm wondering if it can be improved or perhaps rewritten?
So far I think anything more than 2.5 or 3 lines per second looks like spam, but because the start mark is set... well... at the start, as time progresses an offender could simply idle for a while and then commence the flood, with the average effectively never passing 1 line per minute.
Also, I would like to add that I use the MooTools and Lo-Dash libraries (maybe they provide some useful methods), but it would be preferable if this could be done using native JS.
Any insight is greatly appreciated!
If you are concerned about the frequency at which a particular JavaScript function fires, you could debounce the function.
In your example, I guess it would be something like:
onSuccess: _.debounce(someOtherFunction, timeOut)
where timeOut is the number of milliseconds that must pass without another call before someOtherFunction actually runs. Note that the debounced wrapper should be created once, not rebuilt inside the handler on every call.
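A minimal sketch, assuming Lo-Dash is loaded as _ and reusing doSomethingWithTextIfNotSpam from the question:
// Create the debounced wrapper once, so repeated events share the same timer.
var handleTextDebounced = _.debounce(function (text) {
    doSomethingWithTextIfNotSpam(text);
}, 500); // run only after 500 ms have passed without a new event
// Then, in the AJAX success handler:
// onSuccess: function (id, text) { handleTextDebounced(text); }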
I know you asked about native JavaScript, but maybe take a look at RxJS.
RxJS or Reactive Extensions for JavaScript is a library for transforming, composing, and querying streams of data. We mean all kinds of data too, from simple arrays of values, to series of events (unfortunate or otherwise), to complex flows of data.
There is an example on that page which uses the throttle method, which "Ignores values from an observable sequence which are followed by another value before dueTime" (see source).
keyup = Rx.Observable.fromEvent(input, 'keyup')
    .select(function (ev) {
        return ev.target.value;
    })
    .where(function (text) {
        return text.length > 2;
    })
    .throttle(500)
    .distinctUntilChanged();
There might be a similar way to get your 2.5-3 per second and ignore the rest of the events until the next second.
I've spent many days pondering effective measures to prevent message flooding, until I came across a solution implemented elsewhere.
First, we need three things: a penalty variable, a score variable, and the point in time when the last action occurred:
var score = 0;
var penalty = 200; // Penalty can be fine-tuned.
var lastact = new Date();
Next, we decrease the score in proportion to the time elapsed since the previous message.
/* The shorter the gap between messages, the less the score decays,
 * so more time has to pass to negate the penalty each message adds.
 */
score -= (new Date() - lastact) * 0.05;
// The score shouldn't drop below zero.
score = (score < 0) ? 0 : score;
Then we add the message penalty and check if it crosses the threshold:
if ((score += penalty) > 1000) {
    // Do things.
}
Don't forget to update the last action time afterwards:
lastact = new Date();
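Put together, a minimal sketch of the whole check (names from the snippets above; the 0.05 decay factor, 200 penalty and 1000 threshold are illustrative and can be tuned):
var score = 0;
var penalty = 200;
var threshold = 1000;
var lastact = new Date();

function isFlooding() {
    // Decay the score according to how long the sender stayed quiet.
    score -= (new Date() - lastact) * 0.05;
    score = (score < 0) ? 0 : score;
    lastact = new Date();
    // Charge the per-message penalty and compare against the threshold.
    return (score += penalty) > threshold;
}
// Usage: call isFlooding() for every incoming message and ignore (or flag) the message when it returns true.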
