Profiling JavaScript performance using an API - javascript

Hey, I have been exploring the Google Chrome debugging and profiling tools, and they are truly amazing. One useful feature I encountered is the call tree of executions together with their timings (both self time and total children time) in a full tree structure.
So I would like to know whether we can get the call tree with its timings, in a tree structure, using some JS API. I could use these tree timings to benchmark different versions of my code, and various other things could be done if I can automate it.
I could not find much useful material on the net. Can you give me some direction on this? Thanks in advance.
Let me know in the comments if some parts of the question are unclear and I will rephrase.

If you want to benchmark different versions of your app, you can easily achieve this in Profiles in Chrome DevTools. You can record and save them to your computer, and then load them again anytime in the future. It's not just for the current session.
For example, you record your profile for Version 1. A few days later you load up your app in Chrome, record the new Profile and then import the old one and compare the charts or the tree view.
You could even open the saved files on your computer, which are stored in JSON format. You have all the data you need in there to play with. You can run a server to parse that data and extract the relevant information into a format you like. The amount of data can be huge and slow to handle though.
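For example, here is a rough sketch of walking a saved profile with Node.js. The exact field names depend on the DevTools version that produced the file, so treat functionName, selfTime and children below as assumptions and inspect your own file first:
var fs = require('fs');
var profile = JSON.parse(fs.readFileSync('Profile-1.cpuprofile', 'utf8'));

function walk(node, depth) {
    // functionName / selfTime / children are assumptions based on older
    // CPU-profile formats; newer formats store a flat "nodes" array instead
    var name = node.functionName || '(anonymous)';
    var time = (node.selfTime != null) ? ' ' + node.selfTime.toFixed(2) + 'ms' : '';
    console.log(new Array(depth + 1).join('  ') + name + time);
    (node.children || []).forEach(function (child) { walk(child, depth + 1); });
}

if (profile.head) walk(profile.head, 0);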
Update regarding comment:
Both console.timeline and console.timelineEnd were deprecated and replaced with console.time and console.timeEnd. There are no return values to store, though, so you can't do anything with the results from JavaScript. You can, however, use window.performance:
var start = window.performance.now();
// ... the code you want to measure ...
var end = window.performance.now();
var timeSpent = end - start;   // elapsed time in milliseconds
var stack = new Error().stack; // capture the current call stack
You could then do what you like with the results.
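For instance (just a sketch, with illustrative names), you could collect each measurement into an array and dump it as a table in the console:
var timings = [];
timings.push({ name: 'myFunc', ms: timeSpent, stack: stack });
console.table(timings); // renders the collected measurements as a table in DevTools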
If you want to time a function from a 3rd party, you could override it and apply the original function in between:
var oldFunc = myFunc;
myFunc = function() {
    var start = window.performance.now();
    var returnVal = oldFunc.apply(this, arguments); // call the original function
    var end = window.performance.now();
    var timeSpent = end - start;
    var stack = new Error().stack; // get call stack
    return returnVal;
};

You can use console.profile() and console.profileEnd() to start and stop recording a profile. After it runs, you can see the result in the Profiles tab of the developer tools.
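For example (a small sketch; runBenchmark is just a placeholder for whatever code you want to measure):
console.profile('version-1');    // start recording a CPU profile named "version-1"
runBenchmark();                  // placeholder for the code under test
console.profileEnd('version-1'); // stop recording; the profile appears in DevTools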

Related

NetSuite restlet write performance is poor

Edit: I've redacted actual numbers and replaced them with pseudoswears because I've been told sharing performance data is against Netsuite's TOS.
I'm integrating our accounting system with NetSuite using restlets, and overall it has gone pretty well with the glaring exception of performance. I have figured out that nlapiLoadRecord is Satan's own child from a performance perspective, so I avoid it whenever possible favoring the search api and now my read restlets are pretty snappy. However, whenever I ask a restlet to write anything it is as slow as a tortoise stuck in cold tar. I'm generally profiling the nlapiSubmitRecord itself at between SLOW and REALLY DAMN SLOW seconds. This seems insane to me. No one would use NetSuite if the performance were always this poor for writing. I'll include a couple of code examples below. Any tips on speeding up NetSuite restlet write performance would be appreciated.
In this first one receivedInvoice is the incoming data, and findCurrencyCode and findCustomerByCustomerNumber are well performing functions that do those things. I just clocked this at a nearly unbelievable HOLY MONKEYS THAT IS SLOW seconds for a simple invoice with one line item, nearly all of the time passing while I waited for nlapiSubmitRecord to finish.
var createdInvoice = nlapiCreateRecord('invoice');
createdInvoice.setFieldValue('customform', Number(receivedInvoice.transactionType));
createdInvoice.setFieldValue('memo', receivedInvoice.message);
createdInvoice.setFieldValue('duedate', receivedInvoice.dateDue);
createdInvoice.setFieldValue('currency', findCurrencyCode(receivedInvoice.currencyUnit));
createdInvoice.setFieldValue('location', Number(receivedInvoice.location));
createdInvoice.setFieldValue('postingperiod', findPostingPeriod(receivedInvoice.datePosted));

var customer = findCustomerByCustomerNumber(receivedInvoice.customerNumber);
createdInvoice.setFieldValue('entity', customer.customerId);
createdInvoice.setFieldValue('custbody_end_user', customer.customerId);
createdInvoice.setFieldValue('department', customer.departmentId);

var itemCount = receivedInvoice.items.length;
for (var i = 0; i < itemCount; i++) {
    createdInvoice.selectNewLineItem('item');
    createdInvoice.setCurrentLineItemValue('item', 'item', receivedInvoice.items[i].item);
    createdInvoice.setCurrentLineItemValue('item', 'quantity', receivedInvoice.items[i].quantity);
    createdInvoice.setCurrentLineItemValue('item', 'rate', receivedInvoice.items[i].price);
    createdInvoice.setCurrentLineItemValue('item', 'custcol_list_rate', receivedInvoice.items[i].price);
    createdInvoice.setCurrentLineItemValue('item', 'amount', receivedInvoice.items[i].totalAmount);
    createdInvoice.setCurrentLineItemValue('item', 'description', receivedInvoice.items[i].description);
    createdInvoice.commitLineItem('item');
}
var recordNumber = nlapiSubmitRecord(createdInvoice, false, true);
In this one I think I'm committing a performance heresy by opening the record in dynamic mode, but I'm not sure how else to get the possible line items. Simply opening a new record in dynamic mode clocks in at around SLOW seconds. Again, the submit is where most time is being eaten (often around OH DEAR SWEET MOTHER OF HORRIBLE seconds), although this one eats a decent amount of time as I mess with the line items, again presumably because I have opened the record in dynamic mode.
var customerPayment = nlapiCreateRecord('customerpayment', {recordmode: 'dynamic'});
customerPayment.setFieldValue('customer', parseInt(customerId));
customerPayment.setFieldValue('payment', paymentAmount);
customerPayment.setFieldValue('paymentmethod', paymentMethod);
customerPayment.setFieldValue('checknum', transactionId);
customerPayment.setFieldValue('currency', currency);
customerPayment.setFieldValue('account', account);

var applyCount = customerPayment.getLineItemCount('apply');
if (applyCount > 0) {
    for (var i = 1; i <= applyCount; i++) {
        var thisInvoice = customerPayment.getLineItemValue('apply', 'refnum', i);
        if (thisInvoice == invoiceToPay) {
            customerPayment.selectLineItem('apply', i);
            customerPayment.setCurrentLineItemValue('apply', 'apply', 'T');
            customerPayment.setCurrentLineItemValue('apply', 'amount', paymentAmount);
            customerPayment.commitLineItem('apply');
        }
    }
}
nlapiSubmitRecord(customerPayment, false, true);
A few thoughts:
(just to get it off my chest) Integrating your accounting system with Netsuite just sounds odd. Netsuite is an accounting system and generally is the accounting system of record for the orgs that use it. If you are not using Netsuite for accounting you might want to consider what utility it has for the price and get off it.
When I integrate an external system with Netsuite I generally try to make it async. I do this by getting the raw info into a custom record and then kick off a scheduled script to process the queued update. This lets my api return quickly. When I process the queue I store errors in the queue record so that if anything comes up I can fix the data or code and re-submit it.
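A rough sketch of that pattern in SuiteScript 1.0 (the custom record type, field ids and script/deployment ids here are hypothetical, substitute your own):
function post(payload) {
    // stash the raw incoming data in a custom "queue" record
    var queued = nlapiCreateRecord('customrecord_invoice_queue');         // hypothetical record type
    queued.setFieldValue('custrecord_payload', JSON.stringify(payload));  // hypothetical field
    queued.setFieldValue('custrecord_status', 'PENDING');                 // hypothetical field
    var queueId = nlapiSubmitRecord(queued, false, true);

    // kick off the scheduled script that drains the queue and does the real nlapiSubmitRecord work
    nlapiScheduleScript('customscript_process_queue', 'customdeploy_process_queue'); // hypothetical ids

    // the restlet can now return immediately instead of waiting on the slow submit
    return { queued: true, queueId: queueId };
}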
One apparently major source of slow transaction submits (aside from slow UE scripts) is the state of your books. I had a startup customer who did really well, but they never closed their books and they were using, IIRC, Average Costing. Every time they saved a posting transaction, Netsuite was recalculating the whole period (which, at the point where things were grinding to a halt, was about 2 years of records for a very busy site). When they started closing periods, transaction save times went way down. When they converted to standard costing, transaction save times went down again (I imagine LIFO would be faster than Avg and slower than standard, but YMMV).
Also, some notes on your code.
The normal way to create an invoice is
nlapiTransformRecord('customer', customer.customerId, 'invoice'); or
nlapiTransformRecord('customer', customer.customerId, 'invoice', {recordmode:'dynamic'});
I've never tested whether this has an effect on submit times, but it might help since NS will start the save from a slightly better place (grasping at straws, but with NS sometimes non-obvious changes have speed benefits).
I'm also not sure how changing the customform in dynamic mode works. When you know the form, you can also add that to the init defaults:
{recordmode:'dynamic', customform:receivedInvoice.transactionType}
One of the reasons why your submit is slow might be that there are a lot of User Event scripts attached to the record. Since it is the restlet that saves the record, it will trigger the User Event scripts.

Web Worker Creating Garbage

I'm working on a project that utilizes web workers. It seems that the workers are generating quite a bit of extra garbage that has to be collected from the message passing.
I'm sending three things to the worker via postMessage from the main thread. The first is just a number, the second is an array with 7 numbers, and the third is the date. The first two are properties of an object, as seen below. This is called every 16ms on rAF for about 20 objects. The GC ends up collecting 12MB every 2 seconds or so. I'm wondering if there is a way to do this without creating so much garbage? Thanks for any help!
// planetnum (a property of the object) is just a number like: 1
// planetele (a property of the object) looks like this:
// [19.22942, 313.4868, 0.04441, 0.7726, 170.5310, 73.9893, 84.3234]
// date is just the Date object
// posted to the worker like so:
planetWorker.postMessage({
    "planetnum": planet.num,
    "planetele": planet.ele,
    "date": datet
});

// the worker.js file uses that information to do calculations
// and sends back the planet number, with xyz coordinates (4 numbers)
postMessage({data: {planetnum: planetnum, planetpos: planetpos}});
I tried two different avenues and ended up using a combination of them. First, before I sent some of the elements over I used JSON.stringify to convert them to strings, then JSON.parse to get them back once they were sent to the worker. For the array I ended up using transferable objects. Here is a simplified example of what I did:
var ast = [];
ast.elements = new Float64Array([0.3871, 252.2507, 0.20563, 7.005, 77.4548, 48.3305, 0.2408]);
ast.num = 1;
var astnumJ = JSON.stringify(ast.num); // probably not needed, just an example

// From the main thread, post a message to the worker
asteroidWorker.postMessage({
    "asteroidnum": astnumJ,
    "asteroidele": ast.elements.buffer
}, [ast.elements.buffer]);
This transfers the array to the worker rather than copying it, which reduces the garbage made. It is now not accessible in the main thread, so once the worker posts its message you have to send the array back to the main thread, or it won't be accessible as a property of ast anymore. In my case, because I have 20-30 ast objects, I need to make sure they all have their elements restored via postMessage before I call another update on them. I did this with a simple counter in a loop.
// In worker.js
asteroidele = new Float64Array(e.data.asteroidele); // typed view over the transferred buffer
asteroidnum = JSON.parse(e.data.asteroidnum);       // parse JSON

// Do calculations with this information in the worker, then return it to the main thread
// Post message from worker back to main
self.postMessage({
    asteroidnum: asteroidnum,
    asteroidpos: asteroidpos, // position calculated from the elements
    asteroidele: asteroidele  // return the elements buffer back to main
});

// Main thread worker onmessage function
asteroidWorker.onmessage = function(e) {
    var data1 = e.data;
    ast.ele = data1.asteroidele; // restore elements back to the ast object
};
Not sure this is the best approach yet, but it does work for sending the array to and from the worker without making a bunch of extra garbage. I think the best approach here will be to send the array to the worker and leave it there, then just return updated positions. Working on that still.
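A sketch of that idea (names are illustrative; computePosition stands in for the existing calculation): send the elements once, then on each update only a number and a date cross the boundary, and only positions come back.
// main thread -- send the elements once at startup (transferred, not copied)
asteroidWorker.postMessage({ init: true, num: ast.num, ele: ast.elements.buffer },
                           [ast.elements.buffer]);
// ...then on each update only small values are posted:
asteroidWorker.postMessage({ num: ast.num, date: datet.getTime() });

// worker.js -- keep the elements here for the lifetime of the worker
var elementsByNum = {};
self.onmessage = function (e) {
    if (e.data.init) {
        elementsByNum[e.data.num] = new Float64Array(e.data.ele);
        return;
    }
    var pos = computePosition(elementsByNum[e.data.num], e.data.date); // your existing math
    self.postMessage({ num: e.data.num, pos: pos });
};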

How to debug time between $ionicView.beforeEnter and $ionicView.enter

I use ionic, and it takes more than 1s between $ionicView.beforeEnter and $ionicView.enter.
How can I find which part of my code is taking so much time? Batarang is not helping me much and I can't figure out an easy way of doing this...
Probably not very helpful, but when I had a similar issue I could not find the culprit using the Chrome profiler, and had to comment out/exclude parts of the code in the controller I transition to, one by one. The problem was that a third-party calendar component being configured during the controller's init stage was slowing the transition (view display). Once it was commented out, everything worked fine. Since this was not my code and I did not want to mess with it, I had to add a progress spinner on the transition.
Maybe you can use the debug tools provided with Chrome, like Timeline or Profiles: start the profiling or record the timeline and check what happens between $ionicView.beforeEnter and $ionicView.enter. Both features have a search box, so you can look for $ionicView.beforeEnter and see what is taking time until $ionicView.enter. It's probably not the easiest way, but I hope it will help you.
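One way to scope the recording to exactly that window (a sketch for Ionic 1, inside the controller of the view you transition to):
$scope.$on('$ionicView.beforeEnter', function () {
    console.time('beforeEnter -> enter');
    console.profile('view transition');    // start a named CPU profile
});
$scope.$on('$ionicView.enter', function () {
    console.profileEnd('view transition'); // the profile covers only the slow span
    console.timeEnd('beforeEnter -> enter');
});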
Have you tried monitoring the traffic in the Network tab in console? The time in ms is a good indicator of which xhr calls are running longer than expected... run a speed test.
Otherwise, if you're using Chrome, I would just drop some debugger statements throughout the flow of that Ionic view's state.
I cannot think of an easy way of doing this. But extending on what #nico_ mentioned, using the Chrome JavaScript debug tool, you should set a breakpoint on your $ionicView.beforeEnter and a breakpoint on $ionicView.enter, then step through the code between the breakpoints.
You can read more about Chrome's JavaScript debug tools and how to set up breakpoints here:
https://developer.chrome.com/devtools/docs/javascript-debugging
There is no code "between the breakpoints"... There are so many functions called between the two events...
-- Tyrius
You should try to identify functions that are taking too much time to run.
Note: I assume your app is not slowed down by downloads; you can check your download times in Chrome DevTools (if TTFB is too high, you might have server-side slowness).
If you know which functions are called, you have two possibilities:
When there are only a few functions and they are not called too many times:
function ExampleFunction() {
    console.time("ExampleFunction");
    // ExampleFunction code
    // ...
    console.timeEnd("ExampleFunction");
    // outputs in the console the time between the time() call and the timeEnd() call
}
If there are a lot of functions called many times:
I suggest you use my little JS tool called MonitorJS to help you identify code blocks that take too much time to run:
function ExampleFunction() {
    let mId = Monitor.Start("ExampleFunction");
    // ExampleFunction code
    // ...
    Monitor.Stop(mId);
}
and when you want to see which function is taking too much time, call this function:
function ShowMonitorRecords() {
    // get all records sorted by total_time desc
    let records = Monitor.GetAll().sort((a, b) => { return b.total_time - a.total_time; });
    // print every record
    for (let record of records) {
        let avg = record.total_time / record.call_count;
        console.log(record.name + ": " + record.total_time.toFixed(2)
            + "ms - called " + record.call_count
            + " time(s) - average time : " + avg.toFixed(2) + "ms");
    }
}
// will output something like :
// Foo: 7234.33ms - called 2 time(s) - average time : 3617.16ms
// Bar: 6104.25ms - called 3 time(s) - average time : 2034.75ms
Once you know which function is taking too much time, you can reduce the scope of Start/Stop to identify the exact block of code slowing down your app and refactor.
Hope this helps!

ImageJ: .show() does not show an image

I am very new to JavaScript and I would like to process some images in Fiji. I have been using the macro language for a while, but I am trying to get familiar with the formal ImageJ/Fiji API. I am trying to run the following simplistic piece of code; it runs with no errors, but it does not show any image in the end. What's going wrong?
importClass(Packages.ij.plugin.filter.GaussianBlur);
var image = IJ.openImage("/home/.../KNIIC_BC_Cam2_AirBubble2_Image1038.bmp");
IJ.run(image, "8-bit", "");
var dpl = image.getProcessor().duplicate();
var gs = new GaussianBlur();
gs.blur(dpl,20);
new ImagePlus(gs).show();
Thanks in advance
The problem is the way you deal with the ImagePlus: in the last line you try to create a new ImagePlus, but there is no chance that it contains any information from your loaded image.
GaussianBlur processes an ImageProcessor that you get via the ImagePlus#getProcessor() method. If you look at the API documentation, you'll also see that blur(ImageProcessor, double) is deprecated in favor of one of the other methods: you might want to use blurGaussian(ImageProcessor, double, double, double) here.
This code would work:
importClass(Packages.ij.plugin.filter.GaussianBlur);
var imp = IJ.openImage("http://imagej.nih.gov/ij/images/clown.jpg");
IJ.run(imp, "8-bit", "");
var ip = imp.getProcessor();
var gs = new GaussianBlur();
gs.blurGaussian(ip,20,20,0.01);
imp.show();
However, it uses the low-level way of interacting with the GaussianBlur class. To make your life easier, you can also record the JavaScript command in the GUI via Plugins > Macros > Record... and then choose Record: JavaScript before performing the Gaussian blur via Process > Filters > Gaussian Blur.... This would make your code much shorter:
var imp = IJ.openImage("http://imagej.nih.gov/ij/images/clown.jpg");
IJ.run(imp, "8-bit", "");
IJ.run(imp, "Gaussian Blur...", "sigma=20");
imp.show();
For general help with Javascript scripting in ImageJ, see these two links to the Fiji wiki.
Edit: Starting from ImageJ 1.47n5, ImageProcessor has a new method blurGaussian(double sigma), shortening the above (low level) code to:
var imp = IJ.openImage("http://imagej.nih.gov/ij/images/clown.jpg");
IJ.run(imp, "8-bit", "");
imp.getProcessor().blurGaussian(20);
imp.show();

Loading large amount of data into memory - most efficient way to do this?

I have a web-based documentation searching/viewing system that I'm developing for a client. Part of this system is a search system that allows the client to search for a term[s] contained in the documentation. I've got the necessary search data files created, but there's a lot of data that needs to be loaded, and it takes anywhere from 8-20 seconds to load all the data. The data is broken into 40-100 files, depending on what documentation needs to be searched. Each file is anywhere from 40-350kb.
Also, this application must be able to run on the local file system, as well as through a webserver.
When the webpage loads up, I can generate a list of what search data files I need load. This entire list must be loaded before the webpage can be considered functional.
With that preface out of the way, let's look at how I'm doing it now.
After I know that the entire webpage is loaded, I call a loadData() function
function loadData() {
    var d = new Date();
    var curr_min = d.getMinutes();
    var curr_sec = d.getSeconds();
    var curr_mil = d.getMilliseconds();
    console.log("test.js started background loading, time is: " + curr_min + ":" + curr_sec + ":" + curr_mil);
    recursiveCall();
}

function recursiveCall() {
    if (file_array.length > 0) {
        var string = file_array.pop();
        setTimeout(function() { $.getScript(string, recursiveCall); }, 1);
    } else {
        var d = new Date();
        var curr_min = d.getMinutes();
        var curr_sec = d.getSeconds();
        var curr_mil = d.getMilliseconds();
        console.log("test.js stopped background loading, time is: " + curr_min + ":" + curr_sec + ":" + curr_mil);
    }
}
What this does is processes an array of files sequentially, taking a 1ms break between files. This helps prevent the browser from being completely locked up during the loading process, but the browser still tends to get bogged down by loading the data. Each of the files that I'm loading look like this:
AddToBookData(0,[0,1,2,3,4,5,6,7,8]);
AddToBookData(1,[0,1,2,3,4,5,6,7,8]);
AddToBookData(2,[0,1,2,3,4,5,6,7,8]);
Where each line is a function call that is adding data to an array. The "AddToBookData" function simply does the following:
function AddToBookData(index1, value1) {
    BookData[BookIndex].push([index1, value1]);
}
This is the existing system. After loading all the data, "AddToBookData" can get called 100,000+ times.
I figured that was pretty inefficient, so I wrote a script to take the test.js file, which contains all the function calls above, and process it into a giant array equal to the data structure that BookData creates. Instead of making all the function calls that the old system did, I simply do the following:
var test_array = [..........(data structure I need).......];
BookData[BookIndex] = test_array;
While I was expecting to see a performance increase because I was removing all the function calls above, this method actually takes slightly more time to create the exact same data structure. I should note that "test_array" holds slightly over 90,000 elements in my real-world test.
It seems that both methods of loading data have roughly the same CPU utilization. I was surprised to find this, since I was expecting the second method to require little CPU time, given that the data structure is created beforehand.
Please advise?
Looks like there are two basic areas for optimising the data loading that can be considered and tackled separately:
Downloading the data from the server. Rather than one large file, you should gain wins from parallel loads of multiple smaller files. Experiment with the number of simultaneous loads, bearing in mind browser limits and the diminishing returns of having too many parallel connections. See my parallel vs sequential experiments on jsfiddle, but bear in mind that the results will vary due to the vagaries of pulling the test data from github - you're best off testing with your own data under more tightly controlled conditions.
Building your data structure as efficiently as possible. Your result looks like a multi-dimensional array; this interesting article on JavaScript array performance may give you some ideas for experimentation in this area.
But I'm not sure how far you'll really be able to go with optimising the data loading alone. To solve the actual problem with your application (the browser locking up for too long), have you considered options such as the following?
Using Web Workers
Web Workers might not be supported by all your target browsers, but should prevent the main browser thread from locking up while it processes the data.
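A minimal sketch of that idea (the file names and message shape are illustrative, and the worker-support caveats above apply): fetch the raw text on the main thread, hand it to a worker to parse, and only merge the finished structure back in.
// main thread
var parserWorker = new Worker('parse-worker.js');
parserWorker.onmessage = function (e) {
    BookData[e.data.bookIndex] = e.data.rows;   // merge the parsed result
};
$.get('searchdata/file1.txt', function (text) {
    parserWorker.postMessage({ bookIndex: 0, text: text });
}, 'text');

// parse-worker.js
self.onmessage = function (e) {
    var rows = [];
    // ...parse e.data.text into the same structure AddToBookData builds...
    self.postMessage({ bookIndex: e.data.bookIndex, rows: rows });
};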
For browsers without workers, you could consider increasing the setTimeout interval slightly to give the browser time to service the user as well as your JS. This will make things actually slightly slower but may increase user happiness when combined with the next point.
Providing feedback of progress
For both worker-capable and worker-deficient browsers, take some time to update the DOM with a progress bar. You know how many files you have left to load so progress should be fairly consistent and although things may actually be slightly slower, users will feel better if they get the feedback and don't think the browser has locked up on them.
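For instance, building on the existing recursiveCall(), something like this could drive a simple progress indicator (the #load-progress element is hypothetical):
var totalFiles = file_array.length;          // captured before loading starts
function updateProgress() {
    var done = totalFiles - file_array.length;
    $('#load-progress').text('Loading search data: ' + done + ' / ' + totalFiles);
}
// call updateProgress() just before the setTimeout inside recursiveCall()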
Lazy Loading
As suggested by jira in his comment: if Google Instant can search the entire web as we type, is it really not possible to have the server return a file with all locations of the search keyword within the current book? This file should be much smaller and faster to load than the locations of all words within the book, which is what I assume you are currently trying to load as quickly as you can.
I tested three methods of loading the same 9,000,000-point dataset into Firefox 3.6.4:
1) Stephen's getJSON method
2) My function-based push method
3) My pre-processed array-appending method
I ran my tests two ways. In the first iteration of testing, I imported 100 files containing 10,000 rows of data each, with each row containing 9 data elements [0,1,2,3,4,5,6,7,8].
In the second iteration, I tried combining files, so that I was importing 1 file with 9 million data points.
This was a lot larger than the dataset I'll be using, but it helps demonstrate the speed of the various import methods.
              Separate files:   Combined file:
JSON:         34 seconds        34 seconds
FUNC-BASED:   17.5 seconds      24 seconds
ARRAY-BASED:  23 seconds        46 seconds
Interesting results, to say the least. I closed the browser after loading each webpage, and ran the tests 4 times each to minimize the effect of network traffic/variation (they were run across a network, using a file server). The numbers you see are the averages, although the individual runs differed by only a second or two at most.
Instead of using $.getScript to load JavaScript files containing function calls, consider using $.getJSON. This may boost performance. The files would now look like this:
{
"key" : 0,
"values" : [0,1,2,3,4,5,6,7,8]
}
After receiving the JSON response, you could then call AddToBookData on it, like this:
function AddToBookData(json) {
BookData[BookIndex].push([json.key,json.values]);
}
If your files have multiple sets of calls to AddToBookData, you could structure them like this:
[
{
"key" : 0,
"values" : [0,1,2,3,4,5,6,7,8]
},
{
"key" : 1,
"values" : [0,1,2,3,4,5,6,7,8]
},
{
"key" : 2,
"values" : [0,1,2,3,4,5,6,7,8]
}
]
And then change the AddToBookData function to compensate for the new structure:
function AddToBookData(json) {
$.each(json, function(index, data) {
BookData[BookIndex].push([data.key,data.values]);
});
}
Addendum
I suspect that, regardless of what method you use to transport the data from the files to the BookData array, the true bottleneck is the sheer number of requests. Must the files be fragmented into 40-100? If you change to the JSON format, you could load a single file that looks like this:
{
"file1" : [
{
"key" : 0,
"values" : [0,1,2,3,4,5,6,7,8]
},
// all the rest...
],
"file2" : [
{
"key" : 1,
"values" : [0,1,2,3,4,5,6,7,8]
},
// yadda yadda
]
}
Then you could do one request, load all the data you need, and move on... Although the browser may initially lock up (although, maybe not), it would probably be MUCH faster this way.
Here is a nice JSON tutorial, if you're not familiar: http://www.webmonkey.com/2010/02/get_started_with_json/
Fetch all the data as a string, and use split(). This is the fastest way to build an array in Javascript.
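A sketch of what that could look like (the file layout below - one "key|v,v,v,..." record per line - is an assumption):
$.get('bookdata/file1.txt', function (text) {
    var lines = text.split('\n');
    for (var i = 0; i < lines.length; i++) {
        if (!lines[i]) continue;                      // skip blank lines
        var parts = lines[i].split('|');              // "key|v,v,v,..."
        var values = parts[1].split(',');
        for (var j = 0; j < values.length; j++) {
            values[j] = +values[j];                   // convert strings to numbers
        }
        BookData[BookIndex].push([+parts[0], values]);
    }
}, 'text');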
There's an excellent article on a very similar problem, from the people who built the Flickr search: http://code.flickr.com/blog/2009/03/18/building-fast-client-side-searches/
