Monitoring and debugging performance of Angular components - javascript

One of my Angular (5.1.0) components slows down the whole app considerably: reacting to a click event takes 30ms in every other view of the app and around 350ms in this one problematic view. And although the desktop performance is almost indistinguishable between the "problematic" and "normal" views, the difference on a mobile device is obvious and the penalty is staggering (in the example above, the click event would take more like 1500ms on a smartphone).
There are basically two new components which have been added recently. One of them holds the view, the other one renders some data (and is used twice on the page). I would put my bet on the latter, but I do not know where to start. Chrome DevTools and Safari Developer Tools give me the overall event times, but either I do not know how to dig deeper or I need different tools or a different methodology altogether to pinpoint what exactly causes the lag. Any ideas?

For the "monitoring" aspect of your question, you can try Bucky an opensource tool to monitor webapp performance on browser side.
There is also a post about how to monitor AngularJS with Statsd here.
If you're really care about measuring user experiences, you can take a look at using percentiles, some information can be found here and here.
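To pinpoint where the time goes inside that one view (as opposed to monitoring in production), a rough sketch using the browser's standard User Timing API (performance.mark/measure) can help compare the slow view with a normal one; the measures also show up in a Chrome DevTools Performance recording. The handler and mark names below are only illustrative:
// inside the click handler of the slow view (names are illustrative)
onItemClick(item) {
  performance.mark('click-start');
  this.doTheActualWork(item);   // whatever the handler already does
  // schedule the end mark for the next frame, after change detection has finished
  requestAnimationFrame(() => {
    performance.mark('click-end');
    performance.measure('slow-view-click', 'click-start', 'click-end');
    const entries = performance.getEntriesByName('slow-view-click');
    console.log('click handled in', entries[entries.length - 1].duration.toFixed(1), 'ms');
  });
}
Note that this captures the script and change-detection work up to just before the next paint; for the paint cost itself, the DevTools Performance panel is still the reference.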

Related

Best way to make and handle custom Electron Dialogs?

In our project, we need custom Electron dialog windows that open really fast.
I tried the dialog module (in Electron), but there's no way to customize it, although the time it takes to open the window is really fast.
Then, I made a modal window (using BrowserWindow) and made it act as a dialog when needed. This worked in a way, as we were able to customize it however we want. But the major issue is that it takes about 3-4 seconds to open a new modal window (worse if the PC's performance is bad). This time consumption is causing a lot of issues, as we have parts in the project where the user needs to be notified instantly (errors and confirmations).
As another solution, I started creating a modal window at a very early stage, then hiding it and showing it to the user (with updated contents using React states and props) whenever needed. This worked; however, since the application is already quite heavy, adding and keeping another Electron window open from the beginning consumes a lot of memory.
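For reference, a rough sketch of this pre-created hidden window approach in the main process (the file name and IPC channel names are illustrative):
const { BrowserWindow, ipcMain } = require('electron');

let dialogWin = null;

function createDialogWindow(parent) {
  dialogWin = new BrowserWindow({
    parent: parent,
    modal: true,
    show: false,               // created up front, but kept hidden
    width: 400,
    height: 200,
    webPreferences: { nodeIntegration: true }
  });
  dialogWin.loadFile('dialog.html');                        // illustrative file name
}

function showDialog(message) {
  dialogWin.webContents.send('dialog-content', message);    // illustrative channel
  dialogWin.show();                                         // showing is fast; creating is what is slow
}

ipcMain.on('dialog-closed', () => dialogWin.hide());        // hide instead of destroying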
I have used multiple packages as well, but all have the same time issue.
As of now, I have tried these solutions, but all have drawbacks that we are trying hard to avoid.
I am trying to find the best possible way to handle these windows, which would be: opens really fast (<1s), is customizable, and consumes less memory.
Have I exhausted all my options? Or is there any other way I can go about this problem?
Any help would be appreciated.

Improving IE8 Performance with select2 v3.4.8 Library

I am trying to improve performance of the select2 v3.4.8 library in IE8.
Essentially, I have narrowed the problem down to the call from the SingleSelect2's opening method to the parent AbstractSelect2's opening method. The method call I am speaking of is this one:
this.parent.opening.apply(this, arguments);
I have seen this call take upwards of 5 seconds to complete. In some instances, it has taken up to 10 seconds to open the dropdown in IE8.
I've made some performance improvements already. For example, rather than constantly adding the:
<div id="select2-drop-mask" class="select2-drop-mask"></div>
to the DOM programmatically, I just added it directly to the markup and set its display to none. This saves quite a number of cycles because, apparently, adding elements to the DOM in IE8 is expensive to do from JavaScript.
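For reference, the static markup ends up looking roughly like this (the only change from what select2 would generate is the inline display:none; this assumes, as described above, that the library picks up an existing #select2-drop-mask instead of creating a new one):
<!-- placed once in the page markup instead of being created by select2 at open time -->
<div id="select2-drop-mask" class="select2-drop-mask" style="display: none;"></div>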
But I still want to get more performance improvements than this. We have only increased performance by about 10-20% by making this change.
Does anyone have any suggestions for me?
We've already cached the data to display in the dropdown on the client on page load. So, there are zero server calls being made when the dropdown is opening. The performance bottleneck is entirely inside the select2 library itself.
Unfortunately, we are unable to upgrade our select2 library. Doing so would be at least an 8-Point User Story, so it's prohibitive at this time for us to undertake an upgrade.
Thanks to anyone who's able to help!
-classTemplateT

How to measure the time to bind/render a view or directive?

I have a view that contains two directives. I want to know how much time it takes to update/bind/display each part (the view, the first directive and the other directive).
I'm looking for an end-to-end duration, including the time spent in JS and the actual browser rendering time.
I know Batarang helps with measuring watchers, but here I'm looking at the bigger picture.
In Chrome, I started a CPU profile and hit refresh. In the Flame Chart view, I see that scope.$digest took 91 ms. But that's for the whole view including the directives, right? And does that include browser rendering time?
I don't mind inserting a few console.log here and there in the AngularJS source code to do that.
I'll probably lose reputation for saying this :) let's hope not - but the latest IE11 dev tools have both a profiler for scripts and a UI Responsiveness tab for rendering.
It's a little bit odd using IE for web dev, but the dev tools have been really good - it has taken some adapting to switch from Chrome, but it's just as useful and quite performant.
An overview of the profiler, how to use it, and how to track down specific app areas is at http://msdn.microsoft.com/en-us/library/ie/dn255009(v=vs.85).aspx
I don't know if it can help your exact situation, but Batarang (the Chrome extension for AngularJS) can really help you with AngularJS performance measuring.
Like this: https://github.com/angular/angularjs-batarang#performance
You can find it here: https://chrome.google.com/webstore/detail/angularjs-batarang/ighdmehidhipcmcojjgiloacoafjmpfk?hl=en
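If you want numbers without modifying the AngularJS source, a rough sketch is to wrap $rootScope.$digest in a run block and log its duration (the module name 'app' is illustrative):
angular.module('app').run(['$rootScope', function ($rootScope) {
  var originalDigest = $rootScope.$digest;
  $rootScope.$digest = function () {
    var start = performance.now();
    originalDigest.apply(this, arguments);
    console.log('$digest (script only):', (performance.now() - start).toFixed(1), 'ms');
  };
}]);
Note that this only covers the JavaScript side of the digest; the browser renders afterwards, so a setTimeout(0) scheduled from inside the digest (or $timeout with invokeApply set to false) gives a rough approximation that includes a rendering opportunity.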

How do you assess the impact of an attached event function on performance?

I am building out a UI Framework as an application template. It's built on jQuery / Bootstrap. It will be used by web app developers to build data driven apps for the company I work for.
There are countless screen scenarios to consider. For example, I am adding a scroll event that calls a function to fix the header if the page scrolls horizontally.
My question is:
If I add scroll, resize, or mouse move event functions to the global framework, how can I test their impact on performance? I don't want to weigh the framework down little by little.
Thanks for your insights.
You can use the developer tools profiler, but you can be sure that calling code on every event is going to impact site performance.
If you do hook into all these events, I would suggest doing it wisely and using throttling.
Also try to build some lightweight logic that prevents unexpected or unnecessary calls.
And test your code: the tests can also tell you how much time each one takes to run, which can give an indication of how your logic performs.
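As a concrete sketch of both points, here is a throttled scroll handler that also logs how long each invocation takes; the 100ms interval and fixHeader() are placeholders:
// run fn at most once every `wait` milliseconds
function throttle(fn, wait) {
  var last = 0;
  return function () {
    var now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn.apply(this, arguments);
    }
  };
}

$(window).on('scroll', throttle(function () {
  var start = performance.now();
  fixHeader();   // placeholder for the real handler work
  console.log('scroll handler took', (performance.now() - start).toFixed(2), 'ms');
}, 100));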

Single Page Application SEO and infinite scroll AngularJS

We have a site with a feed similar to Pinterest and are planning to refactor the jQuery soup into something more structured. The two most likely candidates are AngularJS and Backbone+Marionette. The site is user-generated and is mostly consumption-oriented (the typical 90/9/1 rule), with the ability for users to like, bookmark, and comment on posts. From the feed we open a lightbox to see more detail about the post, with comments, related posts, etc., similar to Pinterest.
We have used Backbone sporadically and are familiar with the idea, but are put off by the boilerplate. I assume Marionette would help a lot with that, but we're open to changing direction more radically (e.g. Angular) if it will help in the long term.
The requirements:
The initial page must be static for SEO reasons. It's important that the framework be able to start with existing content, preferably with little fight.
We would prefer to have the data needed for the lightbox already loaded in the feed so that the transition can be faster. Some of the data is already there (title, description, photos, number of likes/bookmarks/comments), but there is additional data that would be loaded for the detail view - comments, similar posts, who likes this, etc.
Changes to the post that happen in the feed or detail lightbox should be reflected in the other with little work (eg, if I like it from the feed, I should see that like and new like count number if I go to the lightbox - or the opposite.)
We would like to migrate our mobile site (currently in Sencha Touch) to also use the same code base for the parts that are common so we can have closer feature parity between mobile and main site.
These requirements related to my concerns about Angular:
1) Will it be possible/problematic to have initial page loads be static while rendering additional pages via the templates?
2) Is it problematic to have multiple data sources for different parts of the page - e.g. the main post part comes from embedded JSON data and from "see more"s in the feed, while the additional detail would come from a different ajax call?
3) While the two-way binding is cool - I'm concerned it might be a negative in our case because of the number of items being rendered. The number of elements that we need two-way binding is relatively small. Posts like:
https://stackoverflow.com/a/7654856/214545
Angular JS ng-repeat consumes more browser memory
concern me for our use case. We can easily have hundreds of posts, each with 1-2 dozen details. Can the two-way binding be "disabled" where I have fields/elements that I know won't change?
Is it normal/possible to unload elements outside of the viewport to save memory? This is also connected to the mobile direction, because memory is even more of a concern there.
Would AngularJS work/perform well in our use-case? Are there any tricks/tips that would help here?
There are different methods of "infinite scroll" or feed as you put it. The needs of the users and size of acceptable response payload will determine which one you choose.
It seems that here, you sacrifice usability wherever you gain performance.
1. Append assets
This method is your traditional append-to-bottom approach: when the user reaches the bottom of the current scroll height, another API call is made to "stack on" more content. Its benefit is being the most effective solution for handling cross-device caveats.
Disadvantages of this solution, as you have mentioned, come from large payloads flooding memory as the user carelessly scrolls through content. There is no throttle.
<div infinite-scroll='getMore()' infinite-scroll-distance='0'>
  <ul>
    <li ng-repeat="item in items">
      {{item}}
    </li>
  </ul>
</div>
var page = 1;
$scope.getMore = function () {
  // append the next page of results to the list being repeated over
  $scope.items = $scope.items.concat(API.returnData(page));
  page++;
};
2. Append assets with a throttle
Here, we are suggesting that the user can continue to load more results into a feed that will append indefinitely, but they must be throttled or must "manually" invoke the call for more data. This becomes cumbersome relative to the size of the content being returned that the user will scroll through.
If there is a lot of content being returned per payload, the user will have to click the "get more" button less often. This is of course at the tradeoff of returning a larger payload.
<div>
  <ul>
    <li ng-repeat="item in items">
      {{item}}
    </li>
  </ul>
</div>
<div ng-click='getMore()'>
  Get More!
</div>
var page = 1;
$scope.getMore = function () {
  $scope.items = $scope.items.concat(API.returnData(page));
  page++;
};
3. Virtual Scroll
This is the last and most interesting way to infinite scroll. The idea is that you are only keeping the rendered version of a range of results in browser memory. That is, complicated DOM manipulation only acts on the current range specified in your configuration. This, however, has its own pitfalls.
The biggest is cross-device compatibility.
If your handheld device has a virtual scrolling window that reaches the width of the device, it had better be less than the total height of the page, because you will never be able to scroll past this "feed" with its own scroll bar. You will be "stuck" mid page because your scroll will always be acting on the virtual scroll feed rather than the actual page containing the feed.
Next is reliability. If a user drags the scroll bar manually from a low index to one that is extremely high, you are forcing the browser to run these directives very, very quickly, which in testing has caused my browser to crash. This could be fixed by hiding the scroll bar, but of course a user could trigger the same scenario by scrolling very, very quickly.
Here is the demo
The source
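To make option 3 concrete, here is a rough sketch of the windowing idea using ng-repeat with limitTo (the begin argument needs AngularJS 1.4+); a real virtual-scroll directive also fakes the full scroll height with spacer elements, which is omitted here:
<div class="feed" style="height: 600px; overflow-y: auto;">
  <ul>
    <li ng-repeat="item in items | limitTo : windowSize : firstVisible">
      {{item}}
    </li>
  </ul>
</div>
// inside a directive's link function wrapping the .feed element
scope.windowSize = 30;     // how many items are actually kept in the DOM
scope.firstVisible = 0;    // index of the first rendered item
var rowHeight = 80;        // assumed fixed row height in pixels

element.on('scroll', function () {
  scope.$apply(function () {
    scope.firstVisible = Math.floor(element[0].scrollTop / rowHeight);
  });
});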
"Initial page must static for SEO reasons. It's important that the framework be able to start with existing content, preferable with little fight."
So what you are saying is that you want the page to be prerendered server side before it serves content? This approach worked well in the early thousands but most everyone is moving away from this and going towards the single page app style. There are good reasons:
The inital seed you send to the user acts as a bootstrap to fetch API data so your servers do WAY less work.
Lazy loading assets and asynchronous web service calls makes the percieved load time much faster than the traditional "render everything on the server first then spit it back out to the user approach."
Your SEO can be preserved by using a page pre-render / caching engine to sit in front of your web server to only respond to web crawlers with your "fully rendered version". This concept is explained well here.
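A hedged sketch of that crawler-only pre-render idea as Express middleware (the user-agent list and the getSnapshotFromCache lookup are placeholders; services like prerender.io package this pattern up):
var express = require('express');
var app = express();

var CRAWLER_UA = /googlebot|bingbot|yandex|baiduspider|facebookexternalhit|twitterbot/i;

app.use(function (req, res, next) {
  if (CRAWLER_UA.test(req.headers['user-agent'] || '')) {
    // crawlers get a cached, fully rendered snapshot of the requested URL
    return getSnapshotFromCache(req.originalUrl)   // placeholder lookup function
      .then(function (html) { res.send(html); })
      .catch(next);
  }
  next();   // normal users get the regular single-page app bootstrap
});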
We would prefer to have the data needed for the lightbox already loaded in the feed so that the transition can be faster. Some of the data is already there (title, description, photos, number of likes/bookmarks/comments), but there is additional data that would be loaded for the detail view - comments, similar posts, who likes this, etc.
If your initial payload for the feed does not contain the child data points for each "feed id" and you need an additional API request to load them in your lightbox, you are doing it right. That's totally a legit use case. You would be arguing over 50-100ms for a single API call, which is imperceptible latency to your end user. If you absolutely need to send the additional payload with your feed, you aren't winning much.
Changes to the post that happen in the feed or detail lightbox should be reflected in the other with little work (eg, if I like it from the feed, I should see that like and new like count number if I go to the lightbox - or the opposite.)
You are mixing technologies here - the Like button is an API call to Facebook. Whether those changes propagate to other instantiations of the Facebook Like button on the same page is up to how Facebook handles it; I'm sure a quick google would help you out.
For data specific to YOUR website, however, there are a couple of different use cases:
Say I change the title in my lightbox and also want the change to propagate to the feed it is currently being displayed in. If your "save edit" action POSTs to the server, the success callback could trigger pushing the new value out over a websocket. This change would propagate not just to your screen, but to everyone else's screen.
You could also be talking about two-way data binding (AngularJS is great at this). With two-way data binding, your "model", or the data you get back from your web service, can be bound to multiple places in your view. This way, as you edit one part of the page that shares the same model, the other part updates in real time alongside it. This happens before any HTTP request, so it is a completely different use case.
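A minimal sketch of that shared-model idea in AngularJS: the feed and the lightbox controllers hold references to the same post objects from a service, so bumping the like count in one place shows up in the other without extra wiring (all names are illustrative):
angular.module('app').factory('FeedStore', function () {
  var postsById = {};
  return {
    put: function (post) { postsById[post.id] = post; return post; },
    get: function (id) { return postsById[id]; }
  };
});

angular.module('app').controller('FeedCtrl', ['$scope', 'FeedStore', function ($scope, FeedStore) {
  $scope.posts = initialFeedData.map(FeedStore.put);   // initialFeedData: embedded JSON, illustrative
  $scope.like = function (post) { post.likeCount++; };
}]);

// the lightbox gets the very same object instance, so its bindings stay in sync
angular.module('app').controller('LightboxCtrl', ['$scope', 'FeedStore', function ($scope, FeedStore) {
  $scope.open = function (id) { $scope.post = FeedStore.get(id); };
}]);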
We would like to migrate our mobile site (currently in Sencha Touch) to also use the same code base for the parts that are common so we can have closer feature parity between mobile and main site.
You should really take a look at modern responsive CSS frameworks like Bootstrap and Foundation. The point of using responsive web design is that you only have to build the site once to accommodate all the different screen sizes.
If you are talking about feature modularity, AngularJS takes the cake. The idea is that you can export your website components into modules that can be reused in another project. This can include views as well. And if you built the views with a responsive framework, guess what - you can use them anywhere now.
1) Will it be possible/problematic to have initial page loads be static while rendering additional pages via the templates?
As discussed above, it's really best to move away from these kinds of approaches. If you absolutely need it, templating engines don't care whether your payload was rendered server side or client side. Links to partial pages will be just as accessible.
2) Is it problematic to have multiple data sources for different parts of the page - e.g. the main post part comes from embedded JSON data and from "see more"s in the feed, while the additional detail would come from a different ajax call?
Again, this is exactly what the industry is moving into. You will save on both "perceived" and "actual" load time by using an initial static bootstrap that fetches all of your external API data. This will also make your development cycle much faster, because you are separating the concerns of completely independent pieces: your API shouldn't care about your view, and your view shouldn't care about your API. The idea is that both your API and your front-end code can become modular and reusable when you break them into smaller pieces.
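To illustrate mixing the two sources, a rough sketch: the first page of the feed is read from JSON embedded in the initial HTML, and the lightbox details come from a separate $http call (the script id and URL are illustrative):
<script type="application/json" id="initial-feed">
  [{"id": 1, "title": "First post", "likeCount": 3}]
</script>
angular.module('app').controller('FeedCtrl', ['$scope', '$http', function ($scope, $http) {
  // no extra request for the first page - it was embedded in the served HTML
  $scope.posts = angular.fromJson(document.getElementById('initial-feed').textContent);

  // details (comments, similar posts, ...) are fetched only when the lightbox opens
  $scope.openLightbox = function (post) {
    $http.get('/api/posts/' + post.id + '/details').then(function (response) {
      post.details = response.data;
    });
  };
}]);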
3) While the two-way binding is cool - I'm concerned it might be a negative in our case because of the number of items being rendered. The number of elements for which we need two-way binding is relatively small.
I'm also going to combine this question with the comment you left below:
Thanks for the answer! Can you clarify - it seems that 1) and 2) just deal with how you would implement infinite scrolling, not the performance issues that might come from such an implementation. It seems that 3 addresses the problem in a way similar to recent versions of Sencha Touch, which could be a good solution
The performance issues you will run into are totally subjective. I tried to bring performance considerations like throttling into the discussion because throttling can drastically reduce the amount of stress your server takes and the work your user's browser has to do with each new result set appended into the DOM.
Infinite scroll, after a while, will eat up your user's browser memory. That much I can tell you is inevitable, but only through testing will you be able to tell how much. In my experience, a user's browser can handle a great deal of abuse, but again, how big your payload is for each result set and what directives you are running on all of your results are totally subjective. There are solutions that render only a ranged data set, as in option three I described, but they have their limitations as well.
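On "disabling" the binding for fields that never change: AngularJS 1.3+ has one-time bindings (the :: prefix), which drop their watchers once the first value stabilises, so a feed of mostly static posts registers far fewer watchers. A rough sketch (keeping the collection itself normally bound so newly appended pages still render):
<li ng-repeat="post in posts">
  <!-- one-time bound: the watcher is removed once the value stabilises -->
  <h3>{{::post.title}}</h3>
  <p>{{::post.description}}</p>
  <!-- regular binding only where the value can actually change -->
  <span>{{post.likeCount}} likes</span>
</li>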
API data coming back shouldn't be any more than 1-2 KB in size, and a query should only take about 50-200ms to return. If you aren't meeting those speeds, maybe it's time to re-evaluate your queries or cut down on the size of the result set coming back by using child IDs to query other endpoints for specifics.
The main thing that remains unanswered in Dan's answer is the initial page load. We're still not satisfied with the approach of having to do it client side - it still seems to us that there is a risk for SEO and initial page load. We have a fair amount of SEO traffic and are looking for more - these people are coming to our site with no cache, and we have a few seconds to catch them.
There are a few options to handle angular on the server side - I'll try to collect some of them here:
https://github.com/ithkuil/angular-on-server
https://github.com/ithkuil/angular-on-server/wiki/Running-AngularJS-on-the-server-with-Node.js-and-jsdom
https://github.com/angular/angular.js/issues/2104
will add more as they come up.
