I am currently working on a site that is showing relatively long (~5 sec) render times on the home page. It is a WordPress install with quite a bit going on, but it seems excessively long to load. Through the web inspector I have noticed several big gaps in the load timeline where nothing seems to be going on. Does anyone have any idea what might be happening? I have confirmed there are no errors being signaled in the code on load. The site can be found at http://ewokdown.com. For comparison, you can see that http://ewokdown.com/reviews loads quite snappily.
Thanks!
Elliott
I checked out your website, and the delay was not caused by anything on the client side. Rather, the delay occurred while waiting for the server to respond to my browser's request. Once the resources were loaded, rendering took a standard (relatively short) amount of time.
What does this mean? It means there's likely nothing specifically wrong with the way your site renders; it's whatever server-side code you have for your site that ends up hanging.
Word to the wise: I wouldn't sweat the initial page load times all that much. Users expect there to be some delay, and once the resources are cached, browsers typically do a good job of reducing future load times.
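If you want to verify where the time goes, a quick sketch like the following (Node.js; run it against your own home page) times how long the server takes to send the first byte. Everything client-side only starts after that point:

// Rough TTFB check: time from issuing the request to the first response byte.
// (A sketch, not a benchmark; run it a few times to average out noise.)
const http = require('http');

const start = Date.now();
http.get('http://ewokdown.com/', (res) => {
  res.once('data', () => {
    console.log('Time to first byte:', Date.now() - start, 'ms');
    res.destroy(); // we only care about the first byte
  });
});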
Related
The website testmysite claims that the website I'm working on is very slow: namely, 9.8 seconds on 4G, which is ridiculous.
My web app requests geolocation first, but the request is denied immediately, so this is not where the slowness comes from.
The site is server-rendered, and sends scripts to the client as an optimisation.
The bundle analyzer output seems very dubious: it claims that my total bundle size is 750 kB, but I strongly doubt that this is the case.
Even if it turns out that testmysite is not a reliable source, I'd like to know why it says that my website is so slow, and what I can actually do to improve it.
My vendor node modules are chunk split, because I want the browser to cache them individually.
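The splitting is configured along these lines (a simplified sketch, not my exact config):

// webpack.config.js (excerpt): give each npm package its own chunk
// so the browser can cache vendor code individually.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name(module) {
            // derive a stable chunk name from the package directory
            const pkg = module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/)[1];
            return 'vendor.' + pkg.replace('@', '');
          },
        },
      },
    },
  },
};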
My website is deployed here: https://zwoop-website-v001.herokuapp.com
Note that loading may take some time, because I use the free tier and the server often goes to sleep.
Edit: the Performance tab in Chrome shows the following:
I really have zero idea why that website says that my site is slow...
But if the measurement is used by search engines, then I do care.
Answer:
I just voted to re-open this question, as @slebetman has helped me troubleshoot some issues and I'd like to formulate an answer.
First thing to note is that the free Heroku server I run in production is located in Europe (this was an option you could choose), and it is unclear where the testmysite server is located. @slebetman was located in East Asia when running his test.
@slebetman mentioned that the network tab showed him a very slow load time, about 2 seconds, for fonts (woff2). This didn't occur for me, but as it turns out, these are Font Awesome icons loaded from a CDN.
So while it is logical to look at performance in terms of script loading, interpretation and rendering, there may additionally be a latency issue related to third-party resources downloaded from another server. Even when your own website is fast, you don't actually know whether some module you imported requests additional resources.
Either of these things can be tracked through the Performance or Network tab of Google Chrome. An additional tip is to mimic a slow network (although I don't think this would actually surface all problems that may occur on the network in terms of redirects, DNS resolution etc., which @slebetman mentions could also be at play).
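To spot such third-party stragglers without eyeballing the waterfall, a quick sketch like this in the browser console lists the slowest resources the page loaded:

// List the ten slowest resources (scripts, fonts, CDN assets, ...)
// using the Resource Timing API.
performance.getEntriesByType('resource')
  .map(e => ({ name: e.name, ms: Math.round(e.duration) }))
  .sort((a, b) => b.ms - a.ms)
  .slice(0, 10)
  .forEach(e => console.log(e.ms + ' ms', e.name));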
I'm looking at a waterfall of several CSS and JavaScript files in Chrome's Developer Tools.
When refreshing the page, several of the files load from the browser cache, as expected. These mostly take 1 ms to load. However some files, and it seems to be the same offenders on each refresh, take quite a bit longer: somewhere between 400 ms and 800 ms.
The waterfall timeline in Chrome's Network tab shows that in some cases this time is spent in the TTFB (time to first byte). This doesn't make any sense to me: if a file is coming from the browser cache and not from the server, why is there a TTFB?
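For reference, here's a rough way to double-check which responses actually came from cache (Resource Timing API; note that cross-origin entries report sizes of 0 unless the server sends Timing-Allow-Origin):

// transferSize === 0 with a non-zero body usually means the response
// was served from the browser cache (no bytes travelled over the network).
performance.getEntriesByType('resource').forEach(e => {
  const fromCache = e.transferSize === 0 && e.decodedBodySize > 0;
  console.log(fromCache ? 'cache  ' : 'network', Math.round(e.duration) + ' ms', e.name);
});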
Can anyone shed some light on what's happening here?
First some backstory:
We have a website that includes a Google Map in the usual way:
<script src="https://maps.googleapis.com/maps/api/js?v=..."></script>
Then there is some of our JavaScript code that initializes the map. Now, suddenly, yesterday pages started to load but then froze up entirely. In Chrome this meant having to force quit and restart; Firefox was smarter and allowed the user to stop script execution.
Now after some debugging, I found out that a previous developer had included the experimental version of the Google Maps API:
https://maps.googleapis.com/maps/api/js?v=3.exp
So it's likely that something changed on Google's servers (which is completely understandable). This uncovered a bug on our end, and the two in combination caused the script to hang and freeze up the website and the browser.
Now OK, the bug is found and fixed, no big harm done.
And now the actual question:
Is it possible to somehow sandbox these external script references so that they cannot crash my main site? I am loading a decent number of external JavaScript files (tracking, analytics, maps, social) from their own servers.
However, such code could change at any time and could contain bugs that freeze my site. How can I protect my site? Is there maybe a way to define a maximum allowable execution time?
I'm open to all kinds of suggestions.
It actually doesn't matter where the scripts come from, whether an external source or your own server: either way, they run in the client's browser. And that makes it quite difficult to achieve your desired sandbox behavior.
You can get a sandbox inside your DOM by using iframes with the "sandbox" attribute. That way the content of the iframe is independent from the DOM of your actual website, and you can include scripts independently as well. This is beneficial mainly for security, and I am not sure how it would work out for overall stability when a script has a bug like an endless loop or similar. But IMHO it is worth a try.
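A minimal sketch of that idea (the widget URL is a placeholder; with only "allow-scripts" the frame gets a unique origin and no access to your page):

// Load a third-party script inside a sandboxed iframe so it cannot touch
// the parent DOM. Caveat: a hard infinite loop may still block the tab,
// since same-process iframes share the event loop with the parent.
var frame = document.createElement('iframe');
frame.setAttribute('sandbox', 'allow-scripts');
frame.srcdoc = '<script src="https://third-party.example/widget.js"><' + '/script>';
document.body.appendChild(frame);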
For further explanation see: https://www.html5rocks.com/en/tutorials/security/sandboxed-iframes/
I have an ASP.NET MVC application that makes pretty heavy use of javascript and JQuery for both administrative functions as well as customer-facing functions. Recently I reorganized the administrative screens to be able to more cleanly fit administrative controls for some new features.
I tested using IE and Chrome and found that there was a slight, but acceptable hang in one of the busier pages. However, the main person who uses the admin pages uses Firefox and kept reporting an unacceptable hang. I finally checked it out and found that what hangs in Chrome and IE for 2-3 seconds hangs in Firefox for 10-12 seconds, which is no good.
Not knowing where to turn, I wound up installing Glimpse and got it configured and running just fine, but I'm still having trouble figuring out how to drill in and find out what part of the page is causing trouble. All I can tell so far is that it is definitely something about how the client (Firefox) is rendering. To be clear, it happens in all browsers, but for some reason it is far more pronounced in Firefox.
Can someone please give me some pointers on how to get started on diagnosing the issue? I'm not married to the idea of using Glimpse, but it seems like a pretty decent tool from what I can tell.
Thanks for your help.
Based on what you're describing, the problem appears to be client-side. That said, Glimpse may not be as well suited to this as Firefox's own profiler.
SHIFT+F5 will bring up the web developer performance screen. From there, you can begin/end a performance analysis and gain more insight into what may be taking longer than expected.
It may also be worthwhile to look at the network tab and make sure assets are loading in a timely manner.
Keep in mind as well that add-ons can play into the latency. If the end user has a setup that performs post-page processing, such as Greasemonkey scripts or (recalling an earlier add-on) a Skype plugin that used to transform phone numbers on the page into direct-dial links, that will also play a part in performance. A good way to rule these out is to hold down SHIFT while starting Firefox (effectively running it in Safe Mode), which will tell you whether it's Firefox itself or an add-on that's to blame.
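If the profiler alone doesn't narrow things down, bracketing suspect blocks with User Timing marks can help pinpoint which part of the page is slow (a sketch; renderAdminGrid is a placeholder for your jQuery-heavy code):

// Bracket a suspect block with marks; the resulting measure shows up in
// the profiler timeline and can also be read back from the console.
performance.mark('admin-grid:start');
renderAdminGrid(); // placeholder for the suspect code
performance.mark('admin-grid:end');
performance.measure('admin-grid', 'admin-grid:start', 'admin-grid:end');
console.log(performance.getEntriesByType('measure'));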
During development with Angular + RequireJS, I ran into a weird problem.
If I refresh the page quickly (e.g. by pressing F5) a few times, I get the following error message and the page stops working, even if I refresh again, unless I clear the cache and open the page in a new tab:
Error: $digest already in progress
at Error (<anonymous>)
at beginPhase (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:8495:15)
at Object.Scope.$apply (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:8297:11)
at done (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9357:20)
at completeRequest (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9520:7)
at XMLHttpRequest.xhr.onreadystatechange (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9490:11)
at http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9499:11
at sendReq (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9333:9)
at $http (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9124:17)
at Function.$http.(anonymous function) (http://www.caoglish.info/angular_bug/assets/vendor/angular/angular.js:9267:18)
This error seems to happen only in Chrome. I tried to trigger it in IE and Firefox, and it did not happen. However, when I tried Chrome on a slow computer, it did not happen either.
The computer on which I triggered this issue has an i7 CPU and 8 GB of RAM. The Angular version I am using is 1.0.8.
I have attached code that isolates this issue. My project code is more complex than this, so it takes fewer tries to trigger there; the isolated code may require a few more presses of F5 to trigger the issue.
The isolated example code: Link
How to trigger the bug: video
Quickly press F5 until it happens.
I created an issue on the Angular project: Link
This question may be dead, but I've seen the same behavior, and it's definitely not an Angular issue. It's just a simple race condition, and it's more or less unavoidable. When you hit F5, the browser halts everything going on in the "sandbox" for security reasons: standard requests for things like IMG content, XHR requests, the main thread your Angular code runs in, etc. It's extremely fast but not instant, and if you work hard enough at disrupting it, you can succeed.
IMO the root question here should be whether this is "bad", and to me, that means does it either a) create a security vulnerability, or b) do something that could corrupt important data.
You can rule out the first because only code already running in the browser at the moment F5 was pressed could POSSIBLY try to exploit whatever edge cases come out of the race condition here. And if it's already running in the browser, it can do anything it wants because it's already "within the walls" so to speak.
You can rule out the second because you aren't supposed to design web apps that require 100% confidence that the next line of code will be executed... because you know that's never certain. The user could close the browser at any time, and once you assume that, you have to code in a way that tolerates these kinds of conditions anyway.
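That said, if you want to suppress the error itself, a widely circulated workaround is to guard $apply so it isn't called while a digest is already running. It's generally considered a code smell in application code, but it's relevant when an external callback can fire mid-digest, as in this refresh race:

// Guarded $apply: evaluate directly if a digest is already in progress,
// otherwise trigger one. Works with Angular 1.x scopes.
function safeApply(scope, fn) {
  if (scope.$$phase || scope.$root.$$phase) {
    scope.$eval(fn);  // digest already running; just evaluate
  } else {
    scope.$apply(fn); // no digest in progress; start one
  }
}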