This is the fiddle: http://jsfiddle.net/36mdt/
After about 10-20 seconds, the display starts to freeze randomly and shortly after crashes. I cannot reproduce this in Firefox.
Profiling reveals nothing unusual.
http://jsfiddle.net/3pbdQ/ shows there is definitely a memory leak. Even at 1 FPS, the memory usage climbs by about 5 megabytes a frame.
On a side note, this example really shows how Math.random() is really not so random.
I've made a few performance improvements and it doesn't crash after 5 mins (it also doesn't seem to be leaking memory). Check out http://jsfiddle.net/3pbdQ/3/
Don't calculate the size inside each iteration.
Use timeouts instead of a freezing interval.
Use a bitwise operator for flooring a number.
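A rough sketch of what those three changes look like in the drawing loop (this is only an approximation of the fiddle's code; the canvas variable and the pixel-painting step are assumptions):

var TIME = 16;                    // ~60 FPS
var width = canvas.width;         // 1. compute the size once, outside the loop
var height = canvas.height;

function draw() {
    for (var i = 0; i < width * height; i++) {
        // 3. bitwise OR truncates the same way Math.floor does for positive numbers
        var x = (Math.random() * width) | 0;
        var y = (Math.random() * height) | 0;
        // ...paint the pixel at (x, y)...
    }
    // 2. schedule the next frame only after this one finishes,
    //    instead of letting setInterval queue frames faster than they can be drawn
    setTimeout(draw, TIME);
}

draw();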
Profiling reveals nothing unusual.
Chrome Profiler doesn't work with WebWorkers, AFAIK. As per conversation with Paul Irish:
"Check about:inspect for shared workers, also you can do console.profile() within the worker code (I THINK) and capture those bits. The "cleans up" is the garbage collector: if after the cleanup there is still a growing line of excess memory, then thats a leak."
And
On a side note, this example really shows how Math.random() is really not so random.
It is well known that there are no perfect random algorithms, but the clumps of grouped colors you see appear because you're not setting canvas.height and canvas.width, so the canvas's internal size differs from its CSS size.
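Something along these lines keeps the drawing buffer in sync with the displayed size (a sketch; the element lookup is an assumption):

// Match the canvas's internal pixel buffer to its CSS box; otherwise the
// default 300x150 buffer gets stretched and the noise looks blocky and grouped.
var canvas = document.querySelector('canvas');
canvas.width = canvas.clientWidth;
canvas.height = canvas.clientHeight;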
EDIT: It is still leaking memory and I don't know why; after about 10 secs it 'cleans up'. That exceeds my knowledge, but it works smoothly at 60 FPS (var TIME = 16).
Depending on the system and browser version you use, some steps may vary, although I have tried my best to provide common steps that are compatible with most systems.
Disable Sandbox:
1. Right click Google Chrome desktop icon.
2. Select Properties.
3. Click Shortcut > Target.
4. Add "--no-sandbox"
5. Click Apply | OK.
6. Download and install ZombieSoftFix.
7. Check and resolve conflicts detected.
Disable Plug-Ins:
1. Type "about:plugins" in Address Bar.
2. Press ENTER.
3. Disable all plug-ins displayed in the list page.
Clear Temporary Files:
1. Click Wrench.
2. Select More Tools | Clear browsing data.
3. Check all the boxes, then click the "Clear browsing data" button to confirm the process.
Thanks & Regards.
This is an unfortunate, known Chrome bug.
Related
About 2 months ago I got back to work on my fully custom ecommerce site, optimizing the front-end for PageSpeed, given that Googlebot now measures parameters such as CLS and LCP, which are critical and partly determine whether a particular page goes into the index of the Google crawler.
I did all the optimizations I could manage, like:
extraction of critical CSS and inlining it
all non-critical CSS merged and inlined as well
mod_pagespeed
images and JS served from a CDN
nginx optimization
moved non-core JS down near the closing body tag where possible
deferred non-critical JS
preloading, prefetching and many other things I don't remember now, done during the long nights spent studying the template
So now I've reached a great result compared to 2 months ago.
The only thing I can't explain is why I have high Time To Interactive, CLS and LCP in mobile mode, when the desktop version is just fine.
I am puzzling over it, trying to find the key to the solution.
This is an example of a product page: https://developers.google.com/speed/pagespeed/insights/?hl=it&url=https%3A%2F%2Fshop.biollamotors.it%2Fcatalogo%2FSS-Y6SO4-HET-TERMINALI-AKRAPOVIC-YAMAHA-FZ-6-S2-FZ-6-FAZER-S2-2004-2009-TITANIO--2405620
and here is the homepage which has acceptable values compared to the product page, but still the time to interactive is much higher than the desktop: https://developers.google.com/speed/pagespeed/insights/?hl=it&url=https%3A%2F%2Fshop.biollamotors.it%2F
Thanks in advance to all the expert users who can help me.
Why are my mobile scores lower than desktop?
The mobile test uses network throttling and CPU throttling to simulate a mid-range device on 4G, so you will always get lower scores for mobile.
You may find this related answer I gave useful on the difference between how the page loads as you see it and how Lighthouse applies throttling (along with how to make Lighthouse use applied throttling so you can see the slowdown in real time); however, I have included the key information in this answer and how it applies to your circumstances.
Simulated Network Throttling
When you run an audit it applies 150ms of latency to each request and also limits download speed to 1.6 megabits per second (200 kilobytes per second) and upload to 750 kilobits per second (94 kilobytes per second).
This is very likely to affect your Largest Contentful Paint (LCP) as resources take a lot longer to download.
This throttling is done via an algorithm rather than applied (it is simulated) so you don't see the slower load times.
CPU throttling
Lighthouse applies a 4x slowdown to your CPU to simulate a mid-tier mobile phone performance.
If your JavaScript payload is heavy this could block the main thread and delay rendering. Or if you dynamically insert elements using JavaScript it can delay LCP for the same reason.
This affects your Time To Interactive most out of the items you listed.
This throttling is also done via an algorithm rather than applied (it is simulated) so you don't see the slower processing times.
What do you need to do to improve your scores?
Focus on your JavaScript.
You have 5 render-blocking scripts for a start; just add defer to them as you have done with the others (or better yet async, if you know how to handle async JS loading).
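As a sketch (the file names here are placeholders, not your actual bundles):

<!-- defer: downloaded without blocking rendering, executed in order after parsing -->
<script src="/js/vendor.js" defer></script>
<script src="/js/app.js" defer></script>

<!-- async: executed as soon as it arrives, fine for scripts with no dependencies -->
<script src="/js/analytics.js" async></script>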
Secondly, the payload is over 400 KB of JS (uncompressed); if you notice, your scripts take 2.3 seconds to evaluate.
Strip out anything you don't need and also run a trace in the "performance" tab in Developer tools and learn how to find long running tasks and performance bottlenecks.
Look at reducing the number of network requests: because of the higher latency of a 4G network this can add several seconds to your load time if you have a lot of requests.
Combine as much CSS and JS as you can (and you only need to inline your critical CSS, not the entire page CSS: find all the rules used "above the fold" and inline only those; at the moment you seem to be sending the whole site's CSS inline, which adds a lot of page weight).
Finally, your high Cumulative Layout Shift (CLS) is (in part) because you are using JS to hide items on page load (for example, the modal with ID comp-modal appears to be hidden with JS), but they have already rendered by the time that JS runs. This is easily fixed by hiding them within your inline CSS rather than with JavaScript, as sketched below.
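For instance, the inline critical CSS could look roughly like this (a sketch; #comp-modal is the ID mentioned above, the other selectors are placeholders):

<style>
  /* only the rules actually needed above the fold */
  header, .hero { /* ...critical rules... */ }

  /* hide the modal before first paint instead of hiding it with JS afterwards,
     so it never renders and never contributes to layout shift */
  #comp-modal { display: none; }
</style>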
Other than that you just need to follow the guidance that the Lighthouse report gives you (you may not have paid much attention to the "Diagnostics" section yet; start looking there at anything that has a red triangle or orange square next to it, as each item provides useful information on things that may need optimisation).
That should be enough to get you started.
Prod
For desktop, I have a site with a decent page speed score (currently, 96): https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.usstoragecenters.com%2Fstorage-units%2Fca%2Falhambra%2F2500-w-hellman-ave&tab=desktop
Stage
I'm trying to improve the score (mostly for mobile), but I've somehow made it worse (currently, 69 on desktop): https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fstage.usstoragecenters.com%2Fstorage-units%2Fca%2Falhambra%2F2500-w-hellman-ave%3Fplain%3Dtrue&tab=mobile
Problem
While converting the site from Angular (the first link) to plain JavaScript (second link), I've managed to lower the desktop Google PageSpeed Insights score from 96 to 69.
I've massively reduced the amount of JavaScript and other resources (2MB on prod vs 500KB on stage).
Analysis
Looking through the numbers, the thing that stands out to me is that prod has an FCP (First Contentful Paint) of 0.7 seconds, while stage has an FCP of 2.0 seconds. This seems weird to me since stage should be much faster, but is apparently much slower.
Looking at the mobile timeline of thumbnails (desktop is a bit hard to see), it appears as if stage renders the first "full content" much faster:
I highlighted the ones that visually look "complete" to me (stage is on top, prod is on bottom).
Screenshots
Here are some screenshots so you can see what I do (PageSpeed Insights differs fairly significantly each time it's run).
Here is stage:
And here is production:
Summary of Changes
Here are the main things I did when trying to improve the score:
I converted the JavaScript from Angular to plain JavaScript, significantly reducing the JavaScript required to render the page.
I lazy loaded JavaScript (e.g., Google Maps JavaScript isn't loaded until it's needed).
I lazy loaded images (e.g., the slideshow initially only loads the first image).
I reduced the number of DOM elements (from 4,600 to 1,700).
I am using HTTP/2 server push to load the new plain JavaScript as fast as possible.
Those changes should have improved the score.
Question
Do you have any ideas of why, despite my best efforts, the PageSpeed score tanked?
I'd recommend you look into the difference in how 3rd-party scripts are included between your Prod and Staging environments.
Most of the time when I have a problem with PageSpeed, it's a 3rd-party script that causes the tanking. YMMV, though.
Just a pointer to start: as I compared the stats between the two, I noticed that this particular Wistia script behaves quite differently. It may not be a problem with the script itself, but the way it's embedded is different or something.
On Prod
Wistia: Main-Thread Blocking Time: 3ms (section: Minimize third-party usage)
Wistia: Total CPU Time: 87 ms (section: Javascript execution time)
Wistia: Script Evaluation: 76 ms (section: Javascript execution time)
On Staging
Wistia: Main-Thread Blocking Time: 229 ms (section: Reduce the impact of third-party code)
Wistia: Total CPU Time: 425 ms
Wistia: Script Evaluation: 376 ms
The issue you have
You have done a lot of things correctly, but your score is suffering because of First Meaningful Paint and First Contentful Paint.
Looking at the load order etc. I have noticed that your main HTML file has actually increased in size by about a third, from 60 KB to 81.6 KB.
That is your first indicator of where things are going wrong, as all of the HTML must be loaded before the browser can even begin to think about rendering.
The next issue is that Lighthouse (the engine behind PSI) is showing you that you don't have render-blocking content, but I don't think its method is perfect at showing what is actually blocking render.
Your site still needs the SVG logo and icomoon files to render everything above the fold.
On the main site these are loaded early; on the staging site they are deferred and start loading much later, delaying your First Contentful Paint etc.
There may be other things but those are a couple I found with a quick glance.
How to fix the couple of items I mentioned
HTML size - maybe externalise some of the JSON etc. that you have inlined, as there is a lot there, and lazy load it instead (suggestion only, I haven't explored whether that is feasible for you)
SVG Logo - simple to fix, grab the actual text that makes up the logo and inline it instead of using an external resource.
icomoon - not quite as easy to fix, but swap all your icons for inline SVGs (see the sketch after this list).
Bonus - by changing your icons from fonts to SVG you help with accessibility for people who have their own style sheets that override fonts (as fonts used for icons get overridden and then make no sense).
Bonus 2 - One less request!
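As a rough illustration of inlining an icon or the logo (the path data here is made up, not your actual artwork):

<!-- inline SVG: no extra request, styleable with CSS, and readable by screen readers via the title -->
<svg width="24" height="24" viewBox="0 0 24 24" role="img" aria-labelledby="cart-title">
  <title id="cart-title">Cart</title>
  <path d="M3 3h2l3 12h10l3-8H7" fill="none" stroke="currentColor" stroke-width="2"/>
</svg>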
How to Identify Problems
If anyone comes across problems like this you need to do the following to work out what is going on.
Open Developer Tools and go to network tab first.
Set the options to 'Disable Cache - true' and 'Slow 3G' in the dropdown box.
Load each version of the site and compare the waterfall.
Normally you can spot changes in load order and start investigating them - the 'game' is to find items that appear above the fold and try and remove them, defer them or inline them as you have with some of your CSS.
Next thing is to learn to use the Coverage and Rendering tabs as they very quickly point you to problems.
Finally learn how to use the performance tab and understand the trace it produces.
You may already know how to use the above, but if not, learn them; they will let you find all of your problems quickly.
So I figured out the issue. PageSpeed Insights is drunk.
Well, it's unreliable anyway. I was able to dramatically improve the score by simply removing the server pushing of the JavaScript files (less than 20KB of them).
This is weird because the page actually seems to take longer to display. However, Google PageSpeed Insights thinks it's displaying sooner, and so it improves the score.
One time I tried, the mobile score went up to 99:
I tried again and got a more reasonable 82:
And on desktop, the score went up to 98:
The interesting thing about the mobile screenshot showing 99 is that you can see in the timeline thumbnails that the image for the slideshow at the top of the page hasn't loaded yet. So it seems like a case of Google PSI prematurely deciding that the page has "finished loading", even though it hasn't finished.
It's almost like if you delay certain things long enough, Google will ignore them. In other words, the slower the page is, the better the score they will give you. Which is of course backwards.
In any event, this might be one of those things where I'll go with the slightly less optimal approach in order to achieve a higher score. There may also be a middle ground I can explore (e.g., have the first JavaScript file inject link rel=preload tags in order to load the rest of the JavaScript files immediately rather than wait for the full chain of modules to resolve).
If somebody can come up with a more satisfactory explanation, I'll mark that as the answer. Otherwise, I may end up marking this one as the answer.
Middle Ground Approach
EDIT: Here's the middle ground approach I went with that seems to be working. First, I load a JavaScript file called preload.js that is included like this:
<script src="/preload.js" defer></script>
This is the content of the preload.js file:
// Optimization to preload all the JavaScript modules. We don't want to server push or preload them
// too soon, as that negatively impacts the Google PageSpeed Insights score (which is odd, but true).
// Instead, we start to load them once this file loads.
let paths = window.preloadJavaScriptPaths || [],
    body = document.querySelector('body'),
    element;

paths.forEach(path => {
    element = document.createElement('link');
    element.setAttribute('rel', 'preload');
    element.setAttribute('as', 'script');
    element.setAttribute('crossorigin', 'anonymous');
    element.setAttribute('href', path);
    body.appendChild(element);
});
The backend creates a variable on the window object called preloadJavaScriptPaths. It is just an array of strings (each string being a path to a JavaScript file, such as /app.js).
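For illustration, the markup the backend emits might look something like this (the exact paths are placeholders):

<script>
    // Rendered server-side: the list of module URLs that preload.js
    // will turn into <link rel="preload"> tags once it runs.
    window.preloadJavaScriptPaths = ['/app.js', '/modules/slideshow.js'];
</script>
<script src="/preload.js" defer></script>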
The page still loads pretty fast and the PSI score is still good (80 mobile, 97 desktop):
I have a simple single-page app that scrolls a bunch of photos indefinitely, meant to be run on displays for days at a time.
Because of the large number of scrolling pics, the memory use in Chrome keeps growing. I want a way to programmatically reduce the memory consumption at intervals (every few hours).
When I stop the animation programmatically, the memory footprint still doesn't go down. Even when I reload the page with location.reload();, it doesn't go down.
Is there a way to do this? How can I programmatically "clear everything" with the same effect as closing the tab?
FYI, in Firefox there isn't a memory issue. Just in Chrome. The code is super simple, uses requestAnimationFrame to animate two divs across the screen constantly. I'm not accumulating references anywhere. My question isn't about my code specifically, but rather about general ways to reset the tab's memory if it can be done.
Please use the Chrome or Firefox memory debugger to find out where the memory leak is. Then, once you find it, think about how you can clean up those objects.
Reasonable causes of memory leaks are:
You are loading big images; you need to resize them on the server, or simply draw them onto smaller canvases.
You have too many DOM elements (for example, more than 10,000).
You have some JS objects that keep growing and that you never clean up.
In that case the task manager will show that memory usage is very high. In Firefox you can see what is going on with memory by typing about:memory into the address bar and then pressing the "Measure" button.
You can use location.reload(true); this clears all the memory (caches etc.). Moreover, if you want to further clear the browser's storage, you can use localStorage.clear() in your JavaScript code. This clears the items stored in localStorage if you have saved something like localStorage.myItem = "something".
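Put together, a periodic "reset" might look roughly like this (a sketch only; note that the true argument to location.reload() is a legacy, non-standard hint and modern Chrome ignores it):

// Every few hours, drop persisted storage and force a full page reload.
setInterval(function () {
    localStorage.clear();       // remove anything saved via localStorage.myItem = "..."
    sessionStorage.clear();
    location.reload(true);      // legacy "force reload" flag; ignored by modern Chrome
}, 4 * 60 * 60 * 1000);         // 4 hours, an arbitrary interval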
I have a web app that uses a lot of JavaScript and is intended to run non-stop (for days/weeks/months) without a page reload.
However, Chrome is crashing after a few hours. Safari doesn't crash as often, but it does slow down considerably.
How can I check whether or not the issues are with my code, or with the browser itself? And what can I do to resolve these issues?
Using the Chrome Developer Tools profiler you can get a snapshot of what's using your CPU and take a memory snapshot.
Take 2 snapshots. Select the first one and switch to the Comparison view, as shown below.
The triangle column is the mathematical symbol delta, meaning change. So if your deltas are positive, you are creating more objects in memory. I would then take another snapshot after a given period of time, say 5 minutes, and compare the results again, looking at the deltas.
If your deltas are constant, you are doing a good job at memory management. If they are negative, your code is clean and your used objects are being properly collected; again, a great job.
If your deltas keep increasing, you probably have a memory leak.
Also,
document.getElementsByTagName('*').length; // a count of all DOM elements
would be useful to see if the number of DOM elements is steadily increasing.
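A quick way to watch that number over time (a sketch; the interval is arbitrary):

// Log the total DOM element count every 30 seconds; a steadily climbing number
// suggests nodes are being added (or leaked) somewhere without being removed.
setInterval(function () {
    console.log('DOM elements:', document.getElementsByTagName('*').length);
}, 30000);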
Chrome also has the "about:memory" page, but I agree with IAbstractDownVoteFactory - developer tools are the way to go!
I've been building Conway's Life with JavaScript / jQuery in order to run it in a browser here. Chrome, Firefox, Opera and Safari do this pretty fast, so preferably don't use IE for this. IE9 is OK though.
While generating the new generations of Life I am storing the previous generations in order to be able to walk back through the history. This works fine until a certain point when memory fills up, which makes the browser(tab) crash.
So my question is: how can I detect when memory is filling up? I am storing an array for each generation in an array which forms the history of generations. This takes massive amounts of memory which crashes the browser after a few thousands of generations, depending on available memory.
I am aware of the fact that javascript can't check the amount of available memory but there must be a way...
I doubt that there is a way to do it. Even if there is, it would probably be browser-specific. I can suggest a different way, though.
Instead of storing all the data for each generation, store snapshots taken every once in a while. Since the Conway's Game of Life is deterministic, you can easily re-generate future frames from a given snapshot. You'll probably want to keep a buffer of a few frames so that you can make rewinding nice and smooth.
In reality, this doesn't actually solve the problem, since you'll run out of space eventually. However, if you store every nth frame, your application will last n times longer, which might just be long enough. I would recommend that you impose some hard limit on how far into the past you can rewind so that you have a cap on how much you have to store. Determine how many frames that would be (10 minutes at 30 FPS = 18,000 frames). Then divide that by how many frames you can store (profile various web browsers to figure this out), and that is the interval between snapshots you should use.
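A sketch of that snapshot-and-replay idea (step() stands in for your existing next-generation function, which is an assumption, and the grid is assumed to be a 2D array):

// Keep a full copy of the grid only every SNAPSHOT_INTERVAL generations.
// To rewind, restore the nearest earlier snapshot and re-simulate forward.
var SNAPSHOT_INTERVAL = 100;
var snapshots = {};                      // generation number -> copied grid

function saveIfNeeded(generation, grid) {
    if (generation % SNAPSHOT_INTERVAL === 0) {
        snapshots[generation] = grid.map(function (row) { return row.slice(); });
    }
}

function rewindTo(generation) {
    var base = generation - (generation % SNAPSHOT_INTERVAL);
    var grid = snapshots[base].map(function (row) { return row.slice(); });
    for (var g = base; g < generation; g++) {
        grid = step(grid);               // your existing next-generation function
    }
    return grid;
}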
Dogbert pretty much nailed it. You can't know exactly how much available memory there is but you can know how potentially large your dataset will be.
So, take the size of each object stored in the array, multiply by the array dimensions, and that's the size of one iteration. Multiply that by the desired number of iterations to see how much space it will take in total, and adjust accordingly.
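As a back-of-the-envelope sketch (the per-cell size and grid dimensions are made-up numbers, not measurements):

// Rough memory budget: bytes per cell * cells per generation * generations kept.
var bytesPerCell = 8;                   // e.g. one JS number per cell (optimistic)
var cellsPerGeneration = 200 * 200;     // a 200 x 200 grid
var generationsKept = 5000;
var approxBytes = bytesPerCell * cellsPerGeneration * generationsKept;
console.log((approxBytes / (1024 * 1024)).toFixed(1) + ' MB'); // ~1525.9 MB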
Or, inspired by Travis, simply run the pattern in reverse from the last known array. It is deterministic after all.