We’re using Chrome for an interactive installation and appear to have hit some sort of image loading ceiling.
The app is built for a multitouch device and runs at 1920x1080. It's built on Backbone and involves rendering and removing a large number of views that contain sprite animations (driven by stepping through transparent PNG background images).
We’re preloading all of the images and listening for completion using David Desandro’s imagesloaded plugin. This worked perfectly at first (with fewer assets) and appears to work now, until you navigate further into the application. Despite the absence of 404s in the console and the confirmed presence of the files, some of the images aren't loaded and simply don't appear. The problem persists even if we don't preload the images.
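For reference, the preload step looks roughly like this (a simplified sketch; the container selector and log messages are illustrative, not our actual code):

// Rough sketch of our preload step using the imagesloaded plugin.
// '#preload-container' and the messages below are illustrative only.
var imgLoad = imagesLoaded( document.querySelector('#preload-container') );

imgLoad.on( 'done', function( instance ) {
  // every image reported as loaded; safe to start rendering views
  console.log('all ' + instance.images.length + ' images loaded');
});

imgLoad.on( 'fail', function( instance ) {
  // at least one image is broken - which is NOT what we see: no 404s, yet images vanish
  console.log('some images failed to load');
});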
The typical size of an animation sequence is 92250px x 450px, and they come in anywhere between 1 MB and 10 MB each (that's after they've been optimised using the compressors behind grunt-contrib-imagemin). The image assets total around 300 MB.
What we’ve tried:
Applying cache-related flags (such as --disk-cache-size) in the command-line arguments when launching Chrome (http://peter.sh/experiments/chromium-command-line-switches/).
Caching all of the media assets using the HTML5 cache manifest.
Testing on different machines, both Mac and PC. This produces the same results.
What we’re currently trying:
Reducing the size of all of the images by removing every other frame in the animations. This isn't ideal.
Changing the animation method to switch out (preloaded) individual images rather than sprites.
Preloading images in batches just before they're about to be added (not ideal).
Ideally we'd like to remove whatever ceiling it is that we've hit. Any help/insights would be appreciated!
Whilst we haven't really got to the bottom of what is causing this issue, we have implemented a work around. Switching out the large sprite sheet animations for individual PNG frames does the job. I'd still be interested to learn exactly what the issue/limitation was for future reference...
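In case it's useful to anyone else, the frame-swapping workaround is conceptually this simple (the paths, frame count and fps below are illustrative, not our production code):

// Sketch of the individual-frame workaround: instead of stepping a huge sprite
// sheet via background-position, each frame is preloaded as its own image and
// the src of a single <img> is swapped on a timer.
function playFrames(imgEl, basePath, frameCount, fps) {
  var frames = [];
  for (var i = 0; i < frameCount; i++) {
    var img = new Image();
    img.src = basePath + '/frame-' + i + '.png'; // e.g. /anims/walk/frame-0.png (illustrative)
    frames.push(img);
  }

  var current = 0;
  var timer = setInterval(function () {
    imgEl.src = frames[current].src;
    current = (current + 1) % frameCount;
  }, 1000 / fps);

  return function stop() { clearInterval(timer); };
}

// Usage (illustrative):
// var stop = playFrames(document.getElementById('anim'), '/anims/walk', 24, 30);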
Prod
For desktop, I have a site with a decent page speed score (currently, 96): https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.usstoragecenters.com%2Fstorage-units%2Fca%2Falhambra%2F2500-w-hellman-ave&tab=desktop
Stage
I'm trying to improve the score (mostly for mobile), but I've somehow made it worse (currently, 69 on desktop): https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fstage.usstoragecenters.com%2Fstorage-units%2Fca%2Falhambra%2F2500-w-hellman-ave%3Fplain%3Dtrue&tab=mobile
Problem
While converting the site from Angular (the first link) to plain JavaScript (second link), I've managed to lower the desktop Google PageSpeed Insights score from 96 to 69.
I've massively reduced the amount of JavaScript and other resources (2 MB on prod vs 500 KB on stage).
Analysis
Looking through the numbers, the thing that stands out to me is that prod has an FCP (First Contentful Paint) of 0.7 seconds, while stage has an FCP of 2.0 seconds. This seems weird to me since stage should be much faster, but is apparently much slower.
Looking at the mobile timeline of thumbnails (desktop is a bit hard to see), it appears as if stage renders the first "full content" much faster:
I highlighted the ones that visually look "complete" to me (stage is on top, prod is on bottom).
Screenshots
Here are some screenshots so you can see what I do (PageSpeed Insights differs fairly significantly each time it's run).
Here is stage:
And here is production:
Summary of Changes
Here are the main things I did when trying to improve the score:
I converted the JavaScript from Angular to plain JavaScript, significantly reducing the JavaScript required to render the page.
I lazy loaded JavaScript (e.g., Google Maps JavaScript isn't loaded until it's needed).
I lazy loaded images (e.g., the slideshow initially only loads the first image); a rough sketch of the approach is shown below.
I reduced the number of DOM elements (from 4,600 to 1,700).
I am using HTTP/2 server push to load the new plain JavaScript as fast as possible.
Those changes should have improved the score.
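For reference, the image lazy loading mentioned above is along these lines (a simplified sketch; the data-src attribute convention is illustrative, not necessarily exactly what's in the code):

// Images start out with a data-src attribute and only get a real src
// once they approach the viewport.
const lazyImages = document.querySelectorAll('img[data-src]');

const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach(entry => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src;   // swap in the real image
    obs.unobserve(img);          // each image only needs to load once
  });
}, { rootMargin: '200px' });     // start loading slightly before it scrolls into view

lazyImages.forEach(img => observer.observe(img));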
Question
Do you have any ideas of why, despite my best efforts, the PageSpeed score tanked?
I'd recommend you look into the difference in how 3rd-party scripts are included between your Prod and Staging environments.
Most of the time when I have problems with PageSpeed, it's a 3rd-party script that causes the drop. YMMV, though.
Just a pointer to start: as I compared the stats between the two, I noticed that this particular Wistia script behaves quite differently. It may not be a problem with the script itself; perhaps the way it's embedded is different or something.
On Prod
Wistia: Main-Thread Blocking Time: 3 ms (section: Minimize third-party usage)
Wistia: Total CPU Time: 87 ms (section: JavaScript execution time)
Wistia: Script Evaluation: 76 ms (section: JavaScript execution time)
On Staging
Wistia: Main-Thread Blocking Time: 229 ms (section: Reduce the impact of third-party code)
Wistia: Total CPU Time: 425 ms
Wistia: Script Evaluation: 376 ms
The issue you have
You have done a lot of things correctly, but your score is suffering because of First Meaningful Paint and First Contentful Paint.
Looking at the load order etc., I noticed that your main HTML file has actually increased in size by 33%, from 60 KB to 81.6 KB.
That is your first indicator of where things are going wrong, as the browser must load all of the HTML before it can even begin to think about rendering.
The next issue is that Lighthouse (the engine behind PSI) is showing that you don't have render-blocking content, but I don't think its method of identifying what blocks rendering is perfect.
Your site still needs the SVG logo and icomoon files to render everything above the fold.
On the main site these are loaded early; on the staging site they are deferred and start loading much later, delaying your First Contentful Paint etc.
There may be other things but those are a couple I found with a quick glance.
How to fix the couple of items I mentioned
HTML size - maybe externalise some of the JSON etc. that you have inlined, as there is a lot there, and lazy load it in instead (a suggestion only; I haven't explored whether it's feasible for you). A rough sketch follows this list.
SVG Logo - simple to fix, grab the actual text that makes up the logo and inline it instead of using an external resource.
icomoon - not quite as easy to fix but swap all your icons for inline SVGs.
Bonus - by changing your icons from fonts to SVG you help with accessibility for people who have their own style sheets that override fonts (as fonts used for icons get overridden and make no sense).
Bonus 2 - One less request!
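As mentioned in the first point, here is a rough sketch of externalising the inlined JSON and fetching it after first paint (the path and the initMap() consumer are placeholders, not the real site's code):

// Move the big JSON blob out of the HTML and fetch it once the page has loaded.
// '/data/facility.json' and initMap() are hypothetical names used for illustration.
window.addEventListener('load', function () {
  fetch('/data/facility.json')
    .then(function (response) { return response.json(); })
    .then(function (data) {
      // hydrate the parts of the page that need the data
      initMap(data);
    });
});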
How to Identify Problems
If anyone comes across problems like this you need to do the following to work out what is going on.
Open Developer Tools and go to network tab first.
Tick the 'Disable cache' checkbox and select 'Slow 3G' in the throttling dropdown.
Load each version of the site and compare the waterfall.
Normally you can spot changes in load order and start investigating them - the 'game' is to find items that appear above the fold and try and remove them, defer them or inline them as you have with some of your CSS.
Next thing is to learn to use the Coverage and Rendering tabs as they very quickly point you to problems.
Finally learn how to use the performance tab and understand the trace it produces.
You may already know how to use the above, but if not, learn them; they will let you find all of your problems quickly.
So I figured out the issue. PageSpeed Insights is drunk.
Well, it's unreliable anyway. I was able to dramatically improve the score by simply removing the server pushing of the JavaScript files (less than 20KB of them).
This is weird because the page actually seems to take longer to display. However, Google PageSpeed Insights thinks it's displaying sooner, and so it improves the score.
One time I tried, the mobile score went up to 99:
I tried again and got a more reasonable 82:
And on desktop, the score went up to 98:
The interesting thing about the mobile screenshot showing 99 is that you can see in the timeline thumbnails that the image for the slideshow at the top of the page hasn't loaded yet. So it seems like a case of Google PSI prematurely deciding that the page has "finished loading", even though it hasn't finished.
It's almost like if you delay certain things long enough, Google will ignore them. In other words, the slower the page is, the better the score they will give you. Which is of course backwards.
In any event, this might be one of those things where I'll go with the slightly less optimal approach in order to achieve a higher score. There may also be a middle ground I can explore (e.g., have the first JavaScript file inject link rel=preload tags in order to load the rest of the JavaScript files immediately rather than wait for the full chain of modules to resolve).
If somebody can come up with a more satisfactory explanation, I'll mark that as the answer. Otherwise, I may end up marking this one as the answer.
Middle Ground Approach
EDIT: Here's the middle ground approach I went with that seems to be working. First, I load a JavaScript file called preload.js that is included like this:
<script src="/preload.js" defer></script>
This is the content of the preload.js file:
// Optimization to preload all the JavaScript modules. We don't want to server push or preload them
// too soon, as that negatively impacts the Google PageSpeed Insights score (which is odd, but true).
// Instead, we start to load them once this file loads.
let paths = window.preloadJavaScriptPaths || [],
    body = document.querySelector('body'),
    element;

paths.forEach(path => {
    element = document.createElement('link');
    element.setAttribute('rel', 'preload');
    element.setAttribute('as', 'script');
    element.setAttribute('crossorigin', 'anonymous');
    element.setAttribute('href', path);
    body.appendChild(element);
});
The backend creates a variable on the window object called preloadJavaScriptPaths. It is just an array of strings (each string being a path to a JavaScript file, such as /app.js).
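In other words, something along these lines ends up in the page before preload.js runs (the file names here are examples only):

// Set by the backend; the actual file names depend on the build.
window.preloadJavaScriptPaths = ['/app.js', '/vendor.js'];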
The page still loads pretty fast and the PSI score is still good (80 mobile, 97 desktop):
I have an existing website (a photo blog) that loads the majority of the photos from Flickr. I'd like to enhance the experience for users with high resolution screens and load higher res versions of photos, but I'm not sure what would be the best way to go.
Since the images in question are not UI elements, there is no sensible way to solve this problem with CSS. That leaves either client-side JavaScript or a server-side find-and-replace for specific image patterns (since they come from Flickr, they're easy to detect and it's easy enough to figure out the URL for a double-sized image).
For the client side, my concern is that even the regular-sized images are 500-800 KB in size and there can be 10-30 images per gallery, causing lots of excess bandwidth use for retina users.
For the server side, it's obviously tricky to determine whether the request comes from a retina device or not. One idea I had (which I have yet to test out) was to run a JavaScript function that checks window.devicePixelRatio and sets a cookie accordingly; on each successive page request the server would then know if the device is high-res or not. That leaves the entry page with non-retina images, but at least all the following ones will have high-res images loaded right away.
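Concretely, the cookie-setting part of the idea would be something like this (the cookie name 'dpr' is arbitrary):

// Record the devicePixelRatio so the server can decide which image size
// to send on subsequent requests.
(function () {
  var dpr = window.devicePixelRatio || 1;
  document.cookie = 'dpr=' + dpr + '; path=/; max-age=' + (60 * 60 * 24 * 365);
})();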
Are there any pitfalls to this proposed solution? Are there better ways to handle it?
You can generate CSS on the server that has links to both the regular-size and double-size images in the background-image property. This CSS can easily be different for every page by including it in a <style> tag and referencing classes/IDs that only exist on that page. This deals with the traffic issue, since the majority of modern browsers don't load images intended for other DPIs.
Another solution (though a worse one) would be to not just set a cookie but also load the images with JavaScript. This solves the initial first-page issue, but slows down page rendering a little.
I'm trying to make a sick slider like this http://learnlakenona.com/
But I'm having major performance problems in Chrome (it seems their site doesn't even run the slider on Chrome). I'm displaying the images using background-image CSS on divs; adding them as img tags caused even more lag.
I disabled all JavaScript and noticed I was still getting lag just from having some huge images sitting on top of each other.
Here are just some images sitting there. Try changing the size of the panels; redrawing the images locks it up. (Sometimes it runs okay for a bit; it's weird.)
http://jsfiddle.net/LRynj/2/
Does anyone have an idea how I can get acceptable performance on a slider like this!? (images need to be pretty big)
Optimize your image assets by resizing them in an image editor or lowering the quality.
You can also use free tools online:
http://sixrevisions.com/tools/8-excellent-tools-for-optimizing-your-images/
The site uses HUGE transparent PNGs for a parallax effect, so:
you'll have to reduce the total weight of your images: you can try to convert these PNGs to PNG-8 if they aren't already. Quantization algorithms and such do a very good job of reducing images to 256 colors without too much degradation of quality.
you have to keep transparency for the parallax effect. Both types of transparency are compatible with PNG-8: GIF-like opaque/transparent transparency (one bit per pixel) and "PNG-32"-like transparency (PNG-24 + 8 bits of alpha), where each pixel has 256 levels of transparency. Adobe Fireworks is particularly good at creating such "PNG-8+alpha" images; converters also exist, but they're not perfect (it depends on your images).
loading only the part of each image that is seen immediately, and the rest of your 9600px-wide (!) images later, would greatly reduce the time to first view. You can achieve that by slicing your images into chunks of 1920 or 2560px, loading the viewed part of the 3 images as background images, and loading all the other parts via a script that executes only once the DOM is ready. Don't use too many parts, because that implies more assets to download, but still not a 4 MB sprite :) Bonus: by slicing your 3 images as PNG-8, each PNG will have its own 256-color palette and overall quality will be better (not as perfect as PNG-24, but better than a single 9600px PNG-8 that can only use 256 colors in total). More shades of grey for the man's suit in one PNG, more shiny colors for the ball-and-stick molecule, etc.
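To illustrate that last point, the "load the rest after the DOM is ready" part could look roughly like this (the class name and data attribute are made up):

// The first slice of each panorama is set as a background image in the CSS;
// the remaining slices are swapped in after DOMContentLoaded.
document.addEventListener('DOMContentLoaded', function () {
  var panels = document.querySelectorAll('.panel[data-full-src]');
  Array.prototype.forEach.call(panels, function (panel) {
    var img = new Image();
    img.onload = function () {
      panel.style.backgroundImage = 'url(' + panel.getAttribute('data-full-src') + ')';
    };
    img.src = panel.getAttribute('data-full-src');
  });
});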
EDIT: and don't ever try to load that on a smartphone! I like media queries and avoid UA detection because it isn't needed most of the time and is never perfect, but this is one of the cases (choosing whether to load 8 MB of images) where it'll be welcome... Ignoring users who don't have optic fibre and won't wait for your site to display is another issue, not related to your question.
Does anyone know why JavaScript performance would be affected by the loading of lots of external JPG/PNG images into HTML5 Image() objects, totalling approx 200 MB-250 MB? Performance also seems to be affected by the cache, i.e. if the cache is full(-ish) from previous browsing, performance on the current site is greatly reduced.
There are two ways I can crudely solve it:
Clear the cache manually.
Minimise the browser, wait about 20 seconds and re-open it, by which time iOS/the browser has reclaimed the memory and runs the JS as it should.
I would have expected iOS to reclaim the memory required to run the current task, but it seems not. Another workaround is to load 200 MB of 'cache clearing' images into Image() objects, then remove these by setting the src = "". This does seem to help, but it's not an elegant solution...
please help?
First and foremost, read the excellent post on the LinkedIn Engineering blog. Read it carefully and check whether there are some optimizations that you can also try in your application. If you have tried all of them and they still haven't solved your performance issues, read on.
I assume that you have some image gallery or magazine-style content area on your page.
How about having this image area in a separate iframe? What you could do then is this (a rough sketch follows the list):
Have two iframes. Only one should be visible and active at a time.
Load images into the first iframe. Track the size of the loaded images. If exact size tracking is hard, numberOfLoadedImages * averageImageSize might be a pretty good approximation.
As that number approaches some threshold, start preloading the currently visible content into the second iframe.
Flip the visibility of the iframes so the second one becomes active.
Clear the inner content of the first frame.
Repeat the whole procedure as necessary.
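A very rough sketch of that flip, assuming both iframes are same-origin (the threshold, the ids and the loadImagesInto() helper are hypothetical; only the flip/clear mechanics matter):

// Two hidden/visible iframes that take turns holding the gallery content.
var frames = [document.getElementById('galleryA'), document.getElementById('galleryB')];
var active = 0;
var loadedBytes = 0;
var AVERAGE_IMAGE_SIZE = 600 * 1024;   // assumed average image size, in bytes
var THRESHOLD = 150 * 1024 * 1024;     // flip when roughly 150 MB have been loaded

function trackImageLoaded() {
  loadedBytes += AVERAGE_IMAGE_SIZE;   // numberOfLoadedImages * averageImageSize
  if (loadedBytes > THRESHOLD) {
    flipFrames();
    loadedBytes = 0;
  }
}

function flipFrames() {
  var next = 1 - active;
  // loadImagesInto() is a hypothetical helper that re-creates the currently
  // visible content inside the spare iframe.
  loadImagesInto(frames[next]);
  frames[next].style.display = 'block';
  frames[active].style.display = 'none';
  // Clearing the hidden iframe should let the engine release its image memory.
  frames[active].contentDocument.body.innerHTML = '';
  active = next;
}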
I don't know for sure whether this would work for you, but I hope that the WebKit engine on the iPad clears the memory of frames independently.
EDIT: It turned out you're writing a game.
If it's a game I assume that you want to have many game objects on the screen at the same time and you won't be able to simply unload some parts of them. Here are some suggestions for that case:
Don't use DOM for games: it's too memory-heavy. Fortunately, you're using canvas already.
Sprite your images. Image sprites not only help reduce the number of requests; they also let you reduce the number of Image objects and keep the per-file overhead lower. Read about using sprites for canvas animations on the IE blog. (A minimal drawImage sketch follows this list.)
Optimize your images. There are several file size optimizers for images. SmushIt is one of them. Try it for your images. Pay attention to other techniques discussed in this great series by Stoyan Stefanov at YUI blog.
Try vector graphics. SVG is awesome and canvg can draw it on top of canvas.
Try simplifying your game world. Maybe some background objects don't need to be that detailed. Or maybe you can get away with fewer sprites for them. Or you can use image filters and masks for different objects of the same group. As Dave Newton said, the iPad is a very constrained device, and chances are you can get away with relatively low-quality sprites.
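As promised above, here is a minimal sketch of stepping through a sprite sheet with drawImage on a canvas (the sheet path, frame size and frame count are made up):

// Draw one frame at a time from a single-row sprite sheet.
var canvas = document.getElementById('game');   // assumed <canvas id="game">
var ctx = canvas.getContext('2d');
var sheet = new Image();
sheet.src = '/sprites/character.png';           // illustrative path

var FRAME_W = 128, FRAME_H = 128, FRAME_COUNT = 20;
var frame = 0;

sheet.onload = function tick() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  // Source rectangle selects the current frame inside the sheet;
  // destination rectangle places it on the canvas.
  ctx.drawImage(sheet, frame * FRAME_W, 0, FRAME_W, FRAME_H, 0, 0, FRAME_W, FRAME_H);
  frame = (frame + 1) % FRAME_COUNT;
  requestAnimationFrame(tick);
};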
These were all suggestions related to reduction of data you have to load. Some other suggestions that might work for you.
Preload images that you will need and unload images that you no longer need. If your game has "levels" or "missions", load only the sprites needed for the current one.
Try loading "popular" images first and download the remaining ones in the background. You can use a separate <iframe> for that so your main game loop won't be interrupted by downloads. You can also use cross-frame messaging in order to coordinate your downloader frame.
You can store the very most popular images in localStorage, Application Cache and WebSQL. They can provide you with 5 mb of storage each. That's 15 megs of persistent cache for you. Note that you can use typed arrays for localStorage and WebSQL. Also keep in mind that Application Cache is quite hard to work with.
Try to package your game as a PhoneGap application. This way you can save your users from downloading a huge amount of data before playing the game. 200 megs as a single download just to open a page is way too much. Most people won't even bother to wait for that.
Other than that, your initial suggestion to override the cache with your own images is actually valid. Just don't do it straight away. Explore the possibilities for reducing the download size of your game instead.
I managed to reduce the impact by setting all the images that aren't currently in the viewport to display:none. This was with background images, though, and I haven't tested with over 100 MB of images, so I can't say whether this truly helps. But it's definitely worth trying.
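A minimal sketch of that idea, in case it helps (the class name is made up):

// Hide every slide except the current one so the engine can drop
// the decoded bitmaps of the hidden ones.
function showSlide(index) {
  var slides = document.querySelectorAll('.slide');
  Array.prototype.forEach.call(slides, function (slide, i) {
    slide.style.display = (i === index) ? 'block' : 'none';
  });
}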
I was wondering if there are any other ways to compress my images, or any script that would load the page faster or load the images behind the scenes?
The site is very interactive and uses very high-quality layers of images for the main layout. I have already used Save for Web & Devices in Photoshop and re-compressed using ImageOptim; some are JPEGs, but the majority are PNG-24 to maintain transparency, and they are all set in CSS.
I have used JPEGs and CSS sprites where I can, but there is one image in particular, a tree illustration stretching the full length of the site, that is really slowing down the loading time. Is there any way I could compress these images further, or code them differently, that I have missed?
Any help would be great thanks!
You said you are spriting. That is good.
You can also use tools such as PNGcrush, which attempt to make files smaller by dropping things such as metadata.
You should also send far-future expiry headers and use a cache breaker on your images, to ensure the images won't be downloaded again unnecessarily.
In Photoshop, choose File -> Save for Web; there you will be able to find the best compromise between size and quality.
Do you really need the transparency there? PNG transparency is unsupported in some browsers and makes the page processing-intensive and slow even on high-end computers, depending on image size and the quantity of layers. If you can show something of your site, maybe someone can give more hints about how to optimize it.
You can compress them on the fly with Apache if that's your web server. One of many available articles on the subject: http://www.samaxes.com/2009/01/more-on-compressing-and-caching-your-site-with-htaccess/