My website has a jQuery script (from the Shadow animation jQuery plugin) which constantly changes the colour of the box shadow of various <div>s on the home page.
The animation is not essential, but it does take up a lot of CPU time on slower machines.
Is it possible to find out if the script will run 'too slowly'? I can then disable it before it impacts performance.
Is this even a good idea? If not, is there an easy way to break up the jQuery animate?
This may indirectly solve your problem. Pick a few algorithms and performance tests from this site http://dromaeo.com/ that seem similar to your jQuery plugin. Don't run comprehensive tests as they do on the site. Instead, pick fairly small and fast algorithms, and run them for an unnoticeable period of time.
Use a tiny predefined time span to limit how long these tests are allowed to run. Say that span is 200 ms: if a fast machine with browser A completes 100 iterations in that time while some random user's machine only manages 5, you may want to disable the animation on that user's machine. Tweak the numbers until you find thresholds that work.
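A minimal sketch of that idea (the 200 ms budget, the threshold of 50 iterations, and the enableAnimation() helper are all placeholders you would tune and define yourself):

    // Run a small throwaway workload for a fixed time budget and count how
    // many iterations complete before the budget runs out.
    function quickBenchmark(budgetMs) {
      var iterations = 0;
      var start = Date.now();
      while (Date.now() - start < budgetMs) {
        // Cheap but non-trivial busywork; any small string/math task will do.
        var s = '';
        for (var i = 0; i < 1000; i++) {
          s += (Math.sqrt(i) * Math.random()).toString(36);
        }
        iterations++;
      }
      return iterations;
    }

    if (quickBenchmark(200) > 50) {
      enableAnimation(); // hypothetical helper that switches the shadow animation on
    }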
As a bonus, send all test results back to your server so you have a better idea of where your users lie in the speed spectrum. If a big majority of users are using slower computers and older browsers, then it just may make sense to remove the thing altogether.
You could probably do it by timing a few iterations of a loop that does some intensive processing on page load, but that's going to slow the page down and add even further to the CPU load, so it doesn't seem like a great solution.
A compromise I've used in the past, though, is to make the decision based on browser version: for example, Internet Explorer 6 users get simpler content, whereas newer browsers with better JavaScript performance get the animation. That seemed to work pretty well at a practical level. In practice, browser choice is a big factor in JavaScript performance, and you might get a 90% fit with what you want very simply just by taking that into account.
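A rough sketch of that compromise, assuming crude user-agent sniffing for old IE is acceptable (startShadowAnimation() is a hypothetical wrapper around the plugin setup):

    // Treat IE6 and earlier as "too slow to animate"; everyone else gets the effect.
    var isOldIE = /MSIE [1-6]\./.test(navigator.userAgent);

    if (!isOldIE) {
      startShadowAnimation(); // hypothetical function that starts the jQuery plugin
    }
    // Old IE simply keeps the static box shadow.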
You could do something like $(window).width() to get the browser width. Using this you could make the assumption that anything < 1024px wide is likely to be either a netbook, smartphone or old computer.
This wouldn't be nearly as accurate as timing a loop, but it's much more efficient.
Obviously this rule is a generalisation, and there will be slow computers with screens wider than 1024px. But in general, a 1024px+ computer would typically be able to handle a fair bit of JavaScript (until the owner piles on loads of software, virus scanners and browser toolbars!).
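For what it's worth, a quick sketch of that check (assumes jQuery is already on the page; enableAnimation() is a hypothetical function you'd define):

    $(function () {
      // Treat viewports 1024px or wider as probably desktop-class machines.
      if ($(window).width() >= 1024) {
        enableAnimation(); // hypothetical: switch the heavy effects on
      }
      // Narrower viewports (netbooks, phones, old machines) keep the static style.
    });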
Hope this is useful!
About 2 months ago I got back to work on my fully custom ecommerce site, optimizing the front end for PageSpeed, given that Googlebot now measures parameters such as CLS and LCP and uses them as part of deciding whether a particular page gets indexed by the Google crawler.
I did all the optimizations I could manage, such as:
critical CSS extracted and inlined
all non-critical CSS merged and served inline
mod_pagespeed
images and JS served from a CDN
nginx optimization
non-core JS moved down near the closing body tag where possible
non-critical JS deferred
preloading, prefetching and many other things I no longer remember, done during the long nights spent studying the template
So now I've reached a great result compared to 2 months ago.
The only thing I can't explain is why I have a high Time to Interactive, CLS and LCP on mobile, when the desktop version is just fine.
I'm puzzling over it, trying to find the key to the solution.
This is an example of a product page: https://developers.google.com/speed/pagespeed/insights/?hl=it&url=https%3A%2F%2Fshop.biollamotors.it%2Fcatalogo%2FSS-Y6SO4-HET-TERMINALI-AKRAPOVIC-YAMAHA-FZ-6-S2-FZ-6-FAZER-S2-2004-2009-TITANIO--2405620
and here is the homepage, which has acceptable values compared to the product page, but its Time to Interactive is still much higher than on desktop: https://developers.google.com/speed/pagespeed/insights/?hl=it&url=https%3A%2F%2Fshop.biollamotors.it%2F
Thanks in advance to all the experts who can help me.
Why are my mobile scores lower than desktop?
The mobile test uses network throttling and CPU throttling to simulate a mid-range device on 4G, so you will always get lower scores for mobile.
You may find a related answer I gave useful on the difference between how the page loads as you see it and how Lighthouse applies throttling (along with how to make Lighthouse use applied throttling so you can see the slowdown in real time); however, I have included the key information in this answer, along with how it applies to your circumstances.
Simulated Network Throttling
When you run an audit it applies 150 ms of latency to each request and also limits download speed to 1.6 megabits per second (200 kilobytes per second) and upload to 750 kilobits per second (94 kilobytes per second).
This is very likely to affect your Largest Contentful Paint (LCP) as resources take a lot longer to download.
This throttling is done via an algorithm rather than applied (it is simulated) so you don't see the slower load times.
CPU throttling
Lighthouse applies a 4x slowdown to your CPU to simulate mid-tier mobile phone performance.
If your JavaScript payload is heavy this could block the main thread and delay rendering. Or if you dynamically insert elements using JavaScript it can delay LCP for the same reason.
Of the items you listed, this affects your Time to Interactive the most.
This throttling is also done via an algorithm rather than applied (it is simulated) so you don't see the slower processing times.
What do you need to do to improve your scores?
Focus on your JavaScript.
You have 5 blocking scripts for a start; just add defer to them as you have done with the others (or, better yet, async if you know how to handle async JS loading).
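If you do want to handle async loading yourself, one common pattern is to inject the script once the document has been parsed; the bundle path below is just a placeholder:

    // Injected scripts don't block HTML parsing, and async (the default for
    // injected scripts) lets them execute as soon as they arrive.
    function loadScriptAsync(src) {
      var s = document.createElement('script');
      s.src = src;
      s.async = true;
      document.head.appendChild(s);
    }

    document.addEventListener('DOMContentLoaded', function () {
      loadScriptAsync('/js/non-critical-bundle.js');
    });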
Secondly, the payload is over 400 kB of JS (uncompressed) - if you look, you'll notice your scripts take 2.3 seconds to evaluate.
Strip out anything you don't need, and also run a trace in the "Performance" tab in Developer Tools and learn how to find long-running tasks and performance bottlenecks.
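As a complement to DevTools traces, here is a rough sketch of logging long tasks from the page itself via the Long Tasks API (Chromium-based browsers only, so it is wrapped in a feature check):

    if ('PerformanceObserver' in window) {
      try {
        var longTaskObserver = new PerformanceObserver(function (list) {
          list.getEntries().forEach(function (entry) {
            // Any task over 50 ms blocks the main thread and hurts TTI.
            console.log('Long task: ' + Math.round(entry.duration) + ' ms', entry);
          });
        });
        longTaskObserver.observe({ entryTypes: ['longtask'] });
      } catch (e) {
        // The 'longtask' entry type isn't supported everywhere.
      }
    }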
Look at reducing the number of network requests: because of the higher latency of a 4G network, this can add several seconds to your load time if you have a lot of requests.
Combine as much CSS and JS as you can (and you only need to inline your critical CSS, not the entire page's CSS: find the rules used "above the fold" and inline only those; at the moment you seem to be sending the whole site's CSS inline, which adds a lot of page weight).
Finally, your high Cumulative Layout Shift (CLS) is (in part) because you are using JS to hide items on page load (for example, the modal with ID comp-modal appears to be hidden with JS), but they have already rendered by the time that JS runs. This is easily fixed by hiding them in your inline CSS rather than with JavaScript.
Other than that, you just need to follow the guidance the Lighthouse report gives you (you may not have paid much attention to the "Diagnostics" section yet; start there with anything that has a red triangle or orange square next to it, as each item provides useful information on things that may need optimisation).
That should be enough to get you started.
I'm creating an online psychology experiment in JavaScript with the goal of studying people's ability to recognize images quickly (this RSVP, Rapid Serial Visual Presentation, paradigm is a common approach in psychophysics for isolating the feedforward, pre-attentive stages of visual processing). I aim to show a sequence of images for brief durations (e.g. 30 ms, 50 ms, 70 ms, etc.). For valid results, it's important to get the presentation times as precise as possible. Usually, this is done in a lab with specialized software or equipment, but this setup isn't an option for me. I recognize that some slop is an inevitable side effect of doing this experiment in a browser, but I'm hoping to (A) minimize it as much as possible, and (B) quantify, to the extent that I can, the amount of imprecision that actually occurred.
Right now, I'm preloading images, and am using setTimeout() to control stimulus duration. I'm assuming that experiment participants will use a variety of monitors (so refresh rates may differ) and browsers.
By eye, the timings of the images with my current approach seem to be wrong, and, in particular, highly variable. This is consistent with things I've read saying that variability for brief presentation times can be high.
Are there tools available to control timing more precisely in JavaScript? Additionally, are there tools that can measure/estimate the time an image was actually shown on screen? Finally, are there other options I should look at, e.g. compiling the sequence of images into a video, gif, etc.?
setTimeout does not specify when an action will take place. It only states that the action should take place once the timer has expired; there's no way to guarantee exactly when the callback will fire (https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/setTimeout).
When working on (psychological) experiments where you have no control over the participant's environment, I would argue (apart from not entertaining such an uncontrolled environment in such cases) for claiming the whole tab the user is using (effectively blocking any interaction while the image is shown) or, as you already mention yourself, using an animated GIF or a sequence of images.
Or you can monitor and store the time the image was actually shown to your participant (by checking when the image enters the viewport), and compare that timestamp with the participant's interaction. That would perhaps give the most valid results, as it eliminates the unpredictability of setTimeout.
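A rough sketch of that measuring idea, assuming a requestAnimationFrame loop in place of setTimeout (frame callbacks run just before paint, so the timestamps are an estimate of on-screen time, not a guarantee):

    // Show an already-preloaded image for roughly durationMs and report how
    // long it was actually visible according to frame timestamps.
    function showImage(imgElement, durationMs, onDone) {
      var shownAt = null;

      function onFrame(now) {
        if (shownAt === null) {
          imgElement.style.visibility = 'visible';
          shownAt = now; // high-resolution timestamp of the frame that shows it
        }
        if (now - shownAt >= durationMs) {
          imgElement.style.visibility = 'hidden';
          onDone({ requested: durationMs, actual: now - shownAt });
        } else {
          requestAnimationFrame(onFrame);
        }
      }
      requestAnimationFrame(onFrame);
    }

    // Example: showImage(document.getElementById('stimulus'), 50, console.log);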
Trying to get page-load time down.
I followed the third example outlined here to asynchronously load the TypeKit javascript.
To make it work, you have to add a .wf-loading #some-element { visibility: hidden; } rule for each element that uses the font, and after either 1) the font loads or 2) a set time (1 sec) passes, the font becomes visible.
The thing is, the CSS I'm working with has the font assigned to about 200 elements, so that's 200 .wf-loading { } rules (note: I did not write this CSS).
I feel this would slow the load time down more than just letting the font load normally, with the browser traversing the DOM for that much stuff. If that's the case, I'll just axe Typekit altogether and go with a regular font.
Are there any tools I can use to run performance tests on this kind of stuff? Or has anyone tested these things out?
You're not actually modifying more than a single DOM element (the root <html> element) with this approach. This means that modern browsers will rely on their super-fast CSS engines, so the number of elements involved will have no noticeable effect on page load.
As far as page load and flicker go, network latency is usually an order of magnitude worse than DOM manipulation. There will always be some flicker on the very first (unprimed) page load while the browser waits for the font to download. Just make sure your font is being cached for reuse, and try to keep its file size as small as possible.
I went down this path a few years ago with Cufon. In the end, I chose the simplest path with acceptable performance and stopped there. It's easy to get caught up in optimizing page loads, but there are probably more promising areas for improvement – features, bugs, refactoring, etc.
The problem is, as they mention in the blog, the rare occasions (but it definitely does happen - certainly more than once for me) when the Typekit CDN fails completely and users just see a blank page. This is when you'll wish you'd used async loading.
I'm working on my new portfolio and I want to use complex JavaScript (for animating, moving and applying effects to DOM elements), and I'm going to do as much optimization as possible to maximize performance. BUT I can't prepare for every case my site will face. So I started looking for a script with which I can check browser performance (in a few seconds at most) and, based on the performance test results, set the number of effects displayed and calculated on the page.
So is there any way to check browser performance and set the optimal number of effects applied on a page?
If possible, use CSS transforms/transitions instead of pure-js effects, as the former are usually hardware accelerated and thus orders of magnitude faster.
Even if you don't use CSS transforms, you can detect support for them using e.g. Modernizr, and if they are supported, you can assume that the browser is quite modern and has pretty good performance in general. Also take a look at window.requestAnimationFrame; it automatically throttles the frame rate to the display's refresh rate.
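A hedged sketch of that approach; the Modernizr check assumes the library is loaded, and the effect counts and the initEffects() helper are placeholders you would define yourself:

    var effectCount;
    if (window.Modernizr && Modernizr.csstransforms3d && window.requestAnimationFrame) {
      effectCount = 20; // modern browser: allow the full set of effects
    } else if (window.requestAnimationFrame) {
      effectCount = 10; // reasonably modern: be a little conservative
    } else {
      effectCount = 3;  // old browser: keep it minimal
    }
    initEffects(effectCount); // hypothetical function that builds the animations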
I'm developing a site which has a Flash background playing a small video loop, scaled to fill the whole background. Over the top I have a number of HTML elements which are animated using JavaScript. The problem I am having is that (predominantly in FF, but also in other browsers to a lesser degree) the Flash seems to be causing my JavaScript animations to run rather jerkily, and in some cases to skip the animation altogether and just jump to the end state.
Does anybody have any thoughts on how to make the 2 work together nicely?
Many thanks
Matt
You'll notice the same effect on BBC iPlayer - if you've played a few videos and then use the left and right scroller, the JavaScript animation is no longer smooth.
This is more noticeable in FF.
Chrome creates an entirely separate process for the Flash and is therefore smoother; Safari is quite lightweight and therefore smoother at times.
Bit of a bugger really - the only thing I can suggest is to ensure your SWF is optimised for CPU; if it contains lots of code, make sure you're doing good memory management.
I had the same trouble once and I targeted FP10 - this offloaded a lot of visual work from the CPU (and therefore the current browser process) and gave it to the GPU.
--
Aside from this, you're pretty much at the mercy of how powerful the client's machine is.
An addition to my answer above:
Thanks Glycerine. Do you think there would be any performance improvements if it was compressed into an older format? Or even just a SWF? There is no audio, so it's just an animated background really. – Matt Brailsford
I think a newer format would be better - if you can target FP10, then again you'll be able to utilise the user's GPU; if you're working in CS3, it's best to go for FP9.5.
Ensure your stage objects are cached as bitmaps if you're using large vectors:
http://www.adobe.com/devnet/flash/articles/bitmap_caching_print.html
This ensures any heavy animation (even animation we regard as light) will run more smoothly, because the objects are turned into pixel data as opposed to complicated vector information. It's a small fix, but it may work.
Try to target the AS3 engine as well, even if you're not using code. I keep saying it runs better than the AS2/AS1 engine (with arguments from people), but I'm sure you'll find your favourite camp.
If you have very large images that are scaled down, use a smaller form factor by resizing them in Photoshop. This will not only improve rendering speed, but also reduce the SWF file size.
Try those.