I'm creating an online psychology experiment in JavaScript with the goal of studying people's ability to recognize images quickly (this RSVP, Rapid Serial Visual Presentation, paradigm is a common approach in psychophysics for isolating the feedforward, pre-attentive stages of visual processing). I aim to show a sequence of images for brief durations (e.g. 30 ms, 50 ms, 70 ms, etc.). For valid results, it's important to get the presentation times as precise as possible. Usually, this is done in a lab with specialized software or equipment, but this setup isn't an option for me. I recognize that some slop is an inevitable side effect of doing this experiment in a browser, but I'm hoping to (A) minimize it as much as possible, and (B) quantify, to the extent that I can, the amount of imprecision that actually occurred.
Right now, I'm preloading images, and am using setTimeout() to control stimulus duration. I'm assuming that experiment participants will use a variety of monitors (so refresh rates may differ) and browsers.
By eye, the timings of the images with my current approach seem to be wrong, and, in particular, highly variable. This is consistent with things I've read saying that variability for brief presentation times can be high.
Are there tools available to control timing more precisely in JavaScript? Additionally, are there tools that can measure/estimate the time an image was actually shown on screen? Finally, are there other options I should look at, e.g. compiling the sequence of images into a video, gif, etc.?
setTimeout() does not specify when an action will take place. It only guarantees that the callback will not run before the timer has expired; there's no way to guarantee exactly when it will fire (https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/setTimeout).
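You can at least quantify part of that slop by measuring how late setTimeout actually fires with performance.now(). A minimal sketch (the 50 ms delay is just an illustrative value):

// Sketch: measure how late setTimeout actually fires relative to the
// requested delay. The 50 ms value is only an example.
const requestedDelay = 50; // ms
const scheduledAt = performance.now();

setTimeout(function () {
  const actual = performance.now() - scheduledAt;
  console.log('Requested ' + requestedDelay + ' ms, got ' + actual.toFixed(2) +
              ' ms (drift ' + (actual - requestedDelay).toFixed(2) + ' ms)');
}, requestedDelay);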
When working on (psychological) experiments where you have no control over the participant's environment, I would argue (apart from simply not entertaining such an uncontrolled environment in such cases) for claiming the whole tab the user is using (effectively blocking any interaction while the image is shown) or, as you already mention yourself, using an animated GIF or a sequence of images.
Or you can monitor and store the time the image was actually shown to your participant (by checking when the image enters the viewport), and compare that timestamp with the participant's interaction. That would perhaps give the most valid results, as it eliminates the unpredictability of setTimeout.
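A rough sketch of that idea (assuming the stimulus is an already-preloaded <img>; the element id 'stimulus' and the target duration are placeholders): drive the presentation with requestAnimationFrame and record frame timestamps, so the image is hidden on a frame boundary and you can report approximately how long it was visible.

// Sketch: show a preloaded image for roughly targetMs, hiding it on a
// frame boundary and recording timestamps so the real duration can be
// reported. 'stimulus' is a placeholder id for an already-preloaded <img>.
function presentStimulus(targetMs) {
  return new Promise(function (resolve) {
    const img = document.getElementById('stimulus');
    let shownAt = null;

    img.style.visibility = 'visible';

    function check(now) {
      if (shownAt === null) {
        shownAt = now; // first frame after the image was made visible
      }
      if (now - shownAt >= targetMs) {
        img.style.visibility = 'hidden';
        resolve({ requestedMs: targetMs, actualMs: now - shownAt });
      } else {
        requestAnimationFrame(check);
      }
    }
    requestAnimationFrame(check);
  });
}

// e.g. presentStimulus(50).then(function (r) { console.log(r); });

Keep in mind that requestAnimationFrame timestamps reflect when a frame was prepared, not when the pixels actually changed on the monitor, so this quantifies the JavaScript-side variability rather than the true on-screen duration.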
About 2 months ago I got back to work on my fully custom ecommerce site, optimizing the front end for PageSpeed, since Googlebot now measures metrics such as CLS and LCP, which are critical and play a part in determining whether a particular page gets indexed by the Google crawler.
I did all the optimizations I could manage, such as:
extraction of critical CSS, inlined
all non-critical CSS merged and inlined as well
mod_pagespeed
images and JS served from a CDN
nginx optimization
moved non-core JS down near the closing body tag where possible
deferred non-critical JS
preloading, prefetching, and many other things I don't remember now, done during the long nights spent studying the template
So now I've reached a great result compared to 2 months ago.
The only thing I can't explain is that I still have a high Time To Interactive, CLS and LCP on mobile, while the desktop version is just fine.
I'm puzzling over it, trying to find the key to the solution.
This is an example of a product page: https://developers.google.com/speed/pagespeed/insights/?hl=it&url=https%3A%2F%2Fshop.biollamotors.it%2Fcatalogo%2FSS-Y6SO4-HET-TERMINALI-AKRAPOVIC-YAMAHA-FZ-6-S2-FZ-6-FAZER-S2-2004-2009-TITANIO--2405620
and here is the homepage, which has acceptable values compared to the product page, but the time to interactive is still much higher than on desktop: https://developers.google.com/speed/pagespeed/insights/?hl=it&url=https%3A%2F%2Fshop.biollamotors.it%2F
Thanks in advance to all the expert users who can help me.
Why are my mobile scores lower than desktop?
The mobile test uses network throttling and CPU throttling to simulate a mid-range device on 4G, so you will always get lower scores for mobile.
You may find a related answer I gave useful; it covers the difference between how the page loads as you see it and how Lighthouse applies throttling (along with how to make Lighthouse use applied throttling so you can see the slowdown in real time). However, I have included the key information in this answer and explained how it applies to your circumstances.
Simulated Network Throttling
When you run an audit, it applies 150 ms of latency to each request, limits download speed to 1.6 megabits per second (about 200 kilobytes per second), and limits upload to 750 kilobits per second (about 94 kilobytes per second).
This is very likely to affect your Largest Contentful Paint (LCP) as resources take a lot longer to download.
This throttling is done via an algorithm rather than applied (it is simulated) so you don't see the slower load times.
CPU throttling
Lighthouse applies a 4x slowdown to your CPU to simulate a mid-tier mobile phone performance.
If your JavaScript payload is heavy this could block the main thread and delay rendering. Or if you dynamically insert elements using JavaScript it can delay LCP for the same reason.
This affects your Time To Interactive most out of the items you listed.
This throttling is also done via an algorithm rather than applied (it is simulated) so you don't see the slower processing times.
What do you need to do to improve your scores?
Focus on your JavaScript.
You have 5 blocking scripts for a start; just add defer to them as you have done for the others (or, better yet, async if you know how to handle async JS loading).
Secondly, the payload is over 400 kB of JS (uncompressed); if you notice, your scripts take 2.3 seconds to evaluate.
Strip out anything you don't need, then run a trace in the "Performance" tab in Developer Tools and learn how to find long-running tasks and performance bottlenecks.
Look at reducing the number of network requests: because of the higher latency of a 4G network, a large number of requests can add several seconds to your load time.
Combine as much CSS and JS as you can. Also, you only need to inline your critical CSS, not the entire page's CSS: find the rules used "above the fold" and inline only those. At the moment you seem to be sending the whole site's CSS inline, which adds a lot of page weight.
Finally, your high Cumulative Layout Shift (CLS) is (in part) because you are using JS to hide items on page load (for example, the modal with the ID comp-modal appears to be hidden with JS), but they have already rendered by the time that JS runs. This is easily fixed by hiding them in your inline CSS rather than with JavaScript.
Other than that, you just need to follow the guidance the Lighthouse report gives you. You may not have paid much attention to the "Diagnostics" section yet; start there with anything that has a red triangle or orange square next to it, as each item provides useful information on things that may need optimisation.
That should be enough to get you started.
I have made some pretty resource-intensive items on my new site's design, and I'm a bit concerned. While testing on different computers, I notice some simply cannot handle the computer-generated art/animations or 1080p video. This could be a problem.
So I'm already planning to test their Internet speed (or at least confirm that it's sufficient) using a "control file" and timing the response, but how can I (or what would be the best way to) do the same kind of check for hardware?
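For the speed check, I'm imagining something roughly like this sketch (the control-file URL and size are placeholders):

// Sketch: estimate bandwidth by timing the download of a known-size file.
// '/control-file.bin' and its size are placeholders for a real asset.
const CONTROL_FILE_URL = '/control-file.bin';
const CONTROL_FILE_BYTES = 500 * 1024; // e.g. a 500 KB file

async function estimateBandwidthKbps() {
  const start = performance.now();
  // cache: 'no-store' so we measure the network, not the browser cache
  const response = await fetch(CONTROL_FILE_URL, { cache: 'no-store' });
  await response.arrayBuffer(); // wait for the whole body to arrive
  const seconds = (performance.now() - start) / 1000;
  return (CONTROL_FILE_BYTES * 8) / 1000 / seconds; // kilobits per second
}

// estimateBandwidthKbps().then(kbps => console.log('~' + Math.round(kbps) + ' kbps'));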
So far I have:
1. Use a short 1080p video and test the time to download. Test the start-to-finish time and see if there's lag.
2. Use a loading page with 2 gateways: one for regular, one for enhanced. Above the "door" to enhanced, have some heavy JavaScript animation; if it lags, they should click 'regular'.
*The loading page will likely be a cookie-dependent overlay; I don't want to send them to the home page from search.
I'm not wild about either of those ideas, and I already strip the resource-intensive items for sizes under 700 or so. Kind of a "why should some suffer for others lacking" deal. Eventually all computers can handle it, but I can't be freezing their devices just to put on a good show.
Update
3. Have an "awesome" page that records the destination URL, does some fast JS animation (totaling under 3 seconds), and says "testing device capabilities", then "proceed to site". This would automatically determine the best version and give them something cool to look at while it's testing.
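For the hardware side, I'm picturing something like this sketch: sample the frame rate over a couple of seconds while an animation runs and pick a version. The 2-second window and the 45 fps cutoff are just guesses I would tune.

// Sketch: count requestAnimationFrame callbacks for ~2 seconds, compute
// the average fps, and choose "enhanced" or "regular". The window length
// and threshold are arbitrary placeholders.
function chooseVersion(callback) {
  const sampleMs = 2000;
  const start = performance.now();
  let frames = 0;

  function tick(now) {
    frames++;
    if (now - start < sampleMs) {
      requestAnimationFrame(tick);
    } else {
      const fps = frames / ((now - start) / 1000);
      callback(fps >= 45 ? 'enhanced' : 'regular', fps);
    }
  }
  requestAnimationFrame(tick);
}

// chooseVersion(function (version, fps) { console.log(version, fps.toFixed(1)); });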
How can I control the rendering loop frame rate in KineticJS? The docs for Kinetic.Animation show a frame rate being passed to the render callback, and Kinetic.Tween seems to have no frame rate logic, but I don't see any way to force, say, 30fps when 60fps is possible.
Loads of context for the curious follows, but the question is that simple. If anyone reads on, other advice is welcome. If you already know the answer, don't waste your time reading on!
I'm developing a music app that combines some DOM-based GUI controls (current iteration using jQuery Mobile) and Canvas-based GUI controls (using KineticJS). The latter involve some animation. Because the animated elements are triggered by music playback, I'm using Kinetic.Tween to avoid the complexity of remembering how long a given note has been playing (which Kinetic.Animation would require doing).
This approach works great at 60fps in Chrome (on a fast machine) but is just slow enough on iOS 6.1 Safari (iPad 2) that manipulating controls while animations are happening gets a little janky. I'm not using WebGL (unless KineticJS or Chrome does this by default for canvas?), and that's not an option when I package for native UIWebView.
As I'm getting beyond prototype into wanting to make more committed tech decisions, I see the following options, in order of perceived goodness:
Figure out how to cap the frame rate. Because my animations heavily use alpha fades but do not involve motion, I believe I could get away with 20-30fps and look fine. Could also scale this up on faster devices.
Don't respond immediately to touch inputs, but add them to a queue which I poll at a constant interval and only use the freshest for things like touchmove. This has no impact on my non-interactive animated elements, but tackles the problem from the other direction, trying to reduce the load of user interaction. This would require making Kinetic controls static and manually tracking touch coordinates (not terrible effort if it actually helped).
Rewrite DOM-based GUI to canvas-based (KineticJS); rewrite WebAudio-based engine to HTML5 audio; leverage CocoonJS or Ejecta for GPU-acceleration. This means having to hand-code stuff like file choosers and nav menus and such (bad). Losing WebAudio is pretty serious as it eliminates features like DSP effects and very fine-grained, low-latency timing (which is working just fine on an iPad 2).
Rewrite the app to separate DOM based GUI and WebAudio from Canvas-based elements, leverage CocoonJS. I'm not sure if/how well this works out, but the fact that CocoonJS passes JavaScript code as strings between the 2 components makes me very skittish about how solid this idea is. It's probably doable, but best case I'm very tied to CocoonJS moving forwards. I don't like architecting this way, but maybe it's not as bad as it sounds?
Make animations less juicy. This is least good not because of its design impact but because, as it is, I'm only animating ~20 simple shapes at any time in my central view component, however they include transparency and span an area ~1000x300. Other components like sliders are similarly bare-bones. In other words, it's not very juicy right now.
Overcome severe allergy to Objective-C; forget about the browser, Android, and that other mobile OS. Have a fast app that performs natively and has shiny Apple-approved widgets. My biggest problem with this approach is not wanting to be stuck in Objective-C reality for years, skillset-wise. I just don't like it.
Buy an iPad 3 or later. Since I already am pretending Android doesn't exist (I don't have any devices to test), why not pretend no one still has iPad 2? I think this is passing the buck -- if I can get acceptable performance on iPad 2, I will feel confident about the app's performance as I add more features.
I may be overlooking options or otherwise naive about how to tackle this. Some would say what I'm trying to build is just silly. But it's working pretty well just not ready for prime time on the iPad 2.
Yes, you can control the Kinetic.Animation framerate
The Kinetic.Animation sends in a frame object which has a frame.time property.
That .time is a running timer that you can use to throttle your animation speed.
Here's an example that throttles the Kinetic.Animation: http://jsfiddle.net/m1erickson/Hn3cC/
var lastTime;
var frameDelay = 1000; // minimum ms between updates (1000 = at most one update per second)

var loop = new Kinetic.Animation(function(frame) {
    var time = frame.time; // running timer supplied by Kinetic
    if (!lastTime) { lastTime = time; }
    var elapsed = time - lastTime;
    if (elapsed >= frameDelay) {
        // frameDelay has expired, so animate stuff now
        // set lastTime for the next loop
        lastTime = time;
    }
}, layer);

loop.start();
Working from @markE's suggestions, I tried a few things and found a solution. It's ultimately not rocket science, but here's what I figured out:
First, I tried the hack of doubling Tween durations and targets and using a timer to stop them at 50%. This sort of worked, but it was hard to make it look good and was pretty error-prone, requiring bogus targets like negative opacity or height.
Second, having read the source of Tween and looked at the docs for Animation again, I decided I could locally use Animation instances instead of Tween instances and let the closure scope hang onto the relevant note properties. I eventually got this working smoothly, and then had a big "duh!" realization: throttling the frame rate of several independently animating things does not in any way throttle the overall frame rate.
Lastly, I decided to give my component a render() method that calls itself in a loop with requestAnimationFrame, exits immediately if called before my clamp time, and otherwise updates all the objects in the Kinetic canvas and calls layer.drawScene(). Because there is now only one animation loop, this drops the frame rate to whatever I need, and the app is fast on the iPad 2 (and looks exactly the same to my eyes).
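Roughly, that loop looks like the sketch below. updateAllObjects() and layer stand in for my component's own update logic and its Kinetic layer, and the 30 fps clamp is just an example value.

// Sketch of a single requestAnimationFrame loop clamped to ~30 fps.
// updateAllObjects() and layer are placeholders for the component's own
// update logic and its Kinetic.Layer.
var clampMs = 1000 / 30;
var lastDraw = 0;

function render(now) {
    requestAnimationFrame(render);
    if (now - lastDraw < clampMs) {
        return; // called again before the clamp time: skip this frame
    }
    lastDraw = now;
    updateAllObjects();  // set positions/opacity on the Kinetic nodes
    layer.drawScene();   // one draw call for everything
}

requestAnimationFrame(render);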
So Kinetic is still helping for its higher level canvas API, and so far my other control widgets are still easy code using Kinetic to handle user input and dragging, now performing much better as the big beast component is not eating up the CPU.
The short answer to my original question is that no, you can't lock the overall frame rate for very complex animations, but as Mark said, you can for anything that fits in a single Animation instance.
Note that I could have still used Animation without giving it a layer or explicitly calling any draw() methods, but since I'd still have to write all the logic to determine individual element's current visual state, there was no gain to doing this. What would be very useful would be if Tween could accept a parameter to not automatically render. This would simplify code like mine, as I could shorthand the animation on individual objects but still choose when to actually do the heavy lifting of rendering everything. Seeing how much this whole exercise gained in performance on the iPad 2, might be worth adding this option to the framework.
I know that it's possible to take a series of images and use JavaScript to set each one's opacity from 1 to 0 very quickly, in rapid succession. I have actually done it before very successfully, although I only used 41 images at approx. 720p.
My question, though, is whether it would be practical to make an entire video (4-10 minutes long) using only HTML, CSS, and JavaScript. Obviously that would be too many images to keep cached, so you'd have to evict particular images from the cache every so often, and it would also require a pretty good internet connection, but do you think it would be worthwhile to attempt?
Are there any obvious pros and cons you can think of for trying this?
(To be clear, I'm not asking for the code to achieve it, as I have already developed most of it; just what you think of its practicality compared to YouTube or Vimeo, etc.)
Plainly put, I don't think it's practical.
Here's a few reasons why:
Number of images. Videos tend to be around 30 frames per second. For a 4 min video, that's roughly 7,200 images.
Download time. Downloading 7,200 images at 720 pixels high would require a significant amount of bandwidth, not only for the user, but also for the server.
DOM load. 7,200 images would require just as many DOM elements positioned correctly behind each other.
Rendering. Each time we do an animation (fadeout), the browser has to calculate what element is visible and what that means to the user (pixel color, etc.)
Of course, we can optimize this:
Download the images that are needed at the specified time. If the connection can download 30 images/second, then we load what we need and then initiate the play sequence.
Remove elements when they're done. As the sequence goes on, we can assume those images are no longer needed. When those images are done, we can destroy their elements and thus free up resources.
Put images on multiple sub-domains/locations. Browsers limit how many assets they download from a single domain at once (modern browsers typically allow around 6 simultaneous downloads per domain). If we split those assets across multiple subdomains, we can increase the number of simultaneous downloads and improve response time.
Create large sprites. Have the sequence on one or several large sprites, and use Javascript to correctly position the element. This allows us to have a single/few downloads and removes the overhead from multiple downloads.
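As one variation of the sprite idea, here's a minimal sketch that blits frames out of a horizontal sprite strip onto a canvas instead of shifting a background position. The sprite URL, frame dimensions, frame count and fps are illustrative values, and 'player' is a placeholder canvas id.

// Sketch: play frames from a horizontal sprite strip on a canvas.
// All names and numbers below are placeholders for a real setup.
const canvas = document.getElementById('player'); // assumed <canvas>
const ctx = canvas.getContext('2d');
const sprite = new Image();
sprite.src = 'frames-strip.png'; // placeholder sprite sheet

const FRAME_W = 1280, FRAME_H = 720, FRAME_COUNT = 30, FPS = 30;
let frame = 0;

sprite.onload = function () {
  setInterval(function () {
    ctx.drawImage(
      sprite,
      frame * FRAME_W, 0, FRAME_W, FRAME_H, // source rect in the strip
      0, 0, FRAME_W, FRAME_H                // destination rect on the canvas
    );
    frame = (frame + 1) % FRAME_COUNT;
  }, 1000 / FPS);
};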
Videos are encoded so that the browser doesn't have to do all those pixel calculations and animations. Each frame isn't a full image, but rather a delta between the current and next frame. (I'm overly simplifying this.)
That said, this method is used all the time for pieces that need interactivity or that want to get past iOS autoplay restrictions. Again, it's not practical for long sequences, but it's definitely doable for shorter ones.
My website has a jQuery script (from the Shadow animation jQuery plugin) which constantly changes the colour of the box shadow of various <div>s on the home page.
The animation is not essential, but it does take up a lot of CPU time on slower machines.
Is it possible to find out if the script will run 'too slowly'? I can then disable it before it impacts performance.
Is this even a good idea? If not, is there an easy way to break up the jQuery animate?
This may indirectly solve your problem. Pick a few algorithms and performance tests from this site http://dromaeo.com/ that seem similar to your jQuery plugin. Don't run comprehensive tests as they do on the site. Instead, pick fairly small and fast algorithms, and run them for an unnoticeable period of time.
Use a tiny predefined time span to limit how long these tests are allowed to run. Say that span is 200 ms: if a fast machine with browser A can complete 100 iterations, while some random user's machine only completes 5, then you may want to consider disabling the animation on that user's machine. Tweak the numbers until you find the optimal thresholds.
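A rough sketch of that kind of check (the 200 ms budget and the iteration threshold are the same arbitrary numbers as above, and representativeWork() is a stand-in for something similar in cost to the shadow animation):

// Sketch: run a small piece of representative work for ~200 ms, count the
// iterations, and disable the animation below some threshold.
function benchmark(budgetMs) {
  const start = performance.now();
  let iterations = 0;
  while (performance.now() - start < budgetMs) {
    representativeWork();
    iterations++;
  }
  return iterations;
}

function representativeWork() {
  // placeholder: roughly comparable in cost to one animation step
  let x = 0;
  for (let i = 0; i < 5000; i++) x += Math.sin(i) * Math.cos(i);
  return x;
}

const iterations = benchmark(200);
const ENABLE_THRESHOLD = 20; // arbitrary; tune against real machines
if (iterations < ENABLE_THRESHOLD) {
  // skip initializing the shadow animation on slow machines
}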
As a bonus, send all test results back to your server so you have a better idea of where your users lie in the speed spectrum. If a big majority of users are using slower computers and older browsers, then it just may make sense to remove the thing altogether.
You could probably do it by timing a few times round a loop which did some intensive processing on page load, but that's going to slow the page and add even further to CPU load, so it doesn't seem like a great solution.
A compromise I've used in the past, though, was to make the decision based on browser version, for example, Internet Explorer 6 users get simpler content whereas newer browsers with better JavaScript performance get the animation. That seemed to work pretty well at a practical level. In practice, browser choice is a big factor in JavaScript performance and you might get a 90% fit with what you want very simply just by taking that into account.
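As a crude sketch of that kind of gate (the user-agent check below is deliberately simplistic and only a placeholder for whatever cutoff you choose):

// Sketch: skip the animation on old IE, which is the slow case singled
// out above. The regex is a simplistic placeholder, not robust detection.
var oldIE = /MSIE [1-8]\./.test(navigator.userAgent);

if (!oldIE) {
  // start the shadow animation only on browsers assumed to be fast enough
}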
You could do something like $(window).width() to get the browser width. Using this you could make the assumption that anything < 1024px wide is likely to be either a netbook, smartphone or old computer.
This wouldn't be nearly as accurate as timing a loop, but it's much more efficient.
Obviously this rule is a generalisation, and there will be slow computers with screens wider than 1024px. But in general, a 1024px+ computer would typically be able to handle a fair bit of JavaScript (until the owner piles on loads of software, virus scanners and browser toolbars!).
Hope this is useful!