Lazy loading images with accessibility and printer support - javascript

I am looking for a proper way to implement lazy loading of images without harming printability and accessibility, and without introducing layout shift (content jump), preferably using native loading=lazy and a fallback for older browsers. Answers to the question "How lazy loading images using JavaScript works?" included various solutions, none of which completely satisfy all of these requirements.
An elegant solution should be based on valid and complete HTML markup, i.e. using the <img> src, srcset, sizes, width, height, and loading attributes instead of putting the data into data- attributes, like the popular JavaScript libraries lazysizes and vanilla-lazyload do. There should be no need to use <noscript> elements either.
Due to a bug in Chrome, the first browser to support native lazy loading, images that have not yet been loaded are missing from the printed page.
Both JavaScript libraries mentioned above require either invalid markup without any src attribute at all, or an empty or low-quality image placeholder (LQIP), while the actual src is put into data-src and the srcset into data-srcset, all of which only works with JavaScript. Is this considered an acceptable or even best practice in 2020, and does it harm the site's accessibility, cross-device compatibility, or search engine optimization?
Update:
I tried a workaround for the printing bug using only HTML and CSS @media print background images in this CodePen. Even if this worked as intended, it would require a CSS directive for each and every image, which is neither elegant nor generic. Unfortunately, there is no way to use media queries inside the <picture> element either.
There is another workaround by Houssein Djirdeh at lazy-load-with-print-ctl1l4wu1.now.sh using JavaScript to change loading=lazy to loading=eager when a "print" button is clicked. The same function could also be used onbeforeprint.
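Stripped down to its essence, that onbeforeprint idea is roughly the following sketch (my own illustration, not their exact code):
// Sketch of the print workaround: switch images to eager loading just before printing.
window.addEventListener('beforeprint', () => {
  document.querySelectorAll('img[loading="lazy"]').forEach(img => {
    img.loading = 'eager'; // ask the browser to fetch the image now
  });
});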
I made a CodePen using lazysizes.
I made another CodePen using vanilla-lazyload.
I thought about forking a JavaScript solution to make it work using src and srcset, but this has probably been tried before. The trade-off would be that once the lazy-loading script starts to act on the image elements, the browser might already have started downloading the source files.

Just show me your hideous code, I don't want to read!
If you don't want to read my ramblings, the final section "Demo" contains a fiddle you can investigate (commented reasonably well in the code) with instructions.
Or there is a link to the demo on a domain I control here that is easier to test against, if you want to use that.
There is also a version that nearly works in IE here; for some reason the "preparing for print" screen doesn't disappear before printing, but all other functionality works (surprisingly)!
Things to try:
Try it at different browser sizes to see the dynamic image requesting.
Try it on a slower connection and check the network tab to see the lazy loading in action and the dynamic change in how lazy loading works depending on connection speed.
Try pressing CTRL + P when the network connection is slow (without scrolling the page) to see how we load images not yet in the DOM before printing.
Try loading the page with a slow network connection and then using FILE > PRINT to see how we handle images that have not yet loaded in that scenario.
Version 0.1, proof of concept
So there is still a long way to go, but I thought I would share my solution so far.
It is complex (and flawed) but it is about 90% of what you asked for and potentially a better solution than current image lazy loading.
Also, as I am awful at writing clean JS when prototyping an idea, I can only apologise to anyone brave enough to try to understand my code at this stage!
It is only tested in Chrome - so, as you can imagine, it might not work in other browsers, especially as grabbing the content of a <noscript> tag is notoriously inconsistent. However, eventually I hope this will be a production-ready solution.
Finally, it was too much work to build an API at this stage, so for the image resizing I utilised https://placehold.it - so there are a few lines of redundant code to be removed there.
Key features / Benefits
No wasted image bytes
This solution calculates the actual size of the image to be requested. So instead of adding breakpoints in something like a <picture> element we actually say we want an image that is 427px wide (for example).
This obviously requires a server-side image resizing solution (which is beyond the scope of a stack overflow answer) but the benefits are massive.
First of all, if you change all of your breakpoints on the site it doesn't matter, so there is no updating picture elements everywhere.
Secondly, the difference between a 320px and a 400px wide image in terms of KB is over 40%, so picking a "similarly sized" image is not ideal (which is basically what the <picture> element does).
Thirdly, if people (like me) have massive 4K monitors and a decent connection speed then you can actually serve them a 4K image (although connection speed detection is an improvement I need to make in version 0.2).
Fourthly, what if an image is 50% of the width of its parent container at one screen size and 25% of the width of its parent container at another, but the container is 60% of the screen width at one screen size and 80% of the screen width at another?
Trying to get this right in a <picture> element can be frustrating at best. It is even worse if you then decide to change the layout, as you have to recalculate all of the width percentages etc.
Finally, this saves time when crafting pages and would work well with a CMS, as you don't need to teach someone how to set breakpoints on an image (as I have yet to see a CMS handle this better than just setting the breakpoints as if every image is full width on the screen).
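To illustrate the width calculation described above, here is a minimal sketch (not the demo's actual code; buildImageUrl and the ?w= query parameter are made-up names for a hypothetical server-side resizer):
// Work out how many device pixels this particular placement really needs.
function requiredWidth(img) {
  const cssWidth = img.parentElement.getBoundingClientRect().width;
  return Math.ceil(cssWidth * (window.devicePixelRatio || 1));
}

// Hypothetical URL builder for a server-side resizer (the ?w= parameter is an assumption).
function buildImageUrl(baseUrl, width) {
  return baseUrl + '?w=' + width;
}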
Minimal Markup (and semantically correct markup)
Although you wanted to avoid <noscript> and data attributes, I needed to use both.
However, the markup you write / generate is literally an <img> element, written as you normally would, wrapped in a <noscript> tag.
Once an image has fully loaded, all clutter is removed, so your DOM is left with just an <img> element.
If you ever want to replace the solution (if browser technology improves etc.) then a simple replace on the <noscript> tags would get you to standard HTML markup, ready for improving.
WebP
Of course this solution requests WebP images if supported (it's all about performance!). On the server side you would need to process these accordingly (for example, if an image is a PNG with transparency, you send that back even if a WebP image is requested).
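For reference, a common way to detect WebP support on the client is to try decoding a tiny WebP data URI - a sketch of the general technique, not necessarily exactly what the demo does:
// Resolves to true if the browser can decode a 1x1 lossy WebP image.
function supportsWebP() {
  return new Promise(resolve => {
    const img = new Image();
    img.onload = () => resolve(img.width > 0 && img.height > 0);
    img.onerror = () => resolve(false);
    img.src = 'data:image/webp;base64,UklGRiIAAABXRUJQVlA4IBYAAAAwAQCdASoBAAEADsD+JaQAA3AAAAAA';
  });
}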
Printing
Oh this was a fun one!
There is nothing we can do if we send a document to print and an image has not loaded yet. I tried all sorts of hacks (such as setting background images), but it just isn't possible (or I am not clever enough to work it out... more likely!).
So what I have done is think of real world scenarios and cover them as gracefully as possible.
If the user is on a fast connection we lazy load the images, but we don't wait for scroll to do this. This could mean a bit more load on our servers but I am acting like printing is highly important (second only to speed).
If the user is on a slow connection then we use traditional lazy loading.
If they press CTRL + P we intercept the print command and display a message while the images are loading. This concept is taken from the example the OP gave by Houssein Djirdeh, but using our lazy loading mechanism.
If a user prints using FILE > PRINT then we instead display a placeholder for images that have not yet loaded explaining that they need to scroll the page to display the image. (the placeholders are approximately the same size as the image will be).
This is the best compromise I could think of for now.
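For reference, the interception boils down to something like this sketch (showOverlay, hideOverlay and loadRemainingImages stand in for the demo's own helpers, and it assumes the browser honours preventDefault() for the print shortcut):
document.addEventListener('keydown', async (e) => {
  if ((e.ctrlKey || e.metaKey) && e.key === 'p') {
    e.preventDefault();                            // stop the print dialog for now
    showOverlay('Preparing page for printing…');   // hypothetical UI helper
    await loadRemainingImages();                   // hypothetical: force-load all lazy images
    hideOverlay();
    window.print();                                // re-issue print once images are in
  }
});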
No layout shifts (assuming content to be lazy loaded is off-screen on page load).
Not a 100% perfect solution for this, but as "above the fold" content shouldn't be lazy loaded and 95% of page visits start at the top of the page, it is a reasonable compromise.
We use a blank SVG (created at the correct proportions "on the fly") using a data URI as a placeholder for the image and then swap the src when we need to load an image. This avoids network requests and ensures that when the image loads there is no Layout Shift.
This also means the page is semantically correct at all times, no empty hrefs etc.
The layout shifts occur if a user has already scrolled the page and then reloads. This is because the <img> elements are created via JavaScript (unless JavaScript is disabled in which case the image displays from the <noscript> version of the image). So they don't exist in the DOM as it is parsed.
This is avoidable but requires compromises elsewhere so I have taken this as an acceptable hit for now.
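For reference, generating such a blank, proportionally-correct SVG placeholder "on the fly" can be as small as this sketch (not the demo's exact code):
// Returns a data URI for an empty SVG with the image's intrinsic proportions,
// so the browser reserves the right amount of space and nothing shifts later.
function svgPlaceholder(width, height) {
  const svg = '<svg xmlns="http://www.w3.org/2000/svg" width="' + width +
              '" height="' + height + '"></svg>';
  return 'data:image/svg+xml,' + encodeURIComponent(svg);
}

// usage: img.src = svgPlaceholder(1500, 500);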
Works without JavaScript and clean markup
The original markup is simply an image inside a <noscript> tag. No custom markup or data-attributes etc.
The markup I have gone with is:
<noscript class="lazy">
<img src="https://placehold.it/1500x500" alt="an image" width="1500" height="500"/>
</noscript>
It doesn't get much more standard and clean than that. It doesn't even need the class="lazy" if you don't use <noscript> tags elsewhere; it is purely to avoid collisions.
You could even omit the width and height attributes if you didn't care about Layout Shift but as Cumulative Layout Shift (CLS) is a Core Web Vital I wouldn't recommend it.
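A heavily simplified sketch of the swap follows (the real demo does far more, and as noted under "IE" below, reading <noscript> content is unreliable in older browsers); it reuses the svgPlaceholder sketch from above:
document.querySelectorAll('noscript.lazy').forEach(noscript => {
  // DOMParser documents never fetch resources, so parsing the markup inside
  // the <noscript> tag does not download the real image.
  const parsed = new DOMParser()
    .parseFromString(noscript.textContent, 'text/html')
    .querySelector('img');

  // Build a fresh <img> that starts life with a blank SVG placeholder.
  const img = document.createElement('img');
  img.alt = parsed.alt;
  img.setAttribute('width', parsed.getAttribute('width'));
  img.setAttribute('height', parsed.getAttribute('height'));
  img.src = svgPlaceholder(parsed.getAttribute('width'), parsed.getAttribute('height'));
  img.dataset.src = parsed.getAttribute('src');   // real URL, swapped in later

  noscript.replaceWith(img);                      // the DOM ends up with a plain <img>
});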
Accessibility
The images are just standard images and alt attributes are carried over.
I even added an additional check that if alt attributes are empty / missing a big red border is added to the image via a CSS class.
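That check is a one-liner in spirit; something along these lines (the class name is mine for illustration):
// Flag images whose alt attribute is missing or empty so they stand out during development.
document.querySelectorAll('img:not([alt]), img[alt=""]').forEach(img => {
  img.classList.add('missing-alt');   // e.g. .missing-alt { border: 5px solid red; }
});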
Issues / compromises
Layout Shift if page already scrolled
As mentioned previously if a page is already scrolled then there will be massive layout shifts similar to if a standard image was added to a page without width and height attributes.
Accessibility
Although the image solution itself is accessible the screen that appears when pressing CTRL + P is not. This is pure laziness on my part and easy to resolve once a more final solution exists.
The lack of Internet Explorer support (see below) however is a big accessibility issue.
IE
UPDATE
There is a version that nearly works in IE11 here. I am investigating if I can get this to work all the way back to IE9.
Also tested in Firefox, Edge and Safari (mobile), seems to work there.
ORIGINAL
Although this isn't tested in Firefox, Safari etc. it is easy enough to get to work there if there are issues.
However accessing the content of <noscript> tags is notoriously difficult (and impossible in some versions) in IE and other older browsers and as such this solution will probably never work in IE.
This is important when it comes to accessibility as a lot of screen reader users rely on IE as it works well with JAWS.
The solution I have in mind is to use User Agent sniffing on the server and serve different markup and JavaScript, but that is complex and very niche so I am not going to do that within this answer.
Checking Latency
I am using a rather crude way of checking latency (to try to guess whether someone is on a 3G / 4G connection): downloading a tiny image twice and measuring the load time.
Two unneeded network requests are not ideal when trying to go for maximum performance (not because of the 100 bytes I download, but because of the delay on high-latency connections before initialising things).
This needs a complete rethink but it will do for now while I work on other bits.
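Stripped to its core, the crude check is roughly this (the tiny-image URL and the threshold are placeholders, not the demo's real values):
// Time how long a ~100 byte image takes to arrive (run twice and average, as described).
function measureLatency(url = '/tiny.gif') {             // placeholder URL
  return new Promise(resolve => {
    const start = performance.now();
    const img = new Image();
    img.onload = img.onerror = () => resolve(performance.now() - start);
    img.src = url + '?nocache=' + Date.now();             // bypass the HTTP cache
  });
}

// e.g. treat anything much over ~300 ms as "slow" and fall back to scroll-based lazy loading.
Where it is available, the Network Information API (navigator.connection.effectiveType) could replace these extra requests.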
Demo
Couldn't use an inline fiddle due to character count limitation of 30,000 characters!
So here is the current JS Fiddle - https://jsfiddle.net/9d5qs6ba/.
Alternatively as mentioned previously the demo can be viewed and tested more easily on a domain I control at https://inhu.co/so/image-concept.php.
I know it isn't the "done thing" linking to your own domains but it is difficult to test printing on a jsfiddle etc.

The proper solution for printable lazy loading in 2022 is using the native loading attribute.
<img loading=lazy>
The recommendation to use a custom print button has been obsoleted as Chromium issue 875403 got fixed.
Prior recommendations included adding a custom print button (which did not fix the problem when using the native browser print functionality) or using JavaScript to load images onbeforeprint, the latter not being considered a good solution, as loading=lazy, being a "DOM-only" solution, must not rely on JavaScript.
Beware that, even after the bug fix, some of your users might still visit your site with a buggy browser version.

@Ingo Steinke Before one delves into answers for the concerns you have raised, one has to go back and think about why lazy loading came about and what problem it solved when it started out as a framework of thought. Keyword: framework of thought... it is not a solution, and I would go out on a limb to say it has never been a solution, but a framework of thought.
Why we wanted it:
Minimise unnecessary file fetching from the server - this is bandwidth-critical if one is running a large user base. It was the internet version of just-in-time, as in industrial production.
In legacy browser versions, and before async and defer were popularised in JS/HTML, interactivity with the browser window remained hampered until all content was loaded.
Broadband as we know it, in any real sense of penetration, has only been around for the last 6-7 years. We wanted lazy loading because we didn't want to run into no. 2 on low bandwidth. To be honest, there was and still is a growing concern and ideology of minifying and zipping JS and CSS files - all because the round trip to the server and back should be minimised so that the next item in the list can be fetched. Keep in mind that browsers tend to limit simultaneous download connections to around six at a time per window or active window. There are reasons why Google popularised the three-second rule; if the above were left to run unchecked, the three-second rule would fall on its head as if it did not have legs.
So along came thought frameworks.
Image as CSS background: This came about because it did not mess up the visual aspect of the page. Everything remained in its place and then suddenly became colourful. It was a time when web pages seemed to have an elastic fit, i.e. they were like a bag which, once filled with air, suddenly popped and transformed into a jumping castle. This was increasingly becoming a bad idea for front-end developers. So fixing the height and width of the container and then running images as backgrounds helped, and the HTML5 background alignment properties improved accordingly. There was even a variant, still in use, with multiple backgrounds: one being a loading spinner or a low-end blurred version of the image, on top of which the actual intended image was fetched. Since the lower-level background would be fetched and populated everywhere in a single download, it created a more pleasing visual, and the user knew what to expect. It worked in printing as well, even if the intended image did not download.
Then came the JS version of it, hijacking the DOM either through data-src, invalid image tags with the src removed, and whatnot, only triggering the change when the content is scrolled to. Obviously there would be lag, but that was countered either through the CSS approach implemented in JS, or by calculating scroll points and triggering the event a couple of pixels ahead. They all still work on the same premise.
There is one question that begs to be asked, and you have touched on it in your pretext: none of this controls or alters native browser functionality. The browser might as well go and fetch the item before your script has had anything to do with anything.
This is the main issue here. The BOM does not care, nor does it even want to care, about what your script is asking it to do; all it knows is that if there is a src property, it fetches the content. None of the solutions have changed that. If we could change that functionality, then the thought framework would become a solution.
I still believe browsers should not change that just for the sake of it, and thus this never gained traction in debates. What browsers have done is pre-fetching, known as the speculative or look-ahead pre-parser, and it is the single biggest improvement in browsers, one that deserves credit. Just as we type a URL in the address bar, on every change of the string the browser is pre-fetching the content even though I have not typed the whole URL. I once developed a programme where I grabbed everything received at the server from these look-ahead pre-parsers. It takes less than a second to get a response most of the time, and browsers begin to process it all, including images and JS. This countered the jerky, delayed, elastic-prone display discussed in no. 1 and no. 2. It did not reduce server hits, however - which is the reason why we are doing lazy loading anyway. But some JS workarounds gained traction, because with no src property the pre-parser did not fetch the image; it was only fetched when the user actually got to the page and the events were triggered. Some browsers toyed with the idea of lazy loading themselves, but let go of it as it did not promise universal consistency in the standard.
The universal standard is simple: if there is a src property, the browser will fetch the item, no ifs and buts. Imagine if that were not the case - hell would break loose on the poor front-end developer.
So deep down, what you are raising in this debate is a question about BOM functionality, as I have discussed above. There is no workaround for it - in your case, for both the screen and the print version of the display. How do you make sure images are loaded when the print command is sent? The answer is simple: for the BOM, print is after the fact, the fact being the screen display, and before the fact being everything that happens at the BOM/DOM propagation level. Again, you cannot change that.
So you have to make a trade-off. The trade-off comes in the form of another thought framework: rather than assuming everything is print-ready, make it print-ready on the user's command. A div pops up and shows the printed version of the document, and you print from there. The UI could be anything; it would only take a second or so, as the majority of the content will have been loaded anyway and the rest will take a short amount of time. CSS rules for print can be mighty handy in this respect. You can see it in action in many places on the internet.
Conclusion: as we stand today with BOM functionality, bundling the screen display and the print display with lazy loading is not what lazy loading was intended for, and thus it does not provide any better solution than mere hacks. So you have to create your UI based on your context, separating the two, to make it work properly.

Related

Why Has My Google PageSpeed Insights Score Lowered So Much?

Prod
For desktop, I have a site with a decent page speed score (currently, 96): https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.usstoragecenters.com%2Fstorage-units%2Fca%2Falhambra%2F2500-w-hellman-ave&tab=desktop
Stage
I'm trying to improve the score (mostly for mobile), but I've somehow made it worse (currently, 69 on desktop): https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fstage.usstoragecenters.com%2Fstorage-units%2Fca%2Falhambra%2F2500-w-hellman-ave%3Fplain%3Dtrue&tab=mobile
Problem
While converting the site from Angular (the first link) to plain JavaScript (second link), I've managed to lower the desktop Google PageSpeed Insights score from 96 to 69.
I've massively reduced the amount of JavaScript and other resources (2 MB on prod vs 500 KB on stage).
Analysis
Looking through the numbers, the thing that stands out to me is that prod has an FCP (First Contentful Paint) of 0.7 seconds, while stage has an FCP of 2.0 seconds. This seems weird to me since stage should be much faster, but is apparently much slower.
Looking at the mobile timeline of thumbnails (desktop is a bit hard to see), it appears as if stage renders the first "full content" much faster.
I highlighted the ones that visually look "complete" to me (stage is on top, prod is on bottom).
Screenshots
Here are some screenshots so you can see what I'm seeing (PageSpeed Insights differs fairly significantly each time it's run) - one for stage and one for production.
Summary of Changes
Here are the main things I did when trying to improve the score:
I converted the JavaScript from Angular to plain JavaScript, significantly reducing the JavaScript required to render the page.
I lazy loaded JavaScript (e.g., Google Maps JavaScript isn't loaded until it's needed).
I lazy loaded images (e.g., the slideshow initially only loads the first image).
I reduced the number of DOM elements (from 4,600 to 1,700).
I am using HTTP/2 server push to load the new plain JavaScript as fast as possible.
Those changes should have improved the score.
Question
Do you have any ideas of why, despite my best efforts, the PageSpeed score tanked?
I'd recommend you look into the difference in how 3rd-party scripts are included between your Prod and Staging environments.
Most of the time when I have a problem with PageSpeed, it's a 3rd-party script that causes the drop. YMMV, though.
Just a pointer to start: as I compared the stats between the two, I noticed that this particular Wistia script behaves quite differently - maybe not a problem with the script itself, but the way it's embedded is different or something.
On Prod
Wistia: Main-Thread Blocking Time: 3ms (section: Minimize third-party usage)
Wistia: Total CPU Time: 87 ms (section: Javascript execution time)
Wistia: Script Evaluation: 76 ms (section: Javascript execution time)
On Staging
Wistia: Main-Thread Blocking Time: 229 ms (section: Reduce the impact of third-party code)
Wistia: Total CPU Time: 425 ms
Wistia: Script Evaluation: 376 ms
The issue you have
You have done a lot of things correctly, but your score is suffering because of First Meaningful Paint and First Contentful Paint.
Looking at the load order etc., I have noticed that your main HTML file has actually increased in size by 33%, from 60 KB to 81.6 KB.
That is your first indicator of where things are going wrong as you must load all HTML before the browser can even begin to think about rendering.
The next issue is that Lighthouse (the engine behind PSI) is showing you that you don't have render-blocking content, but I don't think the method is perfect at showing what is blocking render.
Your site still needs the SVG logo and icomoon files to render everything above the fold.
On the main site these are loaded early; on the staging site they are deferred and start loading much later, delaying your First Contentful Paint etc.
There may be other things but those are a couple I found with a quick glance.
How to fix the couple of items I mentioned
HTML size - maybe externalise some of the JSON etc. you have inlined as there is a lot there, lazy load it in instead (suggestion only, haven't explored whether feasible for you)
SVG Logo - simple to fix, grab the actual text that makes up the logo and inline it instead of using an external resource.
icomoon - not quite as easy to fix but swap all your icons for inline SVGs.
Bonus - by changing your icons from fonts to SVG you help with accessibility for people who have their own style sheets that override fonts (as fonts for icons get over-ridden and make no sense).
Bonus 2 - One less request!
How to Identify Problems
If anyone comes across problems like this you need to do the following to work out what is going on.
Open Developer Tools and go to network tab first.
Set the options to 'Disable Cache - true' and 'Slow 3G' in the dropdown box.
Load each version of the site and compare the waterfall.
Normally you can spot changes in load order and start investigating them - the 'game' is to find items that appear above the fold and try and remove them, defer them or inline them as you have with some of your CSS.
Next thing is to learn to use the Coverage and Rendering tabs as they very quickly point you to problems.
Finally learn how to use the performance tab and understand the trace it produces.
You may already know how to use the above, but if not, learn them; they will let you find all of your problems quickly.
So I figured out the issue. PageSpeed Insights is drunk.
Well, it's unreliable anyway. I was able to dramatically improve the score by simply removing the server pushing of the JavaScript files (less than 20KB of them).
This is weird because the page actually seems to take longer to display. However, Google PageSpeed Insights thinks it's displaying sooner, and so it improves the score.
One time I tried, the mobile score went up to 99.
I tried again and got a more reasonable 82.
And on desktop, the score went up to 98.
The interesting thing about the mobile screenshot showing 99 is that you can see in the timeline thumbnails that the image for the slideshow at the top of the page hasn't loaded yet. So it seems like a case of Google PSI prematurely deciding that the page has "finished loading", even though it hasn't finished.
It's almost like if you delay certain things long enough, Google will ignore them. In other words, the slower the page is, the better the score they will give you. Which is of course backwards.
In any event, this might be one of those things where I'll go with the slightly less optimal approach in order to achieve a higher score. There may also be a middle ground I can explore (e.g., have the first JavaScript file inject link rel=preload tags in order to load the rest of the JavaScript files immediately rather than wait for the full chain of modules to resolve).
If somebody can come up with a more satisfactory explanation, I'll mark that as the answer. Otherwise, I may end up marking this one as the answer.
Middle Ground Approach
EDIT: Here's the middle ground approach I went with that seems to be working. First, I load a JavaScript file called preload.js that is included like this:
<script src="/preload.js" defer></script>
This is the content of the preload.js file:
// Optimization to preload all the JavaScript modules. We don't want to server push or preload them
// too soon, as that negatively impacts the Google PageSpeed Insights score (which is odd, but true).
// Instead, we start to load them once this file loads.
const paths = window.preloadJavaScriptPaths || [];
const body = document.querySelector('body');

paths.forEach(path => {
    const element = document.createElement('link');
    element.setAttribute('rel', 'preload');
    element.setAttribute('as', 'script');
    element.setAttribute('crossorigin', 'anonymous');
    element.setAttribute('href', path);
    body.appendChild(element);
});
The backend creates a variable on the window object called preloadJavaScriptPaths. It is just an array of strings (each string being a path to a JavaScript file, such as /app.js).
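For example (the file names here are purely illustrative, not the OP's real ones), the inline variable the backend emits before preload.js runs might look like:
// Rendered inline by the backend, before preload.js executes.
window.preloadJavaScriptPaths = ['/app.js', '/vendor.js', '/maps-loader.js'];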
The page still loads pretty fast and the PSI score is still good (80 mobile, 97 desktop).

How to serve responsively-sized assets in Rails 4, quickly, using a CDN, without knowing the size of a placement in advance?

I have the standard "responsive image serving" problem, but with some complex twists. I expect I'll need to build my own solution to the below, but it's a few months down the line so I thought I'd bring this by the community now for help with my approach and getting started. I also think the solution I'm looking for would have pretty wide appeal, so this could be valuable to the community as a whole.
The problem:
We'd like to provide users with images, embedded videos, etc (anything that takes a lot of time/bandwidth to load and takes less when lower res) but change the loaded dimensions depending on the size the element is actually allocated on the page. This is basic "responsive image serving" applied to a few other types of assets (though since we provide lower-bandwidth file versions to mobile devices, I think this also falls under "adaptive design"). But don't worry about other types of content for now, let's focus on images.
We need to determine the appropriate max-width for each specific asset placement, for each screen width breakpoint, without providing this info as configuration.
I'm creating a platform that will serve pages relying on HTML templates from many different parties. Images can be served from anywhere on the page, and pages can use any styling system they want, so we have no idea what the appropriate size for an image is just by looking at screen width. We need to actually evaluate the max width of the placement at each supported sizing breakpoint. Sure, this could be done manually in advance given a design template, but let's assume that's too much work for these 3rd parties.
For example, in Twitter Bootstrap 3 an image contained in a col-md-8 should be at most 720px width when browser width is < 768, but if it was in a col-sm-8 it should be smaller than 470px. And if we're using a different framework altogether these would clearly be different too. I need solution that can take into account everything the CSS is doing automatically, because I have no idea what the CSS will do.
We can't do any processing during the image request. We rely on a CDN (Cloudfront). They are not going to implement our custom code on each of their edge locations, and I don't want a visitor in New Delhi or Berlin to have to send yet another request halfway around the globe, for every sized asset, before they know what the final url is. So that rules out solutions like this controller-based solution and the PHP adaptive-images script.
We need this to be fast. There's a good amount of wiggle-room on the server side, since caching is so easy and flexible with Rails 3 & 4. But we probably can't use jQuery.width() on every element for performance reasons. After all, the entire reason we're serving responsive images is to decrease perceived page load time. But we do have access to jQuery in general, and we could probably load up Modernizr all the time if we needed to (currently only included for low IE with conditional HTML).
We don't trust User-Agent headers enough to base our browser width on them. I love the idea behind mobvious 1, 2 and its friend responsive-images, but there are SO many versions of browsers on SO many different devices out there. How complex would it be to build a truly reliable system to determine browser width on this, as opposed to directly calculating it using JS?
Clients without javascript (and thus crawlers) will need access to an image. Easiest solution here seems to be to include a <noscript>....</noscript> with the canonical, largest version of the image inside.
The solution
It seems like the only way to do this is to:
Have the server pass all the available sizes, then calculate the width of each element on the client side using jQuery in some performance-efficient way (maybe using $.css_width() or some sort of specialized script). So the server would create:
<span data-respv-img-id="picture_of_unicorns"></span>
<noscript data-respv-img-id="picture_of_unicorns" data-img-720-url="//cdn.example.com/assets/picture_of_unicorns_720x480" data-img-320-url="//cdn.example.com/assets/picture_of_unicorns_320x260" data-img-120-url="//cdn.example.com/assets/picture_of_unicorns_120x80">
<img src="//cdn.example.com/assets/picture_of_unicorns_720x480" alt="Magical unicorns">
</noscript>
And if we're on a small screen and only the 120 fits, the JS would turn this into:
<span data-respv-img-id="picture_of_unicorns">
<img src="//cdn.example.com/assets/picture_of_unicorns_120x80" alt="Magical unicorns">
</span>
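For approach 1, the client-side picking logic could be sketched roughly like this (plain DOM rather than jQuery; the available sizes and the alt handling are simplified placeholders):
document.querySelectorAll('span[data-respv-img-id]').forEach(span => {
  const id = span.dataset.respvImgId;
  const noscript = document.querySelector('noscript[data-respv-img-id="' + id + '"]');
  const slotWidth = span.parentElement.getBoundingClientRect().width;

  // Pick the smallest provided size that still covers the placement.
  const sizes = [120, 320, 720];
  const chosen = sizes.find(s => s >= slotWidth) || sizes[sizes.length - 1];

  const img = document.createElement('img');
  img.src = noscript.getAttribute('data-img-' + chosen + '-url');
  img.alt = 'Magical unicorns';   // in practice, copy the alt text over from the noscript markup
  span.appendChild(img);
});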
OR have the server do some sort of pre-processing, so it knows exactly what size image fits each placement on each browser width, and delivers:
<span data-respv-img-id="picture_of_unicorns"></span>
<noscript data-respv-img-id="picture_of_unicorns" data-img-1200-url="//cdn.example.com/assets/picture_of_unicorns_720x480" data-img-1024-url="//cdn.example.com/assets/picture_of_unicorns_320x260" data-img-768-url="//cdn.example.com/assets/picture_of_unicorns_120x80">
<img src="//cdn.example.com/assets/picture_of_unicorns_720x480" alt="Magical unicorns">
</noscript>
And we end up with the same thing as the other approach. But this time jQuery's job was much easier, as we passed all the sizing work off to the server. But this requires loading up a full browser stack on the server side to generate each request. That's ok with caching, but sure does bring a lot of complexity along.
Note that both of these solutions would allow for scroll-based image loading, which is another aspect I'll need to implement, but not something we need to discuss now.
Long story short: Which approach would you recommend? Can you think of a better way?

Optimizing online photo gallery for retina devices

I have an existing website (a photo blog) that loads the majority of the photos from Flickr. I'd like to enhance the experience for users with high resolution screens and load higher res versions of photos, but I'm not sure what would be the best way to go.
Since the images in question are not UI elements, there is no sensible way to solve this problem with CSS. That leaves either client side JavaScript, or a server side find-and-replace for specific image patterns (since they come from Flickr, it's easy to detect and easy enough to figure out the url for a double-sized image).
For client side, my concern is that even the regular-sized images are 500-800 KB in size, and there can be 10-30 images per gallery, causing lots of excess bandwidth use for retina users.
For server side, it's obviously tricky to determine if the request comes from a retina device or not. One idea I had (which I have yet to test out) was to run a JavaScript function that checks window.devicePixelRatio and sets a cookie accordingly, and then on each successive page request the server would know whether the device is high-res or not. That leaves the entry page with non-retina images, but at least all the following ones will have high-res images loaded right away.
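For what it's worth, the cookie part of that idea is only a couple of lines (the cookie name is arbitrary):
// Run once per page; subsequent requests let the server pick the image resolution.
if (window.devicePixelRatio > 1) {
  document.cookie = 'device_pixel_ratio=' + window.devicePixelRatio + '; path=/; max-age=31536000';
}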
Are there any pitfalls to this proposed solution? Are there better ways to handle it?
You can generate CSS on the server that has links to both regular-size and double-sized images in the background-image property. This CSS can easily be different for every page by including it in a <style> tag and referencing classes/ids that only exist on that page. This deals with the traffic issue, since the majority of modern browsers don't load background images for other DPIs.
Another solution (though a worse one) would be to not just set the cookie, but to load the images with JavaScript. This solves the initial first-page issue, but slows down page rendering a little.

Async loading of Typekit :: is it worth it, or better not to use it at all?

Trying to get page-load time down.
I followed the third example outlined here to asynchronously load the Typekit JavaScript.
To make it work you have to add a .wf-loading #some-element {visibility: hidden;} rule for each element that uses the font, and after either 1) the font loads or 2) a set time (1 sec) passes, the font becomes visible.
The thing is, the CSS I'm working with has the font assigned to about 200 elements, so that's 200 .wf-loading { } rules (note: I did not write this CSS).
I feel this would slow the load time down more than just letting it load regularly, with the DOM traversal of that much stuff. If this is the case, I will just axe Typekit altogether and go with a regular font.
Are there any tools I can use to run performance tests on this kind of stuff? Or has anyone tested these things out?
You're not actually modifying more than a single DOM element (the root <html> element) with this approach. This means that our modern browsers will rely on their super fast CSS engines, so the number of elements involved will have no noticeable effect on page load.
As far as page load and flicker go, network latency is usually an order of magnitude worse than DOM manipulation. There will always be some flicker on the very first (unprimed) page load while the browser waits for the font to download. Just make sure your font is being cached for reuse, and try to keep its file size as small as possible.
I went down this path a few years ago with Cufon. In the end, I chose the simplest path with acceptable performance and stopped there. It's easy to get caught up in optimizing page loads, but there's probably more promising areas for improvement – features, bugs, refactoring, etc.
The problem is, as they mention in the blog, the rare occasions (but it definitely does happen - certainly more than once for me) when the Typekit CDN fails completely and users just see a blank page. This is when you'll wish you'd used async loading.

Can I resize images using JavaScript (not scale, real resize)

I need to dynamically load and put on screen a huge number of images - it can be something like 1000-3000. Usually these pictures are of different sizes, and I'm getting their URLs from the user. So some of these pictures can be 1024x800 or 10x40 pixels.
I wrote a good JS script showing them nicely on one page (a la Google Image Search style), but they are still very heavy on RAM (a hundred 500 KB images on one page is not good), so I thought about the option of really resizing the images - like making an image that's 1000x800 pixels something like 100x80, and then forgetting (freeing the RAM of) the original one.
Can this be done using JavaScript (without server side processing)?
I would suggest a different approach: Use pagination.
Display, say, 15 images. Then the user clicks on 'next page' and the next page is shown.
Or, even better, you can script it so that when the user reaches the end of the page, the next page is automatically loaded.
If that is not what you want to do - maybe you want to make a collage of images - then you could check out CSS3 transforms. I think they should be fast.
What you want to do is take some pressure off the client so that it can handle all the images. Letting it resize all the images (JavaScript is client-side) will do exactly the opposite, because actually resizing an image is usually way more expensive than just displaying it (and not possible with browser JS anyway).
Usually there is always a better solution than displaying that many items at once. One would be dynamic loading e.g. when a user scrolls down the page (like the new Facebook profiles do) or using pagination. I can't imagine that all 1k - 3k images will be visible all at once.
There is no native JS way of doing this. You may be able to hack something using Flash but you really should resize the images on the server because:
You will save on bandwidth transferring those large 500K images to the client.
The client will be able to cache those images.
You'll get a faster loading page.
You'll be able to fit a lot more thumbnail images in memory and therefore will require less pagination.
more...
I'm (pretty) sure it can be done in browsers that support canvas. If this is a path you would like to take you should start here.
I see two possible problems with the canvas approach:
It will probably take a really long time (relatively speaking) to resize many images. Because of this, you're probably going to have to look into utilizing web workers.
Will the browser actually free up any memory if you remove the image from the DOM and/or delete/null all references to those images? I don't know.
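Despite those caveats, the core of a canvas downscale is roughly this sketch (it still requires the full-size image to be decoded first, which is exactly the memory problem in question):
// Draw the large image onto a small canvas and keep only the resulting thumbnail.
function makeThumbnail(img, targetWidth) {
  const scale = targetWidth / img.naturalWidth;
  const canvas = document.createElement('canvas');
  canvas.width = targetWidth;
  canvas.height = Math.round(img.naturalHeight * scale);
  canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
  return canvas.toDataURL('image/jpeg', 0.7);   // compact data URI for the thumbnail
}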
(Some pretty pictures of a canvas-resized image were included here.)
