I have a page with a list of many <img> tags, so it takes a long time to load all the images. Before each image loads, I see the broken image icon. I want to show a placeholder instead of the broken icon while the images are loading. I tested this answer, but it only worked when an error happened. Is there any way to do that with JavaScript or jQuery?
I found a good solution on GitHub. Just use the CSS code below:
img[src=""],
img:not([src]) {
visibility: hidden;
}
Link: https://github.com/wp-media/rocket-lazy-load/issues/60
You can load a placeholder image, but then you must load that image (when you're already loading another image). If you load something like a spinner via a GET request, that should be ok since you can set cache headers from the server so the browser does not actually make any additional requests for that loading image. A way that Pinterest gets around this is by loading a solid color and the title of each of their posts in the post boxes while the images are loading, but now we're getting into a design discussion. There are multiple ways to skin a cat.
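For the cache-header part, here is a minimal sketch assuming a Node/Express server (the route and file paths are made up for illustration):
// Serve the spinner with long-lived cache headers so the browser fetches
// it once and reuses it for every placeholder on the page.
const express = require('express');
const app = express();
app.get('/img/spinner.gif', (req, res) => {
  res.set('Cache-Control', 'public, max-age=31536000, immutable');
  res.sendFile(__dirname + '/assets/spinner.gif');
});
app.listen(3000);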
Regarding loading several images, you have to understand a few considerations:
The time it takes to fetch and download an image.
The time it takes to decode this image.
The maximum number of concurrent sockets you may have open on a page.
If you don't have a ton of images that need to be loaded up front, consideration 3 is typically not a problem, since you can optimistically load images below the fold. But if you have 100s of images on the page that need to be loaded quickly for a good user experience, then you may need a better solution. Why? Because you're incurring 100s of additional round trips to your server just to load images that each make up only a small portion of the total (the total being 100s of images). Not only that, but you're choked by the browser's limit of X concurrent requests while fetching them.
If you have many small images, you may want to go with an approach similar to what Dropbox describes here. The basic gist is that you make one giant request for multiple thumbnails and then get a chunked encoding response back. That means that each packet on the response will contain the payload of each thumbnail. Usually this means that you're getting back the base64-encoded version of the payload, which means that, although you are reducing the number of round trips to your server to potentially just one, you will have a greater amount of data to transfer to the client (browser) since the string representation of the payload will be larger than the binary representation. Another issue is that you can no longer safely cache this request on the browser without using something like IndexedDB. You also incur a decode cost when you set the background image of each img tag to a base64 string since the browser now must convert the string to binary and then have the img tag decode that as whatever file format it is (instead of skipping the base64->binary step altogether when you request an image and get a binary response back).
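A rough sketch of the batched-thumbnail idea; the /thumbnails_batch endpoint and its newline-delimited base64 response format are made up for illustration (Dropbox's actual protocol differs):
// One request for many thumbnails; the response carries one
// base64-encoded payload per line, in the order requested.
var ids = ['thumb1', 'thumb2', 'thumb3'];
$.get('/thumbnails_batch?ids=' + ids.join(','), function (body) {
  body.split('\n').forEach(function (b64, i) {
    // Assigning a data URI costs an extra base64->binary decode step.
    document.getElementById(ids[i]).src = 'data:image/jpeg;base64,' + b64;
  });
});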
You can use a tiny placeholder image, which is very lightweight, in place of each real image.
At the same time, while the page is loading, you can load all the original images in a hidden div.
Then, on document ready, you can replace all the placeholders with jQuery.
e.g.
HTML
<img src="tiny_placeholder_image" alt="" data-src="original_image_src"/>
<!-- upto N images -->
<!-- images are loading in background -->
<div style="display:none">
<img src="original_image_src" alt=""/>
<!-- upto N images -->
</div>
JavaScript
(function($){
  // On document ready, swap each placeholder's src for its real data-src.
  $(function(){
    $('img[data-src]').attr('src', function(){ return $(this).data('src'); });
  });
})(jQuery);
Basically I want to specify the background-image for a button using something other than an image url. Being able to set the background-image to an already loaded element contained within the DOM would be ideal. This is so that I can cache a loading gif (displayed on the button) within the DOM and don't have to fetch it when the button is clicked.
I didn't think code was necessary to illustrate the problem but here is some anyway
Not ideal due to image loading on click:
that.submitButtonSelector.css('background-image', 'url(/Content/_activity/ajax-loader.gif)');
Ideal (but no obvious way to achieve)
that.submitButtonSelector.css('background-image', '#precachedImage');
If you load the loading gif via the url, it will be cached in most cases. You only need to download it once; after that it will be served from cache. The only other thing I can think of is to use a base64 source. This has the benefit of not generating an HTTP request, but it is larger in actual bytes (I don't think the larger size is slower than another HTTP request, but you can always benchmark them).
In my experience, base64 images are great if you need to show the loading icon immediately and the icon is small enough, but if you don't need to show it right away, I suggest preloading the image via its URL with JavaScript and just relying on the cached version.
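A minimal sketch of that preload, using the gif path from the question:
// Request the gif once up front; later uses of the same URL hit the cache.
var preload = new Image();
preload.src = '/Content/_activity/ajax-loader.gif';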
So in your case, if you went with base64, you could use
that.submitButtonSelector.css('background-image', 'url(data:image/gif;base64,R0lGODlhEAAQAPIAAP///wAAAMLCwkJCQgAAAGJiYoKCgpKSkiH+GkNyZWF0ZWQgd2l0aCBhamF4bG9hZC5pbmZvACH5BAAKAAAAIf8LTkVUU0NBUEUyLjADAQAAACwAAAAAEAAQAAADMwi63P4wyklrE2MIOggZnAdOmGYJRbExwroUmcG2LmDEwnHQLVsYOd2mBzkYDAdKa+dIAAAh+QQACgABACwAAAAAEAAQAAADNAi63P5OjCEgG4QMu7DmikRxQlFUYDEZIGBMRVsaqHwctXXf7WEYB4Ag1xjihkMZsiUkKhIAIfkEAAoAAgAsAAAAABAAEAAAAzYIujIjK8pByJDMlFYvBoVjHA70GU7xSUJhmKtwHPAKzLO9HMaoKwJZ7Rf8AYPDDzKpZBqfvwQAIfkEAAoAAwAsAAAAABAAEAAAAzMIumIlK8oyhpHsnFZfhYumCYUhDAQxRIdhHBGqRoKw0R8DYlJd8z0fMDgsGo/IpHI5TAAAIfkEAAoABAAsAAAAABAAEAAAAzIIunInK0rnZBTwGPNMgQwmdsNgXGJUlIWEuR5oWUIpz8pAEAMe6TwfwyYsGo/IpFKSAAAh+QQACgAFACwAAAAAEAAQAAADMwi6IMKQORfjdOe82p4wGccc4CEuQradylesojEMBgsUc2G7sDX3lQGBMLAJibufbSlKAAAh+QQACgAGACwAAAAAEAAQAAADMgi63P7wCRHZnFVdmgHu2nFwlWCI3WGc3TSWhUFGxTAUkGCbtgENBMJAEJsxgMLWzpEAACH5BAAKAAcALAAAAAAQABAAAAMyCLrc/jDKSatlQtScKdceCAjDII7HcQ4EMTCpyrCuUBjCYRgHVtqlAiB1YhiCnlsRkAAAOwAAAAAAAAAAAA==)');
The same data URI can also be used from CSS, for example:
body:after {
  content: url(data:image/gif;base64,R0lGODlhEAAQAPIAAP///wAAAMLCwkJCQgAAAGJiYoKCgpKSkiH+GkNyZWF0ZWQgd2l0aCBhamF4bG9hZC5pbmZvACH5BAAKAAAAIf8LTkVUU0NBUEUyLjADAQAAACwAAAAAEAAQAAADMwi63P4wyklrE2MIOggZnAdOmGYJRbExwroUmcG2LmDEwnHQLVsYOd2mBzkYDAdKa+dIAAAh+QQACgABACwAAAAAEAAQAAADNAi63P5OjCEgG4QMu7DmikRxQlFUYDEZIGBMRVsaqHwctXXf7WEYB4Ag1xjihkMZsiUkKhIAIfkEAAoAAgAsAAAAABAAEAAAAzYIujIjK8pByJDMlFYvBoVjHA70GU7xSUJhmKtwHPAKzLO9HMaoKwJZ7Rf8AYPDDzKpZBqfvwQAIfkEAAoAAwAsAAAAABAAEAAAAzMIumIlK8oyhpHsnFZfhYumCYUhDAQxRIdhHBGqRoKw0R8DYlJd8z0fMDgsGo/IpHI5TAAAIfkEAAoABAAsAAAAABAAEAAAAzIIunInK0rnZBTwGPNMgQwmdsNgXGJUlIWEuR5oWUIpz8pAEAMe6TwfwyYsGo/IpFKSAAAh+QQACgAFACwAAAAAEAAQAAADMwi6IMKQORfjdOe82p4wGccc4CEuQradylesojEMBgsUc2G7sDX3lQGBMLAJibufbSlKAAAh+QQACgAGACwAAAAAEAAQAAADMgi63P7wCRHZnFVdmgHu2nFwlWCI3WGc3TSWhUFGxTAUkGCbtgENBMJAEJsxgMLWzpEAACH5BAAKAAcALAAAAAAQABAAAAMyCLrc/jDKSatlQtScKdceCAjDII7HcQ4EMTCpyrCuUBjCYRgHVtqlAiB1YhiCnlsRkAAAOwAAAAAAAAAAAA==)
}
You can have a hidden image that preloads the gif into the cache, and then once it's loaded you can add the background.
HTML:
<img id="img" src="/Content/_activity/ajax-loader.gif" />
CSS:
#img {
  display: none;
}
jQuery:
$("#img").load(function() {
$(submitButtonSelector).click(function() {
that.submitButtonSelector.css("background-image", "/Content/_activity/ajax-loader.gif");
});
});
Here's the code:
<div id="wrapper">
<img src="lowres/image123.jpg">
</div>
PREMISES:
The <img> element is generated by a proprietary system backend and uses a low resolution image as its source. I can only operate on it using pure javascript (no jquery!).
I NEED TO change the src attribute to a high resolution version located on an external server, e.g.: src="//cdn.provider.com/highres/image123.png" (images have the same name but different locations).
THE PROBLEM: doing it after <img> insertion into the DOM issues 2 (two) HTTP requests, one for the lowres image and another for the highres - and I have lots of images on the page!
In order to FIX IT, I was wondering if it is possible to manipulate <img> just before its insertion into DOM to change src appropriately, for example by intercepting an <img> event "beforeInsertion" or an event "afterInsertion" of the <div> wrapping it.
Cheers!
UPDATE 1: to make things clear: 1) I don't have access to the backend/server side; 2) I don't want to display the low resolution image, just the high resolution one; 3) I don't know the file name in advance, I need to get it from the <img> and append it to the path of the high resolution version stored in the CDN (both images have the same name); and 4) I can do it with the code below, but at the cost of TWO HTTP requests, which is exactly what I want to avoid and what motivated this question! ;)
// childNodes[0] can be a whitespace text node, so select the <img> directly
var img = document.querySelector("#wrapper img");
img.src = getHighResolutionImagePath(img.src);
You have to change the URL before the <img> tag is added to the document (e.g. in a server-side script). The browser requests the image from the given URL as soon as it receives the tag, so it's not possible to replace it afterwards without triggering the first request, especially since your JS executes after the elements are already on the page.
When you're developing an app for retina displays and you want to let the browser know an image is high resolution, the convention is to append "@2x" to the picture name, like so:
background-image: url('../img/logo2@2x.png');
Try that and see if the browser adopts the same policy.
Source: http://benfrain.com/how-to-serve-high-resolution-website-images-for-retina-displays-new-ipadiphone4/
You must listen to mutation events, specifically DOMNodeInserted (note that mutation events are deprecated in modern browsers in favor of MutationObserver; see the sketch below).
Full documentation is available at MDN > Mutation Events
Once you have detected that an image has been added (you must filter for the proper node type) you can replace the source with the one you want.
A small snippet taken from MDN:
element.addEventListener("DOMNodeInserted", function (ev) {
  // Filter for <img> nodes, then rewrite the src (helper from the question)
  if (ev.target.tagName === "IMG") {
    ev.target.src = getHighResolutionImagePath(ev.target.src);
  }
}, false);
Beware that performance will be greatly degraded
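Since mutation events are deprecated, a MutationObserver version of the same idea might look like this (a sketch; note the low-res request may already be in flight by the time the callback runs):
// Watch #wrapper for inserted nodes and rewrite any <img> src on sight.
var observer = new MutationObserver(function (mutations) {
  mutations.forEach(function (m) {
    Array.prototype.forEach.call(m.addedNodes, function (node) {
      if (node.nodeType === 1 && node.tagName === 'IMG') {
        node.src = getHighResolutionImagePath(node.src); // helper from the question
      }
    });
  });
});
observer.observe(document.getElementById('wrapper'), { childList: true, subtree: true });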
Context
I am making an application for showing a synchronized HTML5 slideshow to about 50 spectators in a wireless LAN with no internet access.
I run a Node.js server on one of the computers and connect the 50 clients via Socket.IO. (Btw, only one of them controls the presentation.)
The hardware is a domestic wireless 802.11b/g router and 50 mobile devices (tablets, netbooks, smartphones).
Problem
When the slideshow starts, it takes too long (about 10 minutes or more for a 5 MB slideshow) for the clients to see it, since the router has to send the complete slideshow to all the clients at the same time.
How my slideshow looks
<html>
  <head>
    <title>My Slideshow</title>
    <script src="javascripts/slidesplayer.js"></script>
    <link rel="stylesheet" href="/stylesheets/style.css">
  </head>
  <body>
    <div id="slides-container">
      <div class="slide" id="slide_1">
        <!-- Contents such as images, text, video and audio sources -->
      </div>
      <div class="slide" id="slide_2">
        <!-- Contents -->
      </div>
      <!-- A bunch of slides here -->
    </div>
    <script>
      // Here I load the slides
    </script>
  </body>
</html>
What I would like to do
At the beginning, I would like to load the slides-container element completely empty.
Then, as I advance through the slideshow, I'd like to GET from the server the div representing the next slide and append it to the DOM, so that only when that is done does the client start to download the pictures and other assets for that slide (thus decreasing my network overload significantly).
Another relevant fact is that the slideshow (including the slidesplayer.js) is automatically generated from an external software that parses PowerPoint presentations to this HTML5 format and that we will use a lot of presentations that are already made in PowerPoint.
My first impression is that I should accomplish this by using jQuery ajax, but I don't know exactly how to do it the right way, since my idea is just copying the div.slide elements into separate files.
Update: This answer suggests using jQuery for DOM manipulation before displaying. It seems that jQuery requests the resources every time you manipulate a DOM object, even if it is not inserted into your current DOM. So, one possible solution would be working only with strings. You can see more about this issue in this question and this one.
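For example, the strings approach might look like this (a sketch; names are illustrative): keep each slide's markup as a plain string and only turn it into DOM when the slide is shown, so its resources are fetched at that moment and not before.
// Raw markup per slide; nothing here touches the DOM, so nothing loads yet.
var slideHTML = {};
slideHTML[1] = '<div class="slide" id="slide_1">...</div>';
function showSlide(n) {
  // Parsing happens only now; the browser fetches this slide's images here.
  $('#slides-container').append(slideHTML[n]);
}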
One solution would be to treat this as a front-end solution. The front-end should arguably only eat as much as it can take at any one time.
I'm assuming it's external resources (imagery etc) as opposed to the slideshow markup itself that's making up the most of those 5MB, in which case the DOM should not attempt to call those resources until they are necessary.
I would suggest serving the whole slide document via an ajax call but only introducing the markup to each slide as it is called. Something like this:
$.ajax('path/to/slides', {
  async: false,
  // 'success' receives the response body ('complete' would receive the jqXHR)
  success: function ajaxCallback(slidesHTML){
    // Pull out the individual slides from your slideshow HTML
    var $slides = $(slidesHTML).find('.slide');
    // For each of these...
    $slides.each(function prepareSlide(){
      // Store a reference to the slide's contents
      var $slideContent = $($(this).html());
      // Empty the contents and keep only the slide element itself
      var $slideWrapper = $(this).empty();
      $slideWrapper
        // Put the slide where you want it
        .appendTo('#slides-container')
        // And attach some kind of event to it
        // (depending on how your slideware works, you might want to bind this elsewhere)
        .on('focus', function injectContent(){
          // Put the content in - NOW external resources will load
          $slideWrapper.append($slideContent);
          // Unbind this function trigger
          $slideWrapper.off('focus', injectContent);
        });
    });
  }
});
1) You shouldn't be streaming large payloads with Socket.IO; sockets are made for small, low-latency messages. If you need to transmit en masse, I'd recommend using a standard HTTP AJAX request. Then you can use Socket.IO to control which slide you are on (see the sketch after this list).
2) Try AngularJS. They've basically done all the thinking for you regarding view switching (which is essentially what you are doing). They have a great tutorial, which helps a lot.
3) To simplify your Socket calls, I'd recommend using ConversationJS on both the client and server side.
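A sketch of point 1, assuming a hypothetical 'goto-slide' event and a /slides/:n route that returns one slide's markup:
// Socket.IO carries only the slide index; the markup travels over plain HTTP.
var socket = io();
socket.on('goto-slide', function (n) {
  $.get('/slides/' + n, function (html) {
    $('#slides-container').append(html); // this slide's resources load only now
  });
});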
As I said in the question, manipulating DOM elements will cause the browser to download their resources, even if you don't insert the elements that use those resources into your DOM.
In my case, the best solution I could make was to use some sort of lazy loading at least for the img tags (but it could be easily extended for other tags, such as audio and video).
What I did was move the real URL to a differently named attribute (xsrc in this case) and give every img tag a dummy src attribute.
<img id="someImg" src="#" xsrc="foo.png">
Then, with jQuery, I changed the src attribute value to that of xsrc whenever I needed to download the image.
// When I want the image to be downloaded from the server
$('#someImg').attr( 'src' , $('#someImg').attr('xsrc') )
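The same idea generalizes to every element carrying an xsrc attribute (img, audio, video, ...); a sketch using the convention above:
// Swap xsrc -> src for all deferred elements; the downloads start here.
$('[xsrc]').each(function () {
  $(this).attr('src', $(this).attr('xsrc')).removeAttr('xsrc');
});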
You can see more about the idea behind this in the questions I already mentioned (this and this).
I hate how you can actually see webpages load. I think it'd be much more appealing to wait until the page is fully loaded and ready to be displayed, including all scripts and images, and then have the browser display it. So I have two questions:
How can I do this?
Is this common practice? If not, why?
This is a very bad idea for all of the reasons given, and more. That said, here's how you do it using jQuery:
<body>
  <div id="msg" style="font-size:largest;">
    <!-- you can set whatever style you want on this -->
    Loading, please wait...
  </div>
  <div id="body" style="display:none;">
    <!-- everything else -->
  </div>
  <script type="text/javascript">
    // window's load event fires only after images and scripts are done,
    // unlike $(document).ready()
    $(window).load(function() {
      $('#body').show();
      $('#msg').hide();
    });
  </script>
</body>
If the user has JavaScript disabled, they never see the page. If the page never finishes loading, they never see the page. If the page takes too long to load, they may assume something went wrong and just go elsewhere instead of *please wait...*ing.
I think this is a really bad idea. Users like to see progress, plain and simple. Keeping the page at one state for a few seconds and then instantly displaying the loaded page will make the user feel like nothing is happening and you are likely to lose visits.
One option is to show a loading status on your page while stuff processes in the background, but this is normally reserved for when the site is actually doing processing on user input.
http://www.webdeveloper.com/forum/showthread.php?t=180958
The bottom line: you at least need to show some visual activity while the page is loading, and I think having the page load in little pieces at a time is not all that bad (assuming you aren't doing something that seriously slows down page load time).
There is certainly a valid use for this. One is to prevent people from clicking on links/causing JavaScript events to occur until all the page elements and JavaScript have loaded.
In IE, you could use page transitions which mean the page doesn't display until it's fully loaded:
<meta http-equiv="Page-Enter" content="blendTrans(Duration=.01)" />
<meta http-equiv="Page-Exit" content="blendTrans(Duration=.01)" />
Notice the short duration. It's just enough to make sure the page doesn't display until it's fully loaded.
In Firefox and other browsers, the solution I've used is to create a DIV that is the size of the page and white, then at the very end of the page put in JavaScript that hides it. Another way would be to use jQuery to hide it. Not as painless as the IE solution, but both work well.
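A minimal sketch of that overlay, assuming a #cover div styled to fill the viewport with a white background:
// Placed at the very end of the page, so the content above it has been
// parsed; swap for window.onload if you also want to wait for images.
document.getElementById('cover').style.display = 'none';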
Here's a solution using jQuery:
<script type="text/javascript">
$('#container').css('opacity', 0);
$(window).load(function() {
$('#container').css('opacity', 1);
});
</script>
I put this script just after my </body> tag. Just replace "#container" with a selector for the DOM element(s) you want to hide. I tried several variations of this (including .hide()/.show(), and .fadeOut()/.fadeIn()), and just setting the opacity seems to have the fewest ill effects (flicker, changing page height, etc.). You can also replace css('opacity', 0) with fadeTo(100, 1) for a smoother transition. (No, fadeIn() won't work, at least not under jQuery 1.3.2.)
Now the caveats: I implemented the above because I'm using TypeKit and there's an annoying flicker when you refresh the page and the fonts take a few hundred milliseconds to load. So I don't want any text to appear on the screen until TypeKit has loaded. But obviously you're in big trouble if you use the code above and something on your page fails to load. There are two obvious ways that it could be improved:
A maximum time limit (say, 1 second) after which everything appears whether the page is loaded or not
Some kind of loading indicator (say, something from http://www.ajaxload.info/)
I won't bother implementing the loading indicator here, but the time limit is easy. Just add this to the script above:
$(document).ready(function() {
  setTimeout(function() { $("#container").css("opacity", 1); }, 1000);
});
So now, worst-case scenario, your page will take an extra second to appear.
Immediately following your <body> tag add something like this...
<style> body {opacity:0;}</style>
And for the very first thing in your <head> add something like...
<script>
  window.onload = function() { setTimeout(function(){ document.body.style.opacity = "1"; }, 500); };
</script>
Whether this is good practice or bad depends on your visitors and on how long the wait takes.
The question that is still left open, and that I am not seeing any answers to here, is how to be sure the page has stabilized. For example, if you are loading fonts, the page may reflow a bit until all the fonts are loaded and displayed. I would like to know if there is an event that tells me the page is done rendering.
Also make sure the server buffers the page and does not immediately (while building) stream it to the client browser.
Since you have not specified your technology stack:
PHP: look into ob_start
ASP.NET: make sure Response.BufferOutput = True (it is by default)
obligatory: "use jQuery"
I've seen pages that put a black or white div that covers everything on top of the page, then remove it on the window load event (or you could use .ready in jQuery). That being said, it was one of the most annoying web pages I've ever seen, and I would advise against it.
In the PHP-Fusion Open Source CMS (http://www.php-fusion.co.uk), we do it this way at the core:
<?php
ob_start();
// Your PHP codes here
?>
YOUR HTML HERE
<?php
$html_output = ob_get_contents();
ob_end_clean();
echo $html_output;
?>
You won't be able to see anything loading one by one. The only loader will be your browser tab spinner, and it just displays everything in an instant after everything is loaded. Give it a try.
This method is fully compliant in html files.
You can hide everything using some css:
#some_div {
  display: none;
}
and then in JavaScript assign a function to window.onload to remove that div.
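For example, a minimal sketch assuming the #some_div rule above hides the page wrapper:
// window.onload fires only after all resources (images, scripts) are loaded.
window.onload = function () {
  document.getElementById('some_div').style.display = 'block';
};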
jQuery makes things like this very easy.
In addition to Trevor Burnham's answer, if you want to deal with disabled JavaScript and defer CSS loading:
HTML5
<html class="no-js">
<head>...</head>
<body>
<header>...</header>
<main>...</main>
<footer>...</footer>
</body>
</html>
CSS
/* at the beginning of the page */
.js main, .js footer {
  opacity: 0;
}
JAVASCRIPT
//at the beginning of the page before loading jquery
var h = document.querySelector("html");
h.className += ' ' + 'js';
h.className = h.className.replace(
new RegExp('( |^)' + 'no-js' + '( |$)', 'g'), ' ').trim();
JQUERY
//somewhere at the end of the page after loading jquery
$(window).load(function() {
  $('main').css('opacity', 1);
  $('footer').css('opacity', 1);
});
RESOURCES
CSS delivery optimization: How to defer css loading?
What is the purpose of the HTML "no-js" class?
How to get the <html> tag HTML with JavaScript / jQuery?
How to add/remove a class in JavaScript?
While I agree with the others that you should not want this, I'll just briefly explain what you can do to make a small difference without going all the way and actually blocking content that is already there -- maybe this will be enough to keep both you and your visitors happy.
The browser starts loading a page and will process externally located CSS and JS later, especially if the CSS/JS is linked in the 'correct' place. (The usual advice is to load JS as late as possible, and to use external CSS loaded in the header.) Now, if you have some portion of your CSS and/or JS that you would like applied as soon as possible, simply include it in the page itself. This goes against the advice of performance tools like YSlow, but it will probably increase the chance of your page showing up the way you want it to. Use this only when really needed!
You could start by having your site's main index page contain only a message saying "Loading". From here you could use ajax to fetch the contents of your next page and embed it into the current one, on completion removing the "Loading" message.
You might be able to get away with just including a loading message container at the top of your page which is 100% width/height, and then removing said div on load complete.
The latter may not work perfectly in both situations and will depend on how the browser renders content.
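A sketch of the first approach; the /home-content.html fragment URL and the element ids are made up for illustration:
// The index page ships with just a "Loading" message; the real content
// arrives via ajax and replaces it once complete.
$.get('/home-content.html', function (html) {
  $('#content').html(html);
  $('#loading').remove();
});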
I'm not entirely sure if it's a good idea. For simple static sites I would say not. However, I have seen a lot of heavy javascript sites lately from design companies that use complex animation and are one page. These sites use a loading message to wait for all scripts/html/css to be loaded so that the page functions as expected.
Don't use display:none. If you do, you will see images resize/reposition when you do the show(). Use visibility:hidden instead and it will lay everything out correctly, but it just won't be visible until you tell it to.
Hope this code helps:
<html>
  <head>
    <style type="text/css">
      .js #flash { display: none; }
    </style>
    <script type="text/javascript">
      document.documentElement.className = 'js';
    </script>
  </head>
  <body>
    <!-- the rest of your code goes here -->
    <script type="text/javascript" src="/scripts/jquery.js"></script>
    <script type="text/javascript">
      // Stuff to do as soon as the body finishes loading.
      // No need for $(document).ready() here.
    </script>
  </body>
</html>
Put text at the top of the page. While the user reads it, the rest of the page can load and it will be ready by the time the user scrolls down.
I am, frankly, a bit disturbed at many of the answers here. I'd say all of them are terrible. Although I share the skeptical reaction of the various top respondents, many answers give "solutions" that won't display anything at all to a user who has JavaScript disabled, and many others rely on a customized on-page loading notice, while signaling to the browser that the page is already loaded.
As a user, I hate both of these outcomes, so as a web-developer, I'd say these are both "non-solutions". You never want to anger your userbase and the solutions given here will anger a lot of users. I especially hate these approaches because if the user opens a webpage in the background in a new tab, the browser will display the page as loaded but the user might click over to it to find that it isn't loaded.
Independently of your question here, best practice is to make as much of your site work without JavaScript as possible, and best practice is to use the browser's built-in loading signals and never signal to the browser that the page is loaded before it actually is. So really, the only good way to do this is to make your page load so fast that there is never any moment of the user waiting.
The best way to achieve what you want is to avoid using JavaScript to load elements of the page, and then to optimize the page intensely. Here are the components of this approach:
Have JavaScript on the page if you like, but don't use it to load or otherwise modify any DOM elements after the initial request is fulfilled by the server. Use JavaScript to modify elements of the page only later, such as if triggered by user input, or perhaps to refresh an element after some time, but not in any way related to the page's initial loading. I.e. use JavaScript for what it was designed for (to make webpages interactive) and don't use it to do what HTML was designed for (to make the webpage in the first place.)
Avoid the use of any heavy JavaScript libraries and include as little JavaScript as possible. Never include JavaScript files generically, i.e. only include specific files / libraries in specific pages where you need them.
Specify the width and height of any images in the page code itself, so that the browser knows the exact layout before the images have loaded. This reduces any "choppiness" as the page loads, i.e. elements moving around as the browser resizes the boxes in which images of unspecified size are contained.
Ensure that image files are in the exact dimensions being displayed on the page and are not being downsized by the browser. This minimizes file size and also minimizes CPU work the user's computer needs to do to resize images, both of which can affect load time.
Optimize the compression of images: use a good lossy format like JPG and lower the quality setting as far as you can go without perceptibly degrading the image. Use lossless formats like PNG only where necessary, and ideally keep them small in dimension so the file size is also small.
Focus the intensity of your optimization efforts on any elements that load "above the fold" on a typical page, as these are what is going to affect what the user sees. Users rarely scroll down instantly, so if elements lower down on the page load a bit slower, almost no one will notice. But still optimize these lower elements reasonably because they also affect server load, bandwidth, and user CPU load.
If you use any elements at all in your page that are potentially very slow to load due to reasons beyond your control, such as content pulled from another server (ads, social media widgets, integrations with other websites, etc.), compartmentalize these in an element of fixed size, and ideally place it below the fold.
Avoid auto-ads, page-modifying AI (like Ezoic), or any other external add-ins that necessarily break or undermine one or more of these recommendations. For example, auto-ads are terrible because they rely on loading external resources, they usually pull in heavy JavaScript libraries, and they also modify the page layout. Even the best-designed auto-ads are going to completely undermine all your other optimization efforts.
If you are running a company with multiple developers, quickly jettison any developers who are not fully committed to a lightweight, fast-loading web design. Ideally, don't ever hire such people to begin with. A lot of people get really vested in a certain philosophy or style of development that is at odds with lightweight design. The world would be a better place if these people were in a different line of work, rather than designing webpages.
So you've optimized your page.
This produces the outcome that, if the user clicks the link directly, they'll see the content above the fold fully loaded immediately or nearly immediately, the worst-case scenario being that a couple of images fill in within a second or two. By the time they scroll down, everything else will already be loaded. Any truly slow-to-load content, such as Google Analytics tracking or other third-party services, will not be central to the appearance of the webpage itself, so the user will see a fully loaded page even if there are still a few invisible elements loading behind the scenes.
On the other hand, if the user loads the link in a background tab, it will display as loading to the browser, showing the animated symbol in the tab, until it is truly fully loaded. Once it displays as loaded in the tab, if they click it, it will be fully loaded.
In addition, you will have made the page load really fast, which is a good thing in and of itself.
This is a win-win. The user sees a fully loaded page nearly instantly, there is almost never any waiting while looking at a half-displayed page, the loading symbol works as expected when loading a tab in the background, and on top of this you've netted a ton of side benefits like reduced bandwidth and server CPU load, not to mention lessening the load on the user's CPU as well. (Many users HATE when your page cranks their CPU, and rightfully so.)
So yeah, your choice what to do, but there is only one real solution here and it is lightweight, efficient web design.