For example, I have a webpage with the following structure in <head>, which is pretty common, I guess:
<link rel="stylesheet" href="styles.css"> <!-- big CSS bundle -->
<script defer src="main.js"></script> <!-- relatively small JS bundle with the defer attribute -->
And there is <span id="element"></span> on the page.
The CSS bundle contains #element { display: none; }.
The JS bundle contains (using jQuery here):
$(document).ready(() => {
  console.log($('#element').css('display'));
});
The result differs from load to load. Sometimes the JS executes before the CSS has been applied and the result is 'inline'; sometimes the JS executes later and the result is 'none', as I want it to be.
I want my JS bundle to be non-blocking, so I use the defer attribute. I can't simply put the JS bundle at the end of the page because I use Turbolinks, which doesn't allow that.
window.onload is not the best option either, because it fires only after all resources have been downloaded, including images, not just the CSS.
So I want the JS to be non-blocking and to execute after the CSS, to get consistent and predictable results. Is there something I can do?
One option is to add a load event handler to the link element which then inserts the script into the head. Since the script is dynamically added, it will automatically be async. It would therefore be non-blocking and execute after the CSS has loaded.
<link id="stylesheet" rel="stylesheet" href="styles.css">
<script>
  var link = document.getElementById('stylesheet');
  link.addEventListener('load', function () {
    var script = document.createElement('script');
    script.src = 'main.js';
    document.head.appendChild(script);
  });
</script>
However, this could cause you a problem: if the stylesheet is cached, it might not emit a load event (because it's already loaded). For cases like that, you could try checking link.sheet.cssRules.
Load events on <link> elements have historically been a troublesome issue, so I don't know how well this will work.
Here is a CodePen demonstrating the JS loading with a check for link.sheet.cssRules. It currently works for me in Chrome, Firefox, and Edge.
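For reference, one way that cached-stylesheet check might look (a minimal sketch assuming the same same-origin styles.css and main.js as above; the CodePen may differ in detail):

<link id="stylesheet" rel="stylesheet" href="styles.css">
<script>
  var link = document.getElementById('stylesheet');

  function loadMainScript() {
    var script = document.createElement('script');
    script.src = 'main.js';
    document.head.appendChild(script);
  }

  // If the stylesheet came from cache, 'load' may already have fired,
  // so first check whether its rules are accessible.
  if (link.sheet && link.sheet.cssRules) {
    loadMainScript();
  } else {
    link.addEventListener('load', loadMainScript);
  }
</script>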
I found another solution. You can just add a script without a src attribute to the <head>, containing some code, an empty comment for example: <script>//</script>
And that's it. Now all the scripts, even deferred ones, will wait for the styles to be applied.
I'm not sure exactly how it works, but I think the deferred scripts are queued behind the inline script without src, which by the standard must wait for the CSS to be applied.
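In other words, applied to the markup from the question, the ordering would look like this (a sketch; the empty inline script is the only addition):

<head>
  <link rel="stylesheet" href="styles.css">
  <!-- empty inline script: the parser must wait for pending stylesheets before running it -->
  <script>//</script>
  <!-- deferred scripts queue behind it, so main.js should now run after the CSS applies -->
  <script defer src="main.js"></script>
</head>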
I assumed moving scripts to the bottom was the same as using the defer or async attribute. Since defer and async are not fully supported in legacy browsers, I went with loading the script at the bottom of the page.
<html>
  <body>
    <!-- whole block of html -->
    <script type='text/javascript' src='app.js'></script>
  </body>
</html>
Before doing this, I ran performance benchmark tools like GTmetrix and Google PageSpeed Insights. Both showed 'render blocking' as the main problem. I am a bit confused now, because even after moving these scripts to the bottom to let the content/HTML load first, these tools still report render blocking as the main problem.
I did look at other StackOverflow posts pointing out that scripts loaded at the bottom should nevertheless have the 'defer' attribute.
I have several questions:
Is the above true?
Do these tools specifically look for the 'defer' or 'async' attribute?
If I have to provide a fallback for defer (specifically for IE browsers), do I need to use conditional statements to load non-deferred scripts for IE?
Kindly suggest the best approach. Thank you in advance.
Yes, scripts loaded even at the bottom should have the defer attribute, if that's possible within your site design and requirements.
No, those tools look for the completion of parsing.
It depends on the version of IE that you want to support; each version has different recommendations (see the sketch below for one common approach).
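For the IE fallback in question 3, one common pattern is a downlevel-revealed conditional comment, which old IE parses and every other browser treats as a plain comment. A sketch with hypothetical app.js / app.legacy.js bundles:

<!--[if lte IE 9]>
  <script src="app.legacy.js"></script>
<![endif]-->
<!--[if !IE]><!-->
  <script defer src="app.js"></script>
<!--<![endif]-->

Here IE 9 and below load a plain blocking bundle, while everything else gets the deferred one.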
Now, to explain plain scripts, defer, and async a bit, to help you understand the reasons:
Script
A plain <script> tag will stop parsing at that point until the script has downloaded and executed.
Async
If you use async, the script will not stop parsing for the download, as it downloads in parallel with the rest of the HTML content. However, the script will still stop parsing while it executes, and only then will the parsing of the HTML continue or complete.
Defer
If you use defer, the script will not halt the parsing of the HTML for either downloading or executing the script. So it's arguably the best way to avoid adding to the loading time of the web page.
Please note, defer is good for reducing the parsing time of the HTML, but it is not always the best or most appropriate choice in every web design flow, so be careful when you use it.
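To summarize the three behaviors side by side (a minimal sketch with a hypothetical app.js):

<!-- plain: blocks parsing while it downloads and executes -->
<script src="app.js"></script>

<!-- async: downloads in parallel, but still pauses parsing to execute -->
<script async src="app.js"></script>

<!-- defer: downloads in parallel and executes only after parsing finishes -->
<script defer src="app.js"></script>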
Instead of async, maybe something like this (thanks @guest271314 for the idea):
<!DOCTYPE html>
<html>
  <body>
    <!-- whole block of html -->
    <!-- inline scripts can't have async -->
    <script type='text/javascript'>
      function addScript(url) {
        document.open();
        // weird quotes to avoid confusing the HTML parser
        document.write("<scrip" + "t src=\"" + url + "\"></scr" + "ipt>");
        document.close();
      }
      // add your app.js last to ensure all libraries are loaded
      addScript("jquery.js");
      addScript("lodash.js");
      addScript("app.js");
    </script>
  </body>
</html>
Is this what you wanted? You can add the async or defer attributes in the document.write call if you want.
According to the HTML spec, a script block in the HTML page blocks the rendering of the page until the JavaScript file at the URL has been downloaded and processed.
Adding the script at the end of the page allows the browser to continue rendering the page, so the user sees it render as soon as possible.
[Preferred] Adding defer to the script tag promises the browser that the script does not contain document.write or other document-altering code, which allows it to continue rendering (see the snippet below).
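With the preferred approach, the script reference can stay in the head (a sketch with a hypothetical app.js):

<head>
  <!-- downloads in parallel, executes after parsing, never blocks rendering -->
  <script defer src="app.js"></script>
</head>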
This previous thread may be useful to you:
Is it necessary to put scripts at the bottom of a page when using the "defer" attribute?
Why should the scripts mentioned last have the defer attribute?
Well, the answer is that by adding defer to the last script you are reducing the number of critical resources that need to be loaded before the page is painted, which shortens the critical rendering path.
Yes, you are correct that by the time you reach the last script the DOM is parsed, but the browser has not yet started painting it, and hence the domContentLoaded event is blocked until that paint and render activity finishes.
By adding async/defer we are telling the browser that the resource is not critical for rendering and can be loaded and executed after the DOM content has loaded.
This triggers the domContentLoaded event earlier, and the sooner domContentLoaded fires, the sooner other application logic can begin executing.
Refer to the Google link below; it clearly demonstrates the concept.
https://developers.google.com/web/fundamentals/performance/critical-rendering-path/analyzing-crp
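If you want to observe this on your own page, here is a small sketch using the legacy but widely supported Navigation Timing API to log when domContentLoaded fired:

document.addEventListener('DOMContentLoaded', function () {
  var t = performance.timing;
  // domContentLoadedEventStart is populated once the event starts dispatching
  console.log('DOMContentLoaded fired after',
              t.domContentLoadedEventStart - t.navigationStart, 'ms');
});

Moving a render-blocking script to defer should visibly shrink that number.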
I lately saw some sites which use this pattern:
<html>
  <head>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
    <script>
      $(function () { ...do some stuff with plugins... });
    </script>
  </head>
  <body>
    <script src="myplugin1.js"></script>
    <script src="myplugin2.js"></script>
    <script src="myplugin3.js"></script>
  </body>
</html>
This made me think of some traps:
Question #1
The document.ready event is raised when the DOM structure is done, not after the plugin scripts (JS) have been parsed. (Notice: I didn't say "when all resources have been downloaded"!)
So there could be a situation where the document.ready function tries to use a plugin variable which hasn't been fully downloaded yet, which would cause an error.
Am I right?
Question #2
This leads me to: never use document.ready before the script references (I mean: in situations where the document.ready handler depends on variables from those scripts).
Am I right?
P.S. I'm not talking about window.load, which would obviously work here, but then I'd have to wait much longer.
If you think about all the kinds of resources in a page, many can be loaded separately from the page content: images and stylesheets, for example. They may change the look of the page, but they can't really change the structure, so it's safe to load them separately.
Scripts, on the other hand, have this little thing called document.write that can throw a wrench into the works. If I have some HTML like this:
Who would <script>document.write("<em>");</script>ever</em> write like this?
Then browsers will parse it just fine; document.write, if used at the top level like that, effectively inserts HTML at that position to be parsed. That means that the whole rest of the page depends upon a script element, so the browser can't really move on with the document until that script has loaded.
Because scripts can potentially modify the HTML and change the structure, the browser can't wait to load them later: it has to load them at that moment, and it can't say the DOM is ready yet because a script might modify it. Therefore, the browser must delay the DOM ready event until all the scripts have run. This makes it safe for you to put the dependent code at the top in a ready handler.
However, I'd recommend you don't, because it's not very clear.
The event that $(document).ready equates to is DOMContentLoaded in modern browsers (and it falls back to some others in legacy browsers that equate to the same scenario).
MDN summarizes it pretty nicely:
The DOMContentLoaded event is fired when the document has been completely loaded and parsed, without waiting for stylesheets, images, and subframes to finish loading (the load event can be used to detect a fully-loaded page).
So your scripts will always have been parsed by the time the ready handler executes.
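A small sketch demonstrating this (hypothetical pluginValue variable; jQuery assumed to be loaded in the head):

<head>
  <script src="jquery.js"></script>
  <script>
    $(document).ready(function () {
      // The bottom script has already run: ready waits for all parser-inserted scripts.
      console.log(window.pluginValue); // logs "loaded"
    });
  </script>
</head>
<body>
  <script>window.pluginValue = "loaded";</script>
</body>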
I'm making some simple changes using JavaScript to HTML elements that already exist when the page is served (such as changing background images of div elements, adding IDs, etc.). This of course works fine in every browser apart from IE8, where the change doesn't appear to be reflected in the DOM, so when I parse the DOM after the JS has run it can't find the elements I'm looking for. The page uses two JavaScript files in the header: one is an external third-party script which I do not control but which is the one adding the IDs and background images; the second is mine, called after the first, and it parses the document looking for the specific elements with the new IDs. Both are external scripts and are not inline in the HTML source.
From what I can tell it's either:
a race condition: two external JavaScript files are running, one changing the buttons and adding the IDs, the other parsing the DOM looking for specific elements; because they run at the same time, the second never finds the elements
IE8 does not properly refresh the DOM after changes have been made
My JS is included after the first JS in the head, so you would assume that script blocking would prevent the race condition and that the elements would be available before my JS runs.
Things I've tried:
I've tried adding a class to the body to force a refresh of the DOM before my code runs
I've used the IE8 developer tools and the IDs and elements are not present, but if I refresh a few times they magically appear (the page has already fully loaded at this point and I can interact with it fully)
Any ideas?
Thanks!
One thing to keep in mind is that when you create multiple <script> tags, you are not guaranteed of the order in which they load, and depending on how they are built, they will generally begin processing as soon as they are loaded.
So, if you are including one local file and one file from a CDN, such as:
<script type="text/javascript"
src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js">
</script>
<script type="text/javascript" src="/js/my_script.js"></script>
You have to take into account that the CDN file is often delivered far faster than your hosted file. In the above case, this may be a good thing, because having jQuery loaded on the page before your script is probably ideal. But if you are loading a different third-party script that relies on certain elements being present in the DOM which your script is responsible for creating, your script may not create them in time.
Imagine this scenario:
<script type="text/javascript" src="https://someurl/somelib.js">
  // This script parses the DOM and applies alterations to certain items
</script>
<script type="text/javascript" src="/js/my_script.js">
  // This script creates the DOM elements the other script is supposed to alter
</script>
Your page will only work if the local file, /js/my_script.js, loads first - which is unlikely because the other file is being served from a dedicated CDN.
This is even worse when both files are served locally, such as:
<script type="text/javascript" src="/js/my_relied_upon_script.js"></script>
<script type="text/javascript" src="/js/my_reliant_script.js"></script>
In this case, it all depends on how your local web server happens to handle the HTTP requests, which determines what happens in what order.
So - on to the solution:
1) Make all your scripts wait for the document's ready event to fire. Because this event only occurs once the document is fully loaded (including any other HTTP requests necessary to fully load its elements, such as scripts, images, etc.), you can be guaranteed that the scripts will at least wait until the full DOM is loaded.
2) Make subordinate scripts wait for trigger events.
With jQuery, an example might be something like the following:
// Script #1
$(document).bind('ready', function () {
  $('#NeedsBackground').css({ background: 'url(/gfx/bg.png)' });
  var $wrapper = $('<div />').addClass('wrapper');
  $('#NeedsWrapper').wrap($wrapper);

  // Here's the magic that enforces loading.
  $(document).trigger('Script1Finished');
});

// Script #2
$(document).bind('Script1Finished', function () {
  $('.wrapper').css({ border: '1px solid #000' });
});
Now, bear in mind that the above transformations are fairly terrible and not something you'd generally want to do (such as inlining CSS), but they give an example. Because Script #2 requires that the .wrapper elements exist before running, you need to ensure that it runs AFTER Script #1.
In this case, we're accomplishing that by triggering a custom event on the document, which we can then respond to - and we are only firing that event after the DOM has been put in the proper state.
If I keep JavaScript code at the bottom, or keep JavaScript code in <head> inside document.ready, are these the same thing?
I'm confused between these two methodologies, http://api.jquery.com/ready/ and http://developer.yahoo.com/performance/rules.html#js_bottom.
Is there any benefit to putting JavaScript code at the bottom (just before </body>) even if I keep the code inside:
$(document).ready(function() {
  // code here
});
given that the JavaScript is attached like this:
<head>
  <script type="text/javascript" src="example.js"></script>
</head>
In general, you should put your JavaScript files at the bottom of your HTML file.
That is even more important if you're using "classical" <script> tags. Most browsers (even modern ones) will block the UI thread, and therefore the rendering of your HTML markup, while loading and executing JavaScript.
That in turn means that if you load a decent amount of JavaScript at the top of your page, the user will experience a "slow" page load, because they will only see your markup after all your script has been loaded and executed.
To make this problem even worse, most browsers will not download JavaScript files in parallel. If you have something like this:
<script type="text/javascript" src="/path/file1.js"></script>
<script type="text/javascript" src="/path/file2.js"></script>
<script type="text/javascript" src="/path/file3.js"></script>
your browser will
load file1.js
execute file1.js
load file2.js
execute file2.js
load file3.js
execute file3.js
and while doing so, both the UI thread and the rendering process are blocked.
Some browsers, like Chrome, have finally started to load script files in parallel, which makes this whole problem a little less of an issue.
Another way to work around the problem is dynamic script tag insertion. Basically, you load only one JavaScript file via a <script> tag. This (loader) script then dynamically creates <script> tags and inserts them into your markup. That works asynchronously and is far better in terms of performance.
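A minimal loader sketch of that idea (hypothetical file names):

// loader.js - the only script referenced from the markup
function loadScript(url) {
  var s = document.createElement('script');
  s.src = url; // downloads without blocking the HTML parser
  document.head.appendChild(s);
}

loadScript('/path/file1.js');
loadScript('/path/file2.js');
loadScript('/path/file3.js');

Note that scripts inserted this way are async by default and execute as they arrive; set s.async = false before appending if execution order matters.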
They will load all the same in terms of you being able to access the code.
The differences are in the perceived speed of loading of the page. If the JavaScript is last, it will not block any CSS that is trying to download (CSS should always be at the top), and it will not block any images that need to be downloaded.
Browsers only request items as they find them in the HTML, and they have a limited number of download streams (~10 in modern browsers), so if you are making a lot of requests for images/CSS as well as JS, something is going to lose, and the perceived speed/user experience of your page load will take a hit.
They are not the same thing as the ready event is fired when the DOM tree has been built, while scripts at the end of the page may actually execute afterward.
Either way, they're both safe entry points for your app's execution.
The Yahoo! Developer site is saying that if you put JavaScript at the bottom of the page, it won't block loading of other resources by the browser. This will make the page's initial load quicker.
jQuery is specifying a function to load when the entire page has loaded.
If you have a function which executes on page load, it won't matter whether you include it in <head> or at the bottom of the page, it will be executed at the same time.
It's important to consider what the JavaScript is actually doing on your page when deciding where to put it. In most cases, the time it takes to load and run JavaScript makes placing it at the end of the page more logical. However, if the page rendering itself depends on Ajax calls or similar, this might not be the case.
Here's a good read on the subject of document.ready() not being appropriate for all JS.
The position of the <script> tag doesn't affect your script if you use document.ready.
It seems JavaScript loads faster when placed before </body>, but I'm not sure.
Even with the script at the bottom of the HTML document, the DOM may not be fully loaded. All closed elements above the script will typically be ready, but a DOM ready event may be necessary in corner cases.
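For those corner cases, wrapping the bottom script's work in a DOM ready handler costs little (a sketch):

<script>
  // Even at the bottom of <body>, defer the work until the DOM is complete.
  document.addEventListener('DOMContentLoaded', function () {
    // safe to touch any element in the document here
  });
</script>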
The context: my question relates to improving web-page loading performance, and in particular the effect that JavaScript has on page loading (resources/elements below the script are blocked from downloading/rendering).
This problem is usually avoided/mitigated by placing the scripts at the bottom (e.g., just before the </body> tag).
The code I am looking at is for web analytics. Placing it at the bottom reduces its accuracy, and because this script has no effect on the page's content (i.e., it's not rewriting any part of the page), I want to move it inside the head. Just how to do that without ruining page-loading performance is the crux.
From my research, I've found six techniques (with support among all or most of the major browsers) for downloading scripts so that they don't block down-page content from loading/rendering:
(i) XHR + eval();
(ii) XHR + inject;
(iii) download the HTML-wrapped script in an iframe;
(iv) setting the script tag's async flag to TRUE (HTML 5 only);
(v) setting the script tag's defer attribute; and
(vi) 'Script DOM Element'.
It's the last of these I don't understand. The JavaScript to implement pattern (vi) is:
(function () {
  var q1 = document.createElement('script');
  q1.src = 'http://www.my_site.com/q1.js';
  document.documentElement.firstChild.appendChild(q1);
})();
Seems simple enough: an anonymous function is created and then executed in the same block. Inside this anonymous function:
a script element is created
its src attribute is set to the script's location, then
the script element is added to the DOM
But while each line is clear, it's still not clear to me how exactly this pattern allows the script to load without blocking down-page elements/resources from rendering/loading.
One note first:
(iv) setting the script tag's 'async' flag to 'TRUE' (HTML 5 only);
async is a "boolean attribute", in the HTML sense. That means any of the following is correct:
<script async src="..."></script>
<script async="" src="..."></script>
<script async="async" src="..."></script>
And that both of the following make the script be loaded asynchronously, but are not conforming (because of the possible confusion):
<script async="true" src="..."></script>
<script async="false" src="..."></script>
Now on to your question. The point is that it is the HTML parser that is being blocked by the script (because people do stupid things like document.write("<!--"), even from external JS files, and expect it to "work"). However, in your case (vi), the HTML parser never sees the script element, because it is added to the DOM directly. Somewhat logically, if a script element (or rather, a <script> start tag) isn't seen by the HTML parser, it can't stop the parsing either.
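One caveat worth knowing about pattern (vi): scripts inserted this way are async by default, so several of them execute in arrival order, not insertion order. If order matters, you can clear that flag (a sketch with hypothetical file names):

['lib.js', 'app.js'].forEach(function (url) {
  var s = document.createElement('script');
  s.src = url;
  s.async = false; // preserve insertion order, still without blocking the parser
  document.head.appendChild(s);
});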
I will try to say everything with one URL; it will save us both time.
Please have a look at this presentation from the author of a book that addresses topics such as the ones you are mentioning. I found the presentation (there is a YouTube version also, I think on the Google channel) very, very interesting. It explains much of what you want to know.
http://www.slideshare.net/souders/sxsw-even-faster-web-sites
http://www.youtube.com/watch?v=aJGC0JSlpPE (long video, many details on performance parameters/topics)
Your last example modifies the DOM tree and can only be used after the DOM tree is complete (onload). So the browser is already able to fully render the page while you're loading the JS script.
Here is an example of how Firefox renders the two different versions:
(Screenshot: http://img177.imageshack.us/img177/9302/40229406.jpg)
test.html loads the script with your method above, within onload, at the bottom of the page.
test2.html loads it with a simple script tag in the head.
Note the red line; this is when the onload event is triggered.