I have a header on my webpage that contains some JavaScript for the jQuery plugins I am using. These plugins are used on a bunch of pages, so I just include them in the header document that is included in every one of my pages.
There is one page, however, where I would like to include some other JavaScript that is only needed on that one page. Can I use the document ready function a second time on the same page, or is that poor form?
I don't want to include the JavaScript on every page, as it is not needed there and would be a waste to load everywhere.
YES
The jQuery docs are clear that this is fine.
Yes, you can use it multiple times. Each function you pass is queued and run when the document's ready event fires.
No, there is nothing wrong with it; the handlers will be executed in the order they are specified.
Here is a demo: http://jsfiddle.net/LKuz2/7/
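A minimal sketch of the idea (my own illustration, not necessarily what the fiddle contains):

// both handlers are queued and run in the order they were registered
$(document).ready(function () {
    console.log("first ready handler");
});
$(document).ready(function () {
    console.log("second ready handler"); // always logs after the first one
});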
Yes, keeping in mind that:
Your code will be less readable.
You will end up with a separate function scope for each block, so variables you create in one closure will not be visible from the others (see the sketch below).
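A quick illustration of the scoping point:

$(document).ready(function () {
    var onlyHere = 42; // local to this handler's closure
});
$(document).ready(function () {
    console.log(typeof onlyHere); // "undefined" - not visible from this block
});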
Recommendation: If you think you need multiple $(document).ready() blocks, you are probably wrong. Take the time to rework your solution into something more maintainable.
Erm, and this has been covered here before. More than once. By Me. How embarrassing :)
What are the side effects (if any) of multiple $(document).ready() in an HTML page?
Can you have multiple $(document).ready(function(){ ... }); sections?
Related
(I am still very new to JavaScript and jQuery.) I am trying to keep my code clean by creating one JS file per HTML file. I also have a couple of common JS files containing code used by the page-specific JS files.
I was searching for ways to import/include other JS files from a given JS file and I came across this one. It refers to $.getScript().
Instead of using multiple <script src="xxx.js"></script> tags as imports/includes in my HTML pages, could I move them into my JS files and use $.getScript(...) instead? I mean, is it safe? Is it good practice?
EDIT
Is it safe regarding cyclic references?
EDIT II
To make it clearer, let's imagine I have a1.js, a2.js and a3.js.
For a1.js:
$.getScript("a2.js");
$.getScript("a3.js");
...
For a2.js:
$.getScript("a3.js");
...
For a3.js:
$.getScript("a2.js");
...
There is a potential infinite loop between a2.js and a3.js. Should I handle it with something like this in each js file:
// rough guard idea: only run this file's setup the first time it is loaded
var vari;
if (typeof vari === 'undefined') {
    vari = 1;
}
Regarding good practice, the answer is, as always: it depends.
Your Page Style
Contra:
If you use a script to substitute some page fonts, for example, the effect will be that the font change is even more visible to the user.
If your script changes the height of some elements, this will be very noticeable.
Pro:
If you load a script to handle a specific form that the user first has to fill in, there is no problem at all.
If you load a script that starts an animation, and the wait can be covered with a single loading animation, you can do this.
If your page is an application with many views and models, you can use getScript the way you would use getJSON; in this case your page speed will greatly improve (see the sketch after this list).
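A hedged sketch of that per-view lazy loading; the element id, file name and init function are made up for illustration:

// hypothetical: load the search view's code only when it is first needed
$('#openSearchView').click(function () {
    $.getScript('/js/searchView.js', function () {
        window.initSearchView(); // assumed to be defined by searchView.js
    });
});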
Your Coding Style
Not every page and script is structured to be used this way. jQuery's $(document).ready() fires every registered handler once, even if it is registered after the event has already occurred. That does not mean every handler mechanism works this way; the DOM2 events certainly do not.
If you have inline scripts anywhere, they will no longer work.
You can no longer guarantee the order in which your initialization code runs, so you can run into problems and have to add more checks, e.g. that a container you expect to be visible is still visible.
What's the reward?
On high-performance pages you gain something, that's clear. But script tags at the end of the page can do the same thing with half the work (mainly: remove inline scripts). In my opinion getScript is something of a last resort; you should not overuse it, because the potential to scare away not only other developers but also your customers is clearly there. I would only use it in the environment of a web application, because that is where the real benefits are.
UPDATE response to your comment
Using getScript on your page should look like this:
// since you need it, there will be some kind of wrapper around the dependent code
var initClosure = function () { /* ... */ };
// only fetch the script if its namespace has not been defined yet
if (typeof optionalNamespace === 'undefined') {
    $.getScript('/foo.js', initClosure);
} else {
    initClosure();
}
All dependent code goes into initClosure, and you check a namespace or a variable name (even something like window['blub'] or simply blub will work). You need this because the function that depends on getScript, which typically sets default values or appends something to the DOM, should only be called once.
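For completeness, a hypothetical /foo.js could set that namespace itself so the check above never triggers a second load (this part is my assumption, not from the snippet above):

// /foo.js (hypothetical): define the namespace the loader checks for
window.optionalNamespace = window.optionalNamespace || {};
optionalNamespace.loaded = true; // any marker value works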
Nevertheless, I don't really see the point of cyclic references, because this would mean:
load script 1 -> wait -> loaded -> load script 2 -> wait -> loaded -> [...] -> load script 1
This situation should be avoided for at least two reasons:
The browser cannot predict this. If there are several script tags, your browser will take care of downloading them in parallel, so the overall time (simplified and rough) is the time the biggest file needs to load. In my example it will instead be the sum of all the script loads.
Initialization of your scripts will be handled twice, so any state will get lost.
I have been struggling with choosing unobtrusive JavaScript over defining it within the HTML markup. I want to convince myself to go the unobtrusive route, but I am having trouble getting past the issues listed below. Can you please help convince me? :)
1) When you bind events unobtrusively, there is extra overhead on the client's machine to find that HTML element, whereas when you attach the handler inline, you don't have to traverse the DOM.
2) There is a lag between when the page loads and when events are bound using document.ready() (jQuery). This is more apparent on very large sites.
3) If you bind events (onclick etc.) unobtrusively, there is no way of looking at the HTML code and knowing that there is an event bound to a particular class or id. This can become problematic when updating the markup and not realizing that you may be affecting JavaScript code. Is there a naming convention for CSS classes that are used to bind JavaScript events (I have seen people use js_className)?
4) For a site, there are different pieces of JavaScript for different pages. For example, header.html contains a nav which triggers JavaScript events on all pages, whereas homepage.html and searchPage.html contain elements that trigger JavaScript only on their respective pages.
Pseudo-code example:
header.html
<script src="../myJS.js"></script>
<div>Header</div>
<ul>
<li>nav1</li><li>nav2</li>
</ul>
homepage.html
<#include header.html>
<div class="homepageDiv">some stuff</div>
searchpage.html
<#include header.html>
<div class="searchpageDiv">some other stuff</div>
myJS.js
$(document).ready(function(){
    $("ul.li").bind("click", doSomething);
    $(".homePageDiv").bind("click", doSomethingElse);
    $(".searchPageDiv").bind("click", doSomethingSearchy);
});
In this case, when you are on the search page it will still try to look for the "homepageDiv", which does not exist, and fail. This will not affect the functionality, but that's an additional unnecessary traversal. I could break this up into separate JavaScript files, but then the browser has to download multiple files, and I can't just serve one file and have it cached for all pages.
What is the best way to use unobtrusive JavaScript so that I can easily maintain a (pretty script-heavy) website, so that another developer is aware of the scripts bound to HTML elements when modifying my code, and so that the client's browser is not looking for elements which do not exist on a particular page (but may exist on others)?
Thanks!
You are confused. Unobtrusive JavaScript is not just about defining event handlers in a program. It's a set of rules for writing JavaScript such that one script doesn't affect the functionality of other JavaScript on the same page. JavaScript is a dynamic language: anyone can make changes to anything. Thus, if two separate scripts on the same page both define a global variable add as follows, the last one to define it will win and affect the functionality of the first script.
// script 1
var add = function (a, b) {
return a + b;
};
// script 2
add = 5;
//script 1 again
add(2, 3); // error - add is a number, not a function
Now, to answer your question directly:
The extra overhead to find an element in JavaScript and attach an event listener to it is not a lot. You can use the new DOM method document.querySelector to find an element quickly and attach an event listener to it (it takes less than 1 ms to find the element).
If you want to attach your event listeners quickly, don't do it when your document content loads. Attach your event listeners at the end of the body section or directly after the part of your HTML code to which you wish to attach the event listener.
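For instance, a script placed directly after the element it targets can bind immediately, without waiting for the ready event (the element id here is made up for illustration):

<button id="saveBtn">Save</button>
<script>
    // runs as soon as the parser reaches this point; #saveBtn already exists above
    document.querySelector('#saveBtn').addEventListener('click', function () {
        console.log('save clicked');
    });
</script>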
I don't see how altering the markup could affect the JavaScript in any serious manner. If you try to attach an event listener to an element that doesn't exist, it will silently fail or throw an exception; either way, it really won't affect the functionality of the rest of the page. In addition, an HTML designer really doesn't need to know about the events attached to an element. HTML is supposed to be used for semantic markup, CSS for styling, and JavaScript for behavior. Don't mix up the three.
God has given us free will. Use it. JavaScript supports conditional execution. There are if statements. See if homePageDiv exists and only then attach an event listener to it.
Try:
$(document).ready(function () {
    $("ul.li").bind("click", doSomething);
    if (document.querySelector(".homePageDiv")) {
        $(".homePageDiv").bind("click", doSomethingElse);
    } else {
        $(".searchPageDiv").bind("click", doSomethingSearchy);
    }
});
Your question had very little to do with unobtrusive JavaScript. It showed a lack of research and understanding. Thus, I'm down voting it. Sorry.
Just because jQuery's ready() executes does not mean that the page is visible to the end user. This behaviour is defined by the browsers, and these days there are really two events to take into consideration here, as MooTools puts it: DomReady vs Load. When jQuery fires the ready handlers it is talking about the DOM having loaded; this doesn't mean the page is ready to be viewed by the user, since external resources such as pictures and even style sheets may still be loading.
Any binding you do, even extremely inefficient ones, will complete a lot faster than all the external resources being loaded by the browser, so IMHO the user should experience no difference between the page being displayed and the functionality being made available.
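To make the DomReady vs Load distinction concrete, here is a plain-DOM sketch of the two events (assuming a browser with standard event support):

// DOMContentLoaded fires once the HTML has been parsed - roughly when jQuery's ready handlers run
document.addEventListener('DOMContentLoaded', function () {
    console.log('DOM ready - images and stylesheets may still be loading');
});
// load fires only after images, stylesheets and other resources have finished
window.addEventListener('load', function () {
    console.log('everything has loaded');
});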
As for finding bindings on elements in your DOM: you are really just fearing that things will get lost. This has not really been my experience; more often than not you can check in your JS what page you are on and only add JavaScript for that page (as Aadit mentioned above). After that, a quick find operation in your editor should help you locate anything if stuff does get lost.
Keep in mind that under true MVC the functionality has to be separate from the presentation layer. This is exactly what OO JavaScript, or unobtrusive JavaScript, is about. You should be able to change your DOM without breaking the functionality of the page. Yes, if you change the CSS class or element id on which you bind, your JS will break, but the user will have no idea of this and the page will at least appear to work. However, if this is a big concern you can use OO JavaScript and put divs or spans as placeholders in your DOM, using them as markers to insert functionality or to tell you that it exists; you can even use HTML comments. In my experience, though, you know the behavior of your site and hence will always know that there is some JS there.
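One hedged way to do that, combining the marker idea with the js_-style class prefix the question mentions (the class name here is made up):

// only wire up search behaviour if the page contains the marker element
$(document).ready(function () {
    if ($('.js_searchPage').length) {
        $('.js_searchPage').bind('click', doSomethingSearchy);
    }
});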
While I understand most of your concerns about useless traversals, I do think you are nickel-and-diming it at this point if you are worried about one additional traversal. Before IE8 it used to be the case that traversing by tag name and id was a lot faster than by class selector, but this is no longer true; in fact browsers have evolved to be much faster when using just the selectors:
$("a#myLink") - slowest.
$("a.myLink") - faster.
$("#Link") - fastest.
$(".myLink") - fastest.
In the link below you can see that as many as 34 thousand operations per second are being performed, so I doubt speed is an issue.
You can use Firebug to test the speed of each in the case of a very large DOM.
In summary:
a) Don't worry about losing JS code; there is always Ctrl+F.
b) There is no lag, because DOM ready does not mean the page is visible to start with.
Update
Fixed the order of speed of the operations based on the test results from here.
However, keep in mind that performance in IE < 8 is really bad if you don't specify the container (this used to be the rule; now it seems to be the exception to the rule).
I have a question about jQuery's $(document).ready.
Let's say we have an HTML page which includes two JavaScript files:
<script language="javascript" src="script1.js" ></script>
<script language="javascript" src="script2.js" ></script>
Now let's say in both of these script files we have a $(document).ready handler, as follows:
Inside script1.js:
$(document).ready(function(){
globalVar = 1;
})
Inside script2.js:
$(document).ready(function(){
globalVar = 2;
})
Now my Questions are:
Will both of these ready handlers get fired?
If yes, in what order will they get fired, since the document will be ready at the same time for both of them?
Is this approach recommended, or should we ideally have only one $(document).ready?
Is the order of execution the same across all browsers (IE, FF, etc.)?
Thank you.
Will both of these ready handlers get fired?
Yes, they will both get fired.
In what order will they get fired, since the document will be ready at the same time for both of them?
In the order they appear (top to bottom), because the ready event will be fired once, and all the registered listeners will be notified one after another.
Is this approach recommended, or should we ideally have only one $(document).ready?
It is OK to do it like that. If you can have them in the same block of code it is easier to manage, but that's all there is to it. Update: apparently I forgot to mention that doing this across multiple files will increase the size of your JavaScript code.
Is the order of execution the same across all browsers (IE, FF, etc.)?
Yes, because jQuery handles the cross-browser normalization for you.
See here: jQuery - is it bad to have multiple $(document).ready(function() {}); and here: Tutorials:Multiple $(document).ready()
Yes
Order of attach. jQuery internally maintains a readyList with Deferred objects (a rough sketch of the idea follows below).
It's partially a matter of taste. Having one ready handler will give you a nice overview of all that is happening, while multiple (i.e., one per included file) will make your code much more modular (i.e., you can include or remove a .js file and be sure that it provides and binds its own ready handler).
Yes - order of attach.
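A rough conceptual sketch of that queuing idea, not jQuery's actual source:

// conceptual only: callbacks queue on a Deferred and run in the order they were added
var readyList = $.Deferred();
readyList.done(function () { console.log('handler 1'); });
readyList.done(function () { console.log('handler 2'); });
document.addEventListener('DOMContentLoaded', function () {
    readyList.resolve(); // runs everything queued so far, in order; handlers added later fire immediately
});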
You can count on both handlers being executed in order of their script inclusion and globalVar being 2 after the second script reference, in any current browser.
If you want full control, I strongly recommend having only one $(document).ready().
If you load partial portions of HTML through AJAX, the AJAX response includes a $(document).ready() script, and you want to fire that code from script1.js, script2.js and so on in the AJAX callback, you will have to duplicate PLENTY of code...
Good Luck!
/ $(window).ready(); ;)
Both will get fired
The value of the variable will be 2 once all the dust has settled.
The main thing which isn't recommended is using two different JS files; as Google PageSpeed and Yahoo YSlow recommend, it's best to have all your JavaScript code in the same file.
As for having the same event handler registered multiple times: in all honesty, I see no reason to do that, and it will only make your code's readability worse.
I have no answer for that.
Is it OK to use the
$(document).ready(function ()
{
// some code
});
more than one time in the JavaScript code?
Yes, it is OK, jQuery will queue and merge them into a single handler called when the DOM is ready.
Sure, it is OK. Sometimes you have no other option, especially when you have some included JS files containing jQuery code as well as some jQuery code in the page itself.
I find that having everything in one huge $(document).ready leads to messy code that's hard to read.
I often prefer to split it up and place a separate $(document).ready() for every part of the system that needs stuff added to it. This is especially nice for larger, modular systems where you dynamically add blocks of HTML, events and so on.
For me it all boils down to what is most beneficial for you as a developer in a certain case.
Small system, where it's easy to know what's going on: one $(document).ready() in your script.
Large, modular, system: split it up as needed to have control of what's going on, and develop efficiently.
But as #Codesleuth commented: very often you don't need to put stuff inside $(document).ready(); you only need it when you have to be sure the DOM is in a consistent and known state for heavy manipulation etc.
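A small example of the distinction (the element, class and function names are made up): defining functions needs no ready wrapper, while touching elements that appear later in the document does.

// no ready wrapper needed: this only defines a function, it touches nothing yet
function highlightRow(row) {
    $(row).addClass('highlight');
}
// ready wrapper needed: #resultsTable may not exist when this script is parsed
$(document).ready(function () {
    $('#resultsTable tr').click(function () {
        highlightRow(this);
    });
});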
I have a javascript function that manipulates the DOM when it is called (adds CSS classes, etc). This is invoked when the user changes some values in a form. When the document is first loading, I want to invoke this function to prepare the initial state (which is simpler in this case than setting up the DOM from the server side to the correct initial state).
Is it better to use window.onload to do this functionality or have a script block after the DOM elements I need to modify? For either case, why is it better?
For example:
function updateDOM(id) {
// updates the id element based on form state
}
should I invoke it via:
window.onload = function() { updateDOM("myElement"); };
or:
<div id="myElement">...</div>
<script language="javascript">
updateDOM("myElement");
</script>
The former seems to be the standard way to do it, but the latter seems to be just as good, perhaps better since it will update the element as soon as the script is hit, and as long as it is placed after the element, I don't see a problem with it.
Any thoughts? Is one version really better than the other?
The onload event is considered the proper way to do it, but if you don't mind using a javascript library, jQuery's $(document).ready() is even better.
$(document).ready(function(){
// manipulate the DOM all you want here
});
The advantages are:
Call $(document).ready() as many times as you want to register additional code to run; you can only set window.onload once (see the sketch after this list).
$(document).ready() actions happen as soon as the DOM is complete - window.onload has to wait for images and such.
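To illustrate the first point:

// the second assignment silently replaces the first - only "B" ever runs
window.onload = function () { console.log('A'); };
window.onload = function () { console.log('B'); };
// both ready handlers are kept and run in order
$(document).ready(function () { console.log('first'); });
$(document).ready(function () { console.log('second'); });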
I hope I'm not becoming The Guy Who Suggests jQuery On Every JavaScript Question, but it really is great.
I've written lots of JavaScript, and window.onload is a terrible way to do it. It is brittle and waits until every asset of the page has loaded, so if one image takes forever or a resource doesn't time out for 30 seconds, your code will not run before the user can see/manipulate the page.
Also, if another piece of JavaScript decides to use window.onload = function() {}, your code will be blown away.
The proper way to run your code when the page is ready is to wait until the element you need to change is ready/available. Many JS libraries have this as built-in functionality.
Check out:
http://docs.jquery.com/Events/ready#fn
http://developer.yahoo.com/yui/event/#onavailable
Definitely use onload. Keep your scripts separate from your page, or you'll go mad trying to disentangle them later.
Some JavaScript frameworks, such as mootools, give you access to a special event named "domready":
Contains the window Event 'domready', which will execute when the DOM has loaded. To ensure that DOM elements exist when the code attempting to access them is executed, they should be placed within the 'domready' event.
window.addEvent('domready', function() {
alert("The DOM is ready.");
});
window.onload on IE also waits for the binary content (images and the like) to load; it isn't a strict "when the DOM is loaded" event. So there can be a significant lag between when the page is perceived to be loaded and when your script gets fired. Because of this I would recommend looking into one of the plentiful JS frameworks (Prototype/jQuery) to handle the heavy lifting for you.
#The Geek: "I'm pretty sure that window.onload will be called again when a user hits the back button in IE, but doesn't get called again in Firefox. (Unless they changed it recently.)"
In Firefox, onload is called when the DOM has finished loading, regardless of how you navigated to a page.
While I agree with the others about using window.onload if possible for clean code, I'm pretty sure that window.onload will be called again when a user hits the back button in IE, but doesn't get called again in Firefox. (Unless they changed it recently).
Edit: I could have that backwards.
In some cases, it's necessary to use inline script when you want your script to be evaluated when the user hits the back button from another page, back to your page.
Any corrections or additions to this answer are welcome... I'm not a javascript expert.
My take is the former, because you can only have one window.onload function, while with inline script blocks you can have any number.
onload, because it is far easier to tell what code runs when the page loads than having to read down through scads of HTML looking for script tags that might execute.