Is it possible to make sIFR re-run or update again without having to refresh the page?
I'd imagine there is a function like sIFR() or something..?
Which version of sIFR? At least sIFR 3 has a method for this:
sIFR.redraw();
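In the asker's theme-switching scenario, a minimal sketch might look like this (the element id and applyTheme() are illustrative only, not part of sIFR):
document.getElementById('themeSwitcher').onchange = function () {
    applyTheme(this.value);   // hypothetical function that swaps the theme
    sIFR.redraw();            // ask sIFR 3 to re-render its replacements
};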
You are spot on: there is a function called sIFR().
The Javascript replacement statements can be added in the sifr.js file, or at the end of your (X)HTML file.
When you put the replacement code in the sIFR JavaScript file, it'll execute onload (provided sIFR.bAutoInit is set to true, which is the default) or when you call sIFR(). If you put the replace statements in the body, they'll be executed immediately.
Effectively, this means you could put the replacement statements in the JavaScript file and call sIFR() in the body; it won't make any difference to the result. To save bandwidth, put the replace statements in the JavaScript file and then call sIFR() in the body. The exact code to use in the body in this case is:
<script type="text/javascript">
  if (typeof sIFR == "function") {
    sIFR();
  }
</script>
Obviously you could call this function again later, e.g. when something has changed asynchronously.
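For example, a rough sketch of re-running the replacements after an asynchronous change (switchTheme() is just a hypothetical stand-in for whatever updates the page):
switchTheme();                    // hypothetical: swap the stylesheet/theme
if (typeof sIFR == "function") {
    sIFR();                       // run the replacement statements again
}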
Sidenote: I wasn't aware of the redraw() function suggested by Simon. Obviously I would give that a go, if I were you.
Thanks guys. Unfortunately for me it's version 2.0.7. I'll have to let the front-end designers know to update sIFR for future markups. I haven't got time to play with this any more as there are more important things on the project.
I tried the whole sIFR() thing, but it didn't work, so it looks like I'll have to refresh the page every time the user switches themes. Or if I get time towards the end I'll update it to v3 and try the redraw().
I've written a small HTML5 page that I need to be able to support multiple languages. I've implemented the language control by making the page load a JSON file into memory (in the HEAD) and then running a jQuery command to change the text of any element as required.
Everything works fine except that, as the change is being called post-render (in the document ready function), there is a slight flash as the language gets changed.
Is there an event that is called before the page is rendered but after the DOM is available? If not, are there any suggestions to change implementation.
Cheers..
UPDATE
I've found a few answers to this on other sites. The general consensus appears to be that this isn't possible, as most browsers render as they parse. The suggested workaround is to hide the body (display:'none') in script and then show it (display:'') after the updates in the document ready function. It sort of works for me, although it isn't 100% perfect.
Sounds like you are having an issue with FOUC (Flash Of Unstyled Content)
There are a few ways to get around it. You could add this to your body:
<body class="fouc">
And then have this CSS:
.fouc{display:none;}
And finally this script:
$(function(){
    $('.fouc').show();
});
This works by initially hiding the page, and then once you are ready, turning it on with javascript. You may need to ensure your manipulation occurs ahead of the $('.fouc').show(); call.
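As a sketch only (the JSON path, currentLang variable, element ids and keys are all made up here), the idea is to do the text swap first and reveal the page afterwards:
$(function () {
    $.getJSON('lang/' + currentLang + '.json', function (strings) {
        $('#title').text(strings.title);   // hypothetical ids and keys
        $('#intro').text(strings.intro);
        $('.fouc').show();                 // reveal only after the text is replaced
    });
});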
One effective solution, though not the one you are probably looking for, is to use OUTPUT BUFFERING ... What is output buffering?
(I am still very new to Javascript and JQuery) I am trying to keep my code clean by creating one js file per html file. I also have a couple of common js file containing code used by page-specific js files.
I was searching for solutions to import/include other js files in a given js file and I came across this one. It refers to $.getScript().
Instead of using multiple <script src="xxx.js"></script> as imports/includes in my html pages, could I move them to my js files and use $.getScript(...) instead? I mean, is it safe? Is it a good practice?
EDIT
Is it safe regarding cyclic references?
EDIT II
To make it more clear, let's imagine I have a1.js, a2.js and a3.js.
For a1.js:
$.getScript("a2.js");
$.getScript("a3.js");
...
For a2.js:
$.getScript("a3.js");
...
For a3.js:
$.getScript("a2.js");
...
There is a potential infinite loop between a2.js and a3.js. Should I handle it with something like this in each js file:
var vari;
if (typeof vari === 'undefined') {
    vari = 1;
}
Regarding good practice, the answer is, as always... it depends.
Your Page Style
Contra:
If you, for example, use a script to substitute some page fonts, the font change will be even more visible to the user.
If your script changes the height of some elements, this will be very noticeable.
Pro:
If you load a script to handle a specific form that the user first has to edit, there is no problem at all.
If you load a script that starts an animation, it can be substituted with a single loading animation.
If your page is an application with many views and models, you can use getScript like getJSON; in this case your page speed will greatly improve.
Your Coding Style
Not every page and script is structured to be used this way. jQuery's $(document).ready() fires every registered handler once, even if the event has already occurred. This does not necessarily hold for every handler mechanism, certainly not for DOM2 events.
If you have inline scripts anywhere, they will no longer work.
You can no longer guarantee the order in which your initialization code runs, so you can run into problems and have to add more checks, e.g. that a container you expect to be visible is still visible.
What's the reward?
On high-performance pages you gain something, that's clear. But script tags at the end of the page can do the same thing with half the work (mainly: remove inline scripts). In my opinion getScript is something of a last resort; you should not overuse it, because the potential to scare away not only other developers but also your customers is clearly there. I would only use it in the environment of a web application, because that is where the real benefits are.
UPDATE response to your comment
Using getScript on your page should look like this:
// since you need it, there will be some kind of wrapper
var initClosure = function() { ... };
if (typeof optionalNamespace == 'undefined') {
    $.getScript('/foo.js', initClosure);
} else {
    initClosure();
}
All depending code is in initClosure, and you check a namespace or variable name (even something like window['blub'] or simply blub will work). You need this because the function that depends on getScript, which typically sets default values or appends something to the DOM, should only be called once.
Nevertheless I don't really see the point in cyclic references, because this would mean:
load script 1 -> wait -> loaded -> load script 2 -> wait -> loaded -> [...] -> load script 1
This situation should be avoided for at least two reasons:
The browser cannot predict this. If there are several script tags, your browser will take care of parallel downloads, so the overall load time (simplified and rough) is the time the biggest file needs. In my example it will instead take the sum of all the script loads.
Initialization of your scripts will be handled twice, so any state will get lost.
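If you really do end up with such a cycle, a rough sketch of a guard would be to set a flag before requesting the other file, so each script is only requested once (window.a3Loaded is an illustrative flag name, not a jQuery feature):
// in a2.js
if (!window.a3Loaded) {
    window.a3Loaded = true;   // mark it before the request to break the cycle
    $.getScript("a3.js");
}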
I'm looking for a way to debug a dynamically loaded jQuery document.ready function.
Obviously I can't just bring up the script panel and add a breakpoint with the mouse since the function does not exist there.
I've also tried adding "debugger;" to the function (without the quotes), but that did not do anything. I have ensured that the function is actually executed while I tried this.
Thanks for your help,
Adrian
Edit: I just noticed that Firebug actually breaks on debug. However, when it does so on a dynamically loaded script, it does not bring up the source code of that script as usual. Plus, the call stack ends right below my own code. I can bring up the implementation for document.ready via the call stack, but that does not really help. Is this a Firebug bug or have I missed something?
I just worked on this similar question. The solution involves adding the word debugger twice; once at the top of the external file and one more time at the top of the function that needs to be debugged.
I noticed that if the debugger word was used only once, it did not work. Example:
//myExternal.js
debugger;
function myExternalFunction(){
    debugger;
    /* do something here */
}
You might try placing a breakpoint where the event is called, and then instead of clicking "Play", choose "Step Into" (F11). I don't have a test case in front of me, but I think this may work.
I don't know if you ever got this figured out, but in case someone else needs it...
I got around this by moving the code I wanted to debug to an external file that was linked from the main page.
In my case, I had default.aspx loading services.aspx into a content div using jQuery AJAX. Services.aspx, in turn, was loading jQuery UI tab elements using AJAX from a web service that was providing it with data. The web service code was in a file called data.js, which was linked from default.aspx. I needed to debug the code in the header of services.aspx (that loaded the tabs with data), but could never see it in any of the available inspectors. I just moved the code I needed into a new function in data.js and called it from the header of services.aspx.
I hope that makes sense to someone who needs it!
Just encountered the same behavior (Firebug ignoring the debugger; statement in dynamically loaded code) in Firefox 5.0/Firebug 1.7.3.
Worked around by detaching Firebug window ("Open Firebug in New Window").
There's also a 'debugger' keyword that's supported by the IE JScript debugger and Safari's Web Inspector, so I would be surprised if it wasn't supported in Firebug.
Basically:
// mydynamicallyloadedfile.js
... // do stuff
debugger; // triggers debugger
... // more stuff
And I would expect Firebug to break at the debugger keyword.
I have a multi-frame layout. One of the frames contains a form, which I am submitting through XMLHttpRequest. Now when I use document.write() to rewrite the frame containing the form, and the new page I am adding contains any JavaScript, the JavaScript is not executed in IE6.
For example:
document.write("<html><head><script>alert(1);</script></head><body>test</body></html>");
In the above case the page content is replaced with test but the alert() isn't executed. This works fine in Firefox.
What is a workaround to the above problem?
Instead of having the JS code out in the open, enclose it in a function (let's call it "doIt"). Your frame window (let's say its name is "formFrame") has a parent window (even if it's not visible) in which you can execute JS code. Do the actual frame rewrite operation in that scope:
window.parent.rewriteFormFrame(theHtml);
Where rewriteFormFrame function in the parent window looks something like this:
function rewriteFormFrame(html) {
    formFrame.document.body.innerHTML = html;
    formFrame.doIt();
}
A workaround is to programmatically add <script> blocks to the head DOM element from JavaScript in your callback function, or to call the eval() method. That's the only way you can make this work in IE 6.
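As a rough sketch of that workaround (jsSource is a hypothetical variable holding the script text pulled out of the AJAX response):
var script = document.createElement('script');
script.type = 'text/javascript';
script.text = jsSource;   // or: eval(jsSource);
document.getElementsByTagName('head')[0].appendChild(script);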
Another possible alternative is to use JSON, dynamically adding script references which will be automatically processed by the browser.
Cheers.
In short: You can't really do that. However JavaScript libraries such as jQuery provide functionality to do exactly that. If you depend on that, give jQuery a try.
Eval and/or executing scripts dynamically is bad practice. Very bad practice. Very, very, very bad practice. I can't stress enough how bad a practice it is.
AKA: sounds like bad design. What problem are you trying to solve again?
You could use an onload attribute in the body tag (<body onload="jsWrittenLoaded()">).
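A minimal sketch of that suggestion, building on the example from the question (untested; jsWrittenLoaded is just the name used here, and note the escaped closing script tag inside the string):
document.write('<html><head><script>function jsWrittenLoaded(){alert(1);}<\/script></head>' +
    '<body onload="jsWrittenLoaded()">test</body></html>');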
I have a javascript function that manipulates the DOM when it is called (adds CSS classes, etc). This is invoked when the user changes some values in a form. When the document is first loading, I want to invoke this function to prepare the initial state (which is simpler in this case than setting up the DOM from the server side to the correct initial state).
Is it better to use window.onload to do this functionality or have a script block after the DOM elements I need to modify? For either case, why is it better?
For example:
function updateDOM(id) {
    // updates the id element based on form state
}
should I invoke it via:
window.onload = function() { updateDOM("myElement"); };
or:
<div id="myElement">...</div>
<script language="javascript">
    updateDOM("myElement");
</script>
The former seems to be the standard way to do it, but the latter seems to be just as good, perhaps better since it will update the element as soon as the script is hit, and as long as it is placed after the element, I don't see a problem with it.
Any thoughts? Is one version really better than the other?
The onload event is considered the proper way to do it, but if you don't mind using a javascript library, jQuery's $(document).ready() is even better.
$(document).ready(function(){
    // manipulate the DOM all you want here
});
The advantages are:
Call $(document).ready() as many times as you want to register additional code to run; you can only set window.onload once (see the sketch below).
$(document).ready() actions happen as soon as the DOM is complete - window.onload has to wait for images and such.
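A small sketch of that first point (the console.log calls are only there to show which handlers actually run):
// both of these run when the DOM is ready
$(document).ready(function () { console.log('first handler'); });
$(document).ready(function () { console.log('second handler'); });
// but a plain assignment silently replaces any earlier one
window.onload = function () { console.log('never runs'); };
window.onload = function () { console.log('only this one runs'); };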
I hope I'm not becoming The Guy Who Suggests jQuery On Every JavaScript Question, but it really is great.
I've written lots of JavaScript and window.onload is a terrible way to do it. It is brittle and waits until every asset of the page has loaded. So if one image takes forever or a resource doesn't time out for 30 seconds, your code will not run before the user can see/manipulate the page.
Also, if another piece of Javascript decides to use window.onload = function() {}, your code will be blown away.
The proper way to run your code when the page is ready is to wait until the element you need to change is ready/available. Many JS libraries have this as built-in functionality (see the sketch after the links below).
Check out:
http://docs.jquery.com/Events/ready#fn
http://developer.yahoo.com/yui/event/#onavailable
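For the asker's updateDOM example, a sketch using YUI 2's onAvailable (from the second link; this assumes the YUI Event utility is loaded on the page) might look like:
YAHOO.util.Event.onAvailable("myElement", function () {
    updateDOM("myElement");   // runs as soon as #myElement exists in the DOM
});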
Definitely use onload. Keep your scripts separate from your page, or you'll go mad trying to disentangle them later.
Some JavaScript frameworks, such as mootools, give you access to a special event named "domready":
Contains the window Event 'domready', which will execute when the DOM has loaded. To ensure that DOM elements exist when the code attempting to access them is executed, they should be placed within the 'domready' event.
window.addEvent('domready', function() {
    alert("The DOM is ready.");
});
window.onload on IE waits for the binary information to load as well. It isn't a strict "when the DOM is loaded" event, so there can be a significant lag between when the page is perceived to be loaded and when your script gets fired. Because of this I would recommend looking into one of the plentiful JS frameworks (Prototype/jQuery) to handle the heavy lifting for you.
#The Geek wrote: "I'm pretty sure that window.onload will be called again when a user hits the back button in IE, but doesn't get called again in Firefox. (Unless they changed it recently)."
In Firefox, onload is called when the DOM has finished loading regardless of how you navigated to a page.
While I agree with the others about using window.onload if possible for clean code, I'm pretty sure that window.onload will be called again when a user hits the back button in IE, but doesn't get called again in Firefox. (Unless they changed it recently).
Edit: I could have that backwards.
In some cases, it's necessary to use inline script when you want your script to be evaluated when the user hits the back button from another page, back to your page.
Any corrections or additions to this answer are welcome... I'm not a javascript expert.
My take is the former, because you can only have one window.onload function, while you can have any number of inline script blocks.
onload, because it is far easier to tell what code runs when the page loads up than to read down through scads of HTML looking for script tags that might execute.