I've been having an issue getting .getScript to work in some odd cases.
For instance, this works to load the scripts only when needed.
function twitterSDK() {
    $jQ.getScript('http://platform.twitter.com/widgets.js');
}
function diggShare() {
    $jQ.getScript('http://widgets.digg.com/buttons.js');
}
function buzzShare() {
    $jQ.getScript('http://www.google.com/buzz/api/button.js');
}
However, it doesn't seem to work on a few scripts that I wrote. If I call .getScript to fetch this JS file I've uploaded to Pastebin (http://pastebin.com/GVFcMJ4P) and then call tweetStream(), nothing happens. However, if I do the following, it works:
var twitter = document.createElement('script');
twitter.type = 'text/javascript';
twitter.async = true;
twitter.src = '/path-to/tweet.js';
$jQ('.twitter-section').append(twitter);
tweetStream();
What am I doing wrong? Any help would be awesome, thanks!
P.S. Which method is faster or more efficient?
Note: My code isn't hosted on Pastebin; I just uploaded the contents of the .js file that is on my server to that site so it's easy to share. I am not leeching off Pastebin for hosting ;)
jQuery's $jQ.getScript() is an asynchronous call, so if you were calling tweetStream() immediately after the getScript(), it would run before the script arrived.
You can call tweetStream() from (or as) a callback instead.
$jQ.getScript('/path-to/tweet.js', function() {
    tweetStream();
});
Or, if you don't care about the value of this inside tweetStream(), pass it directly:
$jQ.getScript('/path-to/tweet.js', tweetStream);
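Since jQuery 1.5, getScript() also returns a jqXHR you can chain, which makes load failures visible instead of silent - a sketch using the $jQ alias from the question:
$jQ.getScript('/path-to/tweet.js')
    .done(function () {
        tweetStream();
    })
    .fail(function (jqxhr, settings, exception) {
        // a failed load (404, script error, etc.) lands here instead of vanishing
        console.log('tweet.js failed to load: ' + exception);
    });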
I'm trying to include a JS file in my website via Ajax, fetching it from one of my GitHub repositories through RawGit. For some reason it doesn't work when I use RawGit's development or production URLs, but when I use the raw version of my files directly from GitHub, the problem stops. I don't experience any problems when I use the development URL for a CSS file. Does anyone know why this happens?
Here's my code:
$.get("https://rawgit.com/Larpee/The-Khan-Quiz/master/game.js", function (game) {
var sketchProc = function (processingInstance) {
with (processingInstance) {
size(400, 400);
frameRate(30);
eval(game);
}
};
var canvas = document.getElementById("game");
var processingInstance = new Processing(canvas, sketchProc);
});
Update: I think the problem occurs because GitHub (not RawGit) serves the files as .txt, while RawGit serves them as .js.
I would still like an explanation of why getting my JavaScript files with a .js extension isn't working, though.
For RawGit, you get content-type: application/javascript;charset=utf-8 as expected, and GitHub gives Content-Type: text/plain; charset=utf-8. That seems to be the only difference.
I checked your example and found that when using RawGit (and getting a script response), the success callback wouldn't execute at all, but using GitHub it would (adding a console.log('foo'); statement, for example). I also found that eval() run on your script throws an exception:
Uncaught ReferenceError: createFont is not defined
I made a repo myself with a placeholder JS file that was syntactically correct, in which case the success callbacks for $.get() on both RawGit and GitHub did execute. I later added a reference to an undefined name, which caused the call to the RawGit URL to fail to execute its callback.
The lesson? When you get a script with $.get(), it is actually executed immediately, and if that eval() fails, everything ends there, silently. This is supported by jQuery getScript load vs execution, which also implies that this (in my opinion rather crazy) way of dealing with script data ended in version 2.1.0.
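If you do want the raw source without that implicit eval, you can force the data type yourself - a sketch, using the question's URL:
// forcing dataType "text" stops jQuery from eval'ing an
// application/javascript response before the callback runs
$.ajax({
    url: "https://rawgit.com/Larpee/The-Khan-Quiz/master/game.js",
    dataType: "text"
}).done(function (game) {
    // `game` is a plain string here; nothing has been executed yet
});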
Which leads me to suggest that, apart from fixing your script, you should either use $.getScript() (I'm not sure whether you have to ensure the response header is application/javascript), or explicitly insert a script element with JS and attach an onload callback:
(function(d, script) {
    script = d.createElement('script');
    script.type = 'text/javascript';
    script.async = true;
    script.onload = function(){
        // remote script has loaded
    };
    script.src = '[RawGit URL]';
    d.getElementsByTagName('head')[0].appendChild(script);
}(document));
Additionally, the with block you use should become:
var sketchProc = function (processing) {
    processing.size(400, 400);
    processing.frameRate(30);
};
var canvas = $('#game')[0]; // use jQuery since you have it, but Processing needs the DOM node
var processingInstance = new Processing(canvas, sketchProc);
// game code now manipulates `processingInstance`,
// either directly in scope (or, even better, as an object passed as a parameter)
processingInstance.draw = function () {
    var processing = this;
    // if your code originally used `processing` throughout,
    // do this so that it refers to the specific instance you've created
    // ...
};
processingInstance.loop();
// if the instance was initialized without a `draw()` method, as above,
// you need to start the loop manually once there is one
The Processing.js documentation has a lot of examples that are basically in Java and require small modifications to properly deal with JS scope. Sad!
A friend has asked me to capture a client-side rendered website built with React.js, preferably using PhantomJS. I'm using a simple rendering script as follows:
var system = require('system'),
    fs = require('fs'),
    page = new WebPage(),
    url = system.args[1],
    output = system.args[2],
    result;

page.open(url, function (status) {
    if (status !== 'success') {
        console.log('FAILED to load the url');
        phantom.exit();
    } else {
        result = page.evaluate(function () {
            var html = document.querySelector('html');
            return html.outerHTML;
        });
        if (output) {
            var rendered = fs.open(output, 'w');
            rendered.write(result);
            rendered.flush();
            rendered.close();
        } else {
            console.log(result);
        }
    }
    phantom.exit();
});
The url is http://azertyjobs.tk
I consistently get an error
ReferenceError: Can't find variable: Promise
http://azertyjobs.tk/build/bundle.js:34
http://azertyjobs.tk/build/bundle.js:1 in t
...
OK, so I figured out that ES6 Promises aren't natively supported by PhantomJS yet, so I tried various extra packages like https://www.npmjs.com/package/es6-promise and initialised the variable as such:
var Promise = require('es6-promise').Promise
However this still produces the same error, although Promise is now a function. The output of the webpage is also still as good as empty (obviously..)
Now I'm pretty oldschool, so this whole client-side rendering stuff is kind of beyond me (in every aspect), but maybe someone has a solution. I've tried using a waiting script too, but that brought absolutely nothing. Am I going about this completely wrong? Is this even possible to do?
Much appreciated!
Ludwig
I tried the polyfill you linked and it didn't work, so I switched to core.js and was able to take a screenshot. You need to inject the polyfill before the page is opened:
page.onInitialized = function() {
    if (page.injectJs('core.js')) {
        console.log("Polyfill loaded");
    }
};

page.open(url, function (status) {
    setTimeout(function(){
        page.render('output.jpg');
        phantom.exit();
    }, 3000);
});
What you need to understand is that there are several stages to a page load. First there is the HTML - the same thing you see when you "view source" on a web page. Next, images, scripts and other resources are loaded. Then the scripts are executed, which may or may not result in more content being loaded and further modifications to the HTML.
What you must do then is figure out a way to determine when the page is actually "loaded" as the user sees it. PhantomJS provides a waitFor pattern for exactly this. Read through their example and see if you can figure out a method that works for you - see the sketch below. Take special note of where they put phantom.exit(), as you want to make sure that happens at the very end. Good luck.
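A sketch in the spirit of the PhantomJS waitFor example, trimmed to the console.log path of your script - the '#content' selector is a placeholder for whatever element your React app actually renders into:
// poll the page until a test function passes (or a timeout expires)
function waitFor(testFx, onReady, timeOutMillis) {
    var maxWait = timeOutMillis || 5000,
        start = new Date().getTime(),
        interval = setInterval(function () {
            if (new Date().getTime() - start > maxWait) {
                console.log('FAILED to render in time');
                phantom.exit(1);
            } else if (testFx()) {
                clearInterval(interval);
                onReady();
            }
        }, 250);
}

page.open(url, function (status) {
    waitFor(function () {
        // ask the page whether React has populated the DOM yet
        return page.evaluate(function () {
            return document.querySelectorAll('#content *').length > 0;
        });
    }, function () {
        console.log(page.evaluate(function () {
            return document.querySelector('html').outerHTML;
        }));
        phantom.exit();
    });
});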
Where (and how) are you trying to initialise Promise? You'll need to create it as a property of window, or use es6-promise as a global polyfill, like this: require('es6-promise').polyfill(); or this: require('es6-promise/auto'); (from the readme).
Also, what do you mean by "capture"? If you're trying to scrape data, you may have better luck using X-ray. It supports Phantom, Nightmare and other drivers.
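For reference, a minimal X-ray sketch with the Phantom driver might look like this (assuming the x-ray and x-ray-phantom packages are installed; the 'title' selector is just a placeholder):
var Xray = require('x-ray');
var phantomDriver = require('x-ray-phantom');
var x = Xray().driver(phantomDriver()); // render each page in PhantomJS first

// scrape the <title> of the client-rendered page
x('http://azertyjobs.tk', 'title')(function (err, title) {
    console.log(title);
});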
Keep in mind, too, that React can be server-rendered. React is like templating, but with live data bindings. It's not as complicated as you're making it out to be.
I know that I can run an external Javascript file from within HTML with the following syntax:
<script type="text/javascript"
src="http://somesite.com/location/of/javascript.js">
</script>
This will result in http://somesite.com/location/of/javascript.js being run the moment the browser reads that line of the HTML.
But is there a way I can run an external Javascript file from within Javascript? Something like:
if (x == 1)
{
    run this! -> http://somesite.com/location/of/javascript.js;
}
Obviously that's not valid code. But I can't find any example of what might be the right way to do this (if it exists), because all the help text I find with Google searches tells me how to run JavaScript from within HTML.
I know that I can include a Javascript file and then call functions within it. However, in this situation, I do not have any control over http://somesite.com/location/of/javascript.js, and it is designed to execute the moment it is called. I can't change how it works, so I need to figure out how to call it at the right time in the right way.
Is there a way I can get it to be called and executed immediately depending on a conditional statement?
Yes - in pure JavaScript you can load a script dynamically:
var s = document.createElement("script");
s.src = "test.js";
document.body.appendChild(s);
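If something needs to run only after that script has loaded, attach an onload handler before appending - a minimal sketch:
var s = document.createElement("script");
s.src = "test.js";
s.onload = function () {
    // code that depends on test.js goes here
};
document.body.appendChild(s);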
There is a way...
var extfile = document.createElement('script');
extfile.setAttribute("type", "text/javascript");
extfile.setAttribute("src", external_jsfilename);
document.getElementsByTagName("head")[0].appendChild(extfile);
Simple as that ....
Use jQuery's .getScript(); the file will be loaded and then executed:
if (x == 1)
{
    $.getScript("http://somesite.com/location/of/javascript.js");
}
On my web site, I'm trying to achieve the fastest page load possible.
I've noticed that my JavaScript files do not appear to be loading asynchronously. Picture linked below.
(Screenshot: http://img249.imageshack.us/img249/2452/jsasynch2.png)
My web site needs to load two external JavaScript files:
the Google Maps v3 JavaScript, and
jQuery
Then, I have inline JavaScript within the HTML that cannot be executed until those two files above are downloaded.
Once it loads these external JavaScript files - and only then - it can dynamically render the page. The reason my page can't load until both Google Maps and jQuery are ready is that the page uses the user's geolocation (via Google Maps) to decide what to display (e.g. New York, San Francisco, etc.). Two people in different cities viewing my site will see different front pages.
Question: How can I get my JavaScript files to download asynchronously so that my overall page load time is quickest?
UPDATE:
If I were to download Google Maps and jQuery asynchronously somehow, how would I create an event that fires once both have downloaded, given that my page has a hard dependency on those files to execute?
UPDATE 2
Even though there are 3 answers below, none actually answers the problem I have. Any help would be greatly appreciated.
HTTP downloads are generally limited by browsers to two simultaneous downloads per domain. This is why some sites have the dynamic content on www.domain.tla and the images and javascript on static.domain.tla.
Browsers act slightly differently with scripts, though: while a script is downloading, the browser won't start any other downloads, even on different hostnames.
The standard solution is to move scripts to the bottom of the page, but there is a workaround that might or might not work for you: insert the script element into the DOM using JavaScript.
You could use something like this, which works pretty well in most browsers. It has some issues in IE6 at least, but I don't really have the time to investigate them.
var require = function (scripts, loadCallback) {
    var length = scripts.length;
    var first = document.getElementsByTagName("script")[0];
    var parentNode = first.parentNode;
    var loadedScripts = 0;
    var script;

    for (var i = 0; i < length; i++) {
        script = document.createElement("script");
        script.async = true;
        script.type = "text/javascript";
        script.src = scripts[i];
        script.onload = function () {
            loadedScripts++;
            if (loadedScripts === length) {
                loadCallback();
            }
        };
        script.onreadystatechange = function () {
            if (script.readyState === "complete") {
                loadedScripts++;
                if (loadedScripts === length) {
                    loadCallback();
                }
            }
        };
        parentNode.insertBefore(script, first);
    }
};

require([
    "http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js",
    "http://ajax.googleapis.com/ajax/libs/prototype/1.6.1.0/prototype.js",
    "http://ajax.googleapis.com/ajax/libs/yui/2.7.0/build/yuiloader/yuiloader-min.js"
], function () {
    console.log(jQuery);
    console.log($);
    console.log(YAHOO);
});
Someone asked me to comment on this thread, but that was before @Ionuț posted a response. @Ionuț's code is a very good solution, but I have some comments (critical and not so critical):
First, @Ionuț's code assumes that the scripts do NOT have load dependencies on each other. This is a little hard to explain, so let's work with the simple example of jquery.min.js and prototype.js. Suppose we have a simple page that just loads these two scripts like this:
<script src="jquery.min.js"></script>
<script src="prototype.js"></script>
Remember - there's nothing else in the page - no other JavaScript code. If you load that page, the two scripts get downloaded and everything's fine. Now, what happens if you remove the jquery.min.js script? If you get errors from prototype.js because it's trying to reference symbols defined in jquery.min.js, then prototype.js has a load dependency on jquery.min.js - you cannot load prototype.js unless jquery.min.js has already been loaded. If, however, you don't get any errors, then the two scripts can be loaded in any order you wish. Assuming you have no load dependencies between your external scripts, @Ionuț's code is great. If you do have load dependencies, it gets very hard, and you should read Chapter 4 in Even Faster Web Sites.
Second, one problem with @Ionuț's code is that some versions of Opera will call loadCallback twice (once from the onload handler and a second time from the onreadystatechange handler). Just add a flag to make sure loadCallback is only called once.
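One way to apply that fix inside the require() above - a sketch that also sidesteps the closure over the loop's script variable by using this:
// a single handler services both events; the flag stops double counting
script.onload = script.onreadystatechange = function () {
    if (this.loadFired) { return; }
    if (!this.readyState || /loaded|complete/.test(this.readyState)) {
        this.loadFired = true;
        loadedScripts++;
        if (loadedScripts === length) {
            loadCallback();
        }
    }
};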
Third, most browsers today open more than 2 connections per hostname. See Roundup on Parallel Connections.
The LABjs dynamic script loader is designed specifically for this type of case. For instance, you might do:
$LAB
    .script("googlemaps.js")
    .script("jquery.js")
    .wait(function(){
        // yay, both googlemaps and jquery have been loaded, so do something!
    });
If the situation were a little more complex, and some of your scripts had dependencies on each other, as Steve Souders has mentioned, then you might do:
$LAB
    .script("jquery.js")
    .wait() // make sure jquery is executed first
    .script("plugin.jquery.js")
    .script("googlemaps.js")
    .wait(function(){
        // all scripts are ready to go!
    });
In either case, LABjs will download all of the scripts ("jquery.js", "googlemaps.js", and "plugin.jquery.js") in parallel, at least up to the point the browser will allow. But by judicious use of .wait() in the chain, LABjs will make sure they execute in the proper order. That is, if there's no .wait() between two scripts in the chain, they will each execute ASAP (meaning in indeterminate order between them). If there is a .wait() between two scripts in the chain, then the first script will execute before the second, even though they loaded in parallel.
Here is how I've managed to load Google Maps asynchronously on a jQuery Mobile page:
First, you can load jQuery (e.g. with the require function posted above by Ionuț G. Stan).
Then you can make use of the callback param in gmaps to do the following:
<!DOCTYPE html>
<html>
<body>
<script type="text/javascript">
var require = function (scripts, loadCallback) {
var length = scripts.length;
var first = document.getElementsByTagName("script")[0];
var parentNode = first.parentNode;
var loadedScripts = 0;
var script;
for (var i=0; i<length; i++) {
script = document.createElement("script");
script.async = true;
script.type = "text/javascript";
script.src = scripts[i];
script.onload = function () {
loadedScripts++;
if (loadedScripts === length) {
loadCallback();
}
};
script.onreadystatechange = function () {
if (script.readyState === "complete") {
loadedScripts++;
if (loadedScripts === length) {
loadCallback();
}
}
};
parentNode.insertBefore(script, first);
}
};
require([
"http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js",], function () {
$.ajax({
type: "GET",
url: 'http://maps.googleapis.com/maps/api/js?v=3&sensor=false&callback=setMyMap',
dataType: "script"
});
});
function setMyMap() {
console.log('your actions here');
var coords = new google.maps.LatLng(40.5439532,-3.6441775);
var mOptions = {
zoom: 8,
center: coords,
mapTypeId: google.maps.MapTypeId.ROADMAP
}
var map = new google.maps.Map(document.getElementById("gmap"), mOptions);
}
</script>
<div id="gmap" style="width:299px; height:299px"></div>
</body>
The point is to load jQuery asynchronously (whatever method you choose) and, in that callback, make a new async request for Google Maps with your start-up method named in the callback param of the Maps script URL.
Hope it helps
Regardless of what order they download in, the scripts should be parsed/executed in the order in which they occur on the page (unless you use DEFER).
So you can put Google Maps first in the HEAD, THEN jQuery. Then, in the body of your page somewhere:
<script language="Javascript">
function InitPage() {
// Do stuff that relies on JQuery and Google, since this script should
// not execute until both have already loaded.
}
$(InitPage); // this won't execute until JQuery is ready
</script>
But this does have the disadvantage of blocking your other connections while loading the beginning of the page, which isn't so awesome for page performance.
Instead, you can keep jQuery in the HEAD, but load the Google scripts from the InitPage() function, using jQuery's script-loading functionality rather than the Google JSAPI. Then start your rendering when that callback function executes. Same as the above, but with this InitPage() function instead:
function InitPage() {
    $.getScript('Google Maps Javascript URL', function() {
        // Safe to start rendering now
    });
}
Move your JavaScript includes (<script src="...) from the HEAD element to the end of your BODY element. Generally, whatever is placed in the HEAD is loaded synchronously, and whatever is placed in the BODY is loaded asynchronously. This is more or less true for script includes; however, most browsers these days block everything below the script until it is loaded - hence why including scripts at the bottom of the BODY is best practice.
Here is the YUI guideline for this, which explains it in further detail:
http://developer.yahoo.net/blog/archives/2007/07/high_performanc_5.html
This is also the reason why stylesheets should be in the HEAD and JavaScript in the BODY: we don't want to watch our page turn from spaghetti to niceness while the styles load, and we don't want to wait on our JavaScript while the page loads.
The objective you have in mind would be well served by RequireJS, which downloads JS resources asynchronously. It's a very simple and useful library to work with. Read more here: http://requirejs.org/
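A minimal RequireJS sketch (the CDN path is illustrative; swap in your own module names and URLs):
require.config({
    paths: {
        // map a module name to a URL (note: no .js extension)
        jquery: "https://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min"
    }
});

// dependencies download asynchronously; the callback runs once they're ready
require(["jquery"], function ($) {
    $(function () {
        // page rendering that depends on the loaded scripts goes here
    });
});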
I'm writing a JavaScript function that is used to include an external JS file, but only once. The reason I need such a function is that it is called when some content is loaded via AJAX, and I need to run page-specific code against that content (no, just using .live won't cover it).
Here's my attempt, shortened for brevity:
$.include_once = function(filename) {
    if ($("script[src='" + filename + "']").length === 0) {
        var $node = $("<script></script>")
            .attr({
                src : filename,
                type : "text/javascript"
            });
        $(document.body).append($node);
    }
};
This works fine: the function is called, it loads the external file, and that file is being run when loaded. Perfect.
The problem is that it will always re-load that external file: the query I'm using to check for the presence of the script always finds nothing!
When debugging this, I added some lines:
alert($("script").length); // alerts: 4
$(document.body).append($node);
alert($("script").length); // alerts: 4
Looking in the dynamic source (the HTML tab of Firebug), I can't find the script tag at all.
I know that I could maintain an array of files that I've previously included, but I was hoping to go with a method such as this, which (if it worked), seems a bit more robust, since not all the JS files are being included in this way.
Can anyone explain the behaviour seen in this second snippet?
jQuery is a bit of a dumb-dumb in this case; it doesn't do at all what you'd expect. When you append($node) jQuery does this:
jQuery.ajax({
    url: $node.src,
    async: false,
    dataType: "script"
});
Whoops! For local files (e.g. on the same domain) jQuery performs a standard XMLHttpRequest for the .js file body, and proceeds to "eval" it through a whole convoluted process of creating a <script> tag (again!) and setting its contents to your .js file body. This is to simulate eval, but in the global context.
For cross-domain files, since it cannot perform the standard XMLHttpRequest due to the same-origin policy, jQuery once again creates a <script> element and inserts it into <head>.
In both the local and cross-domain cases above jQuery finally gets around to doing this:
head.removeChild(script);
And booms your .length check! Bummer.
So on to your problem, don't bother jQuery with this. Just do
document.getElementsByTagName('head')[0]
.appendChild(
document.createElement('script')
)
.src = filename;
Which will do what you'd expect, particularly wrt querying for it later.
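Putting that together, the helper from the question could look like this - a sketch:
$.include_once = function (filename) {
    // the selector check now works, because the <script> element stays in the DOM
    if ($("script[src='" + filename + "']").length === 0) {
        var script = document.createElement('script');
        script.type = 'text/javascript';
        script.src = filename;
        document.getElementsByTagName('head')[0].appendChild(script);
    }
};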
You're trying to solve a problem that has already been solved several times over. Try LazyLoad for example. There are also similar plugins for jQuery.
Instead of setting the source attribute of the script tag, set its "text" attribute. This works in all modern browsers (the application where I use this in practice does not support IE6, so I can't speak for that creep...).
In practice it would look like this (you HAVE TO guard against double inclusion yourself - e.g. with a simple array of all already-loaded scripts - though that's very application-specific; and why would you load code twice anyway? Try to avoid double-loading code!):
var script_source_code_string = <some dynamically loaded script source code>;
var $n = $("<script></script>");
$n.get(0).text = script_source_code_string;
$(document.body).append($n);
Or even simpler (without jQuery - my code at this stage doesn't know jQuery; it may itself be loaded dynamically):
var script_source_code_string = <some dynamically loaded script source code>;
var s = document.createElement('script');
document.getElementsByTagName('head')[0].appendChild(s);
s.text = script_source_code_string;