Multiple synchronous XmlHttpRequests getting rejected/aborted - javascript

For a bit of fun and to learn more about JS and the new HTML5 specs, I decided to build a file uploader plugin for my own personal use.
After I select my files, I send them to my Web Worker to be sliced into 1MB chunks and uploaded to my server individually. I opted to send them individually to benefit from individual progress callbacks and pause/resume functionality (later).
The problem appears when I select a lot of files to upload. If I only select 8, there are no problems. If I select 99, the server rejects/aborts after about the 20th file - although sometimes it stops after 22, 31, or 18; it's totally random.
Firefox can get away with more than Chrome before aborting. Chrome calls the requests 'failed' and Firefox calls them 'aborted'. Firefox usually aborts after about file 40. Not only this, but my test server becomes unresponsive and throws a 'the connection was reset' error - becoming responsive again less than 20 seconds later.
Because I'm using a Web Worker, I make my XMLHttpRequests synchronous so that each request completes before the next one starts, and the PHP script is on the same domain, so I'm baffled to see the requests rejected and would love to hear what is wrong with my code that's causing this to happen.
This is the plugin part that sends to the Worker. Pretty irrelevant but who knows:
var worker = new Worker('assets/js/uplift/workers/uplift-worker.js');

worker.onmessage = function(e) {
    console.log(e.data);
};

worker.postMessage({ 'files': filesArr });
And this is uplift-worker.js:
var files = [], p = true;

function upload(chunk) {
    var upliftRequest = new XMLHttpRequest();
    upliftRequest.onreadystatechange = function() {
        if (upliftRequest.readyState == 4 && upliftRequest.status == 200) {
            // do something
        }
    };
    // open() has to be called before setting request headers
    upliftRequest.open('POST', '../php/uplift.php', false);
    upliftRequest.setRequestHeader("Cache-Control", "no-cache");
    upliftRequest.setRequestHeader("X-Requested-With", "XMLHttpRequest");
    upliftRequest.send(chunk);
}
function processFiles() {
    for (var j = 0; j < files.length; j++) {
        var blob = files[j];
        const BYTES_PER_CHUNK = 1024 * 1024; // 1mb chunk sizes.
        const SIZE = blob.size;
        var start = 0,
            end = BYTES_PER_CHUNK;

        while (start < SIZE) {
            var chunk = blob.slice(start, end);
            upload(chunk);
            start = end;
            end = start + BYTES_PER_CHUNK;
        }

        p = j == (files.length - 1);
        self.postMessage(blob.name + " uploaded successfully");
    }
}

self.addEventListener('message', function(e) {
    var data__Files = e.data.files;
    for (var j = 0; j < data__Files.length; j++) {
        files.push(data__Files[j]);
    }
    if (p) processFiles();
});
BUGS FOUND:
I managed to get this from Chrome console:
ERROR: Line 27 in http://xxxxxxx/demos/uplift/assets/js/uplift/workers/uplift-worker.js:
Uncaught NetworkError: Failed to execute 'send' on 'XMLHttpRequest':
Failed to load 'http://xxxxxxx/demos/uplift/assets/js/uplift/php/uplift.php'.
Which points to the Worker script line: upliftRequest.send(chunk);.
Firebug didn't give me much to work with at all; its network panel just shows the aborted requests and the headers that were sent with them.
I initially thought it was a problem server-side, so I removed all the PHP from uplift.php and left an empty file, simply to test the browser-side upload handling and the posting of the requests, but the problems continued.
UPDATE:
I'm beginning to think my hosting provider is limiting request rates using Apache mod_security rules - possibly to stop a single IP from hammering the server in a brute-force attack. Adding to that, my uploader works fine on localhost (MAMP).
I did a little more research on my new suspicions. If my homemade upload plugin was having trouble sending multiple files/requests to my host, then surely some of the other popular upload plugins, which use the same technology and post files to the same host, would have similar complaints posted online. That yielded some good results, with many people describing the same experience I'm having. One guy uploads 'lots of images' to the same host, using Uploadify HTML5 (which also sends individual requests), and his requests get blocked too. I suppose I'd better contact the host to see what the deal is with their rate-limiting.

possible problem
I think this is a server-side issue; even with a plain PHP file, the server will open a new thread for each request. Check it with top in the console.
You are uploading the chunks in a while loop, without waiting until the previous chunk's upload has finished.
suggestion
I would create an array of all the chunks, call upload(chunks), and manage the concurrency there; onreadystatechange is a good place to move on to the next chunk in the array.
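A rough sketch of that approach, reusing the URL and headers from the worker code in the question (the queue handling itself is my own assumption, not the asker's code):

function uploadChunks(chunks, onAllDone) {
    var index = 0;

    function next() {
        if (index >= chunks.length) {
            onAllDone();
            return;
        }
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '../php/uplift.php', true); // async, so the worker thread isn't blocked
        xhr.setRequestHeader('Cache-Control', 'no-cache');
        xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
        xhr.onreadystatechange = function() {
            if (xhr.readyState === 4) {
                if (xhr.status === 200) {
                    index++;
                    next(); // only start the next chunk once this one has finished
                } else {
                    // report (or retry) instead of hammering the server with further requests
                    self.postMessage('chunk ' + index + ' failed with status ' + xhr.status);
                }
            }
        };
        xhr.send(chunks[index]);
    }

    next();
}

Keeping only one request in flight (or a small fixed number) also gives the server room to breathe, which matters if the host really is rate-limiting.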

Related

How to control the XMLHttpRequest object on an HTML5 Web Worker?

I have a page which normally overrides window.XMLHttpRequest with a wrapper that does a few extra things, like inserting headers on certain requests.
I have some functionality in a 3rd party library that uses an HTML5 Worker, and we are seeing that requests made inside it do not use the XMLHttpRequest wrapper object. So any request that this library makes is missing the required headers, and the request fails.
Is there a way to control the XMLHttpRequest that any Worker the current thread creates?
This 3rd party library code looks like this:
function createWorker(url) {
    var worker = new Worker(url);
    worker.onmessage = function (e) {
        if (e.data.status) {
            onprogress(e.data.status);
        } else if (e.data.error) {
            onerror(e.data.error);
        } else {
            exportUtils.saveFile(new Blob([e.data]), params.fileName);
            onfinish();
        }
    };
    worker.postMessage(params); // window.location.origin +
    return worker;
}
The Javascript that is returned by the URL variable above contains code like this:
return new Promise(function(t, r) {
var n = new XMLHttpRequest
, a = "batch_" + o()
, u = e.dataUrl.split(e.serviceUrl)[1]
, c = [];
n.onload = function() {
for (var e = this.responseText, n = this.responseText.split("\r\n"), o = 0, a = n.length, i = a - 1; o < a && "{" !== n[o].slice(0, 1); )
o++;
for (; i > 0 && "}" !== n[i].slice(-1); )
i--;
n = n.slice(o, i + 1),
e = n.join("\r\n");
try {
var u = JSON.parse(e);
t(u)
} catch (t) {
r(s + e)
}
}
,
n.onerror = function() {
r(i)
}
,
n.onabort = function() {
r(i)
}
,
n.open("POST", e.serviceUrl + "$batch", !0),
n.setRequestHeader("Accept", "multipart/mixed"),
n.setRequestHeader("Content-Type", "multipart/mixed;boundary=" + a);
for (var p in e.headers)
"accept" != p.toLowerCase() && n.setRequestHeader(p, e.headers[p]);
c.push("--" + a),
c.push("Content-Type: application/http"),
c.push("Content-Transfer-Encoding: binary"),
c.push(""),
c.push("GET " + u + " HTTP/1.1");
for (var p in e.headers)
c.push(p + ":" + e.headers[p]);
c.push(""),
c.push(""),
c.push("--" + a + "--"),
c.push(""),
c = c.join("\r\n"),
n.send(c)
}
)
The answer is both a soft "no" and an eventual "yes".
When a piece of code runs in a different context (like a webworker or an iframe), you do not have direct control of its global object (1).
What's more, XMLHttpRequest isn't the only way to send out network requests - you have several other methods, chief among them the fetch API.
However, there's a relatively new kid on the block called Service Workers, which can help you quite a bit!
Service workers
Service workers (abbrev. SWs) are very much like the web workers you already know, but instead of only running in the current page, they continue to run in the background as long as your user stays in your domain. They are also global to your entire domain, so any request made from your site will be passed through them.
Their main purpose in life is reacting to network requests, usually used for caching purposes and offline content, serving push notifications, and several other niche uses.
Let's see a small example (note, run these from a local webserver):
// index.html
<script>
navigator.serviceWorker.register('sw.js')
    .then(console.log.bind(console, 'SW registered!'))
    .catch(console.error.bind(console, 'Oh nose!'));

setInterval(() => {
    fetch('/hello/');
}, 5000);
</script>

// sw.js
console.log('Hello from a friendly service worker');

addEventListener('fetch', event => {
    console.log('fetch!', event);
});
Here we're registering a service worker and then requesting a page every 5 seconds. In the service worker, we're simply logging each network request, which can be caught in the fetch event.
On first load, you should see the service worker being registered. A SW only begins intercepting requests after it has been installed, which doesn't cover the first page load that registered it... so refresh the page to begin seeing the fetch events being logged. I advise you to play around with the event properties before reading on, so things will be clearer.
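(As an aside, and not part of the example above: if you would rather not rely on that refresh, a service worker can activate immediately and claim pages that are already open. A minimal sketch:)

// sw.js - optional additions
addEventListener('install', () => self.skipWaiting());
addEventListener('activate', event => event.waitUntil(self.clients.claim()));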
Cool! We can see from poking around with the event in the console that event.request is the Request object our browser constructed. In an ideal world, we could access event.request.headers and add our own headers! Dreamy, isn't it!?
Unfortunately, request/response headers are guarded and immutable. Fortunately, we are a stubborn bunch and can simply re-construct the request:
// sw.js
console.log('Hello from a friendly service worker');

addEventListener('fetch', event => {
    console.log('fetch!', event);

    // extract our request
    const { request } = event;

    // clone the current headers
    const newHeaders = new Headers();
    for (const [key, val] of request.headers) {
        newHeaders.append(key, val);
    }

    // ...and add one of our own
    newHeaders.append('Say-What', 'You heard me!');

    // clone the request, but override the headers with our own
    const superDuperReq = new Request(request, {
        headers: newHeaders
    });

    // now instead of the original request, our new request will take precedence!
    event.respondWith(fetch(superDuperReq));
});
There are a few different concepts at play here, so it's okay if it takes more than one read to get. Essentially, though, we're creating a new request which will be sent in place of the original one, and setting a new header on it! Hurray!
The Bad
Now, to some of the downsides:
- Since we're hijacking every single request, we can accidentally change requests we didn't mean to and potentially destroy the entire universe!
- Upgrading SWs is a huge pain. The SW lifecycle is complex and debugging it on your users is difficult. I've seen a good video on dealing with it but unfortunately can't find it right now; MDN has a fairly good description.
- Debugging SWs is often a very annoying experience, especially when combined with their weird lifecycle.
- Because they are so powerful, SWs can only be served over https. You should already be using https anyway, but this is still a hindrance.
- This is a lot of things to do for a relatively small benefit, so maybe reconsider its necessity.
(1) You can access the global object of an iframe in the same origin as you, but getting your code to run first to modify the global object is tricky indeed.

What is the best technology for a chat/shoutbox system? Plain JS, JQuery, Websockets, or other?

I have an old site running which also has a chat system, which always used to work fine. But recently I picked up the project again, started improving it, and the user base has been increasing a lot (it's running on a VPS).
Now this shoutbox I have (running at http://businessgame.be/shoutbox) has been having issues lately: when there are over 30 people online at the same time, it starts to really slow down the entire site.
This shoutbox system was written years ago by the old me (which ironically was the young me) who was way too much into old school Plain Old JavaScript (POJS?) and refused to use frameworks like jQuery.
What I do is poll with AJAX every 3 seconds to check whether there are new messages, and if there are, load all those messages (which are returned as an XML file that the JS code then parses into HTML blocks and appends to the shoutbox content).
Simplified the script is like this:
The AJAX functions
function createRequestObject() {
var xmlhttp;
if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp = new XMLHttpRequest();
} else { // code for IE6, IE5
xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
}
// Create the object
return xmlhttp;
}
function getXMLObject(XMLUrl, onComplete, onFail) {
var XMLhttp = createRequestObject();
// Check to see if the request counter has been initialized
if(typeof getXMLObject.counter == "undefined") {
getXMLObject.counter = 0;
}
getXMLObject.counter++;
XMLhttp.onreadystatechange = function() {
if(XMLhttp.readyState == 4) {
if(XMLhttp.status == 200) {
if(onComplete) {
onComplete(XMLhttp.responseXML);
}
} else {
if(onFail) {
onFail();
}
}
}
};
XMLhttp.open("GET", XMLUrl, true);
XMLhttp.send();
setTimeout(function() {
if(typeof XMLhttp != "undefined" && XMLhttp.readyState != 4) {
XMLhttp.abort();
if(onFail) {
onFail();
}
}
}, 5000);
}
Chat functions
function initShoutBox() {
    // Check for new shouts every 3 seconds
    shoutBoxInterval = setInterval(shoutBoxUpdate, 3000);
}

function shoutBoxUpdate() {
    // Get the XML document
    getXMLObject("/ajax/shoutbox/shoutbox.xml?time=" + shoutBoxAppend.lastShoutTime, shoutBoxAppend);
}

function shoutBoxAppend(xmlData) {
    // process all the XML and add it to the content; also remember the timestamp of the newest shout
}
The real script is far more convoluted: slower polling when the page is blurred, keeping track of AJAX calls to avoid simultaneous duplicate calls, the ability to post a shout, loading settings, etc. None of that is very relevant here.
For those interested, full codes here:
http://businessgame.be/javascripts/xml.js
http://businessgame.be/javascripts/shout.js
Example of the XML file containing the shout data
http://businessgame.be/ajax/shoutbox/shoutbox.xml?time=0
I do the same for getting a list of the online users every 30 seconds and checking for new private messages every 2 minutes.
My main question is: since this old-school JS is slowing down my site, will changing the code to jQuery increase the performance and fix this issue? Or should I go for another technology altogether, like Node.js, websockets, or something else? Or maybe I am overlooking a fundamental bug in this old code?
Rewriting an entire chat and private messages system (which use the same backend) requires a lot of effort, so I'd like to do this right from the start, not rewrite the whole thing in jQuery just to figure out that it doesn't solve the issue at hand.
Having 30 people online in the chatbox at the same time is not really an exception anymore, so it should be a robust system.
Could perhaps changing from XML data files to JSON increase performance as well?
PS: The backend is PHP + MySQL.
I'm biased, as I love Ruby and I prefer using Plain JS over JQuery and other frameworks.
I believe you're wasting a lot of resources by using AJAX and should move to websockets for your use-case.
30 users is not much... When using websockets, I would assume a single server process should be able to serve thousands of simultaneous updates per second.
The main reason for this is that websockets are persistent (no authentication happening with every request) and broadcasting to a multitude of connections will use the same amount of database queries as a single AJAX update.
In your case, instead of everyone reading the whole XML every time, a POST event will only broadcast the latest (posted) shout (not the whole XML), and store it in the XML for persistent storage (used for new visitors).
Also, you don't need all the authentication and requests that end up being answered with a "No, there aren't any pending updates".
Minimizing the database requests (XML reads) should prove to be a huge benefit when moving from AJAX to websockets.
Another benefit relates to the fact that enough simultaneous users will make AJAX polling behave much like a DoS attack.
Right now, 30 users == 10 requests per second. This isn't much, but it can become heavy if each request takes more than 100ms to answer - meaning the server answers fewer requests than it receives.
The home page for the Plezi Ruby Websocket Framework has this short example for a shout box (I'm Plezi's author, I'm biased):
# finish with `exit` if running within `irb`
require 'plezi'

class ChatServer
  def index
    render :client
  end

  def on_open
    return close unless params[:id] # authentication demo
    broadcast :print,
              "#{params[:id]} joined the chat."
    print "Welcome, #{params[:id]}!"
  end

  def on_close
    broadcast :print,
              "#{params[:id]} left the chat."
  end

  def on_message data
    self.class.broadcast :print,
                         "#{params[:id]}: #{data}"
  end

  protected

  def print data
    write ::ERB::Util.html_escape(data)
  end
end

path_to_client = File.expand_path( File.dirname(__FILE__) )
host templates: path_to_client
route '/', ChatServer
The POJS client looks like so (the DOM updates and form data access ($('#text')[0].value) use jQuery):
ws = NaN
handle = ''
function onsubmit(e) {
e.preventDefault();
if($('#text')[0].value == '') {return false}
if(ws && ws.readyState == 1) {
ws.send($('#text')[0].value);
$('#text')[0].value = '';
} else {
handle = $('#text')[0].value
var url = (window.location.protocol.match(/https/) ? 'wss' : 'ws') +
'://' + window.document.location.host +
'/' + $('#text')[0].value
ws = new WebSocket(url)
ws.onopen = function(e) {
output("<b>Connected :-)</b>");
$('#text')[0].value = '';
$('#text')[0].placeholder = 'your message';
}
ws.onclose = function(e) {
output("<b>Disconnected :-/</b>")
$('#text')[0].value = '';
$('#text')[0].placeholder = 'nickname';
$('#text')[0].value = handle
}
ws.onmessage = function(e) {
output(e.data);
}
}
return false;
}
function output(data) {
$('#output').append("<li>" + data + "</li>")
$('#output').animate({ scrollTop:
$('#output')[0].scrollHeight }, "slow");
}
If you want to add more events or data, you can consider using Plezi's auto-dispatch feature, that also provides you with an easy to use lightweight Javascript client with an AJAJ (AJAX + JSON) fallback.
...
But, depending on your server's architecture and whether you mind using heavier frameworks or not, you can use the more common socket.io (although it starts with AJAX and only moves to websockets after a warmup period).
EDIT
Changing from XML to JSON will still require parsing. The question is really one of XML vs. JSON parsing speed.
JSON will be faster on the client javascript, according to the following SO question and answer: Is parsing JSON faster than parsing XML
JSON seems to be also favored on the server-side for PHP (might be opinion based rather than tested): PHP: is JSON or XML parser faster?
BUT... I really think your bottleneck isn't the JSON or the XML. I believe the bottleneck relates to the sheer number of times the data is accessed (parsed?) and reviewed by the server when using AJAX.
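To make the comparison concrete, here is a rough sketch of the two client-side parsing paths; the payload shape is invented for illustration and is not the real shoutbox.xml format:

// JSON: one native call, and the result is ready-to-use objects
var jsonText = '{"shouts":[{"user":"Ann","time":1454967000,"msg":"hi"}]}';
var fromJson = JSON.parse(jsonText).shouts;

// XML: parse, then walk the DOM to pull out the same fields
var xmlText = '<shouts><shout user="Ann" time="1454967000">hi</shout></shouts>';
var doc = new DOMParser().parseFromString(xmlText, 'text/xml');
var fromXml = Array.prototype.map.call(doc.getElementsByTagName('shout'), function(node) {
    return {
        user: node.getAttribute('user'),
        time: node.getAttribute('time'),
        msg: node.textContent
    };
});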
EDIT2 (due to comment about PHP vs. node.js)
You can add a PHP websocket layer using Ratchet... although PHP wasn't designed for long-running processes, so I would consider adding a dedicated websocket stack (using a local proxy to route websocket connections to a different application).
I love Ruby since it allows you to quickly and easily code a solution. Node.js is also commonly used as a dedicated websocket stack.
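For example, a dedicated websocket stack in Node.js could look roughly like this; it assumes the third-party ws package and only sketches the broadcast idea, it is not a drop-in replacement for the shoutbox backend:

var WebSocket = require('ws');
var server = new WebSocket.Server({ port: 8080 });

server.on('connection', function(socket) {
    socket.on('message', function(data) {
        var shout = data.toString();
        // persist the shout once (e.g. a single MySQL INSERT), then push it to every
        // open connection - no per-client polling and no repeated reads of the same data
        server.clients.forEach(function(client) {
            if (client.readyState === WebSocket.OPEN) {
                client.send(shout);
            }
        });
    });
});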
I would personally avoid socket.io, because it abstracts the connection method (AJAX vs websockets) and always starts as AJAX before "warming up" to an "upgrade" (websockets)... Also, socket.io uses long-polling when not using websockets, which I think is terrible. I'd rather show a message telling the client to upgrade their browser.
Jonny Whatshisface pointed out that using a node.js solution he reached a limit of ~50K concurrent users (which could be related to the local proxy's connection limit). Using a C solution, he states he has no issues with more than 200K concurrent users.
This obviously also depends on the number of updates per second and on whether you're broadcasting the data or sending it to specific clients... If you're sending 2 updates per user per second for 200K users, that's 400K updates per second. However, if you update all the users only once every 2 seconds, that's 100K updates per second. So trying to figure out the maximum load can be a headache.
Personally, I didn't get to reach these numbers on my apps, so I never got to discover Plezi's limits first-hand... although, during testing, I had no issues sending hundreds of thousands of updates per second (but I did hit a connection limit due to available ports and open file handle limits on my local machine).
This definitely shows how vast an improvement you can achieve by utilizing websockets (especially since you stated that you notice slowdowns with just 30 concurrent users).

<video>.currentTime doesn't want to be set

I'm trying to write a piece of JavaScript that switches between two videos at timed intervals (don't ask). To make matters worse, each video has to start at a specific place (about ten seconds in - and again, don't ask).
I got the basics working by just using the YUI Async library to switch the videos at intervals:
YUI().use('async-queue', function (Y) {
    // AsyncQueue is available and ready for use.
    var cumulativeTime = 0;
    var q = new Y.AsyncQueue();

    for (var x = 0; x < settings.length; x++) {
        cumulativeTime = cumulativeTime + (settings[x].step * 1000);
        q.add({
            fn: runVideo,
            args: settings[x].mainWindow,
            timeout: cumulativeTime
        });
    }

    q.run();
});
So far, so good. The problem is that I can't seem to get the video to start at ten seconds in.
I'm using this code to do it:
function runVideo(videoToPlay) {
    console.log('We are going to play -> ' + videoToPlay);
    var video = document.getElementById('mainWindow');
    video.src = '/video?id=' + videoToPlay;
    video.addEventListener('loadedmetadata', function() {
        this.currentTime = 10; // <-- Offending line!
        this.play();
    });
}
The problem is that this.currentTime refuses to hold any value I set it to. I'm running it through Chrome (the file is served from Google Storage behind a Google App Engine Instance) and when the debugger goes past the line, the value is always zero.
Is there some trick I'm missing in order to set this value?
Thanks in advance.
Try using an Apache server.
Setting currentTime does not work with some simple development servers (for example, Python's built-in server or PHP's built-in server).
The HTTP server must support partial content responses (HTTP status 206), i.e. byte-range requests.
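A quick way to check whether a server honours Range requests is to ask for the first couple of bytes and look at the status code (the URL below follows the /video?id=... pattern from runVideo; the id value is made up):

fetch('/video?id=example', { headers: { 'Range': 'bytes=0-1' } })
    .then(function(response) {
        // 206 Partial Content: the server supports byte ranges, so seeking via
        // video.currentTime should work. A plain 200 means the whole file is
        // returned and the seek to 10 seconds may silently reset to 0.
        console.log('Status:', response.status);
    });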

The request is too large for IE to process properly

I am using WebSync3 with the JavaScript API, and subscribing to approximately 9 different channels on one page. Firefox and Chrome have no problems, but IE9 throws an alert error stating "The request is too large for IE to process properly."
Unfortunately the internet has little to no information on this. So does anyone have any clues as to how to remedy this?
var client = fm.websync.client;
client.initialize({
key: '********-****-****-****-************'
});
client.connect({
autoDisconnect: true,
onStreamFailure: function(args){
alert("Stream failure");
},
stayConnected: true
});
client.subscribe({
channel: '/channel',
onSuccess: function(args) {
alert("Successfully connected to stream");
},
onFailure: function(args){
alert("Failed to connect to stream");
},
onSubscribersChange: function(args) {
var change = args.change;
for (var i = 0; i < change.clients.length; i++) {
var changeClient = change.clients[i];
// If someone subscribes to the channel
if(change.type == 'subscribe') {
// If someone unsubscribes from the channel
}else{
}
}
},
onReceive: function(args){
text = args.data.text;
text = text.split("=");
text = text[1];
if(text != "status" && text != "dummytext"){
//receiveUpdates(id, serial_number, args.data.text);
var update = eval('(' + args.data.text + ')');
}
}
});
This error occurs when WebSync is using the JSON-P protocol for transfers. This mostly happens with IE in cross-domain environments, meaning WebSync is on a different domain than the one your webpage is being served from, so IE won't make regular XHR requests for security reasons.
JSON-P basically encodes the up-stream data (your 9 channel subscriptions) as a URL encoded string that is tacked onto a regular request to the server. The server is supposed to interpret that URL-encoded string and send back the response as a JavaScript block that gets executed by the page.
This works fine, except that IE also has a limit of roughly 2KB on the overall URL of an HTTP request. So if you pack too much into a single request to WebSync, you might exceed this 2KB upstream limit.
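Roughly, the JSON-P transport ends up doing something like the following; this is not WebSync's actual internals - the endpoint, callback name, and payload are made up purely to show why the URL grows with every extra channel:

var subscriptions = { channels: ['/channel1', '/channel2', '/channel3' /* ... nine of them ... */] };
var script = document.createElement('script');
script.src = 'http://websync.example.com/request'
    + '?callback=handleWebSyncResponse'
    + '&data=' + encodeURIComponent(JSON.stringify(subscriptions));
document.getElementsByTagName('head')[0].appendChild(script);
// IE refuses the request once that src URL creeps past roughly 2KB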
The easiest solution is to either split up your WebSync requests into smaller pieces (i.e. subscribe to only a few channels at a time in JavaScript, as sketched below), or to subscribe to one "master channel" and then program a WebSync BeforeSubscribe event that watches for that channel and re-writes the subscription channel list.
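A rough sketch of the first option, using the client.subscribe() call from the question (the batch size, the pacing, and whether WebSync coalesces these calls into a single request are all assumptions on my part):

var channels = ['/channel1', '/channel2', '/channel3', '/channel4', '/channel5',
                '/channel6', '/channel7', '/channel8', '/channel9']; // hypothetical names
var BATCH_SIZE = 3;

function subscribeBatch(startIndex) {
    if (startIndex >= channels.length) return;
    channels.slice(startIndex, startIndex + BATCH_SIZE).forEach(function(channel) {
        client.subscribe({ channel: channel });
    });
    // pace the batches so no single JSON-P request (and therefore its URL) gets too large
    setTimeout(function() { subscribeBatch(startIndex + BATCH_SIZE); }, 250);
}

subscribeBatch(0);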
I suspect, because you have a key in your example source above, that you are using WebSync On-Demand? If that's the case, the only way to make a BeforeSubscribe event handler is to create a WebSync proxy.
So for the moment, since everyone else is stumped by this question as well, I put a trap in my PHP to not even load this Javascript script if the browser is Internet Destroyer (uhh, I mean Internet Explorer). Maybe a solution will come in the future though.

Loading CSS from different domain, and accessing it from Javascript

I want to split up my website acrosss different servers and use subdomains for this purpose.
xttp://site.com will serve the main php file
xttp://static.site.com will serve the css and js
xttp://content.site.com will serve images and such
(xttp to prevent Stack Overflow from thinking it is a URL)
For the why, read below.
However, I run into a problem when I try to access any of the CSS rules through JavaScript: NS_ERROR_DOM_SECURITY_ERR, to be precise. This is a relatively recent security measure and has to do with protection against cross-domain scripting.
In the past there were workarounds, including simply turning this protection off, but these no longer work.
My question:
Is there any way to access a normally loaded CSS rule through JavaScript if it comes from a different domain than the main page?
The javascript:
MUI.getCSSRule = function(selector) {
    for (var ii = 0; ii < document.styleSheets.length; ii++) {
        var mysheet = document.styleSheets[ii];
        var myrules = mysheet.cssRules ? mysheet.cssRules : mysheet.rules;
        for (var i = 0; i < myrules.length; i++) {
            if (myrules[i].selectorText == selector) {
                return myrules[i];
            }
        }
    }
    return false;
};
The JavaScript and CSS are loaded from the HTML with absolute paths, and the site URL is http://site.com.
Both domains are fully under my control, but they are separate machines (virtual for now, but in production they might not even be in the same location).
Rephrasing the question:
Is there any way to let Firefox and other browsers know that it should treat certain domains as being the same even though the domain names are different?
Why? So I can easily use different servers with their own configuration, optimized for their task: a fast machine for the PHP, a simple one to serve the static stuff, a large machine for the content.
Why? Costs. A static server typically has little need for security against anyone downloading the files. It has little content, so there is no need for an expensive array; just load it in memory and serve from there. Memory itself can be limited as well; try it. A PHP server, in my case at least, will typically need lots of memory, redundant storage, and extensive logging. A content server will need massive storage and massive bandwidth but relatively little in the way of CPU power. Different hardware/hosting requirements for each. Fine-tuning each not only gives better performance but also reduces hosting costs, which for me is still one of the biggest costs of running a website.
CORS (cross-origin resource sharing) is a standard that allows sites to opt in to cross-origin access of their resources. I do not know if Firefox applies this to CSS yet; I know that it works for XMLHttpRequest, and it is intended to work for most other cross-domain request restrictions, but I haven't tested it in your precise use-case.
You can add the following header to responses from static.site.com to allow your main page to access the content of resources served from there:
Access-Control-Allow-Origin: http://site.com
Or even, if you don't consider any of your content on static.site.com to be sensitive:
Access-Control-Allow-Origin: *
There's more information available on the Mozilla Developer Network.
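One extra detail, which is an assumption on my part rather than something from the MDN page: for a cross-origin stylesheet's cssRules to be readable at all, the <link> usually has to be requested in CORS mode (crossorigin="anonymous"), which then pairs with the Access-Control-Allow-Origin header above. In script that could look like:

var link = document.createElement('link');
link.rel = 'stylesheet';
link.href = 'http://static.site.com/css/main.css'; // hypothetical path on the static host
link.crossOrigin = 'anonymous';                    // makes the stylesheet request a CORS request
document.getElementsByTagName('head')[0].appendChild(link);
// once loaded, iterating document.styleSheets and reading cssRules for this sheet
// should no longer throw a security error (provided the header above is present)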
I wrote a little function that will solve the loading problem cross-browser including FF. The comments on GitHub help explain usage. Full code at https://github.com/srolfe26/getXDomainCSS.
Disclaimer: The code below is jQuery dependent.
Sometimes, if you're pulling CSS from a place where you can't control the CORS settings, you can still get the CSS with a <link> tag; the main issue to be solved then becomes knowing when your requested CSS has been loaded and is ready to use. In older IE, you could have an onload listener run when the CSS is loaded.
Newer browsers seem to require old-fashioned polling to determine when the file is loaded, and have some cross-browser issues in determining when the load is satisfied. See the code below to catch some of those quirks.
/**
* Retrieves CSS files from a cross-domain source via javascript. Provides a jQuery implemented
* promise object that can be used for callbacks for when the CSS is actually completely loaded.
* The 'onload' function works for IE, while the 'style/cssRules' version works everywhere else
* and accounts for differences per-browser.
*
* @param {String} url The url/uri for the CSS file to request
*
* @returns {Object} A jQuery Deferred object that can be used for
*/
function getXDomainCSS(url) {
var link,
style,
interval,
timeout = 60000, // 1 minute seems like a good timeout
counter = 0, // Used to compare try time against timeout
step = 30, // Amount of wait time on each load check
docStyles = document.styleSheets, // local reference
ssCount = docStyles.length, // Initial stylesheet count
promise = $.Deferred();
// IE 8 & 9 it is best to use 'onload'. style[0].sheet.cssRules has problems.
if (navigator.appVersion.indexOf("MSIE") != -1) {
link = document.createElement('link');
link.type = "text/css";
link.rel = "stylesheet";
link.href = url;
link.onload = function () {
promise.resolve();
}
document.getElementsByTagName('head')[0].appendChild(link);
}
// Support for FF, Chrome, Safari, and Opera
else {
style = $('<style>')
.text('@import "' + url + '"')
.attr({
// Adding this attribute allows the file to still be identified as an external
// resource in developer tools.
'data-uri': url
})
.appendTo('body');
// This setInterval will detect when style rules for our stylesheet have loaded.
interval = setInterval(function() {
try {
// This will fail in Firefox (and kick us to the catch statement) if there are no
// style rules.
style[0].sheet.cssRules;
// The above statement will succeed in Chrome even if the file isn't loaded yet
// but Chrome won't increment the styleSheet length until the file is loaded.
if(ssCount === docStyles.length) {
throw(url + ' not loaded yet');
}
else {
var loaded = false,
href,
n;
// If there are multiple files being loaded at once, we need to make sure that
// the new file is this file
for (n = docStyles.length - 1; n >= 0; n--) {
href = docStyles[n].cssRules[0].href;
if (typeof href != 'undefined' && href === url) {
// If there is an HTTP error there is no way to consistently
// know it and handle it. The file is considered 'loaded', but
// the console will still show the HTTP error.
loaded = true;
break;
}
}
if (loaded === false) {
throw(url + ' not loaded yet');
}
}
// If an error wasn't thrown by this point in execution, the stylesheet is loaded, proceed.
promise.resolve();
clearInterval(interval);
} catch (e) {
counter += step;
if (counter > timeout) {
// Time out so that the interval doesn't run indefinitely.
clearInterval(interval);
promise.reject();
}
}
}, step);
}
return promise;
}
document.domain = "site.com";
Add this to a JS file that is loaded before your CSS file. I would also add the HTTP headers suggested above.
