I want to split up my website across different servers and use subdomains for this purpose.
xttp://site.com will serve the main php file
xttp://static.site.com will serve the css and js
xttp://content.site.com will serve images and such
(xttp to prevent Stack Overflow from thinking it is a URL)
For the why, read below.
However, I run into a problem when I try to access any of the CSS rules through JavaScript: NS_ERROR_DOM_SECURITY_ERR, to be precise. This is a relatively recent security measure and has to do with protection against cross-domain scripting.
In the past there were workarounds, including simply turning this protection off, but those no longer work.
My question:
Is there any way to access a normally loaded CSS rule through JavaScript if it comes from a different domain than the main page?
The JavaScript:
MUI.getCSSRule = function (selector) {
    for (var ii = 0; ii < document.styleSheets.length; ii++) {
        var mysheet = document.styleSheets[ii];
        var myrules = mysheet.cssRules ? mysheet.cssRules : mysheet.rules;
        for (var i = 0; i < myrules.length; i++) {
            if (myrules[i].selectorText == selector) {
                return myrules[i];
            }
        }
    }
    return false;
};
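For illustration, a call would look something like this (the selector is just an assumed example):
// Hypothetical usage: look up a rule by selector and tweak it.
var rule = MUI.getCSSRule(".chatWindow"); // assumed selector name
if (rule) {
    rule.style.backgroundColor = "#eee";
}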
The JavaScript and CSS are loaded from the HTML with absolute paths,
and the site URL is "http://site.com".
Both domains are fully under my control, but they are separate machines (virtual for now; in production, if this is even possible, they might not even be in the same location).
Rephrasing the question:
Is there any way to let Firefox and other browsers know that it should treat certain domains as being the same even though the domain names are different?
Why? So I can easily use different servers, each with its own configuration optimized for its task. A fast machine for the PHP, a simple one to serve the static stuff, a large machine for the content.
Why? Costs. A static server typically has little need for security against anyone downloading the files. It has little content, so there is no need for an expensive disk array; just load it into memory and serve from there. Memory itself can be limited as well. A PHP server, in my case at least, will typically need lots of memory, redundant storage and extensive logging. A content server will need massive storage and massive bandwidth but relatively little CPU power. Each has different hardware/hosting requirements. Fine-tuning each not only gives better performance but also reduces hosting costs, which for me is still one of the biggest costs of running a website.
CORS (cross-origin resource sharing) is a standard that allows sites to opt in to cross-origin access to their resources. I do not know if Firefox applies this to CSS yet; I know that it works for XMLHttpRequest, and it is intended to work for most other cross-domain request restrictions, but I haven't tested it in your precise use case.
You can add the following header to responses from static.site.com to allow your main page to access the content of resources served from there:
Access-Control-Allow-Origin: http://site.com
Or even, if you don't consider any of your content on static.site.com to be sensitive:
Access-Control-Allow-Origin: *
There's more information available on the Mozilla Developer Network.
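If Firefox does honor CORS for stylesheet access, you will likely also need to load the stylesheet as a CORS request. A minimal sketch (the stylesheet URL is an assumed example; verify this behaviour in your target browsers):
// Sketch only: load the cross-domain stylesheet with CORS enabled so that
// its cssRules can be read from script. Assumes static.site.com sends
// Access-Control-Allow-Origin for http://site.com as described above.
var link = document.createElement('link');
link.rel = 'stylesheet';
link.href = 'http://static.site.com/css/main.css'; // assumed path
link.crossOrigin = 'anonymous';                     // request CORS mode
link.onload = function () {
    var sheet = link.sheet;
    console.log(sheet.cssRules.length + ' rules readable cross-origin');
};
document.getElementsByTagName('head')[0].appendChild(link);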
I wrote a little function that solves the loading problem cross-browser, including FF. The comments on GitHub help explain usage. Full code at https://github.com/srolfe26/getXDomainCSS.
Disclaimer: The code below is jQuery dependent.
Sometimes, if you're pulling CSS from a place where you can't control the CORS settings, you can still get the CSS with a <link> tag; the main issue to solve then becomes knowing when the requested CSS has loaded and is ready to use. In older IE, you could have an onload listener run when the CSS is loaded.
Newer browsers seem to require old-fashioned polling to determine when the file is loaded, and have some cross-browser issues in determining when the load is satisfied. See the code below to catch some of those quirks.
/**
 * Retrieves CSS files from a cross-domain source via javascript. Provides a jQuery implemented
 * promise object that can be used for callbacks for when the CSS is actually completely loaded.
 * The 'onload' function works for IE, while the 'style/cssRules' version works everywhere else
 * and accounts for differences per-browser.
 *
 * @param {String} url The url/uri for the CSS file to request
 *
 * @returns {Object} A jQuery Deferred object that can be used for load callbacks
 */
function getXDomainCSS(url) {
    var link,
        style,
        interval,
        timeout = 60000, // 1 minute seems like a good timeout
        counter = 0,     // Used to compare try time against timeout
        step = 30,       // Amount of wait time on each load check
        docStyles = document.styleSheets, // local reference
        ssCount = docStyles.length,       // Initial stylesheet count
        promise = $.Deferred();

    // For IE 8 & 9 it is best to use 'onload'; style[0].sheet.cssRules has problems.
    if (navigator.appVersion.indexOf("MSIE") != -1) {
        link = document.createElement('link');
        link.type = "text/css";
        link.rel = "stylesheet";
        link.href = url;

        link.onload = function () {
            promise.resolve();
        }

        document.getElementsByTagName('head')[0].appendChild(link);
    }
    // Support for FF, Chrome, Safari, and Opera
    else {
        style = $('<style>')
            .text('@import "' + url + '"')
            .attr({
                // Adding this attribute allows the file to still be identified as an external
                // resource in developer tools.
                'data-uri': url
            })
            .appendTo('body');

        // This setInterval will detect when style rules for our stylesheet have loaded.
        interval = setInterval(function () {
            try {
                // This will fail in Firefox (and kick us to the catch statement) if there are no
                // style rules.
                style[0].sheet.cssRules;

                // The above statement will succeed in Chrome even if the file isn't loaded yet,
                // but Chrome won't increment the styleSheet length until the file is loaded.
                if (ssCount === docStyles.length) {
                    throw(url + ' not loaded yet');
                }
                else {
                    var loaded = false,
                        href,
                        n;

                    // If there are multiple files being loaded at once, we need to make sure that
                    // the new file is this file
                    for (n = docStyles.length - 1; n >= 0; n--) {
                        href = docStyles[n].cssRules[0].href;
                        if (typeof href != 'undefined' && href === url) {
                            // If there is an HTTP error there is no way to consistently
                            // know it and handle it. The file is considered 'loaded', but
                            // the console will show the HTTP error.
                            loaded = true;
                            break;
                        }
                    }

                    if (loaded === false) {
                        throw(url + ' not loaded yet');
                    }
                }

                // If an error wasn't thrown by this point in execution, the stylesheet is loaded, proceed.
                promise.resolve();
                clearInterval(interval);
            } catch (e) {
                counter += step;

                if (counter > timeout) {
                    // Time out so that the interval doesn't run indefinitely.
                    clearInterval(interval);
                    promise.reject();
                }
            }
        }, step);
    }

    return promise;
}
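Usage might look roughly like this (the stylesheet URL is an assumed example; the function returns a jQuery Deferred, so .done()/.fail() apply):
// Example usage sketch (assumes jQuery is loaded and the URL is reachable):
getXDomainCSS('http://static.site.com/css/main.css')
    .done(function () {
        // Safe to look up the freshly loaded rules here.
        console.log('cross-domain CSS ready');
    })
    .fail(function () {
        console.log('CSS did not load before the timeout');
    });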
document.domain = "site.com";
Add this to a JS file that is loaded before your CSS file. I would also add the HTTP headers suggested above.
I noticed that GitHub and Facebook are both implementing this policy now, which restricts third-party scripts from being run within their site.
Is there a way to detect whether a document is running against CSP using JavaScript?
I'm writing a bookmarklet, and want to give the user a message if they're on a site that doesn't support embedding a script tag.
You can try to catch CSP violation errors using the "securitypolicyviolation" event.
From: https://developer.mozilla.org/en-US/docs/Web/API/SecurityPolicyViolationEvent
example:
document.addEventListener("securitypolicyviolation", (e) => {
    console.log(e.blockedURI);
    console.log(e.violatedDirective);
    console.log(e.originalPolicy);
});
What about this? For slow connections, the timeout should probably be raised. Onload is what I used to detect it, and it seems to work: if the script loads, then CSP obviously isn't enabled, or it is configured improperly.
var CSP = 0;
var frame = document.createElement('script');
frame.setAttribute('id', 'theiframe');
frame.setAttribute('src', location.protocol + '//example.com/');
frame.setAttribute('onload', 'CSP=1;');
document.body.appendChild(frame);
setTimeout(function () { if (0 == CSP) { alert("CSP IS ENABLED"); } }, 250);
From https://github.com/angular/angular.js/blob/cf16b241e1c61c22a820ed8211bc2332ede88e62/src/Angular.js#L1150-L1158, function noUnsafeEval
function noUnsafeEval() {
    try {
        /* jshint -W031, -W054 */
        new Function('');
        /* jshint +W031, +W054 */
        return false;
    } catch (e) {
        return true;
    }
}
Currently, there is no way to do so in shipping browsers.
However, something such as the following should work, per spec, and does in Chrome with experimental web platform features enabled in chrome://flags/:
function detectCSPInUse() {
    return "securityPolicy" in document ? document.securityPolicy.isActive : false;
}
The SecurityPolicy interface (what you get from document.securityPolicy if it is implemented) has a few attributes that give more detail as to what is currently allowed.
An easy way to detect CSP is just to check whether JavaScript's eval() method can be run without throwing an error, like so:
try {
    eval("false;"); // throws under CSP without 'unsafe-eval'
    return false;
} catch (e) {
    return true;
}
However, this only works if CSP is actually turned on (obviously), with Content-Security-Policy being set in the response headers the page loaded with, and without 'unsafe-eval' in script-src.
I came here looking for a way to detect CSP support in browsers without CSP actually being turned on. It would seem this is not possible though.
On a side note, IE does not support CSP, only the sandbox directive in IE 10+, which, by looking at the CSP standard, does not make it a conformant web browser.
From https://hackernoon.com/im-harvesting-credit-card-numbers-and-passwords-from-your-site-here-s-how-9a8cb347c5b5:
fetch(document.location.href)
    .then(resp => {
        const csp = resp.headers.get('Content-Security-Policy');
        // does this exist? Is it any good?
    });
This will fail, however, with connect-src 'none', and the violation will be reported.
I am checking the onerror event in my bookmarklet code and prompting the user to install my extension if the script is not loaded.
javascript:(function(){
    var s=document.createElement('script');
    s.setAttribute('type','text/javascript');
    s.setAttribute('src','https://example.ru/bookmarklet?hostname='+encodeURIComponent(location.hostname));
    s.setAttribute('onerror', 'if(confirm(`Downloading from the site is possible only through the "MyExtensionName" extension. Install extension?`)){window.open("https://chrome.google.com/webstore/detail/myextensionlink");}');
    document.body.appendChild(s);})();
I have an old site running which also has a chat system that always used to work fine. But recently I picked up the project again and started improving it, and the user base has been increasing a lot. (It is running on a VPS.)
Now this shoutbox I have (running at http://businessgame.be/shoutbox) has been having issues lately: when there are over 30 people online at the same time, it starts to really slow down the entire site.
This shoutbox system was written years ago by the old me (which, ironically, was the young me), who was way too into old-school plain old JavaScript (POJS?) and refused to use frameworks like jQuery.
What I do is poll every 3 seconds with AJAX to check whether there are new messages, and if so, load all those messages (which are delivered as an XML file that is then parsed by the JS code into HTML blocks which are added to the shoutbox content).
Simplified, the script looks like this:
The AJAX functions
function createRequestObject() {
    var xmlhttp;
    if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
        xmlhttp = new XMLHttpRequest();
    } else { // code for IE6, IE5
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    // Return the object
    return xmlhttp;
}

function getXMLObject(XMLUrl, onComplete, onFail) {
    var XMLhttp = createRequestObject();

    // Check to see if the latest shout time has been initialized
    if (typeof getXMLObject.counter == "undefined") {
        getXMLObject.counter = 0;
    }
    getXMLObject.counter++;

    XMLhttp.onreadystatechange = function() {
        if (XMLhttp.readyState == 4) {
            if (XMLhttp.status == 200) {
                if (onComplete) {
                    onComplete(XMLhttp.responseXML);
                }
            } else {
                if (onFail) {
                    onFail();
                }
            }
        }
    };

    XMLhttp.open("GET", XMLUrl, true);
    XMLhttp.send();

    setTimeout(function() {
        if (typeof XMLhttp != "undefined" && XMLhttp.readyState != 4) {
            XMLhttp.abort();
            if (onFail) {
                onFail();
            }
        }
    }, 5000);
}
Chat functions
function initShoutBox() {
    // Check for new shouts every 3 seconds
    shoutBoxInterval = setInterval("shoutBoxUpdate()", 3000);
}

function shoutBoxUpdate() {
    // Get the XML document
    getXMLObject("/ajax/shoutbox/shoutbox.xml?time=" + shoutBoxAppend.lastShoutTime, shoutBoxAppend);
}

function shoutBoxAppend(xmlData) {
    // process all the XML and add it to the content; also remember the timestamp of the newest shout
}
The real script is far more convoluted: it polls more slowly when the page is blurred, keeps track of AJAX calls to avoid simultaneous double calls, offers the ability to post a shout, loads settings, etc. None of that is very relevant here.
For those interested, full codes here:
http://businessgame.be/javascripts/xml.js
http://businessgame.be/javascripts/shout.js
Example of the XML file containing the shout data
http://businessgame.be/ajax/shoutbox/shoutbox.xml?time=0
I do the same for getting a list of the online users every 30 seconds and checking for new private messages every 2 minutes.
My main question is: since this old-school JS is slowing down my site, will changing the code to jQuery increase the performance and fix this issue? Or should I go for another technology altogether, like Node.js, WebSockets or something else? Or maybe I am overlooking a fundamental bug in this old code?
Rewriting an entire chat and private messages system (which use the same backend) requires a lot of effort, so I'd like to do this right from the start, not rewrite the whole thing in jQuery just to find out it doesn't solve the issue at hand.
Having 30 people online in the chatbox at the same time is not really an exception anymore, so it should be a robust system.
Could perhaps changing from XML data files to JSON increase performance as well?
PS: Backend is PHP MySQL
I'm biased, as I love Ruby and I prefer using Plain JS over JQuery and other frameworks.
I believe you're wasting a lot of resources by using AJAX and should move to websockets for your use-case.
30 users is not much... When using websockets, I would assume a single server process should be able to serve thousands of simultaneous updates per second.
The main reason for this is that websockets are persistent (no authentication happening with every request) and broadcasting to a multitude of connections will use the same amount of database queries as a single AJAX update.
In your case, instead of everyone reading the whole XML every time, a POST event will only broadcast the latest (posted) shout (not the whole XML), and store it in the XML for persistent storage (used for new visitors).
Also, you don't need all the authentication and requests that end up being answered with a "No, there aren't any pending updates".
Minimizing the database requests (XML reads) should prove to be a huge benefit when moving from AJAX to websockets.
Another benefit relates to the fact that enough simultaneous users will make AJAX polling behave the same as a DoS attack.
Right now, 30 users == 10 requests per second. This isn't much, but it becomes heavy if each request takes more than 100ms, meaning the server answers fewer requests than it receives.
The home page for the Plezi Ruby Websocket Framework has this short example for a shout box (I'm Plezi's author, I'm biased):
# finish with `exit` if running within `irb`
require 'plezi'

class ChatServer
  def index
    render :client
  end

  def on_open
    return close unless params[:id] # authentication demo
    broadcast :print,
              "#{params[:id]} joined the chat."
    print "Welcome, #{params[:id]}!"
  end

  def on_close
    broadcast :print,
              "#{params[:id]} left the chat."
  end

  def on_message data
    self.class.broadcast :print,
                         "#{params[:id]}: #{data}"
  end

  protected

  def print data
    write ::ERB::Util.html_escape(data)
  end
end

path_to_client = File.expand_path( File.dirname(__FILE__) )
host templates: path_to_client
route '/', ChatServer
The POJS client looks like this (the DOM updates and form data access ($('#text')[0].value) use jQuery):
ws = NaN
handle = ''

function onsubmit(e) {
    e.preventDefault();
    if ($('#text')[0].value == '') {return false}
    if (ws && ws.readyState == 1) {
        ws.send($('#text')[0].value);
        $('#text')[0].value = '';
    } else {
        handle = $('#text')[0].value
        var url = (window.location.protocol.match(/https/) ? 'wss' : 'ws') +
                  '://' + window.document.location.host +
                  '/' + $('#text')[0].value
        ws = new WebSocket(url)
        ws.onopen = function(e) {
            output("<b>Connected :-)</b>");
            $('#text')[0].value = '';
            $('#text')[0].placeholder = 'your message';
        }
        ws.onclose = function(e) {
            output("<b>Disconnected :-/</b>")
            $('#text')[0].value = '';
            $('#text')[0].placeholder = 'nickname';
            $('#text')[0].value = handle
        }
        ws.onmessage = function(e) {
            output(e.data);
        }
    }
    return false;
}

function output(data) {
    $('#output').append("<li>" + data + "</li>")
    $('#output').animate({ scrollTop:
                           $('#output')[0].scrollHeight }, "slow");
}
If you want to add more events or data, you can consider using Plezi's auto-dispatch feature, which also provides you with an easy-to-use, lightweight JavaScript client with an AJAJ (AJAX + JSON) fallback.
...
But, depending on your server's architecture and whether you mind using heavier frameworks or not, you can use the more common socket.io (although it starts with AJAX and only moves to websockets after a warmup period).
EDIT
Changing from XML to JSON will still require parsing. The question is actually one of XML vs. JSON parsing speed.
JSON will be faster to parse in client-side JavaScript, according to the following SO question and answer: Is parsing JSON faster than parsing XML
JSON also seems to be favored on the server side for PHP (though this might be opinion-based rather than tested): PHP: is JSON or XML parser faster?
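Just to illustrate the client-side difference, here is a small, purely illustrative sketch; the payloads are made up:
// Illustrative only: parsing an equivalent payload as JSON vs. XML in the browser.
var jsonText = '{"shouts":[{"user":"alice","time":123,"msg":"hi"}]}';
var xmlText  = '<shouts><shout user="alice" time="123">hi</shout></shouts>';

var fromJson = JSON.parse(jsonText);                      // native, typically the fastest path
var fromXml  = new DOMParser().parseFromString(xmlText, "text/xml");

console.log(fromJson.shouts[0].msg);                                // "hi"
console.log(fromXml.getElementsByTagName("shout")[0].textContent);  // "hi"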
BUT... I really think your bottleneck isn't the JSON or the XML. I believe the bottleneck relates to the multitude of times that the data is accessed, (parsed?) and reviewed by the server when using AJAX.
EDIT2 (due to comment about PHP vs. node.js)
You can add a PHP websocket layer using Ratchet... although PHP wasn't designed for long-running processes, so I would consider adding a dedicated websocket stack (using a local proxy to route websocket connections to a different application).
I love Ruby since it allows you to quickly and easily code a solution. Node.js is also commonly used as a dedicated websocket stack.
I would personally avoid socket.io, because it abstracts the connection method (AJAX vs. websockets) and always starts with AJAX before "warming up" to an "upgrade" (websockets)... Also, socket.io uses long-polling when not using websockets, which I think is terrible. I'd rather show a message telling the client to upgrade their browser.
Jonny Whatshisface pointed out that using a node.js solution he reached a limit of ~50K concurrent users (which could be related to the local proxy's connection limit). Using a C solution, he states he has no issues with more than 200K concurrent users.
This obviously depends also on the number of updates per second and on whether you're broadcasting the data or sending it to specific clients... If you're sending 2 updates per user per second for 200K users, that's 400K updates. However, updating all the users only once every 2 seconds, that's 100K updates per second. So trying to figure out the maximum load can be a headache.
Personally, I didn't get to reach these numbers on my apps, so I never got to discover Plezi's limits first hand... although, during testing, I had no issues with sending hundreds of thousands of updates per second (but I did have a connection limit due to available ports and open file handle limits on my local machine).
This definitely shows how vast an improvement you can reach by utilizing websockets (especially since you stated that you notice slowdowns with 30 concurrent users).
I've made a component for an SAP solution that is embedded into a report through an iframe. After I deployed the report on an SAP platform (BO), I got this error (on Chrome, but it does not work on IE or FF either):
Uncaught SecurityError: Blocked a frame with origin "http://support.domain.com" from accessing a frame with origin "http://support.domain.com". The frame requesting access set "document.domain" to "domain.com", but the frame being accessed did not. Both must set "document.domain" to the same value to allow access.
The iframe is embedded in my component, so it's supposed to run on the same domain and port as the report.
I found this post on SO and this one, but they did not really help me understand what I need to do.
Is there a way to get rid of this, or at least to work around it?
Thanks :).
EDIT:
Host Page URL : http://support.domain.com/BOE/OpenDocument/opendoc/openDocument.jsp?sIDType=CUID&iDocID=AbmffWLjCAlFsLj14TjuDWg
URL of the file calling a property on the iframe (and generating the error) : http://support.domain.com/BOE/OpenDocument/1411281523/zenwebclient/zen/mimes/sdk_include/com.domain.ds.extension/res/cmp/js/component.js
URL of the frame :
http://support.domain.com/BOE/OpenDocument/1411281523/zenwebclient/zen/mimes/sdk_include/com.domain.ds.extension/res/cmp/js/map/js/map.html
The iframe itself embeds some script tags; I can see everything loading fine in the Network tab of the console.
Maybe it can help.
EDIT 2 :
I just realized the SAP report is itself embedded in an iframe. That means my iframe is within an iframe, which might be the issue. Still, when launching the report from Eclipse, everything works.
I've finally found a solution.
The top of my iframe had document.domain set to domain.com and my iframe had document.domain set to support.domain.com.
Even though I still think both belong to the same domain, it seems browsers don't like it.
Re-setting document.domain did the trick.
To answer those asking how to re-set document.domain, here is the snippet of code my team used. It is quite old (from two years ago), not really optimized, and we do not use it anymore, but I guess it's worth sharing.
Basically, what we did was load the iframe while passing it the top domain in the URL parameters, as in the embedding sketch after the snippet below.
var topDomain = (function handleDomain(parameters) {
        if (typeof parameters === "undefined") {
            return;
        }
        parameters = parameters.split("&");
        var parameter = [],
            domain;
        for (var i = 0; i < parameters.length; ++i) {
            parameter.push(parameters[i]);
        }
        for (var j = 0; j < parameter.length; ++j) {
            if (parameter[j].indexOf("domain") > -1) {
                domain = parameter[j];
                break;
            }
        }
        if (typeof domain !== "undefined") {
            domain = domain.split("=");
            return domain[1];
        }
        return;
    })(window.location.search),
    domain = document.domain;

if (domain.indexOf(topDomain) > -1 && domain !== topDomain) {
    document.domain = topDomain;
}
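For context, a hypothetical embedding-side counterpart could look like this (the iframe URL is a placeholder; the "domain" parameter name matches what the snippet above looks for):
// Hypothetical embedding code: pass the top document's domain to the iframe
// via the query string so the snippet above can pick it up.
var iframe = document.createElement('iframe');
iframe.src = 'http://support.domain.com/path/to/map.html?domain=' +
             encodeURIComponent(document.domain);
document.body.appendChild(iframe);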
The previous answer is no longer valid:
Document.domain - https://developer.mozilla.org/en-US/docs/Web/API/Document/domain
Deprecated: This feature is no longer recommended. Though some browsers might still support it, it may have already been removed from the relevant web standards, may be in the process of being dropped, or may only be kept for compatibility purposes. Avoid using it, and update existing code if possible; see the compatibility table at the bottom of this page to guide your decision. Be aware that this feature may cease to work at any time.
The current solution is to use message exchanges between windows. See the samples at https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage.
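A minimal postMessage sketch, assuming both pages are under your control (the element id and message shape are made up):
// In the embedding page: send a message to the framed document and state
// the origin you expect it to have.
var frame = document.getElementById('reportFrame'); // hypothetical id
frame.contentWindow.postMessage({ type: 'ping' }, 'http://support.domain.com');

// In the framed page: listen for messages and verify the sender's origin
// before trusting the data.
window.addEventListener('message', function (event) {
    if (event.origin !== 'http://support.domain.com') { return; }
    console.log('Received from parent:', event.data);
});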
I am attempting to make ie7-js work with my WordPress installation. After reading about this library, it seems like a great solution for making my website more compatible with older versions of IE, more specifically IE8. It may seem odd to still support a browser that old, but I have noticed that several hundred visitors a month are visiting our site using browsers as old as IE6. A majority of our users are elderly and still using Windows XP. Now on to the problem.
I am running a copy of the Windows XP virtual machine with IE8 from modern.ie. I have followed the instructions on the library's code page on how to include the file. I am trying to get the specific IE9.js file to work. When I access the page in IE8, I get the error
permission denied: line 850 character 37
I have tracked it down to the line below:
for (var i = 0, imported; i < styleSheet.imports.length; i++)
from this function:
function getCSSText(styleSheet, path, media, level) {
    var cssText = "";
    if (!level) {
        media = toSimpleMedia(styleSheet.media);
        level = 0;
    }
    if (media === "none") {
        styleSheet.disabled = true;
        return "";
    }
    if (media === "all" || media === self.media) {
        // IE only allows importing style sheets three levels deep.
        // it will crash if you try to access a level below this
        try {
            var canAcess = !!styleSheet.cssText;
        } catch (exe) {}
        if (level < 3 && canAcess) {
            var hrefs = styleSheet.cssText.match(IMPORTS);
            // loop through imported style sheets
            for (var i = 0, imported; i < styleSheet.imports.length; i++) {
                var imported = styleSheet.imports[i];
                var href = styleSheet._href || styleSheet.href;
                imported._href = hrefs[i].replace(TRIM_IMPORTS, "");
                // call this function recursively to get all imported style sheets
                cssText += getCSSText(imported, getPath(href, path), media, level + 1);
            }
        }
        // retrieve inline style or load an external style sheet
        cssText += encode(styleSheet.href ? loadStyleSheet(styleSheet, path) : styleSheet.owningElement._cssText);
        cssText = parseMedia(cssText, self.media);
    }
    return cssText;
};
Upon researching to see if anyone else has had the same issue, I did find posts about it, but none had solutions. I have been trying to sort this out for a few hours now, only to end up banging my head against the desk. Does anyone have possible solutions or things to check next? I have tried changing file permissions to 777, but that does not seem to work either.
Older browsers have limitations when working with CORS on the client side.
This is not quite a "bug" in your JavaScript, and it cannot be corrected on the client side.
The best way is to work with CDNs that support CORS.
http://schock.net/articles/2013/07/03/hosting-web-fonts-on-a-cdn-youre-going-to-need-some-cors/
http://support.maxcdn.com/howto/use-cdn-with-webfonts/
Yet this can still be difficult, so another alternative would be to put all the CSS on a subdomain of your site (or on your own domain).
Read about CORS:
http://en.wikipedia.org/wiki/Cross-origin_resource_sharing
http://enable-cors.org/
I am using ExternalInterface in Flex 3. We are actually using Flex to compress a large amount of DOM data, so this is specifically being used with LARGE data.
To investigate further: if there is a limitation, is it universal? (e.g. does Silverlight have the same limit?)
First, let me state that this is being done with an application that was made by inexperienced software engineers. It is an app for which we need to buy time by compressing the data so that we can build a long-term solution. We have no other options, unfortunately.
Background: This is an application that is actually a web spreadsheet. Our long-term solution is to make an Office Business Application.
No, Flash does not impose any size limits on ExternalInterface communication.
I think it does, or there is some other configuration that governs this. I was testing a file upload using the FileReference object and wanted to pass the data sent from the server back to the hosting page via an ExternalInterface call. Below is a snippet of my UPLOAD_COMPLETE_DATA event handler:
private function onFileUploadCompleteData (e:DataEvent):void
{
    var file:FileReference = FileReference(e.target);
    Alert.show("onFileUploadCompleteData : " + e.data );
    if (ExternalInterface.available && callBackOnUploadCompleteData.length > 0)
    {
        var data:Object = new Object();
        data.FileName = file.name;
        data.ServerData = e.data;
        //data.ServerData = e.data.substr(0, 50);
        ExternalInterface.call(callBackOnUploadCompleteData, data);
    }
}
This event gets fired, but the call to my JavaScript is never made. If I uncomment the line which trims the returned data to the first 50 characters, it starts working and calls the JavaScript correctly.
Either there is a size restriction imposed by Flash (10.2) or IE9 (which is what I was using), or there is something else I am missing.
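If the payload size really is the culprit, one hedged workaround (the names and protocol are made up, and the SWF side would need a matching loop) is to pass the data across ExternalInterface in smaller chunks and reassemble it in JavaScript:
// Hypothetical JS-side receiver: the SWF calls this repeatedly with small
// chunks and a final "isLast" flag; the string is rebuilt on this side.
var uploadChunks = [];
function receiveUploadChunk(chunk, isLast) {
    uploadChunks.push(chunk);
    if (isLast) {
        var serverData = uploadChunks.join('');
        uploadChunks = [];
        console.log('Reassembled ' + serverData.length + ' characters');
    }
}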