I've seen a couple of pages load their JavaScript files into a page via new Image().src:
<script type="text/javascript">
new Image().src = "XXXX\/js\/jquery-1.7.2.min.js";
</script>
I was just wondering about the benefits or purpose of this.
It's a quick and dirty way of initiating an HTTP request (as the comments on the question suggest).
There may be a minor advantage gained by initiating the download at the top of the page and then including <script src='the-same-file.js'></script> at the bottom of the page so that the file can be loaded from the browser cache.
This might allow the latency of the download to be parallelized with a parsing task. For example, the download initiated in the head might run while the body is still being parsed.
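To make the pattern concrete, here is a minimal sketch (the file name just echoes the example above):

<head>
<script>
// Kick off the download early; the response is cached but not executed as script.
new Image().src = "/js/the-same-file.js";
</script>
</head>
<body>
<!-- ...the body is parsed while the download is (hopefully) in flight... -->
<!-- The same URL is requested again as a real script and served from cache. -->
<script src="/js/the-same-file.js"></script>
</body>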
Why not just reference the file in the head using the src attribute?
If neither [the defer or async] attribute is present, then the script is fetched and executed immediately, before the user agent continues parsing the page.
Source (suggested reading)
In other words, this method attempts to allow the browser to download the file without incurring the blocking behavior until later in the process.
However
If this is really necessary, I would consider the defer attribute, which is intended for such purposes, rather than the new Image() hack (see the sketch after the next point).
This "optimization" could backfire depending on cache headers. You could end up making two HTTP requests or even downloading the file twice.
"In the Wild"
A quick investigation of several major sites (Google search, Gmail, Twitter, Facebook, Netflix) shows that this technique is not used to fetch JavaScript files and used very sparingly overall.
For example, Facebook appears to use it not for caching/performance but for tracking purposes when the site is (potentially maliciously) loaded into a frameset. By creating an Image instance and setting the source, they initiate an HTTP request to a page which tracks the clickjacking attempt.
This is an isolated case; under normal circumstances this script will never be run.
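For illustration only, a sketch of what such a beacon might look like (the endpoint and parameter are hypothetical, not Facebook's actual code):

// Runs only when the page finds itself inside a frame (potential clickjacking).
if (top !== self) {
    // Assigning src fires an HTTP request whose response is simply ignored.
    new Image().src = "/framing_report?url=" +
        encodeURIComponent(document.location.href);
}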
Related
Is there a way to block a particular line of code from being executed within a third-party website, appearing within an iFrame?
I have an embedded widget which loads jQuery within the iFrame. I don't need the extra network request, because my site already loads jQuery.
If I can block only one line of code (in this case, line 77) then I can prevent JQuery from being loaded again.
I imagine this action would take place prior to the iFrame being rendered.
The same-origin policy prevents you from touching any part of an iframe for a third-party website, so there's nothing you can directly do to prevent that request from being sent out. Even if you could, the iframe and your website have no shared state, so the other website will most likely break because it has no way to access your instance of jQuery. Think of what would happen if you loaded the third-party website in a new tab but blocked the request.
There are, however, a few things you can do to ensure the browser uses a cached copy of the library, which doesn't actually send a request anywhere:
If the external library is being loaded from a CDN, there's a good chance some other website has requested that same URL, so that user's browser has a cached copy of it.
Since you use jQuery yourself, you could load the same version of jQuery from the same CDN URL as the other website. That way, the user's browser will already have a cached copy of the file from the CDN and no second request will be made.
Otherwise, if the website is using an old version of jQuery that you cannot yourself use or if it is being self-hosted without a CDN, there's nothing else you can do.
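To make the first option concrete: if the widget's iframe loads, say, jQuery 1.7.2 from the Google CDN, referencing the identical URL on your own page lets the browser satisfy the iframe's request from cache (the version here is illustrative):

<!-- Exactly the same protocol, host, path, and version the widget uses -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js"></script>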
I'm working on an extension that injects script in a page.
The extension is basically a content script that injects another script into the DOM. (Why not just a content script?)
(There aren't any issues with my code; it works fine. The main purpose here is only to learn about the security issues involved in web development.)
The injected script is a source file in my extension, and I fetch it with jQuery.get, using the address from chrome.extension.getURL('myscript.js').
Are there any security issues I should be aware of?
The page is not HTTPS; can this get return something different from my script?
I also insert HTML content using the same method. The HTML file comes from my extension, just like the script. Is there any possibility of the responseText being corrupted by a man in the middle?
What are the common practices to avoid such security issues if they exist?
Alternatively, if I create a script element (document.createElement('script')) and set its source to my file, would it be possible for someone to interfere when I inject this script into the DOM (document.documentElement.appendChild(myScript))?
Also, what are the security issues involved in this approach: injecting a script that changes the XMLHttpRequest methods open and send in order to capture Ajax calls, add listeners, and send them with the exact same original arguments?
So, namely, say I have these:
var myScript = document.createElement('script');
myScript.src = chrome.extension.getURL('myscript.js');
var page = chrome.extension.getURL('mypage.html');
In such context, can a $.get('mypage.html') return anything different from my page due to a man in the middle? (In other words, could I unknowingly inject a malicious page?)
Could a document.documentElement.appendChild(myScript) inject a different script? Could a supposed man in the middle get in between the .src assignment and the fetch, and change the actual script?
Since the script is meant to change the XMLHttpRequest prototype as described in the linked approach, could send ever end up being called with arguments different from those passed by the original call?
Thank you!
First of all, Chrome is both the client and the server when you fetch a file from an extension, so you don't need HTTPS; it's worthless in this scenario. There is no man in the middle here.
Another extension could conceivably intercept the Ajax call, but to do so that extension would already need the proper permissions granted by the user, so it wouldn't be an unauthorized interception. At least it wouldn't be any less secure than any HTTPS Ajax.
And, as you say, another man-in-the-middle attack consists of redefining XMLHttpRequest, which can be done by an extension (with proper user authorization) or by any other way of injecting a script into the page (especially if the page is not a secure one).
I wonder if you can inject and run a script before the page loads, or at least before any other script executes, with the sole purpose of "securing" the original XMLHttpRequest object (with something like mySecureAjax = XMLHttpRequest;).
You can execute before any script on the page, but you can't guarantee to execute before another extension's injection.
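To make both ideas concrete, here is a minimal sketch: capturing a private reference to the original constructor before anything else runs, then wrapping open and send to observe calls while forwarding the original arguments untouched (the console.log is just an example):

// Keep a private reference before any other script can redefine it.
var mySecureAjax = window.XMLHttpRequest;

var realOpen = XMLHttpRequest.prototype.open;
var realSend = XMLHttpRequest.prototype.send;

XMLHttpRequest.prototype.open = function (method, url) {
    this._method = method; // remembered so send() can log it
    this._url = url;
    return realOpen.apply(this, arguments);
};

XMLHttpRequest.prototype.send = function (body) {
    console.log('ajax call:', this._method, this._url, body);
    return realSend.apply(this, arguments);
};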
I have had this question for a while and am surprised that I have yet to come across a good/complete answer to it.
The question is essentially this:
When it comes to loading js files, in what situations should you load them from the web if available versus serving them up yourself? What case typically allows for the lowest latency?
E.g.
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
vs.
<script src="js/jquery-1-11-3.min.js"></script>
Complete answer: Both.
Loading it off of the web will benefit you in a couple of ways:
1) There is a limit to the number of simultaneous open HTTP connections a browser can have, but the limit is per domain. So reaching out to Google's servers will not prevent you from loading your own CSS/images.
2) It's very likely that the user will already have that file cached, so the browser can reuse the local copy instead of downloading the file again (at worst it makes a conditional request and gets an HTTP 304 Not Modified response).
Now, with that said, sometimes the server will be down, or network issues will otherwise stop you from loading that file. When that happens, you need a workaround, which we can do like so:
<script>
// If the CDN load failed, jQuery will be undefined; fall back to the local copy.
if (typeof jQuery == 'undefined') {
    document.write(unescape("%3Cscript src='js/jquery-1-11-3.min.js' type='text/javascript'%3E%3C/script%3E"));
}
</script>
Put this tag AFTER the <script> tag that loads jQuery from the CDN. If the CDN load failed, jQuery will be undefined and the file will be loaded from the local source; if the CDN load worked, this block will be skipped over.
Pros of including the file from your own server:
Technically Google's servers can be down, and then your website won't load correctly.
Some people don't trust Google and have its domains or scripts blocked; they wouldn't want to include the file from Google directly either.
Connections to Google can have higher latency. If your audience is in your own country and you have a good provider, your connection can be faster than Google's.
Cons:
higher webserver traffic
more connections
higher CPU impact
You have to decide for yourself which one is better for you. For smaller sites I'd go for the locally stored file.
Is there any difference between linking a stylesheet or script to our website from EXTERNAL servers and uploading the stylesheet to our server and link to it? What are the benefits of uploading to our server?
Usually the external files are served by a CDN, which means your users probably already have them cached (for example jQuery or another popular library).
If you store the files on your own server, it has to send your scripts or CSS together with every request. That adds extra work for your server, which may mean extra costs or slower responses.
The advantage of linking to external servers is less load on your own server. Most of the popular frameworks, like jQuery, are also mirrored all over the world, so if your website is hosted in Europe and a user opens it from the USA, he will not have to wait for the file to be delivered across the world; he will most likely get it from a location closer to him. Less traffic, less load on your server, and faster response times.
Yes, there are several differences:
If you include a stylesheet or script from an external source, it can take longer to load the page.
A stylesheet or script from an external source may fail to load at all because of a firewall (I have faced this situation).
If the external files change in the future, those changes will also be reflected on your site.
There is a security issue: whoever controls the external server can add code to track your data.
So I always recommend downloading these files and serving them from your own server instead of including stylesheets or scripts from an external source.
When leaving my website, users get a "you are now leaving" message, which redirects after 4 seconds. Can that time be used to somehow preload the target site's content, so that after the redirect the site appears faster?
If the site to be fetched is on your domain, you can fetch the next page with JavaScript, parse it, and request its assets.
If not, you can't figure out its assets (not via XHR anyway, most of the time) so you can't preload them. You could hack it by placing the site in a hidden iframe. You could also use your server as a proxy, to get a list of assets and pass it to your HTML to start preloading them.
You could also try using this link tag...
<link rel="prefetch" href="/images/big.jpeg">
Source.
It is a lot of effort for arguably not much gain though.
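That said, if you happen to know some of the target site's asset URLs in advance, you could inject prefetch hints during the 4-second countdown. A minimal sketch, assuming hypothetical URLs:

// Hint the browser to fetch these into cache at low priority.
// The URLs are hypothetical; you would need to know them in advance.
['https://target-site.example/css/main.css',
 'https://target-site.example/js/app.js'].forEach(function (url) {
    var link = document.createElement('link');
    link.rel = 'prefetch';
    link.href = url;
    document.head.appendChild(link);
});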
You could start loading the site into an invisible <iframe>. If it's cached properly, this will reduce load time when the user actually enters the page.
This would however have all kinds of potential side effects, like autoplay music and videos starting in the background.
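A minimal sketch of that approach; the empty sandbox attribute disables scripts in the frame, which suppresses autoplay and similar side effects, though it also means assets loaded by script won't be fetched:

// Load the target page invisibly to warm the browser cache.
var frame = document.createElement('iframe');
frame.src = 'https://target-site.example/'; // hypothetical target URL
frame.style.display = 'none';
frame.setAttribute('sandbox', ''); // no scripts, so no autoplay
document.body.appendChild(frame);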
I would tend to leave this kind of preloading to the web browser (and/or the prefetch tag that @alex shows!)