Is it possible to use JavaScript to dynamically change the HTTP Headers received when loading an image from an external source? I'm trying to control the caching of the image (Expires, Max-Age, etc...) client-side since I do not have access to the server.
As the others have said, no, it is not possible to manipulate the HTTP headers and caching directives sent by the server from client code.
What is possible
What you do have the ability to do is ensure you get a new file. This can be done by appending a unique string to the URL of the request as a query string parameter.
e.g. if you wanted to ensure you got a new file each hour
<script type="text/javascript">
// Append the current year, day of the month and hour to the URL so the
// query string changes once per hour, forcing a fresh request.
var d = new Date();
url += ("?" + d.getFullYear() + "_" + d.getDate() + "_" + d.getHours());
</script>
What this does is add a value containing the year, day of the month and hour to the URL, so it will be unique for each hour, hence ensuring a new file request. (Not tested!)
Obviously this can be made much more generic and fine tuned, but hopefully you'll get the idea.
What is impossible
What you can't do is guarantee that the browser will not fetch a new version from the server; in other words, you can't force the image to stay cached.
Caching directives are the server's responsibility; you can't manipulate them from the client side.
Installing a caching proxy server might be an option for you, e.g. if you are aiming at company employees on a network you control.
I do not think JavaScript can actually do that: the images are requested by the browser, and it's up to the browser to decide which HTTP headers to send.
One way to use custom headers would be some kind of Ajax request, not going through any <img> tag; but then you'd have to know what to do with the returned data... I don't think it would help much.
If you want your images to be kept in the browser's cache, your server has to send the right headers in the responses (like ETag and/or Expires -- see mod_expires for Apache, for instance).
If you want to be absolutely sure the browser will download a new image, and not use the version it has in its cache, you should use a different URL each time.
This is often done by using the current timestamp as a URL parameter, like example.com/image.jpg?123456789 (123456789 being, more or less, the current timestamp): each second, the browser will see a URL it hasn't requested before.
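As a minimal sketch (the element id and image URL are just assumed examples), the timestamp trick looks like this from JavaScript:
// Force a fresh download by making the image URL unique on every load.
var img = document.getElementById("weatherImage");
img.src = "http://example.com/image.jpg?" + new Date().getTime();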
EDIT after the edit of the question:
The Expires header is generated by the server and is one of the headers that come in the Response (it's not a header the client sends in the Request; see List of HTTP headers).
So you have absolutely no control over it from the client side: it's the server that must be configured to do the work here...
If you want more answers: what are you trying to do exactly? Why?
The Issue
Recently, I deployed a page to production containing javascript which used setInterval() to ping a webservice every few seconds. The behavior can be summed up as follows:
Every X seconds, Javascript on the upcomingEvents.aspx page calls the hitWebService() function, which sits in the hitWebService.js file.
The X interval value proved to be too small, so I removed all references to hitWebService(), the hitWebService.js file itself, and the web service it was trying to reach.
Attempts to hit the web service from normal IP addresses dropped off, but I am still getting attempted hits from a number of users who use a proxy service.
My theory is that my upcomingEvents.aspx and hitWebService.js have been cached by the proxy service. Indeed, when I log the referrer strings when a user hits the error page (every so often, one of these users will get redirected here), they are being referred from upcomingEvents.aspx.
The issue is that the attempts to hit this web service are filling up the IIS logs at an uncomfortable rate, and are causing unnecessary traffic on the server.
What I have attempted
Removed the web service completely
Deleted the hitWebService.js file, then replaced it with a dummy file
Changed content expiration in IIS so that content expires immediately
Added Response.Cache.SetCacheability(HttpCacheability.NoCache) to the page's .vb codebehind
Completely republished the site with the changes
Restarted IIS (a full stop and start)
Interesting bits
I can alter the .vb codebehind on upcomingEvents.aspx to log session details, etc., and it seems to update almost instantly for the proxy service users.
The Question
If my theory is correct, and the proxy server is indeed caching the files hitWebService.js and upcomingEvents.aspx, are there any other routes I can go down to force a refresh, considering the above strategies haven't worked?
Thanks a lot,
In my case, I had an Ajax call being cached by ASP.NET. I used a parameter with the JavaScript date value so that each call has a different query string.
like this:
function addQueryStringAntiCache(url)
{
    // Append the current time in milliseconds as a "nocache" parameter,
    // using "?" or "&" depending on whether the URL already has a query string.
    var d = new Date();
    var n = d.getTime();
    return url + (url.indexOf("?") == -1 ? "?" : "&") + "nocache=" + n;
}
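For example, you could run every Ajax URL through that helper before making the request (the endpoint below is only an illustration):
// Each call gets a unique "nocache" value, so cached responses can't be reused.
var xhr = new XMLHttpRequest();
xhr.open("GET", addQueryStringAntiCache("/services/upcomingEvents.asmx/Ping"), true);
xhr.send();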
You can do the same thing for a script reference:
<script src="myScript.js?v=8912398314812"></script>
If you have access to the machine, use Fiddler to check whether the browser even makes a call to get the files or whether it uses its own cache. In that case you can try to include meta tags or HTTP headers to prevent it from caching them. Check this response.
Hope it helps.
I'm trying to determine the best way to cache my JavaScript and CSS files.
There are several ways of doing this:
1. Using the Date, Expires and Cache-Control headers
2. Using the ETag header
3. Cache forever and change the filename when the file changes
4. Append a querystring to the filename in the HTML with the last mod time or an MD5 of the file contents
I was under the impression that the last method (4) was the most reliable and would result in the fewest unnecessary requests, but my friend just told me that sometimes the querystring method is unreliable and you actually need to change the filename.
Are there any downsides to setting the HTTP headers to cache forever and just using a query-string with the last mod time, or are there scenarios where another method would be more beneficial?
I'm a big fan of method 4, but I use the session ID for it. So a user who enters my website will load the file once per session (a session usually dies if the visitor stays inactive for more than 20 minutes or closes the browser window).
In ASP.NET, I use this syntax:
<script src="js/DetalhesCurso.js?<%=Session.SessionID%>"></script>
Your third method is the most reliable. Some CDNs/proxies ignore the query string altogether, and just serve the same cached file regardless of the query string value.
Amazon and Azure do support it, but others might not.
Do note that in method #3 you don't actually have to update the filename itself. You can just use some URL rewriting to always get that same file. You'll only have to update your HTML.
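As a rough sketch of how a content-based token can be generated at build time (a Node.js example with hypothetical paths; the rewrite rule itself is not shown):
// Build a versioned filename from an MD5 hash of the file contents, so the
// URL only changes when the file actually changes.
var crypto = require('crypto');
var fs = require('fs');

function versionedUrl(filePath, publicUrl) {
    var hash = crypto.createHash('md5')
        .update(fs.readFileSync(filePath))
        .digest('hex')
        .slice(0, 8);
    return publicUrl.replace(/\.js$/, '.' + hash + '.js');
}

// e.g. '/js/app.js' becomes '/js/app.d41d8cd9.js'; URL rewriting maps it back to app.js.
console.log(versionedUrl('js/app.js', '/js/app.js'));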
I'm looking to make one XMLHttpRequest (every 30 minutes, retrieving a weather forecast) and use the XML response across multiple HTML documents. I've looked near and far and can only get the parsed XML to show on one document.
Is there a way to reference different documents from one javascript function?
No framework, just straight javascript/ajax.
// Synchronous request (third argument false): the script blocks until the XML arrives.
var forecastXMLreq = new XMLHttpRequest();
forecastXMLreq.open("GET", forecastURL, false);
forecastXMLreq.send();
var forecastXML = forecastXMLreq.responseXML;

var day1 = forecastXML.getElementsByTagName("weekday_short")[1].childNodes[0].nodeValue;
document.getElementById("day1").innerHTML = day1.toUpperCase();
Multiple html files, one XHR call is what I'm looking for
The easiest way would be to leverage regular HTTP caching. On the next pages you still need to request the file in your code, but the browser can potentially skip the request and transparently fetch it from the local disk cache instead. You aren't guaranteed it will be cached for the full 30 minutes, as the browser has a limited amount of cache space and decides what to purge, and when.
Just configure the server to send the following HTTP header for that XML response:
Cache-Control: max-age=1800
More info on HTTP caching: http://www.mnot.net/cache_docs/
An alternative is to make use of the limited support browsers offer for HTML5 local storage. No web server configuration is needed and you don't need to re-request the file in your code, but browser support is limited and you will need different code for retrieving the data from local storage, as in the sketch below.
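A minimal sketch of that local storage approach (the key names are just assumptions, and the fetch is kept synchronous to mirror the question's code):
var THIRTY_MINUTES = 30 * 60 * 1000;

function getForecastXML(forecastURL) {
    // Reuse the stored XML text if it is less than 30 minutes old.
    var cachedText = localStorage.getItem("forecastXML");
    var cachedAt = parseInt(localStorage.getItem("forecastXMLTime"), 10);
    if (cachedText && !isNaN(cachedAt) && (new Date().getTime() - cachedAt) < THIRTY_MINUTES) {
        // localStorage only holds strings, so re-parse the stored text.
        return new DOMParser().parseFromString(cachedText, "text/xml");
    }

    // Otherwise fetch the forecast again and refresh the stored copy.
    var req = new XMLHttpRequest();
    req.open("GET", forecastURL, false);
    req.send();
    localStorage.setItem("forecastXML", req.responseText);
    localStorage.setItem("forecastXMLTime", String(new Date().getTime()));
    return req.responseXML;
}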
I am writing an extension for the Chrome browser (and later hope to port it to Firefox). The extension downloads a configuration file from my server - an XML file, via XMLHttpRequest. What I find is that it downloads the file once, and every subsequent call simply seems to use the cached original version of the file. It doesn't matter whether or not I change the file on the server.
I read that you could try
xmlhttp.setRequestHeader( 'Pragma', 'Cache-Control: no-cache');
and so I've done this but it doesn't seem to make any difference. The only way I can get the new file seems to be to delete the browser cache - which obviously is not a solution for my growing users.
This seems like a problem I wouldn't be the first person to experience - so, given that caching rules seem to uphold this as a policy that can't easily be avoided, my question is: what's the better design? Is there a best practice I don't know about? Should I be pushing rather than pulling somehow?
An easy way is to add a useless parameter containing the time to the request. Since time tends to go forwards and never backwards, you can be reasonably sure that your query is unique and therefore won't be cached.
For instance (assuming the URL is in a string url):
url += '?_time=' + (new Date()).getTime();
or, if your URL already has query parameters,
url += '&_time=' + (new Date()).getTime();
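For instance, applied to the extension's configuration download (the URL here is hypothetical):
var url = 'http://example.com/config.xml';
// Use "?" or "&" depending on whether the URL already has a query string.
url += (url.indexOf('?') == -1 ? '?' : '&') + '_time=' + new Date().getTime();

var req = new XMLHttpRequest();
req.open('GET', url, true);
req.onload = function () {
    var configXml = req.responseXML; // a fresh copy on every request
};
req.send();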
I'm interested in writing a script, preferably one easy to add on to browsers with tools such as Greasemonkey, that sends a page's HTML source code to an external server, where it will later be parsed and useful data would be sent to a database.
However, I haven't seen anything like that and I'm not sure how to approach this task. I would imagine some sort of HTTP POST would be the best approach, but I'm completely new to those ideas, and I'm not even sure exactly where to send the data to parse it (it doesn't make sense to send an entire HTML document to a database, for instance).
So basically, my overall goal is something that works like this (note that I only need help with steps 1 and 2; I am familiar with data parsing techniques, I've just never applied them to the web):
1. The user views a particular page
2. The source code is sent via Greasemonkey or some other tool to a server
3. The code is parsed into meaningful data that is stored in a MySQL database.
Any tips or help is greatly appreciated, thank you!
Edit: Code
ihtml = document.body.innerHTML;
GM_xmlhttpRequest({
    method: 'POST',
    url: 'http://www.myURL.com/getData.php',
    data: "SomeData=" + escape(ihtml)
});
Edit: Current JS Log:
Namespace/GMScriptName: Server Response: 200
OK
4
Date: Sun, 19 Dec 2010 02:41:55 GMT
Server: Apache/1.3.42 (Unix) mod_gzip/1.3.26.1a mod_auth_passthrough/1.8 mod_log_bytes/1.2 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_ssl/2.8.31 OpenSSL/0.9.8e-fips-rhel5 PHP-CGI/0.9
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
Array
(
)
http://www.url.com/getData.php
As mentioned in the comment on your Q, I'm not convinced this is a good idea and personally, I'd avoid any extension that did this like the plague but...
You can use the innerHTML property available on all HTML elements to get the HTML inside that node - e.g. the body element. You could then use an AJAX HTTP(S!) request to post the data.
You might also want to consider some form of compression as some pages can be very large and most users have better download speeds than upload speeds.
NB: innerHTML gets a representation of the source code that would display the page in its current state, NOT the actual source that was sent from the web server - e.g. if you used JS to add an element, the source for that element would be included in innerHTML even though it was never sent across the web.
An alternative would be to use an AJAX request to GET the current URL and send yourself the response. This would be exactly what was sent to the client but the server in question will be aware the page was served twice (and in some web applications that may cause problems - e.g. by "pressing" a delete button twice)
One final suggestion would be to simply send the current URL to yourself and do the download on your own servers - this would also mitigate some of the security risks, as you wouldn't be able to retrieve the content for pages which aren't public.
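A minimal sketch of that last suggestion, reusing GM_xmlhttpRequest (the endpoint is hypothetical):
// Post only the page's URL; your own server fetches and parses the page itself.
GM_xmlhttpRequest({
    method: 'POST',
    url: 'http://example.com/queueUrl.php',
    data: 'pageUrl=' + encodeURIComponent(location.href),
    headers: {'Content-type': 'application/x-www-form-urlencoded'}
});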
EDIT:
NB: I've deleted much spurious information which was used in tracking down the problem; check the edit logs if you want full details.
PHP Code:
<?php
$PageContents = $_POST['PageContents']; // grab the posted page source (storing/parsing it is omitted here)
?>
GreaseMonkey script:
var ihtml = document.body.innerHTML;
GM_xmlhttpRequest({
    method: 'POST',
    url: 'http://example.com/getData.php',
    // encodeURIComponent (rather than escape) encodes "+" and non-ASCII
    // characters correctly for an application/x-www-form-urlencoded body.
    data: "PageContents=" + encodeURIComponent(ihtml),
    headers: {'Content-type': 'application/x-www-form-urlencoded'}
});