I know there are many ways to prevent image caching (such as via META tags), as well as a few nice tricks to ensure that the current version of an image is shown with every page load (such as image.jpg?x=timestamp), but is there any way to actually clear or replace an image in the browser's cache so that neither of the methods above is necessary?
As an example, let's say there are 100 images on a page and that these images are named "01.jpg", "02.jpg", "03.jpg", etc. If image "42.jpg" is replaced, is there any way to replace it in the cache so that "42.jpg" will automatically display the new image on successive page loads? I can't use the META tag method, because I need everything that ISN'T replaced to remain cached, and I can't use the timestamp method, because I don't want ALL of the images to be reloaded every time the page loads.
I've racked my brain and scoured the Internet for a way to do this (preferably via JavaScript), but no luck. Any suggestions?
If you're writing the page dynamically, you can add the last-modified timestamp to the URL:
<img src="image.jpg?lastmod=12345678" ...
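To make the idea concrete, here is a minimal sketch of how that lastmod value might be produced, assuming a Node.js-rendered page; the ./public path and the helper name are made up for illustration.
const fs = require('fs');

// Build an <img> tag whose query string changes only when the file itself changes.
function imgTagWithLastMod(relativePath) {
  // mtimeMs is the file's last-modified time in milliseconds.
  const lastMod = Math.floor(fs.statSync('./public/' + relativePath).mtimeMs / 1000);
  return '<img src="' + relativePath + '?lastmod=' + lastMod + '">';
}

// e.g. imgTagWithLastMod('42.jpg') -> '<img src="42.jpg?lastmod=1234567890">'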
<meta> is absolutely irrelevant. In fact, you shouldn't try to use it for controlling the cache at all (by the time anything reads the content of the document, it's already cached).
In HTTP each URL is independent. Whatever you do to the HTML document, it won't apply to images.
To control caching you could change URLs each time their content changes. If you update images from time to time, allow them to be cached forever and use a new filename (with a version, hash or a date) for the new image — it's the best solution for long-lived files.
If your image changes very often (every few minutes, or even on each request), then send Cache-control: no-cache or Cache-control: max-age=xx where xx is the number of seconds that image is "fresh".
A random URL for short-lived files is a bad idea. It pollutes caches with useless files and forces useful files to be purged sooner.
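For illustration only, here is a rough sketch of both cases in a Node/Express server (an assumption on my part; the answer itself is server-agnostic, and the paths are made up):
const express = require('express');
const app = express();

// Long-lived, versioned images (new filename per version): cache "forever".
app.use('/img/v', express.static('img', { maxAge: '365d', immutable: true }));

// A frequently changing image: fresh for 60 seconds, then revalidated.
app.get('/img/live/ticker.jpg', (req, res) => {
  res.set('Cache-Control', 'max-age=60');
  res.sendFile(__dirname + '/img/ticker.jpg');
});

app.listen(3000);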
If you have Apache with mod_headers or mod_expires, then create an .htaccess file with the appropriate rules.
<Files ~ "-nocache\.jpg">
Header set Cache-control "no-cache"
</Files>
The above will make *-nocache.jpg files non-cacheable.
You could also serve images via a PHP script (they have awful cacheability by default ;)).
Contrary to what some of the other answers have said, there IS a way for client-side javascript to replace a cached image. The trick is to create a hidden <iframe>, set its src attribute to the image URL, wait for it to load, then forcibly reload it by calling location.reload(true). That will update the cached copy of the image. You may then replace the <img> elements on your page (or reload your page) to see the updated version of the image.
(Small caveat: if updating individual <img> elements, and more than one of them shows the image that was updated, you've got to clear or remove them ALL, and then replace or reset them. If you do it one by one, some browsers will copy the in-memory version of the image from other tags, and the result is that you might not see your updated image, despite its being in the cache.)
I posted some code to do this kind of update here.
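In case the link goes stale, here is a rough sketch of the trick as described above (not the author's exact code). Note that the "force" flag of location.reload(true) is deprecated in modern browsers, which is part of why this reads as a historical technique.
function reloadCachedImage(url, onDone) {
  var iframe = document.createElement('iframe');
  iframe.style.display = 'none';
  var reloaded = false;

  iframe.onload = function () {
    if (!reloaded) {
      reloaded = true;
      // Force-reload the frame so the browser refetches (and re-caches) the image.
      iframe.contentWindow.location.reload(true);
    } else {
      document.body.removeChild(iframe);
      if (onDone) onDone();
    }
  };

  iframe.src = url;
  document.body.appendChild(iframe);
}

// Usage: refresh "42.jpg" in the cache, then reset every <img> that shows it.
reloadCachedImage('42.jpg', function () {
  var imgs = Array.prototype.slice.call(document.querySelectorAll('img[src="42.jpg"]'));
  imgs.forEach(function (img) { img.src = ''; });       // clear ALL copies first (see the caveat above)
  imgs.forEach(function (img) { img.src = '42.jpg'; }); // then point them back at the updated cache entry
});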
Change the image URL like this, adding a random string to the query string:
"image1.jpg?" + DateTime.Now.ToString("ddMMyyyyhhmmsstt");
I'm sure most browsers respect the Last-Modified HTTP header. Send those out and request a new image. It will be cached by the browser if the Last-Modified line doesn't change.
You can append a random number to the image URL, which is like giving it a new version. I have implemented similar logic and it's working perfectly.
<script>
  var num = Math.random();
  var imgSrc = "image.png?v=" + num;
  $(function() {
    $('#imgID').attr("src", imgSrc);
  });
</script>
I found this article on how to cache-bust any file. There are many ways to force a cache bust described in the article, but this is the way I did it for my image:
fetch('/thing/stuck/in/cache', {method:'POST', credentials:'include'});
The reason the ?x=timestamp trick is used is because that's the only way to do it on a per image basis. That or dynamically generate image names and point to an application that outputs the image.
I suggest you figure out, server side, if the image has been changed/updated, and if so then output your tag with the ?x=timestamp trick to force the new image.
No, there is no way to force a file in a browser cache to be deleted, either by the web server or by anything that you can put into the files it sends. The browser cache is owned by the browser, and controlled by the user.
Hence, you should treat each file and each URL as a precious resource that should be managed carefully.
Therefore, porneL's suggestion of versioning the image files seems to be the best long-term answer. The ETAG is used under normal circumstances, but maybe your efforts have nullified it? Try changing the ETAG, as suggested.
Change the ETAG for the image.
See http://en.wikipedia.org/wiki/URI_scheme
Notice that you can provide a unique username:password@ combo as a prefix to the domain portion of the URI. In my experimentation, I've found that including this with a fake ID (or password, I assume) results in the resource being treated as unique - thus breaking the caching as you desire.
Simply use a timestamp as the username, and as far as I can tell the server ignores this portion of the URI as long as authentication is not turned on.
Btw - I also couldn't use the tricks above with a Google Maps marker icon caching problem I was having, where the ?param=timestamp trick worked but caused issues with disappearing overlays. I never could figure out why that was happening, but so far so good using this method. What I'm unsure of is whether passing fake credentials will have any adverse effects on server performance. If anyone knows, I'd be interested to hear, as I'm not yet in high-volume production.
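A bare-bones illustration of that trick might look like the snippet below; be aware that many modern browsers warn about, or simply strip, credentials embedded in URLs, so test carefully before relying on this. The domain and element id are placeholders.
// Timestamp used as a throwaway "username" to make the URL unique.
var imgUrl = 'http://' + new Date().getTime() + '@example.com/images/42.jpg';
document.getElementById('someImg').src = imgUrl; // "someImg" is just a placeholder id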
Please report back your results.
Since most, if not all, answers and comments here are copies of parts of the question, or close enough, I shall throw my 2 cents in.
I just want to point out that even if there is a way, it is going to be difficult to implement. The logic of it traps us. From a logical stance, telling the browser to replace its cached images for each changed image on a list since a certain date is ideal, BUT... when would you take the list down, and how would you know whether everyone who would visit again has the latest version?
So my 1st "suggestion", as the OP asked for, is this list theory.
How I see doing this is:
A.) Have a list where our dynamically and manually changed image URLs can be stored.
B.) Set a dead date when the cache will be reset and the list will be truncated regardless.
C.0) Check the list on site entrance vs. the browser, via an iframe that could be run in the background with a shorter cache header set, to re-cache them all against the farthest date on the list, or something of that nature.
C.1) Using the iframe or an AJAX/XHR request, I'm thinking you could loop through each image on the list, refreshing the page to show a different image and checking the cache against its own modified date. So, on each image's onload, use the server side to decide whether it is the last image; if it is not, go to the next image once it has loaded.
C.1a) This would mean that our list may need more information per image, and I think the obvious addition is the possible need for some server-side script to adjust the headers as required by each image, to minimize the footprint of re-caching changed site images.
My 2nd "suggestion" would be to notify the user of changes and direct them to clear their cache. (Carefully - remove only images and files when possible, or warn them of data removal due to the process.)
P.S. This is just an educated ideation - a quick theory. If/when I make it, I will post the final version. Probably not here, because it will require server-side scripting. This is at least a suggestion not mentioned in the OP's question that he says he already tried.
It sounds like the base of your question is how to get the old version of the image out of the cache. I've had success just making a new call and specifying in the header not to pull from cache. You're just throwing this away once you fetch it, but the browser's cache should have the updated image at that point.
var headers = new Headers();
headers.append('pragma', 'no-cache');
headers.append('cache-control', 'no-cache');

var init = {
  method: 'GET',
  headers: headers,
  mode: 'no-cors',
  cache: 'no-cache',
};

fetch(new Request('path/to.file'), init);
However, it's important to recognize that this only affects the browser this is called from. If you want a new version of the file for any browser once the image is replaced, that will need to be accomplished via server configuration.
Here is a solution using the PHP function filemtime():
<?php
// Last-modified time of the image file, used as a cache-busting query string.
$addthis = filemtime('myimg.jpg');
?>
<img src="myimg.jpg?<?= $addthis; ?>">
Using the file modification time as a parameter causes the browser to read from its cached version until the file has changed. This approach is better than using e.g. a random number, as caching will still work if the file has not changed.
In the event that an image is re-uploaded, is there a way to CLEAR or REPLACE the previously cached image client-side? In my example above, the goal is to make the browser forget what "42.jpg" is
You're running firefox right?
Find the Tools Menu
Select Clear Private Data
Untick all the checkboxes except make sure Cache is Checked
Press OK
:-)
In all seriousness, I've never heard of such a thing existing, and I doubt there is an API for it. I can't imagine it'd be a good idea on the part of browser developers to let you go poking around in their cache, and there's no motivation that I can see for them to ever implement such a feature.
I CANNOT use the META tag method OR the timestamp method, because I want all of the images cached under normal circumstances.
Why can't you use a timestamp (or etag, which amounts to the same thing)? Remember you should be using the timestamp of the image file itself, not just Time.Now.
I hate to be the bearer of bad news, but you don't have any other options.
If the images don't change, neither will the timestamp, so everything will be cached "under normal circumstances". If the images do change, they'll get a new timestamp (which they'll need to for caching reasons), but then that timestamp will remain valid forever until someone replaces the image again.
When changing the image filename is not an option, use a server-side session variable and a JavaScript window.location.reload() function, as follows:
After Upload Complete:
Session("reload") = "yes"
On page_load:
If Session("reload") = "yes" Then
Session("reload") = Nothing
ClientScript.RegisterStartupScript(Me.GetType(), "ReloadImages", "window.location.reload();", True)
End If
This causes the client browser to refresh only once, because the session variable is reset after one occurrence.
Hope this helps.
To replace the cached picture, you can store a version value on the server side and send that value instead of a timestamp when you load the picture. When the image changes, change its version.
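A small sketch of that idea follows; the /api/image-version endpoint and element id are hypothetical, and how the version is stored server-side is up to you.
fetch('/api/image-version?name=42.jpg')
  .then(function (res) { return res.text(); })
  .then(function (version) {
    // Append the server-provided version instead of a timestamp.
    document.getElementById('img42').src = '42.jpg?v=' + version;
  });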
Try this code snippet:
var url = imgUrl + "?" + Math.random();
This will make sure that each request is unique, so you will always get the latest image.
After much testing, this is the solution I found:
1- I create a temporary folder and copy the images into it, adding time() to the names (if the folder exists, I delete its contents).
2- I load the images from that temporary local folder.
This way I always make sure that the browser never caches the images, and it works 100% correctly.
// $images is assumed to be an array of image filenames in the articulos/ folder.
if (!is_dir(getcwd() . '/articulostemp')) {
    // Create the temp folder with permissive rights.
    $oldmask = umask(0);
    mkdir(getcwd() . '/articulostemp', 0775);
    umask($oldmask);
} else {
    // rrmfiles() is the author's helper that empties the folder.
    rrmfiles(getcwd() . '/articulostemp');
}

foreach ($images as $image) {
    // Copy each image to a timestamped name so the URL changes on every page load.
    $tmpname  = time() . '-' . $image;
    $srcimage = getcwd() . '/articulos/' . $image;
    $tmpimage = getcwd() . '/articulostemp/' . $tmpname;
    copy($srcimage, $tmpimage);

    $urlimage = 'articulostemp/' . $tmpname;
    echo ' <img loading="lazy" src="' . $urlimage . '"/> ';
}
Try the solution below:
myImg.src = "http://localhost/image.jpg?" + new Date().getTime();
The above solution works for me :)
I usually do the same as #Greg told us, and I have a function for that:
function addMagicRefresh(url)
{
    var symbol = url.indexOf('?') == -1 ? '?' : '&';
    var magic = Math.random() * 999999;
    return url + symbol + 'magic=' + magic;
}
This will work as long as your server accepts it and you don't use the "magic" parameter any other way.
I hope it helps.
I have tried something ridiculously simple:
Go to the FTP folder of the website and rename the IMG folder to IMG2. Refresh your website and you will see that the images are missing. Then rename the folder IMG2 back to IMG and it's done; at least it worked for me in Safari.
Related
In my webapp the user has the option to download a file containing some data, which they do by clicking on a button. For small amounts of data the file starts downloading pretty much immediately and that shows in the browser's download area. Which is good.
For large amounts of data it can take the server a substantial amount of time to calculate the data, even before we start downloading. This is not good. I want to indicate that the calculation is in progress. However I don't want to put a "busy" indicator on my UI, because the action does not block the UI - the user should be able to do other things while the file is being prepared.
A good solution from my point of view would be to start the download process before I have finished the calculation. We always know (or can quickly calculate) the first few hundred bytes of the file. Is there a mechanism where I can have the server respond to a download request with those few bytes, thus starting the download and making the file show up in the download area, and provide the rest of the file when I have finished calculating it? I'm aware that it will look like the download is stalled, and that's not a problem.
I can make a pretty good estimate of the file size very quickly. I would prefer not to have to use a third-party package to achieve this, unless it's a very simple one. We are using Angular but happy to code raw JS if needed.
To indicate that the link points to a download on the client, the easiest way is the download attribute on the link. The presence of the attribute tells the browser not to unload the current tab or create a new one; the value of the attribute is the suggested filename.
For the back-end part, after setting the correct response headers, just write the data to the output stream as it becomes available.
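As a rough sketch of both halves, assuming an Express back end (my assumption; the answer doesn't prescribe one), where the endpoint, filename and computeRowsSlowly() are all made up:
// Client side: <a href="/export/report" download="report.csv">Download report</a>

const express = require('express');
const app = express();

app.get('/export/report', async (req, res) => {
  res.set({
    'Content-Type': 'text/csv',
    'Content-Disposition': 'attachment; filename="report.csv"',
  });

  // Send the part we already know immediately - the download shows up in the
  // browser's download area right away, even though the rest isn't ready yet.
  res.write('id,name,value\n');

  const rows = await computeRowsSlowly(); // placeholder for the expensive calculation
  res.write(rows);
  res.end();
});

app.listen(3000);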
You asked for a general solution
1) First, in your HTML/JS you can prevent the UI from being blocked by setting your download target to any other web page; the preferred way of doing this is to set the target to an IFRAME:
<!-- your link must target the iframe "downloader-iframe" -->
<a href="../your-file-generator-api/some-args?a=more-args" target="downloader-iframe">Download</a>
<!-- you don't need the file to be shown -->
<iframe id="downloader-iframe" name="downloader-iframe" style="display: none"></iframe>
2) Second, at your back-end you'll have to use both the Content-Disposition and (optionally) Content-Length headers. Be careful with the "length" one: if you miscalculate the file size, the download will fail. If you don't use Content-Length, you won't see the download progress.
3) Third, at your back-end you have to make sure that you are writing your bytes directly to the response! That way your browser and your web server will know that the download is "in progress".
Example for Java:
Using ServletOutputStream to write very large files in a Java servlet without memory issues
Example for C#:
Writing MemoryStream to Response Object
How these 3 steps are built is up to you and the frameworks and libraries you are using. For example, Dojo and jQuery have great IFRAME manipulation utilities, although you can do the coding yourself. This is a jQuery sample:
Using jQuery and iFrame to Download a File
Also:
Adding a "busy" animation is OK! You just have to make sure that it's not blocking your UI.
I know it's possible to force reload from server using location.reload(true). However, let's say I used that to refresh index.html. If index.html loads a bunch of javascript files, those are still coming from the cache for me. Is there any way to ignore the cache for the duration of a request?
My use case is that I'm doing AB testing on my app, and want to provide a way for users to go back to the old version if something isn't working. But some of the URLs are the same, even though the files between versions are different. It would be nice to be able to handle this in JS rather than having to change every URL on the new version.
There are actually at least 535 different ways to reload a page via JavaScript, FYI ;).
Have you tried putting document in front? document.location.reload(true);
Try also this other option:
window.location.href = window.location.href;
or
history.go(0);
Sure, both are soft reloads, but they seem to work in certain situations.
If nothing works, you have to append random data to the URL (like a timestamp) to force the download from the server, bypassing the cache.
If you want to stop the browser taking JS files from its cache, you need to serve not just a file like script.js but rather script.12345.js. When you update the file on the server, you change the file's hash number to, say, script.54321.js, and the browser understands that the file is different and must be downloaded again. You can use Webpack to automate this: in the output config, instead of {filename: 'bundle.js'} you write {filename: 'bundle.[hash].js'}.
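A minimal webpack.config.js sketch of that idea might look like this (newer webpack versions prefer [contenthash] over [hash], since it changes only when the file's content actually changes; entry and output paths are illustrative):
const path = require('path');

module.exports = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    // The hash in the filename changes whenever the bundle's content changes,
    // so unchanged bundles stay cached and changed ones get a new URL.
    filename: 'bundle.[contenthash].js',
  },
};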
In my web app, I have a large collection of thumbnails; the user is able to select a thumbnail and, client-side, re-crop the original image to create a new thumbnail.
That's fine: in my app, I just set the newly created image instantly as the image source, without reloading it from the server, and in addition the new image is uploaded to the server. This is to ensure a very responsive feel. The problem is that when the user refreshes the page, they see the cached old version of the thumbnail.
I know I could use some image.jpg?sometimestamp to be sure the browser has to download a new version of the thumbnail, but as I said, the app needs to be very responsive, even on small and slow internet connections. (That's why the app itself is stored on the user's computer and not downloaded live. Only uploads, downloads and JSON are transmitted.)
The ideal solution would be to be able to tell the browser: remove this particular file from your cache: someurl.com/somefolder/image.jpg, so that the browser has to fetch it again when it needs it. Is this possible, and how?
So I'm not asking how not to cache files or how to force re-verification on each call; I'm asking how I could remove a particular file from the browser's cache.
P.S.: this can be a WebKit-only solution, as that is the only platform it is running on (it is actually QtWebKit in a Qt project).
The function window.URL.revokeObjectURL() can solve your problem.
The URL.revokeObjectURL() static method releases an existing object URL which was previously created by calling window.URL.createObjectURL(). Call this method when you've finished using an object URL, in order to let the browser know it doesn't need to keep the reference to the file any longer.
For details: https://developer.mozilla.org/en-US/docs/Web/API/URL.revokeObjectURL
Note that this is an experimental function with limited cross-browser support.
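As a sketch of how the object-URL pattern could apply to the question (my interpretation, not the answerer's code): show the freshly cropped thumbnail straight from its Blob, so the HTTP cache never comes into play for the updated image.
function showCroppedThumbnail(imgElement, croppedBlob) {
  var objectUrl = URL.createObjectURL(croppedBlob);

  imgElement.onload = function () {
    // The blob has been painted; the browser no longer needs the reference.
    URL.revokeObjectURL(objectUrl);
  };
  imgElement.src = objectUrl;
}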
You can't. Append a ?[something_volatile] to the request when you need to download it again, and you'll get the same effect. That method won't take away any of the snappiness if you only change the appended string when needed.
Sorry for necropost, but I think it could be helpful for others in the same situation:
When your upload is done, perform an AJAX get request with no-cache header.
By specifying this in the REQUEST, you tell the browser to ignore its cache and perform a new request to the server.
Assuming I have a cached image at /images/jondoe.jpg, it could look like this:
Code (using Axios):
Axios.post('/upload/new-image', FormDataObject).then(() => {
  Axios.get('/images/jondoe.jpg', { headers: { 'Cache-Control': 'no-cache' } }).then(reloadPageFunction);
  // Now you'll have a new image in your cache
});
Maybe this link is useful for you. According to it, you can use the code below:
caches.open('v1').then(function(cache) {
  cache.delete('/images/image.png').then(function(response) {
    someUIUpdateFunction();
  });
});
I am trying to clone an image which is generated randomly.
Although I am using the exact same URL, a different image is loaded (tested in Chrome and Firefox).
I can't change the image server so I am looking for a pure javascript/jQuery solution.
How do you force the browser to reuse the first image?
Try it yourself (maybe you have to reload it several times to see it)
Code:
http://jsfiddle.net/TRUbK/
$("<img/>").attr('src', img_src)
$("<div/>").css('background', background)
$("#source").clone()
Demo:
http://jsfiddle.net/TRUbK/embedded/result/
You can't change the image server if it isn't yours, but you can trivially write something on your own server to handle it for you.
First write something in your server-side language of choice (PHP, ASP.NET, whatever) that:
Hits http://a.random-image.net/handler.aspx?username=chaosdragon&randomizername=goat&random=292.3402&fromrandomrandomizer=yes and downloads it. You generate a key in one of two ways: either get a hash of the whole thing (MD5 should be fine; it's not a security-related use, so worries that it's too weak these days don't apply), or get the size of the image - the latter could have a few duplicates, but is faster to produce.
If the image isn't already stored, save it in a location using that key as part of its filename, and the content-type as another part (in case there's a mixture of JPEGs and PNGs)
Respond with an XML or JSON response containing the URI for the next stage.
In your client-side code, you hit that URI through XMLHttpRequest to obtain the URI to use for your images. If you want a new random one, hit that first URI again; if you want the same image in two or more places, use the same result.
That URI hits something like http://yourserver/storedRandImage?id=XXX where XXX is the key (hash or size as decided above). The handler for that looks up the stored copies of the images, and sends the file down the response stream, with the correct content-type.
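A compressed sketch of that proxy, assuming a Node 18+/Express server (my assumption; the answer doesn't prescribe a language, and it simplifies the content-type handling by treating everything as JPEG); all paths and names are illustrative.
const express = require('express');
const crypto = require('crypto');
const fs = require('fs');
const app = express();

fs.mkdirSync('store', { recursive: true });

// Fetch a fresh random image, key it by an MD5 hash, store it, return a stable URI.
app.get('/newRandImage', async (req, res) => {
  const upstream = await fetch('http://a.random-image.net/handler.aspx?username=chaosdragon&randomizername=goat');
  const buf = Buffer.from(await upstream.arrayBuffer());
  const key = crypto.createHash('md5').update(buf).digest('hex');
  const file = 'store/' + key + '.jpg';
  if (!fs.existsSync(file)) fs.writeFileSync(file, buf);
  res.json({ uri: '/storedRandImage?id=' + key });
});

// The stable URI: the same id always returns the same stored copy.
app.get('/storedRandImage', (req, res) => {
  res.type('image/jpeg');
  res.sendFile(__dirname + '/store/' + req.query.id + '.jpg');
});

app.listen(3000);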
This is all really easy technically, but the possible issue is a legal one, since you're storing copies of the images on another server, you may no longer be within the terms of your agreement with the service sending the random images.
You can try saving the base64 representation of the image.
Load the image into a hidden div/canvas, then convert it to base64. (I'm not sure whether a canvas can be hidden, nor whether it is possible to convert the img using an HTML4 tag.)
Now you can store the "stringified" image in a cookie, and use it unlimited times...
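A quick sketch of the canvas-to-base64 part (my own, under the assumption that the image is same-origin or CORS-enabled; otherwise toDataURL() throws a security error):
function imageToBase64(img) {
  var canvas = document.createElement('canvas'); // never attached to the DOM, so effectively hidden
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  canvas.getContext('2d').drawImage(img, 0, 0);
  return canvas.toDataURL('image/png');
}

// A cookie is tiny (~4 KB), so localStorage is a more realistic home for the string:
// localStorage.setItem('firstRandomImage', imageToBase64(someLoadedImgElement)); // someLoadedImgElement is a placeholder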
The headers being sent from your random image generator script include a Cache-Control: max-age=0 declaration which is in essence telling the browser not to cache the image.
You need to modify your image generator script/server to send proper caching headers if you want the result to be cached.
You also need to make sure that the URL stays the same (I didn't look at that aspect since there were tons of parameter being passed).
There seem to be two workarounds:
If you go with the Canvas method, see if you can get the image to load onto the Canvas itself so that you can manipulate the image data directly instead of making a 2nd http request for the image. You can feed the image data directly onto a 2nd Canvas.
If you're going to build a proxy, you can have the proxy remove the No-Cache directive so that subsequent requests by your browser use the cache (no guarantees here - depends on browser/user settings).
First off, you can't "force" anything on the web. If you need to force things, then web development is the wrong medium for you.
What you could try, is to use a canvas element to copy the image. See https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/Canvas_tutorial/Using_images for examples.
Telling it to stop getting a random image seems to work the way you want; I did it by adding this third replace call:
// Extract the background-image URL from #test and strip the randomizer parameter.
var background = ($("#test").css('background-image')),
    img_src = background.replace(/^.+\('?"?/, '').replace(/'?"?\).*$/, '').replace(/&fromrandomrandomizer=yes/, '');
try:
var myImg = new Image();
myImg.src = img_src;
and then append "myImg" where you want it:
$('body').append(myImg);
I did this with your fiddle scripts and got the same image every time:
#test {
background:url(http://a.random-image.net.nyud.net/handler.aspx?username=chaosdragon&randomizername=goat&random=292.3402&fromrandomrandomizer=yes);
width: 150px;
height: 150px;
}
note the .nyud.net after the domain name.
I've been learning JavaScript recently, and I've seen a number of examples (Facebook.com, the Readability bookmarklet) that use Math.random() for appending to links.
What problem does this solve? An example parameter from the Readability bookmarklet:
_readability_script.src='http://lab.arc90.com/....script.js?x='+(Math.random());
Are there collisions or something in JavaScript that this is sorting out?
As Rubens says, it's typically a trick employed to prevent caching. Browsers typically cache JavaScript and CSS very aggressively, which can save you bandwidth, but can also cause deployment problems when changing your scripts.
The idea is that browsers will consider the resource located at http://www.example.com/something.js?foo different from http://www.example.com/something.js?bar, and so won't use their local cache to retrieve the resource.
Probably a more common pattern is to append an incrementing value which can be altered whenever the resource needs to change. In this way, you benefit by having repeat requests served by the client-side cache, but when deploying a new version, you can force the browser to fetch the new version.
Personally, I like to append the last-modified time of the file as a Unix timestamp, so I don't have to go hunting around and bumping version numbers whenever I change the file.
Main point is to avoid browser caching those resources.
This ensures that the script is unique and will not be cached as a static resource, since the query string changes each time.
This is because Internet Explorer likes to cache everything, including requests issued via JavaScript code.
Another way to do this, without random numbers in the URL, is to add Cache-Control headers to the directories with the items you don't want cached:
# .htaccess
Header set Cache-Control "no-cache"
Header set Pragma "no-cache"
Most browsers respect Cache-Control, but IE (including 7; I haven't tested 8) only acknowledges the Pragma header.
Depending on how the browser chooses to interpret the caching hints for a resource, you might not get the desired effect if you just ask the browser to change the URL to one it has previously used (most mouse-over image buttons rely on the fact that the browser will reuse the cached resource for speed).
When you want to make sure that the browser gets a fresh copy of the resource (like a dynamic stock-ticker image), you force the browser to always think the content is new by appending the date/time, an ever-increasing number, or random garbage.
There is a tool called Squid that can cache web pages. Using a random number will guarantee that a request will not be cached by an intermediary like this. Even with Header set Cache-Control "no-cache" you may still need to add a random number to get through something like Squid.