I am caching some data, like thumbnails and JSON, in our web app.
Now I want to delete the old data when I reach the disk space limit.
Chrome shows in its dev tools (not perfectly; it doesn't show the correct time for self-created responses) a Time Cached attribute in the Cache Storage.
So this data must be somewhere and I want to use it.
My plan would be to use cache.matchAll and sort the result by the Time Cached attribute to delete the oldest entries.
But matchAll just returns normal Responses, which don't have Time Cached.
Actually, the Cache API stores the new response over the old one if the request URL is the same, so when you cache.match(event.request) you always get the newest (and only) one. Also, in my case the response has a 'date' header, which you can compare against the current date to find out whether you need to fetch from the network or not.
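A minimal sketch of that date-header check inside a service worker (the cache name and freshness window are assumptions, not from the original answer):

// Serve a cached response only while it is fresh enough, judged by its 'date' header.
var MAX_AGE_MS = 24 * 60 * 60 * 1000; // assumed freshness window of one day

async function freshFromCacheOrNetwork(request) {
    var cached = await caches.match(request);
    if (cached) {
        var dateHeader = cached.headers.get('date');
        var age = dateHeader ? Date.now() - new Date(dateHeader).getTime() : Infinity;
        if (age < MAX_AGE_MS) {
            return cached; // still fresh, no network needed
        }
    }
    // Missing or stale: refetch; cache.put overwrites the old entry for this URL.
    var response = await fetch(request);
    var cache = await caches.open('app-cache'); // hypothetical cache name
    cache.put(request, response.clone());
    return response;
}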
I'm reading TIFF and PDF files off a network drive and returning each page as an image to the browser, where it gets displayed as a JPG. This part works fine. However, I'm finding it inefficient because I first have to make a request to the server to determine how many pages the file has, which results in reading the image in on the server, getting the number of pages, and returning that value. Once that value is returned to the browser, I then have to make a request for each page to be returned as an image, so the file on the network drive is read in again, and the requested page number is returned as a byte[] representing a BufferedImage of the page.
What I'd ultimately like to do is make a request for the first page in the file, and then in the response to the browser, indicate the number of pages in the file so that each additional page can be requested. This would reduce the amount of requests as no initial request would be required just to determine the number of pages.
I'm not sure if this is possible. I've spent some time researching to see if I could get response headers from images, but haven't found anything.
Why not add the info to a cookie via the response in your image-serving controller?
If you name the cookie in a way that would "link" it to a specific set of images, you wouldn't need to hassle with "how can I read headers of an image?"
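A rough sketch of that idea on the client side (the cookie naming scheme and URL are invented for illustration); the controller would set the cookie on the page-1 response:

// Plain cookie lookup; "pageCount_scan.tif" is an invented naming scheme.
function readCookie(name) {
    var parts = document.cookie.split('; ');
    for (var i = 0; i < parts.length; i++) {
        var pair = parts[i].split('=');
        if (pair[0] === name) return pair[1];
    }
    return null;
}

var img = new Image();
img.onload = function () {
    // The image-serving controller set this cookie alongside the page-1 response.
    var pages = parseInt(readCookie('pageCount_scan.tif'), 10);
    // ...now request pages 2..pages as images.
};
img.src = '/images/scan.tif?page=1';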
We're using DataTables as our table, and we're having a problem/disagreement about how to keep the history of filters that were applied to the table, so that users can go back/forward and refresh through them.
Now, one solution that was proposed was to keep the filter string in the URL and pass it around as a GET request, which would work well with back/forward and refresh. But as I have very customized filtering options (nested groups of filters), the filter string gets quite long, actually too long to pass with a GET request because of the URL length limit.
So as GET is out of the question, the obvious solution would be a POST request, and this is what we can't agree upon.
The first solution is to use the POST request and put up with the "annoying" popup every time we try to go back/forward or refresh. We also break the POST/Redirect/GET pattern that we use throughout the site, since there will be no GET.
Pros:
Simple solution
No second requests to the server
No additional database request
No additional database data
Only save the filter to the database when you choose to, so that you can re-apply it whenever you want
Cons:
Breaks the POST/Redirect/GET pattern
Having to push POST data with pushState (history.js)
How to get refresh to work?
The second solution is to use the POST request; the server side saves the data in the DB, gets an ID for requesting the saved data, and returns it, and the client then does a GET request with this ID, which the server side matches back to the data, returning the right filter, thus retaining the POST/Redirect/GET pattern. This solution makes two requests and saves every filter that users use to the database. Each user would have only a limited number of 'history' filters saved in the database, with older ones getting removed as new ones are applied. Basically, the server side would shorten your URL by saving the long data to the database, like a URL-shortening site does (a sketch of the client flow follows the lists below).
Pros:
Keeps the POST/Redirect/GET pattern
No popup messages about re-sending POST data when going back/forward or refreshing the page
Cons:
Complicated solution
Additional request to the server
Additional request to the database
A lot of data in the database that will not be used unless the user goes back/forward or refreshes the page
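For illustration, a minimal client-side sketch of that second solution (the endpoint path, field name, and query parameter are all invented):

function applyFilter(filterJson) {
    // POST the long filter string; the server stores it and answers with a short id.
    $.post('/filters', { filter: filterJson }, function (data) {
        // Redirect after POST: the short id goes into the URL, so back/forward
        // and refresh become plain GETs that the server resolves back to the filter.
        window.location.href = '?filterId=' + data.id;
    });
}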
A third solution would be very welcome, or pick one of the above and ideally explain why.
This is a fleeting thought I just had... you can save the state of length, filtering, pagination and sorting by using bStateSave: http://datatables.net/examples/basic_init/state_save.html
My thought was that, theoretically, you could save the cookie generated by datatables.js into a database table, like you mention in the second solution, but the request only has to happen each time you want to overwrite the current filter, replacing the current cookie with the previous "history" cookie.
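For reference, a minimal init sketch using the legacy 1.9 option named in the link (the table id is a placeholder):

$(document).ready(function () {
    // bStateSave persists paging, filtering and sorting across reloads
    // (stored in a cookie by DataTables 1.9).
    $('#example').dataTable({
        "bStateSave": true
    });
});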
I'm trying to determine the best way to cache my JavaScript and CSS files.
There are several ways of doing this:
Using the Date, Expires and Cache-Control headers
Using the ETag header
Cache forever and change the filename when the file changes
Append a querystring to the filename in the HTML with the last mod time or an MD5 of the file contents
I was under the impression that the last method (4) was the most reliable and would result in the fewest unnecessary requests, but my friend just told me that sometimes the querystring method is unreliable and you actually need to change the filename.
Are there any downsides to setting the HTTP headers to cache forever and just using a query-string with the last mod time, or are there scenarios where another method would be more beneficial?
I'm a big fan of method 4, but I use the session ID in it. So, a user that enters my website will load it once per session (a session usually dies if the visitor stays inactive for more than 20 minutes or closes the browser window).
In ASP.NET, I use this syntax:
<script src="js/DetalhesCurso.js?<%=Session.SessionID%>"></script>
Your third method is the most reliable. Some CDNs/proxies ignore the query string altogether, and just serve the same cached file regardless of the query string value.
Amazon and Azure do support it, but others might not.
Do note that in method #3 you don't actually have to update the filename itself. You can just use some URL rewriting to always get that same file. You'll only have to update your HTML.
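As a sketch of how the versioned filename could be generated at build time (Node is assumed here; the file path is a placeholder), with URL rewriting mapping the hashed name back to the real file:

var crypto = require('crypto');
var fs = require('fs');

function versionedUrl(path) {
    var hash = crypto.createHash('md5')
        .update(fs.readFileSync(path))
        .digest('hex')
        .slice(0, 8);
    // e.g. "app.js" -> "app.3f2a9c1b.js"; URL rewriting maps it back to app.js.
    return path.replace(/\.js$/, '.' + hash + '.js');
}

console.log(versionedUrl('app.js')); // placeholder path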
I am populating a dropdown from a jQuery AJAX call to a web service that returns JSON data. If for some reason the web service fails or is unavailable, how do I handle this? I don't want my users looking at an empty dropdown.
How can I cache the last successful call and use that instead?
If the data is the same for all users and doesn't change too frequently, you could cache it on your server, and only do the AJAX call if cache is unavailable/stale. (If cache is stale, yet AJAX call fails, it may be better to serve slightly stale data than no data.)
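A minimal sketch of that server-side cache, assuming an Express app and Node's built-in fetch (Node 18+); the route, upstream URL, and TTL are placeholders:

var express = require('express'); // assumed dependency
var app = express();

var cache = { data: null, fetchedAt: 0 };
var TTL_MS = 5 * 60 * 1000; // consider the cached copy stale after five minutes

app.get('/dropdown-data', async function (req, res) {
    var stale = Date.now() - cache.fetchedAt > TTL_MS;
    if (!cache.data || stale) {
        try {
            var upstream = await fetch('https://example.com/service.json'); // placeholder URL
            cache.data = await upstream.json();
            cache.fetchedAt = Date.now();
        } catch (err) {
            // Upstream failed: fall through and serve stale data if we have any.
        }
    }
    if (cache.data) return res.json(cache.data);
    res.status(503).json({ error: 'service unavailable' });
});

app.listen(3000);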
Be careful about caching the data - there are several services (I'm thinking of the DVLA here in Britain) for which caching data constitutes a breach of their terms of use...
You need to indicate to the user that the service has failed for whatever reason.
Perhaps you should try to load the data once the page is ready, and if the web service fails, navigate to an error page?
If the service is unavailable, then you will have to inform your user that the page did not load correctly, or retry once or twice; but chances are that if it's down the first time, it'll be down for a while.
You can use DOM Storage to cache results in the browsers that support it. Or write a cookie.
And for ones that don't, or if you have no data, just hide the drop down and display an error panel in its place.
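A sketch combining both suggestions with localStorage (the URL, storage key, and element ids are placeholders):

function fillDropdown(items) {
    var $dd = $('#dropdown').empty();
    $.each(items, function (i, item) {
        $dd.append($('<option>').val(item.value).text(item.label));
    });
}

$.getJSON('/api/options')
    .done(function (data) {
        // Remember the last good result for later failures.
        localStorage.setItem('lastOptions', JSON.stringify(data));
        fillDropdown(data);
    })
    .fail(function () {
        var cached = localStorage.getItem('lastOptions');
        if (cached) {
            // Fall back to the last successful call.
            fillDropdown(JSON.parse(cached));
        } else {
            // No data at all: hide the dropdown, show an error panel instead.
            $('#dropdown').hide();
            $('#error-panel').show();
        }
    });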
I want to serve from cache for a long time, and I want slightly different behavior on a fresh page render vs. loading the page from cache. Is there an easy way I can determine this with JavaScript?
I started with the answer "Daniel" gave above but I fear that over a slow connection I could run into some latency issues.
Here is the solution that ultimately worked for me. On the server side I add a cookie refCount and set its value to 0. On document load in JavaScript I first read refCount and then increment it. If refCount is greater than 1, I know the page is cached. So far this works like a charm.
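A minimal sketch of that check, assuming the server resets a refCount=0 cookie on every fresh (non-cached) response:

// Read the refCount cookie, increment it, and write it back.
function getRefCount() {
    var match = document.cookie.match(/(?:^|; )refCount=(\d+)/);
    return match ? parseInt(match[1], 10) : 0;
}

var refCount = getRefCount() + 1;
document.cookie = 'refCount=' + refCount + '; path=/';

// Fresh load: the server just reset the cookie, so refCount is now 1.
// Cached load: the server never ran, so the count climbs past 1.
var isCached = refCount > 1;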
Thanks guys for leading me to this solution.
One way you could do it is to include the time the page was generated in the page and then use some javascript to compare the local time to the time the page was generated. If the time is different by a threshold then the page has come from a cache. The problem with that is if the client machine has its time set incorrectly, although you could get around this by making the client include its current system time in the request to generate the page and then send that value back to the client.
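A minimal sketch of the first variant (the templating placeholder and threshold value are assumptions):

// The server stamps its generation time into the page, e.g. via a template:
var generatedAt = 0; // placeholder for something like {{ server_time_ms }}
var THRESHOLD_MS = 10 * 1000; // slack for slow connections and minor clock skew

var fromCache = Date.now() - generatedAt > THRESHOLD_MS;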
With the new Resource Timing Level 2 spec you can use the transfer size property to check if the page is loaded from cache:
var isCached = performance.getEntriesByType("navigation")[0].transferSize === 0;
Spec: https://www.w3.org/TR/resource-timing-2/#dom-performanceresourcetiming-transfersize
Browser support: https://developer.mozilla.org/en-US/docs/Web/API/PerformanceNavigationTiming#Browser_compatibility
Note that at the time of writing, it shows that Safari does not support this, while in actuality the latest version does.
While this question is already 4 years old, I thought I would add my 2 cents using jQuery and the History plugin.
$(document).ready(function () {
    // Mark the page on its first (uncached) load.
    $('body').append('<div class="is_cached"></div>');
});

History.Adapter.bind(window, 'statechange', function () {
    // If the marker is still in the DOM after a history change,
    // the page was served from cache.
    if ($('.is_cached').length >= 1) {
        alert('this page is cached');
    }
});
When the document is first loaded, a new div.is_cached is appended. There isn't a cross-browser way to execute JavaScript when a cached page is loaded, but you can monitor for history changes. When the history changes and the div.is_cached element exists, the user is viewing a cached page.
Using XMLHttpRequest you can pull up the current page and then examine the HTTP headers of the response.
The best case is to just do a HEAD request and then examine the headers.
For some examples of doing this have a look at http://www.jibbering.com/2002/4/httprequest.html
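A minimal sketch of that HEAD request (synchronous here only for brevity):

var xhr = new XMLHttpRequest();
xhr.open('HEAD', window.location.href, false); // synchronous, for brevity only
xhr.send();
// Inspect caching-related headers such as Date, Age, Last-Modified, Cache-Control.
console.log(xhr.getAllResponseHeaders());
console.log(xhr.getResponseHeader('Date'));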
Not directly; some browsers may have a custom command for it.
There is a workaround that will do what you want. Use a cookie to store the timestamp of the first visit, and then use META HTTP-EQUIV to set the length of time the file is cached (cacheLength). If the current time is within the period from timestamp to timestamp + cacheLength, then treat it as if it loaded from cache. Once the cache has expired, reset the cookie time.
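A sketch of that workaround (the cookie name and cache length are illustrative):

var CACHE_LENGTH_MS = 60 * 60 * 1000; // must match the META HTTP-EQUIV expiry (one hour here)

function getCookie(name) {
    var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=(\\d+)'));
    return match ? parseInt(match[1], 10) : null;
}

var firstVisit = getCookie('firstVisit');
var now = Date.now();

if (firstVisit === null || now > firstVisit + CACHE_LENGTH_MS) {
    // First visit, or the cache window has expired: reset the timestamp.
    document.cookie = 'firstVisit=' + now + '; path=/';
} else {
    // Still inside the cache window: treat this load as coming from cache.
}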
Add unique data to the page on the server at creation time, for example a random number or the creation timestamp:
window.rand = {{ rand() }}
Use local storage to save the URL together with the number, and compare it later if needed:
function reloadIfCached() {
    // The stored value only matches when this exact page (same rand)
    // was seen before, i.e. it came from cache.
    var cached = localStorage.getItem(window.location.href) == window.rand;
    if (cached) {
        window.location.reload();
    }
    localStorage.setItem(window.location.href, window.rand);
}
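One way to wire it up (hypothetical) is to run it on every load, including back/forward restores from the cache:

window.addEventListener('pageshow', reloadIfCached);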