Why cache AJAX with jQuery? - javascript

I use jQuery for AJAX. My question is simple - why cache AJAX? At work and in every tutorial I read, they always say to set caching to false. What happens if you don't, will the server "store" such requests and get "clogged up"? I can find no good answer anywhere - just links telling you how to set caching to false!

It's not that the server stores requests (though servers may do some caching, especially higher-volume sites, like SO does for anonymous users).
The issue is that the browser will store the response it gets if instructed to (or in IE's case, even when it's not instructed to). Basically, you set cache: false if you don't want the user's browser to show stale data it fetched, say, X minutes ago.
If it helps, look at what cache: false actually does: it appends something like _=190237921749817243 as a query string pair (the value is the current timestamp, so it's always... current). This forces the browser to request the data from the server again: since it doesn't know what that query string means, the URL could point to a different page, and because it can't be sure, it has to fetch again.
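For instance, a minimal sketch of what that looks like in practice (the URL is just an example):

```javascript
// cache: false tells jQuery to append a cache-busting timestamp to the URL
$.ajax({
  url: '/api/messages',   // example endpoint
  cache: false,           // the request goes out as /api/messages?_=1690000000000
  success: function (data) {
    // the browser can't match the busted URL to anything in its cache,
    // so this data always comes from the server
  }
});
```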

The server won't cache the requests, the browser will. Remember that browsers are built to display pages quickly, so they have a cache that maps URLs to the results last returned by those URLs. Ajax requests are URLs returning results, so they could also be cached.
But usually, Ajax requests are meant to do something; you never want to skip them, even if they look like the same URL as a previous request.
If the browser cached Ajax requests, you'd have stale responses, and server actions being skipped.

If you don't turn it off, you'll have issues trying to figure out why your AJAX works but your functions aren't responding as you'd like them to. Forced re-validation at the header level is probably the best way to make sure the data being AJAX'd in never comes from a cache.
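If you control the backend, here is one hedged sketch of that header-level approach (assuming a Node/Express server; your stack may differ):

```javascript
// Tell the browser it must revalidate with the server before reusing a cached copy
const express = require('express');
const app = express();

app.get('/api/data', function (req, res) {
  res.set('Cache-Control', 'no-cache'); // "no-cache" = may be stored, but always revalidated
  res.json({ updatedAt: Date.now() });  // placeholder payload
});

app.listen(3000);
```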

Here's a hypothetical scenario. Say you want the user to be able to click any word on your page and see a tooltip with the definition for that word. The definition is not going to change, so it's fine to cache it.
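In that case you could leave caching on for just that request; a sketch (the /define endpoint and the helpers are made up):

```javascript
// Definitions never change, so let the browser reuse the response it already has
$.ajax({
  url: '/define',
  data: { word: clickedWord }, // assumed variable holding the word that was clicked
  cache: true,                 // jQuery's default for most dataTypes anyway
  success: showTooltip         // assumed helper that renders the tooltip
});
```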

The main problem with caching requests in any kind of dynamic environment is that you'll get stale data back some of the time. And it can be unpredictable when you'll get a 'fresh' pull vs. a cached pull.
If you're pulling static content via AJAX, you could maybe leave caching on, but how sure are you that you'll never want to change that fetched content?

The problem is, as always, Internet Explorer. IE will usually cache the whole request. So, if you are repeatedly firing the same AJAX request then IE will only do it once and always show the first result (even though subsequent requests could return different results).

The browser caches the information, not the server. The point of using Ajax is usually that you're getting information that changes. If there's a part of a website you know isn't going to change, you don't bother fetching it more than once (in which case caching is ok); that's the beauty of Ajax. Since you should only be dealing with information that may be changing, you want the new information, and therefore you don't want the browser to cache.
For example, Gmail uses Ajax. If caching were simply left on, you wouldn't see your new e-mail for quite a while, which would be bad.

Related

Is it a good idea to use a service worker to cache app requests and periodically refetch them?

I have a very slow API which I don't have access to change.
It has endpoints like /data?param1=option1&param2=option2.
Is it a good idea to create a service worker which will cache the responses and periodically refetch them (with a JWT for authentication)? The data changes pretty rarely, like once a week. Are there any caveats?
Nope, that's totally fine.
Probably the most important problem you will encounter is, as usual with caching, the cache invalidation process.
In your case, if you decide to cache responses for a limited amount of time without additional conditions, you can add a header to every cached response that holds the caching date.
Here's the article that explains the idea: How to set an expiration date for items in a service worker cache
In a nutshell, when you intercept a response, you unpack it, add a header with the current timestamp, then put it into the SW cache. Every time you intercept a request to the same endpoint, you get the response from the cache and check the timestamp in the header you set, then return the cached response or refetch the data, depending on the check result.
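A minimal sketch of that idea in a service worker, assuming the /data endpoint from the question, a made-up header name, and a one-week max age:

```javascript
const CACHE_NAME = 'api-cache';
const MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000; // data changes roughly once a week

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (url.pathname !== '/data') return; // only handle the slow API

  event.respondWith((async () => {
    const cache = await caches.open(CACHE_NAME);
    const cached = await cache.match(event.request);
    const cachedAt = cached && Number(cached.headers.get('x-sw-cached-at'));

    if (cached && Date.now() - cachedAt < MAX_AGE_MS) {
      return cached; // still fresh enough, serve from the SW cache
    }

    // stale or missing: refetch (the page itself attaches the JWT to the request)
    const response = await fetch(event.request);
    const body = await response.clone().blob();
    const headers = new Headers(response.headers);
    headers.set('x-sw-cached-at', Date.now().toString()); // the timestamp header
    await cache.put(event.request, new Response(body, {
      status: response.status,
      statusText: response.statusText,
      headers: headers
    }));
    return response;
  })());
});
```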
Another thing to think about is the refetching strategy. There are different approaches here:
Cache-First. You always return the cached data to your app and then go refetch. This way, even when it's time to show fresh data, the user still gets the cached data, and only on the next load do they get the fresh data. Not so pleasant for users, but loading will always be blazing fast (except on the first visit, of course).
Network-First. Vice versa. Users will wait for the fresh data, but it will always be up to date.
Stale-While-Revalidate. Nice, but harder to implement. You return the cached value, then, once the SW has fetched the fresh one, you trigger your app to rerender the page.
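For illustration, a rough stale-while-revalidate shape (the cache name is an assumption, error handling omitted):

```javascript
self.addEventListener('fetch', (event) => {
  event.respondWith((async () => {
    const cache = await caches.open('api-cache');
    const cached = await cache.match(event.request);

    const refetch = fetch(event.request).then(async (fresh) => {
      await cache.put(event.request, fresh.clone());
      // here you could postMessage() the page so it rerenders with the fresh data
      return fresh;
    });

    // keep the SW alive while the background update finishes
    event.waitUntil(refetch.catch(() => {}));

    return cached || refetch; // serve stale immediately if we have it
  })());
});
```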
Here's a nice tutorial explaining how all this caching works: Service workers tutorial

Does visiting a non-existent image URL affect the web server?

I'm planning to set up short polling that loads an image URL, instead of using AJAX, just to check whether the image has already been uploaded from the other hardware device (camera, fingerprint reader, etc.).
Does visiting a non-existent image URL every second affect or slow down the web server? Is it the same as AJAX short polling?
It has nothing to do with slowing down or speeding up the web server; it's all about what your application needs.
Usually, short polling with AJAX is used in order to avoid an "image not found" error. If you have some other way of handling that, or if it's not something your application needs, go right ahead and load as many non-existent images as you like; it won't affect the web server's speed.
However, I do not recommend making an AJAX call every second if you decide to go that way, because the completion of an AJAX request depends on the end user's internet speed, and on a slow connection the browser might queue up many AJAX requests, which can create issues for your application.
Also, in that case, your script will use the last response returned from the server, which is not necessarily the response to the last request you made, so you might end up showing wrong results. This is the same problem we use debouncing for in search-as-you-type.
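One hypothetical way around both problems is to schedule the next check only after the previous one has finished, so requests can never pile up (the URL, interval, and showImage helper are made up):

```javascript
function pollForImage() {
  $.get('/uploads/latest.jpg')
    .done(function () {
      showImage('/uploads/latest.jpg'); // assumed helper: the file exists now, stop polling
    })
    .fail(function () {
      // not uploaded yet (404): wait, then check again; the next request only
      // starts once this one has completed, even on a slow connection
      setTimeout(pollForImage, 2000);
    });
}
pollForImage();
```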
I hope it helps

Chrome Extension webRequest synchronous behavior for async calls

I've created a test web server that I'm using to act as a 'web filter' of sorts. I'm trying to create an extension that uses the webRequest API to make sure that my web server allows or blocks all incoming URL's.
To do this, I'm making an AJAX call from within webRequest to my web server and I'd like to use the response to determine whether to block or allow the specified URL. The problem is, the webRequest method is async, and AJAX calls are async, so I can't reliably wait for a response from my server.
I also can't store all blocked / allowed URLs in localStorage, because there could potentially be hundreds of thousands. I've tried using jQuery's async: false property in its ajax implementation, but that makes the browser almost completely unusable when hundreds of requests are happening at the same time. Does anyone have any ideas as to how I might work around this?
EDIT: I know similar questions to this have been asked before, but there haven't been any viable solutions to this problem that I've seen.
I see only two good choices:
make that site a web proxy
use the unlimitedStorage permission and store the URLs in a WebSQL database (it's also the fastest). Despite the general concern that it may be deprecated in Chrome after the W3C stopped developing the specification in favor of IndexedDB, I don't think that will happen any time soon, because all the other available storage options are either [much] slower or less functional.
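Either way, the blocking check itself has to be synchronous; a minimal sketch under the assumption that you keep a local copy of the list in memory (Manifest V2 blocking webRequest):

```javascript
// Filled from your local store (WebSQL/IndexedDB) when the extension starts
const blockedUrls = new Set();

chrome.webRequest.onBeforeRequest.addListener(
  function (details) {
    // must return synchronously — no AJAX round-trip to the filter server here
    return { cancel: blockedUrls.has(details.url) };
  },
  { urls: ['<all_urls>'] },
  ['blocking']
);
```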

CSRF-Prevention for XHR-Requests

I did just learn about the details of CSRF-prevention. In our application, all "writing" requests are done using XHR. Not a single form is actually submitted in the whole page, everything is done via XHR.
For this scenario, Wikipedia suggests the Cookie-to-Header Token approach. There, some random value is stored in a cookie during login (or at some other point in time). When making an XHR request, this value is then copied into a custom HTTP header (e.g. "X-Csrf-Token"), which is then checked by the server.
Now I am wondering whether the random value is actually necessary at all in this scenario. I think it should be enough to just set a fixed custom header like "X-anti-csrf: true". That seems a lot more stable than dragging a random value around. But does this open any security issues?
It depends on how many risky assumptions you want to make.
you have properly configured your CORS headers,
and the user's browser respects them,
so there is no way a malicious site can send an XHR to your domain,
which is the only way to send custom headers with a request
If you believe in all that, sure, a fixed custom header will work.
If you remove any of the assumptions, then your method fails.
If you make the header impossible to guess, then you don't need to make those assumptions. You're still relying on the assumption that the value of that header can't be intercepted and duplicated by a third party (there's TLS for that).
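For reference, a sketch of the cookie-to-header pattern the question describes, with a random per-session token (the cookie and header names are examples):

```javascript
// Read the token the server stored in a cookie at login
function getCookie(name) {
  const match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

// Copy it into a custom header on every jQuery XHR; the server compares
// the header value against the token it issued for this session
$.ajaxSetup({
  beforeSend: function (xhr) {
    xhr.setRequestHeader('X-CSRF-Token', getCookie('csrf_token'));
  }
});
```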

Performance implications of multiple GET calls vs. more complex POST/GET request to AJAX in JSON from a server

I'm working on a website where the frontend is powered by AJAX, interacting with the server in the ordinary RESTful manner and receiving responses as JSON.
It's simple enough to manage POST, DELETE, and PUT requests, since these won't differ at all from a traditional approach. The challenge comes in GETting content. Sometimes, the client needs to GET one resource from the server. Sometimes more.
Is there a performance issue with firing off each individual GET request asynchronously and populating the DOM elements as the responses come in?
I imagine so (but correct me if I'm wrong), and that a performance gain is possible by, say, sending an array of API queries in one request and having the server reply with a corresponding JSON array of responses.
POST, DELETE, and PUT seem semantically incorrect for this sort of task, but so does GET. After all, sending a GET request to /ajax_handler?q=get_users,get_pages,get_current_user seems kind of weird, since I'm used to thinking of a GET request as fetching a single resource.
Yet another alternative is just to prepare all the relevant data for each GET query (like you would for a regular non-AJAX page) and send it all together, leaving it to the client to figure out what's significant/new, possibly via a last-modified item in each JSON array.
For fear of being closed as subjective, my specific question is whether there is a semantically ideal way to use one GET request to GET multiple, only distantly related pieces of data from a server, or whether the performance gain is even worth it?
The performance difference will depend on several factors. You may need to be concerned about latency between the UI and the server side. If this is substantial, then more than a few round-trips will start to make your UI look sluggish.
In addition, each separate request from the UI forces an authentication check, and may also increase the load on your DB, both for ASP.NET session state (if you're using a DB for that) and for the retrieval of individual resources (unless you're caching the results somewhere). Caching just to support a chatty interface will increase the memory load on your servers and could result in thrashing during periods of high load.
On top of that, you may be paying for extra bandwidth that is not relevant to the request itself (i.e. TCP/IP packet headers), and that overhead grows in proportion to how chatty your system is.
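To make the trade-off concrete, a sketch of the two approaches (the endpoints and render helpers are hypothetical):

```javascript
// Chatty version: three round trips, three auth checks, three sets of headers
$.getJSON('/api/users', renderUsers);
$.getJSON('/api/pages', renderPages);
$.getJSON('/api/current_user', renderCurrentUser);

// Batched version: one round trip, at the cost of a slightly odd-looking GET
$.getJSON('/api/batch', { q: 'users,pages,current_user' }, function (data) {
  renderUsers(data.users);
  renderPages(data.pages);
  renderCurrentUser(data.current_user);
});
```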
