I'm looking to make one XMLHttpRequest (every 30 minutes, to retrieve a weather forecast) and use the XML response across multiple HTML documents. I've looked near and far and can only get the parsed XML to show on one document.
Is there a way to reference different documents from one JavaScript function?
No framework, just straight JavaScript/Ajax.
var forecastXMLreq = new XMLHttpRequest();
forecastXMLreq.open("GET", forecastURL, false); // synchronous request
forecastXMLreq.send();
var forecastXML = forecastXMLreq.responseXML;
// Pull the text of the second <weekday_short> element out of the parsed XML
var day1 = forecastXML.getElementsByTagName("weekday_short")[1].childNodes[0].nodeValue;
document.getElementById("day1").innerHTML = day1.toUpperCase();
Multiple HTML files, one XHR call is what I'm looking for.
The easiest way would be to leverage regular HTTP caching. On subsequent pages you still request the file in your code, but the browser can skip the network request and transparently serve the response from its local disk cache instead. You aren't guaranteed it will stay cached for the full 30 minutes: the browser has a limited amount of cache space, and it decides what to purge, and when.
Just configure the server to send the following HTTP header with that XML response:
Cache-Control: max-age=1800
More info on HTTP caching: http://www.mnot.net/cache_docs/
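As one hedged illustration, here's how that header might be set with a Node.js/Express server (the route and the forecastXml variable are placeholders; any server that can set response headers works the same way):
var express = require("express");
var app = express();

app.get("/forecast.xml", function (req, res) {
    res.set("Cache-Control", "max-age=1800"); // let browsers reuse it for 30 minutes
    res.type("application/xml");
    res.send(forecastXml); // placeholder: however you produce the XML
});

app.listen(3000);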
An alternative is to make use of the limited support browsers offer for HTML5 local storage. No web server configuration is needed and you don't need to re-request the file, but browser support is limited and you'll need different code to retrieve the data from local storage.
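A rough sketch of that approach (the storage keys and the 30-minute window are my own choices, not anything standard):
function getForecastXML(callback) {
    var cached = localStorage.getItem("forecastXML");
    var fetchedAt = Number(localStorage.getItem("forecastFetchedAt"));
    // Reuse the stored copy if it is less than 30 minutes old
    if (cached && Date.now() - fetchedAt < 30 * 60 * 1000) {
        callback(new DOMParser().parseFromString(cached, "text/xml"));
        return;
    }
    var req = new XMLHttpRequest();
    req.open("GET", forecastURL, true);
    req.onload = function () {
        localStorage.setItem("forecastXML", req.responseText);
        localStorage.setItem("forecastFetchedAt", String(Date.now()));
        callback(req.responseXML);
    };
    req.send();
}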
Related
I am currently working on a tool that requires fetching data from a webpage (something similar to scraping, but not exactly). What I need is a way to get the response body of every request a page loads. I found a solution (confess.js) which uses PhantomJS to fetch the body of the main (initiator) request and to list the URLs, headers and cookies for the main and sub-requests, even the body size. But I can't seem to find a way to fetch the body data of the sub-requests (resources like JS, CSS, images, and any XHR requests). What would be the best way to achieve this? (I do not want to hit each URL individually, thereby doubling the number of hits on my webpage.) Any help would be appreciated. Thanks.
There is a simple answer:
https://mitmproxy.org/
Install it locally and configure your browser to use this proxy. Then you can track all the traffic (and it supports HTTPS easily).
If you need programmatic access to this data, you'd better take a look at some Node.js proxy libraries (http://anyproxy.io, https://github.com/nodejitsu/node-http-proxy).
You want a "reverse proxy" through which you pass all requests. You then get control over the request/response of every outgoing request from the page, and you can "catch" URLs, bodies, etc.
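A minimal sketch with node-http-proxy (assuming a single target host; a real setup would route on req.url):
var http = require("http");
var httpProxy = require("http-proxy");

var proxy = httpProxy.createProxyServer({});

// Collect the body of every proxied response as it streams through
proxy.on("proxyRes", function (proxyRes, req, res) {
    var chunks = [];
    proxyRes.on("data", function (chunk) { chunks.push(chunk); });
    proxyRes.on("end", function () {
        var body = Buffer.concat(chunks);
        console.log(req.url, body.length); // inspect or store the body here
    });
});

http.createServer(function (req, res) {
    proxy.web(req, res, { target: "http://example.com" }); // assumed target
}).listen(8000);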
I have a JavaScript application that uses a REST API server as a data provider.
There is one method on the API that takes a GET request and returns a raw response containing an email (as far as I can see, it is some kind of .eml content).
I use a simple XMLHttpRequest.
The question is: how can I take the response (the file content) and hand it to the browser so that the browser begins a download?
Is it possible to do at all with the GET method?
JavaScript does not support downloading and saving arbitrary files to a user's computer, due to obvious security concerns.
There are, however, a few ways to indirectly trigger the download using JavaScript. One of those ways is to use an invisible iframe and set its source to the path of the file.
You might be waiting for browsers to implement window.saveAs; see also the question Using HTML5/JavaScript to generate and save a file.
There are several snippets you could try, for instance https://github.com/eligrey/FileSaver.js or https://gist.github.com/MrSwitch/3552985.
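As a sketch of the idea (the endpoint and filename are placeholders), in browsers that support Blob URLs and the download attribute you could do something like:
var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/email", true); // hypothetical endpoint
xhr.responseType = "blob";
xhr.onload = function () {
    // Hand the response to the browser via a temporary <a download> link
    var url = URL.createObjectURL(xhr.response);
    var a = document.createElement("a");
    a.href = url;
    a.download = "message.eml"; // suggested filename
    document.body.appendChild(a);
    a.click();
    document.body.removeChild(a);
    URL.revokeObjectURL(url);
};
xhr.send();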
Depending on how you have your client running, you could use local storage.
To store the item:
localStorage.setItem('NAME', DATA);
To retrieve it:
localStorage.getItem('NAME');
And to delete it:
localStorage.removeItem('NAME');
Then set up a callback or promise to render the data into the HTML. If you use axios you can set this up with a promise: https://github.com/mzabriskie/axios
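For example, a minimal sketch with axios (the URL and element ID are placeholders):
axios.get("/api/data").then(function (response) {
    localStorage.setItem("NAME", JSON.stringify(response.data)); // store
    document.getElementById("output").innerHTML = localStorage.getItem("NAME"); // render
});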
I've noticed that Firefox does not cache GET requests automatically. This is the code I use:
var ajax = new XMLHttpRequest();
ajax.open("GET","page.php?val=" + val,true);
ajax.send();
With jQuery it is possible to pass cache: true; how can I cache the response with vanilla JavaScript (client side)? Is it also possible to decide for how long? Can you give me an example of code? Thanks in advance!
Web caching is largely controlled by the headers sent from the server (Expires:, etc.). Browsers sometimes "cheat" and don't really cache even though the headers would allow them to, probably because the user has used the UI to turn off caching, for example by setting the cache size to zero. But browsers that "cheat" in the other direction, caching anyway even though the headers don't allow it, are (for good reason) extremely uncommon.
If caching is not happening for you, it's a function of the file and the server (or perhaps the browser configuration), not of any browser type or version. (To say the same thing a different way: your Firefox would cache just fine if the server sent the needed headers.) The headers are controlled in a variety of ways by different servers and different providers. For the Apache server, the nitty-gritty may be in an .htaccess file, for which pre-written templates are often available.
To a first approximation, with HTML4 you simply cannot control web caching from the client side, no matter what tool you use and no matter what your program does. A generic exception is the "offline application cache" (appcache) in HTML5, but it comes with restrictions of its own, for example "one per site" and "same origin".
You can cache responses using a simple hash, something like:
var cache = {};

function getData(variable, callback) {
    // Serve from the in-memory cache when we already have the data
    if (cache[variable]) {
        callback(cache[variable]);
        return;
    }
    var ajax = new XMLHttpRequest();
    ajax.open("GET", "page.php?val=" + variable, true);
    ajax.onload = function () {
        cache[variable] = ajax.responseText; // remember it for next time
        callback(ajax.responseText);
    };
    ajax.send();
}
Since the request is asynchronous, the data comes back through a callback rather than a return value.
That's a naive implementation of a caching mechanism: it only works for the lifetime of the page (i.e., until the page is refreshed or navigated away from), it has no expiration mechanism, and I'm sure it has other shortcomings. For instance, you could use localStorage to get around the refresh issue.
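A sketch of that localStorage variant, with a crude expiry bolted on (the 30-minute lifetime is arbitrary):
function setCached(key, data) {
    localStorage.setItem(key, JSON.stringify({ data: data, at: Date.now() }));
}

function getCached(key) {
    var entry = JSON.parse(localStorage.getItem(key) || "null");
    // Treat entries older than 30 minutes as expired
    if (!entry || Date.now() - entry.at > 30 * 60 * 1000) return null;
    return entry.data;
}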
But hey, I'm not getting paid to write this :).
I need to find out whether something has changed on a website using an RSS feed. My solution was to repeatedly download the entire RSS file, get entries.length, and compare it with the last known entries.length. I find this a very inelegant solution. Can anyone suggest a different approach?
Details:
• My application is an HTML file which uses JavaScript. It should be small enough to function as a desktop gadget or a browser extension.
• Currently, it downloads the RSS file every thirty seconds just to get the length.
• It can download from any website with an RSS feed.
Comments and suggestions are appreciated, thanks in advance~ ^^
Many RSS feeds use the <lastBuildDate> element, a child of <channel>, to indicate when they were last updated. There's also a <pubDate> element, a child of <item>, that serves a similar purpose. If you plan on reading Atom feeds, they have the <updated> element.
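For instance, given a parsed RSS document you could compare the value against the one you saw last time (a sketch, assuming the feed actually populates <lastBuildDate>):
var lastBuild = null;

function feedChanged(rssDoc) {
    var node = rssDoc.getElementsByTagName("lastBuildDate")[0];
    var value = node ? node.childNodes[0].nodeValue : null;
    var changed = value !== null && value !== lastBuild;
    lastBuild = value; // remember for the next poll
    return changed;
}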
There are HTTP headers that can be used to determine if a resource has changed. Learn how to use the following headers to make your application more efficient.
HTTP Request Headers
If-Modified-Since
If-None-Match
HTTP Response Headers
Last-Modified
ETag
The basic strategy is to store the above-mentioned response headers returned on the first request, then send the stored values in the corresponding request headers on future requests. If the resource has not changed, you'll get back an HTTP 304 Not Modified response and the resource will not even be downloaded, which makes for a very lightweight update check. If the resource has changed, you'll get back an HTTP 200 OK response and the resource will be downloaded in the usual way.
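A sketch of that strategy with XMLHttpRequest (note that for cross-origin feeds the browser's same-origin policy still applies):
var lastModified = null;
var etag = null;

function checkFeed(url, onChanged) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    // Send back the validators saved from the previous response
    if (lastModified) xhr.setRequestHeader("If-Modified-Since", lastModified);
    if (etag) xhr.setRequestHeader("If-None-Match", etag);
    xhr.onload = function () {
        if (xhr.status === 304) return; // not modified, nothing to do
        lastModified = xhr.getResponseHeader("Last-Modified");
        etag = xhr.getResponseHeader("ETag");
        onChanged(xhr.responseText);
    };
    xhr.send();
}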
You should be keeping track of the GUIDs/article IDs to see whether you've seen an article before.
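A sketch of that bookkeeping (assuming the feed provides <guid> elements):
var seen = {};

function newItems(rssDoc) {
    var fresh = [];
    var guids = rssDoc.getElementsByTagName("guid");
    for (var i = 0; i < guids.length; i++) {
        var id = guids[i].childNodes[0].nodeValue;
        if (!seen[id]) {
            seen[id] = true;
            fresh.push(id); // an article we have not seen before
        }
    }
    return fresh;
}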
You should also see whether your source supports conditional GETs, which let you check whether anything has changed without downloading the whole file. You can quickly check with this tool whether your source supports them. (I wish everyone did.)
Is it possible to use JavaScript to dynamically change the HTTP Headers received when loading an image from an external source? I'm trying to control the caching of the image (Expires, Max-Age, etc...) client-side since I do not have access to the server.
As the others have said, no, it is not possible to manipulate HTTP headers and caching directives from the server in client code.
What is possible
What you do have the ability to do is ensure you get a new file. This can be done by appending a unique string to the URL of the request as a query string parameter.
e.g. if you wanted to ensure you got a new file each hour
<script type="text/javascript">
// Build a value that changes once per hour
var d = new Date();
url += ("?" + d.getFullYear() + "_" + d.getMonth() + "_" + d.getDate() + "_" + d.getHours());
</script>
What this does is add a value containing the year, month, day and hour to the URL, so it will be unique for each hour, hence ensuring a new file request. (Not tested!)
Obviously this can be made much more generic and fine tuned, but hopefully you'll get the idea.
What is impossible
What you can't do is force the browser to use its cached copy: you cannot ensure it will not retrieve a new version from the server.
Caching directives are the server's responsibility. You can't manipulate them on the client side.
Maybe it's an option for you to install a proxy server, e.g. if you are aiming at company employees?
I do not think JavaScript can actually do that: the images are requested by the browser, and it's up to the browser to decide which HTTP headers to send.
One way to use custom headers would be some kind of Ajax request, not going through an <img> tag; but then you'd have to know what to do with the returned data... I don't think it would help much.
If you want your images to be kept in cache by the browser, your server has to send the right headers in the responses (like ETag and/or Expires; see mod_expires for Apache, for instance).
If you want to be absolutely sure the browser will download a new image, and not use the version it has in cache, you should use a different URL each time.
This is often done using the current timestamp as a parameter in the URL, like example.com/image.jpg?123456789 (123456789 being the current timestamp): since it changes every second, the browser sees a new URL each time.
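For example (the element ID is a placeholder):
// Force a fresh copy of the image by appending the current timestamp
var img = document.getElementById("weatherMap"); // hypothetical image element
img.src = "http://example.com/image.jpg?" + Date.now();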
EDIT after the edit of the question:
The Expires header is generated by the server and comes with the response (it's not a header the client sends in the request; see the List of HTTP headers).
So you have absolutely no control over it from the client side: it's the server that must be configured to do the work here...
If you want more answers: what exactly are you trying to do, and why?