I'm writing an application and I'm trying to tie simple AJAX functionality in. It works well in Mozilla Firefox, but there's an interesting bug in Internet Explorer: each of the links can only be clicked once. The browser must be completely restarted; simply reloading the page won't work. I've written a very simple example application that demonstrates this.
JavaScript reproduced below:
var xmlHttp = new XMLHttpRequest();
/*
item: the object clicked on
type: the type of action to perform (one of 'image', 'text' or 'blurb')
*/
function select(item, type)
{
    // Deselect the previously selected 'selected' object
    if (document.getElementById('selected') != null)
    {
        document.getElementById('selected').id = '';
    }
    // Reselect the new selected object
    item.id = 'selected';
    // Get the appropriate page
    if (type == 'image')
        xmlHttp.open("GET", "image.php");
    else if (type == 'text')
        xmlHttp.open("GET", "textbox.php");
    else if (type == 'blurb')
        xmlHttp.open("GET", "blurb.php");
    xmlHttp.send(null);
    xmlHttp.onreadystatechange = catchResponse;
    return false;
}
function catchResponse()
{
    if (xmlHttp.readyState == 4)
    {
        document.getElementById("page").innerHTML = xmlHttp.responseText;
    }
    return false;
}
Any help would be appreciated.
This happens because Internet Explorer ignores the no-cache directive and caches the results of AJAX calls. Then, if the next request is identical, it just serves up the cached version. There's an easy workaround: append a random string to the end of your query.
xmlHttp.open("GET", "blurb.php?" + Math.random());
It looks like IE is caching the response. If you either change your calls to POST methods, or send the appropriate headers to tell IE not to cache the response, it should work.
The headers I send to be sure it doesn't cache are:
Pragma: no-cache
Cache-Control: no-cache
Expires: Fri, 30 Oct 1998 14:19:41 GMT
Note the expiration date can be any time in the past.
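These are response headers, so they have to be set server-side. A minimal hypothetical Node.js sketch, purely for illustration (the question's endpoints are PHP, where the equivalent would be header() calls):

// Hypothetical Node.js endpoint sending the three anti-caching headers above
require('http').createServer(function (req, res) {
    res.setHeader('Pragma', 'no-cache');
    res.setHeader('Cache-Control', 'no-cache');
    res.setHeader('Expires', 'Fri, 30 Oct 1998 14:19:41 GMT'); // any past date works
    res.end('fresh content');
}).listen(8080);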
The problem is that IE does wacky things when the response handler is set before open is called. You aren't doing that for the first xhr request, but since you reuse the xhr object, when the second open is called, the response handler is already set. That may be confusing, but the solution is simple. Create a new xhr object for each request:
move the:
var xmlHttp = new XMLHttpRequest();
inside the select function.
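A minimal sketch of that fix applied to the question's select function (same endpoints as the question; only the XHR creation and handler wiring move):

function select(item, type)
{
    if (document.getElementById('selected') != null)
    {
        document.getElementById('selected').id = '';
    }
    item.id = 'selected';
    // A fresh XHR per request, so no stale handler survives a previous call
    var xmlHttp = new XMLHttpRequest();
    xmlHttp.onreadystatechange = function ()
    {
        if (xmlHttp.readyState == 4)
        {
            document.getElementById("page").innerHTML = xmlHttp.responseText;
        }
    };
    if (type == 'image')
        xmlHttp.open("GET", "image.php");
    else if (type == 'text')
        xmlHttp.open("GET", "textbox.php");
    else if (type == 'blurb')
        xmlHttp.open("GET", "blurb.php");
    xmlHttp.send(null);
    return false;
}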
Read the "No Problems" section in the Wikipedia article on XMLHttpRequest: http://en.wikipedia.org/wiki/XMLHttpRequest
The response header that has worked best for me in the IE AJAX case is Expires: -1, which is probably not per spec but mentioned in a Microsoft Support Article (How to prevent caching in Internet Explorer). This is used in conjunction with Cache-Control: no-cache and Pragma: no-cache.
xmlHttp.open("GET", "blurb.php?" + Math.random());
I agree with this one; it works perfectly as a solution to this problem.
The problem is that IE7's caching of URLs was terrible: it ignored the no-cache header and saved the resource to its cache using the URL as the key index, so the best solution is to add a random parameter to the GET URL.
In jQuery.ajax, you can set the "cache" setting to false:
http://api.jquery.com/jQuery.ajax/
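For example, a minimal sketch (the endpoint name is just illustrative):

$.ajax({
    url: "blurb.php",   // hypothetical endpoint from the question
    cache: false,       // jQuery appends a "_=<timestamp>" parameter for you
    success: function (data) {
        document.getElementById("page").innerHTML = data;
    }
});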
Using Dojo, this can be done with dojo.date.stamp, just by adding the following to the URL:
"...&ts=" + dojo.date.stamp.toISOString(new Date())
Related
My XML HTTP request requests a URL that does not exist:
var url = 'http://redHerringObviouslyNonexistentDomainName.com';
var myHttpRequest = new XMLHttpRequest();
myHttpRequest.open('GET', url, true);
myHttpRequest.onreadystatechange = function() {
    if (myHttpRequest.readyState == 4 && myHttpRequest.status == 200) {
        // Do stuff with myHttpRequest.responseText;
    }
}
myHttpRequest.responseType = 'text';
myHttpRequest.send();
I'm OK with the URL not existing ... but I'm not OK with Chrome issuing the following error in the Console:
GET redHerringObviouslyNonexistentDomainName.com/ net::ERR_NAME_NOT_RESOLVED
How do I tell Chrome "I don't care that the URL does not exist - just suppress that error from the Console?"
If someone answers this question by buying that domain name, I will lol so hard ... but then you'll feel bad after I change the URL I use in this question.
As far as this problem goes, one option would be to send the request to a non-existent IP rather than a non-existent domain, because the error is thrown when the attempt to resolve the domain name to an IP fails. By sending the request to an IP that you know will not return anything, you may be able to avoid this error, although you could get an entirely different error thrown at you instead.
Another option is to send the request and capture the returned status in a variable, along the lines of
errorDomain = yourMethodOfRequestingStatus
This is pure theory and I have not tested any of the suggested methods, but I hope it helps you at least a bit.
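A sketch of the first idea, assuming a TEST-NET address (RFC 5737) that should never belong to a real host; untested, as noted above:

var probe = new XMLHttpRequest();
probe.open('GET', 'http://192.0.2.1/', true); // reserved documentation IP, not routable
probe.onerror = function () {
    // The request failed without a DNS lookup, so ERR_NAME_NOT_RESOLVED is avoided;
    // Chrome may still log a different network error to the Console.
};
probe.send();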
I want to ensure that data I request via an AJAX call is fresh and not cached. Therefore I send the header Cache-Control: no-cache
But my Chrome Version 33 overrides this header with Cache-Control: max-age=0 if the user presses F5.
Example. Put a test.html on your webserver with the contents
<script>
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'test.html');
    xhr.setRequestHeader('Cache-Control', 'no-cache');
    xhr.send();
</script>
In the Chrome debugger, on the Network tab, I see the test.html AJAX call with status code 200. Now press F5 to reload the page: there is the max-age=0, and status code 304 Not Modified.
Firefox shows similar behavior. Instead of just overwriting the request header, it modifies it to Cache-Control: no-cache, max-age=0 on F5.
Can I suppress this?
Using a query string for cache control isn't your best option nowadays, for multiple reasons, and (only) a few are mentioned in this answer, which even explains the newer standard method of version control. Though if you just want to be able to set your request headers, the right way to do it is:
// via Cache-Control header:
xhr.setRequestHeader("Cache-Control", "no-cache, no-store, max-age=0");
// fallbacks for IE and older browsers:
xhr.setRequestHeader("Expires", "Tue, 01 Jan 1980 1:00:00 GMT");
xhr.setRequestHeader("Pragma", "no-cache"); //Edit: I noticed this is required for Chrome some time ago... forgot to mention here
Hope this helps anyone in the future.
An alternative would be to append a unique number to the url.
<script>
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'test.html?_=' + new Date().getTime());
    //xhr.setRequestHeader('Cache-Control', 'no-cache');
    xhr.send();
</script>
A timestamp isn't strictly unique, but it should be unique enough for your use case.
I tried (and failed with) some sort of randomization of the URL; it didn't work because the file I was accessing (a .json) was being cached as well.
My solution was to add a timestamp to the JSON file name in the call (a similar approach to the ones above, slightly modified). This worked perfectly for me (code snippet below).
doSomething('files/data.json?nocache=' + (new Date()).getTime(), function(text){ ... });
I'm very new at all of this so I'm sure there are reasons this isn't a standard/correct solution, but it worked for me.
Removing the element and adding a new element with jQuery (JS also, I think) works for me in Chrome.
// WordPress context: selectedImage is an object from the Media Selector dialog
const imageID = selectedImage.id
const imageURL = selectedImage.url
// This is a div with the img as an only child
const logoImageContainer = $('#q1nv0-preview-image-container')
// Clone the img and point its src at the new image
let clone = logoImageContainer.find('img').clone()
clone.removeAttr('src').attr('src', imageURL)
// Also, in the WordPress context, I have to remove the srcset:
clone.removeAttr('srcset')
// The div is cleared of the old image
logoImageContainer.empty()
// The new image is added to the div
logoImageContainer.prepend(clone)
http.setRequestHeader("Cache-Control", "no-cache, no-store, must-revalidate");
The following code produces nothing on the HTML page; it seems to break down on 'status':
var get_json_file = new XMLHttpRequest();
get_json_file.open("GET", "/Users/files/Documents/time.json", true);
document.write(get_json_file.status);
Keep in mind that I am on a Mac, so there is no C: drive. However, this line of code does work fine:
document.write(get_json_file.readyState);
I just want to know that I was able to successfully find my JSON file. Perhaps I should ask: what should I be looking for to achieve what I want?
Another basic question about AJAX. I suggest you read the MDN article about using XMLHttpRequest. You can't access the 'status' property until the request is ready, and you haven't even called the 'send()' method, which performs the actual request. You can't have a status without making an HTTP request first. Learn how AJAX works before trying to use it. Explaining it all would be too long and this is not the place.
You can only get the status when the AJAX call has finished; that is, when the page was loaded or a 404 was returned.
Because you're trying to read status straight after the request was sent (or not sent; read the P.S.), you're getting nothing.
You need to make an async call, to check that status only when the request finishes:
get_json_file.onreadystatechange = function () {
    if (get_json_file.readyState == 4 && get_json_file.status == 200)
    {
        alert('success');
    }
}
read more at http://www.w3schools.com/ajax/ajax_xmlhttprequest_onreadystatechange.asp
P.S. As noted by @Oscar, you're missing the send().
If you want to try a synchronous approach, which would stop the code from running until a response is returned, you can try:
var get_json_file = new XMLHttpRequest();
get_json_file.open("GET", "/Users/files/Documents/time.json", false);
//notice we set async to false (developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest)
get_json_file.send(); //will wait for a response
document.write(get_json_file.status);
//(Credit: original asker)
Because of security issues you are not allowed to send requests to files on the local system, but what you could do is look into the FileReader API.
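A minimal FileReader sketch, assuming the file is chosen through a hypothetical <input type="file" id="picker"> element (not from the question) and contains valid JSON:

document.getElementById('picker').addEventListener('change', function (event) {
    var reader = new FileReader();
    reader.onload = function () {
        var data = JSON.parse(reader.result); // e.g. the contents of time.json
        console.log(data);
    };
    reader.readAsText(event.target.files[0]);
});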
<sidenote>
The reason that readyState works and status doesn't is that by default
readyState has a value of 0, while status has no value, so it would be undefined.
</sidenote>
On my Ubuntu machine, the path is prefixed with file:///.
I think your JSON file path should be file:///Users/files/Documents/time.json, because macOS and Ubuntu are both Unix-based.
Then you can check the AJAX status using @TastySpaceApple's answer.
If you are using Google Chrome, don't forget to launch it with the --allow-file-access-from-files flag, because Google Chrome does not load local files by default, for security reasons.
I've recently begun running into problems with JavaScript when I attempt to load element tags from a separate XML file into an HTML document. I know that I have enabled either XMLHttpRequest or ActiveX (depending on the browser) correctly, but I'm having problems getting the XML file and opening it to access its tags. In order to open the file, I tried to use:
xhttp.open("GET",filepath,false);
xhttp.send();
xmlDoc=xhttp.responseXML;
The code appears to make it past the first line, but it gets tripped up on the second. I'm wondering if someone could clarify the function of .send(), and whether server permissions may be at fault; in IE 7/8 it tells me "access is denied" when this block of code runs.
Make sure that AJAX requests are sent to the same domain from which the resources were accessed.
Taking your code sample here,
xhttp.open("GET",filepath,false);
xhttp.send();
You have requested a resource with the HTTP method GET. This request will be fired only once the send() method is called on the XHR object, according to the specification[1]. The argument to send() is ignored if the method is GET.
Now once the xhr object is created, it goes through different states[2] such as
UNSENT (numeric value 0)
OPENED (numeric value 1)
HEADERS_RECEIVED (numeric value 2)
LOADING (numeric value 3)
DONE (numeric value 4)
The moment the request is fired (i.e., send() is called), the xhr object will have a state of OPENED.
Now, if we look at the third line of your code, "xmlDoc=xhttp.responseXML;", it is quite unclear in what state you are trying to read the content. The best way to read the content is when the state reaches 4 (DONE).
Just modify your code as given below:
var xhr = new XMLHttpRequest();
xhr.open("GET", somefilepath, true);
// Attach the handler before sending, so no state change can be missed
xhr.onreadystatechange = function() {
    if (this.readyState == 4) {
        xmlDoc = xhr.responseXML;
    }
};
xhr.send();
EDIT: It's been pointed out below that this doesn't work because craigslist doesn't send an Access-Control-Allow-Origin header. OK, I'll buy that. Is there any other way to use JavaScript in Firefox to download a page cross-domain, then?
Yes, I know the following code does not work in IE. I know IE expects me to use XDomainRequest() instead. I don't care about that. This is Firefox-only.
I'm trying to do a cross-domain web request in Firefox JavaScript. I keep getting a status of 0. Does anyone know why?
var url = "http://newyork.craigslist.org";
var xdr = new XMLHttpRequest(); //Yes, I know IE expects XDomainRequest. Don't care
xdr.onreadystatechange = function() {
    if (xdr.readyState == 4) {
        alert(xdr.status); //Always returns 0! And xdr.responseText is blank too
    }
}
xdr.open("get", url, true);
xdr.send(null);
Shouldn't that work?
Craigslist doesn't allow cross-domain requests to it. It needs to send a proper Access-Control-Allow-Origin header.
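For illustration only, a hypothetical Node.js handler showing the header a server must send for the request above to succeed (craigslist obviously does not do this):

require('http').createServer(function (req, res) {
    // Without this header, browsers block cross-origin XHR reads
    res.setHeader('Access-Control-Allow-Origin', '*');
    res.end('this body is readable cross-domain');
}).listen(8080);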