EDIT: It's been pointed out below that this doesn't work because craigslist doesn't set an Access-Control-Allow-Origin header. OK, I'll buy that. Is there any other way to use JavaScript in Firefox to download a page cross-domain, then?
Yes, I know the following code does not work in IE. I know IE expects me to use XDomainRequest() instead. I don't care about that. This is Firefox only.
I'm trying to do a cross-domain web request in Firefox JavaScript. I keep getting a status of 0. Does anyone know why?
var url = "http://newyork.craigslist.org";
var xdr = new XMLHttpRequest(); //Yes, I know IE expects XDomainRequest. Don't care
xdr.onreadystatechange = function() {
    if (xdr.readyState == 4) {
        alert(xdr.status); //Always returns 0! And xdr.responseText is blank too
    }
};
xdr.open("get", url, true);
xdr.send(null);
Shouldn't that work?
Craigslist doesn't allow cross-domain requests to it. It needs to send a proper Access-Control-Allow-Origin header.
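A hedged sketch of how that failure shows up on the client: because the browser blocks the cross-origin read, status stays 0 and responseText stays empty, but an onerror handler at least makes the failure explicit instead of silently reporting 0:
var xdr = new XMLHttpRequest();
xdr.open("get", "http://newyork.craigslist.org", true);
xdr.onerror = function() {
    // fires when the cross-origin read is blocked (no Access-Control-Allow-Origin header)
    alert("Request failed: blocked by the same-origin policy");
};
xdr.onload = function() {
    alert(xdr.status); // would only run if the server allowed the cross-origin read
};
xdr.send(null);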
I'm doing an API call to the Insightly API using this code:
var x = new XMLHttpRequest();
x.open("GET", "https://api.insight.ly/v2.2/Projects/Search?status=in%20progress&brief=false&count_total=false", true);
x.setRequestHeader('Authorization', 'Basic: 'some-basic-key-and-user');
x.send();
console.log(x.status);
but the response code output equals 0.
I've tried my URL and AuthKey using hurl.it's tool, and it gives back exactly the response I need. I've also tried x.statusText and x.Response
How come my script doesn't give me the output I need?
EDIT:
After trying to add those parameters from Mozilla's documentation (thank you, Amy), I still don't get the correct response code.
My code now:
var x = new XMLHttpRequest();
x.open("GET", "https://api.insight.ly/v2.2/Projects/Search?status=in%20progress&brief=false&count_total=false", true);
x.setRequestHeader('Authorization', 'Basic: b4c6c660-e20f-4f31-b90d-b6a11bfe4ef2');
x.onprogress = function () {
    console.log('LOADING', x.status);
};
x.onload = function () {
    console.log('DONE', x.status);
};
x.send();
console.log(x.status);
The response code:
0
LOADING 0
Also, I've read through the possible answer, and I still don't know how to fix my mistake. I know why it is asynchronous and why it doesn't work this way, but I don't know how to fix it.
First of all, you have an unnecessary quote symbol on row 3 (right after some-basic-key-and-user).
Second, the Insightly API doesn't support cross-origin resource sharing (CORS for short).
You might want to introduce a proxy on the same domain your page is served from and query the data through it (see the sketch below).
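A minimal sketch of that idea, assuming a Node.js server on the same origin as your page; the /projects path, the port, and the credential format are assumptions for illustration:
var http = require('http');
var https = require('https');

http.createServer(function (req, res) {
    if (req.url.indexOf('/projects') === 0) {
        // forward the browser's same-origin request to the Insightly API server-side
        https.get({
            host: 'api.insight.ly',
            path: '/v2.2/Projects/Search?status=in%20progress&brief=false&count_total=false',
            headers: {
                // assumed credential format; use whatever worked for you in hurl.it
                'Authorization': 'Basic ' + Buffer.from('your-api-key:').toString('base64')
            }
        }, function (apiRes) {
            res.writeHead(apiRes.statusCode, { 'Content-Type': 'application/json' });
            apiRes.pipe(res); // stream the API body straight back to the browser
        });
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(8080);
Your page then calls /projects with a plain XMLHttpRequest and no Authorization header, so the browser never sees a cross-origin request.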
My XML HTTP request requests a URL that does not exist:
var url = 'http://redHerringObviouslyNonexistentDomainName.com';
var myHttpRequest = new XMLHttpRequest();
myHttpRequest.open('GET', url, true);
myHttpRequest.onreadystatechange = function() {
    if (myHttpRequest.readyState == 4 && myHttpRequest.status == 200) {
        // Do stuff with myHttpRequest.responseText;
    }
};
myHttpRequest.responseType = 'text';
myHttpRequest.send();
I'm OK with the URL not existing ... but I'm not OK with Chrome issuing the following error in the Console:
GET redHerringObviouslyNonexistentDomainName.com/ net::ERR_NAME_NOT_RESOLVED
How do I tell Chrome "I don't care that the URL does not exist - just suppress that error from the Console?"
If someone answers this question by buying that domain name, I will lol so hard ... but then you'll feel bad after I change the URL I use in this question.
As far as this problem goes, one option would be to send the request to a non-existent IP address rather than a non-existent domain, because the error is thrown when the attempt to find out which IP the name points to fails. By sending the request to an IP that you know for a fact will not return anything, you may be able to avoid this, although this is purely in theory and you could get an entirely different error thrown at you (see the sketch below).
Another option is to send the request and capture the returned value in a variable, along the lines of
errorDomain = yourMethodOfRequestingStatus
This is pure theory and I have not tested any of the suggested methods, but I hope it helps you at least a bit.
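A hedged sketch of the first suggestion, using an address from the reserved TEST-NET-1 range (192.0.2.0/24) that will never belong to a real host, so no DNS lookup is involved; the request still fails, just with a different error that Chrome may also log:
var url = 'http://192.0.2.1/'; // reserved documentation address, no DNS resolution involved
var myHttpRequest = new XMLHttpRequest();
myHttpRequest.open('GET', url, true);
myHttpRequest.onerror = function() {
    // the request still fails here, but with a connection error rather than ERR_NAME_NOT_RESOLVED
};
myHttpRequest.send();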
The following code produces nothing on the HTML page; it seems to break down on 'status':
var get_json_file = new XMLHttpRequest();
get_json_file.open("GET", "/Users/files/Documents/time.json", true);
document.write(get_json_file.status);
Keep in mind that I am on a Mac, so there is no C: drive. However, this line of code does work fine:
document.write(get_json_file.readyState);
I just want to know that I was able to successfully find my JSON file. Perhaps I should ask: what should I be looking for to achieve what I want?
Another basic question about AJAX. I suggest you read the MDN article about using XMLHttpRequest. You can't access the status property until the request is ready, and you haven't even called the send() method, which performs the actual request. You can't have a status without making an HTTP request first. Learn how AJAX works before trying to use it; explaining it all here would be too long, and this is not the place.
You can only get the status when the AJAX request has finished, that is, when the page was loaded or a 404 was returned.
Because you're trying to read the status straight after the request was sent (or rather not sent; see the P.S.), you're getting nothing.
You need to make an async call, to check that status only when the request finishes:
get_json_file.onreadystatechange = function () {
    if (get_json_file.readyState == 4 && get_json_file.status == 200) {
        alert('success');
    }
};
Read more at http://www.w3schools.com/ajax/ajax_xmlhttprequest_onreadystatechange.asp
P.S. As noted by @Oscar, you're missing the send() call.
If you want to try a synchronous approach, which would stop the code from running until a response is returned, you can try:
var get_json_file = new XMLHttpRequest();
get_json_file.open("GET", "/Users/files/Documents/time.json", false);
//notice we set async to false (developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest)
get_json_file.send(); //will wait for a response
document.write(get_json_file.status);
//(Credit: original asker)
Because of security issues you are not allowed to send requests to files on the local system, but what you could do is look into the FileReader API (more info here).
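A minimal sketch of that idea, assuming the user picks the file through a hypothetical <input type="file" id="jsonFilePicker"> element:
document.getElementById('jsonFilePicker').addEventListener('change', function(e) {
    var reader = new FileReader();
    reader.onload = function() {
        // reader.result holds the file contents as text once loading finishes
        var data = JSON.parse(reader.result);
        console.log(data);
    };
    reader.readAsText(e.target.files[0]); // read the selected time.json as text
});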
<sidenote>
The reason that readyState works and status does not is because by default
readyState has a value of 0, while status has no value, so it would be undefined.
</sidenote>
On my Ubuntu machine, the path is prefixed with file:///.
I think your JSON file path should be file:///Users/files/Documents/time.json, because Mac and Ubuntu are both Unix-based.
Then you can check the AJAX status using @TastySpaceApple's answer.
If you are using Google Chrome, don't forget to launch it with the --allow-file-access-from-files command-line flag, because Google Chrome does not load local files by default for security reasons.
Edit: Maybe I made the question more complex than it should be. My question is this: how do you make API calls to a server from JS?
I have to create a very simple client that makes GET and POST calls to our server and parses the returned XML. I am writing this in JavaScript; the problem is I don't know how to program in JS (I started to look into this just this morning)!
As an initial test, I am trying to ping the Twitter API. Here's the function that gets called when the user enters the URL http://api.twitter.com/1/users/lookup.xml and hits the submit button:
function doRequest() {
    var req, req_url, req_type, body;
    req_url = document.getElementById('server_url').value;
    req_type = document.getElementById('request_type').value;
    alert("Connecting to url: " + req_url + " with HTTP method: " + req_type);
    req = new XMLHttpRequest();
    req.open(req_type, req_url, false, "username", "passwd"); // synchronous conn
    req.onreadystatechange = function() {
        if (req.readyState == 4) {
            alert(req.status);
        }
    };
    req.send(null);
}
When I run this in Firefox, I get a
Access to restricted URI denied" code: "1012
error in Firebug. Things I googled suggested that this was a Firefox-specific problem, so I switched to Chrome. There, the second alert comes up, but it displays 0 as the HTTP status code, which I found weird.
Can anyone spot what the problem is? People say this stuff is easier to use with jQuery, but learning that on top of JS syntax is a bit too much right now.
For security reasons, you cannot use AJAX to request a file from a different domain.
Since your JavaScript isn't running on http://api.twitter.com, it cannot request files from http://api.twitter.com.
Instead, you can write server-side code on your own domain to fetch the file and have your page request it from there (a minimal sketch follows).
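A minimal client-side sketch of that idea; /twitter-lookup is a hypothetical endpoint on your own server that performs the Twitter request server-side and relays the XML back:
var req = new XMLHttpRequest();
req.open("GET", "/twitter-lookup?screen_name=someuser", true); // same origin, so no restriction
req.onreadystatechange = function() {
    if (req.readyState == 4 && req.status == 200) {
        alert(req.responseText); // the XML your server relayed from api.twitter.com
    }
};
req.send(null);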
I'm writing an application and I'm trying to tie simple AJAX functionality in. It works well in Mozilla Firefox, but there's an interesting bug in Internet Explorer: Each of the links can only be clicked once. The browser must be completely restarted, simply reloading the page won't work. I've written a very simple example application that demonstrates this.
JavaScript reproduced below:
var xmlHttp = new XMLHttpRequest();
/*
item: the object clicked on
type: the type of action to perform (one of 'image', 'text' or 'blurb')
*/
function select(item, type)
{
    // Deselect the previously selected 'selected' object
    if (document.getElementById('selected') != null)
    {
        document.getElementById('selected').id = '';
    }
    // reselect the new selected object
    item.id = 'selected';
    // get the appropriate page
    if (type == 'image')
        xmlHttp.open("GET", "image.php");
    else if (type == 'text')
        xmlHttp.open("GET", "textbox.php");
    else if (type == 'blurb')
        xmlHttp.open("GET", "blurb.php");
    xmlHttp.send(null);
    xmlHttp.onreadystatechange = catchResponse;
    return false;
}
function catchResponse()
{
    if (xmlHttp.readyState == 4)
    {
        document.getElementById("page").innerHTML = xmlHttp.responseText;
    }
    return false;
}
Any help would be appreciated.
This happens because Internet Explorer ignores the no-cache directive and caches the results of AJAX calls. Then, if the next request is identical, it will just serve up the cached version. There's an easy workaround, and that is to just append a random string to the end of your query:
xmlHttp.open("GET", "blurb.php?" + Math.random());
It looks like IE is caching the response. If you either change your calls to POST methods, or send the appropriate headers to tell IE not to cache the response, it should work.
The headers I send to be sure it doesn't cache are:
Pragma: no-cache
Cache-Control: no-cache
Expires: Fri, 30 Oct 1998 14:19:41 GMT
Note the expiration date can be any time in the past.
The problem is that IE does wacky things when the response handler is set before open is called. You aren't doing that for the first xhr request, but since you reuse the xhr object, when the second open is called, the response handler is already set. That may be confusing, but the solution is simple. Create a new xhr object for each request:
move the:
var xmlHttp = new XMLHttpRequest();
inside the select function.
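A sketch of the select function with that change applied (everything else as in the question, with the handler assigned before send):
function select(item, type) {
    // a fresh request object per call, so the handler is never left over from a previous request
    var xmlHttp = new XMLHttpRequest();
    if (document.getElementById('selected') != null) {
        document.getElementById('selected').id = '';
    }
    item.id = 'selected';
    if (type == 'image')
        xmlHttp.open("GET", "image.php");
    else if (type == 'text')
        xmlHttp.open("GET", "textbox.php");
    else if (type == 'blurb')
        xmlHttp.open("GET", "blurb.php");
    xmlHttp.onreadystatechange = function() {
        if (xmlHttp.readyState == 4) {
            document.getElementById("page").innerHTML = xmlHttp.responseText;
        }
    };
    xmlHttp.send(null);
    return false;
}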
Read the No Problems section in the Wikipedia article on XMLHttpRequest: http://en.wikipedia.org/wiki/XMLHttpRequest
The response header that has worked best for me in the IE AJAX case is Expires: -1, which is probably not per spec but mentioned in a Microsoft Support Article (How to prevent caching in Internet Explorer). This is used in conjunction with Cache-Control: no-cache and Pragma: no-cache.
xmlHttp.open("GET", "blurb.php?" + Math.random());
I agree with this one; it works perfectly as a solution to this problem.
The problem is that IE7's caching of URLs was terrible: it ignored the no-cache header and saved the resource to its cache using the URL as the key index, so the best solution is to add a random parameter to the GET URL.
In jQuery.ajax, you can set the "cache" setting to false:
http://api.jquery.com/jQuery.ajax/
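For example, a hedged sketch: with cache set to false, jQuery appends a timestamp parameter to GET requests so IE cannot serve a cached copy:
$.ajax({
    url: "blurb.php",
    cache: false, // jQuery adds "_=<timestamp>" to the URL, defeating IE's cache
    success: function(data) {
        document.getElementById("page").innerHTML = data;
    }
});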
Using Dojo, this can be done with dojo.date.stamp, by just adding the following to the URL:
"...&ts=" + dojo.date.stamp.toISOString(new Date())