The chrome.webRequest API has the concept of a request ID (source: Chrome webRequest documentation):
Request IDs
Each request is identified by a request ID. This ID is unique within a browser session and the context of an extension. It remains constant during the life cycle of a request and can be used to match events for the same request. Note that several HTTP requests are mapped to one web request in case of HTTP redirection or HTTP authentication.
You can use it to correlate the requests even across redirects. But how do you initially get hold of the ID when starting a new request with fetch or XMLHttpRequest?
So far, I have not found anything better than to use the URL of the request as a way to make the initial link between the new request and the requestId. However, if there are overlapping requests to the same resource, this is not reliable.
Questions:
If you make a new request (either with fetch or XMLHttpRequest), how do you reliably get access to the requestId?
Does the fetch API or XMLHttpRequest API allow access to the requestId?
What I want to do is to use the functionality provided by the webRequest API to modify a single request, but I want to make sure that I do not accidentally modify other pending requests.
To the best of my knowledge, there is no direct support in the fetch or XMLHttpRequest API. I'm also not aware of a completely reliable way to get hold of the requestId.
What I ended up doing was installing an onBeforeRequest listener, storing the requestId, and then immediately removing the listener again. For instance, it could look like this:
function makeSomeRequest(url) {
  let listener;
  const removeListener = () => {
    if (listener) {
      chrome.webRequest.onBeforeRequest.removeListener(listener);
      listener = null;
    }
  };

  let requestId;
  listener = (details) => {
    if (!requestId && urlMatches(details.url, url)) {
      requestId = details.requestId;
      removeListener();
    }
  };
  chrome.webRequest.onBeforeRequest.addListener(listener, { urls: ['<all_urls>'] });

  // install other listeners, which can then use the stored "requestId"
  // ...

  // finally, start the actual request, for instance
  const promise = fetch(url).then(doSomething);

  // and make sure to always clean up the listener
  promise.then(removeListener, removeListener);
}
It is not perfect, and matching the URL is a detail that I left open. You could simply compare whether the details.url is identical to url:
function urlMatches(url1, url2) {
return url1 === url2;
}
Note that it is not guaranteed that you see the identical URL. For instance, if you make a request against http://some.domain.test, you will see http://some.domain.test/ in your listener (see my other question about the details). Or http:// could have been replaced by https:// (here I'm not sure, but it could be because of other extensions like HTTPS Everywhere).
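If you need the comparison to tolerate such differences, one option is to parse both URLs and ignore the scheme. This is only a sketch of my own (a more lenient variant of urlMatches, not part of the original approach):
// More lenient variant: ignores the scheme (http vs https) and lets the URL
// parser normalize details such as a missing trailing slash.
function urlMatches(url1, url2) {
  try {
    const a = new URL(url1);
    const b = new URL(url2);
    return a.host === b.host &&
           a.pathname === b.pathname &&
           a.search === b.search;
  } catch (e) {
    // Fall back to a strict comparison if either string is not a valid URL.
    return url1 === url2;
  }
}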
Because of such differences, the code above should only be seen as a sketch of the idea. It seems to work well enough in practice, as long as you do not start multiple requests to the same URL. Still, I would be interested in learning about a better way to approach the problem.
How to detect the Internet connection is offline in JavaScript?
Almost all major browsers now support the window.navigator.onLine property, and the corresponding online and offline window events. Run the following code snippet to test it:
console.log('Initially ' + (window.navigator.onLine ? 'on' : 'off') + 'line');
window.addEventListener('online', () => console.log('Became online'));
window.addEventListener('offline', () => console.log('Became offline'));
document.getElementById('statusCheck').addEventListener('click', () => console.log('window.navigator.onLine is ' + window.navigator.onLine));
<button id="statusCheck">Click to check the <tt>window.navigator.onLine</tt> property</button><br /><br />
Check the console below for results:
Try switching your system or browser to offline/online mode and check the log or the window.navigator.onLine property for value changes.
Note however this quote from Mozilla Documentation:
In Chrome and Safari, if the browser is not able to connect to a local area network (LAN) or a router, it is offline; all other conditions return true. So while you can assume that the browser is offline when it returns a false value, you cannot assume that a true value necessarily means that the browser can access the internet. You could be getting false positives, such as in cases where the computer is running a virtualization software that has virtual ethernet adapters that are always "connected." Therefore, if you really want to determine the online status of the browser, you should develop additional means for checking.
In Firefox and Internet Explorer, switching the browser to offline mode sends a false value. Until Firefox 41, all other conditions return a true value; since Firefox 41, on OS X and Windows, the value will follow the actual network connectivity.
(emphasis is my own)
This means that if window.navigator.onLine is false (or you get an offline event), you are guaranteed to have no Internet connection.
If it is true, however (or you get an online event), it only means that the system is connected to some network, at best; it does not necessarily mean that you have Internet access. To check that, you will still need to use one of the solutions described in the other answers.
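For example, one minimal way to combine the two (my own sketch; the favicon path is just an arbitrary lightweight probe on your own server) is to treat false as authoritative and to verify true with an actual request:
// Trust navigator.onLine when it says "offline", but verify the "online"
// case with a small same-origin request.
async function hasInternetAccess() {
  if (!navigator.onLine) {
    return false; // false is a reliable signal
  }
  try {
    await fetch('/favicon.ico?_=' + Date.now(), { method: 'HEAD', cache: 'no-store' });
    return true;
  } catch (e) {
    return false; // "online" was a false positive
  }
}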
I initially intended to post this as an update to Grant Wagner's answer, but it seemed too much of an edit, especially considering that the 2014 update was already not from him.
You can determine that the connection is lost by making failed XHR requests.
The standard approach is to retry the request a few times. If it doesn't go through, alert the user to check the connection, and fail gracefully.
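As a rough illustration of that retry pattern (a sketch only; the retry count and delay are arbitrary placeholders):
// Retry a request a few times before giving up and telling the user.
async function requestWithRetry(url, retries = 3, delayMs = 1000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await fetch(url);
    } catch (err) {
      if (attempt === retries) {
        alert('Request failed. Please check your internet connection.');
        throw err;
      }
      // Wait a bit before the next attempt.
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}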
Sidenote: Putting the entire application in an "offline" state may lead to a lot of error-prone state handling; wireless connections may come and go, etc. So your best bet may be to just fail gracefully, preserve the data, and alert the user, allowing them to eventually fix the connection problem if there is one, and to continue using your app with a fair amount of forgiveness.
Sidenote: You could check a reliable site like Google for connectivity, but this may not be any more useful than just trying to make your own request, because while Google may be available, your own application may not be, and you're still going to have to handle your own connection problem. Trying to send a ping to Google would be a good way to confirm that the internet connection itself is down, so if that information is useful to you, then it might be worth the trouble.
Sidenote: Sending a ping could be achieved in the same way that you would make any kind of two-way Ajax request, but sending a ping to Google, in this case, would pose some challenges. First, we'd have the same cross-domain issues that are typically encountered in making Ajax communications. One option is to set up a server-side proxy, wherein we actually ping Google (or whatever site) and return the results of the ping to the app. This is a catch-22, because if the internet connection is actually the problem, we won't be able to get to the server, and if the connection problem is only on our own domain, we won't be able to tell the difference. Other cross-domain techniques could be tried, for example, embedding an iframe in your page which points to google.com, and then polling the iframe for success/failure (examine the contents, etc.). Embedding an image may not really tell us anything, because we need a useful response from the communication mechanism in order to draw a good conclusion about what's going on. So again, determining the state of the internet connection as a whole may be more trouble than it's worth. You'll have to weigh these options for your specific app.
IE 8 will support the window.navigator.onLine property.
But of course that doesn't help with other browsers or operating systems. I predict other browser vendors will decide to provide that property as well given the importance of knowing online/offline status in Ajax applications.
Until that happens, either XHR or an Image() or <img> request can provide something close to the functionality you want.
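For example, an Image-based probe could look roughly like this (a sketch; probeWithImage and the /ping.gif path are placeholders of my own, not part of the original answer):
// Hypothetical probe using an Image object; "/ping.gif" stands in for any
// small resource hosted on your own server.
function probeWithImage(onResult) {
  var img = new Image();
  img.onload = function () { onResult(true); };
  img.onerror = function () { onResult(false); };
  // Cache-busting query string so the browser actually hits the network.
  img.src = '/ping.gif?_=' + new Date().getTime();
}
Usage would be something like probeWithImage(function (online) { alert(online ? 'online' : 'offline'); });.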
Update (2014/11/16)
Major browsers now support this property, but your results will vary.
Quote from Mozilla Documentation:
In Chrome and Safari, if the browser is not able to connect to a local area network (LAN) or a router, it is offline; all other conditions return true. So while you can assume that the browser is offline when it returns a false value, you cannot assume that a true value necessarily means that the browser can access the internet. You could be getting false positives, such as in cases where the computer is running a virtualization software that has virtual ethernet adapters that are always "connected." Therefore, if you really want to determine the online status of the browser, you should develop additional means for checking.
In Firefox and Internet Explorer, switching the browser to offline mode sends a false value. All other conditions return a true value.
if(navigator.onLine){
alert('online');
} else {
alert('offline');
}
There are a number of ways to do this:
AJAX request to your own website. If that request fails, there's a good chance it's the connection at fault (a rough sketch follows this list). The jQuery documentation has a section on handling failed AJAX requests. Beware of the Same Origin Policy when doing this, which may stop you from accessing sites outside your domain.
You could put an onerror in an img, like <img src="http://www.example.com/singlepixel.gif" onerror="alert('Connection dead');" />.
This method could also fail if the source image is moved / renamed, and would generally be an inferior choice to the ajax option.
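A rough sketch of the first option, assuming jQuery is available ("/ping" is a placeholder endpoint on your own site):
$.ajax({ url: '/ping', cache: false, timeout: 5000 })
  .done(function () {
    console.log('Connection looks fine');
  })
  .fail(function (jqXHR) {
    // Status 0 usually means the request never reached a server at all.
    if (jqXHR.status === 0) {
      console.log('Probably offline');
    }
  });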
So there are several different ways to try and detect this, none perfect, but in the absence of the ability to jump out of the browser sandbox and access the user's net connection status directly, they seem to be the best options.
As olliej said, using the navigator.onLine browser property is preferable to sending network requests and, according to developer.mozilla.org/En/Online_and_offline_events, it is even supported by old versions of Firefox and IE.
Recently, the WHATWG has specified the addition of the online and offline events, in case you need to react on navigator.onLine changes.
Please also pay attention to the link posted by Daniel Silveira, which points out that relying on those signals/properties for syncing with the server is not always a good idea.
You can use $.ajax()'s error callback, which fires if the request fails. If textStatus equals the string "timeout", it probably means the connection is broken:
function (XMLHttpRequest, textStatus, errorThrown) {
// typically only one of textStatus or errorThrown
// will have info
this; // the options for this ajax request
}
From the doc:
Error: A function to be called if the request fails. The function is passed three arguments: The XMLHttpRequest object, a string describing the type of error that occurred and an optional exception object, if one occurred. Possible values for the second argument (besides null) are "timeout", "error", "notmodified" and "parsererror". This is an Ajax Event.
So for example:
$.ajax({
type: "GET",
url: "keepalive.php",
success: function(msg){
alert("Connection active!")
},
error: function(XMLHttpRequest, textStatus, errorThrown) {
if(textStatus == 'timeout') {
alert('Connection seems dead!');
}
}
});
window.navigator.onLine
is what you are looking for, but there are a few things to add. First, if this is something in your app that you want to keep checking (for example, to see whether the user suddenly goes offline), then you also need to listen for changes. For that, add an event listener to window. To detect when the user goes offline, you can do:
window.addEventListener("offline",
()=> console.log("No Internet")
);
and to detect when the user comes back online:
window.addEventListener("online",
()=> console.log("Connected Internet")
);
The HTML5 Application Cache API specifies navigator.onLine, which is currently available in the IE8 betas and WebKit (e.g. Safari) nightlies, and is already supported in Firefox 3.
I had to make a web app (Ajax based) for a customer who works a lot with schools. These schools often have a bad internet connection, so I use this simple function to detect whether there is a connection. It works very well!
I use CodeIgniter and jQuery:
function checkOnline() {
    setTimeout(doOnlineCheck, 20000);
}

function doOnlineCheck() {
    // if the server can be reached it returns 1, otherwise the request times out
    var submitURL = $("#base_path").val() + "index.php/menu/online";
    $.ajax({
        url : submitURL,
        type : "post",
        // "text", so the plain 1/0 response is used as-is ("msg" is not a valid dataType)
        dataType : "text",
        timeout : 5000,
        success : function(msg) {
            if (msg == 1) {
                $("#online").addClass("online");
                $("#online").removeClass("offline");
            } else {
                $("#online").addClass("offline");
                $("#online").removeClass("online");
            }
            checkOnline();
        },
        error : function() {
            $("#online").addClass("offline");
            $("#online").removeClass("online");
            checkOnline();
        }
    });
}
An Ajax call to your own domain is the easiest way to detect whether you are offline:
$.ajax({
type: "HEAD",
url: document.location.pathname + "?param=" + new Date(),
error: function() { return false; },
success: function() { return true; }
});
This is just to give you the concept; it should be improved. For example, an error with status 404 should still mean that you are online, since a server was reached.
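For example, an improved sketch along those lines (the checkConnectivity name is my own) could treat any real HTTP status as proof that a server was reached, and only status 0 as offline:
function checkConnectivity(callback) {
  $.ajax({
    type: "HEAD",
    url: document.location.pathname + "?param=" + new Date().getTime(),
    timeout: 5000
  })
  .done(function () { callback(true); })
  .fail(function (xhr) {
    // Any real HTTP status (404, 500, ...) still means a server answered;
    // only status 0 indicates the request never got through.
    callback(xhr.status > 0);
  });
}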
I know this question has already been answered, but I would like to add my 10 cents explaining what's better and what's not.
Window.navigator.onLine
I noticed some answers spoke about this option, but they never mentioned anything concerning the caveats.
This option involves the use of "window.navigator.onLine", which is a property of the browser's Navigator interface, available in most modern browsers. It is really not a viable option for checking internet availability on its own, firstly because it is browser-centric and secondly because browsers implement this property differently.
In Firefox: The property returns a boolean value, with true meaning online and false meaning offline, but the caveat here is that "the value is only updated when the user follows links or when a script requests a remote page." Hence if the user goes offline and you query the property from a JS function or script, the property will always return true until the user follows a link.
In Chrome and Safari: If the browser is not able to connect to a local area network (LAN) or a router, it is offline; all other conditions return true. So while you can assume that the browser is offline when it returns a false value, you cannot assume that a true value necessarily means that the browser can access the internet. You could be getting false positives, such as in cases where the computer is running a virtualization software that has virtual ethernet adapters that are always "connected".
The statements above are simply trying to let you know that the browser alone cannot tell. So basically this option is unreliable on its own.
Sending Request to Own Server Resource
This involves making an HTTP request to your own server resource; if it is reachable, assume internet availability, otherwise the user is offline. There are a few caveats to this option.
No server's availability is 100% reliable, so if for some reason your server is not reachable, it would be falsely assumed that the user is offline even though they're connected to the internet.
Multiple requests to the same resource can return a cached response, making the HTTP result unreliable.
If you agree your server is always online then you can go with this option.
Here is a simple snippet to fetch own resource:
// This fetches your website's favicon, so replace path with favicon url
// Notice the appended date param which helps prevent browser caching.
fetch('/favicon.ico?d='+Date.now())
.then(response => {
if (!response.ok)
throw new Error('Network response was not ok');
// At this point we can safely assume the user has connection to the internet
console.log("Internet connection available");
})
.catch(error => {
// The resource could not be reached
console.log("No Internet connection", error);
});
Sending Request to Third-Party Server Resource
We all know CORS is a thing.
This option involves making an HTTP request to an external server resource; if it is reachable, assume internet availability, otherwise the user is offline. The major caveat here is Cross-Origin Resource Sharing (CORS), which acts as a limitation. Most reputable websites block CORS requests, but for some you can have your way.
Below is a simple snippet to fetch an external resource, same as above but with an external resource URL:
// Firstly you trigger a resource available from a reputable site
// For demo purpose you can use the favicon from MSN website
// Also notice the appended date param which helps skip browser caching.
fetch('https://static-global-s-msn-com.akamaized.net/hp-neu/sc/2b/a5ea21.ico?d='+Date.now())
.then(response => {
// Check if the response is successful
if (!response.ok)
throw new Error('Network response was not ok');
// At this point we can safely say the user has connection to the internet
console.log("Internet available");
})
.catch(error => {
// The resource could not be reached
console.log("No Internet connection", error);
});
So, finally, for my personal project I went with the second option, which involves requesting my own server resource, because there are many factors that determine whether there is an "Internet connection" on a user's device, and you cannot tell from your website container alone nor from a limited browser API.
Remember your users can also be in an environment where some websites or resources are blocked, prohibited and not accessible which in turn affects the logic of connectivity check. The best bet will be:
Try to access a resource on your own server, because this is your users' environment (typically I use the website's favicon, because the response is very light and it is not frequently updated).
If there is no connection to the resource, simply say "Error in connection" or "Connection lost" when you need to notify the user, rather than assuming a broad "No internet connection", which depends on many factors.
I think this is a very simple way (wrapped in a function here so it can be used, for example, as a form's submit handler).
// e.g. used as a form's submit handler
function confirmSubmit() {
    var x = confirm("Are you sure you want to submit?");
    if (x) {
        if (navigator.onLine == true) {
            return true;
        }
        alert('Internet connection is lost');
        return false;
    }
    return false;
}
The problem with some methods like navigator.onLine is that they are not compatible with some browsers and mobile versions. An option that helped me a lot was to use the classic XMLHttpRequest method and also to foresee the possible case that the file is served from the cache, by checking that the response's XMLHttpRequest.status is at least 200 and less than 304.
Here is my code:
var xhr = new XMLHttpRequest();
// index.php is on my own website
xhr.open('HEAD', 'index.php', true);
xhr.send();
xhr.addEventListener("readystatechange", processRequest, false);
function processRequest(e) {
if (xhr.readyState == 4) {
//If you use a cache storage manager (service worker), it is likely that the
//index.php file will be available even without internet, so do the following validation
if (xhr.status >= 200 && xhr.status < 304) {
console.log('On line!');
} else {
console.log('Offline :(');
}
}
}
I was looking for a client-side solution to detect if the internet was down or my server was down. The other solutions I found always seemed to be dependent on a 3rd party script file or image, which to me didn't seem like it would stand the test of time. An external hosted script or image could change in the future and cause the detection code to fail.
I've found a way to detect it by looking for an xhrStatus with a 404 code. In addition, I use JSONP to bypass the CORS restriction. A status code other than 404 shows the internet connection isn't working.
$.ajax({
url: 'https://www.bing.com/aJyfYidjSlA' + new Date().getTime() + '.html',
dataType: 'jsonp',
timeout: 5000,
error: function(xhr) {
if (xhr.status == 404) {
//internet connection working
}
else {
//internet is down (xhr.status == 0)
}
}
});
How about sending an opaque http request to google.com with no-cors?
fetch('https://google.com', {
method: 'GET', // *GET, POST, PUT, DELETE, etc.
mode: 'no-cors',
}).then((result) => {
console.log(result)
}).catch(e => {
console.error(e)
})
The reason for setting no-cors is that I was receiving CORS errors even when disabling the network connection on my PC. So I was getting blocked by CORS with or without an internet connection. Adding no-cors makes the request opaque, which apparently bypasses CORS and allows me to simply check whether I can connect to Google.
FYI: I'm using fetch here for making the HTTP request.
https://www.npmjs.com/package/fetch
My way.
<!-- the file named "tt.jpg" should exist in the same directory -->
<script>
function testConnection(callBack)
{
    document.getElementsByTagName('body')[0].innerHTML +=
        '<img id="testImage" style="display: none;" ' +
        'src="tt.jpg?' + Math.random() + '" ' +
        'onerror="testConnectionCallback(false);" ' +
        'onload="testConnectionCallback(true);">';

    // must be global so the inline onerror/onload attributes can reach it
    window.testConnectionCallback = function(result){
        callBack(result);
        var element = document.getElementById('testImage');
        element.parentNode.removeChild(element);
    }
}
</script>
<!-- usage example -->
<script>
function myCallBack(result)
{
alert(result);
}
</script>
<a href="#" onclick="testConnection(myCallBack);">Am I online?</a>
Send a HEAD request inside the error handler of your original request:
$.ajax({
    url: "/your_url",
    type: "POST or GET",
    data: your_data,
    success: function(result){
        //do stuff
    },
    error: function(xhr, status, error) {
        //detect if user is online and avoid the use of async
        $.ajax({
            type: "HEAD",
            url: document.location.pathname,
            error: function() {
                //user is offline, do stuff
                console.log("you are offline");
            }
        });
    }
});
Just use navigator.onLine: if it is true then you're online, otherwise you're offline.
You can try this; it will return true if the network is connected:
function isInternetConnected(){return navigator.onLine;}
Here is a snippet of a helper utility I have. This is namespaced JavaScript:
network: function() {
var state = navigator.onLine ? "online" : "offline";
return state;
}
You should use this with feature detection, and otherwise fall back to an 'alternative' way of doing this. The time is fast approaching when this will be all that is needed. The other methods are hacks.
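A minimal sketch of that feature detection (fallbackCheck is a placeholder for whichever alternative probe you prefer):
function isOnline(fallbackCheck) {
    // Use the property when the browser exposes it...
    if (typeof navigator !== "undefined" && "onLine" in navigator) {
        return navigator.onLine;
    }
    // ...otherwise fall back to an alternative check.
    return fallbackCheck();
}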
There are two answers for this, for two different scenarios:
If you are using JavaScript on a website (i.e., any front-end part)
The simplest way to do it is:
<h2>The Navigator Object</h2>
<p>The onLine property returns true if the browser is online:</p>
<p id="demo"></p>
<script>
document.getElementById("demo").innerHTML = "navigator.onLine is " + navigator.onLine;
</script>
But if you're using JS on the server side (i.e., Node etc.), you can determine that the connection is lost by making failed XHR requests.
The standard approach is to retry the request a few times. If it doesn't go through, alert the user to check the connection, and fail gracefully.
I have the following code to check whether a webpage can be framed or not:
var req = new XMLHttpRequest();
var test = req.open('GET', link, false);
console.log("test",test); //ALWAYS undefined
if(req.send(null)){ //ALWAYS throws error NS_ERROR_FAILURE
var headers = req.getAllResponseHeaders().toLowerCase();
console.log("headers");
}else{
console.log("FAILED");
}
I tested it with several links, frameable or not, but it always fails. Do you know exactly why?
Links:
http://www.joomlaworks.net/images/demos/galleries/abstract/7.jpg
http://www.facebook.com (...)
test is undefined because open() is declared void, it does not return any value. Check out MDN on the open method.
Why are you passing null to send? (see edit) If you intend to call the overload of send that doesn't take any argument, you should just call req.send(). Instead, if you want to call another version of the method, you should pass a Blob, Document, DOMString or FormData, but null won't work.
EDIT: Often the method is invoked as send(null); it seems to be because at some point in history (is that old?) the argument of send was mandatory. This question unravels the mystery.
Moreover, again send doesn't return any value so the condition in the if will never evaluate true. MDN documents also the send method.
Last, you are performing a cross-domain request, i.e. you're asking for content that is located on another domain. XMLHttpRequest doesn't handle that, most likely you will end up with this error:
XMLHttpRequest cannot load link. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin origin is therefore not allowed access.
Check out this question and this on StackOverflow if you need more information about that.
You may want to take a look at Using XMLHttpRequest again on the MDN network, it has many reliable examples that can help you get acquainted with these requests.
EDIT: expanding on the topic iframe embeddability.
Detecting reliably if a website can be embedded in an iframe is difficult, as this question shows. Detecting (or preventing) frame buster in JavaScript is like a dog chasing its own tail.
Nevertheless, a website that doesn't want to be incorporated, hopefully would send the X-Frame-Options: DENY header in its response. This is not very useful, because you can't perform cross domain requests: your request would fail before getting to know if the X-Frame-Options header is even set. For completeness, this is the way of checking if the response to an XMLHttpRequest contains a given header (but note that this would work only within the same domain, which is presumably under your control, and you would know already if a page is 'frameable'):
function checkXFrame() {
var xframe = this.getResponseHeader("X-Frame-Options");
if (!xframe) {
alert("Frameable.");
return;
}
xframe = xframe.toLowerCase();
if (xframe == "deny") {
alert("Not frameable.");
} else if (xframe == "sameorigin") {
alert("Frameable within the same domain.");
} else if (xframe.startsWith("allow-from")) {
alert("Frameable from certain domains.");
} else {
alert("Someone sent a weird header.");
}
}
var oReq = new XMLHttpRequest();
oReq.open("HEAD" /* use HEAD if you only need the headers! */, "yourpage.html");
oReq.onload = checkXFrame;
oReq.send();
(this code doesn't check for 404 or any other error!)
I'm using the following code in order to make an ajax call to my server.
The code makes the call to the server and in return, it gets a list of all the friends that use the same app.
FB.getLoginStatus(function(response) {
if (response.session) {
uid = response.session.uid;
access_token = response.session.access_token;
$.getJSON(serverLink+"ajax.php?action=getFriendsApp", {token:access_token}
,function(data){
var temp = data;
if(true){
var container = $('#friends_part_main');
var fp = $('#friends_part');
fp.show();
var friends = data;
for(var i in friends){
container.append('<a target="_blank" href="http://www.facebook.com/profile.php?id='+friends[i]+'">\n\
<img src="https://graph.facebook.com/'+friends[i]+'/picture" alt="friend" />\n\
</a>');
}
}
});
    }
});
If I run this code directly from the browser (www.mydomain.com/app) it works.
But when I run it from the canvas page (app.facebook.com) I get the following error:
XMLHttpRequest cannot load
http://www.mydomain.com/src/ajax.php?action=getFriendsApp&token=AAAC0kxh1WAcBAHo3s0QaVy34mgdnCNGvrDZCvIQsZCBHZC8ovR9IuYEFlUKRqK0GgJosWAD6Embg8QrN07vivE6mOuAZAtxUD7WpySDL3wZDZD.
Origin https://www.mydomain.com is not allowed by
Access-Control-Allow-Origin.
Can you figure out why?
For me, the domain in the URL of my Ajax page "ajax.php" and the URL of the Ajax-calling page "index.php" weren't exactly the same; "www" was missing...
You have to check that your two scripts' domains (the calling script and the responding script) are exactly the same! Check "http" vs "https", check "https://my-domain.com" vs "https://www.my-domain.com", etc.
Hope it helps.
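As a quick sanity check (my own sketch, not part of the original answer; the URL below is a placeholder), you can compare the page's origin with the origin of the URL you are calling:
// Both origins must match exactly (scheme, host and port) for a plain
// same-origin XMLHttpRequest to be allowed without CORS.
var target = new URL("https://www.my-domain.com/ajax.php", document.baseURI);
if (target.origin !== window.location.origin) {
    console.warn("Cross-origin request:", target.origin, "vs", window.location.origin);
}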
You need JSONP, or to allow Ajax requests on your domain. You can force it with
header("Access-Control-Allow-Origin: *");
Your XMLHttpRequest is not allowed by Access-Control-Allow-Origin because Facebook loads your application via secure HTTPS, but you are accessing your server only via HTTP. You can't load from another subdomain, protocol or port.
Try JSONP with a callback function. You can load JavaScript code from anywhere; if your response contains not only data but also a callback function call, you can access data from any place on your server (site).
I'm porting one of my Firefox extensions to Chrome, and I'm running into a little problem with an AJAX query. The following code works fine in the FF extension, but fails with a status of "0" in Chrome.
function IsImage(url) {
var isImage = false;
var reImageContentType = /image\/(jpeg|pjpeg|gif|png|bmp)/i;
var reLooksLikeImage = /\.(jpg|jpeg|gif|png|bmp)/i;
if(!reLooksLikeImage.test(url))
{
return false;
}
var xhr = $.ajax({
async: false,
type: "HEAD",
url: url,
timeout: 1000,
complete : function(xhr, status) {
switch(status)
{
case "success":
isImage = reImageContentType.test(xhr.getResponseHeader("Content-Type"));
break;
}
},
});
return isImage;
}
This particular part of the extension checks what's on the clipboard (another Chrome issue I've already solved), and if it's an image URL, it sends a HEAD request and checks the "Content-Type" response header to be sure it's an image. If so, it'll return true, pasting the clipboard text in an IMG tag. Otherwise, if it looks like a normal URL that's not an image, it wraps it in an A tag. If it's not a URL, it just does a plain paste.
Anyway, the url being checked is definitely an image, and works fine in FF, but in the complete function, xhr.status is "0", and status is "error" when the function completes. Upping the timeout to 10 seconds doesn't help. I've verified the test images should come back as "image/jpeg" when running:
curl -i -X HEAD <imageURL>
I also know I should be using the success and error callbacks instead of complete, but they don't work either. Any ideas?
As you figured out Chris, in content scripts you can't do any cross-domain XHRs. You would have to do them in an extension page such as the background, popup, or even options page.
For more information regarding content script limitation, please refer to:
http://code.google.com/chrome/extensions/content_scripts.html
And for more information regarding xhr limitation, please refer to:
http://code.google.com/chrome/extensions/xhr.html
I've solved part of the problem, actually most of it. First, as Brennan and I mentioned yesterday, I needed to set permissions in manifest.json.
"permissions": [
"http://*/*",
"https://*/*"
],
It's not ideal to give permissions to every domain, but since images can be hosted from any domain, it'll have to do, and I'll have to guard against XSS.
The other problem is that Chrome indeed blocks anything in the content_scripts section from making AJAX calls, failing silently. However, there is no such restriction on the background_page, if you have one. That page can make any AJAX calls it wants, and Chrome has an API to allow your script to open a port and pass requests to that background page. Someone wrote a script called XHRProxy as a workaround, and I modified it to get the appropriate response header. It works!
My only problem now is figuring out how to make the script wait for the result of the call to be set in the event, instead of just returning immediately.
Check your manifest file. Does the extension have permission to access that url?
If it helps with your second problem (or helps anyone else):
You can send a request to your background page like:
chrome.extension.sendRequest({var1: "var1value", var2: "value", etc},
function(response) {
//Do something once the request is done.
});
The variable response can be anything you want it to be; it can simply be a success or deny string. Up to you.
On your background page you can add a listener:
chrome.extension.onRequest.addListener(
function(request, sender, sendResponse) {
// Do something here
// Once done you can send back all the info via:
sendResponse( anything you want here );
// and it'll be passed back to your content script.
});
With this you can pass the response from your AJAX request back to your content script and do whatever you wanted to do with it there.
The accepted answer is outdated.
Content scripts can now make cross-site XMLHttpRequests just like background scripts!
The URLs concerned need to be permitted in the manifest:
{
"name": "My extension",
...
"permissions": [
"http://www.google.com/"
],
...
}
You can also use expressions like:
"http://*.google.com/"
"http://*/"
to get more general permissions.
Here is the link to the documentation.
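With such a permission in place, the content script can issue the request directly, for example (a minimal sketch):
// Runs inside the content script; works because the manifest grants
// permission for the target origin.
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://www.google.com/", true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
        console.log("Cross-site request finished with status", xhr.status);
    }
};
xhr.send();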