Is there a difference between an AJAX request and a direct browser request (in terms of how a web page is called and loaded)?
In other words: is a request initiated by client-side code (AJAX) handled by the server any differently than a request initiated directly by the browser?
There may be some header differences, but the main behavior difference is on the client.
When the browser makes a regular request as in window.location.href = "index.html", it clears the current window and loads the server response into the window.
With an AJAX request, the current window/document is unaffected and JavaScript code can examine the results of the request and do what it wants with those results (insert HTML dynamically into the page, parse JSON and use it in the page logic, parse XML, etc.).
The server doesn't do anything different - it's just in how the client treats the response from the two requests.
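For illustration only (not part of the original answer), here is a minimal sketch of the two paths described above; the URL and element id are placeholders:

// Full navigation: the current document is discarded and replaced by the response.
// (Commented out so the AJAX sketch below can run.)
// window.location.href = "index.html";

// AJAX: the current document stays put; script decides what to do with the response.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/fragment.html");
xhr.onload = function () {
    // e.g. insert the returned HTML into an existing element
    document.getElementById("content").innerHTML = xhr.responseText;
};
xhr.send();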
An AJAX request is identical to a "normal" browser request as far as the server is concerned, other than potentially slightly different HTTP headers. For example, Chrome sends:
X-Requested-With:XMLHttpRequest
I'm not sure if that header is standardized or not, or if it's different in every browser or even included at all in every browser.
Edit: I take that back; that header is sent by jQuery (and likely other JS libraries), not by the browser itself, as is evidenced by:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/');
xhr.send();
which sends:
Accept:*/*
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Connection:keep-alive
Cookie: ....
Host:stackoverflow.com
If-Modified-Since:Sat, 31 Dec 2011 01:57:24 GMT
Referer:http://stackoverflow.com/questions/8685750/how-does-an-ajax-request-differ-from-a-normal-browser-request/8685758
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.12 Safari/535.11
which leads me to the conclusion that by default there is absolutely no difference.
Some popular client-side libraries like jQuery include the X-Requested-With header in their requests and set it to XMLHttpRequest to mark them as AJAX.
This seems to have been considered standard enough a few years ago (probably due to the huge popularity of jQuery and its presence in almost every website) that many server-side frameworks even have helpers that take care of checking for this header in the received request for you:
ASP.NET MVC 5:
HttpRequestBase.IsAjaxRequest()
Django:
HttpRequest.is_ajax()
Flask:
flask.Request.is_xhr
However, with the end of jQuery's reign in the front-end world, the standardization of the fetch API, and the rise of modern client-side libraries that don't add any header for this purpose by default, the pattern has fallen into obsolescence on the back end as well: newer versions of ASP.NET MVC no longer include the helper, and Flask has marked it as deprecated.
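As an illustrative sketch (the endpoint is a placeholder, and this is not taken from any of the frameworks above): a client using fetch has to add the header explicitly if it still wants to be recognized by such server-side helpers, since fetch does not add it by itself.

fetch("/api/data", {
    headers: { "X-Requested-With": "XMLHttpRequest" }
})
    .then(function (response) { return response.json(); })
    .then(function (data) { console.log(data); });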
Not really, except that most Ajax clients send an X-Requested-With: XMLHttpRequest HTTP header.
I always check whether "text/html" is the request's "best" Accept mimetype, because browsers always send that first.
Firefox example:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Of course, an AJAX request may still send text/html as its preferred Accept mimetype, but I have found this check to be reliable when you know which clients will consume your backend API.
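As a rough sketch of that heuristic (assuming a plain Node.js request object; not part of the original answer):

// Returns true when text/html is listed first in the Accept header,
// which this answer treats as a sign of a regular page navigation.
function prefersHtml(req) {
    var accept = req.headers['accept'] || '';
    return accept.split(',')[0].trim().toLowerCase() === 'text/html';
}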
Blink- and Gecko-powered browsers (Chrome, Edge, Firefox, etc.) will send this header with AJAX requests:
Sec-Fetch-Dest: empty
That means that:
The destination is the empty string. This is used for destinations that do not have their own value. For example fetch(), navigator.sendBeacon(), EventSource, XMLHttpRequest, WebSocket, etc.
Safari does not send this header at the time of writing (and IE never has and never will, but it's only a few months before IE should be completely irrelevant).
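A hedged sketch of checking that header on the server (again assuming a plain Node.js request object); treat it as a hint only, since the header may be absent entirely:

// Sec-Fetch-Dest is "empty" for fetch()/XMLHttpRequest calls and "document"
// for top-level navigations; Safari and older browsers may not send it at all.
function looksLikeAjax(req) {
    return (req.headers['sec-fetch-dest'] || '') === 'empty';
}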
Your user agent (or rather, the JavaScript library making the request, such as jQuery) sends an X-Requested-With header, which you can check from PHP like this:
$_SERVER['HTTP_X_REQUESTED_WITH']
Related
I have a web server that returns a response like:
HTTP/1.1 302 Found
message://ActualMessage
This works well for native UI clients such as Android and iOS, but how can I handle this case in a web browser?
For example, a browser request to
GET https://myserver.com HTTP/1.1
And the response looks like:
HTTP/1.1 302 Moved Temporary
<html><head><title>Object moved</title></head><body>
<h2>Object moved to here.</h2>
</body></html>
Unfortunately, I can't change the server to stop returning that response. I don't see a way to get at this response from the browser, whereas a native WebView can easily handle it.
The browser will follow the 302 redirect; there is no way to intercept it.
However, as a workaround you may try:
using XMLHttpRequest may work (if the server allows it; see this answer for possible issues, and the sketch below)
an iframe may work
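As a hedged sketch of the XMLHttpRequest-style workaround, here is a variant using the fetch API (which the answer does not mention); with redirect: "manual" the browser stops at the 302 instead of trying to follow it, but the Location header of the resulting opaqueredirect response is still not readable from script, so this only lets you detect that a redirect happened:

fetch("https://myserver.com/", { redirect: "manual" })
    .then(function (response) {
        if (response.type === "opaqueredirect") {
            // The server answered with a redirect; its target cannot be inspected here.
            console.log("Got a redirect response");
        } else {
            console.log("Normal response with status " + response.status);
        }
    });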
How come when I go to almost any website (for example SO), open up the console, inject jQuery and send a cross-domain AJAX request to a server I have running on my localhost, I don't get any errors, as I would have expected? However, if I open one of the web pages that I have written myself, which is also running on my localhost (but on a different port from the one used by the server), and I try to send an AJAX request from the console, I get this message:
XMLHttpRequest cannot load https://localhost:10000/. Request header field My-First-Header is not allowed by Access-Control-Allow-Headers in preflight response.
The ajax request looks like:
$.ajax({
    type: 'POST',
    url: 'https://localhost:10000',
    headers: {
        "My-First-Header": "first value",
        "My-Second-Header": "second value"
    }
})
To be clear, my question is not about how to fix this, but rather why I am even able to make cross-domain requests from most other websites (shouldn't they be disallowed?). Do these sites have some sort of mechanism set up that automatically bypasses the restrictions?
Request headers:
Accept:*/*
Accept-Encoding:gzip, deflate, sdch
Accept-Language:en-US,en;q=0.8
Connection:keep-alive
Cookie:csrftoken=lxe5MaAlb9GC5lPGQpXtSj9HvCP0QhCz; PHPSESSID=uta0nlhlh8r1uimdklmt3v3ho1
Host:localhost:10000
Referer:http://stackoverflow.com/
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36
Response headers:
Content-Length:3
Content-Type:text/html
Date:Mon, 16 May 2016 06:29:03 GMT
Server:TwistedWeb/16.0.0
It seems obvious what's going on here: the browser simply handles code typed in the console differently than code run by the page itself. It makes philosophical sense that it would work that way. After all, the point of the Same-Origin Policy is to prevent XSS and CSRF attacks, and if a user opens up their console and sends cross-domain requests, they're just attacking themselves.
On the other hand, it is possible to trick users into performing XSS on themselves. If you go to Facebook and open up the console, Facebook has code that logs a warning message telling ordinary users not to paste unknown code into the console because it could be malicious. Apparently that's a problem they've seen.
This is called a CORS (cross-origin resource sharing) request.
By default, cross-domain requests are blocked by all major browsers.
Most of the services that you are able to call cross-domain set special headers in their responses. If you don't provide those settings at the API level, your API will be blocked for cross-domain requests.
These response settings are as follows (a minimal sketch follows the list):
The response needs to include an Access-Control-Allow-Origin header.
Access-Control-Allow-Origin can be set to * to allow every origin at a global level.
Access-Control-Allow-Origin can instead specify a single, specific origin for each API service separately.
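A minimal sketch (plain Node.js; the allowed origin, port, and header names come from the question above, everything else is an assumption) of those response settings, including the OPTIONS preflight that custom headers such as My-First-Header trigger:

var http = require('http');

http.createServer(function (req, res) {
    // CORS response settings described above.
    res.setHeader('Access-Control-Allow-Origin', 'http://stackoverflow.com');
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
    res.setHeader('Access-Control-Allow-Headers', 'My-First-Header, My-Second-Header');

    if (req.method === 'OPTIONS') {
        // Preflight request: reply with the headers above and no body.
        res.writeHead(204);
        res.end();
        return;
    }

    res.setHeader('Content-Type', 'text/html');
    res.end('ok');
}).listen(10000);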
I've got the interesting (and theoretically impossible) task of getting AmazonAWS Kinesis analytics from IE 8 and 9. According to Amazon's own SDK, this is not possible since XDomainRequest does not allow custom headers. Contrary to this statement, however, AmazonAWS allows you to authenticate using query string parameters. My goal was to write a shim for XMLHttpRequest which utilized the XDomainRequest object and converted all Amazon headers into query string parameters.
The actual implementation turned out to be much more difficult than I would have liked. Since Amazon's query string authentication only uses the "host" for SignedHeaders (whereas the AmazonAWS SDK was attempting to use host, date, and target) I had to re-compute the signature. This meant CryptoJS and lots of experimentation to get everything working.
After 4 hours of receiving "Computed signature did not match", I finally started getting a different error code: Unable to determine service/operation name to be authorized
Googling this error was not very helpful: suggestions ranged from a typo, to an extra newline character, to using a datestamp instead of a version number. I tried everything and nothing helped.
Below is an example cURL request and the return value:
curl -H "Content-Type:text/plain" --data "{\"Data\":\"VALID BASE64 DATA\",\"PartitionKey\":\"PARTITION\",\"StreamName\":\"STREAM\"}" "https://kinesis.us-east-1.amazonaws.com/?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAJMAGAYBGGRNZQI4A/20140723/us-east-1/kinesis/aws4_request&X-Amz-Date=20140723T153144Z&X-Amz-SignedHeaders=host&X-Amz-Target=Kinesis_20131202.PutRecord&X-Amz-User-Agent=aws-sdk-js/2.0.0&X-Amz-Signature=VALID_SIGNATURE"
Return:
<AccessDeniedException>
<Message>Unable to determine service/operation name to be authorized</Message>
</AccessDeniedException>
I've tried appending Action and Version parameters (noting that the Version should be in YYYY-MM-DD format as opposed to YYYYMMDD) and this didn't help. I also tried escaping all of my / characters or escaping all of my . characters (or both).
For comparison, here's the same request through Google Chrome using headers instead of a query string:
Remote Address:176.32.102.203:443
Request URL:https://kinesis.us-east-1.amazonaws.com/
Request Method:POST
Status Code:200 OK
Request Headers
Accept:*/*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Authorization:AWS4-HMAC-SHA256 Credential=AKIAJMAGAYBGGRNZQI4A/20140723/us-east-1/kinesis/aws4_request, SignedHeaders=host;x-amz-date;x-amz-target, Signature=OMITTED
Cache-Control:no-cache
Connection:keep-alive
Content-Length:3236
Content-Type:application/x-amz-json-1.1
Host:kinesis.us-east-1.amazonaws.com
Origin:OMITTED
Pragma:no-cache
Referer:OMITTED
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36
X-Amz-Date:20140723T145554Z
X-Amz-Target:Kinesis_20131202.PutRecord
X-Amz-User-Agent:aws-sdk-js/2.0.0
Request Payload
OMITTED (because it's long)
Response:
{"SequenceNumber":"49540780386103606919741841581837328106424971136629473281","ShardId":"shardId-000000000000"}
Does anyone know what I'm doing wrong, and why I can't communicate with Kinesis?
Going through and cleaning up some old questions that never got answers.
Per Michael - sqlbot:
Plan B is to send your request to your application server and proxy it to kinesis
This turned out to be the only workable solution for communicating with Kinesis from these browsers. I set up a proxy that let me pass the custom headers as query-string parameters; the proxy then reassembled them into real headers and sent the request on to Kinesis (a rough sketch follows).
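A hypothetical sketch of such a proxy in Node.js (the hdr_ parameter convention, port, and everything else here are assumptions, not the actual implementation):

var http = require('http');
var https = require('https');

http.createServer(function (clientReq, clientRes) {
    var incoming = new URL(clientReq.url, 'http://localhost');
    var headers = { 'Content-Type': 'application/x-amz-json-1.1' };

    // Lift every query parameter prefixed with "hdr_" into a real request header.
    incoming.searchParams.forEach(function (value, key) {
        if (key.indexOf('hdr_') === 0) {
            headers[key.slice(4)] = value;
        }
    });

    var upstream = https.request({
        host: 'kinesis.us-east-1.amazonaws.com',
        method: 'POST',
        path: '/',
        headers: headers
    }, function (upstreamRes) {
        clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
        upstreamRes.pipe(clientRes);
    });

    clientReq.pipe(upstream);
}).listen(8080);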
I've posted before on this subject, but after a year of getting on with other things, I've managed to get into a pickle once again. I'll try and give a brief overview of the scenario and the current attempts to make things work:
IIS web server hosting HTML, JS etc. on host: iis.mycompany.com (referred to as foo)
WCF RESTful web services hosted via a Windows Service on host: wcf.mycompany.com (referred to as bar)
The Javascript served from foo works by making RESTful ajax calls (GET or POST depending on the action) to the WCF services on bar, obviously these are cross domain calls as they aren't on the same host.
The Javascript uses the jQuery (1.7.2) framework to manipulate the DOM and perform ajax calls to bar, the expected content type for POSTS is JSON, and the response from GETS is expected to be JSON too (application/json).
Bar has its WCF services configured using TransportCredentialOnly as the security mode, and the transport client credential type is NTLM, so only authenticated users can contact the services.
CORS Support has been added to bar's WCF services using an extension to WCF:
http://blogs.msdn.com/b/carlosfigueira/archive/2012/05/15/implementing-cors-support-in-wcf.aspx
We have added additional headers and modified some that the post already contained, based on numerous internet articles:
property.Headers.Add("Access-Control-Allow-Headers", "Accept, Content-Type");
property.Headers.Add("Access-Control-Allow-Methods", "POST, GET, OPTIONS");
property.Headers.Add("Access-Control-Max-Age", "172800");
property.Headers.Add("Access-Control-Allow-Origin", "http://iis.mycompany.com");
property.Headers.Add("Access-Control-Allow-Credentials", "true");
property.Headers.Add("Content-type", "application/json");
Sites giving information on enabling CORS suggest that the Access-Control-Allow-Origin response header should be set to "*"; however, this is not possible in our case, as we make jQuery ajax calls using the following setup:
$.ajaxSetup({
    cache: false, // boolean false, not the string "false"
    crossDomain: true,
    xhrFields: {
        withCredentials: true
    }
});
As it turns out you cannot use "*" for the accepted origin when you are using "withCredentials" in the ajax call:
https://developer.mozilla.org/en/http_access_control
"Important note: when responding to a credentialed request, server
must specify a domain, and cannot use wild carding."
Currently in our development lab, this doesn't matter as we can hard code the requests to the IIS (foo) server URL.
The main problem now appears to be POST requests (GET is working using the above configuration). When the browser attempts the POST, it first sends an OPTIONS request to the server asking which options are allowed for the subsequent POST. This is where we would like to see the headers we've configured in the CORS support WCF extension being passed back; however, we aren't getting that far: the response comes back as "401 Unauthorized" before that. I believe this is to do with the transport security binding configuration requiring NTLM, but I'm not sure.
Also, I'm not very experienced with this, but I haven't seen much information about POST using application/json content type as opposed to text/plain when performing cross domain requests.
I know that people will probably suggest JSONP as the one true solution. I'm not against different approaches; indeed, I encourage anyone to suggest best practices, as it would help others reading this question later. However, please attempt to answer the question before suggesting alternatives to it.
Many thanks in advance for anyone who contributes.
peteski
:)
UPDATE:
It appears that Chrome (20.x.x) doesn't suffer the problem of not negotiating NTLM to retrieve the OPTIONS header response from the server, but Firefox (13.0.1) does.
We've also noticed that someone has already posted a bug up on the Firefox forum, which we've added information to:
http://bugzilla.mozilla.org/show_bug.cgi?id=751552
Please vote for this bug to be fixed on the bugzilla site!
Using the following code, we can watch the network trace to see Firefox failing and Chrome working fine:
var url = "http://myWebServiceServer/InstantMessagingService/chat/message/send";
var data = '{ "remoteUserUri" : "sip:foo.bar#mydomain.com", "message" : "This is my message" }';
var request = new XMLHttpRequest();
request.open("POST", url, true);
request.withCredentials = true;
request.setRequestHeader("Content-Type", "application/json");
request.send(data);
console.log(request);
On a separate note, IE8 doesn't support XMLHttpRequest for cross-domain calls, favouring its own magical XDomainRequest object, so we've got some work to do in changing the client-side code to handle IE8-vs-the-world cases. (Thanks, IE8.)
/me crosses fingers that Mozilla fix the Firefox bug.
UPDATE 2:
After some digging, it appears that IE8's XDomainRequest cannot be used to make cross-domain requests where NTLM must be negotiated; this basically means that the security on our WCF binding can't be used, thanks to limitations in a web browser.
http://blogs.msdn.com/b/ieinternals/archive/2010/05/13/xdomainrequest-restrictions-limitations-and-workarounds.aspx
"No authentication or cookies will be sent with the request"
So, I guess we've taken this as far as it is going to go for now. It looks like we're going to have to create our own custom token authentication and pass it across to the WCF service in a cookie, or in IE8's case, POST it with the JSON (a rough sketch follows). The WCF service will then have to handle decrypting the data and using that instead of the ServiceSecurityContext.Current.WindowsIdentity we previously had access to with NTLM auth.
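A hypothetical sketch of that IE8 fallback (getAuthToken() and the exact payload shape are assumptions): the token travels inside the JSON body because XDomainRequest can send neither custom headers nor cookies.

var payload = JSON.stringify({
    authToken: getAuthToken(), // hypothetical helper that returns our custom token
    remoteUserUri: "sip:foo.bar#mydomain.com",
    message: "This is my message"
});

var xdr = new XDomainRequest();
xdr.open("POST", "http://myWebServiceServer/InstantMessagingService/chat/message/send");
xdr.onload = function () {
    console.log(xdr.responseText);
};
xdr.send(payload);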
I know you said you would rather have the problem itself addressed, but you may consider using a "reverse proxy."
I don't know what technologies you are using, but we use Apache web server and have a Java RESTful API running on a different server that required authentication. For a while, we messed with JSONP and CORS, but were not satisfied.
In the end, we setup an Apache Reverse Proxy and it worked miracles. The web browser believes it is communicating with its own domain and acts appropriately. The RESTful API doesn't know it is being used via a proxy. Therefore, everything just works. And Apache does all the magic.
Hopefully, all web servers have a feature like Apache's reverse proxy.
Here is some documentation on the feature: http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
All we had to do was ensure the mod_proxy module was installed, then add the following lines to our Apache config file:
ProxyPass /restapi http://restfulserver.com/restapi
ProxyPassReverse /restapi http://restfulserver.com/restapi
Then restart the web server and voila!
I get some json data with ajax when the page DOM is loaded using jQuery like this:
$(document).ready(function(){
    getData();
});
...where getData() is a simple jQuery ajax call, something like this:
function getData(){
    $.ajax({cache: true, dataType: 'json', url: '/foo/bar'});
}
The Expires header for this request is set to some time in the future, so the next time I load the page, the ajax call should use the cached data. Firefox 3 does not.
But, if I instead ask for the data like this:
$(document).ready(function(){
    setTimeout(getData, 1);
});
Firefox does respect the Expires header, and uses the cache. Any ideas why this would be?
This page mentions that browsers may treat ajax calls that occur when a page loads differently from ajax calls that occur in response to a user UI event.
Edit: I forgot to include the http headers in my original post. I think the headers are fine, because the caching works as long as the request isn't made in an ajax call when the page loads. If I visit the url that the ajax call uses in my browser URL bar, caching works, and as I explain above, caching works if I add a little delay to the ajax call.
Request headers
Host 10.0.45.64:5004
User-Agent Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.9) Gecko/20100824 Firefox/3.6.9
Accept application/json, text/javascript, */*
Accept-Language en-us,en;q=0.5
Accept-Encoding gzip,deflate
Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive 115
Connection keep-alive
X-Requested-With XMLHttpRequest
Cookie
Response headers
I set the Expires header to 1 week in the future so that users only need to refresh once a week.
Date Wed, 04 May 2011 15:32:04 GMT
Last-Modified Wed, 04 May 2011 15:32:03 GMT
Expires Wed, 11 May 2011 15:32:03 GMT
Content-Type text/javascript
Cache-Control Public
Connection close
Define an error handler in the $.ajax() call and inspect the response headers (using jqXHR.getAllResponseHeaders(), where jqXHR is the jQuery Ajax object), the status code, and responseText.length. You may find that the request is actually successful, but jQuery treats it as unsuccessful. I recently had a similar issue with cached files and $.ajax(): it turns out that files loaded while the browser is offline, or loaded from a local file, sometimes return a status code of 0. Because that status doesn't fall in the range of success codes (200-300), jQuery considers the request to have failed. See this for what I did to fix the issue. Basically, in your error handler, check responseText.length: if it is non-empty, consider the request successful and parse the JSON with JSON.parse(). But you have to make sure on the server side that invalid requests return an empty body.
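Here is a rough sketch of that error handler (the URL matches the question; the recovery logic is an assumption):

$.ajax({
    cache: true,
    dataType: 'json',
    url: '/foo/bar',
    error: function (jqXHR, textStatus, errorThrown) {
        console.log('status:', jqXHR.status);
        console.log('headers:', jqXHR.getAllResponseHeaders());

        if (jqXHR.responseText && jqXHR.responseText.length > 0) {
            // Status 0 but a non-empty body: treat it as a successful (cached) load.
            var data = JSON.parse(jqXHR.responseText);
            // ...continue as in the normal success path
        }
    }
});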