jQuery.getJSON - Access-Control-Allow-Origin Issue - javascript

I'm using jQuery's $.getJSON() function to return a short set of JSON data.
I've got the JSON data sitting on a url such as example.com.
I didn't realize it, but as I was accessing that same URL from a different domain, the JSON data couldn't be loaded. I checked the console and found that XMLHttpRequest couldn't load due to Access-Control-Allow-Origin.
Now, I've read through a lot of sites that just said to use $.getJSON() and that would be the workaround, but obviously it didn't work. Is there something I should change in the headers or in the function?
Help is greatly appreciated.

It's simple: use the $.getJSON() function and in your URL just include
callback=?
as a parameter. That will convert the call to JSONP, which is necessary to make cross-domain calls. More info: http://api.jquery.com/jQuery.getJSON/
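For instance, a minimal sketch (the URL and the shape of the data are placeholders); the callback=? in the URL is what makes jQuery switch to JSONP:
$.getJSON('http://example.com/data.json?callback=?', function (data) {
    // jQuery replaces the ? with a generated callback name and loads the URL as a script
    console.log(data);
});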

You may well want to use JSON-P instead (see below). First a quick explanation.
The header you've mentioned is from the Cross Origin Resource Sharing standard. Beware that it is not supported by some browsers people actually use, and on other browsers (Microsoft's, sigh) it requires using a special object (XDomainRequest) rather than the standard XMLHttpRequest that jQuery uses. It also requires that you change server-side resources to explicitly allow the other origin (www.xxxx.com).
To get the JSON data you're requesting, you basically have three options:
If possible, you can be maximally-compatible by correcting the location of the files you're loading so they have the same origin as the document you're loading them into. (I assume you must be loading them via Ajax, hence the Same Origin Policy issue showing up.)
Use JSON-P, which isn't subject to the SOP. jQuery has built-in support for it in its ajax call (just set dataType to "jsonp" and jQuery will do all the client-side work). This requires server side changes, but not very big ones; basically whatever you have that's generating the JSON response just looks for a query string parameter called "callback" and wraps the JSON in JavaScript code that would call that function. E.g., if your current JSON response is:
{"weather": "Dreary start but soon brightening into a fine summer day."}
Your script would look for the "callback" query string parameter (let's say that the parameter's value is "jsonp123") and wrap that JSON in the syntax for a JavaScript function call:
jsonp123({"weather": "Dreary start but soon brightening into a fine summer day."});
That's it. JSON-P is very broadly compatible (because it works via JavaScript script tags). JSON-P is only for GET, though, not POST (again because it works via script tags).
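As a rough illustration (the URL and the response field are placeholders, not from the original answer), the client side of the JSON-P option can be as small as:
$.ajax({
    url: 'http://example.com/weather',
    dataType: 'jsonp',          // tells jQuery to do the script-tag/callback dance for you
    success: function (data) {
        console.log(data.weather);
    }
});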
Use CORS (the mechanism related to the header you quoted). Details in the specification linked above, but basically:
A. The browser will send your server a "preflight" message using the OPTIONS HTTP verb (method). It will contain the various headers it would send with the GET or POST as well as the headers "Origin", "Access-Control-Request-Method" (e.g., GET or POST), and "Access-Control-Request-Headers" (the headers it wants to send).
B. Your PHP decides, based on that information, whether the request is okay and if so responds with the "Access-Control-Allow-Origin", "Access-Control-Allow-Methods", and "Access-Control-Allow-Headers" headers with the values it will allow. You don't send any body (page) with that response.
C. The browser will look at your response and see whether it's allowed to send you the actual GET or POST. If so, it will send that request, again with the "Origin" and various "Access-Control-Request-xyz" headers.
D. Your PHP examines those headers again to make sure they're still okay, and if so responds to the request.
In pseudo-code (I haven't done much PHP, so I'm not trying to do PHP syntax here):
// Find out what the request is asking for
corsOrigin = get_request_header("Origin")
corsMethod = get_request_header("Access-Control-Request-Method")
corsHeaders = get_request_header("Access-Control-Request-Headers")
if corsOrigin is null or "null" {
    // Requests from a `file://` path seem to come through without an
    // origin or with "null" (literally) as the origin.
    // In my case, for testing, I wanted to allow those and so I output
    // "*", but you may want to go another way.
    corsOrigin = "*"
}
// Decide whether to accept that request with those headers
// If so:
// Respond with headers saying what's allowed (here we're just echoing what they
// asked for, except we may be using "*" [all] instead of the actual origin for
// the "Access-Control-Allow-Origin" one)
set_response_header("Access-Control-Allow-Origin", corsOrigin)
set_response_header("Access-Control-Allow-Methods", corsMethod)
set_response_header("Access-Control-Allow-Headers", corsHeaders)
if the HTTP request method is "OPTIONS" {
    // Done, no body in response to OPTIONS
    stop
}
// Process the GET or POST here; output the body of the response
Again stressing that this is pseudo-code.
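For completeness, a minimal client-side sketch (the URL is a placeholder) of what the calling code can look like once CORS is set up on the server; the browser performs any preflight it decides is needed transparently, so the jQuery call itself stays ordinary:
$.ajax({
    url: 'http://www.xxxx.com/data.json',   // cross-origin URL, placeholder
    type: 'GET',
    dataType: 'json',
    success: function (data) {
        console.log(data);
    }
});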

Related

How to send http request with body property using fetch in javascript [duplicate]

I'm developing a new RESTful webservice for our application.
When doing a GET on certain entities, clients can request the contents of the entity.
If they want to add some parameters (for example sorting a list) they can add these parameters in the query string.
Alternatively I want people to be able to specify these parameters in the request body.
HTTP/1.1 does not seem to explicitly forbid this. It would allow them to specify more information, and might make it easier to specify complex XML requests.
My questions:
Is this a good idea altogether?
Will HTTP clients have issues with using request bodies within a GET request?
https://www.rfc-editor.org/rfc/rfc2616
Roy Fielding's comment about including a body with a GET request.
Yes. In other words, any HTTP request message is allowed to contain a message body, and thus must parse messages with that in mind. Server semantics for GET, however, are restricted such that a body, if any, has no semantic meaning to the request. The requirements on parsing are separate from the requirements on method semantics.
So, yes, you can send a body with GET, and no, it is never useful to do so.
This is part of the layered design of HTTP/1.1 that will become clear again once the spec is partitioned (work in progress).
....Roy
Yes, you can send a request body with GET but it should not have any meaning. If you give it meaning by parsing it on the server and changing your response based on its contents, then you are ignoring this recommendation in the HTTP/1.1 spec, section 4.3:
...if the request method does not include defined semantics for an entity-body, then the message-body SHOULD be ignored when handling the request.
And the description of the GET method in the HTTP/1.1 spec, section 9.3:
The GET method means retrieve whatever information ([...]) is identified by the Request-URI.
which states that the request-body is not part of the identification of the resource in a GET request, only the request URI.
Update
The RFC 2616 referenced as the "HTTP/1.1 spec" is now obsolete. In 2014 it was replaced by RFCs 7230-7237. The quote "the message-body SHOULD be ignored when handling the request" has been deleted. It's now just "Request message framing is independent of method semantics, even if the method doesn't define any use for a message body". The 2nd quote, "The GET method means retrieve whatever information ... is identified by the Request-URI", was deleted. - From a comment
From the HTTP 1.1 2014 Spec:
A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.
While you can do that, insofar as it isn't explicitly precluded by the HTTP specification, I would suggest avoiding it simply because people don't expect things to work that way. There are many phases in an HTTP request chain and while they "mostly" conform to the HTTP spec, the only thing you're assured is that they will behave as traditionally used by web browsers. (I'm thinking of things like transparent proxies, accelerators, A/V toolkits, etc.)
This is the spirit behind the Robustness Principle, roughly "be liberal in what you accept, and conservative in what you send": you don't want to push the boundaries of a specification without good reason.
However, if you have a good reason, go for it.
You will likely encounter problems if you ever try to take advantage of caching. Proxies are not going to look in the GET body to see if the parameters have an impact on the response.
Elasticsearch accepts GET requests with a body. It even seems that this is the preferred way: Elasticsearch guide
Some client libraries (like the Ruby driver) can log the cURL command to stdout in development mode and it is using this syntax extensively.
Neither restclient nor REST console support this but curl does.
The HTTP specification says in section 4.3
A message-body MUST NOT be included in a request if the specification of the request method (section 5.1.1) does not allow sending an entity-body in requests.
Section 5.1.1 redirects us to section 9.x for the various methods. None of them explicitly prohibit the inclusion of a message body. However...
Section 5.2 says
The exact resource identified by an Internet request is determined by examining both the Request-URI and the Host header field.
and Section 9.3 says
The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI.
Which together suggest that when processing a GET request, a server is not required to examine anything other than the Request-URI and Host header field.
In summary, the HTTP spec doesn't prevent you from sending a message-body with GET but there is sufficient ambiguity that it wouldn't surprise me if it was not supported by all servers.
GET, with a body!?
Specification-wise you could, but, it's not a good idea to do so injudiciously, as we shall see.
RFC 7231 §4.3.1 states that a body "has no defined semantics", but that's not to say it is forbidden. If you attach a body to the request, what your server/app makes of it is up to you. The RFC goes on to state that GET can be "a programmatic view on various database records". Obviously such a view is often tailored by a large number of input parameters, which are not always convenient or even safe to put in the query component of the request-target.
The good: I like the verbiage. It's clear that one can read/GET a resource without any observable side-effects on the server (the method is "safe"), and the request can be repeated with the same intended effect regardless of the outcome of the first request (the method is "idempotent").
The bad: An early draft of HTTP/1.1 forbade GET to have a body, and - allegedly - some implementations will even up until today drop the body, ignore the body or reject the message. For example, a dumb HTTP cache may construct a cache key out of the request-target only, being oblivious to the presence or content of a body. An even dumber server could be so ignorant that it treats the body as a new request, which effectively is called "request smuggling" (which is the act of sending "a request to one device without the other device being aware of it" - source).
Due to what I believe is primarily a concern about interoperability amongst implementations, work in progress suggests categorizing a GET body as a "SHOULD NOT", "unless [the request] is made directly to an origin server that has previously indicated, in or out of band, that such a request has a purpose and will be adequately supported" (emphasis mine).
The fix: There are a few hacks that can be employed for some of the problems with this approach. For example, body-unaware caches can indirectly become body-aware simply by appending a hash derived from the body to the query component, or caching can be disabled altogether by responding with a cache-control: no-cache header from the server.
Alas, when it comes to the request chain, one is often not in control of, or even aware of, all present and future HTTP intermediaries and how they will deal with a GET body. That's why this approach must be considered generally unreliable.
But POST is not idempotent!
POST is an alternative. The POST request usually includes a message body (just for the record, body is not a requirement, see RFC 7230 §3.3.2). The very first use case example from RFC 7231 (§4.3.3) is "providing a block of data [...] to a data-handling process". So just like GET with a body, what happens with the body on the back-end side is up to you.
The good: Perhaps a more common method to apply when one wishes to send a request body, for whatever purpose, and so it will likely yield the least amount of noise from your team members (some may still falsely believe that POST must create a resource).
Also, what we often pass parameters to is a search function operating upon constantly evolving data, and a POST response is only cacheable if explicit freshness information is provided in the response.
The bad: POST requests are not defined as idempotent, leading to request retry hesitancy. For example, on page reload, browsers are unwilling to resubmit an HTML form without prompting the user with a cryptic message.
The fix: Well, just because POST is not defined to be idempotent doesn't mean it mustn't be. Indeed, RFC 7230 §6.3.1 writes: "a user agent that knows (through design or configuration) that a POST request to a given resource is safe can repeat that request automatically". So, unless your client is an HTML form, this is probably not a real problem.
QUERY is the holy grail
There's a proposal for a new method QUERY which does define semantics for a message body and defines the method as idempotent. See this.
Edit: As a side-note, I stumbled into this StackOverflow question after having discovered a codebase where PUT requests were used solely for server-side search functions. This was their way to include a body with parameters and also be idempotent. Alas, the problem with PUT is that the request body has very precise semantics. Specifically, PUT "requests that the state of the target resource be created or replaced with the state [in the body]" (RFC 7231 §4.3.4). Clearly, this excludes PUT as a viable option.
You can either send a GET with a body or send a POST and give up RESTish religiosity (it's not so bad, 5 years ago there was only one member of that faith -- his comments linked above).
Neither are great decisions, but sending a GET body may prevent problems for some clients -- and some servers.
Doing a POST might have obstacles with some RESTish frameworks.
Julian Reschke suggested above using a non-standard HTTP method like SEARCH, which could be an elegant solution, except that it's even less likely to be supported.
It might be most productive to list clients that can and cannot do each of the above.
Clients that cannot send a GET with body (that I know of):
XmlHTTPRequest
Fiddler
Clients that can send a GET with body:
most browsers
Servers & libraries that can retrieve a body from GET:
Apache
PHP
Servers (and proxies) that strip a body from GET:
?
What you're trying to achieve has been done for a long time with a much more common method, and one that doesn't rely on using a payload with GET.
You can simply build your specific search mediatype, or if you want to be more RESTful, use something like OpenSearch, and POST the request to the URI the server instructed, say /search. The server can then generate the search result or build the final URI and redirect using a 303.
This has the advantage of following the traditional PRG method, helps cache intermediaries cache the results, etc.
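A rough sketch of that POST-redirect-GET flow in Node (the paths, parameter names, and result shape are assumptions for illustration, not part of the original answer):
const http = require('http');

http.createServer(function (req, res) {
    if (req.method === 'POST' && req.url === '/search') {
        let body = '';
        req.on('data', function (chunk) { body += chunk; });
        req.on('end', function () {
            const query = JSON.parse(body || '{}');
            // Build a canonical, GET-able URI for this search and redirect to it (303 See Other)
            res.writeHead(303, { 'Location': '/search/results?q=' + encodeURIComponent(query.q || '') });
            res.end();
        });
    } else if (req.method === 'GET' && req.url.indexOf('/search/results') === 0) {
        // Ordinary GET response that intermediaries can cache
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ results: [] }));   // placeholder result
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(8080);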
That said, URIs are encoded anyway for anything that is not ASCII, and so are application/x-www-form-urlencoded and multipart/form-data. I'd recommend using this rather than creating yet another custom JSON format if your intention is to support RESTful scenarios.
I put this question to the IETF HTTP WG. The comment from Roy Fielding (author of http/1.1 document in 1998) was that
"... an implementation would be broken to do anything other than to parse and discard that body if received"
RFC 7231 (HTTPbis) states:
"A payload within a GET request message has no defined semantics;"
It seems clear now that the intention was that semantic meaning on GET request bodies is prohibited, which means that the request body can't be used to affect the result.
There are proxies out there that will definitely break your request in various ways if you include a body on GET.
So in summary, don't do it.
From RFC 2616, section 4.3, "Message Body":
A server SHOULD read and forward a message-body on any request; if the
request method does not include defined semantics for an entity-body,
then the message-body SHOULD be ignored when handling the request.
That is, servers should always read any provided request body from the network (check Content-Length or read a chunked body, etc). Also, proxies should forward any such request body they receive. Then, if the RFC defines semantics for the body for the given method, the server can actually use the request body in generating a response. However, if the RFC does not define semantics for the body, then the server should ignore it.
This is in line with the quote from Fielding above.
Section 9.3, "GET", describes the semantics of the GET method, and doesn't mention request bodies. Therefore, a server should ignore any request body it receives on a GET request.
Which server will ignore it? – fijiaaron Aug 30 '12 at 21:27
Google, for instance, does worse than ignoring it; it will consider it an error!
Try it yourself with a simple netcat:
$ netcat www.google.com 80
GET / HTTP/1.1
Host: www.google.com
Content-length: 6
1234
(the 1234 content is followed by CR-LF, so that is a total of 6 bytes)
and you will get:
HTTP/1.1 400 Bad Request
Server: GFE/2.0
(....)
Error 400 (Bad Request)
400. That’s an error.
Your client has issued a malformed or illegal request. That’s all we know.
You do also get 400 Bad Request from Bing, Apple, etc... which are served by AkamaiGhost.
So I wouldn't advise using GET requests with a body entity.
According to XMLHttpRequest, it's not valid. From the standard:
4.5.6 The send() method
client . send([body = null])
Initiates the request. The optional argument provides the request
body. The argument is ignored if request method is GET or HEAD.
Throws an InvalidStateError exception if either state is not
opened or the send() flag is set.
The send(body) method must run these steps:
If state is not opened, throw an InvalidStateError exception.
If the send() flag is set, throw an InvalidStateError exception.
If the request method is GET or HEAD, set body to null.
If body is null, go to the next step.
Although I don't think it should be, because a GET request might need big body content.
So, if you rely on XMLHttpRequest of a browser, it's likely it won't work.
If you really want to send a cachable JSON/XML body to a web application, the only reasonable place to put your data is the query string, encoded with RFC 4648: Base 64 Encoding with URL and Filename Safe Alphabet. Of course you could just urlencode the JSON and put it in a URL param's value, but Base64 gives a smaller result. Keep in mind that there are URL size restrictions, see What is the maximum length of a URL in different browsers? .
You may think that Base64's padding = character may be bad for a URL param's value, however it seems not - see this discussion: http://mail.python.org/pipermail/python-bugs-list/2007-February/037195.html . However you shouldn't put the encoded data without a param name, because an encoded string with padding will be interpreted as a param key with an empty value.
I would use something like ?_b64=<encodeddata>.
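A minimal browser-side sketch of that (the _b64 name, endpoint, and payload are placeholders); it Base64-encodes the JSON with the URL-safe alphabet before putting it in the query string:
var payload = { sortby: 'itemname', order: 'asc' };
// btoa produces standard Base64 (assumes ASCII-only JSON here);
// swap + and / for - and _ to get the URL/filename-safe alphabet
var b64 = btoa(JSON.stringify(payload)).replace(/\+/g, '-').replace(/\//g, '_');
$.getJSON('/items?_b64=' + encodeURIComponent(b64), function (data) {
    console.log(data);
});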
I wouldn't advise this, it goes against standard practices, and doesn't offer that much in return. You want to keep the body for content, not options.
You have a list of options which are far better than using a request body with GET.
Let's assume you have categories and items for each category, both identified by an id ("catid" / "itemid" for the sake of this example). You want to sort according to another parameter "sortby" in a specific "order". You want to pass parameters for "sortby" and "order":
You can:
Use query strings, e.g.
example.com/category/{catid}/item/{itemid}?sortby=itemname&order=asc
Use mod_rewrite (or similar) for paths:
example.com/category/{catid}/item/{itemid}/{sortby}/{order}
Use individual HTTP headers you pass with the request
Use a different method, e.g. POST, to retrieve a resource.
All have their downsides, but are far better than using a GET with a body.
What about nonconforming base64 encoded headers? "SOMETHINGAPP-PARAMS:sdfSD45fdg45/aS"
Length restrictions hm. Can't you make your POST handling distinguish between the meanings? If you want simple parameters like sorting, I don't see why this would be a problem. I guess it's certainty you're worried about.
I'm upset that REST as a protocol doesn't support OOP, and the GET method is proof. As a solution, you can serialize a DTO to JSON and then create a query string. On the server side you'll be able to deserialize the query string back into the DTO.
Take a look on:
Message-based design in ServiceStack
Building RESTful Message Based Web Services with WCF
A message-based approach can help you solve the GET method restriction. You'll be able to send any DTO as you would with a request body.
The Nelibur web service framework provides functionality which you can use:
var client = new JsonServiceClient(Settings.Default.ServiceAddress);
var request = new GetClientRequest
{
    Id = new Guid("17239b0e-b35b-4d32-95c7-5db43e2bd573")
};
var response = client.Get<GetClientRequest, ClientResponse>(request);
as you can see, the GetClientRequest was encoded to the following query string
http://localhost/clients/GetWithResponse?type=GetClientRequest&data=%7B%22Id%22:%2217239b0e-b35b-4d32-95c7-5db43e2bd573%22%7D
IMHO you could just send the JSON encoded (ie. encodeURIComponent) in the URL, this way you do not violate the HTTP specs and get your JSON to the server.
For example, it works with Curl, Apache and PHP.
PHP file:
<?php
echo $_SERVER['REQUEST_METHOD'] . PHP_EOL;
echo file_get_contents('php://input') . PHP_EOL;
Console command:
$ curl -X GET -H "Content-Type: application/json" -d '{"the": "body"}' 'http://localhost/test/get.php'
Output:
GET
{"the": "body"}
Even if a popular tool uses this, as cited frequently on this page, I think it is still quite a bad idea, being too exotic, despite not being forbidden by the spec.
Many intermediate infrastructures may just reject such requests.
For example, forget about using some of the available CDNs in front of your web site, like this one:
If a viewer GET request includes a body, CloudFront returns an HTTP status code 403 (Forbidden) to the viewer.
And yes, your client libraries may also not support emitting such requests, as reported in this comment.
If you want to allow a GET request with a body, a way is to support a POST request with the header "X-HTTP-Method-Override: GET". It is described here: https://en.wikipedia.org/wiki/List_of_HTTP_header_fields. This header means that while the method is POST, the request should be treated as if it were a GET. A body is allowed for POST, so you're sure nobody will drop the payload of your GET requests.
This header is often used to make PATCH or HEAD requests through proxies that do not recognize those methods and replace them with GET (always fun to debug!).
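For illustration only (the URL and payload are hypothetical, and the server has to explicitly support the override), such a request could look like this from jQuery:
$.ajax({
    type: 'POST',                                      // real method on the wire
    url: '/api/search',
    headers: { 'X-HTTP-Method-Override': 'GET' },      // ask the server to treat it as GET
    contentType: 'application/json',
    data: JSON.stringify({ sortby: 'itemname', order: 'asc' }),
    success: function (result) {
        console.log(result);
    }
});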
An idea on an old question:
Add the full content in the body, and a short hash of the body in the querystring, so caching won't be a problem (the hash will change if the body content changes) and you'll be able to send tons of data when needed :)
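A rough Node sketch of that idea (the host, path, and parameter name are assumptions); browsers' XMLHttpRequest and fetch drop GET bodies, so this is shown with Node's http module, which will happily send one:
const http = require('http');
const crypto = require('crypto');

function getWithBody(payload, callback) {
    const body = JSON.stringify(payload);
    // Short hash of the body goes in the query string, so body-unaware caches
    // still get a distinct cache key per query
    const hash = crypto.createHash('sha256').update(body).digest('hex').slice(0, 16);
    const req = http.request({
        host: 'api.example.com',        // hypothetical host
        path: '/search?h=' + hash,
        method: 'GET',
        headers: {
            'Content-Type': 'application/json',
            'Content-Length': Buffer.byteLength(body)
        }
    }, function (res) {
        let data = '';
        res.on('data', function (chunk) { data += chunk; });
        res.on('end', function () { callback(null, data); });
    });
    req.on('error', callback);
    req.write(body);                    // the GET body itself
    req.end();
}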
Create a RequestFactory class
import java.net.URI;
import javax.annotation.PostConstruct;
import org.apache.http.client.methods.HttpEntityEnclosingRequestBase;
import org.apache.http.client.methods.HttpUriRequest;
import org.springframework.http.HttpMethod;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;
@Component
public class RequestFactory {

    private RestTemplate restTemplate = new RestTemplate();

    @PostConstruct
    public void init() {
        this.restTemplate.setRequestFactory(new HttpComponentsClientHttpRequestWithBodyFactory());
    }

    private static final class HttpComponentsClientHttpRequestWithBodyFactory extends HttpComponentsClientHttpRequestFactory {
        @Override
        protected HttpUriRequest createHttpUriRequest(HttpMethod httpMethod, URI uri) {
            if (httpMethod == HttpMethod.GET) {
                return new HttpGetRequestWithEntity(uri);
            }
            return super.createHttpUriRequest(httpMethod, uri);
        }
    }

    private static final class HttpGetRequestWithEntity extends HttpEntityEnclosingRequestBase {
        public HttpGetRequestWithEntity(final URI uri) {
            super.setURI(uri);
        }

        @Override
        public String getMethod() {
            return HttpMethod.GET.name();
        }
    }

    public RestTemplate getRestTemplate() {
        return restTemplate;
    }
}
and @Autowired it wherever you require it. Here is a sample GET request with a RequestBody:
@RestController
@RequestMapping("/v1/API")
public class APIServiceController {

    @Autowired
    private RequestFactory requestFactory;

    @RequestMapping(method = RequestMethod.GET, path = "/getData")
    public ResponseEntity<APIResponse> getLicenses(@RequestBody APIRequest2 APIRequest) {
        APIResponse response = new APIResponse();
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        Gson gson = new Gson();
        try {
            StringBuilder createPartUrl = new StringBuilder(PART_URL).append(PART_URL2);
            HttpEntity<String> entity = new HttpEntity<String>(gson.toJson(APIRequest), headers);
            ResponseEntity<APIResponse> storeViewResponse = requestFactory.getRestTemplate().exchange(createPartUrl.toString(), HttpMethod.GET, entity, APIResponse.class); //.getForObject(createLicenseUrl.toString(), APIResponse.class, entity);
            if (storeViewResponse.hasBody()) {
                response = storeViewResponse.getBody();
            }
            return new ResponseEntity<APIResponse>(response, HttpStatus.OK);
        } catch (Exception e) {
            e.printStackTrace();
            return new ResponseEntity<APIResponse>(response, HttpStatus.INTERNAL_SERVER_ERROR);
        }
    }
}

AJAX request gets "No 'Access-Control-Allow-Origin' header is present on the requested resource" error

I attempt to send a GET request in a jQuery AJAX request.
$.ajax({
type: 'GET',
url: /* <the link as string> */,
dataType: 'text/html',
success: function() { alert("Success"); },
error: function() { alert("Error"); },
});
However, whatever I've tried, I got XMLHttpRequest cannot load <page>. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:7776' is therefore not allowed access.
I tried everything, from adding header : {} definitions to the AJAX request to setting dataType to JSONP, or even text/plain, using simple AJAX instead of jQuery, even downloading a plugin that enables CORS - but nothing could help.
And the same happens if I attempt to reach any other sites.
Any ideas for a proper and simple solution? Is there any at all?
This is by design. You can't make an arbitrary HTTP request to another server using XMLHttpRequest unless that server allows it by putting out an Access-Control-Allow-Origin header for the requesting host.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
You could retrieve it in a script tag (there isn't the same restriction on scripts and images and stylesheets), but unless the content returned is a script, it won't do you much good.
Here's a tutorial on CORS:
http://www.bennadel.com/blog/2327-cross-origin-resource-sharing-cors-ajax-requests-between-jquery-and-node-js.htm
This is all done to protect the end user. Assuming that an image is actually an image, a stylesheet is just a stylesheet and a script is just a script, requesting those resources from another server can't really do any harm.
But in general, cross-origin requests can do really bad things. Say that you, Zoltan, are using coolsharks.com. Say also that you are logged into mybank.com and there is a cookie for mybank.com in your browser. Now, suppose that coolsharks.com sends an AJAX request to mybank.com, asking to transfer all your money into another account. Because you have a mybank.com cookie stored, they successfully complete the request. And all of this happens without your knowledge, because no page reload occurred. This is the danger of allowing general cross-site AJAX requests.
If you want to perform cross-site requests, you have two options:
Get the server you are making the request to, to either
a. Admit you by putting out an Access-Control-Allow-Origin header that includes you (or *)
b. Provide you with a JSONP API.
or
Write your own browser that doesn't follow the standards and has no restrictions.
In (1), you must have the cooperation of the server you are making requests to, and in (2), you must have control over the end user's browser. If you can't fulfill (1) or (2), you're pretty much out of luck.
However, there is a third option (pointed out by charlietfl). You can make the request from a server that you do control and then pass the result back to your page. E.g.
<script>
$.ajax({
type: 'GET',
url: '/proxyAjax.php?url=http%3A%2F%2Fstackoverflow.com%2F10m',
dataType: 'text/html',
success: function() { alert("Success"); },
error: function() { alert("Error"); }
});
</script>
And then on your server, at its most simple:
<?php
// proxyAjax.php
// ... validation of params
// and checking of url against whitelist would happen here ...
// assume that $url now contains "http://stackoverflow.com/10m"
echo file_get_contents($url);
Of course, this method may run into other issues:
Does the site you are a proxy for require the correct referrer or a certain IP address?
Do cookies need to be passed through to the target server?
Does your whitelist sufficiently protect you from making arbitrary requests?
Which headers (e.g. modify time, etc) will you be passing back to the browser as your server received them and which ones will you omit or change?
Will your server be implicated as having made a request that was unlawful (since you are acting as a proxy)?
I'm sure there are others. But if none of those issues prevent it, this third method could work quite well.
You can ask the developers of that domain if they would set the appropriate header for you. This restriction only applies to JavaScript; basically you can request the resource from your server with PHP or whatever, and then have your JavaScript request the data from your own domain.
Old question, but I'm not seeing this solution, which worked for me, anywhere. So hoping this can be helpful for someone.
First, remember that it makes no sense to try modifying the headers of the request to get around a cross-origin resource request. If that were all it took, it would be easy for malicious users to then circumvent this security measure.
Cross-origin requests in this context are only possible if the partner site's server allows it through their response headers.
I got this to work in Django without any CORS middleware by setting the following headers on the response:
response["Access-Control-Allow-Origin"] = "requesting_site.com"
response["Access-Control-Allow-Methods"] = "GET"
response["Access-Control-Allow-Headers"] = "requesting_site.com"
Most answers on here seem to mention the first one, but not the second two. I've just confirmed they are all required. You'll want to modify as needed for your framework or request method (GET, POST, OPTIONS).
p.s. You can try "*" instead of "requesting_site.com" for initial development just to get it working, but it would be a security hole to allow every site access. Once working, you can restrict it for your requesting site only to make sure you don't have any formatting typos.

JSON/JSONP how to use for(;;); in the response body

I can't seem to figure out a way to ignore the for(;;); in the response body of my cross domain JSONP requests. I am doing this on my own servers, nothing else going on here. I am trying to include that for(;;); inside the response body of my callback as such:
_callbacks_.callback(for(;;);[jsondata....]);
but how can I remove it from the response body before the JS code gets parsed? I am using the Google Closure Library btw.
Ok I think I figured it out.
The reason why the for(;;); is there is to prevent cross-domain data requests of certain information. So basically if you have information you are trying to protect you go through a normal Ajax JSON channel and if you are storing data on multiple servers you deal with it on server level.
JSONP requests are actually a remote script inclusion, which means whatever the server outputs is actual Javascript code, so if you have a for(;;); before your _callbacks_.callback(); the code will be executed on the origin domain on request success. If it's an infinite for loop, it will obviously jam the page.
So the normal implementation method is the following:
Send a normal Ajax request to a file located on the same server.
Perform the server level stuff and send requests to external servers via encrypted CURL.
Add security to the server response (a for(;;); or while(1); or throw(1); followed by a <prevent eval statements> string).
Get the response as a text string.
Remove your security implementations from the string.
Convert the string(which is now a "JSON string") to a JS Object/Array etc with a standard JSON parser.
Do whatever you want to do with the data.
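As a small illustration of steps 4-6 (shown with jQuery for brevity; the endpoint and the exact prefix are placeholders, use whatever your server actually prepends):
var SECURITY_PREFIX = 'for(;;);';

function parseGuardedJson(text) {
    // Strip the anti-JSON-hijacking prefix, then parse the remainder as JSON
    if (text.indexOf(SECURITY_PREFIX) === 0) {
        text = text.slice(SECURITY_PREFIX.length);
    }
    return JSON.parse(text);
}

$.ajax({
    url: '/data.php',          // same-origin endpoint (step 1)
    dataType: 'text',          // fetch the response as a plain string (step 4)
    success: function (raw) {
        var data = parseGuardedJson(raw);   // steps 5 and 6
        // do whatever you want with the data (step 7)
    }
});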
Just thought I should put this out here in case someone else will Google it in the future, as I didn't find proper information by Google-ing. This should help prevent cross domain request forgery.

Difference jsonp and simple get request (cross domain)

I have to send (and receive) certain data to a server using JQuery and JSON.
Works so far, but not cross domain, and it has to be cross domain.
I looked at how to solve this and found JSONP. As far as I can see, using JSONP I have to send the callback and the data using GET (jQuery allows "POST" as the method, but when I inspect the web traffic I see it is actually sending GET and everything as parameters).
JSONP also requires changes in the server, because they are expecting a POST request with JSON data, and they have to implement something to process the JSONP GET request.
So I'm wondering what's the difference between this and sending the data as key value parameters in GET request?
Is the difference the possibility to use a callback? Or what exactly?
Sorry a bit lost... thanks in advance
JSONP is not a form submission. It is a way of telling the server via a GET request how to generate the content for a script tag. The data returned is a payload of JavaScript (not just JSON!) with a function call to a callback which you (by convention) reference in the GET request.
JSONP works because it is a hack that doesn't use AJAX. It isn't AJAX and you should not confuse it for such because it does not use a XMLHttpRequest at any point to send the data. That is how it gets around the Same Origin Policy.
Depending on the browsers you have to support, you can implement Cross-Origin Resource Sharing headers on the server side which will let you use normal AJAX calls across trusted domains. Most browsers (IE8, Firefox 3.5+, etc.) will support CORS.
Another solution you can use if you don't want to use CORS or JSONP is to write a PHP script or Java servlet that will act as a proxy. That's as simple as opening a new connection from the script, copying all of the incoming parameters from your AJAX code onto the request and then dumping the response back at the end of your script.
I found an answer that worked for me with the cross-domain issue and JSON (not JSONP).
I simply used:
header('Access-Control-Allow-Origin: *');
inside my json file (file.php) and called it like this:
var serviceURL = 'http://your-domain.com/your/json/location.php'
$.getJSON(serviceURL,function (data) {
var entries = data;
//do your stuff here using your entries in json
});
BTW: this is a receiving process, not sending.

Submit cross domain ajax POST request

I swear I saw an article about this at one point but can not find it...
How can I perform a jQuery ajax request of type POST on another domain? Must be accomplished without a proxy. Is this possible?
Yes you can POST all you want, even $.post() works...but you won't get a response back.
This works, the other domain will get the POST:
$.post("http://othersite.com/somePage.php", { thing: "value" }, function(data) {
//data will always be null
});
But the response, data in the above example, will be null due to the same-origin policy.
All the options I've experimented with:
1) PORK: http://www.schizofreend.nl/Pork.Iframe/Examples/ Creates an iframe and submits the post there, then reads the response. Still requires the same base domain per request (i.e. www.foo.com can request data from www2.foo.com, but not from www.google.com). Also requires you to fiddle with the document.domain property, which causes adverse side effects. And there's a pervasive problem in all the major browsers where reloading the page basically shuffles the cached contents of all iframes on the page if any of them are dynamically written. Your response data will show up in the box where an ad is supposed to be.
2) flxhr: http://flxhr.flensed.com/ Can even be used to mask jQuery's built-in ajax so you don't even notice it. Requires flash though, so iPhone is out
3) jsonp: Doesn't work if you're posting a lot of data. boo.
4) chunked jsonp: When your jsonp request is too big, break the query string up into manageable chunks and send multiple GET requests. Reconstruct them on the server (see the sketch at the end of this answer). This is helpful but breaks down if you're load balancing users between servers.
5) CORS: http://www.w3.org/TR/cors/ doesn't work in older browsers (IE7, IE6, Firefox 2, etc)
So we currently do the following algorithm:
If request is small enough, use JSONP
If not small enough, but user has flash, use FlXHR
Else use chunked JSONP
Spend one afternoon writing that up and you'll be able to use it for good. Adding CORS to our algorithm might be good for faster iPhone support.
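A rough client-side sketch of the chunked JSONP idea mentioned above (the endpoint, parameter names, and chunk size are assumptions, and the server-side reassembly is not shown):
function sendChunkedJsonp(payload) {
    var data = JSON.stringify(payload);
    var chunkSize = 1500;                        // stay well under typical URL limits
    var id = 'req_' + new Date().getTime();      // lets the server group the pieces
    var total = Math.ceil(data.length / chunkSize);
    for (var i = 0; i < total; i++) {
        $.ajax({
            url: 'http://othersite.com/chunk',   // hypothetical cross-domain endpoint
            dataType: 'jsonp',
            data: { id: id, part: i, of: total, chunk: data.substr(i * chunkSize, chunkSize) }
        });
    }
}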
If you have control over the code running at the other domain, just let it return an appropriate Access-Control-Allow-Origin header in the response. See also HTTP Access-Control at MDC.
If you want a fire and forget POST where you don't care about the response then just submit a form to a hidden iframe. This requires a Transitional Doctype.
<form method="POST" action="http://example.com/" target="name_of_iframe">
If you want to parse the response, then using a proxy if the only real option.
If you are desperate, and control the remote site, then you can:
Submit a form as above
Set a cookie in the response (which might be blocked, because the iframe could cause the cookie to be considered '3rd party', i.e. likely to be treated like advertising tracking)
Wait long enough for the response to come back
Dynamically generate a script element with the src pointing to the remote site
Use JSON-P in the response and take advantage of the data previously stored in the cookie
This approach is subject to race conditions and is generally ugly. Proxying the data through the current domain is a much better approach.
If you need to know that the POST was successful, and don't have control over the remote server:
$.ajax({
    type: "POST",
    url: "http://www.somesite.com/submit",
    data: 'firstname=test&lastname=person&email=test@test.com',
    complete: function(response) {
        if (response.status == 0 && response.statusText == "success")
        {
            /* CORS POST was successful */
        }
        else
        {
            /* Show error message */
        }
    }
});
If there was a problem with the submission then response.statusText should equal "error".
Note: some remote servers will send the HTTP header Access-Control-Allow-Origin: *, which will result in a 200 OK HTTP status code response. In that case, ajax will execute the success handler, and this method is not needed. To look at the response just do console.log(JSON.stringify(response)); or use FireBug's 'Net' panel.
