The users of my website are seeing intermittent ERR_SSL_PROTOCOL_ERROR when making cross-domain requests to api.flickr.com.
By intermittent I mean that I've seen this happen 4 times out of ~1200 requests to the API yesterday.
Failed to load resource: net::ERR_SSL_PROTOCOL_ERROR https://api.flickr.com/services/rest/?method=flickr.photos.getInfo&api_key=.....
My site is an AngularJS application running on Google App Engine and is exclusively available over HTTPS.
sslchecker shows that my site's certificate and certificate chain are installed correctly. Well, I think it looks OK!
sslchecker for api.flickr.com shows that ROOT 1 of the certificate chain is missing. Is that the problem? Is there any way around that for me?
Any other ideas? Could the problem be that our certificates are issued by different authorities?
Edit - some other possibly relevant info gleaned from Google Analytics:
I have seen it happen on different OSes - Android, iOS, Windows
Different browsers - the Android browser, Chrome, Safari
Different Network Domains
Persistent SSL Protocol Errors may be caused by problems such as:
the destination server expecting a different protocol version (e.g. SSLv2, SSLv3, or a particular TLS version)
a violation of a security policy (e.g. some servers don't honor certificate requests made from the client)
firewall interference - filtering or cipher restrictions
Intermittent SSL Protocol Errors are very hard to diagnose. They can be the result of an expired session, an expired key, a connectivity hiccup, lost packets, etc.
Even worse, they can be caused by server-side issues such as date/time sync problems, a full server connection pool, etc.
Best practice is to re-send the request: such issues are often a temporary glitch, and the second attempt usually succeeds.
Flickr switched their API to SSL-only on June 27th, 2014 (a little under a year ago). Their forum has blown up with SSL-related problems since then.
In the past few months many users have reported (check the thread) sporadic SSL Protocol Errors.
These protocol errors appear across all device types (laptops, desktops, mobile, Linux, Windows, etc.) and an immediate retry is usually successful. The breadth of affected clients and the highly infrequent nature of these problems indicate there is some issue on the host side, completely unrelated to anything on the client.
Since a refresh or second attempt is usually successful, I suggest trapping the error and making 1-3 more attempts:
var promise = flickrService.get(...);
promise.success(function (data, status, headers, config) {
    // Big Party
})
.error(function (data, status, headers, config) {
    if (status == 107) {
        // Intermittent SSL errors usually clear up on the next attempt, so retry once
        promise = flickrService.get(...);
        promise.success(function (data, status, headers, config) {
            // Big Party
        })
        .error(function (data, status, headers, config) {
            AlertService.RaiseErrorAlert("Flickr temporarily unavailable. Please try again later.");
        });
    }
});
If you continue to get a "Protocol Error", then inform the user that Flickr is temporarily unavailable and to try again later.
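The snippet above retries only once; since the guidance is 1-3 extra attempts, here is a minimal sketch of a small retry helper. It assumes $q and flickrService are injected into the surrounding AngularJS component and that flickrService.get(...) returns an $http-style promise with .success/.error; the function and handler names are illustrative only.
// Hypothetical helper: retry the Flickr call up to maxAttempts times before giving up.
function getWithRetry(maxAttempts) {
    var deferred = $q.defer();

    function attempt(remaining) {
        flickrService.get(/* ... */)
            .success(function (data) {
                deferred.resolve(data);      // good response: stop retrying
            })
            .error(function (data, status) {
                if (remaining > 1) {
                    attempt(remaining - 1);  // transient SSL/network glitch: try again
                } else {
                    deferred.reject(status); // still failing: give up
                }
            });
    }

    attempt(maxAttempts);
    return deferred.promise;
}

// Usage (handler names are placeholders):
// getWithRetry(3).then(onPhotos, onFlickrUnavailable);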
If you run into this error and you are testing a localhost endpoint, just make sure you use http instead of https in your URL,
e.g. http://localhost:8080/ not https://localhost:8080/
This might be the answer, but I'm guessing this is probably not a client issue, so I would suggest updating your API's server to add this line to the response headers:
Access-Control-Allow-Origin: https://api.flickr.com/*
This should fix the troubles some of your users are facing.
Related
We have been encountering inconsistent client errors with a single-page JavaScript application making fetch requests. Of note, they are all same-origin requests.
let request = new Request(url, options);
...
window.fetch(request)
.then(response => response.json())
.then(data => ...)
.catch(error => ...)
Around 5% of the promises are rejecting with the following error despite the server and the browser receiving a 200 OK response:
TypeError: Failed to fetch
I'm stumped... All of my searches lead to discussions about CORS errors. That doesn't seem to apply given these are all same-origin requests. What is causing the fetch to throw the TypeError?
I can confirm using the Network tab in Chrome DevTools that the fetch request completes with a 200 OK response and valid JSON. I can also confirm that the URLs are same-origin. I can also confirm that there are no CORS pre-flight requests. I have reproduced this issue on Chrome 66 and Safari 11.1. However, we've received a stream of error reports from a mix of Chrome and Safari versions, both desktop and mobile.
EDIT:
This does not appear to be a duplicate of the linked question as we are not sending CORS requests, not setting mode: "no-cors", and not setting the Access-Control-Allow-Origin header.
Additionally, I re-ran tests with the mode: 'same-origin' option set explicitly. The requests are (still) successful; however, we (still) receive the intermittent TypeError.
I know that this is an old issue, but after searching the entire evening I want to share my findings so you can spend your time better.
My web app also worked well for most users, but from time to time visitors received the error mentioned in the question. I'm not using any complicated infrastructure (reverse proxy, etc.), nor do I communicate with services on a different domain/protocol/port. I'm just sending a POST request to a PHP file on the same server the React app is served from.
The short answer: My problem was that I sent the request to the backend using an absolute URL, like https://my-fancy-domain.com/funky_service.php. After changing this to a relative path like /funky_service.php the issue was gone.
My explanation: Most users come to the site without www in the URL, but some users actually type that part into their address bars (www.my-fancy...). It turns out that the www is part of the origin, so when these users submit the form and send POST requests to https://my-fancy... it is technically another origin. This is why the browser expects CORS headers and sometimes even sends an OPTIONS preflight request. When you use a relative path in your JavaScript code, the POST request also includes the www part (it uses the origin from the address bar) -> same origin -> no CORS hassle. Since it only affects visitors who come to the site with the www, this also explains why it worked for most users even with the absolute URL.
Also important to know: the request fails in the browser/JavaScript code but is actually sent to the backend (very ugly!).
Let me know if you need more information. It is actually very simple, but hard to explain (and to find).
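To make the change concrete, here is a minimal sketch; the endpoint name is the one from the example above and the request body is made up for illustration:
// Illustrative request body
const body = JSON.stringify({ name: 'test' });

// Before: absolute URL. For visitors who arrive on www.my-fancy-domain.com this is a
// different origin, so the browser treats the request as CORS and may preflight or block it.
// fetch('https://my-fancy-domain.com/funky_service.php', { method: 'POST', body: body });

// After: relative URL. It resolves against whatever origin is in the address bar,
// so the request is always same-origin and needs no CORS headers.
fetch('/funky_service.php', { method: 'POST', body: body })
    .then(response => response.json())
    .then(data => console.log(data))
    .catch(error => console.error(error));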
The issue could be with the response you are receiving from the back end. If it was working fine on the server, then the problem could be with the response headers. Check the Access-Control-Allow-Origin (ACAO) in the response headers. The Fetch API will often throw "Failed to fetch" even after receiving a response when the response's ACAO header and the origin of the request don't match.
Ref: Getting "TypeError: failed to fetch" when the request hasn't actually failed
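If the header really is the mismatch, one common fix is to echo the request's Origin back instead of hard-coding a value. A rough sketch, assuming a Node/Express backend (adapt the idea to whatever your server actually runs; the endpoint is made up):
const express = require('express');
const app = express();

app.use((req, res, next) => {
    const origin = req.headers.origin;
    if (origin) {
        res.setHeader('Access-Control-Allow-Origin', origin); // always matches the requesting origin
        res.setHeader('Vary', 'Origin');                      // keeps caches from serving the wrong origin
    }
    next();
});

app.get('/api/data', (req, res) => res.json({ ok: true }));   // illustrative endpoint

app.listen(3000);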
I'm seeing a strange issue (only in production of course) where some $.post requests are failing intermittently with the error:
{"readyState":0,"status":0,"statusText":"error"}
I've looked into this and in most cases there seem to be two possible issues:
1) Cross-domain errors. This shouldn't be an issue since the Chrome extension is set up to be able to access the domain; additionally, since it only fails sometimes, this seems like it shouldn't be the problem.
2) Not calling e.preventDefault(). The request is being made from the background process, so I don't think this is the issue. Again, the fact that it is intermittent makes this unlikely.
Additionally, it seems unlikely that it's purely network-related, since the only way we know about these issues is by posting to an endpoint on our server to retrieve the errors... i.e. if the computer were offline, the request sent in the fail block should fail as well.
Are there any other known issues that could cause the query to fail with that error?
UPDATE
Here's some pseudo code for the way the request is made:
$.post(this._baseUrl + "/my_endpoint", JSON.stringify(data))
    .done(function (res) {
        // SUCCESS!
    })
    .fail(function (error, textStatus, errorThrown) {
        // {"readyState":0,"status":0,"statusText":"error"}
    }.bind(this));
I have a site running on HTTPS which is trying to reach a Windows service that is running as an HTTP server, using an http://localhost address, via AJAX. However, this is returning an "Access is denied" error. It works fine when calling from HTTP, but that is not an option beyond testing. We are also limited to using Internet Explorer (9+) only.
I have set the "Allow mixed content" security setting to "Enable" for the respective zone, but it is still getting blocked.
The AJAX call looks like this:
$.ajax({
    url: 'http://localhost:5923/somefunction',
    data: {
        sid: sid,
        aid: aid
    },
    success: function (ret) {
        //...
    },
    error: function (error, status, errThrown) {
        alert(errThrown);
    }
});
I know modifying the windows service to function over https is the best solution long term, but does anyone have any suggestions for IE settings that would allow mixed active content, or any other interim fixes?
Thanks in advance.
You need to enable cross-origin access, so go to Tools -> Internet Options -> Security tab and click the “Custom Level” button for the zone of your choice. Go to Miscellaneous -> “Access data sources across domains” and select the “Enable” option.
Is this code blocked by SOP (Same Origin Policy)?
If so, set "crossDomain" setting to "true".
http://api.jquery.com/jquery.ajax/
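For reference, applied to the call from the question it would look roughly like this (URL and parameters as above):
$.ajax({
    url: 'http://localhost:5923/somefunction',
    crossDomain: true,   // the setting suggested above
    data: {
        sid: sid,
        aid: aid
    },
    success: function (ret) {
        //...
    },
    error: function (error, status, errThrown) {
        alert(errThrown);
    }
});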
I've posted before on this subject, but after a year of getting on with other things, I've managed to get into a pickle once again. I'll try and give a brief overview of the scenario and the current attempts to make things work:
IIS web server hosting HTML, JS etc. on host: iis.mycompany.com (referred to as foo)
WCF RESTful web services hosted via a Windows Service on host: wcf.mycompany.com (referred to as bar)
The JavaScript served from foo works by making RESTful AJAX calls (GET or POST depending on the action) to the WCF services on bar; obviously these are cross-domain calls as they aren't on the same host.
The JavaScript uses the jQuery (1.7.2) framework to manipulate the DOM and perform AJAX calls to bar; the expected content type for POSTs is JSON, and the response from GETs is expected to be JSON too (application/json).
Bar has its WCF services configured using TransportCredentialOnly as the security mode, and the transport client credential type is NTLM, so only authenticated users can contact the services.
CORS Support has been added to bar's WCF services using an extension to WCF:
http://blogs.msdn.com/b/carlosfigueira/archive/2012/05/15/implementing-cors-support-in-wcf.aspx
We have added additional headers and modified some that the post already contained, based on numerous internet articles:
property.Headers.Add("Access-Control-Allow-Headers", "Accept, Content-Type");
property.Headers.Add("Access-Control-Allow-Methods", "POST, GET, OPTIONS");
property.Headers.Add("Access-Control-Max-Age", "172800");
property.Headers.Add("Access-Control-Allow-Origin", "http://iis.mycompany.com");
property.Headers.Add("Access-Control-Allow-Credentials", "true");
property.Headers.Add("Content-type", "application/json");
Sites giving information on enabling CORS suggest that the Access-Control-Allow-Origin response header should be set to "*"; however, this is not possible in our case, as we make jQuery AJAX calls using the following setup:
$.ajaxSetup({
    cache: false,        // boolean false, not the string "false"
    crossDomain: true,
    xhrFields: {
        withCredentials: true
    }
});
As it turns out you cannot use "*" for the accepted origin when you are using "withCredentials" in the ajax call:
https://developer.mozilla.org/en/http_access_control
"Important note: when responding to a credentialed request, server
must specify a domain, and cannot use wild carding."
Currently in our development lab, this doesn't matter as we can hard code the requests to the IIS (foo) server URL.
The main problem now appears to be POST requests (GET is working with the above configuration). When the browser attempts the POST, it first sends an OPTIONS request to the server asking which options are allowed for the subsequent POST. This is where we would like to see the headers we've configured in the CORS support WCF extension being passed back; however, we aren't getting that far: the response comes back as "401 Unauthorized". I believe this is to do with the transport security binding configuration requiring NTLM, but I'm not sure.
Also, I'm not very experienced with this, but I haven't seen much information about POSTing with the application/json content type, as opposed to text/plain, when performing cross-domain requests.
I know that people will probably suggest JSONP as the one true solution. I'm not against different approaches; indeed I encourage anyone to suggest best practices, as it would help others reading this question later. However, please attempt to answer the question before suggesting alternatives to it.
Many thanks in advance for anyone who contributes.
peteski
:)
UPDATE:
It appears that Chrome (20.x.x) negotiates NTLM for the OPTIONS response from the server without issue, but Firefox (13.0.1) fails to.
We've also noticed that someone has already posted a bug up on the Firefox forum, which we've added information to:
http://bugzilla.mozilla.org/show_bug.cgi?id=751552
Please vote for this bug to be fixed on the bugzilla site!
Using the following code, we can watch the network trace to see Firefox failing and Chrome working fine:
var url = "http://myWebServiceServer/InstantMessagingService/chat/message/send";
var data = '{ "remoteUserUri" : "sip:foo.bar@mydomain.com", "message" : "This is my message" }';
var request = new XMLHttpRequest();
request.open("POST", url, true);
request.withCredentials = true;
request.setRequestHeader("Content-Type", "application/json");
request.send(data);
console.log(request);
On a separate note, IE8 doesn't support XMLHttpRequest for cross-domain calls, favouring its own magical XDomainRequest object, so we've got some work to do in changing the client-side code to handle the IE8-vs-the-world cases. (Thanks, IE8.)
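A rough sketch of that IE8 branch, reusing the URL and payload from the snippet above (and bearing in mind that XDomainRequest cannot send credentials or custom headers, so this only covers the plumbing, not the NTLM problem):
var url = "http://myWebServiceServer/InstantMessagingService/chat/message/send";
var data = '{ "remoteUserUri" : "sip:foo.bar@mydomain.com", "message" : "This is my message" }';
var request;

if (window.XDomainRequest) {
    // IE8/9: XDomainRequest - no custom headers, no cookies or credentials
    request = new XDomainRequest();
    request.open("POST", url);
    request.onload = function () { console.log(request.responseText); };
    request.send(data);
} else {
    // Everyone else: XMLHttpRequest with CORS and credentials
    request = new XMLHttpRequest();
    request.open("POST", url, true);
    request.withCredentials = true;
    request.setRequestHeader("Content-Type", "application/json");
    request.onload = function () { console.log(request.responseText); };
    request.send(data);
}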
/me crosses fingers that Mozilla fix the Firefox bug.
UPDATE 2:
After some digging it appears that IE8's XDomainRequest cannot be used to make cross-domain requests where NTLM must be negotiated; this basically means that the security on our WCF binding can't be used, thanks to limitations in a web browser.
http://blogs.msdn.com/b/ieinternals/archive/2010/05/13/xdomainrequest-restrictions-limitations-and-workarounds.aspx
"No authentication or cookies will be sent with the request"
So, I guess we've taken this as far as it is going to go for now. It looks like we're going to have to create our own custom token authentication and pass it across to the WCF service in a cookie, or in IE8's case, POST it with the JSON. The WCF service will then have to handle decrypting the data and using that instead of the ServiceSecurityContext.Current.WindowsIdentity we previously had access to with NTLM auth.
I know you said you would rather have the problem itself addressed, but you may consider using a "reverse proxy."
I don't know what technologies you are using, but we use Apache web server and have a Java RESTful API running on a different server that required authentication. For a while, we messed with JSONP and CORS, but were not satisfied.
In the end, we set up an Apache reverse proxy and it worked miracles. The web browser believes it is communicating with its own domain and acts appropriately. The RESTful API doesn't know it is being used via a proxy. Therefore, everything just works, and Apache does all the magic.
Hopefully, all web servers have a feature like Apache's reverse proxy.
Here is some documentation on the feature: http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
All we had to do was ensure the mod_proxy module was installed, then add the following lines to our Apache config file:
ProxyPass /restapi http://restfulserver.com/restapi
ProxyPassReverse /restapi http://restfulserver.com/restapi
Then restart the web server and voila!
I am building an AJAX application to query an OData endpoint. I've been doing some testing with the Netflix OData feed and found something I don't get:
When I make an .ajax() request to a url (e.g. http://odata.netflix.com/v1/Catalog/Titles) I get the error: "Origin null is not allowed by Access-Control-Allow-Origin". However when I put the same url into my browser the request goes through and I get a response.
What is the fundamental difference here that I'm not getting? How is the browser bypassing the Same Origin Policy?
I also used JSONP for Netflix's OData. It seems to work fine for my application. I have posted the code and explanation on my blog: http://bit.ly/95HXLM
A sample fragment is below as well:
// Make JSONP call to Netflix
$.ajax({
    dataType: "jsonp",
    url: query,
    jsonpCallback: "callback",
    success: callback
});

function callback(result) {
    // unwrap result
    var movies = result.d.results;

    $("#movieTemplateContainer").empty();
    $("#movieTemplate").tmpl(movies).appendTo("#movieTemplateContainer");
}
The same origin policy applies to HTTP requests issued from within code loaded with pages from remote sites. That code is disallowed by the machine from issuing new requests for content from different domains, under the assumption that you, the user in control, were OK with fetching content from haxors.r.us, but you wouldn't want that site to issue HTTP requests to bankofamerica.com without your say-so. However, the browser should allow you, the user in control, to issue HTTP requests to anywhere. Indeed, with Humanity fading in the shadow of the Machine, I demand it. I demand it!
You can make requests to that URL from your server, and then pass along the response to your code on the client (after any sort of filtering or extraction your server code may choose to do). Alternatively, Netflix may support a JSONP API, which would allow your client-side code to issue GET requests as script fetches, with results to be interpreted as Javascript code.
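As a concrete illustration of the server-side proxy idea, here is a minimal sketch assuming a Node/Express server (Node 18+ for the built-in fetch); the route name is made up:
const express = require('express');
const app = express();

// The browser calls /netflix/titles on *your* origin; the server forwards the request
// to Netflix, so the same origin policy never applies on the client side.
app.get('/netflix/titles', async (req, res) => {
    const upstream = await fetch('http://odata.netflix.com/v1/Catalog/Titles?$format=json');
    const body = await upstream.text();
    res.type('application/json').send(body);   // hand the result back to the browser
});

app.listen(3000);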
Also it should be noted that this policy has nothing at all to do with jQuery itself. It's a basic security rule on the XMLHttpRequest mechanism.