Are the responses/payloads of previously made HTTP requests accessible programmatically via JavaScript?
I'd like to know: in the same way hackers can use XSS to access cookies/localStorage stored in the browser, can they access data from previously made HTTP requests (since the browser DevTools lists the previous requests in the Network tab)?
They are only accessible if code runs before or during the request that programmatically saves the response. For example, one could overwrite window.fetch and save (but pass through) all requests and responses, do the same for XMLHttpRequest, or save the result of a request normally inside a .then or an onload handler.
DevTools does have access to prior requests, but DevTools has access to many things that can't be done via JavaScript - this is one of them.
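The fetch-overwriting technique described above can be sketched as follows. The stubbed nativeFetch and the 'token-123' value are stand-ins so the demo runs offline; the wrapping itself is the real technique:

```javascript
// A script injected via XSS cannot read *past* responses, but if it runs
// early it can wrap fetch and capture every *future* one.
const captured = [];

// Stub standing in for the browser's real fetch (assumption of this demo).
const nativeFetch = async (url) =>
  new Response(JSON.stringify({ secret: 'token-123' }), { status: 200 });

// The attacker's wrapper: save a copy, pass the response through untouched.
globalThis.fetch = async (...args) => {
  const response = await nativeFetch(...args);
  const copy = response.clone(); // clone() so the page's body stream isn't consumed
  captured.push({ url: String(args[0]), body: await copy.text() });
  return response;
};

// Later, the page makes an ordinary request and notices nothing unusual.
async function pageCode() {
  const res = await fetch('https://api.example.com/me');
  return res.json();
}

pageCode().then((data) => {
  console.log('page sees:', data.secret);        // token-123
  console.log('attacker saw:', captured[0].body);
});
```

The same pass-through pattern works for XMLHttpRequest by wrapping its prototype's open/send methods.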
Is there a mechanism in a service worker, which I am unaware of, that can let me know which page a fetch request was fired from?
Example:
I have an HTML page uploaded to my website at /aPageUploadedByUser/onMySite/index.html, and I want to make sure none of the authorization headers get passed to any fetch request made by /aPageUploadedByUser/onMySite/index.html to any source.
If my service worker somehow knows the originating page of a request, I can modify the request for safety.
To answer your question (but with a caveat that this is not a good idea!), there are two general approaches, each with their own drawbacks:
The Referer request header is traditionally set to the URL of the web page that made the request. However, this header may not be set for all types of requests.
A FetchEvent has a clientId property, and that value can be passed to clients.get() to obtain a reference to the Client that created the request. The Client, in turn, has a url property. The caveats here are that clients.get() is asynchronous (meaning you can't use its result to determine whether or not to call event.respondWith()), and that the URL of the Client may have changed in the interval between the request being made and when you read the url property.
With those approaches outlined, the bigger problem is that using a service worker in this way is not a safe practice. The first time a user visits your origin, they won't have a service worker installed. And then on follow-up visits, if a user "hard" reloads the page with the shift key, the page won't be controlled by a service worker either. You can't rely on a service worker's fetch event handler to implement any sort of critical validation.
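A minimal sketch of the clientId approach, assuming a hypothetical policy of stripping headers from requests made by pages under /aPageUploadedByUser/. The service-worker wiring only runs in a browser, so it is shown commented out; the path check is plain code:

```javascript
// Decide whether a client URL points at user-uploaded content
// (the path prefix is this question's hypothetical directory).
function isUserContentPage(clientUrl) {
  return new URL(clientUrl).pathname.startsWith('/aPageUploadedByUser/');
}

// Inside the service worker (browser-only, so commented out here):
// self.addEventListener('fetch', (event) => {
//   event.respondWith((async () => {
//     // clients.get() is asynchronous: by the time it resolves, we have
//     // already committed to respondWith(), per the caveat above.
//     const client = await self.clients.get(event.clientId);
//     if (client && isUserContentPage(client.url)) {
//       // Re-issue the request without its headers (hypothetical policy).
//       return fetch(new Request(event.request.url, { method: event.request.method }));
//     }
//     return fetch(event.request);
//   })());
// });

console.log(isUserContentPage('https://example.com/aPageUploadedByUser/onMySite/index.html')); // true
console.log(isUserContentPage('https://example.com/trusted/app.html')); // false
```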
I have an HTML page that is meant to be used in an iframe; this page makes an Ajax request based on a received message (using postMessage from the parent). I can see this request in Firefox's Network Monitor even though it happens in the iframe. I assume all browsers have similar capabilities.
The request contains sensitive information that shouldn't be logged/saved. Is there any way to prevent this request from ever being seen (by the browser or through any other method)?
An API key has to be entered as a GET parameter, and these are easily seen in the URL. I could use Ajax to call a page, which in turn uses jQuery.get() to fetch what I need, but could someone with more knowledge of a browser's inspector find their way to the JS variables of a page called via Ajax?
If yes, how do I protect the API key? The tutorials only explain how to use the API itself, as if the rest were common knowledge.
Anything in the browser space is essentially public.
So either you restrict the info available with the key to information you don't mind being seen, or you put an intermediary in the process - i.e. a server that does the lookups on the protected routes and passes back only public info.
Some notes on GET requests:
GET requests can be cached.
GET requests remain in the browser history.
GET requests can be bookmarked.
GET requests should never be used when dealing with sensitive data.
GET requests have length restrictions.
GET requests should be used only to retrieve data.
Source: W3Schools
If I'm loading arbitrary external JavaScript code in a browser setting, is it possible to ensure it can't make the browser run any Ajax calls or network requests?
Can you prevent any resource calls? - No. (haven't explored the 'extension' route though)
Since even an <img src='any valid url'> creates a resource request which your code cannot prevent.
Can you prevent ajax calls? - Yes, to an extent.
Assuming you want to ensure that third-party libraries can't make arbitrary cross-domain Ajax calls, the browser's same-origin policy already blocks reading cross-origin responses unless the target server opts in by sending CORS headers - so make sure your own web server doesn't send permissive CORS headers either.
Your own application code can make Ajax calls, since they go to your own domain. However, you can filter those calls on the server to check for specific properties like purpose, credentials, etc.
It may be worth exploring Google Caja (I haven't tried that myself).
Is it possible to obtain data from a user's OpenID (for example, a Google one such as https://www.google.com/accounts/o8/id) via pure JS calls (not using the server side at all)?
If you were able to send XHR requests to other domains, it would be theoretically possible.
However, since browsers generally enforce same-origin policy, it's not. Also, if you do manage to send a request to another domain, you'd need to be able to parse both the returned content, and response headers (especially the Location and X-XRDS-Location).
However, it's pretty much pointless to try to implement OpenID in JavaScript, unless you are sure that your users don't have access to a debugger. If they do, they can modify the value of any variable, including the one where you store the user's identity, effectively making the system insecure.