$http.get('/someUrl')
  .success(function(data, status, headers, config) {
    // this callback will be called asynchronously
    // when the response is available
  });
In the above code, I want to hide the header information. Using the browser's inspect tools, anyone can view the request headers.
It's not possible to hide requests.
A couple more additions (not sure why this was downvoted, but I'll attempt to be more clear).
JavaScript is client-side, which means it gets run on the client's machine. The end user will be able to inspect the code and see what it's attempting to do. Even if the code is obfuscated, it can still be deobfuscated or read, and they will be able to see how it's making the request (and thus infer what request headers would be sent).
What if the client is using a custom browser? For example, one they wrote themselves? You make a request to your server, and it's going to respond with your JavaScript/HTML. Then their JavaScript parser is going to attempt to make the request, and they can capture whatever request it attempts to make.
In Chrome, web requests are logged for the convenience of the developer. This is not something that can be disabled or turned off, because, as I stated in the previous paragraph, doing so would only make the request appear hidden, and wouldn't be effective at stopping anyone who knows anything about the HTTP protocol.
And finally, even if this were possible, someone could monitor the traffic between the client and the server and inspect the network traffic to see what was sent and received (HTTP is a fully documented RFC protocol for interchanging web requests, and is essentially a giant string of headers, easily viewable). In short, it's not possible.
In my Angular application, if I go to the dev tools network tab I can see the requests to and responses from the back end.
Does anyone know how to hide or mask this data? Is this possible if I use server-side rendering?
Requests will be shown.
This cannot be stopped. The application is making requests, and these will be logged to the network tab by the browser. As mentioned in the comments, if there are security concerns you should be handling this a different way: do not send data to the client that they should not be allowed to access in the first place.
To help ensure security, run over HTTPS on the off chance the data gets intercepted; that way it will not be usable data. Most data, as mentioned in the comments, will be provided by the user, meaning it should not need to be hidden within the network tab.
Worst case scenario, someone physically sits at the user's computer and reads what is in the network tab, but that is a scenario that can't be accounted for when developing applications. You could base64 encode the data being sent back and forth so it is less readable to anyone who happens to see the network tab (see the sketch below). Here are some resources to have a look through related to the question.
HTTPS summarised // base64 encode // Angular's security section
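A minimal sketch of that base64 idea, assuming Angular's $http and a made-up /api/save endpoint (and remember: this is obfuscation, not security):

// Encode the payload before sending so it is less readable in the
// network tab. This is NOT encryption; anyone can decode base64.
var payload = JSON.stringify({ user: 'alice', score: 42 });
var encoded = btoa(payload); // btoa only handles Latin-1 input

$http.post('/api/save', encoded); // '/api/save' is hypothetical

// Decoding it back is just as easy:
var decoded = JSON.parse(atob(encoded));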
I'm trying to wrap my head around how to really secure AJAX calls of any kind that are publicly available.
Let's say the JavaScript on a public page (so no user authentication of any kind) contains an AJAX call to a PHP script (a REST API or just a script, it doesn't matter) that does a lot of heavy lifting. Any user can look into the source code, find the AJAX call, rebuild it, and execute it a million times a second to DDoS your site - not so great. At first I thought an HTTP_REFERER check could be helpful, but like any header field it can be manipulated (just use a curl request), so the security gain wouldn't be high.
The next approach was a combination of session IDs, cookies, etc. to build some kind of access key for every page viewer; when someone exceeds the limit, the AJAX call returns an error. Sounds great so far, but just by clearing the cookies everything is reset. So no real solution either. But of course - use the IP! Great idea! Users on public networks that share one IP for internet access will be totally happy when one miscreant blocks the service for all of them by abusing the call... not. So, also no great solution.
So, I’m really stuck here and can’t think of any great answer for my problem.
I also thought about API keys or something similar, but that information is also extractable from the JavaScript source. So how do you prevent other servers from using your service in a proxy-like manner, serving your data to their users? (E.g. you implemented the GMaps API on your website (or any other API) and someone uses your script to access the API with your key.)
tl;dr
Is there any good way to really secure your publicly viewable AJAX calls against abuse (DDoSing your site, presenting your data on other sites, etc.)?
I think you're overthinking what AJAX is. When your site makes an AJAX request, server side it's the same as any other page request (even if some scripts are more process-intensive). You need to protect your entire site, not just specific scripts. If your server does not have any DDoS protection, it can be attacked through any page. Look into services like Cloudflare.
As @Sage mentioned, it is similar to a normal HTTP request. You can use normal authentication, as the HTTP headers/cookie information will be passed to the server every time you make an AJAX call. For a clear view you can look into the developer console in the browser. It's the same as exposing your website's root URL. Just make sure you have authentication checks for AJAX calls too.
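For instance, here's a minimal Express-style sketch (the route and session fields are made up, and it assumes session middleware such as express-session is already configured) showing the AJAX endpoint getting the same authentication check as any ordinary page:

var express = require('express');
var app = express();

// Reusable guard: reject the request unless the session is logged in.
function requireLogin(req, res, next) {
  if (req.session && req.session.userId) { // userId is an assumed field
    return next(); // authenticated, carry on
  }
  res.status(401).json({ error: 'authentication required' });
}

// The "heavy lifting" AJAX endpoint is protected like any other page.
app.get('/api/heavy-lifting', requireLogin, function (req, res) {
  res.json({ result: 'expensive data' });
});

app.listen(3000);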
I am looking for a tool that would run in a browser (any browser will do) and show me where each HTTP request originated (HTML source file and line, JavaScript, or whatever else).
A bit of background. There is a third-party web application that can be accessed either directly or through a content-modifying proxy. In the former case it works; in the latter it doesn't. My task is to figure out why the proxy breaks the app and fix whatever the problem is (normally the proxy should only make modifications that do not affect functionality).
I have narrowed it down to a single HTTP request. When accessed directly, the browser issues a GET to one particular address, say http://example.com/foobar.html. When accessed through the proxy, there's no such request. This foobar.html contains an important part of the application, so it won't function without it. Supposedly the proxy breaks some code that ought to issue this request. The problem is that I cannot find this code, and so cannot figure out what exactly is broken. There's nothing that looks remotely like foobar in the entire application.
The application in question is a jumble of obfuscated JavaScript that generates other JavaScript and/or HTML that may contain more JavaScript, etc. Somewhere along the road it probably generates, piece by piece, some iframe src=... or similar via document.write, and this chunk of HTML references the needed http://example.com/foobar.html.
So what I need is the ability to tell the browser: "See this address, http://example.com/foobar.html? Whenever there is a request to this address, stop and show me what you are doing!" Hopefully this will let me narrow down my search somewhat.
I couldn't find such functionality in Firebug or Venkman. Am I missing something? Is there any other tool that would let me do this?
I see the Referer header of the request in question, but the referring file is very large and obfuscated. So far I have not been able to make anything meaningful out of it.
Firebug, then the Network tab...
Fiddler - it allows you to view and search HTTP sessions. Be sure to decode the sessions when searching.
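You could also instrument the page itself. A minimal sketch (hypothetical, for injection before the app's own scripts via the proxy or a user script): hook XMLHttpRequest and document.write so the debugger pauses, with a full call stack, whenever the suspect URL appears.

(function () {
  // Pause in the debugger whenever an XHR targets the suspect URL.
  var realOpen = XMLHttpRequest.prototype.open;
  XMLHttpRequest.prototype.open = function (method, url) {
    if (url && String(url).indexOf('foobar.html') !== -1) {
      debugger; // inspect the call stack here
    }
    return realOpen.apply(this, arguments);
  };

  // The iframe is probably built via document.write, so hook that too.
  var realWrite = document.write;
  document.write = function (html) {
    if (html && String(html).indexOf('foobar.html') !== -1) {
      debugger; // something just wrote markup referencing foobar.html
    }
    return realWrite.apply(document, arguments);
  };
})();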
I use jQuery for AJAX. My question is simple - why turn AJAX caching off? At work and in every tutorial I read, they always say to set caching to false. What happens if you don't - will the server "store" such requests and get "clogged up"? I can find no good answer anywhere - just links telling you how to set caching to false!
It's not that the server stores requests (though they may do some caching, especially higher volume sites, like SO does for anonymous users).
The issue is that the browser will store the response it gets if instructed to (or in IE's case, even when it's not instructed to). Basically, you set cache: false if you don't want the user's browser to show stale data it fetched, say, X minutes ago.
If it helps, look at what cache: false does: it appends something like _=190237921749817243 as a query-string pair. The value is the current timestamp, so it's always... current. This forces the browser to make the request to the server again, since it doesn't know what that query string means - it may be a different page, and since it can't know or be sure, it has to fetch again.
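For example (the endpoint is made up):

// cache: false makes jQuery append a _={timestamp} query parameter,
// so the browser treats every request as new and never reuses a
// cached response.
$.ajax({
  url: '/api/data', // hypothetical endpoint
  cache: false,     // request becomes e.g. /api/data?_=1700000000000
  success: function (data) {
    console.log(data);
  }
});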
The server won't cache the requests, the browser will. Remember that browsers are built to display pages quickly, so they have a cache that maps URLs to the results last returned by those URLs. Ajax requests are URLs returning results, so they could also be cached.
But usually, Ajax requests are meant to do something; you never want to skip them, even if they look like the same URL as a previous request.
If the browser cached Ajax requests, you'd have stale responses, and server actions being skipped.
If you don't turn it off, you'll have issues trying to figure out why your AJAX works but your functions aren't responding as you'd like them to. Forced revalidation at the header level is probably the best way to get a cache-free flow of the data being AJAX'd in.
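As a sketch of that header-level approach, assuming a Node/Express back end purely for illustration:

var express = require('express');
var app = express();

// Force the browser to revalidate with the server before reusing any
// cached copy of this response.
app.get('/api/data', function (req, res) {
  res.set('Cache-Control', 'no-cache, must-revalidate');
  res.json({ value: 42 });
});

app.listen(3000);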
Here's a hypothetical scenario. Say you want the user to be able to click any word on your page and see a tooltip with the definition for that word. The definition is not going to change, so it's fine to cache it.
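A sketch of that scenario (endpoint and markup invented): since definitions never change, letting the browser cache the lookup is fine.

// jQuery's cache option defaults to true for normal dataTypes, so
// repeated lookups of the same word come straight from the cache.
$('p').on('click', '.word', function () {
  $.ajax({
    url: '/define',                 // made-up endpoint
    data: { word: $(this).text() }, // sent as ?word=...
    success: function (definition) {
      // show the tooltip with the definition here
    }
  });
});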
The main problem with caching requests in any kind of dynamic environment is that you'll get stale data back some of the time. And it can be unpredictable when you'll get a 'fresh' pull vs. a cached pull.
If you're pulling static content via AJAX, you could maybe leave caching on, but how sure are you that you'll never want to change that fetched content?
The problem is, as always, Internet Explorer. IE will usually cache the whole request, so if you are repeatedly firing the same AJAX request, IE will only send it once and always show the first result (even though subsequent requests could return different results).
The browser caches the information, not the server. The point of using Ajax is usually that you're going to be getting information that changes. If there's a part of a website you know isn't going to change, you don't fetch it more than once (in which case caching is OK) - that's the beauty of Ajax. But since you're usually dealing with information that may be changing, you want the new information, and therefore you don't want the browser to cache.
For example, Gmail uses Ajax. If caching were simply left on, you wouldn't see your new e-mail for quite a while, which would be bad.
I apologize that there is a similar question already but I'd like to ask it more broadly.
Is there any way at all to determine on the client side of a web application if requesting a resource will return a 401 status code and cause the browser to display an ugly authentication dialog?
Or, is there any way at all to load an MP3 audio resource in Flash that fails invisibly in the case of a 401 status code, rather than letting the browser show an ugly dialog?
The Adobe AIR runtime will suppress the authentication dialog if I set the "authenticate" property of the URLRequest object, but this property is not available in the Flash runtime. Any solution which works on the client will do. An XMLHttpRequest is not likely to work, as the resources in question will be on different domains.
It is important to fail invisibly because the application will have a list of many audio resources to try, and it makes no sense to bother the user to authenticate for one when many others are available. It is important that the solution work on the client because the MP3s in question come from various servers outside my control.
I'm having the same problem with the Twitter API - any protected user requires the client to authenticate.
The only solution I could come up with was to load the pages server-side and return a list of the URLs with their HTTP response codes.
"Is there any way at all to determine on the client side of a web application if requesting a resource will return a 401 status code and cause the browser to display an ugly authentication dialog?"
No, not in general. The 401 response is the only standard way for the server to indicate that authentication is necessary.
Just wrap your access to the resource that might require authentication in an Ajax call. You can catch the response code and use JavaScript to do whatever you want (i.e. play that sound). If the response code is fine, use JavaScript to forward the user to the resource.
Most likely this approach will generate slightly more load on the server (you might have to load the same resource several times in some circumstances), but it should work. Any good tutorial on how to use XMLHttpRequest should contain all you need. Take a look at, for instance, http://www.xul.fr/en-xml-ajax.html
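A minimal sketch of that idea (it only works where the same-origin policy or CORS allows the request, which is exactly the limitation mentioned in the question):

// Probe the status code with a HEAD request before handing the URL
// to the player, so a 401 can be skipped instead of surfacing the
// browser's authentication dialog.
function probe(url, onOk, onAuthRequired) {
  var xhr = new XMLHttpRequest();
  xhr.open('HEAD', url, true); // HEAD: we only want the status code
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      if (xhr.status === 401) {
        onAuthRequired(url); // skip this track quietly
      } else if (xhr.status === 200) {
        onOk(url); // safe to hand to the player
      }
    }
  };
  xhr.send(null);
}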
If you are using URLRequest to get the files, then you are running into more than just elegant error handling; you are running into a fundamental difference between the Flash and AIR runtimes.
When using the URLRequest object to retrieve files, you are going to get a security error from Flash on every request to every server that has not set a policy file allowing these sorts of requests. AIR allows these requests since it basically IS the client. This makes sense, since it's the difference between installing an application and visiting a web page.
I hate to provide a non-answer, but if you can't make a server-side call and you are hitting a range of unknown servers, it's going to be a tough row to hoe.
But maybe I misunderstand - are you just trying to link to the files and prevent the user from getting bad links, or are you trying to actually load the files?