I was wondering if there is a way to get a JSON file (currently using $.getJSON) without the GET URL showing up in 'view source'.
Not really. I mean, you could derive it from something else via a complex function, but even then, it's there, just obscured.
And if you did, someone could just as easily snoop the file path using the browser's built-in diagnostic/debugging tools as they could from View Source (the "Network" tab in Chrome, for instance — all major modern browsers have debugging tools built in now). Here's me snooping on the path Stack Overflow uses to give details of upvotes/downvotes (for those with enough rep to see the breakdown):
Or they could use the debugger (see the "Scripts" tab) to inspect the string variable the calculation ended up with. Etc. Basically, if the browser knows enough to be able to retrieve the resource, the user can find out what that path was.
The only thing I can think of is to use a plug-in, like Flash or Java, to retrieve the resource and then display it. That would raise the bar a little (the path would still be accessible to anyone with a network analyzer or proxy).
Nope. Security through obscurity never works.
You could make it harder to reuse by adding a referer check on the server side, though.
Or by using a "signature" token, etc. (if you want to prevent people from using it as a web service).
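For illustration, here's a minimal sketch of both ideas on the server side using Node/Express (the endpoint, origin and secret below are made up for the example, not taken from the original question):

var express = require('express');
var crypto = require('crypto');
var app = express();

var SECRET = 'not-really-secret'; // example value only

app.get('/data.json', function (req, res) {
  // Referer check: only serve the JSON when the request came from our own pages.
  var referer = req.get('Referer') || '';
  if (referer.indexOf('https://example.com/') !== 0) {
    return res.status(403).end();
  }

  // "Signature" token check: the page must send an HMAC derived from a shared secret.
  var expected = crypto.createHmac('sha256', SECRET)
                       .update(String(req.query.user || ''))
                       .digest('hex');
  if (req.query.sig !== expected) {
    return res.status(403).end();
  }

  res.json({ ok: true });
});

app.listen(3000);

Neither check stops a determined user (the referer can be forged and the token-generating code ships to the client), but they do raise the bar against casual reuse.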
I want to make right-click not work on my website, or show an error that says: Protected Content! The reason I want to do this is that I don't want others to see my source code. I know that you can make right-click not work, but I am not sure about F12. If there is no way to make the F12 key not work, is there any way to hide the source code from others? I saw a similar website today. If you right-click on that website you get this:
F12 works on that website but the source code is hidden anyway. How can I achieve similar results? Thanks for your time :)
Answering the question overly honestly:
First you must avoid publishing the site on the Internet. Make it available only on private machine(s) you have total control of. Make sure there are no USB ports exposed to users, etc. Also, no internet access of any kind: users might just download some hacker tools that way. If you do not need text input, even better, since a keyboard can be used to type in hacker tools as source code and steal your precious sources that way.
Next, make a custom build of a browser. You may want to use tools like Electron instead of generic browsers; this way you will end up with an app that runs only your website and has no developer tools, no address bar, nor anything else that could be used to gain access to your precious source.
Install Linux, create a new user account with minimal privileges (no write access anywhere) and let it use X without any window manager: only your Electron app with your precious website, and no menus that could be used to open hacker tools like a text editor that might reveal your precious source code. Also, configure the account with a complex random password so that users do not start another session in text mode and see your source code.
Remember that hackers may use means like timing attacks, side channels or other hacky ways of stealing your code. To prevent that, cover the walls of the room you store your computer in with a metal grid to make a Faraday cage. Check all people entering and deny them bringing any electronic devices with them. Same for analog photo cameras or paper notebooks. Better safe than sorry: they might reconstruct your site's source code based on how it looks.
Or just accept the hard truth: nobody cares about your website's source code. There are plenty of places to copy-paste code from, and your website is not the most interesting one. And if you do this to prevent hackers, you have to write secure code (and test/audit it), not hide it.
Short answer: Browsers, which render your website, are a client-side technology, and there is no way you can control who is going to see or not see your source code.
Long(er) answer:
Browsers download your website, together with its source code, onto the user's computer. That means users can manipulate it however they see fit. There are some scripts that can ban right-click or other types of interaction, but if you try to stop developers from inspecting the code (and if they are inspecting, it's a good bet they are developers), they will find a way even if you block F12 or right-click. You can always download the website, use a crawler, open it in Notepad, etc.
You may want to investigate minifying and/or uglifying the HTML code, but it's not cryptography - again, if someone wants to, they will find a way to undo it.
Also, I'm curious, why would you want to do that?
You can do this using window events, but there are still ways to read your code.
For example, fetching the JS without executing it, or disabling JS in the browser for a moment.
window.addEventListener('keydown', e => {
  if (e.key === 'F12') // detect F12
    e.preventDefault()
})
window.addEventListener('contextmenu', e => e.preventDefault())
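As noted above, anyone can still read the script without ever running those handlers, for example by fetching it directly from the site's own console or with any HTTP client. A minimal sketch (the path is just a placeholder, not from the original post):

// Read the script's source without executing it; the blocking handlers never run.
fetch('/app.js') // placeholder path on the same site
  .then(response => response.text())
  .then(source => console.log(source));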
Essentially I want to store a variable in the client that I don't want people viewing or changing.
In the following code example:
(function () {
  var foo = 'bar';
})();
Can anybody use tools or the browser to access and/or (more importantly) change the value of foo? Links to more information or tools that might do this would be appreciated. I'll be researching more in the meantime.
Thanks in advance
Yes, they can modify the value of foo. As a general rule, if you don't want the client to manipulate a value, don't give them access to it (i.e. don't put user IDs or that type of information in the DOM or on the client side). You may have to do a bit of research on state management: encrypted cookies, sessions, or, if you're using ASP.NET, ViewState/ViewBag, etc.
It is possible to inject javascript into any page, and from there you can manipulate every javascript object/variable on the page. Therefore any data that the javascript is receiving should be encrypted (if security is your concern).
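As a concrete (hypothetical) illustration, here is the kind of thing anyone can paste into the browser console to observe or alter what the page's scripts do:

// Wrap XMLHttpRequest so every request the page makes can be observed or altered.
var originalSend = XMLHttpRequest.prototype.send;
XMLHttpRequest.prototype.send = function (body) {
  console.log('outgoing request body:', body); // read whatever the page sends
  return originalSend.call(this, body);        // then let the request proceed normally
};

The same applies to any variable or function the page defines: once it runs in the user's browser, it can be read and replaced.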
To give you a little hint, try opening your developer tools. In Chrome, that's Ctrl+Shift+I. Click the Scripts tab, and you will see all the variables the scripts are using. It is possible to double-click anywhere within a script and add/remove pieces of code.
FYI, if you are using Firefox, I highly recommend Firebug. It surpasses Chrome's dev tools, but I find Chrome faster... at least on my slow laptop (Ubuntu FTW).
Hope this helped
This question is extremely similar to the fully-updated version of the question asked here: How to call a JavaScript function from one frame to another in Chrome/Webkit with file protocol — unfortunately, that question was never actually answered.
I have an HTML page that contains an SVG image in an iframe. The SVG exports a JavaScript API that allows it to do useful things (reset to zoomed and centered, display at "actual size"). Below the iframe, I've put buttons that the user can click on that call through to the functions defined in the SVG.
My code looks like this:
function reset() {
  document.getElementById('iframe').contentWindow.reset();
}
It works perfectly in Safari, Firefox, and even IE 9 (which supports SVGs - hooray!). But on Chrome, it fails: the debugger informs me that:
Property 'reset' of object [object DOMWindow] is not a function.
And indeed, there does seem to be truth to that: even though 'contentWindow' is of type DOMWindow, it has no methods or fields (at least, not that the debugger will show me). Even asking for its 'document' field fails (yields null).
The rub appears to be the use of the file:// protocol to transfer both the containing HTML and the contained SVG. As noted in the question I referenced above, Chrome produces the following error when the attempt to access 'contentWindow' is made:
Attempt to access frame with URL file://[...]/contained.svg from frame with URL file://[...]/container.html. Domains, protocols and ports must match.
In general, I think security is great; this looks like a security-inspired restriction. But here, it seems to have gone too far: these are files on the user's filesystem, after all, and in my case, are even in the same directory.
Hosting the code is not an option - it must reside on the user's machine. I'd hate to have to tell people "just don't use Chrome - it has silly notions of security."
Is there no way to work around this restriction?
Of course there is no way :) file:// URLs are meant to be opened explicitly by the user. There is absolutely no way for a web application to allow that on its own, as you have seen.
The only way to do it is if you, as the user, allow it to happen. If so, you can enable it by adding the following command-line parameter:
// By default, file:// URIs cannot read other file:// URIs. This is an
// override for developers who need the old behavior for testing.
--allow-file-access-from-files
So open up Chrome with chrome.exe --allow-file-access-from-files. This flag is intended for development use.
Thanks to the information offered by @Mohamed Mansour, I was able to find more details on this issue.
The rationale for Chrome's behavior is to prevent a maliciously-crafted page from, through JavaScript and internal frames, accessing the contents of your file system without your knowledge and uploading data to the Internet [Chromium bug 4197, Chromium bug 47416].
It is unfortunate, from my point of view, that the Chromium team chose to take things as far as they did. Gecko is a bit more subtle in whacking this mole: it limits cross-page scripting under the file protocol to files in the same directory and its subdirectories [Mozilla bug 230606, Same-origin policy for file protocol]. The result is much less surprising for users and developers and has generated much, much less angst than has arisen over Chrome's behavior; read Chromium bug 47416 in particular to see what I mean.
Because of this behavior, I've had to modify my "website" — which cannot be hosted on the Internet and must reside on local-users' machines — so that it throws up a dialog box telling users to switch browsers. It's really too bad — I'd like to support Chrome but just can't expect my users to relaunch it with an obscure command-line option whenever they want to run my "website."
I'm posting my findings here in case anyone else stumbles across my question when Chrome seems to mysteriously not work for them and also to encourage anyone who reads this to consider starring Chromium bug 47416. The developers have made it painfully clear that they are unwilling to consider changing Chrome's behavior unless it's clear people really care about the issue. Being told "I've had to tell users not to use Chrome" hasn't been enough encouragement.
I am trying to reverse-engineer a website I don't own, to figure out how some dumb "encryption" works, so that I can carry out some operations automatically by taking the functionality outside the browser.
One of the files is of particular interest, let's call it javascript.js. It is linked in the HTML document like this
<script src="/javascript.js" type="text/javascript"></script>
I have deobfuscated javascript.js and pretty-printed its code.
My question now is, considering that I'm using Venkman and Firefox: how do I replace the on-site obfuscated javascript.js with my own pretty-printed version, in order to learn how it works?
Any other tool besides Venkman would do, as long as I can still step through the deobfuscated code.
Additional question (just in case I may come cross this related situation):
How would I do the same if javascript.js were embedded inline in the HTML code, like <script>code</script>?
For those of you wondering about how legal this is, my question is not the first about reverse-engineering on SO: https://stackoverflow.com/questions/tagged/reverse-engineering
Apparently there's no problem with those questions, so why should there be one with mine?
My objective is to understand the code AND my question is about the TOOLS, as in "where to point and click" or which tool could help me (if venkman cannot).
You could also always use an intercepting proxy (something like Paros), which will allow you to replace any part of the response any way you like. So when the browser requests the JS file, you can catch the response in Paros, replace the content with your version, and you're done. I often use Paros for other things where I need that interception or observation point; it's pretty simple and has many possible applications. It's basically just a matter of running it and setting your browser proxy settings to use a proxy at localhost on the port Paros is listening on. You can then tell Paros to stop and let you edit the request or response just by checking a couple of boxes. Hope that helps.
This is going to be very difficult, if not impossible, to do without using browser debugging / extension features like GreaseMonkey or Chrome's Extension API. The reason being that if you don't get involved in the page load sequence, the obfuscated code will already have been run, setting up JavaScript objects, event handlers, etc., etc. You'd have to ensure that your new script replaced those objects and event handlers, which would be complicated and difficult.
With GreaseMonkey or Chrome Extensions or similar on whatever browser you're using, I'd expect it to be possible to detect the page loading script X and replace it with your local script Y. These things run at that level; they get involved in the process.
But despite your goals being aboveboard, debugging on someone else's site is a bad idea. If you introduce a bug through the deobfuscation process, or in the process of trying to understand the code, well that may at least waste time at the other end. I wouldn't be happy with people trying to do it on a site I was running. (That said, a site should be able to handle anything a client throws at it, because you can't trust anything coming from the client side.)
Instead of debugging on their site, I'd probably do my best to record (via Firebug or Chrome/Safari's Dev Tools, etc.) a sample ajax interaction, and then set up a dummy page on my own local server that would simply echo that interaction, playback style. Then you can experiment to your heart's content without risking throwing weird stuff at the site in question. I'd consider it unethical for me to play around in that way with someone else's site, whether they should be able to handle it or not.
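If it helps, here's a minimal sketch of that kind of local playback server using Node's built-in http module (the port and the canned payload are placeholders, not taken from any real site):

// save as playback.js and run with: node playback.js
var http = require('http');

// Canned response recorded earlier from the real ajax interaction (placeholder data).
var cannedResponse = JSON.stringify({ status: 'ok', items: [] });

http.createServer(function (req, res) {
  // Echo the same recorded payload for every request, playback style.
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(cannedResponse);
}).listen(8080, function () {
  console.log('playback server listening on http://localhost:8080/');
});

Point the deobfuscated code at this endpoint and you can step through it as much as you like without throwing anything at the real site.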
Way 1:
Export the web page that uses the code to your drive (I know for sure Opera, Firefox and Chrome support this - Ctrl+S - make sure to save all content). They download all linked content (CSS, scripts, images) and fix the URLs so the downloaded copies are loaded instead. Then replace the javascript file you want to debug, open the downloaded HTML in a browser, say Firefox with Firebug, and start debugging. This should work unless the page is heavily ajaxified.
Way 2:
I've managed to get this working in Google Chrome (v8.0.552.215 - I need to update BTW) on a page that has no jQuery (for example w3c.org). Try it yourself: just copy-paste it into the address bar and wait for the page to disappear :)
javascript:(eval("var script=document.createElement('script');script.src='http://code.jquery.com/jquery-1.4.4.min.js'; document.getElementsByTagName('head')[0].appendChild(script);window.setTimeout(\"$('body').fadeOut(5000);\", 2000)"));
The script shows up in the scripts section of the console (CTRL+SHIFT+J) and you can set breakpoints. So something like this should work (feel free to modify):
javascript:(eval("for (var allsuspects=document.getElementsByTagName('script'), i=allsuspects.length, oldfile=prompt('Remove script src:'); oldfile && i>=0; i--) if (allsuspects[i] && allsuspects[i].getAttribute('src')!=null && allsuspects[i].getAttribute('src').indexOf(oldfile)!=-1) allsuspects[i].parentNode.removeChild(allsuspects[i]);var script=document.createElement('script');script.src = prompt('Inject script src:');document.getElementsByTagName('head')[0].appendChild(script);"));
The script expanded and explained:
for (var allsuspects=document.getElementsByTagName('script'), i=allsuspects.length, oldfile=prompt('Remove script src:'); oldfile && i>=0; i--)
    if (allsuspects[i] && allsuspects[i].getAttribute('src')!=null && allsuspects[i].getAttribute('src').indexOf(oldfile)!=-1)
        allsuspects[i].parentNode.removeChild(allsuspects[i]); // remove old script
var script=document.createElement('script'); // inject new script
script.src = prompt('Inject script src:');
document.getElementsByTagName('head')[0].appendChild(script);
The script works only in Chrome (maybe in Safari too?). I've tried Firefox, IE and Opera, but none of them worked. I would guess that there might also be an issue if the file is not available online (i.e. if you use the 'file://' protocol).
UPDATE: also works in Chrome v8.0.552.224
I'm looking for a browser extension (Firefox, Chrome) that allows replacing a JavaScript file on a live web site, to do some testing/hacking.
Basically, it should take a URL and load another one instead (locally or on an HTTP development server).
Any idea?
Try http://www.fiddler2.com/fiddler2/version.asp
It does that and much more. But it's not a browser extension.
I think this is a task for a personal proxy. You can sniff traffic on the proxy and apply rules to modify requests/content.
The Opera browser has similar functionality:
View source code of the page (Ctrl+U).
Make some changes. Or paste and replace the entire file.
Press Apply Changes in the toolbar (Ctrl+R).
For editing linked resources (such as javascript or CSS files), use the following approach:
Open the linked resource in a new tab.
View "source code" of the resource (Ctrl+U).
Make some changes.
Press Apply Changes in the toolbar (Ctrl+R).
Return to the tab with the webpage and reload (Ctrl+R).
Alternatives:
Using Chrome you can change code on the fly (Developer Tools -> Sources tab) and just save it (Cmd+S).
Use the LiveReload app, which actually attaches an extension (that kind of does what you want): http://livereload.com/
This may not be the "exact" answer to your question, yet I'm almost sure one of those will do what you want.
Not sure if this helps or not, but I just encountered a Chrome plugin called Resource Override, which sounds like it does something similar. I'm trying out Fiddler, which someone else mentioned, but I think I'm also going to try this one at some point. https://chrome.google.com/webstore/detail/resource-override/pkoacgokdfckfpndoffpifphamojphii?hl=en
You can intercept and block requests in browsers. For example, in Chrome you can use the beforeload event, check whether it's a JS file (event.target is a script tag, or event.url ends in .js), call event.preventDefault(), and then load your own script instead.
I'm pretty sure there's a similar way to do this in FF.
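A rough sketch of that idea, assuming it runs early enough (e.g. injected at document start by an extension); note that beforeload was a WebKit-only event and has since been removed, so treat this as illustrative only:

// Capture-phase listener sees subresource loads before they happen (WebKit only).
document.addEventListener('beforeload', function (event) {
  var isScript = (event.target && event.target.tagName === 'SCRIPT') ||
                 /\.js$/.test(event.url || '');
  // 'javascript.js' and the localhost URL below are just example values.
  if (isScript && event.url && event.url.indexOf('javascript.js') !== -1) {
    event.preventDefault(); // block the original script
    var replacement = document.createElement('script');
    replacement.src = 'http://localhost:8080/javascript.js';
    document.head.appendChild(replacement);
  }
}, true);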
There is HTTPS Everywhere, which lets you define rules for URL rewrites. This should work on all requests, including script requests.
Tamper Data might do the job, but I don't know how automated/permanent you can make it.
And there is also an extension called Redirector. I didn't test that one; it may only work on the address bar.
Update:
That is unfortunate. In that case a proxy is probably your only way. What about a Firefox extension that is a proxy, like FoxyProxy?
ColBeseder correctly brings up Fiddler (http://www.fiddler2.com/fiddler2/version.asp) as a solution to your issue.
Fiddler is perfectly capable of handling and decrypting HTTPS traffic as well - see the documentation on the page for how to configure it.
To directly answer the OP question, you can use the autoresponder feature in Fiddler to hack your production JS for testing.
Enable the AutoResponder tab in Fiddler (making sure to leave "pass through for unmatched requests" checked), and enter the URL of the JS file you want to substitute as the pattern. Select the response file from your local filesystem, and go to town!
See http://yuiblog.com/blog/2008/06/27/fiddler/ (bottom of article is most relevant) for an example.
You should probably consider RoboHydra, since it was developed specifically for cases like yours. It does not support HTTPS yet, but the authors are open to including it in the future.
Disclaimer: I'm the author of the software :-)
A different approach that might suit your use case better is to use a RoboHydra-based development proxy. The idea here would be that you keep ALL the JavaScript files on your machine and use another server simply as a backend. It's great for, e.g., front-end developers who don't want the whole backend installed on their machines.
You can see the documentation, tutorials and such at http://robohydra.org/, and have an article describing exactly that usecase at http://dev.opera.com/articles/view/robohydra-a-new-testing-tool-for-client-server-interactions/.
However, as of now it can't proxy to HTTPS URLs; that should be a trivial change that I intend to make soon anyway.
How about Greasemonkey?
That should be the thing you're searching for!