Consistently on Stack Overflow, the question of how to redirect a user after input, or after values have been altered, is raised, and in PHP the standard answer is to modify the headers to achieve this.
However, in PHP, headers must be sent before any HTML or text has been output.
So regularly I post my own personal solution, which is as follows:
"End the PHP tag when you would like to perform the redirect and write the following:
<script>
location.replace("Whatever.php");
</script>
And then finally pick up your PHP tag once more to complete your code."
Every time I have posted that, the answer has been downvoted to oblivion almost immediately. However, to my knowledge, it works in its entirety.
So my question is, why is this downvoted so heavily? What is it about using this solution that is so frowned upon?
Example of Full Usage (excluding escape functions):
if(isset($_GET['role'])){ // If value set in GET
    $role = $_GET['role'];
    if($role == "Admin"){ ?> <!-- Drop PHP tag -->
        <script>location.replace("AdminPage.php");</script> <!-- Solution -->
    <?php } else { ?>
        <script>location.replace("StandardPage.php");</script> <!-- Solution -->
<?php } } ?> <!-- Pick up the tag a final time and close the isset check -->
Just because it works doesn't mean you shouldn't prefer the more fitting way to do it. So why is a header redirect preferable, you might ask?
JavaScript can be disabled, and if you redirect because authorization is needed (it looks like you're redirecting based on that in your example), someone could simply disable JavaScript and see the "secret stuff" on that page, maybe even trigger an action via a link (deletion of data), if the backend is missing the authorization check.
With a header redirect, the redirect happens as soon as the client has fetched the headers. No content has to be transferred (it still might be in some cases, but using exit; right after the Location header will prevent that). That means less overhead and faster reaction times for the end user.
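For reference, a minimal sketch of that header redirect for the example in the question (the file names come from the question; this has to run before any output is sent):
<?php
if (isset($_GET['role'])) {
    // Decide the target, send the Location header before any output, then stop
    $target = ($_GET['role'] == "Admin") ? "AdminPage.php" : "StandardPage.php";
    header("Location: " . $target);
    exit; // nothing after this line is executed or sent to the client
}
?>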
Firstly, a client can simply turn off JavaScript, and then and there your code is broken immediately.
Don't give the client any chance to alter or break your workflow.
Secondly, it is bad practice because it is unmanageable. Keep styles, scripts, and PHP code separate.
Related
I know it's impossible to hide source code but, for example, if I have to link a JavaScript file from my CDN to a web page and I don't want the people to know the location and/or content of this script, is this possible?
For example, to link a script from a website, we use:
<script type="text/javascript" src="http://somedomain.example/scriptxyz.js">
</script>
Now, is it possible to hide from the user where the script comes from, or to hide the script's content, and still use it on a web page?
For example, by saving it on my private CDN that requires a password to access files, would that work? If not, what would work to get what I want?
Good question with a simple answer: you can't!
JavaScript is a client-side programming language, therefore it works on the client's machine, so you can't actually hide anything from the client.
Obfuscating your code is a good solution, but it's not enough, because, although it is hard, someone could decipher your code and "steal" your script.
There are a few ways of making your code hard to steal, but as I said, nothing is bullet-proof.
Off the top of my head, one idea is to restrict access to your external js files from outside the page you embed your code in. In that case, if you have
<script type="text/javascript" src="myJs.js"></script>
and someone tries to access the myJs.js file in browser, he shouldn't be granted any access to the script source.
For example, if your page is written in PHP, you can include the script via the include function and let the script decide if it's "safe" to return its source.
In this example, you'll need the external "js" (written in PHP) file myJs.php:
<?php
$URL = $_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'];
if ($URL != "my-domain.example/my-page.php")
die("/\*sry, no acces rights\*/");
?>
// your obfuscated script goes here
that would be included in your main page my-page.php:
<script type="text/javascript">
<?php include "myJs.php"; ?>;
</script>
This way, only the browser could see the js file contents.
Another interesting idea is that at the end of your script, you delete the contents of your dom script element, so that after the browser evaluates your code, the code disappears:
<script id="erasable" type="text/javascript">
//your code goes here
document.getElementById('erasable').innerHTML = "";
</script>
These are all just simple hacks that cannot, and I can't stress this enough: cannot, fully protect your js code, but they can sure piss off someone who is trying to "steal" your code.
Update:
I recently came across a very interesting article written by Patrick Weid on how to hide your js code, and he reveals a different approach: you can encode your source code into an image! Sure, that's not bullet proof either, but it's another fence that you could build around your code.
The idea behind this approach is that most browsers can use the canvas element to do pixel manipulation on images. And since a canvas pixel is represented by 4 values (rgba), each of which is in the range 0-255, you can store a character (actually its ASCII code) in every pixel. The rest of the encoding/decoding is trivial.
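To make the encoding half concrete, here is a rough sketch in PHP using the GD extension (the file names are only illustrative; decoding would be done client-side with a canvas, as the article describes):
<?php
$code = file_get_contents('script.js');   // the source you want to hide (illustrative path)
$len  = strlen($code);
$img  = imagecreatetruecolor($len, 1);    // one pixel per character
for ($i = 0; $i < $len; $i++) {
    // store the character's ASCII code in the red channel of pixel $i
    $color = imagecolorallocate($img, ord($code[$i]), 0, 0);
    imagesetpixel($img, $i, 0, $color);
}
imagepng($img, 'script.png');             // serve this image instead of the .js file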
The only thing you can do is obfuscate your code to make it more difficult to read. No matter what you do, if you want the javascript to execute in their browser they'll have to have the code.
Just off the top of my head, you could do something like this (if you can create server-side scripts, which it sounds like you can):
Instead of loading the script like normal, send an AJAX request to a PHP page (it could be any server-side page; I just happen to use PHP myself). Have the PHP locate the file (maybe on a non-public part of the server), open it with file_get_contents, and return (read: echo) the contents as a string. A rough sketch of such a page follows below.
When this string returns to the JavaScript, have it create a new script tag, populate its innerHTML with the code you just received, and attach the tag to the page. (You might have trouble with this; innerHTML may not be what you need, but you can experiment.)
If you do this a lot, you might even want to set up a PHP page that accepts a GET variable with the script's name, so that you can dynamically grab different scripts using the same PHP. (Maybe you could use POST instead, to make it just a little harder for other people to see what you're doing. I don't know.)
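Here is what such a PHP page could look like, as a rough sketch (the get-script.php name, the name parameter, and the private-scripts directory are all made up for illustration):
<?php
// get-script.php - returns a script that is kept outside the public web root
$name = basename($_GET['name'] ?? '');                    // strip any path components
$path = __DIR__ . '/../private-scripts/' . $name . '.js';
if ($name !== '' && is_file($path)) {
    header('Content-Type: application/javascript');
    echo file_get_contents($path);                        // hand the source back as a string
} else {
    http_response_code(404);
}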
EDIT: I thought you were only trying to hide the location of the script. This obviously wouldn't help much if you're trying to hide the script itself.
Google Closure Compiler, YUI Compressor, Minify, /packer/... etc., are options for compressing/obfuscating your JS code. But none of them can help you hide your code from users.
Anyone with decent knowledge can easily decode/de-obfuscate your code using tools like JS Beautifier. You name it.
So the answer is: you can always make your code harder to read/decode, but for sure there is no way to hide it.
Forget it, this is not doable.
No matter what you try, it will not work. All a user needs to do to discover your code and its location is to look in the Net tab in Firebug, or use Fiddler, to see what requests are being made.
From my knowledge, this is not possible.
Your browser has to have access to JS files to be able to execute them. If the browser has access, then browser's user also has access.
If you password protect your JS files, then the browser won't be able to access them, defeating the purpose of having JS in the first place.
I think the only way is to put the required data on the server and allow only logged-in users to access it as required (you can also do some calculations server side). This won't protect your JavaScript code, but it will make it inoperable without the server-side code.
I agree with everyone else here: With JS on the client, the cat is out of the bag and there is nothing completely foolproof that can be done.
Having said that, in some cases I do this to put some hurdles in the way of those who want to take a look at the code. This is how the algorithm works (roughly):
The server creates 3 hashed and salted values: one for the current timestamp, and the other two for each of the next 2 seconds. These values are sent to the client via Ajax as a comma-delimited string from my PHP module. In some cases, I think you can hard-bake these values into a script section of the HTML when the page is formed, and delete that script tag once the use of the hashes is over. The server is CORS-protected and does all the usual SERVER_NAME etc. checks (which is not much of a protection, but at least provides some modicum of resistance to script kiddies).
It would also be nice if the server checked whether there was indeed an authenticated user's client doing this.
The client then sends the same 3 hashed values back to the server through an Ajax call to fetch the actual JS that I need. The server checks the hashes against the current timestamp there... The three values ensure that the data is being sent within the 3-second window, to account for latency between the browser and the server.
The server needs to be convinced that one of the hashes is matched correctly; if so, it sends the crucial JS back to the client. This is a simple, crude "one-time-use password" without the need for any database at the back end.
This means that any hacker has only the 3-second window since the generation of the first set of hashes to get to the actual JS code.
The entire client code can be inside an IIFE, so some of the variables inside the client are even harder to read from the Inspector console.
This is not a deep solution: a determined hacker can register, get an account, ask the server to generate the first three hashes by doing tricks to go around Ajax and CORS, and then make the client perform the second call to get to the actual code -- but it is a reasonable amount of work.
Moreover, if the salt used by the server is based on the login credentials, the server may be able to detect which user tried to retrieve the sensitive JS. (The server needs to do some additional work regarding the behaviour of the user after the sensitive JS was retrieved, and block that person if they did not, say, perform some other activity that was expected.)
An old, crude version of this was done for a hackathon here: http://planwithin.com/demo/tadr.html That will not work if the server detects too much latency and the request falls outside the 3-second window.
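A very rough sketch of the hash handling in PHP (the salt, the hash function, and the window check are simplifications of what is described above; the real salt would be tied to the logged-in user's credentials):
<?php
$salt = 'per-user-secret';                       // placeholder: derive from the user's credentials
$now  = time();

// First Ajax call: hand out hashes for the current second and the next two
$hashes = [];
for ($i = 0; $i < 3; $i++) {
    $hashes[] = hash('sha256', $salt . ($now + $i));
}
echo implode(',', $hashes);

// Second Ajax call: accept the hashes back only while one of them still matches
// a timestamp within the last 3 seconds
function hashes_are_fresh(array $returned, string $salt): bool {
    $now = time();
    foreach ($returned as $h) {
        for ($i = 0; $i < 3; $i++) {
            if (hash_equals(hash('sha256', $salt . ($now - $i)), $h)) {
                return true;
            }
        }
    }
    return false;
}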
As I said in the comment I left on gion_13's answer before (please read), you really can't. Not with JavaScript.
If you don't want the code to be available client-side (= stealable without great effort), my suggestion would be to make use of PHP (or ASP, Python, Perl, Ruby, JSP + Java Servlets), which is processed server-side so that only the results of the computation/code execution are served to the user. Or, if you prefer, even Flash or a Java applet, which allow client-side computation/code execution but are compiled and thus harder to reverse-engineer (not impossible, though).
Just my 2 cents.
Just my 2 cents.
You can also set up a MIME type handler so that application/javascript files run as PHP, .NET, Java, or whatever language you're using. I've done this for dynamic CSS files in the past.
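As a rough sketch of that idea (the file name is made up; the server would be configured to route script requests to it):
<?php
// dynamic-script.php - emits JavaScript from PHP, like the dynamic-CSS trick mentioned above
header('Content-Type: application/javascript');
?>
console.log(<?php echo json_encode('generated at ' . date('c')); ?>);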
I know that this is the wrong time to be answering this question, but I just thought of something.
I know it might be stressful, but at least it might still work.
Now, the trick is to create a lot of server-side encoding scripts. They have to be decodable (for example, a script that replaces all vowels with numbers and adds the letter 'a' to every consonant, so that the word 'bat' becomes 'ba1ta'). Then create a script that randomizes between the encoding scripts and creates a cookie with the name of the encoding script being used. (Quick tip: try not to use the actual name of the encoding script as the cookie value. For example, if our cookie is named 'encoding_script_being_used' and the randomizing script chooses an encoding script named MD10, don't use MD10 as the value of the cookie but something like 'encoding_script4567656', just to prevent guessing.) Then, after the cookie has been created, another script will check for the cookie named 'encoding_script_being_used', get its value, and determine which encoding script is being used.
The reason for randomizing between the encoding scripts is that the server-side language will randomize which script is applied to your javascript.js, and then create a session or cookie to record which encoding script was used.
Then the server-side language will also encode your javascript.js and put it in a cookie.
So now let me summarize with an example:
PHP randomizes between a list of encoding scripts and encodes javascript.js, then it creates a cookie telling the client side which encoding script was used, and then the client-side code decodes the javascript.js cookie (which is obviously encoded),
so people can't steal your code.
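As a rough sketch, one such encoding script could look like this in PHP (the function name is made up; the rule is the vowel/consonant one described above):
<?php
// Toy encoder: vowels become digits, every other letter gets an 'a' appended,
// so "bat" becomes "ba1ta" as in the example above.
function toy_encode(string $s): string {
    $vowels = ['a' => '1', 'e' => '2', 'i' => '3', 'o' => '4', 'u' => '5'];
    $out = '';
    foreach (str_split($s) as $ch) {
        $lower = strtolower($ch);
        if (isset($vowels[$lower])) {
            $out .= $vowels[$lower];
        } elseif (ctype_alpha($ch)) {
            $out .= $ch . 'a';
        } else {
            $out .= $ch;            // leave punctuation, digits, whitespace alone
        }
    }
    return $out;
}
echo toy_encode('bat');             // ba1ta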
But I would not advise this, because:
it is a long process
it is too stressful
Use NW.js; I think it's helpful since it can compile to a binary, which you can then use to make Windows, Mac, and Linux applications.
This method partially works if you do not want to expose the most sensitive part of your algorithm.
Create WebAssembly modules (.wasm), import them, and expose only your JS glue code / workflow. This way the algorithm is protected, since it is extremely difficult to revert assembly code into a more human-readable format.
After having produced the wasm module and imported it correctly, you can use your code as you normally do:
<body id="wasm-example">
<script type="module">
import init from "./pkg/glue_code.js";
init().then(() => {
console.log("WASM Loaded");
});
</script>
</body>
Say I have a form with which a user inputs some information that is submitted to the server using PHP, and in the PHP code I have, say:
$data = $_POST['data'];
// or
$data = strip_tags(#$_POST['data']);
I want to know if strip_tags() is enough to stop JavaScript injection through HTML forms. If not, how else can this be prevented? I have read here.
And also, say I input javascript:void(document.bgColor="blue") in the browser address bar; this changes the whole site's background color to blue. How can JavaScript injection through the address bar be prevented?
Thanks.
I suggest using htmlspecialchars whenever you want to output something to the browser:
echo htmlspecialchars($data, ENT_QUOTES, 'UTF-8');
For question 2, I'm not sure if that's even possible to prevent. It's not something I've ever considered before. It sounds like you're trying to prevent executing any javascript that wasn't included by you on the page, which would also mean blocking the devtools in the browser from executing anything in the console. This could potentially be hostile to your users, e.g. if they wanted to use a bookmarklet from Instapaper.
For 1, ultimately your goal is to avoid including the injected JavaScript in the page when you generate a new one. When you output the data from the form, you can wrap it in htmlspecialchars.
It depends on which output you are trying to get.
In some cases, you'll want to leave the HTML tags (including script tags) in place but ensure those elements will not run when you output them; in that case you should use htmlspecialchars($_POST['data']) (it's also suggested to pass UTF-8 as the third parameter).
But if you want to remove the tags entirely, then strip_tags will prevent XSS.
One function cannot fully protect you from script injection. Consider the following program:
<?php
if(isset($_POST['height']))
$height=htmlspecialchars($_POST['height'], ENT_QUOTES, 'UTF-8');
else $height=200;
if(isset($_POST['width']))
$width=htmlspecialchars($_POST['width'], ENT_QUOTES, 'UTF-8');
else $width=300;
echo("
<!DOCTYPE html>
<html>
<body>
<iframe src='whatever' height=$height width=$width>
</iframe>
</body>
</html>
");
The input is sanitized, but javascript will still be executed through a simple injection vector like:
300 onload=alert(String.fromCharCode(88)+String.fromCharCode(83)+String.fromCharCode(83))
You still need to quote your attributes, or you are vulnerable, as this example shows.
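For the example above, the minimal fix is to quote the attributes; since the values went through htmlspecialchars with ENT_QUOTES, an attacker's quotes are encoded and cannot break out of the attribute value:
echo("<iframe src='whatever' height='$height' width='$width'></iframe>");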
Another semi-common injection vector exists when user input is echoed into javascript comments, and you can inject new lines or close the comment. I blame it on the 'this shit doesn't work as it should, but let's keep it around in a comment'-style of development.
Note: The XSS protection of many browsers will not run my simple example. If you want to try it use one without protection, or find a vector that defeats it (not sure if there is one for e.g. Chrome).
I'm trying to rationalize a website, and I have many links on it to the same document, so I want to create a JavaScript function that returns the URL of this document. This way, I could update the document and only have to change the URL in the function, not in all the occurrences of the link. (It's a professional, internal website with many links to official documents that get updated often, out of my control. Each time I get around to updating the links, I realize a while later that I forgot some, even after searching in all the HTML files. The site is messy, was poorly written by many people, and that's why I'm trying to simplify it.)
My first idea was to use a link, but everyone says it's bad practice. I still don't see why, and I don't like using onclick, as it doesn't work with middle click, and I want to let users decide how they open the doc.
Also, I want to use a link to redirect to a specific page.
On top of this, what I have tried so far is not working as I intend, so I would need some help, whether to come up with a better solution or to make this work!
Here is my JS, with different versions:
function F_link_PDF() {
// I was pretty sure this would work
return "http://www.example.com/presentation.pdf" ;
}
function F_link_PDF_2() {
document.write("http://www.example.com/presentation.pdf");
}
function F_link_PDF_3() {
// I don't like this solution, as it doesn't open the document the way the user intended
location.href = "http://www.example.com/presentation.pdf" ;
}
This example is for a PDF document, but I could also need this for HTML, DOC, PPT...
And finally, I started with JS because I'm used to it, but I could also use other languages (PHP, ASP) if someone says that's a better option.
Thanks in advance!
The hack way: Go with JavaScript; however, you run into potential issues with browsers not running it.
The better way: Use mod_rewrite / .htaccess to redirect previous (expired) requests to the new location of the resource. You could also use FallbackResource and point it at a .php file that serves the new resource based on criteria (you now have the power of PHP to decide where the Location header should go).
The best way1: Place those document references in a database table somewhere and reference them in the page using the table's current value. This creates a single place of "truth" and allows you to update the site from a global perspective. You could also, at a later date, provide search, tag, display a list, etc.
1 Not implying it's the absolute best, but it is certainly a better way than updating hard-coded references.
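A rough sketch of that database-backed approach in PHP (the documents table, its columns, and the connection details are all invented for illustration):
<?php
// Look a document up by a stable key so every page links through one place of "truth"
function document_url(PDO $db, string $key): string {
    $stmt = $db->prepare('SELECT url FROM documents WHERE doc_key = ?');
    $stmt->execute([$key]);
    return (string) $stmt->fetchColumn();
}

$db = new PDO('mysql:host=localhost;dbname=site', 'user', 'password');
echo '<a href="' . htmlspecialchars(document_url($db, 'presentation')) . '">Presentation (PDF)</a>';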
A server-side programming language like PHP is a better option.
Here's example code that helps:
<?php
$link="http://www.example.com/files/document.pdf";
if ($_GET['PAGE'] == "downloads")
{
?>
This is a download page where you can download our flyer.
<?php
echo "Download PDF";
}
if ($_GET['PAGE'] == "specials")
{
?>
This is our store specials page. check them out. a link to the flyer is below.
<?php
echo "Download PDF";
}
?>
The code isn't 100% perfect, since some text needs adjusting, but what it does is take a parameter PAGE and check whether it is "downloads" or "specials"; if it is, it loads the appropriate page and adds the link to the download file. If you try both pages, the link to the download is exactly the same.
If the above php script is saved as index.php, then you can call each page with:
index.php?PAGE=specials for the specials page
index.php?PAGE=downloads for the download page
Once that works, you can add another "if" section for another page, but the most important line in each section is the last line of...
echo "<a href=\"$link\">Download PDF</a>";
...because it uses a variable that's usable in every case in the script.
An advantage of using the server-side method is that people can view the site even with JavaScript disabled.
I'm delegating some application logic to the client side (JavaScript).
How can I switch to server side only if JavaScript is disabled?
eg
if(javascript.isdisabled)
{
//execute server code
}
You do it the other way around. You write HTML that works with server side code, then layer JavaScript over the top that stops the HTTP request that would trigger the server side code.
The specifics depend on exactly what you want to do, but will usually involve calling the Event.preventDefault() method having bound an event listener.
For example, given a form (a minimal sketch of the markup, with two inputs named first and second, is assumed):
<form>
    <input type="number" name="first">
    <input type="number" name="second">
    <button type="submit">Add</button>
</form>
function calc(evt) {
var form = this;
alert(+this.elements.first.value + +this.elements.second.value);
evt.preventDefault()
}
var form = document.querySelector('form');
form.addEventListener('submit', calc);
See also: Progressive Enhancement and Unobtrusive JavaScript
Server code executes first, and the result is then sent to the client. And there's no good way to determine server-side whether JS is turned on.
So purely: no. You simply don't know if JS is enabled until the point where the server has already finished serving that request.
Solutions are as follows:
1) Do it manually (not recommended)
As long as this is not happening on the user's first page view, you could determine on the first page view whether or not JS is enabled, and then manually pass that information to the server on all future requests. One way to accomplish that would be to have every link carry a query var telling the server to execute the logic, and then, after the page loads, remove that var via JS (which obviously will only happen if JS is enabled).
So a link would look like https://blah.com/my-page?serverexecute=1 in the page; then, once the page loads, JS (if it's enabled) can remove the var so it's just https://blah.com/my-page. The server would then only execute your logic if the query var serverexecute is present and set to 1.
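A rough sketch of the server-side half of that check (using the serverexecute name from the example above):
<?php
// Run the server-side fallback only when the marker is still in the URL,
// i.e. when client-side JavaScript did not get a chance to strip it
if (isset($_GET['serverexecute']) && $_GET['serverexecute'] == '1') {
    // ... execute the server-side version of the logic here ...
}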
But this would be very non-standard and, frankly, weird. The more normal way to do this is:
2) Reverse your thinking (recommended)
As said in another answer: progressive enhancement. This is the norm. You serve a page with the expectation that no other scripting is needed (i.e. do what has to be done server side), and then use JS only as an enhancement on top of that.
3) Don't cater to non-JS (also recommended, personally anyway)
JS availability is an insanely high percentage. It is considered a norm, and you'd be surprised how many sites don't actually work without it.
Note that I'm not saying "just let it break silently", but rather show a message at the top (or wherever is relevant) saying that the site or part of the site may not function correctly without JS (this can be done via noscript tags quite easily).
A notable example of this is none other than Facebook. I just tried going to facebook with JS disabled. I wasn't logged in to anything, so I got to the signup page, and above the form it noted:
"JavaScript is disabled on your browser.
Please enable JavaScript on your browser or upgrade to a JavaScript-capable browser to register for Facebook."
They didn't even make the effort to stop the form from showing...just, in essence, told me "by the way, it won't work".
Unless there's some very specific requirement that means you absolutely need non-JS (beyond the normal general "let's be accessible" concept) I personally believe there is currently absolutely no reason to spend any effort catering to non-JS users beyond the courtesy of a noscript letting them know that you're not catering to them.
You could think of redirecting users without JavaScript to a special page that includes the server-side logic you've mentioned, like so:
<head>
<noscript>
<meta http-equiv="refresh" content="0; url=http://example.com/without-js" />
</noscript>
</head>
The page may be exactly the same page, featuring a query string that tells the server to perform the logic you've mentioned.
Another possible approach to consider is explained in "Detect if JavaScript is enabled in ASPX" article.
Is it possible to use jQuery/JavaScript to see if a webpage's source code has been altered by a visitor and, if so, redirect them?
And by altered, I mean: what if they open Firebug or something and edit anything on the page once it's finished loading?
This seems like a hack to prevent people from messing with your forms.
This is most definitely not the right way to make your site more secure; security must always come from the server-side or, if everything is done via the front-end, in a way that can only hurt the user who is currently signed in.
Even if you did succeed in implementing this using JavaScript, the first thing I would do is disable exactly that :) or just disable JavaScript, use wget, inspect the code first, then write a curl work-around, etc.
Even if there is a way to do that, the visitor can still edit this verification, so this is pointless.
Yes. At load time, store the innerHTML of the html element in a string.
Then set an interval to check every second whether the current HTML matches the stored variable.