I want to have a "control panel" on a website, and when a button is pressed, I want it to run a command on the server (my computer). The panel is to run different Python scripts I wrote (one script for each button), and I want to use the panel from my Mac, my iPod touch, and my Wii. The best way I see for this is a website, since they all have browsers. Is there some JavaScript or something to run a command on my computer whenever the button is pressed?
EDIT: I heard AJAX might work for server-based things like this, but I have no idea how to do that. Is there like a 'system' block or something I can use?
Here are three options:
Have each button submit a form with the name of the script in a hidden field. The server will receive the form parameters and can then branch off to run the appropriate script.
Have each button hooked to its own unique URL and use JavaScript on the button click to just set window.location to that new URL. Your server will receive that URL and can decide which script to run based on the URL. You could even just use a link on the web page with no JavaScript.
Use Ajax to issue a unique URL to your server. This is essentially the same (from the server's point of view) as the previous two options. The main difference is that the web browser doesn't change what URL it's pointing to. The Ajax call just directs the server to do something and return some data, which the host web page can then use however it wants.
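For the server side, all three options boil down to the same thing: some process on your Mac receives a request and decides which script to run. Here is a minimal sketch using Python's built-in http.server (the URL paths and script names are placeholders made up for the example, not anything from your setup):

# minimal_panel_server.py - rough sketch: map a URL path to one of your scripts
# (paths and script names below are placeholders; adjust to your own setup)
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

SCRIPTS = {
    "/run/backup": "backup.py",   # placeholder script names
    "/run/lights": "lights.py",
}

class PanelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        script = SCRIPTS.get(self.path)
        if script is None:
            self.send_error(404, "Unknown command")
            return
        # Run the matching script; blocks until it finishes
        result = subprocess.run(["python3", script], capture_output=True, text=True)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(result.stdout.encode())

HTTPServer(("", 8000), PanelHandler).serve_forever()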
On the client side (the browser), you can do it with the simplest approach: just an HTML form. JavaScript would make it nicer for validation and for Ajax calls so the page doesn't have to refresh. But your main focus is handling it on the server. You could receive the form request in the language of your choice. If you are already running Python, you could write a quick CGI script in Python. Look at the cgi module for Python. You would need to hook this into the Apache server on OS X if that's where you will host it.
Unfortunately, your question about exactly how to write it is beyond the scope of a simple answer. But Google how to write an HTML form, or look at jQuery to build a quick form that can make Ajax calls easily.
Then search for how to use the Python cgi module and receive POST requests.
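As a very rough sketch of such a CGI script (assuming the HTML form posts a field named "script"; the whitelist and file names are invented for the example):

#!/usr/bin/env python
# run_script.cgi - sketch of a CGI handler that dispatches on a POSTed form field
import cgi
import subprocess

ALLOWED = {"backup": "backup.py", "lights": "lights.py"}  # whitelist, placeholder names

form = cgi.FieldStorage()
name = form.getfirst("script", "")

print("Content-Type: text/plain")
print()

if name not in ALLOWED:
    print("Unknown script")
else:
    result = subprocess.run(["python", ALLOWED[name]], capture_output=True, text=True)
    print(result.stdout)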
Javascript is basically for doing work in the browser (usually to render something nice for the end user to look at). What you want (as others have said already) is a way to connect an HTML form action to an action on the webserver "back end". And this is exactly (as RobG has pointed out) what CGI is for. An alternative to CGI which is quite popular with Apache users is mod_python - the difference is basically whether the "back end" operation runs as a standalone process (CGI) or inside a webserver process (mod_python), but for most basic applications your server side scripts don't need to care. And if you're in a shared hosting environment you may not have a choice - ask your sysadmin (or read your hosting service docs) to learn how best to run CGI scripts in this case.
Caveats:
You will probably need fairly elevated webserver admin access & expertise in order to get everything set up the way you want. You will at least need to be able (both in the sense of permissions and technical understanding) to view your webserver logs, edit your webserver configs and bounce (restart) your http service.
Whatever "back end" operations you want done will be done with the permissions/privileges of the webserver, which may not be the same as the permissions/privileges of the user account which you normally use to perform these operations. There are various ways around this (using custom daemons and/or sudo operations), but you really need to have a clear understanding with the webserver sysadmin (if the webserver is exposed to the Big Bad Internet) about how this is going to work before you deploy anything, otherwise you run the very real risk (especially if you are a noob) of making it possible for hackers to exploit your "command gateway" to hack the webserver.
Of course if you're just doing all this for fun on your personal laptop (there is an OSX tag on the question, after all), then you are the webserver sysadmin, and you're free to hack away and happily shoot yourself in the foot repeatedly while learning everything you need to know along the way, which is fine as long as you're not on a network. In this case, you may find this tutorial to be useful.
I have a website builder which allows users to drag and drop HTML blocks (img, div, etc...) into the page. They can save it. Once they save it, they can view the page.
I also allow custom code like JavaScript. Would it be safe to have their page be displayed on another server on a subdomain (mypage.example.com) but still fetched from the same database as the main server, or does it not matter to put it on the same server as the main server?
As far as I know, they cannot execute any PHP code since I will be using echo to display the page content.
Thanks for the help!
That depends on your setup. If you allow them to run custom JavaScript, they can probably steal session tokens from other users, which could be used to steal other accounts. I would recommend reading about XSS (Cross-Site-Scripting).
In short: XSS is the vulnerability that lets someone inject code into a site, which will then run on other people's computers.
It wouldn't make sense to give you a strict tutorial on how to do this at this point, because every system is different and needs different configuration to be attack-resistant.
Letting users put code somewhere is always a risk!
There is no need for another server, but you do need another domain to prevent cross-site scripting attacks on your main page. And no, a subdomain may not be sufficient; put it on another domain altogether to be on the safe side. (Luckily, domains can be acquired for free if you're OK with a .tk domain.)
Would it be safe to have their page be displayed on another server on a subdomain
Even a subdomain could be dangerous; just put it on another domain altogether, and you'll be safe.
or does it not matter to put it on the same server as the main server?
You can have it on the same server. By the way, did you know that with shared webhosting services (like GoDaddy, HostGator, etc.) there are thousands of websites sharing a single physical server?
Also, DO NOT listen to the people saying you need to sanitize or filter the HTML; that is NOT true. There is no need to filter out anything. In my opinion, that is corruption of data. Don't do that to your users; there's no need to do it (at least I can't think of any).
As far as I know, they cannot execute any PHP code since I will be using echo to display the page content.
Correct. If you were doing include("file"); or eval($code); then they could execute server-side code, but as long as you're just doing echo $code;, they won't be able to execute server-side code, so that's not a server-side security issue.
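As an analogy in Python terms (the question is about PHP, so this is only to illustrate the echo-versus-eval distinction):

# Analogy only: echo vs. eval/include, expressed in Python
user_code = "<script>alert('hi')</script>"   # whatever the user saved in the builder
print(user_code)    # like echo: the text is sent to the browser, never executed on the server
# exec(user_code)   # like eval()/include: this WOULD execute user-supplied code on the server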
I try to wrap my head around how to really secure ajax calls of any kind that are publicly available.
Let's say the JavaScript on a public page (so no user authentication of any kind) contains an AJAX call to a PHP script (REST API or just a script, it doesn't matter) that does a lot of heavy lifting. Any user can just look into the source code, find the AJAX call, rebuild it and execute it, and execute it again a million times a second and DDoS your site that way - not so great. At first I thought an HTTP_REFERER check could be helpful, but like any header field it can be manipulated (just use a curl request), so the security gain wouldn't be high.
The next approach was a combination of session IDs, cookies, etc. to build some kind of access key for every page viewer, and when someone exceeds the limit the AJAX call would run into an error. Sounds great so far, but just by clearing the cookies, etc., everything gets reset. So also no real solution. But, of course! Use the IP! Great idea! Users in public networks that share one IP for internet access will be totally happy if one miscreant blocks the service for all of them by abusing the call... not. So, also no great solution.
So, I’m really stuck here and can’t think of any great answer for my problem.
I also thought about API keys or something similar. But that is information that can also be extracted from the JavaScript source. So how do you prevent other servers from using your service in a proxy-like manner, serving your data to their users? (e.g. you implemented the GMaps API in your website (or any other API) and someone uses your script to access the API with your key)
tl;dr
Is there any good way to really secure publicly viewable AJAX calls against abuse, such as DDoSing your site, presenting your data on other sites, etc.?
I think you're overthinking what AJAX is. When your site makes an AJAX request, server side it's the same as any other page request (even if some scripts are more process intensive). You need to protect your entire site, not just specific scripts. If your server does not have any DDoS protection, it can be attacked through any page. Look into services like Cloudflare.
As @Sage mentioned, it is similar to a normal HTTP request. You can use normal authentication, as the HTTP headers/cookie information will be passed on to the server every time you make an AJAX call. For a clear view you can look into the developer console in the browser. It's the same as exposing your website's root URL. Just make sure you have authentication checks for AJAX calls too.
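As a rough illustration of "authentication checks for AJAX calls too" (sketched in Python/Flask here rather than PHP; the endpoint, limits, and session handling are invented, and as the question notes, anything cookie-based can be reset by the client, so treat this as one layer only):

# Sketch only: session-based auth check plus a crude per-session throttle
import time
from flask import Flask, session, jsonify, abort

app = Flask(__name__)
app.secret_key = "change-me"   # placeholder

WINDOW = 60      # seconds
MAX_CALLS = 30   # invented limit

@app.route("/api/heavy", methods=["POST"])
def heavy():
    if "user_id" not in session:          # same auth check as for a normal page
        abort(401)
    calls = [t for t in session.get("calls", []) if time.time() - t < WINDOW]
    if len(calls) >= MAX_CALLS:
        abort(429)                        # too many requests from this session
    calls.append(time.time())
    session["calls"] = calls
    return jsonify(result="...heavy lifting result...")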
How can I run a batch file on the client side? An exe file? Just to open a pre-installed program on the client side?
[Edit]
Regarding ActiveX, I tried
var activeXObj = new ActiveXObject("Shell.Application");
activeXObj.ShellExecute("C:\\WINDOWS\\NOTEPAD.EXE", "", "", "open", "1");
but this doesn't work. Any suggestions?
From Javascript? You can't. It's a security risk. Think about it - would you want every website to be able to run programs on your PC?
You mean launch an external program through a browser window using JavaScript? No way you can do that! That's a goddamn security black hole!
<script language="javascript" type="text/javascript">
// Note: this relies on ActiveX, so it only works in Internet Explorer
// with ActiveX scripting allowed (e.g. the page running in a trusted zone).
function RunEXE(prog) {
var oShell = new ActiveXObject("WScript.Shell");
oShell.Run('"' + prog + '"', 1);
}
</script>
If you really have control over the client, then you may want to install some remote daemon service on the client side, like SSH.
P.S. Invoke it through your server-side code, however.
Updated:
Don't be discouraged. You can absolutely do that in a safe manner.
First you need a daemon service on the client that will handle the task of invoking your application. Personally, I'd rather build a simple RPC server running as a Windows service in C++ or Delphi, but many other kinds of server could also do the job (SSH, Apache, Telnet).
Then make web pages that allow the user to "register" their services, with proper authentication to invoke that service (password, security key).
When you want to invoke your application from a web page on a client that's already registered, make an AJAX call (XMLHttpRequest) to your server.
The server should validate the requesting IP address against the registered information.
Then make a remote command invocation to the client with the registered information.
There are some networking situations in which this scheme might not work. However, if you really have control over the execution environment, there will always be some workaround.
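A very rough sketch of the validate-and-forward step on your server, in Python (the registration table, daemon port, and shared key below are invented placeholders; a real setup also needs HTTPS and stronger authentication):

# Sketch: server-side handler that relays a command to a registered client's daemon
import urllib.request
from flask import Flask, request, abort

app = Flask(__name__)

# filled in by your "register" page; values here are placeholders
REGISTERED = {"203.0.113.7": {"port": 9000, "key": "shared-secret"}}

@app.route("/invoke/<command>")
def invoke(command):
    client = REGISTERED.get(request.remote_addr)
    if client is None:
        abort(403)   # requesting IP is not registered
    # forward the command to the daemon running on the client machine
    url = "http://%s:%d/run?cmd=%s&key=%s" % (
        request.remote_addr, client["port"], command, client["key"])
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read()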
Redirect the client to http://yourserver/batchfile.bat. Under some browsers, this will prompt the user to run the batch file.
If the problem is the batch file is being displayed in the browser you need to set Content-Type and Content-Disposition in the HTTP header so the user is prompted to Save (or Run) the file rather than have the browser display it.
You won't be able to run the file without an OK from the user but this shouldn't be a problem.
Have a look at this question for a little more detail.
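A sketch of the header part in Python, written as a CGI-style response (the file name comes from the example URL above):

#!/usr/bin/env python
# Sketch: serve batchfile.bat so the browser offers Save/Run instead of displaying it
print("Content-Type: application/octet-stream")
print('Content-Disposition: attachment; filename="batchfile.bat"')
print()
with open("batchfile.bat") as f:
    print(f.read())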
Basically, you can't. If you need to launch something on the client side, you'll need another mechanism altogether, presumably one with some security built in. A previous poster mentioned psexec (http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx), which will obviously only work if you have appropriate permissions on the target system, and which is completely outside the browser.
Basically, what you are asking for would be a BIG, BIG problem if you were able to do it easily.
You might look into ActiveX, but I don't know what limits there are on an ActiveX object these days (I know there ARE limits, but perhaps you can work within them).
It's not allowed directly, for security reasons. Besides all the other answers, there is another idea.
You can build a localhost REST service into your pre-installed program and use JavaScript to call it with a command or data, assuming that you wrote the pre-installed program and the service is running when you call it. This solution works in some scenarios.
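A bare-bones sketch of what such a localhost service could look like, in Python (the port, paths, and command whitelist are invented; in practice the pre-installed program would embed something equivalent):

# Sketch: tiny localhost-only service the pre-installed program could run
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED = {"notepad": ["notepad.exe"]}   # whitelist of commands, placeholder

class LocalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cmd = ALLOWED.get(self.path.lstrip("/"))
        if cmd is None:
            self.send_error(404)
            return
        subprocess.Popen(cmd)             # launch the pre-installed program
        self.send_response(200)
        # CORS header so the web page's JavaScript is allowed to read the response
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(b"started")

# bind to 127.0.0.1 so only the local machine can reach it
HTTPServer(("127.0.0.1", 8123), LocalHandler).serve_forever()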
After experimenting with scripts on facebook.com, I noticed that for a couple of minutes it didn't allow me to connect to facebook.com. This happened more than once, so I suspect that Facebook doesn't like people messing with its code using userscripts.
The Stack Overflow question Can a website know if I am running a userscript? answers the multiple ways a website can detect scripts.
Now I want to know how I can fool a website to not detect any scripts I may be running.
Is there a way to know what my userscript HTML changes trigger on the website?
For Firefox and Chrome there is no magic bullet to prevent detection, it is a process and an art...
The 3 principle ways that a site would detect a script are:
Out of sequence or too-frequent AJAX requests.
Changes to the page's code (HTML, global JS, or even CSS).
Unnatural mouse or keyboard action.
See "Can a website know if I am running a userscript?" for more information.
Some countermeasures:
For AJAX requests that your script makes:
Use Wireshark, or similar, to analyze the traffic of normal AJAX calls. Make sure that you send the same type of calls, in the same order.
Make sure that your requests show the same headers and data as the page's.
Do not send AJAX requests faster than the page does normally or faster than a user could reasonably trigger.
Consider adding random delays to your script's requests, so that they are not uniformly timed.
Beware of constantly changing fields in the page's requests. Be sure that you know how to generate the correct value; don't just cut and paste.
For changes that you make to the page's code:
The page can handle these either entirely locally, with its own JavaScript, or it can also send back status or code fragments to the server.
Block and replace the page's javascript. Tools like NoScript, RequestPolicy, firewalls and proxies can be used to block the JS.
Then use Greasemonkey to add your edited version of the page's JS. Greasemonkey will run even if all of a page's JS is disabled.
If the page phones status home, and that status changes in response to your script's changes, intercept and replace those messages. (This is not needed if you used the previous method.) You will need a custom firewall, proxy, or router to do this.
For events that you trigger (mouse clicks, filling out fields, etc.):
If the page checks for this, give it what it looks for. For example, instead of a click event, you may need a mouseover, mousedown, mouseup, mouseout sequence.
If that is not enough, then the countermeasures are the same as for changes made to the page's code (replace JS, intercept status messages).
Important!
In the rare event that you are battling a site that tries to thwart userscripts, it is like a war. The webmaster can adapt to what you are doing, and the environment can constantly change. So scripting for such a site is an ongoing process.
Scripts are executed on the user's side - your side. Therefore the site's server code cannot know whether you run anything or not.
There are two things to note, however:
The site can deliver some script to be run on your computer that will inspect its environment and report back if necessary. These are most often targeted at detecting some specific script that the site owner doesn't want in its environment.
The site author can inspect your requests and analyze whether they match the patterns of regular communication from site-supplied forms or AJAX: missing keys or keys not matching the hardcoded order in the query, headers that identify automation clients or missing browser headers, and things like that. Hitting throttling limits because you make requests automatically much faster than a human user would fits this category too.
Otherwise, if you generate exactly the same requests and don't leave traces in the globally accessible environment, it is impossible for the site to tell whether you're using a userscript, with either server-side or client-side checks.
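For example, a server-side check of the second kind might look roughly like this (a Python/Flask sketch; the header tests and rate threshold are invented illustrations, not anything a specific site actually does):

# Sketch: the kind of request inspection a site might do server-side
import time
from flask import Flask, request, abort

app = Flask(__name__)
last_seen = {}   # ip -> timestamp of previous request (toy throttle)

@app.before_request
def look_for_automation():
    ua = request.headers.get("User-Agent", "")
    if "curl" in ua.lower() or "python" in ua.lower():     # obvious automation clients
        abort(403)
    if request.path.startswith("/api/") and "X-Requested-With" not in request.headers:
        abort(400)   # the site's own AJAX normally sends this header
    now = time.time()
    if now - last_seen.get(request.remote_addr, 0) < 0.2:  # invented rate threshold
        abort(429)
    last_seen[request.remote_addr] = now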
Theoretically JS runs in the browser, so after the first download it can easily be copied and made to run directly from a local copy, without going through the remote server. Because I need to sell a JS application (pay-as-you-use), I need to check each request and make it available ONLY if it is required by that particular site and, of course, only if they paid.
It doesn't work. As soon as someone has downloaded a copy of the JavaScript file, they can always keep it and even redistribute it.
Thus you cannot protect the JavaScript itself - but assuming you rely on some client-server interaction (i.e. AJAX), the server would not respond to requests coming from non-authorized sources, thus rendering the client-side worthless.
If you need to protect your business logic, don't put it into JavaScript. Alternatively, sue everybody who uses your scripts without having obtained a license (not sure if this is practical, though ...).
I wouldn't make the JS file that you plan to sell available directly on a URL like
yourdomain.com/yourfile.js
I would offer it on a URL like
yourdomain.com/getfile
Where /getfile is a URL that is processed by a PHP/Java/etc. server-side language, where you can check whatever credentials you need to check, be it the requesting domain name, IP address, some token, or something else.
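A rough sketch of what /getfile could do, in Python rather than PHP or Java (the token table, Referer check, and file name are placeholders; note that the Referer header can itself be spoofed):

# Sketch: serve the paid-for JS only after a server-side check
from flask import Flask, request, abort, send_file

app = Flask(__name__)

PAID_TOKENS = {"abc123": "customer-site.example"}   # placeholder token -> licensed domain

@app.route("/getfile")
def getfile():
    token = request.args.get("token", "")
    referer = request.headers.get("Referer", "")
    domain = PAID_TOKENS.get(token)
    if domain is None or domain not in referer:     # crude check, easily spoofed
        abort(403)
    return send_file("yourfile.js", mimetype="application/javascript")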
If your application is made in Java you can use a ServletFilter to check whether the request is valid (whether the IP is correct, or maybe you can use a ticket the way the Facebook, Twitter, whatever REST APIs do), and if it isn't valid, don't serve anything.
If you aren't using Java, I think something similar can be done in every programming language.
It may be a little more trouble than it's worth. Yes, you could require clients to provide a token and whitelist certain domains, etc. But they can still open any site that uses that particular JavaScript -- even someone else's -- and just Save As...
A better bet is controlling the script's interaction with your server. If it makes any AJAX calls to a server you control, then take that chance to authenticate. If it doesn't depend on data from you in that way, I think you'll just have to face the problem that anyone dedicated enough will be able to download your script and use it with a little bit of playing around.
Your best bet is, in addition to the above, to keep track of domains that have paid and search every once in a while to find out if anyone's taking your code.