Does ajax increase or decrease security? - javascript

I am creating a website which, until now, is pure PHP. I was thinking that, since very few people have JavaScript disabled (which I wonder about!), maybe I should create my website as a fully PHP site without any AJAX. Am I thinking about this the wrong way?
Just to be sure, if I implement some AJAX would it increase the risk of my site getting breached?
Should I even be worried about this, and should I start using AJAX?

AJAX itself will not increase or decrease the security of your site, at least as long as it is implemented carefully. The client (browser) will have JavaScript turned on or off. If it is turned on, there may be more vulnerabilities on the client side, but this won't affect your server and hence your site.
Nevertheless, you should of course implement your AJAX entry points securely (the server-side files that are accessed by AJAX requests). Two of the most important rules of thumb to keep in mind are:
Treat every user input (whether it comes in via AJAX or not) as potentially "evil" and therefore validate it thoroughly and use database escaping, parameterized queries, and so on. Do NOT rely on client-side validation only! (See the sketch after this list.)
A good website should offer everything that is accessible with JavaScript enabled without it as well. Admittedly, this is not always possible, but one should at least try.
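To make the first rule concrete, here is a minimal sketch of a secure AJAX entry point. It is written in Node.js/Express terms with the node-postgres driver purely for illustration (the question's stack is PHP, where the same idea is expressed with PDO prepared statements); the route, table, and field names are made up.

    const express = require('express');
    const { Pool } = require('pg');          // node-postgres; connection settings come from env vars

    const app = express();
    const pool = new Pool();

    app.use(express.json());

    app.post('/api/comments', async (req, res) => {
      const text = req.body.text;

      // Server-side validation: never rely on client-side checks alone.
      if (typeof text !== 'string' || text.trim().length === 0 || text.length > 500) {
        return res.status(400).json({ error: 'Invalid comment' });
      }

      // Parameterized query: the driver handles escaping, preventing SQL injection.
      await pool.query('INSERT INTO comments (body) VALUES ($1)', [text.trim()]);

      res.json({ ok: true });
    });

    app.listen(3000);

The same validation can of course also run in the browser for a better user experience, but the server-side check is the one that counts.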
I would suggest using a framework that supports AJAX (depending on your server-side technology: PHP, Java, Perl, and so on), which will make the whole thing much easier for you. You might also search for something like "securing AJAX applications" to find more detailed information on the topic.

Ajax is not inherently secure or insecure.
It does, however, open up 'opportunities' for insecure code. A mistake I commonly see is as follows:
The user is authenticated in code-behind when the page is loaded
A user id or other credentials are rendered out and picked up by JavaScript
These (totally unauthenticated) credentials are sent back over the wire by Ajax and are not checked server-side. This is very easily exploited, even with a simple tool like Fiddler! (A sketch of the fix follows this list.)
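A minimal sketch of the fix, assuming an Express app with express-session; the route and field names are hypothetical. The point is that the handler derives the identity from the server-side session rather than from anything the browser posts back.

    const express = require('express');
    const session = require('express-session');

    const app = express();
    app.use(express.json());
    app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

    app.post('/api/profile', (req, res) => {
      // BAD: trusting an id the client echoed back with the AJAX request.
      // const userId = req.body.userId;   // trivially forged with Fiddler or curl

      // GOOD: take the identity from the authenticated server-side session.
      if (!req.session.userId) {
        return res.status(401).json({ error: 'Not authenticated' });
      }
      const userId = req.session.userId;

      // ...update only this user's own record here...
      res.json({ ok: true, userId });
    });

    app.listen(3000);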
You must apply the same level of security to your web methods/web API as you would elsewhere in your site. If you do, Ajax should be no more or less secure than 'regular' web pages. Things to watch out for include:
Proper authentication and authorisation checks in your web services or API
Anti-SQL injection measures
HTTPS if transmitting personal or sensitive data
Ajax makes websites more responsive and is pervasive across the modern web. Take care with security, degrade gracefully for visitors without JavaScript if you expect a significant number of them (or if losing any visitor is unacceptable to you), and you should have nothing to fear.

I am creating a website which until now is pure PHP. I was thinking
that since very few people do not have JavaScript enabled (which I
wonder why!) maybe I should create my website as a fully PHP site
without any AJAX. Am I thinking wrong?
I would say most people do have JavaScript enabled. Probably 2% at most have it disabled according to this post from 2012.
Just to be sure, if I implement some AJAX would it increase the risk
of my site getting breached?
Yes, technically it does. More code = more chance of bugs, and security bugs will be a subset of these.
You will also be increasing the attack surface of your application, as you will generally be implementing more server-side handlers for user actions that execute asynchronously.
There is also more chance of client-side bugs such as XSS creeping in (particularly DOM-based XSS bugs; see the sketch below).
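As an illustration of the DOM-based XSS risk, here is a minimal sketch (the element id and query parameter are made up): the commented-out line treats user-controlled input as markup, the safe line treats it as plain text.

    // A value the attacker can influence, e.g. ?name=<img src=x onerror=alert(1)>
    const params = new URLSearchParams(window.location.search);
    const name = params.get('name') || 'guest';

    // Vulnerable sink: the string is parsed as HTML, so injected markup executes.
    // document.getElementById('greeting').innerHTML = 'Hello ' + name;

    // Safer: the value is inserted as text and can never become markup.
    document.getElementById('greeting').textContent = 'Hello ' + name;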
Should I be even worried about this and start using AJAX?
You should be "rightfully concerned" rather than worried. The increased risk is considered acceptable by most sites, even high security systems such as banking applications. If implemented correctly it is possible for your site to remain "just as secure" as it was without AJAX.
As with anything web-based, pay particular attention to the OWASP Top 10 and follow secure coding practices to minimise the risks. It is always advisable to have a security assessment carried out by an external company to pick up anything you've missed, although this can be expensive.

Related

Is there any difference between making DOM on the server/client side? (speed perspective) [duplicate]

I've done some web-based projects, and most of the difficulties I've run into (questions, confusions) could be figured out with help. But I still have an important question, even after asking some experienced developers: when functionality can be implemented with either server-side code or client-side scripting (JavaScript), which should be preferred?
A simple example:
To render a dynamic HTML page, I can format the page in server-side code (PHP, Python) and use Ajax to fetch the formatted page and render it directly (more logic on the server side, less on the client side).
I can also use Ajax to fetch the unformatted data (JSON) and use client-side scripting to format and render the page (the server gets the data from a DB or another source and returns it to the client as JSON or XML; more logic on the client side and less on the server).
So how can I decide which one is better? Which one offers better performance? Why? Which one is more user-friendly?
With browsers' JS engines evolving, JS can be interpreted in less time, so should I prefer client-side scripting?
On the other hand, with hardware evolving, server performance is growing and the cost of server-side logic will decrease, so should I prefer server-side scripting?
EDIT:
With the answers, I want to give a brief summary.
Pros of client-side logic:
Better user experience (faster).
Less network bandwidth (lower cost).
Increased scalability (reduced server load).
Pros of server-side logic:
Better security.
Better availability and accessibility (mobile devices and old browsers).
Better SEO.
Easier to scale (you can add more servers, but you can't make the user's browser faster).
It seems that we need to balance these two approaches when facing a specific scenario. But how? What's the best practice?
I will use client-side logic except in the following conditions:
Security critical.
Special groups (JavaScript disabled, mobile devices, and others).
In many cases, I'm afraid the best answer is both.
As Ricebowl stated, never trust the client. However, I feel that it's almost always a problem if you do trust the client. If your application is worth writing, it's worth properly securing. If anyone can break it by writing their own client and passing data you don't expect, that's a bad thing. For that reason, you need to validate on the server.
Unfortunately if you validate everything on the server, that often leaves the user with a poor user experience. They may fill out a form only to find that a number of things they entered are incorrect. This may have worked for "Internet 1.0", but people's expectations are higher on today's Internet.
This potentially leaves you writing quite a bit of redundant code and maintaining it in two or more places (some of the definitions, such as maximum lengths, also need to be maintained in the data tier). For reasonably large applications, I tend to solve this issue using code generation. Personally, I use a UML modeling tool (Sparx Systems' Enterprise Architect) to model the "input rules" of the system, then make use of partial classes (I'm usually working in .NET) to code-generate the validation logic. You can achieve a similar thing by coding your rules in a format such as XML and deriving a number of checks from that XML file (input length, input mask, etc.) on both the client and server tiers.
Probably not what you wanted to hear, but if you want to do it right, you need to enforce rules on both tiers.
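As a rough JavaScript analogue of the rules-driven approach described above (the answer uses XML plus .NET code generation; this object-literal version is only a sketch with made-up field names), the rules are declared once and a single checker applies them on whichever tier runs it:

    // One declarative definition of the input rules for a form.
    const userRules = {
      email:    { required: true,  maxLength: 254, pattern: /^[^@\s]+@[^@\s]+\.[^@\s]+$/ },
      nickname: { required: false, maxLength: 30 }
    };

    // A generic checker that can run in the browser before submit
    // and again on the server when the AJAX request arrives.
    function validate(data, rules) {
      const errors = {};
      for (const [field, rule] of Object.entries(rules)) {
        const value = (data[field] || '').toString().trim();
        if (rule.required && value === '') {
          errors[field] = 'required';
        } else if (value.length > rule.maxLength) {
          errors[field] = 'too long';
        } else if (rule.pattern && value !== '' && !rule.pattern.test(value)) {
          errors[field] = 'invalid format';
        }
      }
      return errors;
    }

    // Example: validate({ email: 'not-an-email' }, userRules) -> { email: 'invalid format' }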
I tend to prefer server-side logic. My reasons are fairly simple:
I don't trust the client; this may or may not be a real problem, but it's habitual
Server-side reduces the volume per transaction (though it does increase the number of transactions)
Server-side means that I can be fairly sure about what logic is taking place (I don't have to worry about the Javascript engine available to the client's browser)
There are probably more -and better- reasons, but these are the ones at the top of my mind right now. If I think of more I'll add them, or up-vote those that come up with them before I do.
Edit: valya comments that using client-side logic (using Ajax/JSON) allows for the (easier) creation of an API. This may well be true, but I can only half-agree (which is why I haven't up-voted that answer yet).
My notion of server-side logic is that it retrieves and organises the data; if I've got this right, that logic is the 'controller' (the C in MVC). This is then passed to the 'view'. I tend to use the controller to get the data, and then the 'view' deals with presenting it to the user/client. So I don't see that client/server distinctions are necessarily relevant to the argument about creating an API; basically, horses for courses. :)
...also, as a hobbyist, I recognise that I may have a slightly twisted usage of MVC, so I'm willing to stand corrected on that point. But I still keep the presentation separate from the logic. And that separation is the plus point so far as APIs go.
I generally implement as much as reasonable client-side. The only exceptions that would make me go server-side would be to resolve the following:
Trust issues
Anyone is capable of debugging JavaScript and reading passwords, etc. No-brainer here.
Performance issues
JavaScript engines are evolving fast so this is becoming less of an issue, but we're still in an IE-dominated world, so things will slow down when you deal with large sets of data.
Language issues
JavaScript is a weakly-typed language and it makes a lot of assumptions about your code. This can force you to employ spooky workarounds in order to get things working the way they should on certain browsers. I avoid this type of thing like the plague.
From your question, it sounds like you're simply trying to load values into a form. Barring any of the issues above, you have 3 options:
Pure client-side
The disadvantage is that your users' loading time would double (one load for the blank form, another for the data). However, subsequent updates to the form would not require a page refresh. Users will like this if a lot of data is repeatedly fetched from the server into the same form.
Pure server-side
The advantage is that your page would load with the data. However, subsequent updates to the data would require refreshes to all/significant portions of the page.
Server-client hybrid
You would have the best of both worlds, however you would need to create two data extraction points, causing your code to bloat slightly.
There are trade-offs with each option so you will have to weigh them and decide which one offers you the most benefit.
One consideration I have not heard mentioned was network bandwidth. To give a specific example, an app I was involved with was done entirely server-side and resulted in a 200 MB web page being sent to the client (it was impossible to do less without a major re-design of a bunch of apps), resulting in 2-5 minute page load times.
When we re-implemented this by sending JSON-encoded data from the server and having local JS generate the page, the main benefit was that the data sent shrank to 20 MB, resulting in:
HTTP response size: 200 MB+ => 20 MB+ (with corresponding bandwidth savings!)
Time to load the page: 2-5 mins => 20 secs (10-15 of which are taken up by a DB query that had already been optimized to hell and further).
IE process size: 200 MB+ => 80 MB+
Mind you, the last two points were mainly due to the fact that the server side had to use a crappy tables-within-tables tree implementation, whereas going client-side allowed us to redesign the view layer to use a much more lightweight page. But my main point was the network bandwidth savings.
I'd like to give my two cents on this subject.
I'm generally in favor of the server-side approach, and here is why.
More SEO friendly. Google cannot execute JavaScript, therefore all that content will be invisible to search engines.
Performance is more controllable. User experience is always variable with SOA, due to the fact that you're relying almost entirely on the user's browser and machine to render things. Even though your server might be performing well, a user with a slow machine will think your site is the culprit.
Arguably, the server-side approach is more easily maintained and readable.
I've written several systems using both approaches, and in my experience, server-side is the way. However, that's not to say I don't use AJAX. All of the modern systems I've built incorporate both components.
Hope this helps.
I built a RESTful web application where all CRUD functionalities are available in the absence of JavaScript, in other words, all AJAX effects are strictly progressive enhancements.
I believe that, with enough dedication, most web applications can be designed this way, eroding many of the server-logic vs. client-logic "differences" raised in your question (such as security and expandability), because in both cases the request is routed to the same controller, whose business logic is identical until the last mile, where JSON/XML, instead of the full-page HTML, is returned for XHR requests (a sketch of this follows below).
Only in a few cases, where the AJAXified application is so vastly more advanced than its static counterpart (GMail being the best example that comes to mind), does one need to create two versions and separate them completely (kudos to Google!).
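A minimal sketch of that "same controller, different last mile" idea, assuming an Express route; the loadTasks helper and the 'tasks' view name are made up.

    // One route serves both the plain-HTML fallback and the AJAX enhancement.
    app.get('/tasks', async (req, res) => {
      const tasks = await loadTasks();   // hypothetical data-access call; identical for both paths

      if (req.xhr) {
        // XMLHttpRequest/fetch sending X-Requested-With: return just the data.
        res.json(tasks);
      } else {
        // No JavaScript (or a full page load): render the complete HTML page.
        res.render('tasks', { tasks });
      }
    });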
I know this post is old, but I wanted to comment.
In my experience, the best approach is a combination of client-side and server-side. Yes, AngularJS and similar frameworks are popular now, and they've made it easier to develop web applications that are lightweight, perform better, and work on most web servers. BUT the major requirement in enterprise applications is displaying report data, which can encompass 500+ records on one page. With pages that return large lists of data, users often want functionality that makes this huge list easy to filter and search, along with other interactive features. Because IE 11 and earlier IE browsers are the "browser of choice" at most companies, you have to be aware that these browsers still have compatibility issues with modern JavaScript, HTML5, and CSS3. Often, the requirement is to make a site or application compatible with all browsers. This requires adding shivs or using prototypes, which, together with the code needed to create a client-side application, adds to page load in the browser.
All of this will reduce performance and can cause the dreaded IE error "A script on this page is causing Internet Explorer to run slowly", forcing the user to choose whether to continue running the script or not... creating a bad user experience.
Determine the complexity of the application and what the user wants now, and could want in the future, based on their preferences in their existing applications. If this is a simple site or app with little-to-medium data, use a JavaScript framework. But if they want to incorporate accessibility or SEO, or need to display large amounts of data, use server-side code to render data and client-side code sparingly. In both cases, use a tool like Fiddler or the Chrome Developer Tools to check page load and response times, and follow best practices to optimize code.
Check out MVC apps developed with ASP.NET Core.
At this stage, client-side technology is leading the way. With the advent of many client-side libraries like Backbone, Knockout, and Spine, and the addition of client-side templates like JsRender and Mustache, client-side development has become much easier.
So, if my requirement is an interactive app, I will surely go for the client side.
If you have mostly static HTML content, then yes, go for the server side.
I did some experiments using both; I must say the server side is comparatively easier to implement than the client side.
As far as performance is concerned, read this and you will see how server-side performance scores:
http://engineering.twitter.com/2012/05/improving-performance-on-twittercom.html
I think the second variant is better. For example, if you implement something like 'skins' later, you will thank yourself for not formatting HTML on the server :)
It also keeps the separation between view and controller. Ajax data is often produced by the controller, so let it just return data, not HTML.
If you're going to create an API later, you'll only need to make a few changes to your code.
Also, 'naked' data is more cacheable than HTML, I think. For example, if you add some style to links, you'll need to reformat all the HTML... or just add one line to your JS. And the data isn't as big as the HTML (in bytes).
But if many heavy scripts are needed to format the data, it isn't so cool to ask users' browsers to do it.
As long as you don't need to send a lot of data to the client to allow it to do the work, the client side will give you a more scalable system, as you are distributing the load to the clients rather than hammering your server to do everything.
On the flip side, if you need to process a lot of data to produce a tiny amount of HTML to send to the client, or if optimisations can be made to use the server's work to support many clients at once (e.g. process the data once and send the resulting HTML to all the clients), then it may be a more efficient use of resources to do the work on the server.
If you do it in Ajax:
You'll have to consider accessibility issues (search for web accessibility on Google) for disabled people, but also for old browsers, users who don't have JavaScript, bots (like the Google bot), etc.
You'll have to flirt with "progressive enhancement", which is not simple to do if you have never worked much with JavaScript. In short, you'll have to make your app work with old browsers and with those that don't have JavaScript (some mobile devices, for example) or have it disabled.
But if time and money are not an issue, I'd go with progressive enhancement.
Also consider the "Back button". I hate it when I'm browsing a 100% AJAX website that renders the back button useless. (A small sketch of keeping it working follows.)
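Here is a minimal sketch of keeping the back button usable in an AJAX-heavy page, using the HTML5 history API; the container id and URL handling are illustrative.

    // Load a fragment via AJAX and record the change in the browser history.
    function showPage(url, pushEntry = true) {
      fetch(url, { headers: { 'X-Requested-With': 'XMLHttpRequest' } })
        .then(resp => resp.text())
        .then(html => {
          document.getElementById('content').innerHTML = html;
          if (pushEntry) history.pushState({ url: url }, '', url); // address bar changes, no reload
        });
    }

    // Back/Forward fire 'popstate'; re-render the state we stored instead of doing nothing.
    window.addEventListener('popstate', function (event) {
      if (event.state && event.state.url) showPage(event.state.url, false);
    });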
Good luck!
A 2018 answer, now that Node.js exists:
Since Node.js allows you to deploy JavaScript logic on the server, you can now re-use the validation on both the server and the client side.
Make sure you set up or restructure the code so that you can re-use the validation without changing any of it.
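A minimal sketch of that reuse, with made-up field rules: the same file runs in the browser and, via require(), on the Node.js server.

    // validate-user.js -- shared between the browser and the Node.js server.
    function validateUser(user) {
      const errors = [];
      if (!user.name || user.name.trim() === '') errors.push('name is required');
      if (!/^\d{1,3}$/.test(String(user.age))) errors.push('age must be a whole number');
      return errors;
    }

    // In the browser this file is included with a <script> tag and validateUser is
    // called before the form submits; on the server the same function checks the
    // incoming request. The export only exists where a module system is present.
    if (typeof module !== 'undefined' && module.exports) {
      module.exports = validateUser;
    }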

Processing a search-function at the server or at the SPA?

Hi fellow programmers!
I am currently working on a SPA project, where the main model class is a plain old User.
Example of the schema.
User = {'name': 'christopher', 'age': '21', 'nationality': 'Denmark'};
For my question, I am not looking for any code or examples.
I am implementing a search function for searching through all the users stored on the server.
So my application is going to serve the users matching whatever was typed into the search field once the user hits the search button, and I have to decide between the following choices:
Send a 'get-all-users' request to the server, and then do the filtering in the SPA after getting ALL the users.
OR
Implement the search function on the server side, so it filters everything and serves only the result of the request.
Thanks in advance!
Up until a couple of years ago, the general rule was to place as much logic as possible on the server side. Now, with better JS technology and browser engines, it is increasingly possible (and in some cases desirable) to place logic on the client side.
Pros for server-side logic:
Security. Anyone can read your Javascript (even if it is minified)
Performance. Browsers will go slower when you are dealing with large datasets.
Browsers. You will have to deal with a variety of browsers. And although newer JS and CSS libraries have eliminated many of 'the old problems', a surprisingly large number of people are still using old versions of IE.
Scalability. You can increase your servers' processing capabilities (especially if they are virtual), but you cannot make your users' browsers go faster.
Pros for client-side logic:
Better user experience, as you get better responsiveness (no round trip over the network for every interaction) and better/richer user interfaces (e.g. Angular.js with Bootstrap).
Scalability, as the users' browsers are doing the work, which saves your server processing.
Lower network cost, as you do not have to send as much data.
These are just off the top of my head. In your case, I think I would place as much logic as possible on the server side. You are less likely to freeze browsers with heavy data processing, and you can increase your servers' capabilities if necessary.
As a general rule, you shouldn't transfer data over the network unless you have to. In your case, getting all users when your customer only wanted a small subset would be a terrible misuse of their bandwidth.
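A minimal sketch of the server-side option, assuming an Express endpoint; findUsersByName is a hypothetical data-access helper (in SQL terms it would be a parameterized "WHERE ... LIKE" query). The SPA sends the search term in the query string and receives only the matches.

    // GET /api/users?q=chris  ->  only the users whose name matches "chris"
    app.get('/api/users', async (req, res) => {
      const q = (req.query.q || '').toString().trim().toLowerCase();

      if (q === '') {
        return res.json([]);                     // nothing typed: return nothing, not everyone
      }

      const users = await findUsersByName(q);    // hypothetical: filtering happens in the database
      res.json(users);                           // only the subset the SPA actually asked for
    });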

Is it considered bad form to execute javascript returned by an AJAX call?

I'm modifying an existing web application that features the ability to administrate users who are able to log into the system. When modifying a user's details via a dialog, the updated data is sent to the server via AJAX. A few lines of JavaScript, intended to be executed, are then returned to update the current page to reflect these changes. This strikes me as poor form - isn't executing remotely acquired JS dangerous?
If I were to modify this, I would have the AJAX call that sends the updated information then call another function that gets the latest data from the server via AJAX (or just refresh the page, if I am feeling lazy). Is there any advantage (mainly security, but from an architectural perspective as well) to making this change, or am I being anal?
Assuming we're talking about eval() used on non-JSON responses:
People will tell you all sorts of things, most of which has some basis in reality. I'd say one reason is really easy to understand: it will make the code a nightmare to maintain, and it will be very hard to trace bugs.
There are security concerns; a lot of people like to jump on the "JavaScript is the client's problem" bandwagon. I say if it comes from your site, it's your problem too.
In the end, there is no good reason I can think of to eval JavaScript from the server. Pass data from the server, and write the JavaScript on the client side to react to that data.
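A minimal sketch of that data-only approach for the scenario in the question (the endpoint and field names are made up): the server returns JSON describing the outcome, and the page-update logic lives in client code that was shipped with the page, not in a string to be eval()'d.

    // Send the edited user details and react to the JSON result.
    fetch('/users/42', {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ displayName: 'New name' })
    })
      .then(resp => resp.json())
      .then(result => {
        if (result.ok) {
          // Update the page from data; no server-supplied code is executed.
          document.getElementById('display-name').textContent = result.displayName;
        } else {
          alert(result.error || 'Update failed');
        }
      });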
All JS executed by the browser is remotely acquired.
The server that returned the JS/JSON via AJAX is the same server that returned the HTML that did the AJAX call in the first place.
If it's possible to do something bad, it can be done whether you eval the result of the AJAX call or not.
Personally, I don't see the issue. Sure, people say things such as "it allows code execution client-side"; however, if a potential attacker is able to affect that, then that's your problem - not the ability to modify the code.
Seriously, I think you have far more pressing concerns than that. I'd personally spend that 10 minutes or so reviewing your code and looking for flaws instead of working on an alternative to eval(). I think it'll improve your security a fair bit more.
Mike Samuel mentions MITM. I don't see why. If you're susceptible to a MITM attack then chances are that code can be injected straight into the initial HTML page (again, sure, slightly higher risk but is it really worth worrying about? Your choice.)
If a trusted developer wrote all of it and you protect it the way you do the rest of your HTML page, then no.
But even if it is JavaScript written by trusted developers, if it is served over HTTP, then an attacker can modify it in-flight because HTTP over free wireless is often susceptible to MITM.
This can be used to effectively install a keylogger in the current browser window to steal user passwords, redirect them to phishing pages, etc.
The attack might work like this:
Web page does a GET to http://example.com/foo.js.
Attacker modifies foo.js mid-flight to add JavaScript that does window.addEventListener("keypress", /* a keylogger that sends all keys to evil.com cross domain by image loading tricks */)
Browser loads modified JavaScript.
User enters a password in an <input type=password>.
Evil wins.
Since HTTPS (assuming no mixed content) is not susceptible to MITM, it is not vulnerable to this attack.
You probably don't want to just call another function after you send the data update, because you could then display information that isn't accurate if the update fails. At least with the current model, your service can tailor the JavaScript based on whether or not the update was successful. You may want to consider having the service just return true/false and have the callback function handle updating the UI.
Short answer: Yes
Long answer: You should just send data for both security reasons and to keep your implementations separate.
Improperly sanitized user-submitted content or advertisements could inject code and get it run. Granted, this would have to be a targeted attack, but hey, maybe you're working for a startup that's going to be the next Google, or a forum system that's going to be the next vBulletin. Even if you have 10 users, security holes are still holes and are still bad for you and your users. Also, bad security advice left lying around SO will lead others to make bad decisions.
Do you really want to have to make sure the code you generate on the fly and send to the client is correct in all possible cases? What if someone else works on just one half of the system? Are they going to know every variable name to avoid stomping on? Just sending data is the best way to keep a 'small' change in your client/server communication from breaking everything in ways that aren't obvious and will take forever to debug.

JavaScript Security Concerns?

I'm working on an eCommerce application that's using quite a bit of JavaScript and jQuery. Everything is checked server-side before anything is processed, but there has been a lot of news lately regarding web-based break-ins via JavaScript. I was just wondering what type of things I can do to reduce this threat and if there were any good websites that had information on this.
Thanks in advance!
Here is a good link to read about XSS:
https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet
Basically, what a "hacker" is trying to do when they use XSS is get the user to think your site is asking them for something secure, when really the "hacker" is asking (by injecting script into your site) and is sending that data to somewhere unsecure. Comes in many flavors.
The usual prevention measures are to sanitize you data (so a "hacker" can't inject scripts via data entry). Basically, anything that is dynamically created or comes from users (even your own content people) should be encoded so that script, etc cannot be executed.
Many people seem confused about the goals of XSS. If you think that your server and the data on it are the only thing to protect, you are wrong. XSS is often directed at the user, rather than the server, the "hacker" is trying to steal from the user, not the server (or it's owners). Stealing from the user may in turn result in stealing from the server (got the users credentials, now go buy stuff impersonating the user).
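To make the encoding advice concrete, here is a minimal sketch of escaping user-supplied text before it is written into markup (the sample data and element id are made up; most server-side templating languages provide an equivalent escape function out of the box).

    // Encode the characters that would otherwise let user data become markup.
    function escapeHtml(text) {
      return String(text)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;');
    }

    // Pretend this came from users via an AJAX call.
    const comments = [{ text: '<script>alert(document.cookie)</script>' }];

    // The injected script is rendered as harmless text, not executed.
    const html = comments.map(c => '<li>' + escapeHtml(c.text) + '</li>').join('');
    document.getElementById('comments').innerHTML = html;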
Check out Google Gruyere. It's a tutorial on a fake web site with a lot of security holes that you can exploit in order to better understand common security problems in web applications. It goes over XSS and other problems that can occur in JavaScript, as well as some server-side problems.

SproutCore Security and Authentication concerns

I've been trying to learn a little about SproutCore, following the "Todos" tutorial, and I have a couple of questions that haven't been able to find online.
SproutCore is supposed to move all of the business logic to the client. How is that not insecure? A malicious user could easily tamper with the code (since it's all on the client) and change the way the app behaves. How am I wrong here?
SproutCore uses "DataStores", and some of them can be remote. How can I avoid that a malicious user does not interact with the backend on his own? Using some sort of API key wouldn't work since the code is on the client side. Is there some sort of convention here? Any ideas? This really bugs me.
Thanks in advance!
PS: Does anyone think Cappuccino is a better alternative? I decided to go with SproutCore because the documentation on Cappuccino seemed pretty bad, although SproutCore's isn't much better.
Ian, your concerns are valid. The thing is, they apply to all client-side code, no matter what framework. So:
Web applications are complicated things. Moving processing to the client is a good thing, because it speeds up the responsiveness of the application. However, it is imperative that the server validate all data inputs, just like in any other web application.
Additionally, all web applications should use the well-known authentication/authorization paradigms that are prevalent in system security. Authentication means you must verify that the user is who they say they are and that they may use the system, while authorization means the server must verify that the user is allowed to do what they are attempting, e.g. can they create a new data entry, or edit an existing one. It is good design not to present users with UI options they are not allowed to use, but you should not rely on that.
All web applications must do those things.
With respect to the 'interacting with the back end' concern: again, all web applications have this concern. You can open up Firebug or the WebKit inspector, look at all the XHR requests that RIAs make in their operation, and mimic them to try to do something on that system. Again, this concern is dealt with by the authentication/authorization checks that you must implement. Anybody can use any web client to send a request to the server. It is up to the developer to validate that request.
The DataSources in SproutCore are just an abstraction around how SC apps interact with the server. At the end of the day, however, all SC is doing is making XHR requests to the server, like any other RIA.
