Every now and then I hear the opinion that having the same URL for non-Ajax and Ajax actions is bad.
In my app, I have forms that are submitted with Ajax for a better user experience. For people who disable JavaScript, my forms work too. The same goes for some of my links. I used to have the same URL for both and simply serve the appropriate content and Content-Type depending on whether the request was an Ajax call or not. This caused a problem with Google Chrome: Laravel 5 and weird bug: curly braces on back
My question now is: is it REALLY a bad idea to have the same URL for Ajax and non-Ajax actions? It's painful to create two separate URLs for each of these actions. Or is there maybe a good workaround for managing caching? In theory, one header can change the behavior entirely, so I don't see why I should create an extra layer in my app and force the same thing to have separate URLs.
Please share your opinions.
HTTP is flexible and allows you to design your resources the way you want. You design the API, and design comes down to personal preference. But in this case, having one resource that responds to different types of requests is absolutely fine. This is why HTTP headers like Content-Type exist.
And for caching, you can use the HTTP ETag header. It's a caching header that forces the client to validate cached resources before using them.
The ETag or entity tag is part of HTTP, the protocol for the World Wide Web. It is one of several mechanisms that HTTP provides for web cache validation, which allows a client to make conditional requests. This allows caches to be more efficient, and saves bandwidth, as a web server does not need to send a full response if the content has not changed.
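For example, the server can derive the ETag from the response body and answer 304 Not Modified when the client already holds that version. A minimal sketch, assuming a Node.js/Express back end (the route and payloads are made up, not the asker's code):

    // ETag validation sketch: the Ajax and non-Ajax variants of the same
    // URL hash to different tags, so one can never validate as the other.
    const express = require('express');
    const crypto = require('crypto');
    const app = express();

    app.get('/profile', (req, res) => {
      const isAjax = req.get('X-Requested-With') === 'XMLHttpRequest';
      const body = isAjax ? JSON.stringify({ name: 'joe' }) : '<h1>joe</h1>';
      const etag = '"' + crypto.createHash('md5').update(body).digest('hex') + '"';

      if (req.get('If-None-Match') === etag) {
        return res.status(304).end(); // client's cached copy is still valid
      }
      res.set('ETag', etag);
      res.type(isAjax ? 'application/json' : 'text/html');
      res.send(body);
    });

    app.listen(3000);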
I'm studying web security, specifically form security, and a question came up while studying. Can I send a POST without a form? I'm not asking about transferring data to the server without a POST; rather, when I send a POST, does it have to go through a 'form'?
You can use XMLHttpRequest to send requests of any type (POST, GET, PUT, DELETE, ...).
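For example, a minimal sketch of a POST sent entirely from script, with no <form> element involved (the endpoint and payload are hypothetical):

    // Send a POST from script, without any <form> element.
    // The endpoint (/api/comments) and payload are hypothetical.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/api/comments');
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = function () {
      console.log(xhr.status, xhr.responseText);
    };
    xhr.send(JSON.stringify({ text: 'Hello' }));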
POSTing in HTML without a FORM didn't work in the browsers I tried it in. Since the FORM element is what specifies the URL to access, there isn't really a complete basis to perform such an operation.
However, using curl or a custom HTTP client, it is definitely possible to construct and send handcrafted requests (POST, GET, PUT, DELETE, others) to an HTTP server. These are independent of any HTML pages the server might offer -- the client need never perform a GET -- and can be constructed completely arbitrarily.
For example, a request may specify parameters (e.g. "order.total", "customer.id", "developmentMode=true") which the web application never offered in the HTML or expected to receive. This can be a potential security hole if, for example, automatic binding frameworks are used; bindable fields should be carefully controlled when using such frameworks.
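As a sketch of such a custom client (the host, path, and field values are hypothetical, echoing the parameters above), a hand-crafted request in Node.js might look like this:

    // Hand-craft a POST with parameters the server's HTML never offered.
    // The host, path, and field names are hypothetical.
    const querystring = require('querystring');
    const http = require('http');

    const body = querystring.stringify({
      'order.total': '0.01',
      'developmentMode': 'true',
    });

    const req = http.request({
      host: 'example.com',
      path: '/orders',
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    }, (res) => console.log(res.statusCode));

    req.end(body);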
Applications must be robust against such requests as a basic principle of web security.
Google Chrome has an extension called "Postman" that allows sending HTTP POST requests: https://chrome.google.com/webstore/detail/postman/fhbjgbiflinjbdggehcddcbncdddomop?hl=en
I just learned about the details of CSRF prevention. In our application, all "writing" requests are done using XHR. Not a single form is actually submitted anywhere on the page; everything is done via XHR.
For this scenario, Wikipedia suggests the cookie-to-header token. There, some random value is stored in a cookie during login (or at some other point in time). When making an XHR request, this value is copied into a custom HTTP header (e.g. "X-Csrf-Token"), which is then checked by the server.
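For reference, a rough sketch of that pattern (the cookie and header names are just examples):

    // Read the token the server stored in a cookie and copy it into a
    // custom header on each XHR. Names ("csrftoken", "X-Csrf-Token") are examples.
    function getCookie(name) {
      var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
      return match ? decodeURIComponent(match[1]) : null;
    }

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/api/update');
    xhr.setRequestHeader('X-Csrf-Token', getCookie('csrftoken'));
    xhr.send();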
Now I am wondering whether the random value is actually necessary at all in this scenario. I think it should be enough to just set a fixed custom header like "X-Anti-Csrf: true". That seems a lot more robust than dragging a random value around. But does this open any security issues?
It depends on how many risky assumptions you want to make:
you have properly configured your CORS headers,
and the user's browser respects them,
so there is no way a malicious site can send an XHR to your domain,
which is the only way to send custom headers with a request
If you believe in all that, sure, a fixed custom header will work.
If you remove any of the assumptions, then your method fails.
If you make the header value impossible to guess, then you don't need to make those assumptions. You're still relying on the assumption that the value of that header can't be intercepted and duplicated by a third party (but there's TLS for that).
I have an API (1) on which I have built a web application with its own AJAX API (2). The reason for this is to avoid exposing the source API.
The web application uses AJAX (through jQuery) to get new data from its AJAX API; the data retrieved is currently XML.
Lately I have secured the main API (1) with an authorization algorithm. However, I would like to secure the web application as well so that it cannot be scraped. Currently it is being parsed to extract the hash used to call the AJAX API, which returns XML.
My question: how can I improve security and reduce the chances of others being able to parse my web application?
The only ideas I have are to stop sending XML and send HTML instead, or to use Flash (but this is not an option).
I understand that since the site is public, and no login can be implemented, it can be hard to refuse access to bots (non legitimate users). Also, Flash is not an option... it never is ;)
Edit: the web application I am referring to: https://bikemap.appified.net/
This is somewhat of an odd request; you wish to lock down a system that your own web application depends on to work. This is almost always a recipe for disaster.
Web applications should always expect to be bypassed, so the real security must come from the server: tarpitting, session tokens, throttling, etc.
If that's already in place, I don't see any reason why you should jump through hoops in your own web application to give robots a tougher time ... unless you really want to separate humans from robots ;-)
One way to reduce the refactoring pain on your side is to wrap the $.ajax function in a piece of code that signs the outgoing requests (or somehow adds fields to them), then minify/obfuscate that code and hope it won't be decoded too quickly.
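A hedged sketch of that idea (sign() is a hypothetical routine you would bundle in obfuscated form; remember that anything shipped to the browser can ultimately be reverse-engineered):

    // Wrap jQuery's $.ajax so every outgoing request carries extra fields.
    // Assumes options.data is an object; sign() is hypothetical.
    (function ($) {
      var originalAjax = $.ajax;
      $.ajax = function (options) {
        options = options || {};
        options.data = options.data || {};
        options.data.ts = Date.now();                          // timestamp
        options.data.sig = sign(options.url, options.data.ts); // request signature
        return originalAjax.call($, options);
      };
    })(jQuery);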
I'm building an API and want Ajax to be able to interact with it. The API needs to allow inserting, updating, and deletion of data. Is it a good idea to allow any of these operations via GET?
For example: http://api.domain.com/insert_person/?name=joe
My original plan was to use GET for my "getting" methods (basically just simple DB queries) and POST for add, edit, and delete. The problem is the JS same-origin policy, which would make it hard for Ajax to interact with my API. There is a jQuery workaround for GET (via JSONP).
Suggestions?
In a word: NO
GET should always be used only for retrieving information and should never have side effects, ever.
This is a best practice across just about every web API out there, and it has to do with both the intent of the verb and how existing software expects things to behave.
If you're trying to get around the same-origin policy, GET via JSONP is the only possible front-end solution. If you've got control of the back end, you can set up a proxy service on the same domain as the page that relays to and from the API service.
If you're going to go down the JSONP GET path, make sure you read up on XSS and CSRF.
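For completeness, a read-only JSONP call with jQuery looks roughly like this (the endpoint and parameter are hypothetical; note that JSONP is GET-only, so it must never trigger writes):

    // A read-only JSONP request with jQuery. JSONP is GET-only by nature,
    // so it is fit for retrieval, never for inserts, updates, or deletes.
    $.ajax({
      url: 'http://api.domain.com/get_person',
      dataType: 'jsonp',
      data: { id: 42 },
      success: function (person) {
        console.log(person.name);
      }
    });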
Add another layer to handle your code and interact with your database (on a different domain).
You would still use POST, and you can make the request to your DB on the server side, using whatever language you are working with; in PHP, for example, you would use cURL (to make the request to the different domain).
If you allow interaction with your DB via GET, then anyone can simply type a URL with whatever commands they want, so yes, avoid it.
As others have pointed out, GET should not be used for actions with side effects like inserting, updating and deleting.
To allow cross-origin use of your API, look into Cross-Origin Resource Sharing (CORS), although at the time of writing it is only partially supported by browsers.
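As a sketch, opting in on the server side looks roughly like this, assuming a Node.js server (the allowed origin is a placeholder; only echo back origins you trust):

    // Minimal Node.js handler that lets one trusted origin read responses.
    // 'https://app.example.com' is a placeholder; avoid '*' when cookies matter.
    const http = require('http');

    http.createServer((req, res) => {
      res.setHeader('Access-Control-Allow-Origin', 'https://app.example.com');
      res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
      res.end(JSON.stringify({ ok: true }));
    }).listen(8080);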
I'm working on a JavaScript library, and I would like anybody using it to be able to make requests to my server.
Because of this, I have added the Access-Control-Allow-Origin and Access-Control-Allow-Methods headers to my server's responses.
Things work fine, but my question is: is this secure for my server? Are there any other implications I should take into account?
Thanks a lot.
It's as secure as the code on your server is. If you allow people to send an AJAX request that can drop a table, then no, it's not secure. But if you follow best practices for website/scripting security, it should be as safe as handling any other request your server normally would.
Can anonymous users make changes to your server (e.g. incrementing a vote counter, posting a comment, deleting a post)? If so, does it matter if a website you don't control makes some or all of its users make use of this feature of your site? Do the access-control headers allow remote XHR to make those requests? If so, you have a problem.
Can known users make changes to your server? If so, does it matter if a website you don't control makes some or all of its users who are also your users make use of this feature of your site? Do the access-control headers allow remote XHR to make those requests? Do the access-control headers allow authentication methods (such as cookies) through? If so, you have a problem.
In short:
Can a user do something potentially undesirable on your site?
Do your access-control headers prevent third party websites from making users do those undesirable things?