SPA - Separating the client from the server - javascript

I want to build a single page application using Ember.js in the client, and Sails.js for a REST API.
I would like to completely separate the client from the server, and was thinking of hosting all the client assets (CSS, images and index.html) on a CDN or S3, while the server will probably be hosted on Heroku.
How do I avoid cross-domain problems? Using a CNAME maybe?
Is this common practice?
What tools are available for such a deployment process?
Thanks!

You can use Cross-Origin Resource Sharing (CORS) for this. CORS is a way for web servers to let web browsers know which third-party domains (origins) are allowed to access their content. So basically, you want to ensure that whatever CDN you use supports CORS headers, so that you can tell the CDN which domain your pages (the ones loading the resources) will be served from.
Here's an article on turning on CORS for Amazon S3. It's based around using a CDN to serve web fonts, but the concept applies equally to all protected files (i.e. everything besides the CSS, images and JavaScript files loaded in the HTML).
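As a concrete sketch, an S3 CORS configuration is a short XML document attached to the bucket; the origin below is a placeholder for wherever your app's pages are actually served from:

```xml
<CORSConfiguration>
  <CORSRule>
    <!-- Placeholder: the origin of the pages that will load the assets -->
    <AllowedOrigin>https://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```

With this in place, S3 responds to requests from that origin with the appropriate `Access-Control-Allow-Origin` header, and the browser stops complaining.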
You could also use another Sails server as your CDN, as it supports CORS out of the box (docs here). It would probably take some tweaking to do all the fancy caching that high-end CDNs do, but it can be done!

Related

Flaws in Django-React apps

I am building a Web app, powered by react in frontend and Django in the backend.
It uses Django REST framework for providing raw data to react
But when it came to data transfer, I got scared. I was told to use CORS headers and a whitelist to specify the allowed URLs which can access the data.
Also, when I need to add an entry to the database, I need to make further allowances.
So is it safe to go like this?
I think one can easily steal data midway.
You would need to set up CORS only if your React and Django apps are on different domains/ports or use different protocols.
If that's the case, whitelisting the origin of the React app in Django is the proper (only) way to go. It's not specific to Django - that's a browser feature.
Actually, I would be worried if a backend framework accepted cross-origin requests without explicit setup.
I guess your current issue comes from the Node and Django dev servers running on different ports. If that's true and both apps will be accessible on the same domain when deployed, you can just allow all cross-origin requests when DEBUG=True.
Read more on the topic on MDN
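As a minimal sketch of that setup, assuming the third-party django-cors-headers package (it is not built into Django itself; the app and middleware names below are that package's, but the origin URL is a placeholder):

```python
# settings.py sketch — assumes the django-cors-headers package is installed.
DEBUG = True

INSTALLED_APPS = [
    "django.contrib.staticfiles",
    "corsheaders",  # provided by django-cors-headers
]

MIDDLEWARE = [
    # CorsMiddleware must sit above CommonMiddleware to add headers correctly.
    "corsheaders.middleware.CorsMiddleware",
    "django.middleware.common.CommonMiddleware",
]

if DEBUG:
    # Dev servers run on different ports (e.g. React on 3000, Django on 8000),
    # so allow any origin locally...
    CORS_ALLOW_ALL_ORIGINS = True
else:
    # ...but whitelist only the deployed front-end origin in production.
    CORS_ALLOWED_ORIGINS = ["https://app.example.com"]  # placeholder domain
```

Note that CORS is enforced by the browser, not the server: the whitelist controls which origins a browser will let read the responses, which is exactly the "WHITELIST" you were told about.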

Serving static resources with Flask - running afoul of Same-origin policy

I'm having trouble serving static files (image assets, etc.) for a small game I'm working on in Phaser. I'm using flask-socketio on the server (and socket.io on the client side) for networking, which is why I'm trying to get this working under Flask. As far as I can tell, I must use Flask to serve the static resources, because otherwise I run into the problem of the Same-origin policy.
Indeed, I tried serving the assets with nginx on a different port, but I got this message in my browser console (Safari in this case): SecurityError: DOM Exception 18: An attempt was made to break through the security policy of the user agent.
I looked in the Flask documentation on how to serve static files and it said to use "url_for". However, that only works inside HTML template files. I'm trying to load the static resources inside my JavaScript code using the Phaser engine like so (just for example):
this.load.image('player', 'assets/player.png'); //this.load.image('player', url);
where I obviously cannot use 'url_for', since this is JavaScript code and not a template file.
So my question is, how do I serve my static resources so that I don't violate the same-origin policy?
1. Is there another secure way to serve static resources in Flask besides using 'url_for'?
2. Or should I be using nginx as a reverse proxy? In the flask-socketio documentation I found this nginx configuration snippet: Flask-SocketIO documentation (please scroll down to where it says "Using nginx as a WebSocket Reverse Proxy").
Regarding #2, I don't quite understand how that should work. If I should be doing #2, can someone kindly explain how I should configure nginx if Flask is listening on port 5000? Where in that snippet do I configure the path to my static assets on the filesystem? And in my JavaScript code, what URL path do I use to reference the assets?
Normally, one would set up Nginx (or some other general web server) on the "main" port, and then configure it to forward certain requests to your application server (in this case, Flask) on a secondary port which is invisible/unknown to the client browser. Flask would provide the result to Nginx which would then forward the result to the user.
This is called a reverse proxy, and Nginx is widely considered a good choice for this setup. In this way, all files are served to the client by Nginx, so the client doesn't notice that some of them actually come from your application server.
This is good from an architectural standpoint, because it isolates your webapp (somewhat) from the client, and allows it to conserve resources, e.g. by not serving static files and by having Nginx cache some of the webapp's results when it makes sense to do so.
If you're doing development, this may seem like a lot of overhead; but for production it makes a lot more sense. However, it is a good idea to have your dev environment mimic your prod environment as closely as possible.
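As a rough sketch of that setup, assuming Flask listens on port 5000 and the assets live under /var/www/mygame/static (the domain and paths below are placeholders): nginx serves the static files directly from disk and proxies everything else, including the Socket.IO WebSocket upgrade, to Flask.

```nginx
server {
    listen 80;
    server_name example.com;               # placeholder domain

    # Serve static assets straight from disk. The page requests them as
    # e.g. /static/assets/player.png — same origin as the page, so no
    # same-origin-policy problem.
    location /static/ {
        alias /var/www/mygame/static/;     # placeholder filesystem path
    }

    # Everything else goes to the Flask app on port 5000.
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
    }

    # WebSocket endpoint used by flask-socketio.
    location /socket.io {
        proxy_pass http://127.0.0.1:5000/socket.io;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }
}
```

In the Phaser code you would then reference assets with a path relative to the site root, e.g. this.load.image('player', 'static/assets/player.png'), and the browser never sees a second origin.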

What's the best practice for developing AJAX apps locally and deploying them on the fly?

I'm new to AJAX development. Due to the same-origin policy, the most inconvenient thing for me so far is modifying the host information strings (such as absolute URLs) in JavaScript files every time I deploy the local files to the remote server. I thought about writing a shell script to do this, but it seems awkward and inflexible. What's the best practice for doing this?
EDIT:
What if I wanna debug the remote AJAX app instead?
Add an Access-Control-Allow-Origin: * header to your response. How you do this depends on what backend or server-side stack you are using. Here are some references:
HTTP Access Control
Enable CORS in Apache
Website about "enable cross-origin resource sharing"

Why would we not use JavaScript library on a CDN if the webpage is using SSL (https)?

For JavaScript libraries such as jQuery or YUI3, either Google or Yahoo are hosting the scripts on their CDN, and a YUI 3 Cookbook paragraph says:
perhaps your pages use SSL, in which case loading remote resources is
a bad idea, as it exposes your users’ secure information to the remote
site
I can only see that the CDN site must be well trusted, or else malicious JavaScript can be run on www.mycompany.com's webpages. But assuming the CDN sites (Google and Yahoo) are well trusted, why would an SSL webpage not want to include those JavaScript libraries from a CDN -- how can it "expose your users' secure information to the remote site" as described in the book?
Loading external Javascript libraries via SSL onto an encrypted webpage can be seen as betraying a user's trust, as the information the user provides to the website is no longer, theoretically, between just them and the secure website. Furthermore, in the event of an external library becoming compromised, the information passed to the website itself could be compromised as well.
Ryan Grove, a YUI3 developer, has elaborated upon this in detail here.
In short,
[...] you’re letting FooCo execute any JavaScript it wants on your website. You’re loading that JavaScript securely over SSL, so the browser isn’t displaying any scary warnings, but now your users aren’t just communicating with buygadgets.example.com. Now they’re also communicating with cdn.foolib.com, and since cdn.foolib.com can run JavaScript on your pages, they can also see any information the user reads or enters on those pages.
Of course, whether or not you decide to pull external executable code over SSL is relative to how important security is to your particular use case, and there are varying opinions on this subject.
It depends if the CDN has a secure version of the resource you're requesting. Google seems to be better at this than Yahoo! from what I've seen.
You can use protocol-less references to CDN resources like below:
Works from http or https:
<script type="text/javascript"
src="//ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
Works from http only:
<link rel="stylesheet"
type="text/css"
href="//yui.yahooapis.com/3.8.0/build/cssreset/cssreset-min.css" />
You can also do conditional loading of scripts from a CDN and fall back to local versions:
<script src="//ajax.googleapis.com/ajax/libs/jqueryui/1.8.14/jquery-ui.min.js">
</script>
<script>
// Fall back to a local copy if the CDN script failed to load.
// Guard against window.jQuery itself being absent, otherwise
// accessing .ui on undefined would throw.
(!window.jQuery || !window.jQuery.ui) && document.write(
unescape('%3Cscript src="/scripts/jquery-ui-1.8.14.min.js"%3E%3C/script%3E'))
</script>
It means the content on your website comes both from a secured server and from an insecure server. Furthermore, it's possible for data to be sent to both a secured and an unsecured server (the CDN site). If your aim is to secure your site with SSL, it stands to reason to serve all your resources over SSL as well.
Having said all this, most CDNs can serve these resources over an SSL connection (including Google's).

Three.JS, Amazon S3, and access control origin errors for hosted JS files

Background
We use the Javascript library Three.JS for visualizing models stored up on Amazon S3.
I use the JSONLoader for all of my models. Other formats lack the toolchain support our team needs, and common formats like COLLADA or OBJ seem to be second-class citizens as far as the included loader libraries go (they are found, for instance, in the source tree under "examples"... the JSONLoader is in the core loaders folder).
I have large model files, and so store them and their associated assets up on Amazon S3 storage, where bandwidth and space are relatively cheap. The intent is that the web app using Three.JS loads models from our storage on Amazon, and everything is okay.
Problem
Unfortunately, the models are Javascript files ("modelBlah.js", for example) and when they are loaded by the JSONLoader any sane browser immediately pouts about the fact that we're violating the same-origin policy for scripting--e.g., we're loading and attempting to evaluate scripts from a different domain than the calling script (which is the main harness for the app).
So, it would seem that we've flown in the face of many years of web security best practices.
Solutions looked at so far
Host the models ourselves? We're using Heroku for now, and ideally we'd like to use a service specifically billed as "Big Buckets of Bits and Bandwidth" instead of doing it ourselves.
Use DNAME records to spoof where the resources come from? Unfortunately, this doesn't seem sufficient to fool browsers, as the subdomain used for the media hosting would still enrage the browser security.
Use CORS, specifically Access-Control-Allow-Origin headers? Brief skimming of the Amazon S3 docs suggests this isn't supported, though I am hopefully mistaken. Even so, would that be sufficient?
Any ideas?
You can now finally use CORS on Amazon: http://docs.amazonwebservices.com/AmazonS3/latest/dev/cors.html
S3 does not yet allow you to set CORS. To work around this exact problem, I ended up running an EC2 instance to act as a proxy server for downloading the models. The proxy (currently just running Node) grabs the file from S3, sets the CORS header and passes it down to the application. There are a lot of options for the Node setup, including knox or bufferjs.
You definitely need CORS and I think S3 allows it. Otherwise, this weekend I had to setup a bucket on Google's Cloud Storage with CORS enabled and it was fairly easy (with gsutil).
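For the Google Cloud Storage route mentioned above, the setup amounts to writing a small JSON policy and applying it with gsutil; the origin and bucket name below are placeholders:

```json
[
  {
    "origin": ["https://example.com"],
    "method": ["GET"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600
  }
]
```

Saved as cors.json, this is applied with `gsutil cors set cors.json gs://my-model-bucket`, after which the bucket answers requests from that origin with the CORS headers the JSONLoader needs.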
