How to serve a server-rendered page alongside CloudFront and S3 static files? - javascript

I serve my website using CloudFront and S3 (static files): https://example.com
When I open a particular URL I want the request to go to my server, but stay on the same domain.
For example:
https://example.com/index.html -> served from CloudFront and S3
https://example.com/app.js -> served from CloudFront and S3
https://example.com/foo -> goes directly to my server (https://api.example.com/api/foo); foo returns HTML content, and the browser stays on and displays the https://example.com/foo URL.
Is something like this possible with S3 and CloudFront? If so, what configuration do I need to achieve it?

You can set up a cache behaviour for the URL path that should go to the dynamic website, and a default cache behaviour (a catch-all for everything else) that should go to S3.
When you create a new distribution, you specify settings for the default cache behavior, which automatically forwards all requests to the origin (for you, S3) that you specify when you create the distribution. After you create a distribution, you can create additional cache behaviors that define how CloudFront responds when it receives a request for objects that match a path pattern (in your case, /foo).
If you are doing it through CloudFormation, see AWS::CloudFront::Distribution CacheBehavior - AWS CloudFormation.
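A rough sketch of the relevant template pieces (bucket and API domain names are placeholders; the /foo* path pattern and the /api origin path are assumptions based on the question):

Resources:
  SiteDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Origins:
          - Id: s3-static
            DomainName: example-bucket.s3.amazonaws.com
            S3OriginConfig: {}
          - Id: api-origin
            DomainName: api.example.com
            OriginPath: /api                # so /foo is fetched as /api/foo on the server
            CustomOriginConfig:
              OriginProtocolPolicy: https-only
        DefaultCacheBehavior:               # catch-all: everything else goes to S3
          TargetOriginId: s3-static
          ViewerProtocolPolicy: redirect-to-https
          ForwardedValues:
            QueryString: false
        CacheBehaviors:
          - PathPattern: /foo*              # dynamic pages go to the API server
            TargetOriginId: api-origin
            ViewerProtocolPolicy: redirect-to-https
            ForwardedValues:
              QueryString: true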
The same cache behavior can also be added from the CloudFront console (the original answer walked through this with console screenshots).

Building a flask + heroku app, s3 hosting for static assets (js and images currently not loading)

I'm building a Flask app called Neuroethics_Behavioral_Task.
I also have an S3 bucket called neuroethics-task. In the root directory, I uploaded a file called experiment.js and an image, test.png.
I followed the instructions in these two parts of Heroku's documentation about s3:
https://devcenter.heroku.com/articles/s3
https://devcenter.heroku.com/articles/s3-upload-python
The first link says the following about how to access assets you've uploaded to S3:
After assets are uploaded, you can refer to their public URLs (such as http://s3.amazonaws.com/bucketname/filename) in your application’s code. These files will now be served directly from S3, freeing up your application to serve only dynamic requests.
So I have this line in the header of one of the HTML templates:
<script href="http://s3.amazonaws.com/neuroethics-task/experiment.js"></script>
I also tried to include the image by copying the path directly from S3 (which is different from the Heroku docs). Here's that line:
<img src='s3://neuroethics-task/test.png'
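For reference, the public-URL pattern the Heroku article describes would look roughly like this in a template (bucket and file names taken from the question; note that a <script> tag loads its file through src rather than href, and browsers cannot fetch s3:// URLs):

<!-- Sketch only: assumes the objects in the bucket are publicly readable -->
<script src="https://s3.amazonaws.com/neuroethics-task/experiment.js"></script>
<img src="https://s3.amazonaws.com/neuroethics-task/test.png">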
The issue is that nothing happens when I access the web page that's supposed to use the JavaScript from experiment.js, currently while I'm running the Flask application LOCALLY.
I suspect that maybe things will work if I push to Heroku... But I need to get a local, debugged solution up and running first and foremost. So I need to figure out how to correctly reference these files.
I had gotten error messages before when I used src= and when I had variants of the URL's prefix. But now, nothing happens when I get to the web page that's supposed to load experiment.js. experiment.js uses a JavaScript framework called jsPsych that basically works like a static application -- no redirects occur from jsPsych. You have to create an HTML template for Flask's sake, but all you have to do for that template is include the reference to the experiment.js file.
Since experiment.js just isn't loading yet, and since there's no other HTML on that template because all of it is within experiment.js, nothing happens.
I have my environment variables set:
$ export AWS_ACCESS_KEY_ID=jhdfshfjskdhfj
$ export AWS_SECRET_ACCESS_KEY=jlsjfklksfjlfh
I'm not sure about what permission settings I need on S3. For my bucket, I have:
Block public access to buckets and objects granted through new access control lists (ACLs) -- Off
Block public access to buckets and objects granted through any access control lists (ACLs)-- Off
Block public access to buckets and objects granted through new public bucket or access point policies -- On
Block public and cross-account access to buckets and objects through any public bucket or access point policies -- On
So... what's going wrong here? I just want my javascript to load at least.

web security: What happens when Content-Disposition is not supplied?

I am allowing users to download files from my application. For that I am explicitly setting "Content-Disposition" to "inline" or "attachment" based on the type of file. This is kind of manual right now: for PDF files I set it to "inline", but for HTML files I set it to "attachment".
Is there a way to automatically decide the value of "Content-Disposition" in Express based on the file type?
If I do not send a "Content-Disposition" header, it seems to me that the response is currently treated as if it had "Content-Disposition: inline". Is this observation correct, or is there something more to it?
If by default the browser tries to execute/preview the files (based on point 2), what does it mean for security when you allow downloading HTML files which can execute JavaScript?
Is there a way to automatically decide the value of "Content-Disposition" in Express based on the file type?
You could write middleware that inspects the response and modifies it.
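A minimal sketch of that idea (the directory name and the extension list are assumptions, not anything Express prescribes): serve a folder statically and pick the header from the file extension.

const express = require('express');
const path = require('path');

const app = express();

// Extensions that are safe to preview in the browser; everything else is downloaded.
const INLINE = new Set(['.pdf', '.png', '.jpg', '.gif', '.txt']);

// Serve ./files statically and set Content-Disposition per file before it is sent.
app.use('/files', express.static(path.join(__dirname, 'files'), {
  setHeaders: (res, filePath) => {
    const name = path.basename(filePath);
    const type = INLINE.has(path.extname(name).toLowerCase()) ? 'inline' : 'attachment';
    res.setHeader('Content-Disposition', `${type}; filename="${name}"`);
  },
}));

app.listen(3000);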
If I do not send a "Content-Disposition" header, it seems to me that the response is currently treated as if it had "Content-Disposition: inline". Is this observation correct, or is there something more to it?
See MDN which says: "The first parameter in the HTTP context is either inline (default value, indicating it can be displayed inside the Web page, or as the Web page)…"
If by default the browser tries to execute/preview the files (based on point 2), what does it mean for security when you allow downloading HTML files which can execute JavaScript?
Not a lot unless you are serving up JavaScript that you (the website author) do not trust.
If you need to serve HTML documents which might contain JavaScript you don't trust then serve them from a different origin (to use the Same Origin Policy to sandbox them) and/or implement a Content Security Policy to ban them from executing JavaScript.
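Continuing the Express sketch above, a rough illustration of the Content Security Policy suggestion (the route and directory names are hypothetical):

// Hypothetical route for user-uploaded HTML: default-src 'none' is the strictest
// policy and blocks scripts (inline and external), frames and other active content.
app.get('/untrusted/:name', (req, res) => {
  res.setHeader('Content-Security-Policy', "default-src 'none'");
  res.sendFile(path.join(__dirname, 'untrusted', path.basename(req.params.name)));
});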

Add language path into browser address bar

I'm looking for information about adding extra information to the browser address bar, for example a language path.
What kind of code should I look for if I want to change the browser address from mysite.com/index.php to mysite.com/EN/index.php, but at the same time not have to make an extra folder for each language I add to the website?
It depends on which web server is hosting the application. If it is Apache, one way to get this is the AliasMatch directive; see https://httpd.apache.org/docs/2.4/mod/mod_alias.html#aliasmatch. This goes in the main Apache configuration (httpd.conf or a virtual host) and requires mod_alias; AliasMatch is not available in per-directory .htaccess files.
Example:
AliasMatch "^/(EN|FR|PT)/(.*)" "/local/path/$2"
will accept /EN/..., /FR/... and /PT/... paths,
or
AliasMatch "^/([A-Z]{2})/(.*)" "/local/path/$2"
which will accept any two uppercase letters as a prefix.
After getting this working, in order to determine which language to show, you should check the $_SERVER['REQUEST_URI'] variable in your PHP script.
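A minimal sketch of that check (the default language and the translation-file layout are assumptions):

<?php
// Pull the two-letter language prefix out of the rewritten request URL.
$lang = 'EN';  // assumed default when no prefix is present
if (preg_match('#^/([A-Z]{2})/#', $_SERVER['REQUEST_URI'], $matches)) {
    $lang = $matches[1];
}
// ...then include the matching translation file, e.g. "lang/$lang.php" (hypothetical layout)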
Just go to the public folder on the hosting, create a new folder in it named EN, copy index.php into the EN folder, and try the path mysite.com/EN/index.php. It should work.
You will probably need to handle this on the server side. When a request is made to mysite.com/EN/index.php, the server checks the request URL for the language part EN and serves a webpage with the corresponding language from wherever it resides in the file structure.

Javascript React Single Page Application + Amazon S3: Permalinks issue

I am using an S3 bucket as a static web solution to host a single-page React application.
The React application works fine when I hit the root domain s3-bucket.amazon.com, and the HTML5 History API works fine: every time I click on a link, the new URL looks right, e.g. http://s3-bucket.amazon.com/entities/entity_id
The problem happens when I use permalinks to access the application. Let's assume I type the same URL (http://s3-bucket.amazon.com/entities/entity_id) in the browser; I will get the following error from Amazon S3:
404 Not Found
Code: NoSuchKey
Message: The specified key does not exist.
Key: users
RequestId: 3A4B65F49D07B42C
HostId: j2KCK4tvw9DVQzOHaViZPwT7+piVFfVT3lRs2pr2knhdKag7mROEfqPRLUHGeD/TZoFJBI4/maA=
Is it possible to make Amazon S3 work nicely with permalinks and the HTML5 History API? Maybe it can act as a proxy?
Thank you
Solution using AWS CloudFront:
Step 1: Go to CloudFront
Click on the distribution ID
Go to the Error Pages tab
Click on Create Custom Error Response
Step 2: Fill in the form as follows:
HTTP Error Code: 404
TTL: 0
Custom Error Response: Yes
Response Page Path: /index.html
HTTP Response Code: 200
source: https://medium.com/@nikhilkapoor17/deployment-of-spa-without-location-strategy-61a190a11dfc
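If the distribution is defined in a CloudFormation template instead of the console, the equivalent settings (same values as the form above) would sit under DistributionConfig, roughly:

CustomErrorResponses:
  - ErrorCode: 404
    ErrorCachingMinTTL: 0
    ResponseCode: 200
    ResponsePagePath: /index.html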
Sadly, S3 does not support the wildcard routing required for single-page apps (basically you want everything after the host in the URL to serve index.html, but preserve the route).
So www.blah.com/hello/world would actually serve www.blah.com/index.html but pass the route to the single-page app.
The good news is that you can do this with a CDN such as Fastly (Varnish) in front of S3, by setting up a rule such as:
if (req.url.ext == "") {
  set req.url = "/index.html";
}
This will redirect all non-asset requests (anything without a file extension) to index.html on the domain.
I have no experience running an SPA on Amazon S3, but this seems to be a problem of URL rewriting.
It is one thing to have the History (HTML5) API rewrite your URL when running your application / site.
But allowing rewritten URLs to be accessible when refreshing or cold-surfing to your site definitely needs URL rewriting at the server level.
I'm thinking web.config (IIS), .htaccess (Apache) or an nginx site configuration.
It seems the same question already got asked some time ago: https://serverfault.com/questions/650704/url-rewriting-in-amazon-s3
Specify the same file (e.g. index.html) for both the index document and the error document under the bucket's "Static website hosting" properties.
Old question, but the simplest way would be to use hash routing. So instead of mydomain.com/somepage/subpage it would be mydomain.com/#/somepage/subpage.
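For a React app that would look roughly like this (react-router-dom v6 assumed; the route paths are just examples):

// HashRouter keeps the route in the fragment (after #), so S3 only ever
// receives a request for /index.html regardless of the client-side route.
import React from 'react';
import { createRoot } from 'react-dom/client';
import { HashRouter, Routes, Route } from 'react-router-dom';

function App() {
  return (
    <HashRouter>
      <Routes>
        <Route path="/somepage/subpage" element={<h1>Subpage</h1>} />
        <Route path="*" element={<h1>Home</h1>} />
      </Routes>
    </HashRouter>
  );
}

createRoot(document.getElementById('root')).render(<App />);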

Force download through markup or JS

Let's assume I have a file on a CDN (Cloud Files from Rackspace) and a static HTML page with a link to that file. Is there any way I can force this file to download (to prevent it from opening in the browser -- for MP3s, for example)?
We could make our server read the file and set the corresponding header:
header("Content-Type: application/force-download")
but we have about 5 million downloads per month, so we would rather let the CDN take care of that.
Any ideas?
There’s no way to do this in HTML or JavaScript. There is now! (Ish. See @BruceAldrige’s answer below.)
The HTTP Content-Disposition header is what tells browsers to download files, and that’s sent by the server. You have to configure the CDN to send that header with whichever files you want the browser to download instead of display.
Unhelpfully, I’m entirely unfamiliar with Rackspace’s Cloud Files service, so I don’t know if they allow this, nor how to do it. Just found a page from December 2009 that suggests not, though, sadly:
Cloud Files cannot serve a file with the 'Content-Disposition: attachment' HTTP header. Therefore, a download link that would work perfectly in any other service may result in the browser rendering the file directly. This was confirmed by Rackspace engineers. :-(
http://drupal.org/node/656714
I know that you can with Amazon’s CloudFront service, as it’s backed by S3 (see e.g. http://blog.cloudberrylab.com/2009/06/how-to-set-custom-http-headers-for.html)
You can use the download attribute:
<a href="http..." download></a>
https://stackoverflow.com/a/11024735/21460
However, it’s not currently supported by Safari (7) or IE (11).
Yes, you can do this through the Cloud Files API. Using the stream method allows you to stream the contents of files in, setting your own headers, etc.
A crazy idea: download via XMLHttpRequest and serve a data: URL with the content type you want? :P
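A rough sketch of that idea (the URL and filename are hypothetical, the CDN would need to allow cross-origin requests, and it is only practical for small files):

// Fetch the file, then hand it back to the browser as a data: URL whose MIME
// type (application/octet-stream) makes the browser download rather than play it.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://cdn.example.com/song.mp3');
xhr.responseType = 'blob';
xhr.onload = function () {
  var reader = new FileReader();
  reader.onload = function () {
    var link = document.createElement('a');
    link.href = reader.result.replace(/^data:[^;,]+/, 'data:application/octet-stream');
    link.download = 'song.mp3';
    link.click();
  };
  reader.readAsDataURL(xhr.response);
};
xhr.send();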
