Using an AWS S3 bucket with a web-app - javascript

I have created EC2 instances with a load balancer and auto scaling, as described in the AWS documentation at the following link:
http://docs.aws.amazon.com/gettingstarted/latest/wah-linux/getting-started-application-server.html
I would like to store all of the user images and files in an S3 bucket, but I'm not sure of the best way to connect it to my web-app.
The web-app has an API coded in PHP, which would need to upload files to the bucket. The front end is coded in JavaScript which would need to be able to access the files. These files will be on every page, and range from user images to documents.
Currently all the media is loaded locally, and nothing is stored on the S3 bucket. The front end and the API are both stored on EC2 instances and would need to access the S3 bucket contents.
What is the most efficient way to load the data stored on the S3 bucket? I have looked into the AWS SDK for JavaScript, but it doesn't seem to be the best way of getting the files. Here is the code that seems relevant:
var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'myKey'};
s3.getSignedUrl('getObject', params, function (err, url) {
  console.log("The URL is", url);
});
If this is the way to go, would I have to do this for each image?
Thank you
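For reference, if signed URLs turn out to be the right approach here, a minimal sketch of generating one per object might look like the following (assuming the AWS SDK for JavaScript v2 with credentials already configured; the bucket name and keys are hypothetical):

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// Hypothetical keys for the media shown on a page.
var keys = ['images/avatar1.jpg', 'images/avatar2.jpg', 'docs/report.pdf'];

// A signed URL is generated per object, so this is one call per image.
keys.forEach(function (key) {
  var params = {Bucket: 'myBucket', Key: key, Expires: 3600}; // valid for one hour
  s3.getSignedUrl('getObject', params, function (err, url) {
    if (err) {
      console.error('Could not sign URL for', key, err);
    } else {
      console.log(key, '->', url);
    }
  });
});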

Related

Custom domain does not work in AWS with serverless-next.js?

I am facing the same issue as this GitHub issue, but for a different use case. I am trying to add multiple instances of the application: Example code here.
With the domain added I am getting the same error:
Domain www.app.com was not found in your AWS account.
To achieve multiple instances, I tried a hack: changing the env-prod file to bucket="prod-appname". This gets deployed, but when I add an env-stage file with bucket="stage-appname", a new bucket is created, yet it deploys to the same CloudFront URL. Is there a way to fix either of these so that I can achieve multiple instances?
Thanks in advance
nextjsapp:
  component: "@sls-next/serverless-component@1.18.0"
  inputs:
    bucketName: < S3 bucket name >
Use this in serverless.yml, and the S3 bucket should be created in us-east-1 (N. Virginia).
Then deploy; a CloudFront distribution is created with your S3 bucket as its origin. After that, note the ID of the CloudFront distribution and change serverless.yml as follows:
nextjsapp:
  component: "@sls-next/serverless-component@1.18.0"
  inputs:
    bucketName: < S3 bucket name >
    cloudfront:
      distributionId: < CloudFront distribution ID >

Unable to display an image using Amazon S3, why?

With Meteor, so in JavaScript, I am trying to display an image from Amazon S3.
In the src property of the image, I put the access URL to the image provided by the AWS console, and it works if my image is public.
If I make it not public in Amazon S3, it does not work.
So I ask S3 to send me the correct URL and I provide this URL to the src property of my image:
var s3 = new AWS.S3();
var params = {Bucket: 'mybucket', Key: 'photoPage06_4.jpg', Expires: 6000};
var url = s3.getSignedUrl('getObject', params);
console.log("URL", url);
document.getElementById('photoTest').src = url;
And then I get this error:
Failed to load resource: the server responded with a status of 400
(Bad Request)
Does somebody have an idea?
This is the correct behaviour. If an image is set as private on S3, you can't view it publicly. This means you won't be able to view the image by setting it as the src property of an image tag or by pasting it into the address bar in your browser.
This is intentional since many people and companies use it to back up files, share files internally, etc. So they need the option to keep their files private. The way they create and access their private files is through the AWS APIs, not directly over the web.
So to answer your question, you need to make any images you want to display on your site public in S3.
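For completeness, if the images must stay private, a minimal sketch of reading one through the API rather than over a public URL might look like this (assuming the AWS SDK for JavaScript v2 with valid credentials; the bucket and key are taken from the question):

var s3 = new AWS.S3();

// Fetch the private object's bytes through the API instead of a public URL.
s3.getObject({Bucket: 'mybucket', Key: 'photoPage06_4.jpg'}, function (err, data) {
  if (err) {
    console.error('Could not fetch the object:', err);
  } else {
    // data.Body holds the raw bytes; data.ContentType is e.g. image/jpeg.
    console.log('Fetched', data.ContentLength, 'bytes of', data.ContentType);
  }
});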

Uploading images to S3 doesn't work

I'm trying to upload files to S3 without having to send them through my server. I have an endpoint which gives me a signed S3 URL, to which I can make PUT requests to store files in my bucket.
I tried a couple of things on the JavaScript side which didn't work. (I'm not using Amazon's SDK, and prefer not to, because I'm looking for a simple file upload and nothing more than that.)
Here's what I'm trying to do currently in JavaScript:
uploadToS3 = () => {
  let file = this.state.files[0];
  let formData = new FormData();
  formData.append('Content-Type', file.type);
  formData.append('file', file);
  let xhr = new XMLHttpRequest();
  xhr.open('put', this.signed_url, true);
  xhr.send(formData);
};
I tried a bunch of options; I prefer using fetch because I don't really care about upload progress, since these are just images. I used the XHR code above, which I found somewhere, to try it out. These do make network calls and seem like they should work, but they don't.
Here's what happens: an object is created on S3; when I go to its public URL, it gets downloaded, and when I use an image viewer to open it, it says it's not a valid JPG.
I'm thinking I'm not doing the upload correctly.
Here's how I do it in Postman:
Notice I have the correct signed URL, I've attached the binary image file to the request, and I've added a header stating the content type is image/jpeg.
When I log in to S3 and go to my bucket, I can see the image, and I can go to its public URL and view it in the browser. This works perfectly and is exactly what I want; I just don't know how I could achieve the same in JavaScript.
PS: I even tried clicking on Code in Postman; it doesn't generate the file part of the request for me.
The problem here starts with xhr.send(formData).
When you PUT a file in S3 you don't use any form structures at all, you just send the raw object bytes in the request body.
Content-Type: and other metadata goes in the request headers, not in form data in the body.
In this case, if you download your uploaded file and view it with a text editor, the problem should be very apparent once you see what your code is actually sending to S3, which S3 then obediently stores and serves up on subsequent requests.
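To illustrate, a minimal sketch of the corrected upload might look like this, assuming the presigned URL was generated expecting the same Content-Type header:

uploadToS3 = () => {
  let file = this.state.files[0];
  // Send the raw file bytes as the request body; no FormData wrapper.
  fetch(this.signed_url, {
    method: 'PUT',
    headers: {'Content-Type': file.type},
    body: file
  }).then(response => {
    if (!response.ok) {
      throw new Error('Upload failed with status ' + response.status);
    }
  });
};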
Note that S3 does have support for browser-based form POST uploads, but the signing process for those is significantly different: you create and sign a policy document, then send the form, including the policy and signature, to the browser, which allows an otherwise-untrusted user to upload a file. The signed policy prevents the browser user from tampering with the form and performing actions that you didn't intend.
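If the form-POST flow is ever needed, a minimal server-side sketch of creating such a signed policy with the AWS SDK for JavaScript v2 (the bucket, key prefix, and size limit here are hypothetical) could be:

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var params = {
  Bucket: 'myBucket',
  Expires: 3600, // how long the policy stays valid, in seconds
  Fields: {key: 'uploads/${filename}'},
  Conditions: [
    ['content-length-range', 0, 10 * 1024 * 1024] // cap uploads at 10MB
  ]
};

s3.createPresignedPost(params, function (err, data) {
  if (err) { return console.error(err); }
  // data.url is the form action; data.fields must be sent as
  // hidden form inputs alongside the file field.
  console.log(data.url, data.fields);
});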

Dropbox Core API 413 error on creating a share link

I'm currently trying to create a share link for a PDF file that was just uploaded through my app using the Dropbox Core API.
The code is below:
request.post('https://api.dropboxapi.com/1/shares/auto/proposals/' + name + '?short_url=false', {
  headers: {
    Authorization: 'Bearer TOKEN HERE',
    'Content-Type': 'application/pdf'
  },
  body: content
}, function optionalCallback(err, httpResponse, bodymsg) {
  if (err) {
    console.log(err);
  } else {
    console.log('Shared link ' + JSON.stringify(httpResponse));
  }
});
Points to note:
The PDF file is 11MB; I can successfully and easily upload the file to Dropbox using the API.
The issue only arises when I try to create a share link for the recently uploaded 11MB file.
Also note I am using Node.js to upload and create share links.
The Error:
The error I get is HTTP Error 413, which based on my research means "Request entity too large"
Below is an image of the error; it's not the whole thing, as the error was too long:
The maximum file size for uploading through the API is 150MB, and my file is way below that limit. Is there a separate file size limit for generating share links?
Note
I have tested small files of 1MB to 2MB and was successfully able to generate share links; the issue arises with larger files, i.e. 11MB.
Based on the fact that you're sending a body and using a Content-Type of application/pdf, I'm going to guess that you're trying to upload a file with this API call, but that's not what /shares does. /shares is a way to create a shared link to a file that's already in Dropbox. You should upload with, e.g. /files_put, and then call /shares to create a shared link to that file.
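A rough sketch of that two-step flow with the same request library, assuming the deprecated v1 endpoints the question is already using:

// Step 1: upload the file bytes with /files_put.
request.put('https://api-content.dropbox.com/1/files_put/auto/proposals/' + name, {
  headers: {
    Authorization: 'Bearer TOKEN HERE',
    'Content-Type': 'application/pdf'
  },
  body: content
}, function (err, httpResponse, bodymsg) {
  if (err) { return console.log(err); }
  // Step 2: create the shared link; no file body on this call.
  request.post('https://api.dropboxapi.com/1/shares/auto/proposals/' + name + '?short_url=false', {
    headers: {Authorization: 'Bearer TOKEN HERE'}
  }, function (err, httpResponse, bodymsg) {
    if (err) { return console.log(err); }
    console.log('Shared link ' + bodymsg);
  });
});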

AWS S3 Javascript SDK Resend Request Failure

I am using the AWS S3 JavaScript SDK to upload files to my S3 bucket via my browser. I had no problem fetching files or uploading small and even huge files normally with multipart upload.
The issue I faced was while uploading a huge file when I lost my connection partway through. After the connection returned, the request was re-sent for the remaining parts to be uploaded, but it failed.
I have attached a screenshot of the failed requests.
Any reason why this fails, or any way this can be handled/resolved?
When you are uploading a huge amount of data, you can try using the ManagedUpload class for multipart uploading. You may want to specify the part size and queue size, however. Sample code for this from the documentation would be:
var upload = new AWS.S3.ManagedUpload({
  partSize: 10 * 1024 * 1024,
  queueSize: 1,
  params: {Bucket: 'bucket', Key: 'key', Body: stream}
});
Here, partSize (Number) is the size in bytes of each individual part to be uploaded; by default, the value is 5MB.
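A minimal sketch of starting the upload and watching its progress, assuming the upload object created above:

// Progress events help show where a resumed upload stalls.
upload.on('httpUploadProgress', function (progress) {
  console.log('Uploaded', progress.loaded, 'of', progress.total, 'bytes');
});

// Start the upload and handle the final result.
upload.send(function (err, data) {
  if (err) {
    console.log('Upload failed:', err);
  } else {
    console.log('Upload succeeded:', data.Location);
  }
});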
There's also an open source project on GitHub, AWS S3 Multipart Upload from Browser, which is written in JavaScript and PHP to upload huge files directly to Amazon S3 in chunks of 5MB, so uploads are resumable and recover easily from errors.
I'm guessing that to use the above project, you might have to use PHP. There's also a limit on the maximum upload size per file, so please do have a look at it.
