I am using the AWS S3 JavaScript SDK to upload files to my S3 bucket from the browser. I have had no problem fetching files, or uploading small and even huge files normally with multipart upload.
The issue I faced was uploading a huge file and losing my connection partway through. After the connection returned, the request was resent for the remaining parts to be uploaded, but it failed.
I have attached a screenshot of the failed requests
Any reason why this fails, or any way this can be handled/resolved?
When you are uploading a huge set of data, you can try using the AWS.S3.ManagedUpload class for multipart uploading. You need to specify the part size, however. A sample of this from the documentation would be:
var upload = new AWS.S3.ManagedUpload({
  partSize: 10 * 1024 * 1024, // upload in 10 MB parts
  queueSize: 1,               // number of parts uploaded concurrently
  params: {Bucket: 'bucket', Key: 'key', Body: stream}
});
Here, partSize (Number) is the size in bytes for each individual part to be uploaded; by default the value is 5 MB.
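For completeness, a minimal sketch of starting that upload and watching its progress might look like the following (it assumes the upload object and stream from the snippet above; send and the httpUploadProgress event are part of ManagedUpload in the AWS SDK for JavaScript v2):

upload.on('httpUploadProgress', function (progress) {
  // progress.loaded / progress.total are byte counts for the whole upload
  console.log('Uploaded', progress.loaded, 'of', progress.total, 'bytes');
});

upload.send(function (err, data) {
  if (err) {
    console.log('Upload failed:', err);
  } else {
    console.log('Upload finished at', data.Location);
  }
});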
There's also an open-source project on GitHub, AWS S3 Multipart Upload from Browser, written in JavaScript and PHP, that uploads huge files directly to Amazon S3 in chunks of 5 MB, so the upload is resumable and recovers easily from errors.
Note that to use the above plugin you would likely have to use PHP, and there is also a limit on the maximum upload size per file. Please do have a look at it.
The HTTP headers (Content-Type) and the metadata that an HTML file input returns (file.type) are not reliable and could be easily bypassed by hackers. So how do you make sure that the file that is going to be uploaded to S3 has the correct file type when using AWS s3 presigned[-post]-urls?
There are packages like file-type that can detect the actual type of a file. But the problem is that in order to detect the file type they need the content of the file as a buffer or Uint8Array. So I would have to send the file twice: once to the server to detect the file type and get the presigned URL (if the type is correct), and once to actually upload it to S3, which is obviously a bad thing.
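To make that concrete: the content-based checks such packages perform boil down to reading the leading bytes of the file, which is why they need a buffer. A purely illustrative sketch of that kind of check in the browser (JPEG only, hypothetical helper name, and still only a sanity check, since anything running client-side can be bypassed):

// Read the first few bytes of a File and check the JPEG magic number (FF D8 FF).
async function looksLikeJpeg(file) {
  const header = new Uint8Array(await file.slice(0, 4).arrayBuffer());
  return header[0] === 0xff && header[1] === 0xd8 && header[2] === 0xff;
}

// Hypothetical usage before requesting the presigned URL:
// if (await looksLikeJpeg(file)) { /* request the presigned URL, then upload */ }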
I have simple file upload functionality in place using Knockout in my Durandal website. I upload the file to the server by converting it to a base64StringArray, then uploading it using an AJAX POST, i.e.
$.post("localhost/uploadDocument", dataToPost)
I have the following request filtering in place in my application:
<requestLimits maxAllowedContentLength="31457280" />
and
<httpRuntime targetFramework="4.5.2" maxRequestLength="30720" />
So I have about a 30 MB file limit (31,457,280 bytes and 30,720 KB, respectively).
The problem I am having is with a specific Microsoft Excel file, which also includes some embedded PDF files. This file is 14,887,424 bytes, but when I upload it through my application, Fiddler shows that 49,158,346 bytes were sent, therefore I receive a 404.13 error - where the request is denied due to exceeding the request content length.
Why are so many bytes being sent for this one Excel file with embedded PDF files?
I would compress the string client side using something like:
http://rosettacode.org/wiki/LZW_compression#JavaScript
and then, on the server side, decompress it and perform whatever validation you might be doing to check file size.
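As a rough sketch (adapted from the idea behind the Rosetta Code entry above, not tested against your payload), the client-side step could look something like this, with a matching decompressor on the server rebuilding the original string:

// Minimal LZW compressor: turns a string into an array of dictionary codes.
function lzwCompress(uncompressed) {
  var dictionary = {};
  var dictSize = 256;
  for (var i = 0; i < 256; i++) {
    dictionary[String.fromCharCode(i)] = i;
  }

  var w = "";
  var result = [];
  for (var j = 0; j < uncompressed.length; j++) {
    var c = uncompressed.charAt(j);
    var wc = w + c;
    if (dictionary.hasOwnProperty(wc)) {
      w = wc;
    } else {
      result.push(dictionary[w]);
      dictionary[wc] = dictSize++;
      w = c;
    }
  }
  if (w !== "") {
    result.push(dictionary[w]);
  }
  return result;
}

// Hypothetical usage before the AJAX call from the question:
// var dataToPost = JSON.stringify(lzwCompress(base64String));
// $.post("localhost/uploadDocument", dataToPost);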
I'm trying to upload files to S3 without having to send them through my server. I have an endpoint which gives me a signed S3 URL where I can make PUT requests to store files in my bucket.
I tried a couple of things on the JavaScript side, which didn't work. (I'm not using Amazon's SDK, and prefer not to, because I'm looking for a simple file upload and nothing more than that.)
Here's what I'm trying to do currently in JavaScript:
uploadToS3 = () => {
  let file = this.state.files[0];
  let formData = new FormData();
  formData.append('Content-Type', file.type);
  formData.append('file', file);

  let xhr = new XMLHttpRequest();
  xhr.open('put', this.signed_url, true);
  xhr.send(formData);
};
I tried a bunch of options; I prefer using fetch because I don't really care about upload progress since these are just images. I used the XHR code above, which I took from somewhere, to try it out. These do make network calls and seem like they should work, but they don't.
Here's what happens: an object is created on S3, but when I go to its public URL the file gets downloaded, and when I use an image viewer to open it, it says it's not a valid JPG.
I'm thinking I'm not doing the upload correctly.
Here's how I do it in Postman:
Notice that I have the correct signed URL, I've attached the binary image file to the request, and I've added a header stating the content type is image/jpeg, as shown below:
When I log in to S3 and go to my bucket, I can see the image, and I can go to its public URL and view it in the browser. This works perfectly and is exactly what I want; now I don't know how I could achieve the same in JavaScript.
PS: I even tried clicking Code in Postman, but it doesn't generate code for the attached file for me.
The problem here starts with xhr.send(formData).
When you PUT a file to S3 you don't use any form structures at all; you just send the raw object bytes in the request body.
Content-Type: and other metadata go in the request headers, not in form data in the body.
In this case, if you download your uploaded file and view it with a text editor, the problem should be very apparent once you see what your code is actually sending to S3, which S3 then obediently stores and serves up on subsequent requests.
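A minimal sketch of the corrected upload, reusing this.signed_url and this.state.files from the question: send the File itself as the request body and put the content type in a header (if the URL was signed with a Content-Type, the header you send must match it):

uploadToS3 = () => {
  let file = this.state.files[0];
  let xhr = new XMLHttpRequest();
  xhr.open('PUT', this.signed_url, true);
  xhr.setRequestHeader('Content-Type', file.type);
  xhr.send(file); // a File/Blob is sent as raw bytes, with no form encoding
};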
Note that S3 does have support for browser-based form POST uploads, but the signing process for those is significantly different: you create and sign a policy document, then send the form, including the policy and signature, to the browser so that an otherwise-untrusted user can upload a file. The signed policy statement prevents the browser user from tampering with the form and performing actions that you didn't intend.
I have created EC2 instances, with a load balancer and auto scaling, as described in the AWS documentation at the following link:
http://docs.aws.amazon.com/gettingstarted/latest/wah-linux/getting-started-application-server.html
I would like to store all of the user images and files in an S3 bucket, but I'm not sure of the best way to connect it to my web app.
The web-app has an API coded in PHP, which would need to upload files to the bucket. The front end is coded in JavaScript which would need to be able to access the files. These files will be on every page, and range from user images to documents.
Currently all the media is loaded locally, and nothing is stored on the S3 bucket. The front end and the API are both stored on EC2 instances and would need to access the S3 bucket contents.
What is the most efficient way to load the data stored on the S3 bucket? I have looked into the AWS SDK for JavaScript, but it doesn't seem to be the best way of getting the files. Here is the code that seems relevant:
var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'myKey'};
// getSignedUrl generates a time-limited URL for reading the object without making it public.
s3.getSignedUrl('getObject', params, function (err, url) {
  console.log("The URL is", url);
});
If this is the way to go, would I have to do this for each image?
Thank you
I'm currently trying to create a share link for a PDF file that was just uploaded through my app using the Dropbox Core API.
The code is below:
var request = require('request'); // the 'request' HTTP client used below

// POST to the /shares endpoint for the just-uploaded file.
request.post('https://api.dropboxapi.com/1/shares/auto/proposals/' + name + '?short_url=false', {
  headers: {
    Authorization: 'Bearer TOKEN HERE',
    'Content-Type': 'application/pdf'
  },
  body: content
}, function optionalCallback(err, httpResponse, bodymsg) {
  if (err) {
    console.log(err);
  } else {
    console.log('Shared link ' + JSON.stringify(httpResponse));
  }
});
Points to note:
The PDF file size is 11 MB; I can successfully and easily upload the file to Dropbox using the API.
The issue only arises when I try to create a share link for the recently uploaded 11 MB file.
Also note I am using Node.js to upload and create share links.
The Error:
The error I get is HTTP Error 413, which based on my research means "Request entity too large"
Below is an image of the error; it's not the whole image, as the error was too long:
The maximum file size for uploading through the API is 150 MB and my file is well below that limit. Is there a separate file size limit for generating share links?
Note
I have tested small files of 1 MB to 2 MB and was successfully able to generate a share link; the issue arises with larger files, i.e. 11 MB.
Based on the fact that you're sending a body and using a Content-Type of application/pdf, I'm going to guess that you're trying to upload a file with this API call, but that's not what /shares does. /shares is a way to create a shared link to a file that's already in Dropbox. You should upload with, e.g. /files_put, and then call /shares to create a shared link to that file.
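A hedged sketch of that two-step flow, reusing the request client, token, and the name/content variables from the question (the files_put host and path follow the Core API v1 docs of the time; treat them as assumptions if your API version differs):

// Step 1: upload the raw PDF bytes with files_put (note the separate content host).
request.put('https://api-content.dropbox.com/1/files_put/auto/proposals/' + name, {
  headers: {
    Authorization: 'Bearer TOKEN HERE',
    'Content-Type': 'application/pdf'
  },
  body: content
}, function (err, uploadResponse, uploadBody) {
  if (err) {
    return console.log(err);
  }

  // Step 2: the file now exists in Dropbox, so ask /shares for a link.
  // No request body is needed here; the path identifies the file.
  request.post('https://api.dropboxapi.com/1/shares/auto/proposals/' + name + '?short_url=false', {
    headers: { Authorization: 'Bearer TOKEN HERE' }
  }, function (shareErr, shareResponse, shareBody) {
    if (shareErr) {
      return console.log(shareErr);
    }
    console.log('Shared link ' + shareBody);
  });
});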