AWS S3 Bucket, can't get head-object from file - javascript

I ran into the following problem:
When I upload a new file using a signed URL and then try to get the head-object for the uploaded file from S3 using aws-sdk, I get a Forbidden error. But if I upload a new file using the AWS console, I can get the head-object. Does anyone know what the problem is?

Make sure you specify the correct ACL in the presigned URL.
For example, set bucket-owner-full-control:
var s3 = new AWS.S3();
var params = { Bucket: req.body.bucketname, ACL: 'bucket-owner-full-control', Key: req.body.name, ContentType: req.body.type };
s3.getSignedUrl('putObject', params, function (err, url) {
  if (err) return console.log(err);
  res.json({ url: url });
});
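When the ACL is part of the signed parameters, the upload itself must also send the matching x-amz-acl header, otherwise the request no longer matches the signature. A minimal browser-side sketch, assuming url is the signed URL returned by the endpoint above and file is a File from an input:
fetch(url, {
  method: 'PUT',
  headers: {
    'Content-Type': file.type,
    'x-amz-acl': 'bucket-owner-full-control'
  },
  body: file
}).then(function (res) {
  // A non-2xx status here usually means the headers do not match what was signed.
  if (!res.ok) throw new Error('Upload failed: ' + res.status);
});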

Related

Uploading Image to S3 bucket stores it as application/octet-stream, but I want to store it as an image to use it (JavaScript / Node)

I want my users to upload profile pictures. Right now I can choose a file and pass it into a POST request body. I'm working in Node.js + JavaScript.
I am using DigitalOcean's Spaces object storage service to store my images, which is S3-compatible.
Ideally, my storage stores the file as an actual image. Instead, it is storing as a strange file with Content-Type application/octet-stream. I don't know how I'm supposed to work with this -- normally to display the image, I simply reference the URL that hosts the image, but in this case the URL is pointing to this strange file. It is named something like VMEDFS3Q65JV4B7YQKLS (no extension). The size of the file is 14kb which seems right and it appears to hold the file data. It looks like this:
etc...
I know I'm grabbing the right image and I know the database is hooked up properly, as it's posting to the exact right place; I'm just unhappy with the file type.
Request on front end:
fetch('/api/image/profileUpload', {
method: 'PUT',
body: { file: event.target.files[0] },
'Content-Type': 'image/jpg',
})
Code in backend:
const AWS = require('aws-sdk')
let file = req.body;
file = JSON.stringify(file);
AWS.config.update({
region: 'nyc3',
accessKeyId: process.env.SPACES_KEY,
secretAccessKey: process.env.SPACES_SECRET,
});
const s3 = new AWS.S3({
endpoint: new AWS.Endpoint('nyc3.digitaloceanspaces.com')
});
const uploadParams = {
Bucket: process.env.SPACES_BUCKET,
Key: process.env.SPACES_KEY,
Body: file,
ACL: "public-read",
ResponseContentType: 'image/jpg'
};
s3.upload(uploadParams, function(err, data) {
if (err) console.log(err, err.stack);
else console.log(data);
});
What I've tried:
Adding a content-type header in the request and content-type parameters in the backend
Other methods of fetching the data in the backend -- they all result in the same thing
Not stringifying the file after grabbing it from the req.body
Changing POST request to PUT request
Would appreciate any insight into either a) converting this octet-stream file into an image or b) getting this image to upload as an image. Thank you
I solved this problem somewhat -- I changed my parameters to this:
const uploadParams = {
Bucket: 'oscarexpert',
Key: 'asdff',
Body: image,
ContentType: "image/jpeg",
ACL: "public-read",
};
^^added ContentType.
It still doesn't store the actual image but that's sort of a different issue.
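The remaining symptom (the stored object not being a usable image) usually comes from sending a JSON-stringified request body rather than the file's raw bytes as Body. A minimal sketch of one way this could be wired, assuming an Express route that receives the raw binary body; the route path, key naming, and size limit here are illustrative:
const express = require('express')
const AWS = require('aws-sdk')

const app = express()
// Accept raw image bytes so req.body is a Buffer, not a parsed/stringified object.
app.use(express.raw({ type: 'image/*', limit: '10mb' }))

const s3 = new AWS.S3({ endpoint: new AWS.Endpoint('nyc3.digitaloceanspaces.com') })

app.put('/api/image/profileUpload', (req, res) => {
  const uploadParams = {
    Bucket: process.env.SPACES_BUCKET,
    Key: `profile-images/${Date.now()}.jpg`, // illustrative key naming
    Body: req.body, // Buffer with the raw image bytes
    ContentType: req.headers['content-type'] || 'image/jpeg',
    ACL: 'public-read'
  }
  s3.upload(uploadParams, (err, data) => {
    if (err) return res.status(500).json({ error: err.message })
    res.json({ url: data.Location })
  })
})
On the front end the file would then be sent directly as the request body (body: event.target.files[0]) with its Content-Type header, rather than wrapped in a JSON object.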

Saving an S3 buffer to file through AWS Javascript SDK

I'm using the AWS Javascript SDK to download a file from S3
var s3 = new AWS.S3();
var params = {
Bucket: "MYBUCKET",
Key: file
};
s3.getObject(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else {
//code to save file from data's byte array here
}
});
This feels like it should be easier than I'm making it out to be. Basically I want to trigger the native file download for the browser. Every resource I've found on the internet is for node's file system. I can't just use the file's URL to download as it is stored encrypted via KMS, so that is why I am going about it this way.
Thanks for the help!
I ended up changing how I was storing files. Instead of encrypting them with KMS, I moved them to a private bucket and then based the retrieval off of the logged in cognito user's ID. Then, I switched to using getSignedURL to appropriately pass in the cognito user ID.
var s3 = new AWS.S3();
var params = {
Bucket: "MYBUCKET",
Key: cognitoUser.username + "/" + file
};
var url = s3.getSignedUrl('getObject', params);
window.open(url);
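For reference, if you do need to save the getObject result directly from the browser (for example while the object is still KMS-encrypted), a common pattern is to wrap data.Body in a Blob and trigger a download through a temporary link. A minimal sketch, reusing the params from the original question:
s3.getObject(params, function (err, data) {
  if (err) return console.log(err, err.stack);
  // In the browser data.Body is a typed array, which Blob accepts directly.
  var blob = new Blob([data.Body], { type: data.ContentType });
  var link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = params.Key.split('/').pop(); // strip any prefix from the key for the file name
  document.body.appendChild(link);
  link.click();
  URL.revokeObjectURL(link.href);
  link.remove();
});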

Getting 403 (Forbidden) when uploading to S3 with a signed URL

I'm trying to generate a pre-signed URL then upload a file to S3 through a browser. My server-side code looks like this, and it generates the URL:
let s3 = new aws.S3({
// for dev purposes
accessKeyId: 'MY-ACCESS-KEY-ID',
secretAccessKey: 'MY-SECRET-ACCESS-KEY'
});
let params = {
Bucket: 'reqlist-user-storage',
Key: req.body.fileName,
Expires: 60,
ContentType: req.body.fileType,
ACL: 'public-read'
};
s3.getSignedUrl('putObject', params, (err, url) => {
if (err) return console.log(err);
res.json({ url: url });
});
This part seems to work fine. I can see the URL if I log it and it's passing it to the front-end. Then on the front end, I'm trying to upload the file with axios and the signed URL:
.then(res => {
  var options = { headers: { 'Content-Type': fileType } };
  return axios.put(res.data.url, fileFromFileInput, options);
}).then(res => {
  console.log(res);
}).catch(err => {
  console.log(err);
});
With that, I get the 403 Forbidden error. If I follow the link, there's some XML with more info:
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your key and signing method.
</Message>
...etc
Your request needs to match the signature, exactly. One apparent problem is that you are not actually including the canned ACL in the request, even though you included it in the signature. Change to this:
var options = { headers: { 'Content-Type': fileType, 'x-amz-acl': 'public-read' } };
Receiving a 403 Forbidden error for a pre-signed s3 put upload can also happen for a couple of reasons that are not immediately obvious:
It can happen if you generate a pre-signed put url using a wildcard content type such as image/*, as wildcards are not supported.
It can happen if you generate a pre-signed put url with no content type specified, but then pass in a content type header when uploading from the browser. If you don't specify a content type when generating the url, you have to omit the content type when uploading. Be conscious that if you are using an upload tool like Uppy, it may attach a content type header automatically even when you don't specify one. In that case, you'd have to manually set the content type header to be empty.
In any case, if you want to support uploading any file type, it's probably best to pass the file's content type to your api endpoint, and use that content type when generating your pre-signed url that you return to your client.
For example, generating a pre-signed url from your api:
const AWS = require('aws-sdk')
const uuid = require('uuid/v4')
async function getSignedUrl(contentType) {
const s3 = new AWS.S3({
accessKeyId: process.env.AWS_KEY,
secretAccessKey: process.env.AWS_SECRET_KEY
})
const signedUrl = await s3.getSignedUrlPromise('putObject', {
Bucket: 'mybucket',
Key: `uploads/${uuid()}`,
ContentType: contentType
})
return signedUrl
}
And then sending an upload request from the browser:
import Uppy from '@uppy/core'
import AwsS3 from '@uppy/aws-s3'
this.uppy = Uppy({
restrictions: {
allowedFileTypes: ['image/*'],
maxFileSize: 5242880, // 5 Megabytes
maxNumberOfFiles: 5
}
}).use(AwsS3, {
getUploadParameters(file) {
async function _getUploadParameters() {
let signedUrl = await getSignedUrl(file.type)
return {
method: 'PUT',
url: signedUrl
}
}
return _getUploadParameters()
}
})
For further reference also see these two stack overflow posts: how-to-generate-aws-s3-pre-signed-url-request-without-knowing-content-type and S3.getSignedUrl to accept multiple content-type
If you're trying to use an ACL, make sure that your Lambda IAM role has the s3:PutObjectAcl permission for the given bucket, and also that your bucket policy allows s3:PutObjectAcl for the uploading principal (the user/IAM role/account that's uploading).
This is what fixed it for me after double checking all my headers and everything else.
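For reference, a minimal sketch of the kind of IAM statement that covers both the upload and the ACL; the bucket name is illustrative:
{
  "Effect": "Allow",
  "Action": [
    "s3:PutObject",
    "s3:PutObjectAcl"
  ],
  "Resource": "arn:aws:s3:::your-upload-bucket/*"
}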
Inspired by this answer https://stackoverflow.com/a/53542531/2759427
1) You might need to use S3V4 signatures depending on how the data is transferred to AWS (chunk versus stream). Create the client as follows:
var s3 = new AWS.S3({
signatureVersion: 'v4'
});
2) Do not add new headers or modify existing headers. The request must be exactly as signed.
3) Make sure that the url generated matches what is being sent to AWS.
4) Make a test request removing these two lines before signing (and remove the headers from your PUT). This will help narrow down your issue:
ContentType: req.body.fileType,
ACL: 'public-read'
Had the same issue; here is how I solved it:
Extract the filename portion of the signed URL.
Print it to verify that you are extracting the filename portion, with its query-string parameters, correctly. This is critical.
URI-encode the filename together with its query-string parameters.
Return the URL with the encoded filename (along with the rest of the path) from your Lambda or Node service.
Now send the request from axios with that URL and it will work.
EDIT 1:
Your signature will also be invalid if you pass in the wrong content type.
Make sure the content type you use when creating the pre-signed URL is the same one you use for the PUT.
Hope it helps.
As others have pointed out, the solution is to add the signatureVersion.
const s3 = new AWS.S3(
{
apiVersion: '2006-03-01',
signatureVersion: 'v4'
}
);
There is a very detailed discussion of this here: https://github.com/aws/aws-sdk-js/issues/468
This code was working with credentials and a bucket I created several years ago, but caused a 403 error on recently created credentials/buckets:
const s3 = new AWS.S3({
region: region,
accessKeyId: process.env.AWS_ACCESS_KEY,
secretAccessKey: process.env.AWS_SECRET_KEY,
})
The fix was simply to add signatureVersion: 'v4'.
const s3 = new AWS.S3({
signatureVersion: 'v4',
region: region,
accessKeyId: process.env.AWS_ACCESS_KEY,
secretAccessKey: process.env.AWS_SECRET_KEY,
})
Why? I don't know for certain, but newer regions only support Signature Version 4, while older regions also accept the legacy v2 signatures, which would explain why only the recently created buckets were affected.
TLDR: Check that your bucket exists and is accessible by the AWS key that is generating the signed URL.
All of the answers are very good and most likely are the real solution, but my issue actually stemmed from S3 returning a Signed URL to a bucket that didn't exist.
Because the server didn't throw any errors, I had assumed that it must be the upload that was causing the problems, without realizing that my local server had an old bucket name in its .env file that used to be the correct one, but has since been moved.
Side note: This link helped https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/
It was while checking the uploading user's IAM policies that I discovered that the user had access to multiple buckets, but only one of those still existed.
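A quick way to confirm this from the same credentials that sign the URL is a headBucket call; a minimal sketch, with the bucket name taken from an environment variable as an assumption:
var s3 = new AWS.S3();
s3.headBucket({ Bucket: process.env.S3_BUCKET }, function (err) {
  if (err) console.log('Bucket missing or not accessible:', err.code); // e.g. NotFound or Forbidden
  else console.log('Bucket exists and is accessible');
});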
Did you add the CORS policy to the S3 bucket? This fixed the problem for me.
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"PUT"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": []
}
]
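For reference, the same configuration can also be applied from code with putBucketCors rather than through the console; a minimal sketch, with the bucket name as an assumption:
var s3 = new AWS.S3();
s3.putBucketCors({
  Bucket: 'my-bucket',
  CORSConfiguration: {
    CORSRules: [
      { AllowedHeaders: ['*'], AllowedMethods: ['PUT'], AllowedOrigins: ['*'], ExposeHeaders: [] }
    ]
  }
}, function (err) {
  if (err) console.log(err, err.stack);
});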
I encountered the same error twice with different root causes / solutions:
I was using generate_presigned_url.
The solution for me was switching to generate_presigned_post (doc) which returns a host of essential information such as
"url":"https://xyz.s3.amazonaws.com/",
"fields":{
"key":"filename.ext",
"AWSAccessKeyId":"ASIAEUROPRSWEDWOMM",
"x-amz-security-token":"some-really-long-string",
"policy":"another-long-string",
"signature":"the-signature"
}
Include these fields in the multipart form data of your POST request, and don't forget to keep the file field last!
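A minimal browser-side sketch of posting with those fields, assuming data is the parsed generate_presigned_post response and file is a File from an input:
const formData = new FormData();
// Copy every field returned by generate_presigned_post into the form first...
Object.entries(data.fields).forEach(([name, value]) => formData.append(name, value));
// ...and append the file last, since S3 ignores form fields that come after it.
formData.append('file', file);

fetch(data.url, { method: 'POST', body: formData })
  .then(res => {
    if (!res.ok) throw new Error('Upload failed: ' + res.status);
    console.log('Uploaded');
  })
  .catch(console.error);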
That time I forgot to give proper permissions to the Lambda. Interestingly, Lambda can create good looking signed upload URLs which you won't have permission to use. The solution is to enrich the policy with S3 actions:
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::my-own-bucket/*"
]
}
Using Python boto3, when you upload a file the permissions are private by default. You can make the object public using ACL='public-read':
s3.put_object_acl(
Bucket='gid-requests', Key='potholes.csv', ACL='public-read')

aws-sdk: NoSuchKey: The specified key does not exist?

In my Node.js project, I am using aws-sdk to download all the images from my S3 bucket, but I get this error: NoSuchKey: The specified key does not exist. The keys are correct, and I can upload images with these keys.
My code is:
var AWS = require('aws-sdk');
s3 = new AWS.S3();
var params = {
  Bucket: config.get("aws.s3.bucket"),
  Key: config.get("aws.credentials.secretAccessKey")
};
s3.getObject(params, function (err, data) {
  console.log("data");
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);
});
Can anyone please tell me where I am going wrong?
There is a problem with how you are using aws-sdk; it should look like the following example:
var aws = require('aws-sdk');
aws.config.update({
  accessKeyId: {{AWS_ACCESS_KEY}},
  secretAccessKey: {{AWS_SECRET_KEY}}
});
var s3 = new aws.S3();
var s3Params = {
  Bucket: {{bucket name}},
  Key: {{path to the dedicated S3 object (folder name + file/object name)}}
};
s3.getObject(s3Params, function (err, data) {
  //Continue handling the returned results.
});
Replace the strings inside {{}} with the correct data and it should work.
This happens when the key you pass to getObject does not exactly match the key of the object you uploaded, even though the same user/credentials are used. For example, if you uploaded the image as alone.jpeg (see the screenshot: https://i.stack.imgur.com/WLh5v.png) but then call getObject with image.jpg, you will get this error; the key must refer to the same object you uploaded.
So use the same image key: take the key from the response AWS returned when you uploaded, rather than deriving it from the URL.
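If in doubt about the exact key, listing the bucket (or a prefix) shows the keys exactly as S3 stores them; a minimal sketch, with the bucket name and prefix as assumptions:
var s3 = new AWS.S3();
s3.listObjectsV2({ Bucket: 'my-bucket', Prefix: 'images/' }, function (err, data) {
  if (err) return console.log(err, err.stack);
  // Compare these keys character-for-character with the Key passed to getObject.
  data.Contents.forEach(function (obj) { console.log(obj.Key); });
});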

JavaScript : Hosting User Images in S3 Bucket For Application

The "Background" outlines the problem in depth- I added it to make this question a good guide for using S3 to host images(like a profile image)
You can skip right to "HERE I'M HAVING TROUBLE" to help directly.
-----------------------------------------------Background------------------------------------------------------------------
Note: feel free to critique my assumptions on how to host images properly, for future readers.
So for a quick prototype, I'm hosting user avatar images in an AWS S3 bucket, but I want to model roughly how it is done in production.
Here are my 3 assumptions on how to model industry-standard image hosting (based on sites I've studied):
For Reading - you can use public endpoints (no tokens needed).
To Secure Reading Access - use hashing to store the resources (images). The application will give the hashed URL to users with access.
For example, 2 hashes (1 for the file path and 1 for the image):
https://myapp.com/images/1024523452345233232000/f567457645674242455c32cbc7.jpg
^The above can be done with S3 using a hashed "Prefix" and a hashed file name.
Write Permissions - the user should be logged into the app and be given temp credentials to write to the storage (i.e. add an image).
So with those 3 assumptions this is what I'm doing and the problem:
(A Simple Write - Use Credentials)
<script type="text/javascript">
AWS.config.credentials = ...;
AWS.config.region = 'us-west-2';
var bucket = new AWS.S3({params: {Bucket: 'myBucket'}});
...
var params = {Key: file.name, ContentType: file.type, Body: file};
bucket.upload(params, function (err, data) {
//Do Something Amazing!!
});
</script>
---------------------------------HERE I'M HAVING TROUBLE----------------------------------------------------------
(A Simple Read - Give the User a Signed URL) Error 403 Permissions
<script type="text/javascript">
AWS.config.credentials..
AWS.config.region = 'us-west-2';
var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'myKey.jpg'};
document.addEventListener('DOMContentLoaded', function () {
s3.getSignedUrl('getObject', params, function (err, url) {
// Getting a 403 Permissions Error!!!
});
});
</script>
I figure the signed URL isn't needed, but I thought it would get me around the permission error; instead I have to set the permission manually to public in order to read the image.
QUESTION:
So how should I make the endpoints completely public (readable) for those who have obtained the URL, but only writable when the user has credentials?
To make an object publicly-downloadable when you are uploading it, apply the canned (predefined) ACL called "public-read" with the putObject request.
var params = {
Key: file.name,
ContentType: file.type,
Body: file,
ACL: 'public-read'
};
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#putObject-property
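Putting that together with the upload code from the question, a minimal sketch (the bucket name from the question is an assumption):
var bucket = new AWS.S3({ params: { Bucket: 'myBucket' } });
var params = { Key: file.name, ContentType: file.type, Body: file, ACL: 'public-read' };
bucket.upload(params, function (err, data) {
  if (err) return console.log(err);
  // The object is now publicly readable at its plain HTTPS URL - no signed URL needed.
  console.log(data.Location);
});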
