How to use react-s3-uploader with reactjs? - javascript

I am new to ReactJS and want to upload an image to S3, but I don't know how the flow works, or where the image path from AWS comes back (in which function)?
Here is my React code:
import ApiClient from './ApiClient'; // where does this come from?

function getSignedUrl(file, callback) {
  const client = new ApiClient();
  const params = {
    objectName: file.name,
    contentType: file.type
  };
  client.get('/my/signing/server', { params }) // what's that URL?
    .then(data => {
      callback(data); // what should I get in the callback?
    })
    .catch(error => {
      console.error(error);
    });
}
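For reference, here is a minimal sketch of the component side, assuming the react-s3-uploader package: the callback in getSignedUrl should be invoked with the signing result, and react-s3-uploader looks for a signedUrl field on that object (so if your server returns signedRequest, map it first); onFinish then receives the same result, which is where the uploaded image path comes back.

import React from 'react';
import ReactS3Uploader from 'react-s3-uploader';

// `getSignedUrl` is the function defined above. Inside it, call e.g.
// callback({ signedUrl: data.signedRequest, ...data }) so the uploader
// finds the `signedUrl` field it expects.
function AvatarUploader() {
  return (
    <ReactS3Uploader
      getSignedUrl={getSignedUrl}
      accept="image/*"
      onFinish={result => {
        // with the server below, the uploaded image path is result.url
        console.log('uploaded, file is at', result.url);
      }}
      onError={message => console.error(message)}
    />
  );
}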
My server file, Server.js:
const AWS = require('aws-sdk');
const Bluebird = require('bluebird');
// `config` (used below) comes from the app's configuration store; its import is not shown here.

AWS.config.setPromisesDependency(Bluebird);
AWS.config.update({
  accessKeyId: 'accessKey',
  secretAccessKey: 'secret',
  region: 'ap-south-1'
});

const s3 = new AWS.S3({
  params: {
    Bucket: 'Bucketname'
  },
  signatureVersion: 'v4'
});

class S3Service {
  static getPreSignedUrl(filename, fileType, acl = 'public-read') {
    return new Bluebird(resolve => {
      const signedUrlExpireSeconds = 30000;
      return resolve({
        signedRequest: s3.getSignedUrl('putObject', {
          Key: 'wehab/' + filename,
          ContentType: 'multipart/form-data',
          Bucket: config.get('/aws/bucket'),
          Expires: signedUrlExpireSeconds
        }),
        url: `https://${config.get('/aws/bucket')}.s3.amazonaws.com/wehab/${filename}`
      });
    });
  }
}
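The /my/signing/server URL in the React code is simply whatever route you expose on this server. A minimal sketch, assuming Express and that ApiClient sends params as query-string parameters (the port is illustrative; the response shape follows the S3Service above):

const express = require('express');
const app = express();

// Route path matches the URL the React client calls.
app.get('/my/signing/server', (req, res) => {
  const { objectName, contentType } = req.query;
  S3Service.getPreSignedUrl(objectName, contentType)
    .then(result => res.json(result)) // resolves to { signedRequest, url }
    .catch(err => res.status(500).json({ error: err.message }));
});

app.listen(3001); // port is illustrative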

First you need to create an S3 bucket and attach the policies below.
Say the bucket name is 'DROPZONEBUCKET' (bucket names are globally unique).
Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::DROPZONEBUCKET/*"
    }
  ]
}
CORS config:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>9000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Then run your Node.js server and try to upload a file.
When you select a file, the getSignedUrl(file, callback) function is called and returns the signed URL.
Once you successfully have this URL you can upload the file, as sketched below.
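A minimal sketch of that upload step, assuming the { signedRequest, url } response shape returned by the server above; note the Content-Type must match the ContentType the URL was signed with:

function uploadToS3(file, signedRequest) {
  return fetch(signedRequest, {
    method: 'PUT',
    // must match the ContentType used when signing ('multipart/form-data' above)
    headers: { 'Content-Type': 'multipart/form-data' },
    body: file
  }).then(res => {
    if (!res.ok) throw new Error('upload failed: ' + res.status);
  });
}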
The file path is then
https://s3.amazonaws.com/{BUCKET_NAME}/{FILE_NAME}
e.g.
https://s3.amazonaws.com/DROPZONEBUCKET/profile.jpeg
Modify the API like this (sketched as an Express route handler so that data comes from the request; the route path is illustrative):

const express = require('express');
const AWS = require('aws-sdk');

const s3Bucket = new AWS.S3();
const s3 = express.Router({ mergeParams: true });

s3.get('/sign', (req, res) => {
  const data = req.query; // { filename, filetype } sent by the client
  const params = {
    Bucket: 'BUCKET_NAME', // add your S3 bucket name here
    Key: data.filename,
    Expires: 60,
    ContentType: data.filetype
  };
  s3Bucket.getSignedUrl('putObject', params, function (err, url) {
    if (err) return res.status(500).send(err.stack); // an error occurred
    res.send(url); // successful response: the signed URL
  });
});

Related

Accessing S3 from an external location using IAM user access keys

I have used the JavaScript AWS SDK to put a file on S3 via the putObject call. This works fine if I set the bucket to be public, but as soon as I turn off public access, it no longer works and I get a 403 error in the response.
I have created a security key against an IAM user; the IAM user is myself, and I have sufficient access to S3 via the AWS console, so I think my permissions are correct.
Here is my code snippet, which works if the bucket is public:
const options = {
  region: AWS_REGION,
  accessKeyId: AWS_ACCESS_KEY,
  secretAccessKey: AWS_SECRET_KEY,
};

const filesAwaitingProcessing = getFilesAwaitingProcessing(FOLDER_ID);

filesAwaitingProcessing.forEach((fileId) => {
  const dataFile = file.load({
    id: fileId
  });
  if (dataFile) {
    const s3 = new AWS.S3(options);
    let error = false;
    s3.putObject({
      Bucket: BUCKET,
      ACL: 'authenticated-read',
      ContentEncoding: 'UTF-8',
      ContentType: 'application/json',
      Key: `${BUCKET_DIRECTORY}/${dataFile.name}`,
      Body: dataFile.getContents()
    }, (err, data) => {
      if (err) {
        error = true;
        log.error(JSON.stringify(err), JSON.stringify(err));
      } else {
        log.debug(data);
      }
    });
    s3.getObject({
      Bucket: BUCKET,
      Key: `${BUCKET_DIRECTORY}/${dataFile.name}`
    }, (err, data) => {
      if (err) {
        error = true;
        log.error(err, err.stack);
      } else {
        log.debug(data);
      }
    });
  }
});
Am I passing the key up correctly?
Or am I doing this completely the wrong way for secure access?
This is my bucket policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1602780209612",
  "Statement": [
    {
      "Sid": "Stmt1602780204129",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::619425574045:user/myUserName"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::dawson-group/processing"
    }
  ]
}
And then I have the full S3 policy attached to my user, as well as a policy I created that relates to just the bucket in question.
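One thing worth double-checking in a policy like this, even though it was not the root cause here: object-level actions such as s3:PutObject and s3:GetObject match object ARNs, and a Resource of arn:aws:s3:::dawson-group/processing only matches an object whose key is literally "processing". To cover keys under that prefix, the resource is normally written as:

"Resource": "arn:aws:s3:::dawson-group/processing/*"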
So it turned out there was no issue with the actual code or bucket setup; the issue was the application we were trying to access S3 from. It was stripping the Authorization header.
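If you ever hit an intermediary that strips the Authorization header like this, one general workaround (a sketch of the technique, not what was done here) is a presigned URL, which carries the signature in the query string instead of a header:

const s3 = new AWS.S3(options);

// The signature travels as query parameters, so no Authorization
// header is needed on the eventual PUT.
const url = s3.getSignedUrl('putObject', {
  Bucket: BUCKET,
  Key: `${BUCKET_DIRECTORY}/example.json`, // illustrative key
  ContentType: 'application/json',
  Expires: 300
});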

AWS: Access Denied when trying to load presigned url (direct fileupload browser)

I'm attempting to use a presigned URL but I keep getting a 403 Forbidden / Access Denied response despite setting everything up as I believe I'm supposed to. I want to upload a file directly from the browser to Amazon S3.
First of all, I'm enabling the root AWS account to use putObject. I don't have any additional accounts; I just want it to work for my root account to begin with. Here is the bucket policy:
{
  "Version": "2012-10-17",
  "Id": "XXXX",
  "Statement": [
    {
      "Sid": "XXXXX",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXX:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::XXXXX/*"
    }
  ]
}
This is my Node.js backend, where I just generate the URL and send it to the frontend:
const aws = require('aws-sdk');

aws.config.update({
  region: "eu-north-1",
  accessKeyId: "XXX",
  secretAccessKey: "YYY"
});

const s3 = new aws.S3({ apiVersion: "2006-03-01" });

app.get('/geturl', (req, res) => {
  const s3Params = {
    Bucket: 'XXXXXXXXXXXXX',
    Key: req.query.filename,
    Expires: 500,
    ContentType: req.query.type,
    ACL: "public-read"
  };
  s3.getSignedUrl("putObject", s3Params, (err, data) => {
    res.send(data);
  });
});
In the frontend, I make a simple call using the URL with the file I wish to upload. The second fetch call generates the error:
async function handleUpload(e) {
  const file = e.target.files[0];
  const res = await fetch('http://localhost:3001/geturl');
  const url = await res.text();
  const resUpload = await fetch(url, { method: 'PUT', body: file });
}
Any ideas what I did wrong?
Edit: It seems to work if I uncheck the first 'Block public access' checkbox. Is this a big deal, or should public access always be blocked in a production environment?
In your backend, try changing ACL: "public-read" to ACL: "private". You should then be able to block all public access and still successfully complete presigned PUTs/POSTs.
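A sketch of both sides with that change, assuming the same route as above; note also that because ContentType is part of the signed params, the browser PUT should send a matching Content-Type header, or S3 will reject the signature:

// Backend: sign with a private ACL
const s3Params = {
  Bucket: 'XXXXXXXXXXXXX',
  Key: req.query.filename,
  Expires: 500,
  ContentType: req.query.type,
  ACL: 'private'
};

// Frontend: send the same content type with the upload
const resUpload = await fetch(url, {
  method: 'PUT',
  headers: { 'Content-Type': file.type },
  body: file
});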

Only Allow Server to Post to AWS S3 and Not Client Side App - Server Code & S3 Config

I'm working on a project that allows users to upload images.
First the client sends the image to the server, and then the server uploads it to the Amazon S3 bucket.
I only want the server, with valid credentials, to have access to upload to S3, not the client.
My server is uploading the clients' images to S3.
However, I am unable to make the S3 bucket only accept uploads from the server with the correct credentials.
If I remove the AWS S3 credentials from the server, the server is still able to upload images to S3, which is not the intended result.
'use strict'

const S3 = require('aws-sdk/clients/s3');
const uuid = require('uuid/v4');

if (process.env.NODE_ENV !== 'production')
  require('dotenv').config();

// Set credentials and region
var s3 = new S3({
  region: process.env.S3_REGION,
  accessKeyId: process.env.S3_ACCESSKEYID,
  secretAccessKey: process.env.S3_SECRETACCESSKEY
});

function uploadImage(bufferImage) {
  return new Promise((resolve, reject) => {
    let key = `${uuid()}.png`;
    let params = { Bucket: process.env.S3_BUCKET, Key: key, Body: bufferImage };
    console.log('uploadImage invoked');
    s3.putObject(params, (err, data) => {
      if (err) {
        console.log('error', err);
        return reject("This error");
      }
      return resolve(`https://s3.${process.env.S3_REGION}.amazonaws.com/${process.env.S3_BUCKET}/${key}`);
    });
  });
}

function removeImage(imageUri) {
  console.log(imageUri);
  var params = {
    Bucket: process.env.S3_BUCKET,
    Key: imageUri.replace(/^.*[\\\/]/, '')
  };
  s3.deleteObject(params, function (err, data) {
    if (err)
      return Promise.reject(err);
    Promise.resolve(data);
  });
}

module.exports = { uploadImage, removeImage };
Here is a copy of my S3 bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::testbucket/*"
    }
  ]
}
and CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Any help on achieving the intended result would be much appreciated: only allowing the server to post images to S3 (not the client), while still allowing the client to view the images.
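For reference, a minimal sketch of how a server route could feed the uploadImage module above (this illustrates the server-side flow, not a bucket-policy fix), assuming Express and multer with in-memory storage; the route path, module path, and field name are illustrative:

const express = require('express');
const multer = require('multer');
const { uploadImage } = require('./s3'); // the module above

const app = express();
const upload = multer({ storage: multer.memoryStorage() });

// The client posts the image here; only the server talks to S3.
app.post('/images', upload.single('image'), async (req, res) => {
  try {
    const url = await uploadImage(req.file.buffer); // buffer from multer
    res.json({ url });
  } catch (err) {
    res.status(500).json({ error: String(err) });
  }
});

app.listen(3000); // port is illustrative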

How to upload file to Digital Ocean Spaces using Javascript

I am looking to use Digital Ocean Spaces (which seems to have an API identical to S3), and would like to try it by uploading a sample file. I am having lots of difficulty. Here's what I've done so far.
{'hi' : 'world'}
is the contents of a file, hiworld.json, that I would like to upload. I understand that I need to create an AWS v4 signature before I can make this request.
var aws4 = require('aws4')
var request = require('request')

var opts = {
  'json': true,
  'body': "{'hi':'world'}",
  host: '${myspace}.nyc3.digitaloceanspaces.com',
  path: '/hiworld.json'
}

aws4.sign(opts, { accessKeyId: '${SECRET}', secretAccessKey: '${SECRET}' })
Then I send the request:
request.put(opts, function(error, response) {
  if (error) {
    console.log(error);
  }
  console.log(response.body);
});
However, when I check my Digital Ocean Space, I see that my file was not created. I have noticed that if I change my PUT to GET and try to access an existing file, I have no issues.
Here's what my headers look like:
headers:
{ Host: '${myspace}.nyc3.digitaloceanspaces.com',
'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8',
'Content-Length': 14,
'X-Amz-Date': '20171008T175325Z',
Authorization: 'AWS4-HMAC-SHA256 Credential=${mykey}/20171008/us-east-1//aws4_request, SignedHeaders=content-length;content-type;host;x-amz-date, Signature=475e691d4ddb81cca28eb0dcdc7c926359797d5e383e7bef70989656822accc0' },
method: 'POST' }
As an alternative, using aws-sdk:
// 1. Importing the SDK
import AWS from 'aws-sdk';

// 2. Configuring the S3 instance for Digital Ocean Spaces
const spacesEndpoint = new AWS.Endpoint(
  `${REGION}.digitaloceanspaces.com`
);
const url = `https://${BUCKET}.${REGION}.digitaloceanspaces.com/${file.path}`;
const S3 = new AWS.S3({
  endpoint: spacesEndpoint,
  accessKeyId: ACCESS_KEY_ID,
  secretAccessKey: SECRET_ACCESS_KEY
});

// 3. Using .putObject() to make the PUT request, S3 signs the request
const params = { Body: file.stream, Bucket: BUCKET, Key: file.path };

S3.putObject(params)
  .on('build', request => {
    request.httpRequest.headers.Host = `https://${BUCKET}.${REGION}.digitaloceanspaces.com`;
    // Note: I am assigning the size to the file Stream manually
    request.httpRequest.headers['Content-Length'] = file.size;
    request.httpRequest.headers['Content-Type'] = file.mimetype;
    request.httpRequest.headers['x-amz-acl'] = 'public-read';
  })
  .send((err, data) => {
    if (err) logger(err, err.stack);
    else logger(JSON.stringify(data, '', 2));
  });
var str = {
  'hi': 'world'
}
var c = JSON.stringify(str);

request(aws4.sign({
  'uri': 'https://${space}.nyc3.digitaloceanspaces.com/newworlds.json',
  'method': 'PUT',
  'path': '/newworlds.json',
  'headers': {
    "Cache-Control": "no-cache",
    "Content-Type": "application/x-www-form-urlencoded",
    "accept": "*/*",
    "host": "${space}.nyc3.digitaloceanspaces.com",
    "accept-encoding": "gzip, deflate",
    "content-length": c.length
  },
  body: c
}, { accessKeyId: '${secret}', secretAccessKey: '${secret}' }), function(err, res) {
  if (err) {
    console.log(err);
  } else {
    console.log(res);
  }
})

This gave me a successful PUT.
It can be done using multer and the AWS SDK; it worked for me.
const aws = require('aws-sdk');
const multer = require('multer');
const express = require('express');
const multerS3 = require('multer-s3');

const app = express();

const spacesEndpoint = new aws.Endpoint('sgp1.digitaloceanspaces.com');
const spaces = new aws.S3({
  endpoint: spacesEndpoint,
  accessKeyId: 'your_access_key_from_API',
  secretAccessKey: 'your_secret_key'
});

const upload = multer({
  storage: multerS3({
    s3: spaces,
    bucket: 'bucket-name',
    acl: 'public-read',
    key: function (request, file, cb) {
      console.log(file);
      cb(null, file.originalname);
    }
  })
}).array('upload', 1);
Now you can call this from an API endpoint like this:
app.post('/upload', function (request, response, next) {
  upload(request, response, function (error) {
    if (error) {
      console.log(error);
    }
    console.log('File uploaded successfully.');
  });
});
The HTML would look like this:
<form method="post" enctype="multipart/form-data" action="/upload">
  <label for="file">Upload a file</label>
  <input type="file" name="upload">
  <input type="submit" class="button">
</form>

How do I delete an object on AWS S3 using JavaScript?

I want to delete a file from Amazon S3 using JavaScript. I have already uploaded the file using JavaScript. How can I delete it?
You can use the deleteObject method from the AWS SDK for JavaScript:
var AWS = require('aws-sdk');
AWS.config.loadFromPath('./credentials-ehl.json');

var s3 = new AWS.S3();
var params = { Bucket: 'your bucket', Key: 'your object' };

s3.deleteObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // error
  else console.log(data); // deleted
});
Be aware that S3 never returns the object once it has been deleted.
You have to check for it before or after the delete with getObject, headObject, waitFor, etc., for example as sketched below.
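A sketch of that check, using headObject before the delete and the SDK's waitFor afterwards (the bucket and key are illustrative):

var params = { Bucket: 'your bucket', Key: 'your object' };

s3.headObject(params, function(err) {
  if (err) return console.log('object not found:', err.code);
  s3.deleteObject(params, function(err) {
    if (err) return console.log(err, err.stack);
    // resolves once S3 reports the object gone
    s3.waitFor('objectNotExists', params, function(err) {
      if (err) console.log(err, err.stack);
      else console.log('confirmed deleted');
    });
  });
});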
You can use a construction like this:
var params = {
  Bucket: 'yourBucketName',
  Key: 'fileName'
  /*
    where the value for 'Key' equals 'pathName1/pathName2/.../pathNameN/fileName.ext'
    - the full path name to your file, without a '/' at the beginning
  */
};

s3.deleteObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
And don't forget to wrap it in a Promise, as sketched below.
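A sketch of that wrapping, using the SDK's built-in .promise() helper:

function deleteFile(bucket, key) {
  return s3.deleteObject({ Bucket: bucket, Key: key }).promise();
}

// usage (bucket and key are illustrative)
deleteFile('yourBucketName', 'pathName1/pathName2/fileName.ext')
  .then(data => console.log('deleted', data))
  .catch(err => console.log(err, err.stack));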
Before deleting the file you have to check: 1) whether the file is in the bucket, because if the file is not available in the bucket, the deleteObject API doesn't throw any error; 2) the CORS configuration of the bucket. Using the headObject API gives the status of the file in the bucket.
AWS.config.update({
  accessKeyId: "*****",
  secretAccessKey: "****",
  region: region,
  version: "****"
});

const s3 = new AWS.S3();
const params = {
  Bucket: s3BucketName,
  Key: "filename" // if in any sub folder -> path/of/the/folder.ext
}

try {
  await s3.headObject(params).promise()
  console.log("File Found in S3")
  try {
    await s3.deleteObject(params).promise()
    console.log("file deleted Successfully")
  }
  catch (err) {
    console.log("ERROR in file Deleting : " + JSON.stringify(err))
  }
} catch (err) {
  console.log("File not Found ERROR : " + err.code)
}
As the params are constant, it's best to declare them with const. If the file is not found in S3, headObject throws the error NotFound: null.
If you want to apply any operations to the bucket, you have to change the permissions in the CORS configuration of the respective bucket in AWS. To change the permissions, go to Bucket -> Permissions -> CORS Configuration and add this code:
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
For more information about CORS configuration: https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
You can use the deleteObjects API to delete multiple objects at once (up to 1000 keys per call) instead of calling the API for each key. This saves time and network bandwidth.
You can do the following:
var deleteParam = {
  Bucket: 'bucket-name',
  Delete: {
    Objects: [
      { Key: 'a.txt' },
      { Key: 'b.txt' },
      { Key: 'c.txt' }
    ]
  }
};

s3.deleteObjects(deleteParam, function(err, data) {
  if (err) console.log(err, err.stack);
  else console.log('delete', data);
});
For reference, see https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#deleteObjects-property
You can follow this GitHub gist: https://gist.github.com/jeonghwan-kim/9597478.
delete-aws-s3.js:
var aws = require('aws-sdk');
var BUCKET = 'node-sdk-sample-7271';

aws.config.loadFromPath(require('path').join(__dirname, './aws-config.json'));
var s3 = new aws.S3();

var params = {
  Bucket: BUCKET,
  Delete: { // required
    Objects: [ // required
      {
        Key: 'foo.jpg' // required
      },
      {
        Key: 'sample-image--10.jpg'
      }
    ],
  },
};

s3.deleteObjects(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
Very straightforward.
First, create an instance of S3 and configure it with credentials:
const S3 = require('aws-sdk').S3;
const s3 = new S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  region: process.env.AWS_REGION
});
Afterward, follow the docs:
var params = {
  Bucket: "ExampleBucket",
  Key: "HappyFace.jpg"
};

s3.deleteObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
  /*
    data = {}
  */
});
