I'm using googleapis to upload different files to Google Drive. The scenario is:
The user provides a document and its information through my REST API (I'm using NodeJS).
The REST API creates the directory that will contain the document, if it doesn't already exist.
The REST API uploads the document to that directory.
The structure of the drive is:
/root/documents/$type/$new_document
where $type is one of the fields provided by the user and $new_document is the document that the user uploaded.
The way I connect:
oauth2Client.setCredentials({ refresh_token: REFRESH_TOKEN });
drive_instance = google.drive({
version: 'v3',
auth: oauth2Client,
});
I figured out how to upload a document to the root folder of Google Drive:
try {
const response = await drive.files.create({
requestBody: {
name: file.name,
mimeType: file.mimetype,
},
media: {
mimeType: file.mimetype,
body: file.data,
},
});
console.log(response.data);
} catch (error) {
console.log(error.message);
}
What I'm struggling with is:
How to create the directory /root/documents/$type if it doesn't already exist?
How to upload the $new_document to /root/documents/$type?
For the second question, I know that the docs provide a parents[] option that will contain all the folder IDs. But then, how can I get the folder ID of /root/documents/$type? Is there some way to combine the steps (something like mkdir -p for the directories, or having the directory creation return the ID of the new directory)?
1. Try to find the folder you need via the drive.files.list() method
You need to set a filter. For example:
add 'q': "..." to the request to search for what you need
use "name = 'Some Folder Name'" to search by name
use "mimeType = 'application/vnd.google-apps.folder'" to search only for folders
Then combine them with and:
'q': "name = 'Some Folder Name' and mimeType = 'application/vnd.google-apps.folder'" finds all folders with the name you chose
const folderName = 'Some Folder Name'
gapi.client.drive.files
  .list({
    'q': `name = '${folderName}' and mimeType = 'application/vnd.google-apps.folder'`,
    'pageSize': 1000,
    'fields': "files(id, name, parents)"
  })
  .then((response) => {
    const files = response.result.files;
    if (files.length > 0) {
      handleSearchResult(files)
    } else {
      createFolder()
    }
  });
About handleSearchResult() or createFolder():
The search may return more than one file, so you can find the one under the right parent by checking files[i].parents. That's why I added parents to 'fields': "files(id, name, parents)". https://developers.google.com/drive/api/v3/reference/files
You can also add a search rule on the parent, e.g. "'<parent-folder-id>' in parents":
https://developers.google.com/drive/api/v3/search-files
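As a rough sketch of what handleSearchResult() could do (not from the original answer; documentsFolderId is a hypothetical variable holding the ID of the expected parent folder):
// Hypothetical sketch: pick the folder that lives under the expected parent.
// documentsFolderId would be the ID of /root/documents, obtained earlier.
function handleSearchResult(files, documentsFolderId) {
  // Several folders can share the same name, so keep only the one whose
  // parents array contains the parent we expect it to live under.
  const match = files.find((f) => (f.parents || []).includes(documentsFolderId));
  if (match) {
    console.log('Found folder id:', match.id);
    return match.id;
  }
  return null; // nothing under the expected parent, so create the folder instead
}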
If the search returns 0 files, just create the folder yourself. You can create the path/directory step by step: create the first folder in the Drive root and remember its ID, then create the second folder with the first folder's ID in its parents, and so on. By the way, you can use almost the same logic for searching.
2. Create the folder if necessary
Example:
// name = 'Folder Name',
// parents = ['some-parent1-id', 'some-parent2-id', ...]
function createFolder(name, parents) {
const fileMetadata = {
'name' : name,
'mimeType' : 'application/vnd.google-apps.folder',
'parents': parents
};
gapi.client.drive.files
.create({
resource: fileMetadata,
}).then((response) => {
switch(response.status){
case 200:
const file = response.result;
console.log('Created Folder Id: ' + file.id);
break;
default:
console.log('Error creating the folder, '+response);
break;
}
});
}
3. Upload the file with parents set
You should add parents: ['id-of-folder'] to the requestBody.
Read more in Google Drive API - Files: create
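Since the question uses the Node.js googleapis client rather than gapi, here is a hedged sketch that strings steps 1-3 together for a two-level path like documents/$type. It assumes drive is the authorized client from the question; ensureFolder and uploadDocument are hypothetical helper names, and folder names containing quotes would need extra escaping:
// Find a folder by name under a given parent (or under root), creating it if missing.
async function ensureFolder(drive, name, parentId) {
  const q = [
    `name = '${name}'`,
    "mimeType = 'application/vnd.google-apps.folder'",
    'trashed = false',
    `'${parentId || 'root'}' in parents`,
  ].join(' and ');

  const res = await drive.files.list({ q, fields: 'files(id, name)' });
  if (res.data.files.length > 0) return res.data.files[0].id;

  const folder = await drive.files.create({
    requestBody: {
      name,
      mimeType: 'application/vnd.google-apps.folder',
      parents: [parentId || 'root'],
    },
    fields: 'id',
  });
  return folder.data.id;
}

// mkdir -p style: make sure documents/$type exists, then upload the file into it.
async function uploadDocument(drive, type, file) {
  const documentsId = await ensureFolder(drive, 'documents');
  const typeId = await ensureFolder(drive, type, documentsId);

  const response = await drive.files.create({
    requestBody: { name: file.name, mimeType: file.mimetype, parents: [typeId] },
    media: { mimeType: file.mimetype, body: file.data },
    fields: 'id',
  });
  return response.data.id;
}
The folder ID returned by ensureFolder is exactly what goes into parents when uploading, which answers the second question.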
I hope it helps at least a bit :) Keep it up!
You can create a folder inside another folder using the following method.
You should have the ID of the folder you want to store the new folder in (it can be extracted using the Node.js API, or by opening the folder in the browser and looking at the characters after the last / in the URL).
Use the special MIME type reserved for folders in Google Drive (application/vnd.google-apps.folder).
Considering your example:
drive.files.create({
  requestBody: {
    name: "SomeFolder",
    mimeType: "application/vnd.google-apps.folder"
  }
}, (error, folder) => {
  console.log(folder.data.id);
  drive.files.create({
    requestBody: {
      name: "SomeFolderInsideAFolder",
      mimeType: "application/vnd.google-apps.folder",
      parents: [folder.data.id]
    }
  });
});
You can even build a recursive upload function by combining file upload and folder creation, which lets you upload a whole folder tree, as sketched below.
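As a hedged illustration of that idea (not code from the answer above): a recursive upload with the Node.js googleapis client could look roughly like this, assuming drive is an authorized client and parentId is the ID of the Drive folder to upload into:
const fs = require('fs');
const path = require('path');

// Mirror a local directory into Drive under parentId, recursing into subfolders.
async function uploadFolder(drive, localDir, parentId) {
  // Create the Drive folder that mirrors localDir.
  const folder = await drive.files.create({
    requestBody: {
      name: path.basename(localDir),
      mimeType: 'application/vnd.google-apps.folder',
      parents: [parentId],
    },
    fields: 'id',
  });

  for (const entry of fs.readdirSync(localDir, { withFileTypes: true })) {
    const fullPath = path.join(localDir, entry.name);
    if (entry.isDirectory()) {
      await uploadFolder(drive, fullPath, folder.data.id); // recurse into subfolders
    } else {
      await drive.files.create({
        requestBody: { name: entry.name, parents: [folder.data.id] },
        media: { body: fs.createReadStream(fullPath) },
      });
    }
  }
  return folder.data.id;
}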
I'm having a problem with the Google Drive API.
I'm trying to upload an Excel file with this API, but it's not working. Even copying the code from the Google API documentation doesn't work.
Here is a sample of my code:
@Get('teste')
async teste(){
const keys = require(path.resolve('src', 'files', 'api', 'keys'))
const client = new google.auth.JWT(
keys.client_email,
null,
keys.private_key,
['https://www.googleapis.com/auth/drive.metadata.readonly']
)
client.authorize((err, tokens) =>{
if(err){
console.log(err)
return;
} else{
this.gdrun(client)
}
})
}
gdrun(client){
const drive = google.drive({version: 'v3', auth: client});
var fileMetadata = {
name: 'My Report',
mimeType: 'application/vnd.google-apps.spreadsheet'
};
var media = {
mimeType: 'application/vnd.ms-excel',
body: require(path.resolve('src', 'files', 'excel', 'solargroup.xlsx'))
};
drive.files.create({
resource: fileMetadata,
media: media,
fields: 'id'
}, function (err, file: any) {
if (err) {
// Handle error
console.error(err);
} else {
console.log('File Id:', file.id);
}
});
}
I received this error:
I believe your goal is as follows.
You want to upload a file (an XLSX file) to the Google Drive of the service account.
You want to achieve this using the service account with googleapis for Node.js.
From your script, I thought that you might want to upload the XLSX file as a Google Spreadsheet by converting it.
Modification points:
When you want to upload a file to Google Drive, in this case, please use the scope of https://www.googleapis.com/auth/drive instead of https://www.googleapis.com/auth/drive.metadata.readonly.
When you want to upload the XLSX file as an XLSX file, the mimeType is application/vnd.openxmlformats-officedocument.spreadsheetml.sheet.
When you want to upload a file, body: require(path.resolve('src', 'files', 'excel', 'solargroup.xlsx')) cannot be used. In this case, please use body: fs.createReadStream(path.resolve('src', 'files', 'excel', 'solargroup.xlsx')). I thought that your error message might be due to this.
When you want to retrieve the file ID of the uploaded file, please modify file.id to file.data.id.
When the above points are reflected in your script, it becomes as follows.
Modified script:
From:
const client = new google.auth.JWT(
keys.client_email,
null,
keys.private_key,
['https://www.googleapis.com/auth/drive.metadata.readonly']
)
To:
const client = new google.auth.JWT(
keys.client_email,
null,
keys.private_key,
['https://www.googleapis.com/auth/drive'] // Modified
)
And also, please modify your gdrun() as follows.
gdrun(client){
const drive = google.drive({ version: "v3", auth: client });
var fileMetadata = {
name: "My Report",
mimeType: "application/vnd.google-apps.spreadsheet",
};
var media = {
mimeType: "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet", // Modified
body: fs.createReadStream(path.resolve('src', 'files', 'excel', 'solargroup.xlsx')), // Modified
};
drive.files.create(
{
resource: fileMetadata,
media: media,
fields: "id",
},
function (err, file) {
if (err) {
console.error(err);
} else {
console.log("File Id:", file.data.id); // Modified
}
}
);
}
In this case, please use const fs = require("fs").
Result:
When the above script is run, the following result is obtained.
File Id: ###fileId###
Note:
Your script uploads the XLSX file to the Google Drive of the service account as a Google Spreadsheet. In this case, you cannot directly see the uploaded file in your own Google Drive, because the Drive of the service account is different from your Drive. When you want to see the uploaded file in your Google Drive, please create a folder in your Drive and share it with the email address of the service account, and then upload the file into that shared folder. This way, you can see the uploaded file in your Google Drive with your browser. For this, please modify fileMetadata as follows.
var fileMetadata = {
name: "My Report",
mimeType: "application/vnd.google-apps.spreadsheet",
parents: ["### folderId ###"], // Please set the folder ID of the folder shared with the service account.
};
In the above script, the maximum file size is 5 MB. Please be careful about this. When you want to upload a file larger than 5 MB, please use a resumable upload. Ref
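As a rough, hedged sketch of the documented resumable upload protocol (not part of the original answer): you first POST the metadata with uploadType=resumable, read the session URI from the Location header, then PUT the file bytes to that URI. This assumes Node 18+ for the global fetch; ACCESS_TOKEN and filePath are placeholders.
const fs = require("fs");

const ACCESS_TOKEN = "..."; // placeholder: OAuth2 access token with the drive scope
const filePath = "solargroup.xlsx"; // placeholder: path to the XLSX file

async function resumableUpload() {
  // Step 1: start a resumable session; the session URI is returned in the Location header.
  const init = await fetch(
    "https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${ACCESS_TOKEN}`,
        "Content-Type": "application/json; charset=UTF-8",
        "X-Upload-Content-Type":
          "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
      },
      body: JSON.stringify({
        name: "My Report",
        mimeType: "application/vnd.google-apps.spreadsheet",
      }),
    }
  );
  const sessionUri = init.headers.get("location");

  // Step 2: upload the file bytes to the session URI. A single PUT is enough for a
  // file that fits in memory; larger files can be sent in chunks with Content-Range.
  const res = await fetch(sessionUri, {
    method: "PUT",
    body: fs.readFileSync(filePath),
  });
  console.log("File Id:", (await res.json()).id);
}

resumableUpload().catch(console.error);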
References:
Upload file data
Files: create
I get the error AWS::S3::Errors::InvalidRequest The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. when I try to upload a file to an S3 bucket in the new Frankfurt region. Everything works properly with the US Standard region.
Script:
backup_file = '/media/db-backup_for_dev/2014-10-23_02-00-07/slave_dump.sql.gz'
s3 = AWS::S3.new(
access_key_id: AMAZONS3['access_key_id'],
secret_access_key: AMAZONS3['secret_access_key']
)
s3_bucket = s3.buckets['test-frankfurt']
# Folder and file name
s3_name = "database-backups-last20days/#{File.basename(File.dirname(backup_file))}_#{File.basename(backup_file)}"
file_obj = s3_bucket.objects[s3_name]
file_obj.write(file: backup_file)
aws-sdk (1.56.0)
How to fix it?
Thank you.
AWS4-HMAC-SHA256, also known as Signature Version 4, ("V4") is one of two authentication schemes supported by S3.
All regions support V4, but US-Standard¹, and many -- but not all -- other regions, also support the other, older scheme, Signature Version 2 ("V2").
According to http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html ... new S3 regions deployed after January, 2014 will only support V4.
Since Frankfurt was introduced late in 2014, it does not support V2, which is what this error suggests you are using.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html explains how to enable V4 in the various SDKs, assuming you are using an SDK that has that capability.
I would speculate that some older versions of the SDKs might not support this option, so if the above doesn't help, you may need a newer release of the SDK you are using.
¹US Standard is the former name for the S3 regional deployment that is based in the us-east-1 region. Since the time this answer was originally written,
"Amazon S3 renamed the US Standard Region to the US East (N. Virginia) Region to be consistent with AWS regional naming conventions." For all practical purposes, it's only a change in naming.
With node, try
var s3 = new AWS.S3( {
endpoint: 's3-eu-central-1.amazonaws.com',
signatureVersion: 'v4',
region: 'eu-central-1'
} );
You should set signatureVersion: 'v4' in the config to use the new signature version:
AWS.config.update({
signatureVersion: 'v4'
});
Works for JS sdk.
For people using boto3 (the Python SDK), use the code below:
from botocore.client import Config
s3 = boto3.resource(
's3',
aws_access_key_id='xxxxxx',
aws_secret_access_key='xxxxxx',
config=Config(signature_version='s3v4')
)
I have been using Django, and I had to add these extra config variables to make this work (in addition to the settings mentioned in https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html).
AWS_S3_REGION_NAME = "ap-south-1"
Or previous to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Similar issue with the PHP SDK, this works:
$s3Client = S3Client::factory(array('key'=>YOUR_AWS_KEY, 'secret'=>YOUR_AWS_SECRET, 'signature' => 'v4', 'region'=>'eu-central-1'));
The important bit is the signature and the region
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
This also saved my time after surfing for 24 hours.
Code for Flask (boto3)
Don't forget to import Config. Also, if you have your own config class, change its name.
from botocore.client import Config

s3 = boto3.client(
    's3',
    config=Config(signature_version='s3v4'),
    region_name=app.config["AWS_REGION"],
    aws_access_key_id=app.config['AWS_ACCESS_KEY'],
    aws_secret_access_key=app.config['AWS_SECRET_KEY']
)
s3.upload_fileobj(file, app.config["AWS_BUCKET_NAME"], file.filename)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': app.config["AWS_BUCKET_NAME"], 'Key': file.filename},
    ExpiresIn=10000
)
In Java I had to set a property
System.setProperty(SDKGlobalConfiguration.ENFORCE_S3_SIGV4_SYSTEM_PROPERTY, "true")
and add the region to the s3Client instance.
s3Client.setRegion(Region.getRegion(Regions.EU_CENTRAL_1))
With boto3, this is the code :
s3_client = boto3.resource('s3', region_name='eu-central-1')
or
s3_client = boto3.client('s3', region_name='eu-central-1')
For thumbor-aws, which uses boto config, I needed to put this into the $AWS_CONFIG_FILE:
[default]
aws_access_key_id = (your ID)
aws_secret_access_key = (your secret key)
s3 =
    signature_version = s3
So this may be useful for anything that uses boto config directly without changes.
Supernova's answer for django/boto3/django-storages worked for me:
AWS_S3_REGION_NAME = "ap-south-1"
Or previous to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Just add them to your settings.py and change the region code accordingly.
You can look up the AWS region codes in the AWS documentation.
For Android SDK, setEndpoint solves the problem, although it's been deprecated.
CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(
context, "identityPoolId", Regions.US_EAST_1);
AmazonS3 s3 = new AmazonS3Client(credentialsProvider);
s3.setEndpoint("s3.us-east-2.amazonaws.com");
Basically, the error was because I was using an old version of aws-sdk; updating the version resolved this error.
In my case with Node.js, I was using signatureVersion inside the params object, like this:
const AWS_S3 = new AWS.S3({
params: {
Bucket: process.env.AWS_S3_BUCKET,
signatureVersion: 'v4',
region: process.env.AWS_S3_REGION
}
});
Then I moved signatureVersion out of the params object and it worked like a charm:
const AWS_S3 = new AWS.S3({
params: {
Bucket: process.env.AWS_S3_BUCKET,
region: process.env.AWS_S3_REGION
},
signatureVersion: 'v4'
});
Check your AWS S3 bucket region and pass the proper region in the connection request.
In my scenario, I have set 'APSouth1' for Asia Pacific (Mumbai):
using (var client = new AmazonS3Client(awsAccessKeyId, awsSecretAccessKey, RegionEndpoint.APSouth1))
{
GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
{
BucketName = bucketName,
Key = keyName,
Expires = DateTime.Now.AddMinutes(50),
};
urlString = client.GetPreSignedURL(request1);
}
In my case, the request type was wrong: I was using GET (dumb); it must be PUT.
Here is the function I used with Python:
def uploadFileToS3(filePath, s3FileName):
    s3 = boto3.client(
        's3',
        endpoint_url=settings.BUCKET_ENDPOINT_URL,
        aws_access_key_id=settings.BUCKET_ACCESS_KEY_ID,
        aws_secret_access_key=settings.BUCKET_SECRET_KEY,
        region_name=settings.BUCKET_REGION_NAME
    )
    try:
        s3.upload_file(
            filePath,
            settings.BUCKET_NAME,
            s3FileName
        )
        # remove file from local to free up space
        os.remove(filePath)
        return True
    except Exception as e:
        logger.error('uploadFileToS3#Error')
        logger.error(e)
        return False
Sometimes the default version will not update. Add this setting
AWS_S3_SIGNATURE_VERSION = "s3v4"
in settings.py
For Boto3, use this code.
import boto3
from botocore.client import Config
s3 = boto3.resource('s3',
aws_access_key_id='xxxxxx',
aws_secret_access_key='xxxxxx',
region_name='us-south-1',
config=Config(signature_version='s3v4')
)
Try this combination.
const s3 = new AWS.S3({
endpoint: 's3-ap-south-1.amazonaws.com', // Bucket region
accessKeyId: 'A-----------------U',
secretAccessKey: 'k------ja----------------soGp',
Bucket: 'bucket_name',
useAccelerateEndpoint: true,
signatureVersion: 'v4',
region: 'ap-south-1' // Bucket region
});
I was stuck for 3 days and finally, after reading a ton of blogs and answers, I was able to configure an Amazon AWS S3 bucket.
On the AWS Side
I am assuming you have already
Created an s3-bucket
Created a user in IAM
Steps
Configure CORS settings
your bucket > permissions > CORS configuration
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Generate A bucket policy
your bucket > permissions > bucket policy
It should be similar to this one
{
"Version": "2012-10-17",
"Id": "Policy1602480700663",
"Statement": [
{
"Sid": "Stmt1602480694902",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::harshit-portfolio-bucket/*"
}
]
}
PS: Bucket policy should say `public` after this
Configure Access Control List
your bucket > permissions > access control list
give public access
PS: Access Control List should say public after this
Unblock public Access
your bucket > permissions > Block Public Access
Edit and turn all options Off
**On a side note, if you are working with Django, add the following lines to the settings.py file of your project:**
#S3 BUCKETS CONFIG
AWS_ACCESS_KEY_ID = '****not to be shared*****'
AWS_SECRET_ACCESS_KEY = '*****not to be shared******'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# look for files first in aws
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# In India these settings work
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Also coming from: https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html
For me this was the solution:
AWS_S3_REGION_NAME = "eu-central-1"
AWS_S3_ADDRESSING_STYLE = 'virtual'
This needs to be added to settings.py in your Django project
Using the PHP SDK, follow the example below.
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
$client = S3Client::factory(
array(
'signature' => 'v4',
'region' => 'me-south-1',
'key' => YOUR_AWS_KEY,
'secret' => YOUR_AWS_SECRET
)
);
Nodejs
var aws = require("aws-sdk");
aws.config.update({
region: process.env.AWS_REGION,
secretAccessKey: process.env.AWS_S3_SECRET_ACCESS_KEY,
accessKeyId: process.env.AWS_S3_ACCESS_KEY_ID,
});
var s3 = new aws.S3({
signatureVersion: "v4",
});
let data = await s3.getSignedUrl("putObject", {
ContentType: mimeType, //image mime type from request
Bucket: "MybucketName",
Key: folder_name + "/" + uuidv4() + "." + mime.extension(mimeType),
Expires: 300,
});
console.log(data);
AWS S3 Bucket Permission Configuration
Deselect Block All Public Access
Add the policy below:
{
"Version":"2012-10-17",
"Statement":[{
"Sid":"PublicReadGetObject",
"Effect":"Allow",
"Principal": "*",
"Action":["s3:GetObject"],
"Resource":["arn:aws:s3:::MybucketName/*"
]
}
]
}
Then take the returned URL and make a PUT request to it with the binary file of the image, for example:
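Here is a hedged sketch of that PUT request; presignedUrl, fileBuffer and mimeType stand in for the values from the previous steps, and Node 18+ or a browser provides fetch:
// Upload the binary file to S3 using the presigned URL from the previous step.
async function uploadWithPresignedUrl(presignedUrl, fileBuffer, mimeType) {
  const res = await fetch(presignedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': mimeType }, // must match the ContentType used when signing
    body: fileBuffer,
  });
  if (!res.ok) throw new Error(`Upload failed with status ${res.status}`);
}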
Full working nodejs version:
const AWS = require('aws-sdk');
var s3 = new AWS.S3( {
endpoint: 's3.eu-west-2.amazonaws.com',
signatureVersion: 'v4',
region: 'eu-west-2'
} );
const getPreSignedUrl = async () => {
const params = {
Bucket: 'some-bucket-name/some-folder',
Key: 'some-filename.json',
Expires: 60 * 60 * 24 * 7
};
try {
const presignedUrl = await new Promise((resolve, reject) => {
s3.getSignedUrl('getObject', params, (err, url) => {
err ? reject(err) : resolve(url);
});
});
console.log(presignedUrl);
} catch (err) {
if (err) {
console.log(err);
}
}
};
getPreSignedUrl();
My goal is to make sure that all videos being uploaded to my application are in the right format and that they are formatted to fit a minimum size.
I did this before using ffmpeg, however I have recently moved my application to an Amazon server.
This gives me the option to use Amazon Elastic Transcoder.
However, by the looks of the interface, I am unable to set up automatic jobs that look for video or audio files and convert them.
For this I have been looking at their SDK / API references, but I am not quite sure how to use them in my application.
My question is: has anyone successfully started transcoding jobs in Node.js, and do you know how to convert videos from one format to another and/or lower the bitrate? I would really appreciate it if someone could point me in the right direction with some examples of how this might work.
However by the looks of it from the interface i am unable to set up
automatic jobs that look for video or audio files and converts them.
The Node.js SDK doesn't support it, but you can do the following: if you store the videos in S3 (if not, move them to S3, because Elastic Transcoder works from S3), you can run a Lambda function triggered by the S3 putObject event.
http://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
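As a hedged sketch of that setup (not from the original answer): a minimal Lambda handler wired to the bucket's object-created event could start an Elastic Transcoder job roughly like this, assuming the aws-sdk v2 available in the Node.js Lambda runtime; the pipeline ID is a placeholder and the preset ID is one of the system presets used later in this thread:
const AWS = require('aws-sdk');
const transcoder = new AWS.ElasticTranscoder();

// Triggered by the S3 putObject event; starts a transcoding job for the new object.
exports.handler = async (event) => {
  // S3 event keys are URL-encoded, so decode them before using them as input keys.
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

  await transcoder.createJob({
    PipelineId: 'YOUR_PIPELINE_ID', // placeholder: the pipeline defines the input/output buckets
    Input: { Key: key },
    Outputs: [
      { Key: `transcoded/${key}`, PresetId: '1351620000001-000020' } // 480p system preset
    ],
  }).promise();
};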
My question is has anyone successfully started transcoding jobs in
node.js and know how to convert videos from one format to another and
/ or down set the bitrate? I would really appreciate it if someone
could point me in the right direction with some examples of how this
might work.
We used AWS for video transcoding with Node without any problem. It was time consuming to figure out every parameter, but I hope these few lines can help you:
const aws = require('aws-sdk');
aws.config.update({
accessKeyId: config.AWS.accessKeyId,
secretAccessKey: config.AWS.secretAccessKey,
region: config.AWS.region
});
var transcoder = new aws.ElasticTranscoder();
let transcodeVideo = function (key, callback) {
// presets: http://docs.aws.amazon.com/elastictranscoder/latest/developerguide/system-presets.html
let params = {
PipelineId: config.AWS.transcode.video.pipelineId, // specifies output/input buckets in S3
Input: {
Key: key,
},
OutputKeyPrefix: config.AWS.transcode.video.outputKeyPrefix,
Outputs: config.AWS.transcode.video.presets.map(p => {
return {Key: `${key}${p.suffix}`, PresetId: p.presetId};
})
};
params.Outputs[0].ThumbnailPattern = `${key}-{count}`;
transcoder.createJob(params, function (err, data) {
if (!!err) {
logger.err(err);
return;
}
let jobId = data.Job.Id;
logger.info('AWS transcoder job created (' + jobId + ')');
transcoder.waitFor('jobComplete', {Id: jobId}, callback);
});
};
An example configuration file:
let config = {
  AWS: {
    accessKeyId: '',
    secretAccessKey: '',
    region: '',
    videoBucket: 'blabla-media',
    transcode: {
      video: {
        pipelineId: '1450364128039-xcv57g',
        outputKeyPrefix: 'transcoded/', // put the video into the transcoded folder
        presets: [ // Comes from AWS console
          {presetId: '1351620000001-000040', suffix: '_360'},
          {presetId: '1351620000001-000020', suffix: '_480'}
        ]
      }
    }
  }
};
If you want to generate a master playlist you can do it like this.
".ts" segment files cannot be played directly by HLS players; generate an ".m3u8" playlist file.
async function transcodeVideo(mp4Location, outputLocation) {
let params = {
PipelineId: elasticTranscoderPipelineId,
Input: {
Key: mp4Location,
AspectRatio: 'auto',
FrameRate: 'auto',
Resolution: 'auto',
Container: 'auto',
Interlaced: 'auto'
},
OutputKeyPrefix: outputLocation + "/",
Outputs: [
{
Key: "hls2000",
PresetId: "1351620000001-200010",
SegmentDuration: "10"
},
{
Key: "hls1500",
PresetId: "1351620000001-200020",
SegmentDuration: "10"
}
],
Playlists: [
{
Format: 'HLSv3',
Name: 'hls',
OutputKeys: [
"hls2000",
"hls1500"
]
},
],
};
let jobData = await createJob(params);
return jobData.Job.Id;
}
async function createJob(params) {
return new Promise((resolve, reject) => {
transcoder.createJob(params, function (err, data) {
if(err) return reject("err: " + err);
if(data) {
return resolve(data);
}
});
});
}
I have an application using Node and the AWS-SDK package. I am copying objects from one bucket to another using the copyObject method. I'm getting an error that says SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
I've been able to successfully run the code on my local machine and it copies the files from one bucket to another. The error occurs on our AWS server, which I deployed the application to. The full error is:
{ [SignatureDoesNotMatch: The request signature we calculated does not
match the signature you provided. Check your key and signing method.]
message: 'The request signature we calculated does not match the signature you provided. Check your key and signing method.',
code: 'SignatureDoesNotMatch',
region: null,
time: Mon Jul 11 2016 12:11:36 GMT-0400 (EDT),
requestId: <requestId>,
extendedRequestId: <extendedRequestId>,
cfId: undefined,
statusCode: 403,
retryable: false,
retryDelay: 66.48076744750142 }
Also, I'm able to perform the listObjects command. The error is only happening on copyObject.
So far, I've tried:
setting correctClockSkew to true
checking the server's time (same as my local computer)
checking the key/secret (loaded from a config file and working locally)
checking the file names (there are no strange characters: alphanumeric, '.', '-' and '/')
Here is the code causing the problem:
AWS.config.update({
accessKeyId: <accessKeyId>,
secretAccessKey: <secretAccessKey>,
correctClockSkew: true
});
var s3 = new AWS.S3();
var params = {
Bucket: <bucket>,
Prefix: <prefix>
};
s3.listObjects(params, function(err, data) {
if (data.Contents.length) {
async.each(data.Contents, function(file, cb) {
var file_name = file.Key.substr(file.Key.indexOf('/')+1);
var copy_params = {
Bucket: <bucket2>,
CopySource: <bucket> + '/' + file.Key,
Key: file_name,
ACL: 'public-read'
};
s3.copyObject(copy_params, function(copyErr, copyData){
if (copyErr) {
console.log('Error:', copyErr);
}
else {
cb();
}
});
}, function(err){
...
}
});
} else {
...
}
});
Not sure if you've found a solution to this or not, but this was an issue raised on GitHub, and the solution seems to be to simply URL-encode your CopySource parameter with encodeURI():
https://github.com/aws/aws-sdk-js/issues/1949
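Applied to the snippet in the question, that would be a one-line change (the bucket names below are placeholders):
var copy_params = {
  Bucket: 'destination-bucket',
  // URL-encode the source path so spaces and other special characters don't break the signature
  CopySource: encodeURI('source-bucket' + '/' + file.Key),
  Key: file_name,
  ACL: 'public-read'
};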