I get the error AWS::S3::Errors::InvalidRequest The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. when I try to upload a file to an S3 bucket in the new Frankfurt region. Everything works properly with the US Standard region.
Script:
backup_file = '/media/db-backup_for_dev/2014-10-23_02-00-07/slave_dump.sql.gz'
s3 = AWS::S3.new(
access_key_id: AMAZONS3['access_key_id'],
secret_access_key: AMAZONS3['secret_access_key']
)
s3_bucket = s3.buckets['test-frankfurt']
# Folder and file name
s3_name = "database-backups-last20days/#{File.basename(File.dirname(backup_file))}_#{File.basename(backup_file)}"
file_obj = s3_bucket.objects[s3_name]
file_obj.write(file: backup_file)
aws-sdk (1.56.0)
How to fix it?
Thank you.
AWS4-HMAC-SHA256, also known as Signature Version 4, ("V4") is one of two authentication schemes supported by S3.
All regions support V4, but US Standard¹ and many (though not all) other regions also support the older scheme, Signature Version 2 ("V2").
According to http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html ... new S3 regions deployed after January, 2014 will only support V4.
Since Frankfurt was introduced late in 2014, it does not support V2, which is what this error suggests you are using.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html explains how to enable V4 in the various SDKs, assuming you are using an SDK that has that capability.
I would speculate that some older versions of the SDKs might not support this option, so if the above doesn't help, you may need a newer release of the SDK you are using.
¹US Standard is the former name for the S3 regional deployment that is based in the us-east-1 region. Since the time this answer was originally written,
"Amazon S3 renamed the US Standard Region to the US East (N. Virginia) Region to be consistent with AWS regional naming conventions." For all practical purposes, it's only a change in naming.
With node, try:
var AWS = require('aws-sdk');

var s3 = new AWS.S3({
    endpoint: 's3-eu-central-1.amazonaws.com',
    signatureVersion: 'v4',
    region: 'eu-central-1'
});
You should set signatureVersion: 'v4' in the config to use the new signature version:
AWS.config.update({
signatureVersion: 'v4'
});
This works for the JS SDK.
For people using boto3 (the Python SDK), use the code below:
import boto3
from botocore.client import Config

s3 = boto3.resource(
's3',
aws_access_key_id='xxxxxx',
aws_secret_access_key='xxxxxx',
config=Config(signature_version='s3v4')
)
I have been using Django, and I had to add these extra config variables to make this work (in addition to the settings mentioned in https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html).
AWS_S3_REGION_NAME = "ap-south-1"
Or, prior to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Similar issue with the PHP SDK; this works:
$s3Client = S3Client::factory(array('key'=>YOUR_AWS_KEY, 'secret'=>YOUR_AWS_SECRET, 'signature' => 'v4', 'region'=>'eu-central-1'));
The important bits are the signature and the region.
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
This also saved my time after searching for 24 hours.
Code for Flask (boto3)
Don't forget to import Config. Also, if you have your own config class, then change its name.
import boto3
from botocore.client import Config

# app is your Flask application
s3 = boto3.client('s3', config=Config(signature_version='s3v4'),
                  region_name=app.config["AWS_REGION"],
                  aws_access_key_id=app.config['AWS_ACCESS_KEY'],
                  aws_secret_access_key=app.config['AWS_SECRET_KEY'])
s3.upload_fileobj(file,app.config["AWS_BUCKET_NAME"],file.filename)
url = s3.generate_presigned_url('get_object', Params = {'Bucket':app.config["AWS_BUCKET_NAME"] , 'Key': file.filename}, ExpiresIn = 10000)
In Java I had to set a property
System.setProperty(SDKGlobalConfiguration.ENFORCE_S3_SIGV4_SYSTEM_PROPERTY, "true")
and add the region to the s3Client instance.
s3Client.setRegion(Region.getRegion(Regions.EU_CENTRAL_1))
With boto3, this is the code:
s3_client = boto3.resource('s3', region_name='eu-central-1')
or
s3_client = boto3.client('s3', region_name='eu-central-1')
For thumbor-aws, which uses the boto config, I needed to put this in the $AWS_CONFIG_FILE:
[default]
aws_access_key_id = (your ID)
aws_secret_access_key = (your secret key)
s3 =
    signature_version = s3
So for anything that uses boto directly without changes, this may be useful.
Supernova's answer for django/boto3/django-storages worked for me:
AWS_S3_REGION_NAME = "ap-south-1"
Or, prior to boto3 version 1.4.4:
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Just add them to your settings.py and change the region code accordingly.
You can check the list of AWS regions in the AWS documentation.
For the Android SDK, setEndpoint solves the problem, although it has been deprecated.
CognitoCachingCredentialsProvider credentialsProvider = new CognitoCachingCredentialsProvider(
context, "identityPoolId", Regions.US_EAST_1);
AmazonS3 s3 = new AmazonS3Client(credentialsProvider);
s3.setEndpoint("s3.us-east-2.amazonaws.com");
Basically, the error was because I was using an old version of the aws-sdk, so I updated the version and the error went away.
In my case with Node.js, I was using signatureVersion in the params object like this:
const AWS_S3 = new AWS.S3({
    params: {
        Bucket: process.env.AWS_S3_BUCKET,
        signatureVersion: 'v4',
        region: process.env.AWS_S3_REGION
    }
});
Then I moved signatureVersion out of the params object and it worked like a charm:
const AWS_S3 = new AWS.S3({
    params: {
        Bucket: process.env.AWS_S3_BUCKET,
        region: process.env.AWS_S3_REGION
    },
    signatureVersion: 'v4'
});
Check your AWS S3 bucket region and pass the proper region in the connection request.
In my scenario, I have set 'APSouth1' for Asia Pacific (Mumbai):
using (var client = new AmazonS3Client(awsAccessKeyId, awsSecretAccessKey, RegionEndpoint.APSouth1))
{
GetPreSignedUrlRequest request1 = new GetPreSignedUrlRequest
{
BucketName = bucketName,
Key = keyName,
Expires = DateTime.Now.AddMinutes(50),
};
urlString = client.GetPreSignedURL(request1);
}
In my case, the request type was wrong. I was using GET (dumb); it must be PUT.
Here is the function I used with Python:
import os
import boto3
# settings and logger are defined elsewhere in the project
# (for example a Django-style settings module and a logging.Logger).

def uploadFileToS3(filePath, s3FileName):
    s3 = boto3.client('s3',
                      endpoint_url=settings.BUCKET_ENDPOINT_URL,
                      aws_access_key_id=settings.BUCKET_ACCESS_KEY_ID,
                      aws_secret_access_key=settings.BUCKET_SECRET_KEY,
                      region_name=settings.BUCKET_REGION_NAME
                      )
    try:
        s3.upload_file(
            filePath,
            settings.BUCKET_NAME,
            s3FileName
        )
        # remove file from local to free up space
        os.remove(filePath)
        return True
    except Exception as e:
        logger.error('uploadFileToS3#Error')
        logger.error(e)
        return False
Sometimes the default version will not update. Add this setting
AWS_S3_SIGNATURE_VERSION = "s3v4"
to settings.py.
For boto3, use this code:
import boto3
from botocore.client import Config
s3 = boto3.resource('s3',
aws_access_key_id='xxxxxx',
aws_secret_access_key='xxxxxx',
region_name='us-south-1',
config=Config(signature_version='s3v4')
)
Try this combination.
const s3 = new AWS.S3({
endpoint: 's3-ap-south-1.amazonaws.com', // Bucket region
accessKeyId: 'A-----------------U',
secretAccessKey: 'k------ja----------------soGp',
Bucket: 'bucket_name',
useAccelerateEndpoint: true,
signatureVersion: 'v4',
region: 'ap-south-1' // Bucket region
});
I was stuck for 3 days and finally, after reading a ton of blogs and answers, I was able to configure an Amazon AWS S3 bucket.
On the AWS Side
I am assuming you have already
Created an s3-bucket
Created a user in IAM
Steps
Configure CORS settings
your bucket > permissions > CORS configuration
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Generate a bucket policy
your bucket > permissions > bucket policy
It should be similar to this one:
{
    "Version": "2012-10-17",
    "Id": "Policy1602480700663",
    "Statement": [
        {
            "Sid": "Stmt1602480694902",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::harshit-portfolio-bucket/*"
        }
    ]
}
PS: The bucket policy should say "Public" after this.
Configure Access Control List
your bucket > permissions > access control list
Give public access.
PS: Access Control List should say public after this
Unblock public Access
your bucket > permissions > Block Public Access
Edit and turn all options Off
On a side note, if you are working with Django, add the following lines to the settings.py file of your project:
#S3 BUCKETS CONFIG
AWS_ACCESS_KEY_ID = '****not to be shared*****'
AWS_SECRET_ACCESS_KEY = '*****not to be shared******'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# look for files first in aws
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
# In India these settings work
AWS_S3_REGION_NAME = "ap-south-1"
AWS_S3_SIGNATURE_VERSION = "s3v4"
Also coming from: https://simpleisbetterthancomplex.com/tutorial/2017/08/01/how-to-setup-amazon-s3-in-a-django-project.html
For me this was the solution:
AWS_S3_REGION_NAME = "eu-central-1"
AWS_S3_ADDRESSING_STYLE = 'virtual'
This needs to be added to settings.py in your Django project
Using the PHP SDK, follow the example below:
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Aws\S3\Exception\S3Exception;
$client = S3Client::factory(
array(
'signature' => 'v4',
'region' => 'me-south-1',
'key' => YOUR_AWS_KEY,
'secret' => YOUR_AWS_SECRET
)
);
Nodejs
var aws = require("aws-sdk");
aws.config.update({
region: process.env.AWS_REGION,
secretAccessKey: process.env.AWS_S3_SECRET_ACCESS_KEY,
accessKeyId: process.env.AWS_S3_ACCESS_KEY_ID,
});
var s3 = new aws.S3({
signatureVersion: "v4",
});
let data = await s3.getSignedUrl("putObject", {
ContentType: mimeType, //image mime type from request
Bucket: "MybucketName",
Key: folder_name + "/" + uuidv4() + "." + mime.extension(mimeType),
Expires: 300,
});
console.log(data);
AWS S3 Bucket Permission Configuration
Deselect Block All Public Access
Add the policy below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::MybucketName/*"]
        }
    ]
}
Then take the returned URL and make a PUT request to it with the binary file of the image.
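For completeness, a minimal sketch of that client-side PUT (the file path and content type below are placeholders, and the Content-Type header must match the ContentType used when the URL was signed):
// Sketch: PUT the binary file to the presigned URL returned above.
// File path and content type are placeholders, not fixed values.
import { readFileSync } from 'fs';

async function uploadWithPresignedUrl(presignedUrl: string): Promise<void> {
    const body = readFileSync('./photo.png');        // the binary file to upload
    const response = await fetch(presignedUrl, {     // global fetch (Node 18+)
        method: 'PUT',                               // must be PUT, matching 'putObject'
        headers: { 'Content-Type': 'image/png' },    // must match the signed ContentType
        body,
    });
    console.log(response.status);                    // expect 200 on success
}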
Full working nodejs version:
const AWS = require('aws-sdk');
var s3 = new AWS.S3({
    endpoint: 's3.eu-west-2.amazonaws.com',
    signatureVersion: 'v4',
    region: 'eu-west-2'
});

const getPreSignedUrl = async () => {
    const params = {
        Bucket: 'some-bucket-name/some-folder',
        Key: 'some-filename.json',
        Expires: 60 * 60 * 24 * 7
    };
    try {
        const presignedUrl = await new Promise((resolve, reject) => {
            s3.getSignedUrl('getObject', params, (err, url) => {
                err ? reject(err) : resolve(url);
            });
        });
        console.log(presignedUrl);
    } catch (err) {
        if (err) {
            console.log(err);
        }
    }
};

getPreSignedUrl();
Related
I'm using External Secrets to sync my secrets from Azure, and now I need a programmatic way to trigger the sync. With kubectl the command is:
kubectl annotate es my-es force-sync=$(date +%s) --overwrite
So I am trying to use the k8s JS SDK to do this. I can successfully get the External Secret:
await crdApi.getNamespacedCustomObject("external-secrets.io", "v1beta1", "default", "externalsecrets", "my-es")
However, when I try to update it with patchNamespacedCustomObject, it always tells me "the body of the request was in an unknown format - accepted media types include: application/json-patch+json, application/merge-patch+json, application/apply-patch+yaml"
Here's my code
const kc = new k8s.KubeConfig();
kc.loadFromString(kubeConfig);
const crdApi = kc.makeApiClient(k8s.CustomObjectsApi);
let patch = [{
"op": "replace",
"path": "/metadata/annotations",
"value": {
"force-sync": "1663315075"
}
}];
await crdApi.patchNamespacedCustomObject("external-secrets.io", "v1beta1", "default", "externalsecrets", "my-es", patch);
Referring to their patch example here, the options object that sets the Content-Type header:
const options = {
    "headers": {
        "Content-type": k8s.PatchUtils.PATCH_FORMAT_JSON_PATCH
    }
};
is still required, and it has to be passed in the options argument of patchNamespacedCustomObject.
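Putting that together, a sketch of the full call; the number of undefined placeholders for the optional dryRun/fieldManager/force parameters can differ between @kubernetes/client-node versions, so check the signature of the version you have installed:
// Sketch: force-sync the ExternalSecret by patching its annotations with a JSON Patch,
// passing the Content-Type header in the trailing options argument.
import * as k8s from '@kubernetes/client-node';

async function forceSync(): Promise<void> {
    const kc = new k8s.KubeConfig();
    kc.loadFromDefault();                                 // or kc.loadFromString(kubeConfig)
    const crdApi = kc.makeApiClient(k8s.CustomObjectsApi);

    const patch = [{
        op: 'replace',
        path: '/metadata/annotations',
        value: { 'force-sync': String(Math.floor(Date.now() / 1000)) },  // like $(date +%s)
    }];

    const options = {
        headers: { 'Content-type': k8s.PatchUtils.PATCH_FORMAT_JSON_PATCH },
    };

    await crdApi.patchNamespacedCustomObject(
        'external-secrets.io', 'v1beta1', 'default', 'externalsecrets', 'my-es',
        patch,
        undefined, undefined, undefined,  // dryRun, fieldManager, force (count varies by version)
        options,
    );
}

forceSync().catch(console.error);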
Not sure if this is the best place to post this question; please redirect me if it isn't and I will remove the post and post it in the correct location.
I know that Amazon S3 recently changed the URL format for accessing files.
It used to be something like http://s3.amazonaws.com/<bucket> or http://s3.<region>.amazonaws.com/<bucket>
But this has changed to http://<bucket>.s3-<aws-region>.amazonaws.com or http://<bucket>.s3.amazonaws.com, according to this documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro
http://<bucket>.s3.amazonaws.com would not be reachable after March 20, 2019, BUT when I use the aws-sdk in JavaScript to do a file upload with skipper-better-s3, the URL I get back from AWS is http://<bucket>.s3.amazonaws.com/<Key>
If that URL is not supposed to be reachable, why would AWS return such a URL? (I can still access the file using the URL.)
If that URL is not supposed to be reachable in the near future, am I supposed to add in the region myself or modify the URL myself instead of using the URL returned by AWS?
Or might it be my code's problem?
Below is my code for the upload
const awsOptions = { // these fields are different because this uses skipper
adapter: require('skipper-better-s3'),
key: aws_access_key,
secret: aws_secret_key,
saveAs: PATH,
bucket: BUCKET,
s3params: {
ACL: 'public-read'
},
}
const fieldName = req._fileparser.upstreams[0].fieldName;
req.file(fieldName).upload(awsOptions, (err, filesUploaded) => {
if (err) reject(err);
const filesUploadedF = filesUploaded[0]; // F = first file
const url = filesUploadedF.extra.Location; // image url -> https://<bucket>.s3.amazonaws.com/<Key>
console.log(url, 'urlurlurl');
});
filesUploadedF would return
UploadedFileMetadata {
fd: '<Key>',
size: 4337,
type: 'image/png',
filename: 'filename.png',
status: 'bufferingOrWriting',
field: 'image',
extra:
{ ETag: '111111111111111111111',
Location: 'https://<bucket>.s3.amazonaws.com/<Key>',
key: '<key>',
Key: '<Key>',
Bucket: '<Bucket>',
md5: '32890jf32890jf0892j3f',
fd: '<Key>',
ContentType: 'image/png' }
}
The documentation you linked to for http://<bucket>.s3.amazonaws.com style naming says this:
Note
Buckets created in Regions launched after March 20, 2019 are not reachable via the https://bucket.s3.amazonaws.com naming scheme.
The wording there is important. They're only talking about new regions brought online after March 20, 2019.
To date, that's only buckets created in the Middle East (Bahrain) and Asia Pacific (Hong Kong) regions.
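If you would rather not depend on the Location value the library hands back, a simple option is to build the regional virtual-hosted-style URL yourself. A sketch (bucket, region and key below are placeholders):
// Sketch: build the virtual-hosted-style URL with an explicit region,
// instead of relying on the legacy <bucket>.s3.amazonaws.com form.
// Bucket, region and key are placeholders.
function regionalObjectUrl(bucket: string, region: string, key: string): string {
    // encode each key segment but keep the '/' separators readable
    const encodedKey = key.split('/').map(encodeURIComponent).join('/');
    return `https://${bucket}.s3.${region}.amazonaws.com/${encodedKey}`;
}

console.log(regionalObjectUrl('my-bucket', 'ap-southeast-1', 'uploads/photo.png'));
// -> https://my-bucket.s3.ap-southeast-1.amazonaws.com/uploads/photo.png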
I have a simple Node.js script which works fine when run locally in the terminal:
exports.google_translate = function (translate_text, res) {
var Translate = require('@google-cloud/translate');
var translate = new Translate.Translate({projectId: 'my project'});
translate.translate(translate_text, 'fr').then(results => {
var translation = results[0];
res.send(translation);
}).catch(err => {
res.send('ERROR:', err);
});
}
However whenever I call this via Ajax, I get the following error:
Error: The request is missing a valid API key.
I already added this as a permanent environment variable using this:
export GOOGLE_APPLICATION_CREDENTIALS="[PATH to key downloaded]"
But still each time I call this script via Ajax, I get the same error. So my question is, how can I get the Node JS script to save the API key so that it works when called via Ajax?
Thanks
It seems that, for whatever reason, the application cannot read the environment variable correctly. Since Node.js stores all environment variables in process.env, you could ensure that it is set by calling:
function google_translate(translate_text) {
process.env.GOOGLE_APPLICATION_CREDENTIALS = "[PATH to key downloaded]";
return translate.translate(translate_text, 'fr')
.then(console.log)
.catch(console.error);
}
or pass the key directly to the constructor with
const translate = new Translate.Translate({
projectId: 'my-project',
keyFilename: "[PATH to key downloaded]"
});
You can also ensure the key file is read on your end and just pass the config to the translate constructor:
const fs = require('fs');

const translate = new Translate.Translate({
    credentials: JSON.parse(fs.readFileSync("[PATH to key downloaded]", "utf8"))
});
If it still does not help, maybe it's an issue with the key itself, and you could try generating a new one here: https://console.cloud.google.com/apis/credentials
const {Translate} = require('@google-cloud/translate').v2;
const translate = new Translate({
credentials: {
"type": "account",
"project_id": "your_project",
"private_key_id": "your_data",
"private_key": "your_data",
"client_email": "your_data",
"client_id": "your_data",
"auth_uri": "your_data",
"token_uri": "your_data",
"auth_provider_x509_cert_url": "your_data",
"client_x509_cert_url": "your_data"
}
});
const text = 'This is testing!';
const target = 'de';
async function translateText() {
// Translates the text into the target language. "text" can be a string for
// translating a single piece of text, or an array of strings for translating
// multiple texts.
let [translations] = await translate.translate(text, target);
translations = Array.isArray(translations) ? translations : [translations];
console.log('Translations:' + translations);
}
translateText();
You must take this credentials.json file from your project on Google Cloud. They will provide you with a .json file.
I'm trying to create a user in an AWS user pool from an AWS Lambda.
I tried with this script, taken from what seems to be the official JavaScript SDK for AWS, but can't get it working: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CognitoIdentityServiceProvider.html#adminCreateUser-property
I keep getting this error: TypeError: cognitoidentityserviceprovider.adminCreateUser is not a function
var cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider({apiVersion: '2016-04-18'});
var params = {
UserPoolId: 'eu-west-1_XXXXXXXX', /* required */
Username: 'me@example.com', /* required */
DesiredDeliveryMediums: [
'EMAIL'
],
ForceAliasCreation: false,
MessageAction: 'SUPPRESS',
TemporaryPassword: 'tempPassword1',
UserAttributes: [
{
Name: 'email', /* required */
Value: 'me@example.com'
},
{
Name: 'name', /* required */
Value: 'Me'
},
{
Name: 'last_name', /* required */
Value: 'lastme'
}
/* more items */
]
};
cognitoidentityserviceprovider.adminCreateUser(params, function(err, data) {
if (err) console.log(err, err.stack); // an error occurred
else console.log(data); // successful response
callback(null, data);
});
For adminCreateUser, you basically require the aws-sdk, configure credentials, instantiate the client and call the specific operation:
var aws = require('aws-sdk');
aws.config.update({accessKeyId: 'akid', secretAccessKey: 'secret'});
var CognitoIdentityServiceProvider = aws.CognitoIdentityServiceProvider;
var client = new CognitoIdentityServiceProvider({ apiVersion: '2016-04-18' });
//your code goes here
Note that there might be different ways in which you can configure the AWS credentials to call the operation. You do need credentials as this is an authenticated operation. Other admin operations are similar, you just need to pass the appropriate parameters as JSON in the call.
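As an aside, a hedged sketch of the variant without hard-coded keys: inside Lambda the SDK normally picks up credentials from the function's execution role, assuming that role is allowed to call cognito-idp:AdminCreateUser (the region below is a placeholder):
// Sketch: rely on the Lambda execution role's credentials instead of aws.config.update with keys.
// Assumption: the role has permission for cognito-idp:AdminCreateUser. Region is a placeholder.
import * as aws from 'aws-sdk';

const client = new aws.CognitoIdentityServiceProvider({
    apiVersion: '2016-04-18',
    region: 'eu-west-1',   // match your user pool's region
});

// then call client.adminCreateUser(params, callback) with the params from the question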
According to this, the AWS SDK for JavaScript version 2.7.25, which contains the adminCreateUser operation, should be available:
http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
Try bundling the latest aws-sdk into your uploaded package instead of relying on the one made available by default.
Source: AWS Cognito adminCreateUser from Lambda
I have an application using Node and the AWS-SDK package. I am copying objects from one bucket to another using the copyObject method. I'm getting an error that says SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
I've been able to successfully run the code on my local machine and it copies the files from one bucket to another. The error occurs on our AWS server, which I deployed the application to. The full error is:
{ [SignatureDoesNotMatch: The request signature we calculated does not
match the signature you provided. Check your key and signing method.]
message: 'The request signature we calculated does not match the signature you provided. Check your key and signing method.',
code: 'SignatureDoesNotMatch',
region: null,
time: Mon Jul 11 2016 12:11:36 GMT-0400 (EDT),
requestId: <requestId>,
extendedRequestId: <extendedRequestId>,
cfId: undefined,
statusCode: 403,
retryable: false,
retryDelay: 66.48076744750142 }
Also, I'm able to perform the listObjects command. The error is only happening on copyObject.
So far, I've tried
setting correctClockSkew to true
checked the servers time (same as local computer)
checked the key/secret (loading from a config file and is working locally)
checked the file names (there are no strange characters. Alphanumeric, '.', '-' and '/')
Here is the code causing the problem:
AWS.config.update({
accessKeyId: <accessKeyId>,
secretAccessKey: <secretAccessKey>,
correctClockSkew: true
});
var s3 = new AWS.S3();
var params = {
Bucket: <bucket>,
Prefix: <prefix>
};
s3.listObjects(params, function(err, data) {
if (data.Contents.length) {
async.each(data.Contents, function(file, cb) {
var file_name = file.Key.substr(file.Key.indexOf('/')+1);
var copy_params = {
Bucket: <bucket2>,
CopySource: <bucket> + '/' + file.Key,
Key: file_name,
ACL: 'public-read'
};
s3.copyObject(copy_params, function(copyErr, copyData){
if (copyErr) {
console.log('Error:', copyErr);
}
else {
cb();
}
});
}, function(err){
...
});
} else {
...
}
});
Not sure if you've found a solution to this or not, but this was an issue raised on GitHub, and the solution seems to be to simply URL-encode your CopySource parameter with encodeURI():
https://github.com/aws/aws-sdk-js/issues/1949
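Applied to the snippet above, that would look roughly like this; a sketch where the bucket names are placeholders and only the CopySource line actually changes:
// Sketch: URL-encode CopySource with encodeURI(), as suggested in the linked issue.
// Bucket names are placeholders; sourceKey corresponds to file.Key in the loop above.
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3();

function copyWithEncodedSource(sourceBucket: string, destBucket: string, sourceKey: string): void {
    const params: AWS.S3.CopyObjectRequest = {
        Bucket: destBucket,
        CopySource: encodeURI(`${sourceBucket}/${sourceKey}`),  // the important change
        Key: sourceKey.substr(sourceKey.indexOf('/') + 1),
        ACL: 'public-read',
    };
    s3.copyObject(params, (copyErr, copyData) => {
        if (copyErr) console.log('Error:', copyErr);
        else console.log('Copied:', copyData);
    });
}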