s3.copyObject does not apply ServerSideEncryption to object in target bucket? - javascript

I'm attempting to perform cross-account backups of any objects from one bucket on ACCOUNT-A to a backup bucket on ACCOUNT-B and I want the objects in the backup bucket to be encrypted using AES256. But the encryption doesn't seem to be getting applied to the objects that land in the backup bucket.
The Setup
ACCOUNT-A has a source bucket called assets.myapp.com
ACCOUNT-B has a target bucket called backup-assets.myapp.com
An s3.ObjectCreated:* bucket event on the assets.myapp.com bucket triggers a Lambda function to copy the newly created object to the backup-assets.myapp.com bucket under ACCOUNT-B.
Attempting to apply ServerSideEncryption: 'AES256' to the objects in the backup-assets.myapp.com bucket once they land there.
The Lambda Function Code
var async = require('async');
var aws = require('aws-sdk');
var s3 = new aws.S3({ apiVersion: '2006-03-01' });

exports.backupObject = function backupS3Object(event, context) {
  if (event.Records === null) {
    return context.fail('NOTICE:', 'No records to process.');
  }

  async.each(event.Records, function (record, iterate) {
    var sourceBucket = record.s3.bucket.name;
    var targetBucket = 'backup-' + record.s3.bucket.name;
    var key = record.s3.object.key;

    s3.copyObject({
      Bucket: targetBucket,
      CopySource: sourceBucket + '/' + key,
      Key: key,
      ACL: 'private',
      ServerSideEncryption: 'AES256',
      MetadataDirective: 'COPY',
      StorageClass: 'STANDARD_IA'
    }, function (error, data) {
      if (error) return iterate(error);
      console.log('SSE: ' + data.ServerSideEncryption);
      console.log('SUCCESS: Backup of ' + sourceBucket + '/' + key);
      return iterate();
    });
  }, function (error) {
    if (error) {
      return context.fail('ERROR:', 'One or more objects could not be copied.');
    }
    return context.done();
  });
};
CloudWatch Log Reports Success
When the function runs, the object is successfully copied, and the CloudWatch log for my Lambda function reports the ServerSideEncryption used as AES256.
However, the S3 Console Disagrees
The problem is that when I inspect Properties > Details of the copied object in the backup-assets.myapp.com bucket under ACCOUNT-B, it reports Server Side Encryption: None.
Any idea why the SSE doesn't seem to be applied to the object when it lands in the backup-assets.myapp.com bucket? Or is it actually being applied and I've just discovered a display bug in the S3 Console?
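To double-check what S3 itself reports, independent of the console, the object's metadata can be queried directly with headObject. This is just a sketch using the backup bucket name from this question and one of the copied objects as the key:
var aws = require('aws-sdk');
var s3 = new aws.S3({ apiVersion: '2006-03-01' });

// headObject returns the x-amz-server-side-encryption value stored with the object.
s3.headObject({
  Bucket: 'backup-assets.myapp.com',
  Key: 'copied-object-one.txt' // one of the copied objects
}, function (error, data) {
  if (error) return console.log('ERROR:', error);
  console.log('SSE reported by S3:', data.ServerSideEncryption); // 'AES256' if encryption was applied
});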
BONUS QUESTION
When I attempt to apply SSE:AES256 to any given object manually using the console, I get the following error:
The additional properties (RRS/SSE) were not enabled or disabled due to errors for the following objects in backup-assets.myapp.com: copied-object-one.txt.
Thanks in advance for any help.

Figured this out.
The problem was with the ACL parameter of the copyObject method.
If you want to use ServerSideEncryption: 'AES256' on the objects that land in the target bucket, you must provide an ACL of bucket-owner-full-control so that the backup bucket can apply the encryption. This isn't documented anywhere that I could find, but I've done extensive testing now (not by choice) and determined that it does work. The working Lambda function code is below:
var async = require('async');
var aws = require('aws-sdk');
var s3 = new aws.S3({ apiVersion: '2006-03-01' });

exports.backupObject = function backupS3Object(event, context) {
  if (event.Records === null) {
    return context.done('NOTICE: No records to process.');
  }

  async.each(event.Records, function (record, iterate) {
    var sourceBucket = record.s3.bucket.name;
    var targetBucket = 'backup-' + record.s3.bucket.name;
    var key = record.s3.object.key;

    s3.copyObject({
      Bucket: targetBucket,
      CopySource: sourceBucket + '/' + key,
      Key: key,
      ACL: 'bucket-owner-full-control',
      ServerSideEncryption: 'AES256',
      MetadataDirective: 'COPY',
      StorageClass: 'STANDARD_IA'
    }, function (error, data) {
      if (error) return iterate(error);
      console.log('SUCCESS: Backup of ' + sourceBucket + '/' + key);
      return iterate();
    });
  }, function (error) {
    return context.done(error);
  });
};
I'm not sure if this is possible using the cross-region, cross-account replication method discussed in the comments on the question above. There doesn't seem to be any way to declare SSE when performing a replication.
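One possible workaround, untested here and assuming your SDK version exposes putBucketEncryption, would be to turn on default encryption on the backup bucket itself so that anything landing there is encrypted with AES256 regardless of how it was copied or replicated. A minimal sketch:
var aws = require('aws-sdk');
var s3 = new aws.S3({ apiVersion: '2006-03-01' });

// Applies SSE-S3 (AES256) by default to all new objects written to the bucket.
s3.putBucketEncryption({
  Bucket: 'backup-assets.myapp.com',
  ServerSideEncryptionConfiguration: {
    Rules: [
      { ApplyServerSideEncryptionByDefault: { SSEAlgorithm: 'AES256' } }
    ]
  }
}, function (error, data) {
  if (error) return console.log('ERROR:', error);
  console.log('Default encryption enabled on the backup bucket.');
});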

Related

How to send parameters from js in a webpage to a node function in aws lambda?

I am attempting to send a JSON file created from fields on a webpage to a Node function in AWS Lambda to add it to a DynamoDB table. I have the JSON made, but I don't know how to pass it from the JS used for the page to the Lambda function. As this is for a class project, my group and I have decided to forego Amazon's API Gateway and are just raw-calling Lambda functions using Amazon's JS SDK. I've checked Amazon's documentation and various other examples, but I haven't been able to find a complete solution.
Node function in lambda
const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

exports.handler = async (event) => {
  const params = {
    TableName: 'characterTable',
    Item: {
      name: 'defaultName'
    }
  };
  const userID = 'placeholder';
  params.Item.userID = userID;
  return await db.put(params).promise();
};
Webpage js:
var lambda = new AWS.Lambda();

function makeJSON() {
  var userID = "";
  var name = document.forms["characterForm"]["characterName"].value;
  var race = document.forms["characterForm"]["race"].value;
  var playerClass = document.forms["characterForm"]["class"].value;
  var strength = document.forms["characterForm"]["strength"].value;
  var dexterity = document.forms["characterForm"]["dexterity"].value;
  var constitution = document.forms["characterForm"]["constitution"].value;
  var intelligence = document.forms["characterForm"]["intelligence"].value;
  var wisdom = document.forms["characterForm"]["wisdom"].value;
  var charisma = document.forms["characterForm"]["charisma"].value;

  var characterSheetObj = {
    userID: userID,
    name: name,
    race: race,
    class: playerClass,
    strength: strength,
    dexterity: dexterity,
    constitution: constitution,
    intelligence: intelligence,
    wisdom: wisdom,
    charisma: charisma
  };
  var characterSheetJSON = JSON.stringify(characterSheetObj);
  alert(characterSheetJSON);

  var myParams = {
    FunctionName: 'addCharacterSheet',
    InvocationType: 'RequestResponse',
    LogType: 'None',
    Payload: characterSheetJSON
  };

  lambda.invoke(myParams, function (err, data) {
    // If it errors, prompt an error message.
    if (err) {
      prompt(err);
    }
    // Otherwise put up a message that it didn't error. The Lambda function presently doesn't do anything;
    // in the future it should produce JSON for the JavaScript here to do something with.
    else {
      alert("Did not error");
    }
  });
}
The HTML page for the raw JavaScript includes the proper setup for importing the SDK and configuring the region/user pool.
I just don't know how to get the payload from the invocation in my Node function, as this is my first time working with Lambda and Amazon's SDK, or doing any web development work at all, to be honest.
I would do it with async/await; it's easier to read. Note that util needs to be required and that await has to run inside an async function:
const util = require('util');

// Make lambda.invoke return a promise instead of taking a callback.
lambda.invoke = util.promisify(lambda.invoke);

const result = await lambda.invoke(yourParams);
const payload = JSON.parse(result.Payload);
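On the Lambda side, the JSON string you pass as Payload arrives as the already-parsed event object, so the handler can read the fields directly. A sketch based on the table and field names in the question:
const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient({ region: 'us-east-1' });

exports.handler = async (event) => {
  // event is the parsed JSON sent as Payload by lambda.invoke
  const params = {
    TableName: 'characterTable',
    Item: {
      userID: event.userID,
      name: event.name,
      race: event.race,
      class: event.class,
      strength: event.strength,
      dexterity: event.dexterity,
      constitution: event.constitution,
      intelligence: event.intelligence,
      wisdom: event.wisdom,
      charisma: event.charisma
    }
  };
  await db.put(params).promise();
  // Whatever is returned here comes back to the browser in result.Payload
  return { saved: params.Item.name };
};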

No 'Access-Control-Allow-Origin' header is present when creating BlobServiceWithSas

This is the first time I have used the Azure Storage JS API. I have followed the instructions in this Microsoft tutorial.
I generate the SAS key on the Node server with successful results, but I still get the authentication failed error. I'm using the libraries provided by Microsoft Azure. How can I fix this?
function test() {
  Restangular.all('cdn/sas').post({ container: 'photos' }).then(function (sas) {
    var blobUri = 'https://hamsar.blob.core.windows.net';
    var blobService = AzureStorage.createBlobServiceWithSas(blobUri, sas.token);

    blobService.listContainersSegmented(null, function (error, results) {
      if (error) {
        // List container error
      } else {
        // Deal with container object
      }
    });
  }, function (error) {
    console.log("Error generating SAS: ", error);
  });
}
Error messages:
According to your error message, you created a service SAS token. But if you want to list all the container names in your storage account, you need to use an account SAS token.
Notice: you could also use blobService.listBlobsSegmented, but then you should make sure your service SAS token has permission to list blob files, and you need to pass the container name.
Like this:
blobService.listBlobsSegmented('mycontainer', null, function (error, results)
If you want to list all the containers, I suggest you follow this code to generate an account SAS.
Code like this:
var getPolicyWithFullPermissions = function () {
  var startDate = new Date();
  var expiryDate = new Date();
  startDate.setTime(startDate.getTime() - 1000);
  expiryDate.setTime(expiryDate.getTime() + 24 * 60 * 60 * 1000);

  var sharedAccessPolicy = {
    AccessPolicy: {
      Services: AccountSasConstants.Services.BLOB +
                AccountSasConstants.Services.FILE +
                AccountSasConstants.Services.QUEUE +
                AccountSasConstants.Services.TABLE,
      ResourceTypes: AccountSasConstants.Resources.SERVICE +
                     AccountSasConstants.Resources.CONTAINER +
                     AccountSasConstants.Resources.OBJECT,
      Permissions: AccountSasConstants.Permissions.READ +
                   AccountSasConstants.Permissions.ADD +
                   AccountSasConstants.Permissions.CREATE +
                   AccountSasConstants.Permissions.UPDATE +
                   AccountSasConstants.Permissions.PROCESS +
                   AccountSasConstants.Permissions.WRITE +
                   AccountSasConstants.Permissions.DELETE +
                   AccountSasConstants.Permissions.LIST,
      Protocols: AccountSasConstants.Protocols.HTTPSORHTTP,
      Start: startDate,
      Expiry: expiryDate
    }
  };
  return sharedAccessPolicy;
};

var sharedAccessSignature = azure.generateAccountSharedAccessSignature(environmentAzureStorageAccount, environmentAzureStorageAccessKey, getPolicyWithFullPermissions());
Then you could use the account SAS to list the account's containers, for example:
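A sketch of that usage; the host URI is a placeholder for your own storage account, and azure is assumed to be the azure-storage module required for the code above:
// Build a blob service client from the account SAS generated above.
var accountBlobUri = 'https://youraccount.blob.core.windows.net';
var blobService = azure.createBlobServiceWithSas(accountBlobUri, sharedAccessSignature);

blobService.listContainersSegmented(null, function (error, results) {
  if (error) {
    console.log('listContainersSegmented failed:', error);
  } else {
    results.entries.forEach(function (container) {
      console.log(container.name);
    });
  }
});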
For more details about the difference between a service SAS and an account SAS, you can refer to this article.

How do you write to the file system of an AWS Lambda instance?

I am unsuccessfully trying to write to the file system of an AWS Lambda instance. The docs say that a standard Lambda instance has 512 MB of space available at /tmp/. However, the following code, which runs fine on my local machine, isn't working at all on the Lambda instance:
var fs = require('fs');

fs.writeFile("/tmp/test.txt", "testing", function (err) {
  if (err) {
    return console.log(err);
  }
  console.log("The file was saved!");
});
The code in the anonymous callback function is never getting called on the Lambda instance. Has anyone had any success doing this? Thanks so much for your help.
It's possible that this is a related question. Is it possible that there is some kind of conflict going on between the S3 code and what I'm trying to do with the fs callback function? The code below is what's currently being run.
console.log('Loading function');

var aws = require('aws-sdk');
var s3 = new aws.S3({ apiVersion: '2006-03-01' });
var fs = require('fs');

exports.handler = function (event, context) {
  //console.log('Received event:', JSON.stringify(event, null, 2));

  // Get the object from the event and show its content type
  var bucket = event.Records[0].s3.bucket.name;
  var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  var params = {
    Bucket: bucket,
    Key: key
  };

  s3.getObject(params, function (err, data) {
    if (err) {
      console.log(err);
      var message = "Error getting object " + key + " from bucket " + bucket +
        ". Make sure they exist and your bucket is in the same region as this function.";
      console.log(message);
      context.fail(message);
    } else {
      //console.log("DATA: " + data.Body.toString());
      fs.writeFile("/tmp/test.csv", "testing", function (err) {
        if (err) {
          context.failed("writeToTmp Failed " + err);
        } else {
          context.succeed("writeFile succeeded");
        }
      });
    }
  });
};
Modifying your code into the Lambda template worked for me. I think you need to assign a function to exports.handler and call the appropriate context.succeed() or context.fail() method. Otherwise, you just get generic errors.
var fs = require("fs");
exports.handler = function(event, context) {
fs.writeFile("/tmp/test.txt", "testing", function (err) {
if (err) {
context.fail("writeFile failed: " + err);
} else {
context.succeed("writeFile succeeded");
}
});
};
So the answer lies in the context.fail() and context.succeed() functions. Being completely new to the world of AWS and Lambda, I was ignorant of the fact that calling either of these methods stops execution of the Lambda instance.
According to the docs:
The context.succeed() method signals successful execution and returns
a string.
By eliminating these calls and only making them after I had run all the code that I wanted, everything worked well.
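Put differently, here is a minimal sketch of the reordered handler (same bucket event and /tmp path as above), with context.fail()/context.succeed() only in the innermost callbacks so nothing ends the invocation early:
var aws = require('aws-sdk');
var fs = require('fs');
var s3 = new aws.S3({ apiVersion: '2006-03-01' });

exports.handler = function (event, context) {
  var bucket = event.Records[0].s3.bucket.name;
  var key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

  s3.getObject({ Bucket: bucket, Key: key }, function (err, data) {
    if (err) return context.fail('getObject failed: ' + err);

    // Only signal completion once the write has actually finished.
    fs.writeFile('/tmp/test.csv', data.Body, function (err) {
      if (err) return context.fail('writeFile failed: ' + err);
      context.succeed('writeFile succeeded');
    });
  });
};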
I ran into this, and it seems like AWS Lambda may be using an older (or modified) version of fs. I figured this out by logging the response from fs.writeFile and noticed it wasn't a promise.
To get around this, I wrapped the call in a promise:
var promise = new Promise(function (resolve, reject) {
  fs.writeFile('/tmp/test.txt', 'testing', function (err) {
    if (err) {
      reject(err);
    } else {
      resolve();
    }
  });
});
Hopefully this helps someone else.
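As a side note, on runtimes where util.promisify is available (an assumption about the Node version in use), the same wrapper can be produced without writing the Promise by hand:
const util = require('util');
const fs = require('fs');

// Promise-returning version of fs.writeFile
const writeFile = util.promisify(fs.writeFile);

exports.handler = async (event) => {
  await writeFile('/tmp/test.txt', 'testing');
  return 'writeFile succeeded';
};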

Why does S3.deleteObject not fail when the specified key doesn't exist?

Using the AWS SDK for Node, why do I not get an error when trying to delete an object that doesn't exist (i.e. the S3 key is wrong)?
If I specify a non-existent bucket on the other hand, an error is produced.
If you consider the following Node program, the Key parameter lists a key which doesn't exist in the bucket, yet the error argument to the callback is null:
var aws = require('aws-sdk')

function getSetting(name) {
  var value = process.env[name]
  if (value == null) {
    throw new Error('You must set the environment variable ' + name)
  }
  return value
}

var s3Client = new aws.S3({
  accessKeyId: getSetting('AWSACCESSKEYID'),
  secretAccessKey: getSetting('AWSSECRETACCESSKEY'),
  region: getSetting('AWSREGION'),
  params: {
    Bucket: getSetting('S3BUCKET'),
  },
})

var picturePath = 'nothing/here'

s3Client.deleteObject({
  Key: picturePath,
}, function (err, data) {
  console.log('Delete object callback:', err)
})
Because that's what the specs say it should do.
deleteObject(params = {}, callback) ⇒ AWS.Request
Removes the null version (if there is one) of an object and inserts a
delete marker, which becomes the latest version of the object. If
there isn't a null version, Amazon S3 does not remove any objects.
So if the object doesn't exist, it's still not an error when calling deleteObject, and if versioning is enabled, it adds a delete marker even though there was nothing to delete previously.
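If you do need to know whether the key was actually there, one option (a sketch reusing the s3Client and picturePath from the question) is to call headObject first, since it, unlike deleteObject, returns a NotFound error for a missing key:
// Check for the object before deleting it.
s3Client.headObject({ Key: picturePath }, function (err) {
  if (err && err.code === 'NotFound') {
    return console.log('No object to delete at key:', picturePath)
  }
  if (err) {
    return console.log('headObject failed:', err)
  }
  s3Client.deleteObject({ Key: picturePath }, function (err) {
    console.log('Delete object callback:', err)
  })
})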

Fine Uploader to S3 Folder

Does anyone know if it's possible to upload to a folder inside an S3 bucket using Fine Uploader? I have tried adding a '\foldername' to the request endpoint but had no luck with that.
Thanks.
S3 doesn't have folders. Some UIs (such as the one in the AWS console) and higher-level APIs may provide this concept, as an abstraction, but S3 doesn't have any native understanding of folders, only keys. You can create any key you want and pass it along to Fine Uploader S3, it will store that file given that specific key. For example, "folder1/subfolder/foobar.txt" is a perfectly valid key name. By default, Fine Uploader generates a UUID and uses that as your object's key name, but you can easily change this by providing an objectProperties.key value, which can also be a function. See the S3 objectProperties option in the documentation for more details.
This works for me:
var uploader = new qq.s3.FineUploader({
  // ...
  objectProperties: {
    key: function (fileId) {
      return 'folder/within/bucket/' + this.getUuid(fileId);
    }
  }
});
$("#fineuploader-s3").fineUploaderS3({
  // ....
  objectProperties: {
    key: function (fileId) {
      var filename = $('#fineuploader-s3').fineUploader('getName', fileId);
      var uuid = $('#fineuploader-s3').fineUploader('getUuid', fileId);
      var ext = filename.substr(filename.lastIndexOf('.') + 1);
      return 'folder/within/bucket/' + uuid + '.' + ext;
    }
  }
});
You can modify the path before the upload is sent to the server:
uploader.on('submit', (id) => {
  const key = uploader.methods.getUuid(id)
  const newkey = 'folder/within/bucket/' + key
  uploader.methods.setUuid(id, newkey)
})
