Does anyone know if it's possible to upload to a folder inside an S3 bucket using Fine Uploader? I have tried adding a '\foldername' to the request endpoint but had no luck with that.
Thanks.
S3 doesn't have folders. Some UIs (such as the one in the AWS console) and higher-level APIs provide the concept as an abstraction, but S3 itself has no native understanding of folders, only keys. You can create any key you want and pass it along to Fine Uploader S3, and it will store the file under that specific key. For example, "folder1/subfolder/foobar.txt" is a perfectly valid key name. By default, Fine Uploader generates a UUID and uses that as your object's key, but you can easily change this by providing an objectProperties.key value, which can also be a function. See the S3 objectProperties option in the documentation for more details.
This works for me:
var uploader = new qq.s3.FineUploader({
    // ...
    objectProperties: {
        key: function (fileId) {
            return 'folder/within/bucket/' + this.getUuid(fileId);
        }
    }
});
$("#fineuploader-s3").fineUploaderS3({
// ....
objectProperties: {
key: function (fileId) {
var filename = $('#fineuploader-s3').fineUploader('getName', fileId);
var uuid = $('#fineuploader-s3').fineUploader('getUuid', fileId);
var ext = filename.substr(filename.lastIndexOf('.') + 1);
return 'folder/within/bucket/' + uuid + '.' + ext;
}
}
});
You can also modify the key before the upload starts by changing the file's UUID (Fine Uploader uses the UUID as the key by default):

uploader.on('submit', (id) => {
    const uuid = uploader.methods.getUuid(id);
    const newKey = 'folder/within/bucket/' + uuid;
    uploader.methods.setUuid(id, newKey);
});
I'd like to write a Thunderbird add-on that encrypts stuff. For this, I have already extracted all the data from the compose window. Now I have to save this data into files and run a local executable for encryption. But I have found no way to save the files and run an executable on the local machine. How can I do that?
I found the File and Directory Entries API documentation, but it does not seem to work. I always get undefined when trying to get the object with this code:
var filesystem = FileSystemEntry.filesystem;
console.log(filesystem); // --> undefined
At least, is there a working add-on that I can examine to find out how this works, and maybe what permissions I have to request in the manifest.json?
NOTE: Must work cross-platform (Windows and Linux).
The answer is that WebExtensions are currently not able to execute local files, and saving to an arbitrary local folder on disk is not possible either.
Instead, you need to add a WebExtension Experiment to your project and use the legacy APIs there. In the experiment you can use the IOUtils and FileUtils modules to reach your goal.
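The experiment first has to be registered in the add-on's manifest.json. Here is a minimal sketch of that wiring, assuming the schema and implementation files live in an experiment/ folder (the file names and the "experiment" namespace are placeholders, not a fixed API):

{
    "manifest_version": 2,
    "name": "My Encryption AddOn",
    "version": "1.0",
    "experiment_apis": {
        "experiment": {
            "schema": "experiment/schema.json",
            "parent": {
                "scopes": ["addon_parent"],
                "paths": [["experiment"]],
                "script": "experiment/implementation.js"
            }
        }
    }
}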
Execute a file:
In your background JS file:
var ret = await browser.experiment.execute("/usr/bin/executable", [ "-v" ]);
In the experiment you can implement the execution like this:
var { ExtensionCommon } = ChromeUtils.import("resource://gre/modules/ExtensionCommon.jsm");
var { FileUtils } = ChromeUtils.import("resource://gre/modules/FileUtils.jsm");
var { XPCOMUtils } = ChromeUtils.import("resource://gre/modules/XPCOMUtils.jsm");
var { Services } = ChromeUtils.import("resource://gre/modules/Services.jsm");
XPCOMUtils.defineLazyGlobalGetters(this, ["IOUtils"]);
async execute(executable, arrParams) {
    var fileExists = await IOUtils.exists(executable);
    if (!fileExists) {
        Services.wm.getMostRecentWindow("mail:3pane")
            .alert("Executable [" + executable + "] not found!");
        return false;
    }
    var progPath = new FileUtils.File(executable);
    let process = Cc["@mozilla.org/process/util;1"].createInstance(Ci.nsIProcess);
    process.init(progPath);
    process.startHidden = false;
    process.noShell = true;
    // Run blocking, passing the parameter array to the executable.
    process.run(true, arrParams, arrParams.length);
    return true;
},
Save an attachment to disk:
In your background JS file you can do it like this:
var f = await messenger.compose.getAttachmentFile(attachment.id);
var blob = await f.arrayBuffer();
var t = await browser.experiment.writeFileBinary(tempFile, blob);
In the experiment you can then write the file like this:
async writeFileBinary(filename, data) {
    // Wrap the ArrayBuffer in a Uint8Array so IOUtils.write can handle it.
    var uint8 = new Uint8Array(data);
    // Then we can save it.
    var ret = await IOUtils.write(filename, uint8);
    return ret;
},
IOUtils documentation:
https://searchfox.org/mozilla-central/source/dom/chrome-webidl/IOUtils.webidl
FileUtils documentation:
https://searchfox.org/mozilla-central/source/toolkit/modules/FileUtils.jsm
The following is my cloud function code.
const functions = require('firebase-functions');
const gcs = require('@google-cloud/storage')();
const path = require('path');
const os = require('os');
const fs = require('fs');
const ffmpeg = require('fluent-ffmpeg');

// Turn a fluent-ffmpeg command into a promise: resolve on 'end', reject on 'error'.
function promisifyCommand(command) {
    return new Promise((resolve, reject) => {
        command.on('end', resolve).on('error', reject).run();
    });
}

exports.increaseVolume = functions.storage.object().onFinalize(async (object) => {
    const fileBucket = object.bucket; // The Storage bucket that contains the file.
    const filePath = object.name; // File path in the bucket.
    const contentType = object.contentType; // File content type.
    // Exit if this is triggered on a file that is not an mp4 video.
    if (!contentType.startsWith('video/mp4')) {
        console.log('This is not an mp4 file.');
        return null;
    }
    // Get the file name.
    const fileName = path.basename(filePath);
    // Exit if the file is already converted.
    if (fileName.endsWith('_output.mp4')) {
        console.log('Already a converted file.');
        return null;
    }
    // Download file from bucket.
    const bucket = gcs.bucket(fileBucket);
    const tempFilePath = path.join(os.tmpdir(), fileName);
    // We add an '_output.mp4' suffix to the target file name. That's where we'll upload the converted file.
    const targetTempFileName = fileName.replace(/\.[^/.]+$/, '') + '_output.mp4';
    const targetTempFilePath = path.join(os.tmpdir(), targetTempFileName);
    const targetStorageFilePath = path.join(path.dirname(filePath), targetTempFileName);
    await bucket.file(filePath).download({destination: tempFilePath});
    console.log('Audio downloaded locally to', tempFilePath);
    // Increase the volume and denoise the audio using FFMPEG.
    let command = ffmpeg(tempFilePath)
        .audioFilters([
            {
                filter: 'volume',
                options: '5dB'
            },
            {
                filter: 'afftdn'
            }
        ])
        .format('mp4')
        .output(targetTempFilePath);
    await promisifyCommand(command);
    console.log('Output audio created at', targetTempFilePath);
    // Uploading the audio.
    await bucket.upload(targetTempFilePath, {destination: targetStorageFilePath});
    console.log('Output audio uploaded to', targetStorageFilePath);
    // Once the audio has been uploaded delete the local files to free up disk space.
    fs.unlinkSync(tempFilePath);
    fs.unlinkSync(targetTempFilePath);
    return console.log('Temporary files removed.', targetTempFilePath);
});
This is how the file appears in my storage bucket. Where do I get the download link, or how can I access the file? When I type the link into the browser, it returns a JSON response saying 403 - unauthorized access.
You need to use the getSignedUrl() method, as follows:

const uploadResp = await bucket.upload(targetTempFilePath, {destination: targetStorageFilePath});
const file = uploadResp[0];
const options = {
    action: 'read',
    expires: '03-17-2025'
};
const getSignedUrlResponse = await file.getSignedUrl(options);
const url = getSignedUrlResponse[0];
// Do whatever you want with this url: save it in Firestore, for example
The message you are facing is about permissions: who should have access to what?
First of all, you have to think about your business logic. Who should have access to that file? Is it acceptable for the file to be public? Is there a time frame in which the file should be accessible? Should the data remain in the bucket, or can it be deleted afterwards?
I think there are two options for you:
Create a bucket with public data; in that case the data would be accessible to everyone who knows the specific file name in the bucket.
If that is not allowed, you can create a signed URL, as mentioned in @Renaud Tarnec's answer. Keep in mind that a signed URL grants time-limited access: everyone who has the URL can get the object until it expires, and once the time has expired the object is no longer accessible via that URL.
Once you have defined this, you can either delete the object in your bucket programmatically or set up Lifecycle Management. There you can define configurations consisting of a set of rules, for example, deleting objects once they reach a certain age in days.
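Such a rule can also be set from code. Here is a minimal sketch using the @google-cloud/storage Node.js client (the bucket name and the 30-day age are placeholder assumptions):

const {Storage} = require('@google-cloud/storage');
const storage = new Storage();

// Ask GCS to delete objects automatically once they are older than 30 days.
async function addDeleteAfter30DaysRule() {
    await storage.bucket('my-bucket').addLifecycleRule({
        action: {type: 'Delete'},
        condition: {age: 30}
    });
}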
I'm attempting to perform cross-account backups of any objects from one bucket on ACCOUNT-A to a backup bucket on ACCOUNT-B and I want the objects in the backup bucket to be encrypted using AES256. But the encryption doesn't seem to be getting applied to the objects that land in the backup bucket.
The Setup
ACCOUNT-A has a source bucket called assets.myapp.com
ACCOUNT-B has a target bucket called backup-assets.myapp.com
An s3.ObjectCreated:* bucket event on the assets.myapp.com bucket triggers a Lambda function to copy the newly created object to the backup-assets.myapp.com bucket under ACCOUNT-B.
Attempting to apply ServerSideEncryption: 'AES256' to the objects in the backup-assets.myapp.com bucket once they land there.
The Lambda Function Code
var async = require('async');
var aws = require('aws-sdk');
var s3 = new aws.S3({ apiVersion: '2006-03-01' });

exports.backupObject = function backupS3Object(event, context) {
    if (event.Records === null) {
        return context.fail('NOTICE:', 'No records to process.');
    }
    async.each(event.Records, function (record, iterate) {
        var sourceBucket = record.s3.bucket.name;
        var targetBucket = 'backup-' + record.s3.bucket.name;
        var key = record.s3.object.key;
        s3.copyObject({
            Bucket : targetBucket,
            CopySource : sourceBucket + '/' + key,
            Key : key,
            ACL : 'private',
            ServerSideEncryption : 'AES256',
            MetadataDirective : 'COPY',
            StorageClass : 'STANDARD_IA'
        }, function (error, data) {
            if (error) return iterate(error);
            console.log('SSE: ' + data.ServerSideEncryption);
            console.log('SUCCESS: Backup of ' + sourceBucket + '/' + key);
            return iterate();
        });
    }, function (error) {
        if (error) {
            return context.fail('ERROR:', 'One or more objects could not be copied.');
        }
        return context.done();
    });
};
CloudWatch Log Reports Success
When the function runs, the object is successfully copied, and the CloudWatch log for my Lambda function reports the ServerSideEncryption used as AES256.
However, the S3 Console Disagrees
But the problem is that when I inspect the Properties > Details of the copied object in the backup-assets.myapp.com bucket under ACCOUNT-B, it reports Server Side Encryption: None.
Any idea why the SSE doesn't seem to be applied to the object when it lands in the backup-assets.myapp.com bucket? Or is it actually being applied, and I've just discovered a display bug in the S3 console?
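One way to double-check outside the console is to ask S3 directly for the object's metadata. A minimal sketch using headObject from the same aws-sdk (the bucket and key names are examples):

s3.headObject({
    Bucket: 'backup-assets.myapp.com',
    Key: 'copied-object-one.txt'
}, function (error, data) {
    if (error) return console.error(error);
    console.log('SSE:', data.ServerSideEncryption); // expect 'AES256'
});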
BONUS QUESTION
When I attempt to apply SSE:AES256 to any given object manually using the console, I get the following error: The additional properties (RRS/SSE) were not enabled or disabled due to errors for the following objects in backup-assets.myapp.com: copied-object-one.txt.
Thanks in advance for any help.
Figured this out.
The problem was with the ACL parameter of the copyObject method.
If you want to use ServerSideEncryption: 'AES256' on the objects that land in the target bucket, you must provide an ACL of bucket-owner-full-control so that the owner of the backup bucket can apply the encryption. This is not documented anywhere (that I found), but I've now done extensive testing (not by choice) and determined that this does work. The working Lambda function code is below:
var async = require('async');
var aws = require('aws-sdk');
var s3 = new aws.S3({ apiVersion: '2006-03-01' });

exports.backupObject = function backupS3Object(event, context) {
    if (event.Records === null) {
        return context.done('NOTICE: No records to process.');
    }
    async.each(event.Records, function (record, iterate) {
        var sourceBucket = record.s3.bucket.name;
        var targetBucket = 'backup-' + record.s3.bucket.name;
        var key = record.s3.object.key;
        s3.copyObject({
            Bucket : targetBucket,
            CopySource : sourceBucket + '/' + key,
            Key : key,
            ACL : 'bucket-owner-full-control',
            ServerSideEncryption : 'AES256',
            MetadataDirective : 'COPY',
            StorageClass : 'STANDARD_IA'
        }, function (error, data) {
            if (error) return iterate(error);
            console.log('SUCCESS: Backup of ' + sourceBucket + '/' + key);
            return iterate();
        });
    }, function (error) {
        return context.done(error);
    });
};
I'm not sure if this is possible using the cross-region, cross-account replication method discussed in the comments on the question above. There doesn't seem to be any way to declare SSE when configuring replication.
I am trying to convert an XML string that I get from a server to JSON inside my Lambda function.
I have set up this rather simple example to simulate the XML answer that I get from the server, using DynamoDB. (Currently I'm just trying to get the conversion going.)
'use strict';
var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient({region: 'eu-west-1'});

exports.handler = function (e, ctx, callback) {
    let table = "dsbTable";
    let bpNumber = 1337;
    var test;
    var x2js = new X2JS();
    let params = {
        TableName: table,
        Key: {
            "bpNumber": bpNumber
        },
    };
    docClient.get(params, function (err, data) {
        if (err) {
            console.error("Unable to read item. Error JSON:", JSON.stringify(err, null, 2));
            callback(err, null);
        } else {
            console.log("GetItem succeeded:", JSON.stringify(data, null, 2));
            console.log('test' + data.Item.getBp);
            //var jsonObj = x2js.xml_str2json(data.Item.getBp);
            //console.log(jsonObj);
            callback(null, data);
        }
    });
};
Getting the item works just fine; it is returned like this:
{
    "Item": {
        "getBp": "<message version=\"1.0\" system=\"AVS/3\"><header><client>553</client><avs3-sales-organization>7564</avs3-sales-organization><avs3-service-provider>DSD</avs3-service-provider></header><body><business-partner><salutation-code>01</salutation-code><titel-code-academic/><titel-academic/><titel-code-royal/><titel-royal/><job-titel/><last-name1>Pickle</last-name1><last-name2/><first-name>N</first-name><street/><street-suffix/><street-number/><street-number-suffix/><address-line-1>10 Waterside Way</address-line-1><address-line-2/><address-line-3/><zipcode>NN4 7XD</zipcode><country-code>GB</country-code><city>NORTHAMPTON</city><district/><region-code>NH</region-code><region-text>Northamptonshire</region-text><company1/><company2/><company3/><department/><po-box/><po-box-zipcode/><po-box-city/><po-box-country-code/><major-customer-zipcode/><address-source/><advertisement>Y</advertisement><category/><bp-number>1100000772</bp-number><bp-number-external/><bp-group>ABON</bp-group><eu-sales-tax-number/><bic-master-number/><sector/><communication><communication-type>WW</communication-type><communication-value>kate.southorn#dsbnet.co.uk</communication-value><communication-default>Y</communication-default></communication><attribute><attribute-type>ACC</attribute-type><attribute-value>Y</attribute-value></attribute><attribute><attribute-type>OIEMEX</attribute-type><attribute-value>N20121211</attribute-value></attribute><attribute><attribute-type>OINLIN</attribute-type><attribute-value>N20121211</attribute-value></attribute><attribute><attribute-type>OISMEX</attribute-type><attribute-value>N20121211</attribute-value></attribute><attribute><attribute-type>OISMIN</attribute-type><attribute-value>N20121211</attribute-value></attribute><attribute><attribute-type>OOEMIN</attribute-type><attribute-value>N20121211</attribute-value></attribute><attribute><attribute-type>OOFXEX</attribute-type><attribute-value>N20121211</attribute-value></attribute><attribute><attribute-type>OOFXIN</attribute-type><attribute-value>N20121211</attribute-value></attribute><attribute><attribute-type>OOPTEX</attribute-type><attribute-value>N20121211</attribute-value></attribute><attribute><attribute-type>OOPTIN</attribute-type><attribute-value>N20121211</attribute-value></attribute><attribute><attribute-type>OOTEEX</attribute-type><attribute-value>N20121211</attribute-value></attribute><attribute><attribute-type>OOTEIN</attribute-type><attribute-value>N20121211</attribute-value></attribute><attribute><attribute-type>THEDSU</attribute-type><attribute-value/></attribute></business-partner></body></message>",
        "bpNumber": 1337
    }
}
My main issue now is that I cannot figure out how I can import an XML-to-JSON library like this one here.
I hope my code in this case is not completely worthless and there is a rather simple solution.
You're going down the path that many new Lambda users have gone down before.
With Lambda it is absolutely easy: you just write your code and validate that it works as expected, first on your own computer.
Once you have validated it, do as follows:
Zip the entire folder's contents, including the node_modules directory and any other dependencies that you use.
Upload the zip to Lambda.
If you accidentally zipped the containing folder as well, that is fine; just make sure to update Lambda to run the script from dir_name/file_name.function_name (and don't forget to export function_name from your module).
The handler name is always <filename>.<function name>; if the filename is mentioned incorrectly, a similar error is thrown in the CloudWatch logs.
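Once the dependency is bundled in the zip, requiring it works exactly as it does locally. A minimal sketch, assuming the x2js npm package was installed with npm install x2js before zipping:

// x2js was installed into node_modules and zipped together with the code.
var X2JS = require('x2js');
var x2js = new X2JS();

// Convert the XML string fetched from DynamoDB into a plain JS object.
var jsonObj = x2js.xml2js(data.Item.getBp);
console.log(JSON.stringify(jsonObj, null, 2));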
In our Meteor app, the client will upload some files using collectionFS-filesystem, which I store in an uploads folder in the root directory of my app.
//My CFS uploads collection
uploads = new FS.Collection("uploads", {
    stores: [new FS.Store.FileSystem("uploads", {path: "~/uploads"})]
});
Later, I want to save the files to the database using collectionFS-gridFS.
//My CFS grid collection
files = new FS.Collection("files", {
    stores: [new FS.Store.GridFS("files")]
});
How do I read the data from the file on the server so that I can store the file in the db? Can I use the file from the CFS-filesystem collection and convert it to a CFS-gridFS file in any way?
Thanks in advance.
I accepted the answer by @perusopersonale. However, below is the approach I used to achieve this, based on documentation from here and here:
uploads.find(document_id).forEach(function (fileObj) {
    var type = fileObj.type();
    var name = fileObj.name();
    var readStream = fileObj.createReadStream(fileObj.collectionName);
    var newFile = new FS.File();
    newFile.attachData(readStream, {type: type});
    newFile.name(name);
    files.insert(newFile);
});
I don't understand why you want to use both. However, I had to implement something similar (read from a CFS filesystem store, do something, and then reinsert into another db). Here is a modified version that should accomplish what you are trying to do:
var fs = Npm.require('fs'); // Node's fs module, available on the Meteor server

var fileObj = uploads.findOne(objId);
var newName = "newfilename"; // new file name
// fileObj.copies.uploads.key contains the filename for the "uploads" store.
// Note: fs does not expand '~', so use an absolute path here if needed.
fs.readFile("~/uploads/" + fileObj.copies.uploads.key, function (err, data) {
    var newFile = new FS.File();
    newFile.attachData(data, {type: 'application/octet-stream'}, function (error) {
        newFile.name(newName);
        files.insert(newFile);
    });
});