I have a bucket in S3. I've read about the S3 push event model and I know how to bind an AWS Lambda to a putObject event on the bucket. My problem is that I want to rename a file before it is uploaded to the bucket. For example: in my bucket named 'example' there is a file called 'toto.jpeg', so if a user uploads another image with the same name it will overwrite it, and the Lambda function only receives the information after the file has already been uploaded to the bucket. I also don't want to rename the file on the client. Is there any solution?
The only solution is to specify the name before the file is uploaded. In other words, you have to do this in the client software. A Lambda function can't help you in this case because it won't be fired until after the file is uploaded. If you are worried about duplicate names causing files to be overwritten, you should enable versioning on your S3 bucket.
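For reference, here is a minimal sketch (not part of the original answer) of enabling versioning programmatically with the AWS SDK for JavaScript; the bucket name 'example' comes from the question, and credentials/region setup is assumed to be configured elsewhere:

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

// With versioning enabled, uploading an object with an existing key creates a
// new version instead of overwriting the previous object.
s3.putBucketVersioning({
  Bucket: 'example',
  VersioningConfiguration: { Status: 'Enabled' }
}, function(err, data) {
  if (err) console.error('Could not enable versioning:', err);
  else console.log('Versioning enabled');
});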
The question here is: are you using a backend language such as Node.js or PHP?
If not, and you are using the AWS SDK for the browser, you can do something like this:
<input type="file" accept="image/*" onchange="handleName(this.files)" />
function handleName(files) {
  var file = files[0];
  if (file) {
    // Extract the extension from the original file name (e.g. "jpeg").
    var extension = file.name.slice((file.name.lastIndexOf(".") - 1 >>> 0) + 2);
    // Build a new, unique name while keeping the original extension.
    var fName = shortid.generate() + '.' + extension;
  }
}
Notice that I'm using shortid to generate a unique short ID; see this post if you want to create your own implementation: create uuid in js.
Then you can just call your S3 uploader function inside handleName(). If you want to trigger the upload from a click event instead, give your input an id and get the file using a classic DOM selector: document.getElementById('input').files[0].
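Here is a rough sketch, assuming the AWS SDK for the browser is already configured with credentials and a region, of what the upload call inside handleName() might look like; the bucket name 'example' comes from the question and uploadRenamed is just an illustrative helper name:

// Hypothetical helper: upload the file under its newly generated name.
var s3 = new AWS.S3({ params: { Bucket: 'example' } });

function uploadRenamed(file, fName) {
  s3.upload({
    Key: fName,          // the generated unique name, e.g. 'x7Fd3k.jpeg'
    Body: file,          // the File object from the input element
    ContentType: file.type
  }, function(err, data) {
    if (err) console.error('Upload failed:', err);
    else console.log('Uploaded to:', data.Location);
  });
}

You would call uploadRenamed(file, fName) right after generating fName inside handleName().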
Hope this gives you an idea.
Regards!
I'm using dropzone.js as part of my Flask application, where the user can upload multiple folders at the same time. But sometimes the most valuable information about the files is the folder name.
An example could be:
Folder A
  sample.txt
  img.png
Folder B
  sample.txt
  img.png
Would it be possible to register which files come from Folder A and which come from Folder B, assuming the user drag-and-drops the whole folder into the dropzone?
I figured out how to do this with assistance from this answer and MDN's article on Using FormData objects.
If you define your dropzone like this in the template:
Dropzone.autoDiscover = false;

var myDropzone = new Dropzone("form#my-awesome-dropzone", {
  url: "/upload"
});
You must then add an event handler for the sending event, which takes the file.fullPath string and adds it to a Blob that becomes part of the multipart/form-data submission.
myDropzone.on("sending", function(file, xhr, formData) {
// get the original fullPath and add it to the multipart form.
var blob = new Blob([file.fullPath], { type: "text/plain"});
formData.append("originalPath", blob);
});
Note that this essentially adds a second file called blob with the text/plain mimetype to your form submission. To visualize this, within the upload route on your Flask server you can print(request.files), which gives:
ImmutableMultiDict([
    ('originalPath', <FileStorage: 'blob' ('text/plain')>),
    ('file', <FileStorage: 'up.txt' ('text/plain')>)
])
So to extract the actual string from that on the server end you could do:
original_path = request.files['originalPath'].read().decode('utf-8')
Note that from the frontend perspective, Dropzone doesn't know about parent folders above the file or folder that was dropped, so:
if you drop a file called up.txt directly into Dropzone, original_path will be 'undefined';
if you drop a folder called untitled folder containing a file called up.txt into Dropzone, original_path will be 'untitled folder/up.txt'.
You should build some functionality on the backend to validate the info coming in, and parse these strings accordingly.
I need to upload N files, with some data attached to each file, like the name of the recipient of the file.
To do that, I have something like this:
let form = new FormData();
for (let file of file_list) {
  form.append('files', file);
  form.append('metadata', JSON.stringify(file.metadata));
}
I send that using a simple axios POST, and on the server side, I match each file with its metadata using the index of the file in the list.
It works but is not super reliable.
Is there a reliable way to upload a list of files along with attached metadata?
I'd rather avoid converting to base64 due to the size of the files which can already be pretty high.
When appending a file, you can use the third argument of FormData.append() to set a filename.
By generating a unique name, you can relate each file to its metadata (a server-side sketch of the matching follows the snippet below).
const form = new FormData();
const metadata = {};
let uniqueId = 0;

for (let file of file_list) {
  uniqueId++;
  // Give each file a unique name so it can be matched to its metadata.
  const fileName = `file-${uniqueId}.bin`;
  form.append('files', file, fileName);
  metadata[fileName] = file.metadata;
}

// Send all metadata as a single JSON field, keyed by the generated file names.
form.append('metadata', JSON.stringify(metadata));
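On the server side, here is one hedged sketch (the original question does not say which backend is used) of matching each uploaded file to its metadata by the generated filename, using Express and multer as assumed examples:

const express = require('express');
const multer = require('multer');

const app = express();
const upload = multer(); // keep uploads in memory for this sketch

app.post('/upload', upload.array('files'), (req, res) => {
  // 'metadata' was appended as a JSON string, so it arrives as a text field.
  const metadata = JSON.parse(req.body.metadata);

  for (const file of req.files) {
    // file.originalname is the filename passed as the third append() argument.
    const fileMeta = metadata[file.originalname];
    console.log(file.originalname, '->', fileMeta);
  }

  res.sendStatus(200);
});

app.listen(3000);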
Total Node.js newbie here. I am using the meme-maker package to generate memes. However, I want to create a meme with an image from a URL.
var fileName = 'https://imgflip.com/s/meme/Futurama-Fry.jpg';
var memeMaker = require('meme-maker');

var options = {
  image: fileName,        // Required
  outfile: 'meme.png',    // Required
  topText: 'top',         // Required
  bottomText: 'bottom',   // Optional
};

memeMaker(options, function(err) {
  if (err) throw new Error(err);
  console.log('Image saved: ');
});
However, I get the error: Error: File does not exist: https://imgflip.com/s/meme/Futurama-Fry.jpg
How can I read the file from a URL and make the meme?
If you read the documentation of meme-maker, you will see that it only supports local images, not URLs.
You will need to download the image first, then use the local path. Have a look at the request module.
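A rough sketch of that approach, using the request module suggested above (the local file name is just an example):

var fs = require('fs');
var request = require('request');
var memeMaker = require('meme-maker');

var url = 'https://imgflip.com/s/meme/Futurama-Fry.jpg';
var localPath = 'Futurama-Fry.jpg'; // temporary local copy

// Download the image to disk first, then pass the local path to meme-maker.
request(url)
  .pipe(fs.createWriteStream(localPath))
  .on('close', function() {
    memeMaker({
      image: localPath,       // now a local file, which the library supports
      outfile: 'meme.png',
      topText: 'top',
      bottomText: 'bottom'
    }, function(err) {
      if (err) throw new Error(err);
      console.log('Image saved');
    });
  });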
That library does not look like it supports URLs. The image param presumably takes a file path on the local system. If you want to use the URL to make a meme, you will have to:
Download the image from the URL using an HTTP request or something similar, store it to a file on disk, and get its local path.
Pass the local file path to the library.
Get the generated meme path (and enable download if needed) and do any clean-up, such as deleting the downloaded source image.
I am creating a Node.js app in which users can upload files and download them later.
I am storing the file information (the original file name that the user uploaded, ...) in a MongoDB document, and I name the stored file after that MongoDB document id. Now I want my users to be able to download the file with its original file name.
What I want to know is: when a user sends a GET request to http://myapp.com/mongoDocument_Id,
how do I make sure they get a file named myOriginalfile.ext?
I know about node-static and other modules, but I can't rename the file before sending it.
I am using the koa.js framework.
Here's a simple example using koa-file-server:
var app = require('koa')();
var route = require('koa-route');
var send = require('koa-file-server')({ root: './static' }).send;

app.use(route.get('/:id', function *(id) {
  // TODO: perform lookup from id to filename here.
  // We'll use a hardcoded filename as an example.
  var filename = 'test.txt';

  // Set the looked-up filename as the download name.
  this.attachment(filename);

  // Send the file.
  yield send(this, id);
}));
app.listen(3012);
In short:
the files are stored in ./static using the MongoDB id as their filename
a user requests http://myapp.com/123456
you look up that ID in MongoDB to find out the original filename (in the example above the filename is just hardcoded to test.txt; a sketch of this lookup step follows this list)
the file ./static/123456 is offered as a download using the original filename set in the Content-Disposition header (by using this.attachment(filename)), which will make the browser store it locally as test.txt instead of 123456.
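As a hedged sketch of that lookup step (the monk library, the database name, and a 'files' collection with an originalName field are assumptions, not part of the original answer), the route above could become:

var monk = require('monk');
var db = monk('localhost/myapp');
var files = db.get('files');

app.use(route.get('/:id', function *(id) {
  // Look up the document whose _id matches the requested id.
  var doc = yield files.findOne({ _id: id });
  if (!doc) {
    this.status = 404;
    return;
  }

  // Use the stored original filename for the Content-Disposition header...
  this.attachment(doc.originalName);

  // ...while the file on disk is still named after the MongoDB id.
  yield send(this, id);
}));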
Here is my workflow as of now:
In a button click event, I have search results being exported to a .csv file, which is saved to the server. Once the file is saved, I want to send it for download to the browser. Using this question How to handle conditional file downloads in meteor.js, I created a method that is called after the method that saves the file returns. Here is that method:
exportFiles: function(file_to_export) {
  console.log("to export = " + file_to_export);

  Meteor.Router.add('/export', 'GET', function() {
    console.log('send ' + file_to_export + ' to browser');
    return [200,
      {
        'Content-type': 'text/plain',
        'Content-Disposition': "attachment; filename=" + this.request.query.file
      },
      fs.readFileSync(save_path + this.request.query.file)];
  });
}
My question, however, is how to invoke that route. Using Meteor.Router.to('/export?file=filename.ext') doesn't work, and it causes the user to leave the current page. I want this to appear seamless to the user, and I don't want them to have any idea they are being redirected. Before anyone asks, save_path is declared outside of the method, so it does exist.
I have gotten it! However, it required the use of a few additional packages. First, let me describe the workflow a little more clearly:
A user on our site performs a search. On the subsequent search results page, a button exists that allows the user to export his/her search results to a .csv file. The file is then to be exported to the browser for download.
One concern we had was if a file is written to the server, making sure only the user who is exporting the file has the ability to view the file. To control who had visibility on files, I used a meteorite package, CollectionFS (mrt add collectionFS or clone from github). This package writes file buffers to a mongo collection. Supplying an "owner" field when saving gives you control over access.
Regardless of how the file is created, whether saved to the server via an upload form or generated on the fly the way I did using the json2csv package, the file must be streamed to CollectionFS as a buffer.
var userId = Meteor.userId();
var buffer = new Buffer(csv.length); // csv is a var holding the data to write
var filename = "name_of_file.csv";

for (var i = 0; i < csv.length; i++) {
  buffer[i] = csv.charCodeAt(i);
}

CollectionFS.storeBuffer(filename, buffer, {
  contentType: 'text/plain',
  owner: userId
});
So at this point, I have taken my data file, and streamed it as a buffer into the mongo collection. Because my data exists in memory in the var csv, I stream it as a buffer by looping through each character. If this were a file saved on a physical disk, I would use fs.readFileSync(file) and send the returned buffer to CollectionFS.storeBuffer().
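For completeness, a small sketch of that on-disk variant (the path is a placeholder):

var fs = Npm.require('fs'); // inside Meteor server code

// Read the already-saved file into a buffer and store it the same way.
var diskBuffer = fs.readFileSync('/path/to/name_of_file.csv');

CollectionFS.storeBuffer('name_of_file.csv', diskBuffer, {
  contentType: 'text/plain',
  owner: Meteor.userId()
});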
Now that the file is saved as a buffer in Mongo with an owner, I can control, through the way I publish the CollectionFS collection, who can download/update/delete the file or even know that it exists.
In order to read the file from mongo and send the file to the browser for download, another Javascript library is necessary: FileSaver (github).
Using the retrieveBlob method from CollectionFS, pull your file out of Mongo as a blob by supplying the _id that references the file in your Mongo collection. FileSaver has a method, saveAs, that accepts a blob and exports it to the browser for download under the specified file name.
var file; // the file object stored in Meteor (e.g. a document from your CollectionFS collection)

CollectionFS.retrieveBlob(file._id, function(fileItem) {
  if (fileItem.blob) saveAs(fileItem.blob, file.filename);
  else if (fileItem.file) saveAs(fileItem.file, file.filename);
});
I hope someone will find this useful!
If your route works, then when your method returns you could open a new window containing the link to the file.
You've already added Content-Disposition headers, so the file should always prompt to be saved.
Even if you just redirect to the file, because it has these Content-Disposition headers it will prompt a download and not interrupt your session.
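A minimal sketch of that approach, assuming the exportFiles method from the question and a known file name:

Meteor.call('exportFiles', 'filename.ext', function(err) {
  if (err) return console.error(err);
  // The route sets Content-Disposition: attachment, so the browser offers the
  // file as a download rather than navigating the current page away.
  window.open('/export?file=filename.ext', '_blank');
});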