How to serve static files in node.js after renaming them? - javascript

I am creating a node.js app in which users can upload files and download them later.
I am storing the file information (the original file name the user uploaded, ...) in a MongoDB document and naming the stored file after that document's id. Now I want my users to be able to download the file under its original file name.
What I want to know is: when a user sends a GET request to http://myapp.com/mongoDocument_Id, the user should get a file named myOriginalfile.ext.
I know about node-static and other modules, but I can't rename the files with them before sending.
I am using the koa.js framework.

Here's a simple example using koa-file-server:
var app = require('koa')();
var route = require('koa-route');
var send = require('koa-file-server')({ root: './static' }).send;

app.use(route.get('/:id', function *(id) {
  // TODO: perform the lookup from id to filename here.
  // We'll use a hardcoded filename as an example.
  var filename = 'test.txt';

  // Set the looked-up filename as the download name.
  this.attachment(filename);

  // Send the file.
  yield send(this, id);
}));

app.listen(3012);
In short:
the files are stored in ./static using the MongoDB id as their filename
a user requests http://myapp.com/123456
you look up that ID in MongoDB to find out the original filename (in the example above, the filename is just hardcoded to test.txt; a sketch of the actual lookup follows this list)
the file ./static/123456 is offered as a download using the original filename set in the Content-Disposition header (by using this.attachment(filename)), which will make the browser store it locally as test.txt instead of 123456.
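The TODO in the handler above is where that MongoDB lookup goes. As a rough sketch only, assuming an uploads collection whose documents carry an originalName field, and a 2.x-style mongodb driver whose findOne returns a promise (which koa v1 generators can yield), the handler could look something like this; adjust the names to your own schema:
var MongoClient = require('mongodb').MongoClient;
var ObjectID = require('mongodb').ObjectID;

var db;
MongoClient.connect('mongodb://localhost:27017/myapp', function (err, database) {
  if (err) throw err;
  db = database;
});

app.use(route.get('/:id', function *(id) {
  // Look up the original file name that was stored when the file was uploaded.
  var doc = yield db.collection('uploads').findOne({ _id: new ObjectID(id) });
  if (!doc) return this.status = 404;

  // Offer ./static/<id> for download under its original name.
  this.attachment(doc.originalName);
  yield send(this, id);
}));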

Related

AngularJS file download issue due to spaces in the file name

I am trying to download a file using AngularJS. Currently I am sending a GET request to do it.
My file is flower-red.jpg, and my request looks like this:
GET http://localhost:8080/aml/downloadDoc/852410507V/flower-red.jpg
This downloads correctly. But if the file name has spaces, it does not download. For example, with the file flower - red.jpg my request looks like this:
GET http://localhost:8080/aml/downloadDoc/852410507V/flower%20-%20red.jpg
This does not download, because the request is changed due to the spaces in the file name.
Here is how I fixed the issue:
var url = "http://localhost:8080/aml/downloadDoc/852410507V/flower - red.jpg";
var urlRemoveSpace = url.split(" ").join(""); // strip the spaces from the URL
// now you can send the GET request to download using urlRemoveSpace

Node Js Meme generation error: File does not exist

Total Node.js newbie here. I am using the meme-maker package to generate memes. However, I want to create the meme from an image at a URL:
var memeMaker = require('meme-maker');

var fileName = 'https://imgflip.com/s/meme/Futurama-Fry.jpg';
var options = {
  image: fileName,       // Required
  outfile: 'meme.png',   // Required
  topText: 'top',        // Required
  bottomText: 'bottom',  // Optional
};

memeMaker(options, function(err) {
  if (err) throw new Error(err);
  console.log('Image saved: ');
});
However, I get this error: Error: File does not exist: https://imgflip.com/s/meme/Futurama-Fry.jpg
How can I read the file from the URL and make a meme?
If you read the documentation of meme-maker, you will see that it only supports local images and not URLs.
You will need to download the image first and then use the local path. Have a look at request.
That library does not look like it supports URLs. The image param presumably takes a file path on the local system. If you want to use the URL to make a meme, you will have to (see the sketch after this list):
Download the image from the URL, store it in a file on disk and get its local path.
Pass that local file path to the library.
Get the generated meme's path (and enable download if needed) and do cleanup, such as deleting the downloaded source image.
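A rough sketch of that flow, using the request package mentioned above (the local file names here are just examples, not anything the library prescribes):
var fs = require('fs');
var request = require('request');
var memeMaker = require('meme-maker');

var url = 'https://imgflip.com/s/meme/Futurama-Fry.jpg';
var localPath = 'Futurama-Fry.jpg'; // temporary local copy of the source image

// 1. Download the image to disk.
request(url)
  .pipe(fs.createWriteStream(localPath))
  .on('finish', function () {
    // 2. Point meme-maker at the local path instead of the URL.
    memeMaker({
      image: localPath,     // Required
      outfile: 'meme.png',  // Required
      topText: 'top',       // Required
      bottomText: 'bottom'  // Optional
    }, function (err) {
      if (err) throw new Error(err);
      console.log('Image saved');
      // 3. Clean up the downloaded source image.
      fs.unlink(localPath, function () {});
    });
  });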

renaming file before uploading it in S3

So I have a bucket in S3. I have read about the S3 push event model and I know how to bind an AWS Lambda function to a putObject event on the bucket. My problem is that I want to rename the file before it is uploaded to the bucket. For example, in my bucket named 'example' there is a file called 'toto.jpeg'; if a user uploads another image with the same name it will overwrite it, and the Lambda function only receives the information after the file has already been uploaded to the bucket. And I don't want to rename the file on the client. Is there any solution?
The only solution is to specify the name before the file is uploaded. In other words, you have to do this in the client software. A Lambda function can't help you in this case because it won't be fired until after the file is uploaded. If you are worried about duplicate names causing files to be overwritten, you should enable versioning on your S3 bucket.
The question here is: are you using a backend language such as Node.js or PHP?
If not, and you are using the AWS SDK for the browser, you can do something like this:
<input type="file" accept="image/*" onchange="handleName(this.files)" />
function handleName (files) {
var file = files[0];
if (file) {
var getExtension = file.name.slice((file.name.lastIndexOf(".") - 1 >>> 0) + 2);
var fName = shortid.generate() + '.' + getExtension;
}
}
Notice that I'm using shortid to generate a unique short id; see this post if you want to create your own implementation: create uuid in js.
Then you can just call your S3 uploader function inside handleName(). If you want to trigger the upload from a click event instead, give your input an id and get the file with a classic DOM selector: document.getElementById('input').files[0].
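As a rough sketch of that uploader call (the bucket name is a placeholder, and it assumes the browser SDK's credentials, e.g. via Cognito, are already configured):
function uploadToS3(file, key) {
  var s3 = new AWS.S3({ params: { Bucket: 'example' } }); // placeholder bucket
  s3.upload({ Key: key, Body: file, ContentType: file.type }, function (err, data) {
    if (err) return console.error('Upload failed', err);
    console.log('Uploaded as', data.Key);
  });
}

// Inside handleName(), after computing fName:
// uploadToS3(file, fName);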
hope this can give you an idea.
regards!

Upload a file to Windows Azure when only the link to the file is provided

How can I upload a file to Azure if I only have the URL of the file to upload? In this case, I'm using the Dropbox file chooser, which selects a file from Dropbox and returns its URL, e.g.
https://www.dropbox.com/s/o9myet72y19iaan/Getting%20Started.pdf
Now we need the file to be stored in a Windows Azure blob. What is the easiest way to do this without downloading the file first?
I'm planning to use an ASP.NET Web API for uploading the file to the Azure blob.
At first, I thought it should be quite straightforward, as Azure Blob Storage supports copying blobs from an external URL. However, I don't think this works directly with Dropbox links; I just tried it and got an error.
The link you mentioned above is not a direct link to the file. It's a link to a page on Dropbox's website from where you can download the file, which is obviously not what you want. Here's an alternate solution which you can try.
The simplest fix (based on #smarx's comments below) is to replace www.dropbox.com in your URL with dl.dropboxusercontent.com and use that URL in the code below.
Alternatively, you can resolve the direct link yourself. First, append dl=1 to your request URL as a query string parameter, so your Dropbox URL becomes https://www.dropbox.com/s/o9myet72y19iaan/Getting%20Started.pdf?dl=1 (the dl parameter indicates that the file should be downloaded). Next, access this URL using HttpWebRequest; Dropbox will respond with a 302 status code and another link, something like https://dl.dropboxusercontent.com/s/o9myet72y19iaan/Getting%20Started.pdf?token_hash=<tokenhash>. Use that link in the code below to copy the file.
CloudStorageAccount acc = new CloudStorageAccount(new StorageCredentials("account", "key"), false);
var client = acc.CreateCloudBlobClient();
var container = client.GetContainerReference("container-name");
container.CreateIfNotExists();

var blob = container.GetBlockBlobReference("dropbox-file-name");
blob.StartCopyFromBlob(new Uri("dropbox URL with dl.dropboxusercontent.com"));
Console.WriteLine("Copy request accepted");
Console.WriteLine("Now checking for copy state");

bool continueLoop = true;
do
{
    blob.FetchAttributes();
    var copyState = blob.CopyState;
    switch (copyState.Status)
    {
        case CopyStatus.Pending:
            Console.WriteLine("Copy is still pending. Will check status again after 1 second.");
            System.Threading.Thread.Sleep(1000); // Copy is still pending...check after 1 second
            break;
        default:
            Console.WriteLine("Terminating process with copy state = " + copyState.Status);
            continueLoop = false;
            break;
    }
}
while (continueLoop);
Console.WriteLine("Press any key to continue.");

Download files from server using Meteor.js

Here is my workflow as of now:
In a button click event, I have search results exported to a .csv file, which is saved to the server. Once the file is saved, I want to send it to the browser for download. Using this question, How to handle conditional file downloads in meteor.js, I created a method that is called after the method that saves the file returns. Here is that method:
exportFiles: function(file_to_export) {
  console.log("to export = " + file_to_export);
  Meteor.Router.add('/export', 'GET', function() {
    console.log('send ' + file_to_export + ' to browser');
    return [200,
      {
        'Content-type': 'text/plain',
        'Content-Disposition': "attachment; filename=" + this.request.query.file
      },
      fs.readFileSync(save_path + this.request.query.file)];
  });
}
My question, however, is how to invoke that route. Using Meteor.Router.to('/export?file=filename.ext') doesn't work, and it causes the user to leave the current page. I want this to appear seamless to the user, and I don't want them to have any idea they are being redirected. Before anyone asks, save_path is declared outside of the method, so it does exist.
I have gotten it! However, it required the use of a few additional packages. First, let me describe the workflow a little more clearly:
A user on our site performs a search. On the subsequent search results page, a button exists that allows the user to export his/her search results to a .csv file. The file is then to be exported to the browser for download.
One concern we had was that, if a file is written to the server, only the user exporting the file should have the ability to view it. To control who has visibility of a file, I used a Meteorite package, CollectionFS (mrt add collectionFS, or clone it from GitHub). This package writes file buffers to a Mongo collection, and supplying an "owner" field when saving gives you control over access.
Regardless of how the file is created, whether saved to the server via an upload form or generated on the fly the way I did using the json2csv package, the file must be streamed to CollectionFS as a buffer.
var userId = Meteor.userId();
var buffer = Buffer(csv.length); // csv is a var holding the data to write
var filename = "name_of_file.csv";

for (var i = 0; i < csv.length; i++) {
  buffer[i] = csv.charCodeAt(i);
}

CollectionFS.storeBuffer(filename, buffer, {
  contentType: 'text/plain',
  owner: userId
});
So at this point, I have taken my data file, and streamed it as a buffer into the mongo collection. Because my data exists in memory in the var csv, I stream it as a buffer by looping through each character. If this were a file saved on a physical disk, I would use fs.readFileSync(file) and send the returned buffer to CollectionFS.storeBuffer().
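For the on-disk case, that would look something like this (reusing save_path and the file name from the code above purely as an illustration):
var buffer = fs.readFileSync(save_path + 'name_of_file.csv'); // returns a Buffer
CollectionFS.storeBuffer('name_of_file.csv', buffer, {
  contentType: 'text/plain',
  owner: Meteor.userId()
});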
Now that the file is saved as a buffer in Mongo with an owner, I can control, through the way I publish the CollectionFS collection, who can download/update/delete the file or even know that it exists.
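For example, an owner-restricted publication could look roughly like this (the publication name is made up, and it assumes the CollectionFS instance exposes the usual collection find):
Meteor.publish('myExportedFiles', function () {
  // Only publish file records owned by the logged-in user.
  return CollectionFS.find({ owner: this.userId });
});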
In order to read the file from mongo and send the file to the browser for download, another Javascript library is necessary: FileSaver (github).
Using the retrieveBlob method from CollectionFS, pull your file out of Mongo as a blob by supplying the _id that references the file in your Mongo collection. FileSaver has a method, saveAs, that accepts a blob and exports it to the browser for download under a specified file name.
var file = ...; // file object stored in Meteor (the CollectionFS record)
CollectionFS.retrieveBlob(file._id, function(fileItem) {
  if (fileItem.blob) saveAs(fileItem.blob, file.filename);
  else if (fileItem.file) saveAs(fileItem.file, file.filename);
});
I hope someone will find this useful!
If your route works, then when your method returns you could open a new window containing the link to the file.
You've already added Content-Disposition headers, so the file should always prompt to be saved.
Even if you just redirect to the file, because it has these Content-Disposition headers it will ask to be saved and will not interrupt your session.
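For example, something like this on the client once the exportFiles call returns (the file name here is just the illustrative one from the question):
Meteor.call('exportFiles', file_to_export, function (err) {
  if (err) return console.error(err);
  // The Content-Disposition header on the route makes the browser save the
  // file instead of navigating away from the current page.
  window.open('/export?file=filename.ext', '_blank');
});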
