I'm looking to create static text files based upon the content of a supplied object, which can then be downloaded by the user. Here's what I was planning on doing:
When the user hits 'export' the application calls a Meteor.method() which, in turn, parses and writes the file to the public directory using typical Node methods.
Once the file is created, in the callback from Meteor.method() I provide a link to the generated file. For example, 'public/userId/file.txt'. The user can then choose to download the file at that link.
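Roughly, I was imagining something like the following for that method (just a sketch; exportObject, the serialization and the exact paths are placeholders):

// Rough sketch of the plan above (server side); 'exportObject' and the paths are placeholders.
Meteor.methods({
    exportObject: function (obj) {
        var fs = Npm.require('fs');
        // Parse/serialize the supplied object and write it with typical Node methods.
        var filePath = process.env.PWD + '/public/' + this.userId + '/file.txt';
        fs.writeFileSync(filePath, JSON.stringify(obj, null, 2), 'utf8');
        return this.userId + '/file.txt'; // link handed back to the client in the callback
    }
});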
I then use Meteor's Connect module (which it uses internally) to route any requests to the above URL to the file itself. I could do some permissions checking based on the userId and the logged-in state of the user.
The problem: When static files are generated in public, the web page automatically reloads each time. I thought that it might make more sense to use something like Express to generate a REST endpoint, which could deal with creating the files. But then I'm not sure how to deal with permissions if I don't have access to the Meteor session data.
Any ideas on the best strategy here?
In versions 0.6.6.3 and 0.7.x - 1.3.x you can do the following:
To write
var fs = Npm.require('fs');
var filePath = process.env.PWD + '/.uploads_dir_on_server/' + fileName;
fs.writeFileSync(filePath, data, 'binary');
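For the original use case (writing the file when the user hits 'export'), that write would typically live inside a server-side method. A minimal sketch, where generateExport and the directory are placeholders:

// Minimal sketch: 'generateExport' and '.uploads_dir_on_server' are placeholders.
var fs = Npm.require('fs');

Meteor.methods({
    generateExport: function (fileName, data) {
        check(fileName, String); // never trust a client-supplied file name blindly
        var filePath = process.env.PWD + '/.uploads_dir_on_server/' + fileName;
        fs.writeFileSync(filePath, data, 'binary');
        // This is the URL the serving handler below will answer to.
        return '/uploads_url_prefix/' + fileName;
    }
});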
To serve
In vanilla meteor app
var fs = Npm.require('fs');

WebApp.connectHandlers.use(function(req, res, next) {
    var re = /^\/uploads_url_prefix\/(.*)$/.exec(req.url);
    if (re !== null) { // Only handle URLs that start with /uploads_url_prefix/*
        var filePath = process.env.PWD + '/.uploads_dir_on_server/' + re[1];
        var data = fs.readFileSync(filePath);
        res.writeHead(200, {
            'Content-Type': 'image'
        });
        res.write(data);
        res.end();
    } else { // Other urls will have default behaviors
        next();
    }
});
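One caveat with the handler above: fs.readFileSync throws if the requested file does not exist, so such a request simply errors out. A slightly more defensive variant of the same handler (my own sketch, same placeholder paths):

var fs = Npm.require('fs');

WebApp.connectHandlers.use(function(req, res, next) {
    var re = /^\/uploads_url_prefix\/(.*)$/.exec(req.url);
    if (re === null) return next(); // other URLs keep their default behavior

    var filePath = process.env.PWD + '/.uploads_dir_on_server/' + re[1];
    fs.readFile(filePath, function(err, data) {
        if (err) { // missing or unreadable file -> clean 404 instead of an exception
            res.writeHead(404);
            res.end('File not found');
            return;
        }
        res.writeHead(200, {
            'Content-Type': 'image' // adjust to the real MIME type of your files
        });
        res.write(data);
        res.end();
    });
});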
When using iron:router
This should be a server-side route (e.g. defined in a file in the /server/ folder).
Edit (2016-May-9)
var fs = Npm.require('fs');

Router.route('uploads', {
    name: 'uploads',
    path: /^\/uploads_url_prefix\/(.*)$/,
    where: 'server',
    action: function() {
        var filePath = process.env.PWD + '/.uploads_dir_on_server/' + this.params[0];
        var data = fs.readFileSync(filePath);
        this.response.writeHead(200, {
            'Content-Type': 'image'
        });
        this.response.write(data);
        this.response.end();
    }
});
Outdated format:
Router.map(function() {
    this.route('serverFile', {
        // ... same as the options object above
    });
});
Notes
process.env.PWD will give you the project root
If you plan to put files inside your project:
don't use the public or private Meteor folders
use dot folders (i.e. hidden folders, e.g. .uploads)
Not respecting these two rules will cause your local meteor to restart on every upload, unless you run your meteor app with: meteor run --production
I've used this approach for a simple image upload & serve (based on dario's version)
Should you wish for more complex file management, please consider CollectionFS.
The symlink hack will no longer work in Meteor (from 0.6.5). Instead I suggest creating a package with similar code to the following:
package.js
Package.describe({
    summary: "Application file server."
});

Npm.depends({
    connect: "2.7.10"
});

Package.on_use(function(api) {
    api.use(['webapp', 'routepolicy'], 'server');
    api.add_files([
        'app-file-server.js',
    ], 'server');
});
app-file-server.js
var connect = Npm.require('connect');

RoutePolicy.declare('/my-uploaded-content', 'network');

// Listen to incoming http requests
WebApp.connectHandlers
    .use('/my-uploaded-content', connect.static(process.env['APP_DYN_CONTENT_DIR']));
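With this package in place, anything inside the directory pointed to by the APP_DYN_CONTENT_DIR environment variable is served under /my-uploaded-content. A hedged sketch of how the server side of the original question could feed it (the method name and file name are placeholders):

// Sketch only: write a generated file into the dynamic content directory and
// return the URL that connect.static above will serve it from.
var fs = Npm.require('fs');
var path = Npm.require('path');

Meteor.methods({
    exportFile: function (fileName, contents) { // placeholder method name
        var dir = process.env['APP_DYN_CONTENT_DIR'];
        fs.writeFileSync(path.join(dir, fileName), contents, 'utf8');
        return '/my-uploaded-content/' + fileName;
    }
});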
I was stuck on the exact same problem, except that I needed users to upload files, in contrast to your server-generated files. I sort of solved it by creating an "uploads" folder as a sibling of the "client", "public" and "server" folders (at the same level), and then creating a symbolic link to the '.meteor/local/build/static' folder, like
ln -s ../../../../uploads .meteor/local/build/static/
but using the Node.js filesystem API at server start time:
Meteor.startup(function () {
    var fs = Npm.require('fs');
    fs.symlinkSync('../../../../uploads', '.meteor/local/build/static/uploads');
});
In your case you might have a folder like "generatedFiles" instead of my "uploads" folder.
You need to do this every time the server starts up, because these folders are regenerated whenever the server restarts, e.g. when a file changes in your implementation.
Another option is to use a server side route to generate the content and send it to the user's browser for download. For example, the following will look up a user by ID and return it as JSON. The end user is prompted to save the response to a file with the name specified in the Content-Disposition header. Other headers, such as Expires, could be added to the response as well. If the user does not exist, a 404 is returned.
Router.route("userJson", {
    where: "server",
    path: "/user-json/:userId",
    action: function() {
        var user = Meteor.users.findOne({ _id: this.params.userId });
        if (!user) {
            this.response.writeHead(404);
            this.response.end("User not found");
            return;
        }
        this.response.writeHead(200, {
            "Content-Type": "application/json",
            "Content-Disposition": "attachment; filename=user-" + user._id + ".json"
        });
        this.response.end(JSON.stringify(user));
    }
});
This method has one big downside, however. Server side routes do not provide an easy way to get the currently logged in user. See this issue on GitHub.
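One common workaround (an assumption on my part; it relies on the undocumented Accounts._hashLoginToken helper and the internal services.resume.loginTokens field, so treat it as fragile) is to have the client append its login token to the URL and look the user up yourself:

// Variant of the route above with a token check; leans on undocumented internals.
Router.route("userJsonAuthed", {
    where: "server",
    path: "/user-json-authed/:userId",
    action: function() {
        // The client builds the URL with ?token=<Accounts._storedLoginToken()>.
        var token = this.params.query.token;
        var currentUser = token && Meteor.users.findOne({
            "services.resume.loginTokens.hashedToken": Accounts._hashLoginToken(token)
        });
        if (!currentUser) {
            this.response.writeHead(403);
            this.response.end("Forbidden");
            return;
        }
        // ... from here on, proceed exactly as in the route above
    }
});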
Related
I need to generate an Excel file from received data, but I am facing a problem accessing a static file on my server.
I use NuxtJS + Vue, and the static Excel file is located in my /static folder:
/static/Excel.xlsx
/static/Word.docx
If I try to access it via FileReader(), I get an error.
var workBook = new Excel.Workbook();
workBook.xlsx.readFile("/Excel.xlsx").then(function() {
    const ws = workBook.getWorksheet("Sheet1");
    var cell = ws.getCell("A1").value;
    console.log(cell);
});
Error:
TypeError: Cannot read property 'F_OK' of undefined
As I understood from the discussions, this error is related to browser security.
I do not want to compromise the security of the browser or the application just to access the file.
The option of having the user upload the template themselves is a little inconvenient; it involves a lot of unnecessary steps.
Link to solution on https://github.com/
At the same time I found a solution for Word using the docxtemplater package, which uses JSZipUtils,
and the Word example works fine; however, the equivalent module for xlsx is too expensive for me.
loadFile(url, callback) {
    JSZipUtils.getBinaryContent(url, callback);
},

this.loadFile("/Word.docx", function(error, content) {
    if (error) {
        throw error;
    }
    var zip = new JSZip(content);
    var doc = new Docxtemplater();
    doc.loadZip(zip);
    // ... getting data from the database and inserting it into the Word template
});
Is it possible to somehow use the JSZipUtils package to access the Excel file, or is there another way to get an instance of a static file without violating browser security?
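For reference, I assume the analogous approach for Excel would be to fetch the static file over HTTP (the way JSZipUtils does for Word) and hand the buffer to ExcelJS; an untested sketch, assuming workBook.xlsx.load accepts an ArrayBuffer:

// Untested sketch: fetch the static file over HTTP instead of touching the
// filesystem, then load the ArrayBuffer into ExcelJS.
fetch("/Excel.xlsx")
    .then(function(response) { return response.arrayBuffer(); })
    .then(function(buffer) {
        var workBook = new Excel.Workbook();
        return workBook.xlsx.load(buffer).then(function() {
            var cell = workBook.getWorksheet("Sheet1").getCell("A1").value;
            console.log(cell);
        });
    });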
I have followed a tutorial to produce a Twitter bot using Node.js, GitHub and Heroku. Everything works great: the bot pulls a random image from a folder at timed intervals and tweets the image.
I'm trying to change the process so that instead of pulling images from a local folder (called 'images'), it pulls them from a web hosted folder. For example, rather than get the images from the local /images folder, I'd like it to pull the image from http://mysite/images. I have tried changing what I think are the relevant bits of code below, but am having no luck. Could anybody offer some advice please?
The whole code is below, but for reference, the bits I have tried changing are:
var image_path = path.join(__dirname, '/images/' +
random_from_array(images))
and
fs.readdir(__dirname + '/images', function(err, files) {
In both cases above I tried changing the /images folder to http://mysite/images but it doesn't work. I get an error stating that no such folder can be found. I have tried changing/deleting the __dirname part too but to no avail.
Any help appreciated!
Full code below:
const http = require('http');
const port = process.env.PORT || 3000;

const server = http.createServer((req, res) => {
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/html');
    res.end('<h1>Hello World</h1>');
});

server.listen(port, () => {
    console.log(`Server running at port ` + port);
});
var Twit = require('twit')

var fs = require('fs'),
    path = require('path'),
    Twit = require('twit'),
    config = require(path.join(__dirname, 'config.js'));
var T = new Twit(config);
function random_from_array(images){
    return images[Math.floor(Math.random() * images.length)];
}
function upload_random_image(images){
    console.log('Opening an image...');
    var image_path = path.join(__dirname, '/images/' + random_from_array(images)),
        b64content = fs.readFileSync(image_path, { encoding: 'base64' });
    console.log('Uploading an image...');
    T.post('media/upload', { media_data: b64content }, function (err, data, response) {
        if (err){
            console.log('ERROR:');
            console.log(err);
        }
        else{
            console.log('Image uploaded!');
            console.log('Now tweeting it...');
            T.post('statuses/update', {
                /* You can include text with your image as well. */
                // status: 'New picture!',
                /* Or you can pick random text from an array. */
                status: random_from_array([
                    'New picture!',
                    'Check this out!'
                ]),
                media_ids: new Array(data.media_id_string)
            },
            function(err, data, response) {
                if (err){
                    console.log('ERROR:');
                    console.log(err);
                }
                else{
                    console.log('Posted an image!');
                }
            });
        }
    });
}
fs.readdir(__dirname + '/images', function(err, files) {
    if (err){
        console.log(err);
    }
    else{
        var images = [];
        files.forEach(function(f) {
            images.push(f);
        });
        /*
        You have two options here. Either you keep your bot running and upload
        images using setInterval (the value is in milliseconds, so 10000 means
        10 seconds; the code below uses 30000, i.e. 30 seconds), --
        */
        setInterval(function(){
            upload_random_image(images);
        }, 30000);
        /*
        Or you could use cron (code.tutsplus.com/tutorials/scheduling-tasks-
        with-cron-jobs--net-8800), in which case you just need:
        */
        // upload_random_image(images);
    }
});
Well, my first answer to a question about building a Twitter bot would probably be: "Don't do that!" (Because the world doesn't need more Twitter bots.) But, putting that aside...
Your code is using the "fs" library, which is exactly what you need for grabbing files from the local file system. That was fine. But now you want to grab files from a web server, which "fs" cannot do. Instead, you need a library that can make an HTTP or HTTPS request across the web and bring back some data. There are several libraries that do this. It looks like you are already bringing in the "http" library, so you are on the right track there, but you are currently using it to set up a server, and I don't think that's what you want. Rather, you need to use http as a client, and replace your fs.readFileSync() calls with the appropriate calls from the http library (if that's the one you choose to use) to pull in the data you want from whatever server has it.
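For example, a minimal sketch (my own, not the tutorial's code) of fetching one remote image and base64-encoding it in place of the fs.readFileSync call; the URL is a placeholder, and note that a web server will not let you list a folder the way fs.readdir does locally, so the image names have to come from a hard-coded list or from your server:

const http = require('http');

// Fetch a remote image and hand back its base64 encoding.
function fetch_image_base64(url, callback) {
    http.get(url, function(res) {
        const chunks = [];
        res.on('data', function(chunk) { chunks.push(chunk); });
        res.on('end', function() {
            callback(null, Buffer.concat(chunks).toString('base64'));
        });
    }).on('error', callback);
}

// Usage inside upload_random_image(), instead of path.join + fs.readFileSync:
// fetch_image_base64('http://mysite/images/' + random_from_array(images),
//     function(err, b64content) { /* call T.post('media/upload', ...) here */ });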
Hope that helps. And I hope your Twitter bot is going to be a good little bot, not an evil one!
My app is built with MEAN, and I use Docker too. The purpose of the app is to create and download a CSV file. I have already created the file, compressed it and placed it in a temp folder (the file will be removed after the download). This part is on the Node.js server side and works without problems.
I have already tried several things like res.download, which is supposed to download the file directly in the browser, but nothing happens. I also tried using a blob on the AngularJS side, but it doesn't work.
The getData function creates and compresses the file (it does exist; I can reach it directly when I look at where the app is saved).
exports.getData = function getData(req, res, next){
    var listRequest = req.body.params.listURL;
    var stringTags = req.body.params.tagString;
    // The name of the compressed CSV file
    var nameFile = req.body.params.fileName;
    var query = url.parse(req.url, true).query;
    // The function which creates the file
    ApollineData.getData(listRequest, stringTags, nameFile)
        .then(function (response){
            var filePath = '/opt/mean.js/modules/apolline/client/CSVDownload/' + response;
            const file = fs.createReadStream(filePath);
            res.download(filePath, response);
        })
        .catch(function (response){
            console.log(response);
        });
};
My main problem is downloading this file directly in the browser without loading it into a variable, because it could be huge (several GB). I want to download it and then delete it.
There is nothing wrong with res.download.
The reason res.download doesn't work for you is probably that you are using AJAX to fetch the resource. Do a regular navigation instead. Or, if the route requires some POST data or another method, create a form and submit it.
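A sketch of both options from the Angular/browser side (the /api/getdata URL, the fileName variable and the field name are placeholders; a plain form POST also sends form-encoded fields rather than the JSON body the current route reads, so the server may need a small adjustment):

// Option 1: plain navigation. The Content-Disposition header set by res.download
// makes the browser save the file instead of leaving the page.
window.location.href = '/api/getdata?fileName=' + encodeURIComponent(fileName);

// Option 2: if the route really needs a POST body, submit a throwaway form.
var form = document.createElement('form');
form.method = 'POST';
form.action = '/api/getdata'; // placeholder URL
var input = document.createElement('input');
input.type = 'hidden';
input.name = 'fileName';      // placeholder field name
input.value = fileName;
form.appendChild(input);
document.body.appendChild(form);
form.submit();
form.remove();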
I'm building a Meteor app that communicates with a desktop client via HTTP requests with https://github.com/crazytoad/meteor-collectionapi
The desktop client generates images at irregular time intervals, and I want the Meteor site to only display the most recently generated image (ideally in real time). My initial idea was to use a PUT request to a singleton collection with the base64 image data, but I don't know how to turn that data into an image in the web browser. Note: the images are all pretty small (well under 1 MB), so using GridFS should be unnecessary.
I realize this idea could be completely wrong, so if I'm completely on the wrong track, please suggest a better course of action.
You'll need to write a middleware to serve your images with proper MIME type. Example:
WebApp.connectHandlers.stack.splice(0, 0, {
    route: '/imageserver',
    handle: function(req, res, next) {
        // Assuming the path is /imageserver/:id, here you get the :id
        var iid = req.url.split('/')[1];
        var item = Images.findOne(iid);
        if(!item) {
            // Image not found
            res.writeHead(404);
            res.end('File not found');
            return;
        }
        // Image found
        res.writeHead(200, {
            'Content-Type': item.type,
        });
        res.write(new Buffer(item.data, 'base64'));
        res.end();
    },
});
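On the client, the most recent image can then be shown by pointing an img tag at that route. A sketch, assuming a Blaze template named latestImage and a createdAt field on the image documents (both are assumptions about your schema):

// Client-side sketch: resolves the URL of the newest image document.
Template.latestImage.helpers({
    latestImageUrl: function() {
        var item = Images.findOne({}, { sort: { createdAt: -1 } });
        return item && '/imageserver/' + item._id;
    }
});

The template itself would just contain an img tag with src="{{latestImageUrl}}", which re-renders reactively whenever a newer document arrives.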
Here is my workflow as of now:
In a button click event, I have search results being exported to a .csv file, which is saved to the server. Once the file is saved, I want to send it for download to the browser. Using this question How to handle conditional file downloads in meteor.js, I created a method that is called after the method that saves the file returns. Here is that method:
exportFiles: function(file_to_export) {
    console.log("to export = " + file_to_export);
    Meteor.Router.add('/export', 'GET', function() {
        console.log('send ' + file_to_export + ' to browser');
        return [200,
            {
                'Content-type': 'text/plain',
                'Content-Disposition': "attachment; filename=" + this.request.query.file
            },
            fs.readFileSync(save_path + this.request.query.file)];
    });
}
My question, however, is how to invoke that route. Using Meteor.Router.to('/export?file=filename.ext') doesn't work, and it causes the user to leave the current page. I want this to appear seamless to the user, and I don't want them to have any idea they are being redirected. Before anyone asks, save_path is declared outside of the method, so it does exist.
I have gotten it! However, it required the use of a few additional packages. First, let me describe the workflow a little more clearly:
A user on our site performs a search. On the subsequent search results page, a button exists that allows the user to export his/her search results to a .csv file. The file is then to be exported to the browser for download.
One concern we had was, if a file is written to the server, making sure only the user who exported the file has the ability to view it. To control who has visibility of files, I used a Meteorite package, CollectionFS (mrt add collectionFS or clone from GitHub). This package writes file buffers to a Mongo collection. Supplying an "owner" field when saving gives you control over access.
Regardless of how the file is created, whether saved to the server via an upload form or generated on the fly the way I did using the json2csv package, the file must be streamed to CollectionFS as a buffer.
var userId = Meteor.userId();
var buffer = Buffer(csv.length); // csv is a var holding the data to write
var filename = "name_of_file.csv";
for (var i = 0; i < csv.length; i++) {
    buffer[i] = csv.charCodeAt(i);
}
CollectionFS.storeBuffer(filename, buffer, {
    contentType: 'text/plain',
    owner: userId
});
So at this point, I have taken my data file, and streamed it as a buffer into the mongo collection. Because my data exists in memory in the var csv, I stream it as a buffer by looping through each character. If this were a file saved on a physical disk, I would use fs.readFileSync(file) and send the returned buffer to CollectionFS.storeBuffer().
Now that the file is saved as a buffer in Mongo with an owner, I can limit, through the way I publish the CollectionFS collection, who can download/update/delete the file or even know that it exists.
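A sketch of such a publish function, assuming the underlying CollectionFS collection exposes the usual Mongo-style find(selector) (adjust to whatever your CollectionFS instance is called):

// Server-side sketch: only publish file records owned by the logged-in user.
Meteor.publish('myExportedFiles', function() {
    if (!this.userId) return this.ready();
    return CollectionFS.find({ owner: this.userId });
});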
In order to read the file from Mongo and send it to the browser for download, another JavaScript library is necessary: FileSaver (github).
Using the retrieveBlob method from CollectionFS, pull your file out of Mongo as a blob by supplying the _id that references the file in your Mongo collection. FileSaver has a method, saveAs, that accepts a blob and exports it to the browser for download under the specified file name.
var file = ...; // the file object stored in Meteor
CollectionFS.retrieveBlob(file._id, function(fileItem) {
    if (fileItem.blob) saveAs(fileItem.blob, file.filename);
    else if (fileItem.file) saveAs(fileItem.file, file.filename);
});
I hope someone will find this useful!
If your route works, then when your method returns you could open a new window containing the link to the text file.
You've already added in content disposition headers so the file should always ask to be saved.
Even if you just redirect to the file, because it has these content disposition headers it will ask to be saved and not interrupt your session.
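For example (sketch; fileName here stands for whatever name the save method returned):

// Client side: once the export method has finished, open the route in a new
// window/tab. The Content-Disposition header makes the browser download the
// file rather than navigate, so the current page is left alone.
Meteor.call('exportFiles', fileName, function(err) {
    if (!err) window.open('/export?file=' + encodeURIComponent(fileName), '_blank');
});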