I'm trying to find a way to create a .txt file object without saving it to disk in Node.js. In a browser I'd do something like
new File(['file_contents'], 'file_name.txt', {type: 'text/plain'})
The end goal is to temporarily create a .txt file object for my Discord bot to send as a message attachment.
From the discord.js documentation, I presume I'm supposed to create a Buffer or Stream object representing the file.
I'm already using the require('node-fetch') library to read attachments in messages, if that's relevant (but I can't see anything about this in their docs either).
After playing around in the console, I've found that using Buffer.from('string of txt file contents') to represent the file seems to work fine.
Usage:
channel.send(new Discord.MessageAttachment(Buffer.from('Hello World'), 'file_name.txt'))
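For a slightly fuller picture, here is a minimal sketch of that approach. The results array is just an illustrative stand-in for whatever text you're generating, and it assumes the same discord.js version as the line above (where MessageAttachment accepts a Buffer):
const Discord = require('discord.js');

// Build the file contents entirely in memory – no disk write needed.
// `results` is a hypothetical array of strings, for illustration only.
const results = ['line one', 'line two'];
const buffer = Buffer.from(results.join('\n'), 'utf8');

// Wrap the Buffer in a MessageAttachment with the desired filename
channel.send(new Discord.MessageAttachment(buffer, 'file_name.txt'));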
In order to write a text file in JavaScript you should use the fs module, which is built into Node.js.
Here is an example of writing to a file:
const fs = require('fs');

let content = "Hello world!";

fs.writeFile('file.txt', content, function (err) {
  if (err) return console.log(err);
  // Code here will run only if the file was written successfully
});
Then, in order to send a local file to Discord, you can call .send() with an object containing a files property, like this:
channel.send({
  files: [{
    attachment: 'entire/path/to/file.txt',
    name: 'file.txt'
  }]
}).then(() => {
  // Delete the file once it has been sent
  fs.unlink('file.txt', (err) => {
    if (err) console.log(err);
  });
});
You can see more about .send() in the discord.js documentation, and more about the fs module in the Node.js documentation.
I would like some help to make a File object from a pdf stored in MongoDB.
I am not using GridFS, the file is stored as such:
[screenshot: file structure in MongoDB]
I am using this function to make the File:
const handlegetfile = () => {
  API.getFile(2).then((result) => {
    console.log(result.data);
    const file = new File(Uint8Array.from(result.data.File.data), result.data.File.name);
    console.log(file);
    API.writeFile({
      CodeTiers: "2525",
      Type: { value: "non", label: "testfile" },
      Format: "pdf",
      File: file,
    });
  });
};
The .pdf file created by the writeFile() function can't be opened, and when opened with an editor, it looks like this:
[screenshot: pdf data after retrieval]
Important: I do not want to write the file to disk; writeFile() is just there to check that the pdf can be opened.
The thing is: the data goes from this in the original file:
[screenshot: original pdf data]
To this in MongoDB:
[screenshot: data in MongoDB]
And then to what is shown in the second screenshot above once it's retrieved. I am pretty sure the problem comes from the cast to a Uint8Array, but I can't find what else to use there to make it work. I tried spreading the response with {...result.data.File} and using an ArrayBuffer, and also simply wrapping the data in an array with new File([result.data.File.data], result.data.File.name), but I couldn't get it to work either way.
Could you help me, please?
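For reference, a hedged sketch of how the File construction is often written when the stored value is a JSON-serialised Node Buffer. This assumes result.data.File.data is that Buffer's data array, which can't be verified from the screenshots alone:
// Assumption: result.data.File.data is the `data` array of a JSON-serialised
// Node Buffer, i.e. { type: 'Buffer', data: [37, 80, 68, 70, ...] }
const bytes = new Uint8Array(result.data.File.data);

// File expects an *array* of BlobParts as its first argument, and an
// explicit MIME type helps the result be recognised as a PDF
const file = new File([bytes], result.data.File.name, { type: 'application/pdf' });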
I have a local JSON file which I intend to read and write from a Node.js Electron app. I am not sure, but I believe that instead of using readFile() and writeFile(), I should get a FileHandle to avoid repeatedly opening and closing the file.
So I've tried to grab a FileHandle from fs.promises.open(), but the problem seems to be that I am unable to get a FileHandle for an existing file without truncating it down to 0 bytes.
const { resolve } = require('path');
const fsPromises = require('fs').promises;
function init() {
  // Save table name
  this.path = resolve(__dirname, '..', 'data', `test.json`);

  // Create/open the json file
  fsPromises
    .open(this.path, 'wx+')
    .then(fileHandle => {
      // We only reach this branch if the file didn't exist yet,
      // because of the 'wx+' flag
      this.fh = fileHandle;
    })
    .catch(err => {
      if (err.code === 'EEXIST') {
        // File exists
      }
    });
}
Am I doing something wrong? Are there better ways to do it?
Links:
https://nodejs.org/api/fs.html#fs_fspromises_open_path_flags_mode
https://nodejs.org/api/fs.html#fs_file_system_flags
Because JSON is a text format that has to be read or written all at once and can't easily be modified or appended to in place, you're going to have to read the whole file or write the whole file in one go.
So, your simplest option is to just use fs.promises.readFile() and fs.promises.writeFile() and let the library open the file, read/write it, and close it. Opening and closing a file on a modern OS takes advantage of disk caching, so reopening a file you opened not long ago is not a slow operation. Further, since Node.js performs these operations in the libuv thread pool, they don't block the main thread either, so this is generally not a performance issue for your server.
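A minimal sketch of that read-modify-write cycle (the update and mutate names are just for illustration; the path would be this.path from the question's init()):
const fsPromises = require('fs').promises;

// Read the whole file, apply a change, then write the whole file back
async function update(path, mutate) {
  const text = await fsPromises.readFile(path, 'utf8');
  const data = JSON.parse(text);
  mutate(data);                                              // apply the caller's changes
  await fsPromises.writeFile(path, JSON.stringify(data, null, 2), 'utf8');
}

// e.g. update(this.path, data => { data.lastRun = Date.now(); });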
If you really wanted to open the file once and hold it open, you would open it for reading and writing using the r+ flag as in:
const fileHandle = await fsPromises.open(this.path, 'r+');
Reading the whole file is simple, as the fileHandle object has a .readFile() method:
const text = await fileHandle.readFile({encoding: 'utf8'});
For writing the whole file from an open filehandle, you would have to truncate the file, then write your bytes, then flush the write buffer to ensure the last bit of the data got to the disk and isn't sitting in a buffer.
await fileHandle.truncate(0); // clear previous contents
let {bytesWritten} = await fileHandle.write(mybuffer, 0, someLength, 0); // write new data
assert(bytesWritten === someLength);
await fileHandle.sync(); // flush buffering to disk
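Putting those pieces together, a hypothetical helper (writeWhole is just an illustrative name, not an API) for rewriting the whole file through the open handle might look like:
async function writeWhole(fileHandle, data) {
  const buf = Buffer.from(JSON.stringify(data, null, 2), 'utf8');
  await fileHandle.truncate(0);                              // clear previous contents
  const { bytesWritten } = await fileHandle.write(buf, 0, buf.length, 0);
  if (bytesWritten !== buf.length) throw new Error('short write');
  await fileHandle.sync();                                   // flush buffering to disk
}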
I (as a Node server using the Koa framework) need to take a JSON blob, turn it into a file with the extension .json, stick that file in a zip archive, then send the archive as a file attachment in response to a request from the client.
It seems the way to do this is to use the Archiver tool. As best I can understand, the way to do this is to create an archive, append the JSON blob to it as a .json file (it automatically creates the file within the archive?), then "pipe" that .zip to the response object. The "piping" paradigm is where my understanding fails, mostly because I don't get what the docs are saying.
The Archiver docs, as well as some Stack Overflow answers, use language that to me means "stream the data to the client by piping (the zip file) to the HTTP response object". The Koa docs say that ctx.body can be set to a stream directly, so here's what I tried:
Attempt 1
const archive = archiver.create('zip', {});
ctx.append('Content-Type', 'application/zip');
ctx.append('Content-Disposition', `attachment; filename=libraries.zip`);
ctx.response.body = archive.pipe(ctx.body);
archive.append(JSON.stringify(blob), { name: 'libraries.json'});
archive.finalize();
Logic: the response body should be set to a stream, and that stream should be the archiver stream (pointing at ctx.body).
Result: .zip file downloads on client-side, however the zip is malformed somehow (can't open).
Attempt 2
const archive = archiver.create('zip', {});
ctx.append('Content-Type', 'application/zip');
ctx.append('Content-Disposition', `attachment; filename=libraries.zip`);
archive.pipe(ctx.body);
archive.append(JSON.stringify(blob), { name: 'libraries.json'});
archive.finalize();
Logic: setting the body to be a stream after, uh, "pointing a stream at it" does seem silly, so instead I copied other Stack Overflow examples.
Result: Same as attempt 1.
Attempt 3
Based on https://github.com/koajs/koa/issues/944
const archive = archiver.create('zip', {});
ctx.append('Content-Type', 'application/zip');
ctx.append('Content-Disposition', `attachment; filename=libraries.zip`);
ctx.body = ctx.request.pipe(archive);
archive.append(JSON.stringify(body), { name: 'libraries.json'});
archive.finalize();
Result: ctx.request.pipe is not a function.
I'm probably not reading this right, but everything online seems to indicate that doing archive.pipe(some sort of client-bound stream) "magically just works." That's nearly a quote from the Archiver example file; "streaming magic" is the phrase they use.
How do I, in memory, turn a JSON blob into a .json file, append that .json to a .zip, and then send that zip to the client so it downloads and can be successfully unzipped to reveal the .json?
EDIT: If I console.log the ctx.body after archive.finalize(), it shows a ReadableStream, which seems right. However, it has a "path" property that worries me: it's the index.html, which I had wondered about, since in the "response preview" on the client side I'm seeing a stringified version of our index.html. The file was still downloading as a .zip, so I wasn't too concerned, but now I'm wondering if this is related.
EDIT2: Looking deeper into the response on the client-side, it appears that the data sent back is straight up our index.html, so now I'm very confused.
const { PassThrough } = require('stream');
const archiver = require('archiver');

const passThrough = new PassThrough();
const archive = archiver.create('zip', {});

archive.pipe(passThrough);
archive.append(JSON.stringify(blob), { name: 'libraries.json' });
archive.finalize();

ctx.body = passThrough;
ctx.type = 'zip';
This should work fine for your use case, since archiver isn't actually a stream that we should pass to ctx.body I guess.
Yes, you can directly set ctx.body to the stream. Koa will take care of the piping. No need to manually pipe anything (unless you also want to pipe to a log, for instance).
const archive = archiver('zip');
ctx.type = 'application/zip';
ctx.response.attachment('test.zip');
ctx.body = archive;
archive.append(JSON.stringify(blob), { name: 'libraries.json' });
archive.finalize();
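For context, a complete minimal Koa handler along those lines might look like this (the blob value and the port are placeholders, and it assumes a plain Koa app without a router):
const Koa = require('koa');
const archiver = require('archiver');

const app = new Koa();

app.use(async ctx => {
  const blob = { example: true };                  // placeholder JSON blob

  const archive = archiver('zip');
  ctx.type = 'application/zip';
  ctx.attachment('libraries.zip');                 // sets Content-Disposition
  ctx.body = archive;                              // Koa pipes the stream to the response

  archive.append(JSON.stringify(blob), { name: 'libraries.json' });
  archive.finalize();
});

app.listen(3000);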
I want a JSON stream stored in a text file. When running the Node server, the JSON isn't appended to the json.txt file. What am I missing? I'm new to Node, so be gentle.
Here is the code chunk I expect to capture the JSON content:
var fs = require('fs');

fs.writeFile("json.txt", {encoding: "utf8"}, function(err) {
  if (err) {
    console.log(err);
  } else {
    console.log("The file was saved!");
  }
});
The issue is that you aren't using the correct parameters. fs.writeFile expects a string for the filename, a buffer or string for the content, an object for the options, and a callback function. What it looks like you're doing is passing the options as the second parameter, where it expects a buffer or a string. Correction below:
var fs = require('fs');

// { some: "object" } is just a placeholder for whatever you want to store
fs.writeFile("json.txt", JSON.stringify({ some: "object" }), {encoding: "utf8"}, function(err) {
  if (err) {
    console.log(err);
  } else {
    console.log("The file was saved!");
  }
});
You can replace the JSON.stringify part with some plain text if you want to, but you specified JSON in your question, so I assumed you wanted to store an object in the file.
Source: the Node.js documentation for fs.writeFile.
EDIT:
The links to other questions in the comments may be more relevant if you want to add new lines to the end of the file rather than completely overwrite the old one; see the fs.appendFile sketch below. However, I made the assumption that fs.writeFile was the intended function. If that wasn't the intention, those other questions will help a lot more.
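If appending is indeed what's wanted, a minimal sketch using fs.appendFile (which creates the file if it doesn't exist) would be:
var fs = require('fs');

// Adds one JSON line to the end of json.txt, creating the file if needed
fs.appendFile("json.txt", JSON.stringify({ some: "object" }) + "\n", {encoding: "utf8"}, function(err) {
  if (err) {
    console.log(err);
  } else {
    console.log("The line was appended!");
  }
});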
UPDATE:
It seems the issue was that the request body wasn't being parsed, so when the POST request came through, Node didn't have access to the body. To fix this, the following code is needed during the Express configuration:
var bodyParser = require('body-parser');
app.use(bodyParser.json());
This uses the npm module body-parser. It converts the JSON body into a JavaScript object, which is then accessible via req.body.
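For example, a minimal Express setup along those lines might look like this (the /save route is purely illustrative):
var express = require('express');
var bodyParser = require('body-parser');
var fs = require('fs');

var app = express();
app.use(bodyParser.json());                        // parse JSON bodies into req.body

app.post('/save', function(req, res) {
  // req.body is now a plain JavaScript object
  fs.writeFile("json.txt", JSON.stringify(req.body), {encoding: "utf8"}, function(err) {
    if (err) return res.status(500).send(err.message);
    res.send("The file was saved!");
  });
});

app.listen(3000);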
I used this guide to turn an ordinary link into a direct download link:
https://milanaryal.com/2015/direct-linking-to-your-files-on-dropbox-google-drive-and-onedrive/
Now how do I actually use that download link to download the file in JavaScript? I want to do something like:
link = 'https://drive.google.com/uc?export=download&id=FILE_ID';
let x = download(link); // now x is the downloaded file
I looked it up and it seems like there are ways of doing this with HTML/jQuery, but I am not using those because I am working on the server side with Node.js. I am downloading the file because I want to check whether it is a PDF or plain text, parse the text, and then search through it using Elasticsearch.
It's easiest to use a module such as Request to do an HTTP GET from a Node script.
For example:
var request = require('request');

request.get('https://drive.google.com/uc?export=download&id=FILE_ID',
  function (err, res, body) {
    if (err) return console.log(err);
    console.log(body);
  });
Once the file has downloaded, the callback function runs with the downloaded file in the body variable.
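One caveat worth noting, since the goal is to distinguish PDFs from plain text: by default Request gives you body as a string, which mangles binary data. Passing encoding: null yields a Buffer instead. A sketch (the "%PDF" check is just an illustrative way to sniff the file type):
var request = require('request');

request.get(
  { url: 'https://drive.google.com/uc?export=download&id=FILE_ID', encoding: null },
  function (err, res, body) {
    if (err) return console.log(err);
    // body is a Buffer here, so binary formats like PDF survive intact
    var isPdf = body.slice(0, 4).toString() === '%PDF';
    console.log(isPdf ? 'Looks like a PDF' : 'Probably plain text');
  }
);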
If you only want to download the file, open it, search for data and delete it, you can easily edit this code snippet: https://stackoverflow.com/a/11944984/642977