I'm working on a Discord bot that has a variable in the range 0-9999.
I need to be able to store that variable (let's say 9) in a .txt file,
and then, whenever the bot starts, have it read the file and set the variable
to the first line of the text document.
For example:
The variable is 9, so 9 is written to the first line of a text document.
Whenever I start the bot, the variable is set to the first line of the text
document (9).
Not entirely sure how Discord bots work, but I know Discord itself is built with Electron and Node.js, so you can use the file system module, fs (https://www.npmjs.com/package/fs). It's built into Node, so there's nothing to install; just require it.
You can read from the file like this:
// File system stuff.
var fs = require("fs");
// Get the text file and load it into a variable.
var file = fs.readFileSync("path/to/my/text/file.txt", "utf8");
And you can write to the file like this:
// Write the file
fs.writeFile("path/to/my/text/file.txt", myVariable, function (err) {
// Checks if there is an error
if (err) return console.log(err);
});
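Putting the two together for your case, here's a minimal sketch (the file name counter.txt and the variable name are placeholders; adjust them to your bot):
// Load the stored value on startup and save it whenever it changes.
var fs = require("fs");
var FILE = "counter.txt"; // hypothetical file name

// On startup: read the first line and parse it as a number (default to 0).
var myVariable = 0;
try {
  var firstLine = fs.readFileSync(FILE, "utf8").split("\n")[0];
  myVariable = parseInt(firstLine, 10) || 0;
} catch (e) {
  // The file doesn't exist yet, so keep the default.
}

// Whenever the variable changes: write it to the first line of the file.
function saveVariable(value) {
  fs.writeFileSync(FILE, String(value));
}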
Why not use cookies instead? If you're writing something for the browser, you shouldn't store data in text files on the client's computer. If you're using Node.js for server-side code, you could write text files, but it would be better to use some sort of database.
Related
I am trying to write a local JavaScript program that reads a file, processes its contents, formats the text line by line, stores it in an array, and also displays it on the console.
I'm using Node.js.
I don't need any assistance with the later steps. The problem is that the code posted everywhere runs in a browser. I'm looking for something that runs locally, picking the file up directly from a path on the disk and displaying the contents in a Windows-command-prompt-like interface.
// Import the built-in file system module.
var fs = require('fs');

// File path.
var fileName = <File name with path>;

// Read the file asynchronously and log its contents.
fs.readFile(fileName, 'utf8', function (err, data) {
  if (err) return console.error(err);
  console.log(data);
});
This reads the file from disk asynchronously.
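If it helps with the next steps, here's a rough sketch of the line-by-line part on top of that (the file name is a placeholder):
// Read the file, split it into lines, store them in an array, and print them.
var fs = require('fs');

fs.readFile('input.txt', 'utf8', function (err, data) {
  if (err) return console.error(err);
  var lines = data.split(/\r?\n/); // handle both Windows and Unix line endings
  lines.forEach(function (line, index) {
    console.log(index + ': ' + line);
  });
});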
I'm working on a project where I need to store data in files next to the HTML file, but it needs to run even without internet access and without any local web server.
Could somebody tell me how to create/write a .txt file that I'll then use as a script in my HTML?
Build your app using Electron (or NW.js) and use Node's filesystem module to write your text file:
const { writeFile } = require('fs');
const data = 'Some text';
writeFile('data.txt', data, (error) => {
  console.log(error || 'Done!');
});
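If you specifically want the file to land next to your HTML/app files rather than in the current working directory, one variation (assuming the script runs with Node integration in Electron or NW.js) is to resolve the path with __dirname:
const path = require('path');
const { writeFile } = require('fs');

// Write data.txt into the same directory as this script.
const target = path.join(__dirname, 'data.txt');
writeFile(target, 'Some text', (error) => {
  console.log(error || 'Done!');
});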
Seriously, I can't create a file on my computer, even in the same directory the HTML file is in, without a web server?
I'm new to Node.js and I'm trying to run an example from the book Node.js the Right Way. The following code is saved in a file called watcher.js, and the text file target.txt is in the same directory.
const fs = require('fs');
fs.watch('target.txt', function () {
  console.log("File 'target.txt' just changed!");
});
console.log("Now watching target.txt for changes...");
When I run the file with the node command, the final console.log in the script, which should naturally be printed first, never appears. The log statement inside the fs.watch() callback works fine and prints the message every time the file changes.
There is a big gap between my version of Node.js (v6.11.0) and the one in the book (v0.10.20).
Is there something I am missing?
I've tested it; your code works perfectly on Node v7.1.
I'm learning Node.js and I want to send a file to the print queue.
I tried the electron-printer and node-printer modules, but they don't actually work properly (they can't detect the printer with printer.list, for example). Now I'm trying to do it with the child_process module, and I want to know whether there is any way to open a file with its associated application and a "print" argument, the way Python can.
For example, this is a code sample that opens a file with Node.js:
var childProcess = require('child_process');
childProcess.exec('start printme.txt', function (err, stdout, stderr) {
  if (err) {
    console.error(err);
    return;
  }
  console.log(stdout);
  process.exit(0); // Exit the process once the file is opened.
});
Unfortunately, it seems that a "print" argument is not valid for this code.
And this is a code sample for Python that works fine on Windows:
import os
os.startfile('printme.txt', 'print')
All in all, I hope there is a way to emulate these system commands with Node.js.
Otherwise I'll have to execute a Python script from Node.js just for printing the file, something like this:
let python = spawn('python', [path.join(app.getAppPath(), '..', 'python_scripts/print_file.py')])
But that's a terrible way to do it.
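A possible alternative (an untested sketch; it assumes PowerShell is available and the file type has a registered "print" handler): os.startfile(file, 'print') uses the Windows shell's "print" verb, and the same verb can be reached from Node through PowerShell's Start-Process:
// Sketch: ask the Windows shell to print the file with its associated application.
var childProcess = require('child_process');

childProcess.exec(
  'powershell -Command "Start-Process -FilePath printme.txt -Verb Print"',
  function (err, stdout, stderr) {
    if (err) {
      console.error(err);
      return;
    }
    console.log('Print job handed off to the shell.');
  }
);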
The goal: have a Node.js server that takes a PDF from a PUT call and returns each page converted to a .jpg. Ultimately, I don't care how it gets done, of course.
I've successfully created a Node.js server that uses ImageMagick to convert a PDF to an image. It's an Express server, and it uses the imagemagick npm package, which wraps the ImageMagick CLI.
Here's the code for the route, which works very well running locally:
app.put('/v1/convert-pdf', (req, res) => {
  res.status(202).send({ message: "we're working on it, alright?!" })
  console.log("beginning image conversion")
  im.convert(["-density", "144", "./content/pdf/foo.pdf", "./content/img/bar.jpg"], (err, stdout) => {
    if (err) {
      console.error(err)
    } else {
      console.log(stdout)
      console.log("finished converting pdfs");
    }
  })
})
The route above outputs a bunch of .jpg files named foo-0.jpg, foo-1.jpg, etc. Now what I want to do is take a file that's PUT to the route, convert it via ImageMagick, and then return it to the user (eventually I'll put it in an S3 bucket, but baby steps).
If this requires saving the file to a folder on Heroku, how do I do that? I've attempted to read from and write to Heroku's filesystem, and I don't get an error (unless one of the files/directories doesn't exist), but I don't see the file either.
Summarising our conversation in comments here in case it's helpful to anyone:
The reason you won't be able to see the files that were created and saved by the application when you run heroku run bash is because that actually spins up a new dyno and doesn't have access to the file system of the dyno running the app.
As mentioned, the best way to do this is to upload the resultant images to something like S3 as soon as the conversion is complete, and to serve any incoming requests to retrieve those files through S3.
If you're running a single web dyno this wouldn't be a problem right now, but if you're running multiple, the converted files will only be available on the dyno that received and transformed the PDF, so the other dynos won't have access to them.
Also, each deployment creates new dynos, so unless you store those files in something like S3, they'll be lost as soon as you push up.
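To make the S3 suggestion concrete, here's a rough sketch using the aws-sdk package (the bucket name and key are placeholders, and credentials are assumed to come from Heroku config vars):
const fs = require('fs');
const AWS = require('aws-sdk');

const s3 = new AWS.S3(); // picks up credentials from environment variables

// Upload one converted page to S3 as soon as ImageMagick has written it.
function uploadPage(localPath, key, callback) {
  s3.upload(
    {
      Bucket: 'my-converted-pages', // hypothetical bucket name
      Key: key,                     // e.g. 'bar-0.jpg'
      Body: fs.createReadStream(localPath),
      ContentType: 'image/jpeg',
    },
    callback
  );
}

// Usage inside the im.convert callback, once per generated page image:
// uploadPage('./content/img/bar-0.jpg', 'bar-0.jpg', (err, data) => {
//   if (err) return console.error(err);
//   console.log('Uploaded to', data.Location);
// });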