I am writing a Discord bot using JavaScript (discord.js).
I use JSON files to store my data, and of course I always need the latest data.
I do the following steps:
I start the bot
I run a function that requires the config.json file every time a message is sent
I increase the XP the user gets for the message they sent
I update the user's XP in config.json
I log the data
So after logging the first time (i.e. after sending the first message), I get the data that was in the JSON file before I started the bot, which makes sense. But after sending the second message, I expect the XP value to be higher than before, because the data should have been updated, the file loaded anew, and the data logged again.
(Yes, I do update the file every time. When I look at the file myself, the data is always up to date.)
So is there any reason the file is not updated after requiring it the second time? Does require not reload the file?
Here is my code:
function loadJson() {
    const jsonData = require("./config.json")
    // here I navigate through my json file and end up getting to the ... That won't be needed I guess :)
    return jsonData
}
// edits the XP of a user
function changeUserXP(receivedMessage) {
    let xpPerMessage = loadJson()["levelSystemInfo"].xpPerMessage
    // jsonReader is a small helper (not shown) that reads and parses a JSON file
    jsonReader('./config.json', (err, data) => {
        if (err) {
            console.log('Error reading file:', err)
            return
        }
        // increase the user's XP
        data.guilds[receivedMessage.guild.id].members[receivedMessage.author.id].xp += Number(xpPerMessage)
        data.guilds[receivedMessage.guild.id].members[receivedMessage.author.id].stats.messagesSent += 1
        fs.writeFile('./test_config.json', JSON.stringify(data, null, 4), (err) => {
            if (err) console.log('Error writing file:', err)
        })
    })
}
client.on("message", (receivedMessage) => {
changeUserXP(receivedMessage)
console.log(loadJson(receivedMessage))
});
I hope the code helps :)
If my question was not precise enough or if you have further questions, feel free to comment
Thank you for your help <3
This is because require() reads the file only once and caches the result. To read the same file again, you first have to delete its entry from require.cache (the key is the resolved path to the file).
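A minimal sketch of both options, assuming config.json sits next to the script; require.resolve gives the resolved path that require.cache is keyed by, and reading the file with fs avoids the module cache entirely:

const fs = require("fs")

// Option 1: drop the cached entry, then require again to get fresh data
function loadJsonFresh() {
    delete require.cache[require.resolve("./config.json")]
    return require("./config.json")
}

// Option 2: bypass require's cache by reading and parsing the file directly
function readJson(path) {
    return JSON.parse(fs.readFileSync(path, "utf8"))
}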
I am making a JS and Node.js CLI game. I need to create a Save game option that saves the current stats, so that once you exit the script and run it again you can choose Load game, which returns the same stats that were saved previously.
In order to save the current stats I am storing them in a JSON file.
// save.js
const save = async () => {
    clear(true)
    const data = JSON.stringify(userStats)
    writeFile("userStats.json", data, (err) => {
        if (err) {
            throw err
        }
    })
    await start()
    headerColor("Game saved", colors.blue, colors.green)
    console.log("Thanks for playing. -- (game saved)")
}
export default save
Once executed, the script shows this error:
ERR_INVALID_ARG_TYPE The "data" argument must be of type string or an instance of Buffer, TypedArray, or DataView. Received undefined.
I tried this: fs.writeFileSync(configPath, parse(nextBlock, 10).toString()) to try to turn it into a string, but it didn't work.
Can someone please help me with this error? Thanks in advance.
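For reference, that error means writeFile received undefined as its data argument, which is exactly what JSON.stringify returns when it is handed undefined, so userStats is most likely not defined in the scope where save runs. A minimal sketch, assuming the stats object is passed into save explicitly:

// save.js (sketch: stats are passed in rather than read from an outer scope)
import { writeFile } from "fs/promises"

const save = async (userStats) => {
    if (userStats === undefined) {
        throw new Error("No stats to save")
    }
    await writeFile("userStats.json", JSON.stringify(userStats, null, 2))
    console.log("Thanks for playing. -- (game saved)")
}

export default save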
I want to read the token for client.login(BOT_TOKEN); dynamically from a file/database, but client.login is getting executed before my file-read callback finishes.
BOT_TOKEN = '';
if (BUILD_PROFILE == 'dev') {
    filesystem.readFile('./../devToken.txt', 'utf8', (err, data) => {
        if (err) throw err;
        console.log(data);
        BOT_TOKEN = data;
    })
}
client.login(BOT_TOKEN);
This is the error I'm getting in the logs. I have double-checked the file, and console.log(data) shows the right token, but it's not being applied.
I suggest you place your token in an ENV file.
I also think you should copy your token directly from your bot's page on Discord and paste it in directly.
You console.log'd the data; was it the right token?
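If you go the env-file route, here is a minimal sketch using the dotenv package (the variable name BOT_TOKEN and the v12-style client are just assumptions to match the question):

// .env
// BOT_TOKEN=your-token-here

// index.js
require('dotenv').config();
const { Client } = require('discord.js');

const client = new Client();
client.login(process.env.BOT_TOKEN);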
A very easy way to do this would be to have a config.js file in your main bot folder, and set it out like this:
{
    token: "token-here"
}
Then, in your main.js file, require the config file into a variable, and at your bot.login just do bot.login(config.token).
You can also set your prefix in this file, allowing you to change your command prefix in the future.
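A minimal sketch of that layout, assuming a discord.js v12-style client to match the question's code:

// config.js
module.exports = {
    token: "token-here",
    prefix: "!"
}

// main.js
const { Client } = require('discord.js');
const config = require('./config.js');

const bot = new Client();
bot.login(config.token);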
Additionally, you could use an SQLite database that saves your token. You need the SQLite npm library from https://www.npmjs.com/package/sqlite, but it is very simple to set up; if anyone needs help, add me on Discord: Proto#4992.
N.B. SQLite databases will also come in useful when/if you want to set up a currency system in the future.
Disclaimer: I have been programming for about 4 months now, so I'm still very new to programming.
I am using Firebase Cloud Firestore for a database and inserting CSV files with large amounts of data in them. Each file can be about 100k records in length. The project I'm working on requires a user to upload these CSV's from a web page.
I created a file uploader and I'm using the PapaParse JS library to parse the CSV. It does so very nicely: it returns an array almost instantly, even for very long files. I've tried the largest files I can find and logged the result to the console, and it's very fast.
The problem is when I take the array it gives me, loop through it, and insert the rows into Cloud Firestore. It works, and the data is inserted exactly how I want it, but it's very slow, and if I close the browser window it stops inserting. Inserting only 50 records takes about 10-15 seconds, so with files of 100k records this is not going to work.
I was considering using Cloud Functions, but before I go and learn how all that works: maybe I'm just not doing this in an efficient way? So I thought I'd ask here.
Here is the JS
// Get the file uploader in the DOM
var uploader = document.getElementById('vc-file-upload');

// Listen for when a file is uploaded
uploader.addEventListener('change', function (e) {
    // Get the file
    var file = e.target.files[0];

    // Parse the CSV file and insert each row into Firestore
    const csv = Papa.parse(file, {
        header: true,
        complete: function (results) {
            console.log(results);
            var sim = results.data;
            var simLength = sim.length;
            for (var i = 0; i < simLength; i++) {
                var indSim = sim[i];
                var iccid = indSim.iccid;
                const docRef = firestore.collection('vc_uploads').doc(iccid);
                docRef.set({ indSim }).then(function () {
                    console.log('Insert complete.');
                }).catch(function (err) {
                    console.log('Got an error: ' + err);
                });
            }
        }
    });
});
It will almost certainly be faster overall if you just upload the file (perhaps to Cloud Storage) and perform the database operations in Cloud Functions or some other backend, and it will complete even if the user leaves the app.
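A rough sketch of that approach, assuming the CSV lands in the project's default Storage bucket and the rows keep the iccid field used above; Firestore batches cap at 500 writes, so the rows are committed in chunks:

// functions/index.js
const functions = require('firebase-functions');
const admin = require('firebase-admin');
const Papa = require('papaparse');

admin.initializeApp();

exports.importCsv = functions.storage.object().onFinalize(async (object) => {
    // Download the uploaded CSV into memory (fine for moderately sized files)
    const bucket = admin.storage().bucket(object.bucket);
    const [contents] = await bucket.file(object.name).download();
    const { data } = Papa.parse(contents.toString('utf8'), { header: true });

    // Commit the rows in batches of 500 (the Firestore batch limit)
    const db = admin.firestore();
    for (let i = 0; i < data.length; i += 500) {
        const batch = db.batch();
        data.slice(i, i + 500).forEach((row) => {
            batch.set(db.collection('vc_uploads').doc(row.iccid), row);
        });
        await batch.commit();
    }
});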
I am trying to code a receipt route that grabs receipts by their ID and then renders them to the page.
My route with MongoDB:
app.get('/receipt/:transactionId', (req, res) => {
    var transactionId = req.params.transactionId;
    console.log('TRANSACTION ID: ' + transactionId);
    SquareTransactionDB.findOne({_id: transactionId})
        .then((transaction) => {
            console.log('FOUND A TRANSACTION');
            res.render('receipt.hbs', {
                chargeList: transaction.receipt_list,
                date: transaction.created_on,
                transactionId
            });
            return;
        }).catch((e) => {
            console.log(e);
        });
});
What I get as a result is a web page that waits forever, and errors being thrown in the Node terminal.
As soon as I Ctrl+C cancel the server, the page renders. Otherwise, the page just waits for a response.
How is it possible that my code is being called with 'bootstrap.css', or any other value for that matter? What can I do to fix this? I'm on hour 4 and have come to the conclusion that I cannot answer this alone.
I can resolve the page load (with the error still occurring server-side) if I add next() within the catch block.
I appreciate any feedback.
Michael
UPDATE: After using the path '/bootstrap.css' instead of 'bootstrap.css' in my Handlebars file, this error stopped getting thrown. That still does not answer my question as to why my style sheet was being used as a route parameter, seemingly without purpose.
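For what it's worth, a relative href like 'bootstrap.css' resolves against the current URL, so on /receipt/123 the browser requests /receipt/bootstrap.css, which matches the :transactionId route; the findOne then fails, and because the catch block never sends a response, that request hangs. A minimal sketch of the same route where every path sends a response (status codes and messages are just placeholders):

app.get('/receipt/:transactionId', (req, res) => {
    var transactionId = req.params.transactionId;
    SquareTransactionDB.findOne({ _id: transactionId })
        .then((transaction) => {
            if (!transaction) {
                // No matching receipt: respond instead of hanging
                return res.status(404).send('Receipt not found');
            }
            res.render('receipt.hbs', {
                chargeList: transaction.receipt_list,
                date: transaction.created_on,
                transactionId
            });
        })
        .catch((e) => {
            console.log(e);
            // Always end the request, even on errors such as invalid ObjectIds
            res.status(400).send('Invalid receipt id');
        });
});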
Hello all, I have a collection in MongoDB whose size is 30K documents.
When I run a find query (I am using Mongoose) from the Node server, the following problems occur:
1: It takes a long time to get the result back from the database server
2: While creating a JSON object from the result data, the Node server crashes
To solve the problem I tried to fetch the data in chunks (as stated in the docs).
Now I am getting the documents one by one in my stream.on('data') callback.
Here is my code:
var index = 1;
var stream = MyModel.find().stream();

stream.on('data', function (doc) {
    console.log("document number " + index);
    index++;
}).on('error', function (err) {
    // handle the error
}).on('close', function () {
    // the stream is closed
});
And the output of my code is:
document number 1, document number 2, ...... document number 30000
The output shows that the database is sending the documents one by one.
Now my question is: is there any way to fetch the data in chunks of 5000 documents?
Or is there a better way to do the same?
Thanks in advance
I tried using batch_size() but it did not solve my problem.
Can I use the same streaming for map-reduce?
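As far as I know there is no built-in 5000-document chunk mode on the stream itself, but here is a sketch of buffering the stream into chunks yourself, pausing while each batch is handled (processBatch is a placeholder for whatever work you do per chunk):

var BATCH_SIZE = 5000;
var buffer = [];
var stream = MyModel.find().stream();

function processBatch(docs) {
    // build your JSON / do your work on one chunk at a time
    console.log("processing " + docs.length + " documents");
}

stream.on('data', function (doc) {
    buffer.push(doc);
    if (buffer.length >= BATCH_SIZE) {
        stream.pause();      // stop reading while this chunk is handled
        processBatch(buffer);
        buffer = [];
        stream.resume();
    }
}).on('error', function (err) {
    console.error(err);
}).on('close', function () {
    if (buffer.length) processBatch(buffer);   // leftover documents
});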