Getting chunks of data in a Node server from a MongoDB server - javascript

Hello all, I have a collection in MongoDB whose size is 30K documents.
When I run a find query (I am using Mongoose) from the Node server, the following problems occur:
1: It takes a long time to get the result back from the database server.
2: While creating a JSON object from the result data, the Node server crashes.
To solve the problem I tried to fetch the data in chunks (as stated in the docs).
Now I am getting the documents one by one in my stream.on('data') callback.
Here is my code:
var index = 1;
var stream = MyModel.find().stream();

stream.on('data', function (doc) {
    console.log("document number " + index);
    index++;
}).on('error', function (err) {
    // handle the error
}).on('close', function () {
    // the stream is closed
});
And the output of my code is:
document number 1, document number 2, ... document number 30000
The output shows that the database is sending the documents one by one.
Now my question is: is there any way to fetch the data in chunks of 5000 documents?
Or is there a better way to do the same?
Thanks in advance.
I tried using batch_size() but it did not solve my problem.
Can I use the same streaming for map-reduce?
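One possible approach (a sketch only; processChunk is a hypothetical stand-in for whatever per-chunk work is needed) is to keep the stream but buffer documents in the 'data' handler and flush every 5000:
var CHUNK_SIZE = 5000;
var buffer = [];
var stream = MyModel.find().stream();

stream.on('data', function (doc) {
    buffer.push(doc);
    if (buffer.length >= CHUNK_SIZE) {
        processChunk(buffer); // hypothetical per-chunk handler
        buffer = [];
    }
}).on('error', function (err) {
    console.error(err);
}).on('close', function () {
    if (buffer.length > 0) processChunk(buffer); // flush the final partial chunk
});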

Related

Required JSON file has old values from before running the program

I am writing a Discord bot using JavaScript (discord.js).
I use JSON files to store my data, and of course I always need the latest data.
I do the following steps:
I start the bot.
I run a function that requires the config.json file every time a message is sent.
I increase the XP a user gets from the message they sent.
I update the user's XP in config.json.
I log the data.
So now, after logging the first time (i.e. after sending the first message), I get the data that was in the JSON file before I started the bot (which makes sense). But after sending the second message, I expect the XP value to be higher than before, because the data should have been updated, the file loaded anew, and the data logged again.
(Yes, I do update the file every time. When I look in the file myself, the data is always up to date.)
So is there any reason the file is not updated after requiring it the second time? Does require not reload the file?
Here is my code:
function loadJson() {
    var jsonData = require("./config.json")
    // here I navigate through my json file and end up getting to the ... That won't be needed I guess :)
    return jsonData
}
// edits the XP of a user
function changeUserXP(receivedMessage) {
    let xpPerMessage = getJsonData(receivedMessage)["levelSystemInfo"].xpPerMessage
    jsonReader('./config.json', (err, data) => {
        if (err) {
            console.log('Error reading file:', err)
            return
        }
        // increase the user's XP
        data.guilds[receivedMessage.guild.id].members[receivedMessage.author.id].xp += Number(xpPerMessage)
        data.guilds[receivedMessage.guild.id].members[receivedMessage.author.id].stats.messagesSent += 1
        fs.writeFile('./test_config.json', JSON.stringify(data, null, 4), (err) => {
            if (err) console.log('Error writing file:', err)
        })
    })
}
client.on("message", (receivedMessage) => {
changeUserXP(receivedMessage)
console.log(loadJson(receivedMessage))
});
I hope the code helps :)
If my question was not precise enough or if you have further questions, feel free to comment
Thank you for your help <3
This is because require() reads the file only once and caches it. To read the same file again, you should first delete its key (the key is the path to the file) from require.cache.
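A minimal sketch of that approach, reusing the loadJson helper from the question:
function loadJson() {
    // drop the cached copy so require() re-reads the file from disk
    delete require.cache[require.resolve("./config.json")]
    return require("./config.json")
}
Alternatively, reading the file with fs.readFileSync and JSON.parse avoids the require cache entirely.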

Firebase Cloud Firestore: Insert Large Array

Disclaimer: I have been programming for about 4 months now, so I'm still very new to programming.
I am using Firebase Cloud Firestore as a database and inserting CSV files with large amounts of data in them. Each file can be about 100k records long. The project I'm working on requires a user to upload these CSVs from a web page.
I created a file uploader and I'm using the PapaParse JS tool to parse the CSV, and it does so very nicely; it returns an array instantly, even for very long files. I've tried the largest files I can find and logged the result to the console, and it's very fast.
The problem is when I take that array, loop through it, and insert it into Cloud Firestore. It works, and the data is inserted exactly how I want it, but it's very slow, and if I close the browser window it stops inserting. Inserting only 50 records takes about 10-15 seconds, so with files of 100k records this is not going to work.
I was considering using Cloud Functions, but before I try to learn how all that works, maybe I'm just not doing this in an efficient way? So I thought I'd ask here.
Here is the JS:
// Get the file uploader in the DOM
var uploader = document.getElementById('vc-file-upload');

// Listen for when a file is uploaded
uploader.addEventListener('change', function (e) {
    // Get the file
    var file = e.target.files[0];
    // Parse the CSV file and insert into Firestore
    const csv = Papa.parse(file, {
        header: true,
        complete: function (results) {
            console.log(results);
            sim = results.data;
            var simLength = sim.length;
            for (var i = 0; i < simLength; i++) {
                var indSim = sim[i]
                iccid = indSim.iccid;
                const docRef = firestore.collection('vc_uploads').doc(iccid);
                docRef.set({indSim}).then(function () {
                    console.log('Insert complete.')
                }).catch(function (err) {
                    console.log('Got an error: ' + err)
                })
            }
        }
    });
});
It will almost certainly be faster overall if you just upload the file (perhaps to Cloud Storage) and perform the database operations in Cloud Functions or some other backend, and it will complete even if the user leaves the app.
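If you do keep the writes on the client, batched writes cut the per-document round trips. A sketch against the same firestore handle as in the question (Firestore caps each batch at 500 writes):
function insertInBatches(records) {
    var commits = [];
    for (var i = 0; i < records.length; i += 500) {
        var batch = firestore.batch();
        records.slice(i, i + 500).forEach(function (rec) {
            // queue one write per record; nothing is sent until commit()
            batch.set(firestore.collection('vc_uploads').doc(rec.iccid), rec);
        });
        commits.push(batch.commit());
    }
    return Promise.all(commits);
}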

Ionic 1 - ngCordovaNativeStorage issue?

How's it going?
I've been using this plugin for a long time, but today I needed to add some notifications to my app. I need to store my data on the device and then, when I get an internet connection, send it to my servers. But OK, this is not important.
What I'm trying to do is:
Get my data from the server;
Store my data on the device using nativeStorage.
Getting the data and putting it in my storage:
myFactory.getMyData().then(function (success) {
    $cordovaNativeStorage.setItem("mydata", success);
}, function (err) {...});
OK, my data was correctly stored. Next I loop through this data and show it in the view.
$cordovaNativeStorage.getItem("mydata").then(function (success)
{
for (var i in success)
{
$scope.myData.push(success[i]);
}
}, function (err){
getMyData(); // function who will get my data from server
});
OK until now.
Next I send this data to another view and show it. But when I make ANY modification to that data (even if I change it directly in the object or in nativeStorage), the modification does not persist when I go back to the main view.
$cordovaNativeStorage.getItem("myData").then(function (success){
success[myIndex].anyProperty = 'abc';
});
Is that a bug or am I not understanding something?
When you call $cordovaNativeStorage.setItem(), Native Storage actually saves a JSON string rather than a JSON object. The same goes for $cordovaNativeStorage.getItem(): it returns a JSON string. Thus, you must parse it first before manipulating the object.
$cordovaNativeStorage.getItem("myData").then(function (jsonString){
if (jsonString) {
var jsonObj = JSON.parse(jsonString);
jsonObj.anyProperty = 'abc';
}
});
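Note that parsing alone only changes the in-memory copy; for the change to survive navigating back to the main view, it has to be written back. A sketch, assuming (per this answer) that values round-trip as JSON strings:
$cordovaNativeStorage.getItem("myData").then(function (jsonString) {
    var jsonObj = JSON.parse(jsonString);
    jsonObj[myIndex].anyProperty = 'abc';
    // write the modified object back so the change persists across views
    return $cordovaNativeStorage.setItem("myData", JSON.stringify(jsonObj));
});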

Meteor - Server-side API call and insert into mongodb every minute

I am in the process of learning Meteor while at the same time experimenting with the TwitchTV API.
My goal right now is to call the Twitch API every minute and then insert part of the JSON object into the Mongo database. Since MongoDB matches on _id and Twitch uses _id as its key, I am hoping subsequent inserts will either update existing records or create new ones if the _id doesn't exist yet.
The call and insert (at least the initial one) seem to be working fine. However, I can't seem to get the Meteor.setTimeout() function to work. The call happens when I start the app but does not keep occurring every minute.
Here is what I have in a .js file in my server folder:
Meteor.methods({
    getStreams: function () {
        this.unblock();
        var url = 'https://api.twitch.tv/kraken/streams?limit=3';
        return Meteor.http.get(url);
    },
    saveStreams: function () {
        Meteor.call('getStreams', function (err, res) {
            var data = res.data;
            Test.insert(data);
        });
    }
});
Deps.autorun(function () {
    Meteor.setTimeout(function () { Meteor.call('saveStreams'); }, 1000);
});
Any help or advice is appreciated.
I made the changes mentioned by @richsilv and @saimeunt and it worked. Resulting code:
Meteor.startup(function () {
    Meteor.setInterval(function () { Meteor.call('saveStreams'); }, 1000);
});
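Note that a 1000 ms interval fires every second; for the stated goal of calling the API once per minute, the interval would be 60 * 1000:
Meteor.startup(function () {
    // 60 * 1000 ms = one minute between calls
    Meteor.setInterval(function () { Meteor.call('saveStreams'); }, 60 * 1000);
});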

Finding saved data from mongo shell (no output)

Here is the initialization code:
mongoose.connect('mongodb://localhost/gpsdb');

var db = mongoose.connection;
db.on('open', function () {
    // now we can start talking
});
After successfully opening the connection, I am saving data like this; it gives me no errors.
function saveGPSData(data) {
    var newData = new GPSData(data);
    newData.save(function (err) {
        if (err)
            return console.error(err);
    });
}
Now in the mongo shell I am trying to retrieve that data, but it gives me empty output:
> use gpsdb
> db.GPSData.find();
>
It gives me no output. Also, can I find out what models are in gpsdb?
Here is the full source code http://pastebin.com/K7QPYAx8
I just found that in the db folder there are these files for my db, created by MongoDB:
/data/db/gpsdb.0
/data/db/gpsdb.1
/data/db/gpsdb.n
A good place to start to get a quick answer is
https://groups.google.com/forum/#!forum/mongoose-orm
the community is very responsive :)
In the shell I did the following:
> use gpsdb
switched to db gpsdb
> show collections
gpsdatas
From here I found that the collection name is gpsdatas. Not sure why it's adding an extra 's' to my model, although you can see from the code that I am setting the model to
var GPSData = mongoose.model('GPSData', GPSDataSchema);
Now using the shell, it works like this:
>db.gpsdatas.find()
