I am in the process of learning Meteor while at the same time experimenting with the TwitchTV API.
My goal right now is to call the Twitch API every minute and then insert part of the JSON object into the Mongo database. Since MongoDB matches on _id and Twitch uses _id as its key, I am hoping subsequent inserts will either update existing records or create a new one if the _id doesn't exist yet.
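An aside worth flagging: a plain Mongo insert throws a duplicate-key error when the _id already exists, so repeated inserts won't update records on their own. An upsert is the usual way to get insert-or-update semantics; a minimal sketch, assuming the Test collection used below and a single stream object taken from the API response:

// update the record if this _id exists, insert it otherwise
// (_id itself can't go inside $set, so pass the remaining fields)
var fields = _.omit(stream, '_id'); // underscore ships with Meteor
Test.upsert({ _id: stream._id }, { $set: fields });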
The call and insert (at least the initial one) seem to be working fine. However, I can't seem to get the Meteor.setTimeout() function to work. The call happens when I start the app but does not continue occurring every minute.
Here is what I have in a .js file in my server folder:
Meteor.methods({
  getStreams: function() {
    this.unblock();
    var url = 'https://api.twitch.tv/kraken/streams?limit=3';
    return Meteor.http.get(url);
  },
  saveStreams: function() {
    Meteor.call('getStreams', function(err, res) {
      var data = res.data;
      Test.insert(data);
    });
  }
});
Deps.autorun(function(){
  Meteor.setTimeout(function(){ Meteor.call('saveStreams'); }, 1000);
});
Any help or advice is appreciated.
I made the changes mentioned by @richsilv and @saimeunt and it worked. Resulting code:
Meteor.startup(function(){
  // setInterval repeats, where setTimeout fires only once;
  // 1000 ms = 1 s, so use 60000 to match the once-a-minute goal
  Meteor.setInterval(function(){ Meteor.call('saveStreams'); }, 1000);
});
I've been trying to implement a web scraper that uses data pulled from MongoDB to build an array of URLs to scrape periodically with Puppeteer, using setIntervalAsync to repeat the scrape.
Here is my code right now that throws "UnhandledPromiseRejectionWarning: TypeError: Cannot convert undefined or null to object at Function.values..."
puppeteer.js
async function scrape(array){
  // initialize for loop here
  let port = '9052';
  if (localStorage.getItem('scrapeRunning') == 'restart') {
    clearIntervalAsync(scrape);
    localStorage.setItem('scrapeRunning', 'go'); // setItem takes (key, value)
  } else if (localStorage.getItem('scrapeRunning') != 'restart') {
    /// Puppeteer scrapes urls in array here ///
  }
}
server.js
app.post('/submit-form', [
  // Form Validation Here //
], (req, res) => {
  async function submitForm(amazonUrl, desiredPrice, email){
    // Connect to MongoDB and update entry or create new entry
    // with post request data
    createMongo.newConnectToMongo(amazonUrl, desiredPrice, email)
      .then(() => {
        // Set local variable that will alert scraper to clearIntervalAsync
        localStorage.setItem('scrapeRunning', 'restart');
        // before pulling the new updated mongoDB data ...
        return createMongo.pullMongoArray();
      })
      .then((result) => {
        // and start scraping again with the new data
        puppeteer.scrape(result);
      });
  }
  submitForm(req.body.amazonUrl, req.body.desiredPrice, req.body.email);
});
createMongo.pullMongoArray()
  .then((result) => {
    setIntervalAsync(puppeteer.scrape, 10000, result);
  });
Currently the scraper starts as expected after the server starts and keeps 10 seconds between when one scrape ends and the next begins. Once there is a POST submit, the MongoDB collection gets updated with the post data and the localStorage item is created, but the scrape function goes off the rails and throws the TypeError. I am not sure what is going on and have tried multiple ways to fix this (including leaving setIntervalAsync and clearIntervalAsync inside the post request code block) but have been unsuccessful so far. I am somewhat new to coding and extremely inexperienced with asynchronous code, so if someone has experience with this kind of issue and could shed some light on what is happening, I would truly appreciate it!
I can only think that it has something to do with async, because no matter what I have tried, pullMongoArray also seems to run before newConnectToMongo is complete.
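A hedged guess at that ordering, for what it's worth: if newConnectToMongo kicks off its database work without returning or awaiting the promise for it, its own promise resolves immediately and the chain moves on to pullMongoArray too early. A minimal sketch of forcing the sequence, assuming both createMongo functions genuinely return promises:

// hypothetical wrapper: await makes pullMongoArray wait for the update
async function updateThenPull(amazonUrl, desiredPrice, email) {
  await createMongo.newConnectToMongo(amazonUrl, desiredPrice, email);
  return createMongo.pullMongoArray();
}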
After a few more hours of searching around, I think I may have found a workable solution. I've completely eliminated the use of localStorage and removed the if and else if statements from within the scrape function. I have also made a global timer variable and added control functions to this file.
puppeteer.js
let timer;

function start(result){
  timer = setIntervalAsync(scrape, 4000, result);
}

function stop(){
  clearIntervalAsync(timer);
}

async function scrape(array){
  // initialize for loop here
  let port = '9052';
  // Puppeteer scrapes urls from array here //
}

// exported so server.js can call puppeteer.start/stop (assumed CommonJS)
module.exports = { start, stop };
I've altered my server code a little bit, so at server start it gets the results from MongoDB and uses them in the scraper's start function. A post request calls the stop function before updating MongoDB, pulling a new result from MongoDB, and then calling the scraper's start function again.
server.js
createMongo.pullMongoArray()
  .then((result) => {
    puppeteer.start(result);
  });

app.post('/submit-form', [
  // Form Validation In Here //
], (req, res) => {
  async function submitForm(amazonUrl, desiredPrice, email){
    // Stop the current instance of the scrape function
    puppeteer.stop();
    // Connect to MongoDB and update entry or create new entry
    // with post request data
    createMongo.newConnectToMongo(amazonUrl, desiredPrice, email)
      .then(() => {
        // before pulling the new updated mongoDB data ...
        console.log('Pulling New Array');
        return createMongo.pullMongoArray();
      })
      .then((result) => {
        // and restarting the repeating scrape function
        puppeteer.start(result);
      });
  }
  submitForm(req.body.amazonUrl, req.body.desiredPrice, req.body.email);
});
I'm working on a section of my app where a user can add a note to his project. Each note has the capability to have additional comments appended to it.
So far the backend is working. When I call all records at once or a single record by id manually, whether via Postman or by simply adding a project's id (taken from Mongo) to the browser URL, the records come up as expected.
The problem starts when I try to pull this information through the front end via $.getJSON.
Say for example that I have two projects in my app.
Project 01 has an id of "123" and has 3 comments
and
Project 02 has an id of "456" and has 10 comments
When I call all projects on the front end of the app, I see both appear and all their comments come through OK. But when I try to call, for example, project two by id, the project does show, yet I get 10 "undefined" in place of that project's 10 comments. The same thing happens for any single record I call.
And again, this happens only when calling it via jQuery's $.getJSON, because when I do it manually via Postman or the browser, the comments come through fine.
Here is the code for when I try to find one record (not fully working).
This is the backend code:
app.get("/notesapi/:tabID", (request, response) => {
var idNum = request.params.tabID;
var newIdNumber = idNum.trim();
NotesModel.findOne({_id: newIdNumber}, (error, data)=>{
if(error){
console.log("error in finding this particular record!");
throw error;
} else {
console.log("data for this one record found YO");
response.status(200);
response.json(data);
}
});
});
And this is the front end code:
function getFreshComments(tabID){
  console.log("FRONTEND -- the link is: /notesapi/" + tabID);
  $.getJSON("/notesapi/456", showCommentsGotten);

  function showCommentsGotten(dataFedIn){
    var tabInnerComments = $("#" + tabID + " .theNote");
    var output = "";
    $.each(dataFedIn, (key, item) => {
      output += item.todoMessages + "<br>";
    });
    var normalComments = output;
    var newComments = normalComments.split(",").join("<br>");
    tabInnerComments.html(newComments);
  }
}
As the example above explains, if I want to pull the 10 comments for id 456 using $.getJSON("/notesapi/456", showCommentsGotten); it returns me 10 "undefined".
When I remove the id number from the URL, then it fetches me ALL the comments for ALL the notes.
I don't get any errors anywhere. What am I missing or doing wrong?
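A hedged note rather than a confirmed diagnosis: findOne returns a single document object, while the all-records route presumably returns an array. $.each over an array visits each note, but $.each over a single object visits its fields, so item.todoMessages comes back undefined once per field, which would produce exactly the run of "undefined" described. A sketch of the two shapes, with hypothetical field names:

// all records: an array, so $.each visits each note
// [ { _id: "123", todoMessages: [...] }, { _id: "456", todoMessages: [...] } ]
// one record by id: a single object, so $.each visits its ~10 fields
// { _id: "456", todoMessages: [...], title: "...", ... }
$.each(singleNote, (key, item) => {
  // item is a field value here, not a note,
  // so item.todoMessages is undefined for each one
});
// iterating the document's own comments array prints them instead:
$.each(singleNote.todoMessages, (i, msg) => { output += msg + "<br>"; });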
I have a Realtime Database with Firebase, and I'm using the following code to connect to the database and get some data from it.
window.onload = function(){
  // assumes a global 'order' counter is initialized elsewhere
  var databaseWebsites = firebase.database().ref('/websites').orderByChild('votes');
  console.log(databaseWebsites);
  databaseWebsites.on('value', function(snapshot) {
    snapshot.forEach(function(childSnapshot) {
      var webTemp = document.getElementById(childSnapshot.val().id);
      webTemp.style.order = order;
      var webText = webTemp.getElementsByClassName('likeText');
      webText[0].innerHTML = childSnapshot.val().votes;
      order--;
    });
    order = 0;
  });
};
It gets all the data, in order, and uses it correctly.
The problem is, I don't want the data on the front end to update until the user refreshes the page. The system is a voting system ordered by vote count; if it were constantly updating, it would be a bad user experience.
Any help will be appreciated.
Thanks
Change the on to once. Firebase's on listens for changes in your database node and sends a response every time the data changes.
databaseWebsites.on('value', function(snapshot) {
to
databaseWebsites.once('value', function(snapshot) {
An excerpt from the Firebase docs:
The value event is called every time data is changed at the specified
database reference, including changes to children. To limit the size
of your snapshots, attach only at the lowest level needed for watching
changes.
Visit this url to read more
The accepted response is not correct (maybe outdated?), because once() requires adding a then().
It's actually
databaseWebsites.once('value').then(function(snapshot) {}
that replaces
databaseWebsites.on('value', function(snapshot) {}
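Putting that together with the original handler, a minimal sketch:

// one-time read, promise style; the snapshot handling is unchanged
databaseWebsites.once('value').then(function(snapshot) {
  snapshot.forEach(function(childSnapshot) {
    // render childSnapshot.val() exactly as before
  });
});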
The title might sound strange, but I have a website that will query some data in a Mongo collection. However, there is no user system (no logins, etc.). Everyone is an anonymous user.
The issue is that I need to query some data in the Mongo collection based on the input text boxes the user fills in. Hence I cannot use this.userId to insert a row of specifications that the server end reads and uses to send the right data to the client.
Hence:
// Code run on the server
if (Meteor.isServer) {
  Meteor.publish("comments", function () {
    return comments.find();
  });
}

// Code run on the client
if (Meteor.isClient) {
  Template.body.helpers({
    comments: function () {
      // Add code to try to parse out the data that we don't want here
      return comments.find();
    }
  });
}
It seems possible to filter some data on the client based on user input. However, if I use return comments.find() on the server, it will send a lot of data to the client, and the client then takes on the job of cleaning the data.
There shouldn't actually be much data (around 10,000 rows), but suppose there were a million rows; what should I do?
I'm very new to MeteorJS, just completed the tutorial, any advice is appreciated!
My advice is to read the docs, in particular the section on Publish and Subscribe.
By changing the signature of your publish function above to one that takes an argument, you can filter the collection on the server, limiting the data transferred to what is required.
Meteor.publish("comments", function (postId)
{
return comments.find({post_id: postId});
});
Then on the client you will need a subscribe call that passes a value for the argument.
Meteor.subscribe("comments", postId)
Ensure you have removed the autopublish package, or it will ignore this filtering.
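If the app was created from the default skeleton, autopublish is present by default and can be removed from the project root:

meteor remove autopublish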
tl;dr - What is the best pattern to create a 'proprietary database' with data from an API? In this case, using Meteor JS and collections in MongoDB.
Steps
1. Ping API
2. Insert Data into Mongo at some interval
In lib/collections.js
Prices = new Mongo.Collection("prices");
Basic stock API call, in server.js:
Meteor.methods({
  getPrice: function () {
    var result = Meteor.http.call("GET", "http://api.fakestockprices.com/ticker/GOOG.json");
    return result.data;
  }
});
Assume the JSON is returned clean and tidy, and that I want to store the entire object (how you manipulate what is returned is not important; storing the return value is).
We could manipulate the data in the Meteor.method function above, but should we? In Angular, services are used to call APIs, but it's recommended to modularize and keep the API call in its own function. Let's borrow that and Meteor.call the getPrice method above.
Assume this is also done in server.js (please correct me if not):
Meteor.call("getPrice", function(error, result) {
if (error)
console.log(error)
var price = result;
Meteor.setInterval(function() {
Prices.insert(price);
}, 1800000); // 30min
});
Once in the db, a pub/sub could be established, which I'll omit and link to this overview.
You may want to take a look at the synced-cron package.
With a cron job it's pretty easy; just call your method:
// server.js
SyncedCron.start();

SyncedCron.add({
  name: "get Price",
  schedule: function(parser){
    return parser.text('every 30 minutes');
  },
  job: function(){
    return Meteor.call("getPrice");
  }
});
Then in getPrice you can do var result = HTTP.call(/* etc */); and Prices.insert(result). You would want some additional checks of course, as you have pointed out.
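A minimal sketch of that shape, assuming the http package is installed and reusing the URL from the question:

// getPrice fetches and stores in one server-side method
Meteor.methods({
  getPrice: function () {
    var result = HTTP.call("GET", "http://api.fakestockprices.com/ticker/GOOG.json");
    if (result && result.data) {
      Prices.insert(result.data);
    }
  }
});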