I am looking to work with the Twitter stream API in JavaScript and have a script that runs successfully, pulling from the streaming API. My question is: what would be the best way to set this up so it can run constantly? Or would it be better to switch to the search API instead? I am just trying to collect tweets based on a few keywords, but I want to collect a load of them and store them in MongoLab. Would a cron job be best for this? I am going to use OpenShift to handle the streaming and processing.
I think I am looking for guidance on the best route so I don't have to constantly monitor and check that it is collecting tweets.
Thank you!
var Twit = require('twit');
var MongoClient = require('mongodb').MongoClient;

var T = new Twit({
  consumer_key: '***',
  consumer_secret: '***',
  access_token: '***',
  access_token_secret: '***'
});

var url = "***";

MongoClient.connect(url, function (err, db) {
  if (err) {
    return console.log(err);
  }
  var col = db.collection('test');

  // filter the public stream on keywords
  var stream = T.stream('statuses/filter', {track: ['#food', 'drinks']});

  stream.on('tweet', function (data) {
    console.log("tweet: " + data.id);
    col.insert(data, function (err, result) {
      if (!err) {
        console.log("insert successful on tweet: " + data.id);
      } else {
        console.log(err);
      }
    });
  });
});
Someone else might be able to provide a better answer, but I think using the cron add-on cartridge would be the way to go. You could set it up to execute jobs on your cartridge at timed intervals, which would keep you from having to pull things manually. Here's an OpenShift blog article that can help you get started: https://blog.openshift.com/getting-started-with-cron-jobs-on-openshift/.
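If you would rather keep the streaming script itself alive as one long-lived process, twit's stream object also emits lifecycle events you can log, so disconnects and reconnects are visible without babysitting it. A small sketch, attached to the stream created in the question's script (event names per the twit documentation):
stream.on('disconnect', function (disconnectMessage) {
  console.log('stream disconnected:', disconnectMessage);
});
stream.on('reconnect', function (request, response, connectInterval) {
  // twit schedules its own reconnect; this just surfaces it in the logs
  console.log('reconnecting in ' + connectInterval + 'ms');
});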
Related
I'm working on a MERN webapp (using MySQL instead of MongoDB), and we are having an issue where, at some point after a query to add an entry, the backend somehow re-runs that last query. In the meantime, between the original entry and the duplicate one, the backend runs continuously to keep the frontend displaying up-to-date data.
I've read in old threads that the MySQL driver for Node may have some issues with memory leaks and thus leave queries sleeping. I don't believe this is a frontend issue, and the backend isn't running the queries as if they were called from the function that handles the frontend submit button click.
I'm a noob to full-stack JS, so I don't really know where to look for an answer to this issue. I'm not sure if we should just band-aid it and make sure a query can't be sent unless the submit button was clicked within the last second.
EDIT: Here's code for the query to add the entry (data for a music album).
db.query(insertAlbumQuery, insertAlbumValues, function(err, result) {
  if (err) throw err;
  console.log("1 Album added.");
});
We don't close the connection to the db after querying; one connection runs for the duration of the app's execution.
Here's the code for the db connection:
let mysql = require("mysql");

var db;

function connectDB() {
  if (!db) {
    db = mysql.createConnection({
      host: "******",
      user: "******",
      password: "******",
      database: "******"
    });
    db.connect(function(err) {
      if (err) {
        return console.error("error: " + err.message);
      } else {
        console.log("Connected to the MySQL server.");
      }
    });
  }
  return db;
}

module.exports = connectDB();
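Since module.exports is the result of calling connectDB(), every file that requires this module shares the same connection object. For illustration, a consumer would look roughly like this (the file path is assumed):
// e.g. in models/album.js (path assumed)
const db = require("../db"); // the same, already-connected instance everywhere
db.query("SELECT 1 + 1 AS two", function(err, rows) {
  if (err) return console.error(err);
  console.log(rows[0].two); // 2
});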
From the main backend app.js file, we have the router POST:
router.post("/addAlbum", (req, res) => {
const {
Album_title,
Artist,
Release_date,
Category,
Description,
Rotation
} = req.body;
album.add(Album_title, Artist, Release_date, Category, Description, Rotation, null);
});
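One thing worth noting about the route above: it never sends a response, so the request is left pending. A sketch of acknowledging it, assuming album.add's last argument (passed as null above) is a completion callback, which is only an assumption about that helper:
router.post("/addAlbum", (req, res) => {
  const { Album_title, Artist, Release_date, Category, Description, Rotation } = req.body;
  // Hypothetical: album.add's last argument is assumed to be a completion callback
  album.add(Album_title, Artist, Release_date, Category, Description, Rotation, (err) => {
    if (err) return res.status(500).send(err.message);
    res.json({ added: Album_title }); // acknowledge so the client isn't left waiting
  });
});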
From the front end we post with
addAlbum = currentState => {
  this.setState({ Rotation: +this.state.Rotation });
  axios.post("http://localhost:3001/api/addAlbum", this.state);
};
Not sure if this is enough background on the app, but I can add more snippets if needed.
I am currently developing APIs in Express.js. I want to write a function which saves analytics to the DB, but I should be able to call it in a fire-and-forget way. The function should accept parameters and do its work. It should behave like a separate thread, so the current code execution does not wait for its response, similar to the way Akka Actors work in Java. Can someone suggest a way to do this, or a link to refer to?
Node is async by default. Just send your response outside of the db query callback:
app.get("/ping", function (req, res) {
// fire
dbConnection.query("UPDATE analytics SET count = count + 1", function(err, result) {
// forget
});
res.send("Pong");
});
You can add your information to some kind of message queue and then launch another process which listens to the MQ and processes messages accordingly.
It's not exactly how Actors work, but that's how it's usually done in the Node.js realm.
For example, you can use kue, AWS SQS, Google Pub/Sub, or any other available solution:
// example with kue
// http-process.js
var kue = require('kue');
var queue = kue.createQueue();
// ...
app.post('/something-somewhere', (req, res, next) => {
  var job = queue.create('event', {
    data: 'analytics, data',
    median: 5.3,
  }).save(function(err) {
    if (err) return next(err);
    res.send('ok');
  });
});

// event-processor.js
var kue = require('kue');
var queue = kue.createQueue();
queue.process('event', function(job, done) {
  someKindOfORM.myEventsTable.insert(job.data).notify(done);
});
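Note that with kue the two files above run as separate node processes (node http-process.js and node event-processor.js), which is what gives you the fire-and-forget behaviour; kue stores its jobs in Redis, so both processes need to point at the same Redis instance.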
I'm making a web application using the MEAN stack and the MVC design pattern. I am trying to perform a POST request from the Angular front end to find a document in my server-side MongoDB (version 2.4.9). The console logs show that the query is successful, but when I try to send the response back to the client, the query result is undefined.
I understand that Node.js is asynchronous and uses callbacks, but I am having trouble understanding what is wrong with my code. I tried using returns and callbacks but I can't get it working. I'm confused about how to use the controller to access the model and have the controller ultimately send the response.
Here is my code to connect to the database (model):
var MongoClient = require('mongodb').MongoClient;

module.exports = {
  readDocument: function(callback, coll, owner) {
    // Connect to database
    MongoClient.connect("mongodb://localhost:27017/tradingpost", function(err, db) {
      if (err) {
        console.log("Cannot connect to db (db.js)");
        callback(err);
      }
      else {
        console.log("Connected to DB from db.js: ", db.databaseName);
        // Read document by owner
        // Get the documents collection
        var collection = db.collection(coll);
        // Find document
        collection.find({owner: owner}).toArray(function (err, result) {
          if (err) {
            console.log(err);
          } else if (result.length) {
            console.log('Found:', result);
          } else {
            console.log('No document(s) found with defined "find" criteria!');
          }
          // Close connection
          db.close();
          return callback(result);
        });
      }
    });
  }
};
And here is my controller that sends the response:
var model = require('../models/db');

exports.sendRecentPosts = function (req, res) {
  // Connect to the DB
  // Run query for recent posts
  // Close the connection
  // Send the data to the client
  var result = model.readDocument(dbCallback, "gs", "Mana");
  res.end(result);
};
Client's post request:
// Use post for secure queries
// Need recent posts for display
$http.post('/recent').
  success(function(responseData) {
    $scope.testValue = responseData;
  }).
  error(function(responseData) {
    console.log('Recent posts POST error. Received: ', responseData);
  });
Snippet for my express route:
var goodsServices = require('../controllers/gs-server-controller.js');
app.post('/recent', goodsServices.sendRecentPosts);
I have been struggling with this for a long time and searched the forum for solutions but could not find any. Thanks for any feedback.
I do not know why this question has not been answered yet. When I faced the same problem, I learnt that the responses to all DB queries are returned only after the DB transaction is complete. Try placing db.close() within the success callback of the find() call.
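By the same logic, the controller can only send the data from inside a callback; a minimal sketch of sendRecentPosts reworked that way, keeping the callback-first readDocument signature from the question:
var model = require('../models/db');

exports.sendRecentPosts = function (req, res) {
  // The query result only exists once Mongo has responded, so reply inside the callback
  model.readDocument(function (result) {
    res.json(result);
  }, "gs", "Mana");
};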
My overall goal is the following:
-> Get data from fabric.io (=crashlytics) into a geckoboard dashboard.
To my knowledge there is no API on the crashlytics/fabric side, so I had the following idea:
Write a script in Node.js (because I know Node.js a bit; I'm no expert though) that would:
Open the html page where the data I want is
Find my data using htmlparser
Save that data into a google sheet
Make geckoboard read the google sheet and display the data
Step 4 is done already and I'm working on step 1 now.
Unfortunately I'm having a problem, as the page is not publicly accessible; I need to authenticate using my user account.
I've reused some code that works just fine when doing GET/POST/PUT against some other websites' REST APIs, but it doesn't seem to work here, as fabric redirects me to the login page.
However, when I search the web for node auth, I find modules for building a server that handles authentication, whereas I'm trying to use node to log in to a website.
It's quite possible that what I'm trying to achieve, or the way I'm trying to do it, doesn't make sense at all. But as I'm not skilled enough to realise that, I'd be happy if someone could confirm it. At least I'd know I'm looking in the wrong direction ;-)
Thanks for reading!
Here is my code:
var fs = require('fs');
var https = require('https');
var creds = require('./config/credentials.js');

var date = new Date();
var htmlpage = "";
var authorizationHeader = "Basic " + new Buffer(creds.login + ":" + creds.pwd).toString("base64");

var get_options = {
  hostname: "fabric.io",
  path: "/my-company-account/ios/apps/app.identifier/answers/stability",
  method: "GET",
  port: 443,
  headers: {
    "content-type": "application/json",
    "Authorization": authorizationHeader
  }
};

var get_req = https.request(get_options, function(res){
  res.setEncoding('utf8');
  res.on('data', function(chunk){
    htmlpage += chunk;
  }); // end res.on 'data'
  res.on('end', function(){
    console.log(htmlpage);
    // tmp debug
    fs.writeFile('./logs/ac-ios_' + date.getTime() + '.html', htmlpage, function(err){
      if (err) {
        console.log(err);
      }
    }); // end write html file
    /*var parser = new htmlparser.Parser(handler);
    parser.parseComplete(htmlpage);*/
  }); // end res.on 'end'
}); // end https.request

get_req.on("error", function(err){
  console.log(err);
});

get_req.end();
Which gives me the following html:
<html>
<body>You are being redirected.</body>
</html>
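That redirect suggests fabric.io uses a session-cookie login form rather than HTTP Basic auth, so the Authorization header above is ignored. In that case the usual pattern is to POST the credentials to the login form and carry the resulting session cookie on later requests. A rough sketch with the request module, where the login URL and form field names are assumptions that would have to be read from the actual login page (which may also require a CSRF token):
var request = require('request');

// One cookie jar so the session cookie issued at login is reused on later requests
var jar = request.jar();

request.post({
  url: 'https://fabric.io/login', // hypothetical: take the real action URL from the login form
  jar: jar,
  form: { email: creds.login, password: creds.pwd } // hypothetical field names
}, function (err, res, body) {
  if (err) return console.log(err);
  // The jar now holds the session cookie, so this request should be authenticated
  request.get({
    url: 'https://fabric.io/my-company-account/ios/apps/app.identifier/answers/stability',
    jar: jar
  }, function (err, res, body) {
    if (err) return console.log(err);
    console.log(body); // the page HTML, ready for htmlparser
  });
});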
I've installed node-twitter via npm and have created an index.js file with the code below. When I run node index.js in the console I'm not getting any response. I've included my own keys and secrets so that doesn't appear to be the issue.
var Twitter = require('twitter');

var client = new Twitter({
  consumer_key: process.env.TWITTER_CONSUMER_KEY,
  consumer_secret: process.env.TWITTER_CONSUMER_SECRET,
  access_token_key: process.env.TWITTER_ACCESS_TOKEN_KEY,
  access_token_secret: process.env.TWITTER_ACCESS_TOKEN_SECRET,
});

client.stream('statuses/filter', {track: 'nyc'}, function(stream){
  stream.on('data', function(tweet) {
    console.log(tweet.text);
  });
  stream.on('error', function(error) {
    console.log(error);
  });
});
You're connecting to one of Twitter's streaming endpoints, so you will only see updates if someone posts a tweet with the text 'nyc' in it at that precise moment.
I suspect this is not your intention, and you instead wish to search for the most recent tweets containing the text 'nyc', as below:
client.get('search/tweets', {q: 'nyc'}, function(error, tweets, response){
  if (error) throw error;
  console.log(tweets);
});
The Twitter search API documentation covers the available parameters. If you actually do wish to stream tweets, I recommend using something like forever.js to keep your script running long enough to listen for tweets and output them.
Edit: Since you would like to stream the tweets as they come in, you can use forever.js to keep your script running, as below:
var forever = require('forever-monitor');

var child = new (forever.Monitor)('twitter.js', {
  max: 3,
  silent: false,
  args: []
});

child.on('exit', function () {
  console.log('twitter.js has exited after 3 restarts');
});

child.start();
Where twitter.js is the file posted in the question. If you want to search backwards or forwards from a specified time, you can find details on how to do so in the Twitter API docs.
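If you go the search route instead, the endpoint also accepts parameters such as count and until (a YYYY-MM-DD date; the standard search index only reaches back about a week), so you can pull a batch of older tweets. A small sketch with node-twitter, reusing the client from the question (the date is just an example):
// Fetch up to 100 'nyc' tweets created before the given date
client.get('search/tweets', { q: 'nyc', count: 100, until: '2016-01-10' }, function(error, tweets, response) {
  if (error) throw error;
  tweets.statuses.forEach(function(tweet) {
    console.log(tweet.created_at + ': ' + tweet.text);
  });
});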