How to sync CouchDB with multiple PouchDB instances in AngularJS? - javascript

I am working on an Angular app. How can I make multiple PouchDB databases sync to a single CouchDB without any information loss?

You need to do replication by filter:
1. Create a filter in CouchDB for each client database;
2. Start replication from CouchDB to the client by filter (look at the replication option named filter);
3. Start replication of all documents from the client to CouchDB for every client database.
You can find everything you need here (examples 3, 4).
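As a minimal sketch of those two replication directions (the filter name app/by_client, the query parameter client_id, and the database names are illustrative assumptions, not from the original answer):

// Sketch: two-way filtered sync for one client database.
// Assumes a design document on the CouchDB side defines a filter
// function named "app/by_client" that checks doc.client_id.
var local = new PouchDB('client_a');
var remote = new PouchDB('http://localhost:5984/shared');

// CouchDB -> client: pull only this client's documents
PouchDB.replicate(remote, local, {
  live: true,
  retry: true,
  filter: 'app/by_client',
  query_params: { client_id: 'client_a' }
});

// client -> CouchDB: push all of the client's documents
PouchDB.replicate(local, remote, {
  live: true,
  retry: true
});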

If you mean creating multiple PouchDB instances, then you have to attach a sync listener to each PouchDB instance, pointing at the remote DB you want (the CouchDB in question).
This example of such a listener worked for me:
var sync = PouchDB.sync('mydb', 'http://localhost:5984/mydb', {
  live: true,
  retry: true
}).on('change', function (info) {
  // handle change
}).on('paused', function (err) {
  // replication paused (e.g. replication up to date, user went offline)
}).on('active', function () {
  // replication resumed (e.g. new changes replicating, user went back online)
}).on('denied', function (err) {
  // a document failed to replicate (e.g. due to permissions)
}).on('complete', function (info) {
  // handle complete
}).on('error', function (err) {
  // handle error
});
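If you have several local databases, the same listener can be attached to each one in a loop; a sketch (the database names here are just examples):

// Sketch: one live sync per local PouchDB instance, all pointing
// at the same remote CouchDB.
['mydb1', 'mydb2', 'mydb3'].forEach(function (name) {
  PouchDB.sync(name, 'http://localhost:5984/mydb', {
    live: true,
    retry: true
  }).on('error', function (err) {
    console.log('sync error for ' + name, err);
  });
});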

Related

Running a MySQL query using mysql-npm on AWS

Hi guys, I have a problem that I don't really know how to solve. It's also a bit strange :/
Basically, I have created this Lambda function to connect to a MySQL DB using the node package 'mysql'.
If I run the function from the command line on my PC using the command 'sls function run function1' and make different queries, everything is fine.
But when I call the function from a web browser using the link, I have to refresh the page two times to get the right result, because on the first refresh the server responds with the old result.
I have noticed that from the command line I always get a different threadID, while from the web browser it is always the same.
Also, I don't close the connection in the Lambda function code, because everything is fine if I run the function from the command line, but from the browser I can only make 2 queries and then I get a message saying that I cannot use a closed connection.
So it seems like Lambda stores the old query result when I call it from a web browser.
Obviously I'm making some stupid mistake, but I don't know how to solve it.
Does anyone have an idea?
Thanks :)
'use strict';
// npm packages
var mysql = require('mysql');
var deasync = require('deasync');
// variables
var goNext = false;  // used to synchronize deasync
var error = false;   // becomes TRUE if an error occurred during the connection to the DB
var dataColumnTable; // the data that you extract from the query to the DB
var errorMessage;
//----------------------------------------------------------------------------------------------------------------
// always same credentials
var connection = mysql.createConnection({
  host     : 'hostAddress',
  user     : 'Puser',
  password : 'password',
  port     : '3306',
  database : 'database1',
});
//----------------------------------------------------------------------------------------------------------------
module.exports.handler = function(event, context, callback) {
  var Email = event.email;
  connection.query('SELECT City, Address FROM Person WHERE E_Mail=?', Email, function(err, rows) {
    if (err) {
      console.log("Cannot connect to DB");
      console.log(err);
      error = true;
      errorMessage = err;
    }
    else {
      console.log("data from column acquired!");
      dataColumnTable = rows;
    }
    //connection.end(function(err) {
    //  connection.destroy();
    //});
    //console.log("Connection closed!");
    goNext = true;
  });
  deasync.loopWhile(function(){ return goNext != true; });
  //----------------------------------------------------------------------------------------------------------------
  if (error == true)
    return callback('Error ' + errorMessage);
  else
    return callback(null, dataColumnTable); // returns a JSON file
  // end handler
};
Disclaimer: I'm not very familiar with AWS and/or AWS Lambda.
http://docs.aws.amazon.com/lambda/latest/dg/programming-model-v2.html states (emphasis mine):
Your Lambda function code must be written in a stateless style, and have no affinity with the underlying compute infrastructure. Your code should expect local file system access, child processes, and similar artifacts to be limited to the lifetime of the request. Persistent state should be stored in Amazon S3, Amazon DynamoDB, or another cloud storage service. Requiring functions to be stateless enables AWS Lambda to launch as many copies of a function as needed to scale to the incoming rate of events and requests. These functions may not always run on the same compute instance from request to request, and a given instance of your Lambda function may be used more than once by AWS Lambda.
Opening a connection and storing it in a variable outside your handler function is state. The connection will likely be closed between requests or even before your first request. Your lambda function may be reused (hence identical thread ids).
My suggestion (and an attempt to solve this problem) would be to create the connection on every request (i.e., inside your handler) and not to expect any variable to still hold the value it was initialized with, or the value it had on the last request (except for constants, probably).
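A sketch of that suggestion (untested; it also drops the deasync busy-wait, since the handler's callback already defers the response until the query finishes):

'use strict';
var mysql = require('mysql');

module.exports.handler = function (event, context, callback) {
  // Create a fresh connection per invocation instead of relying
  // on module-level state surviving between requests.
  var connection = mysql.createConnection({
    host     : 'hostAddress',
    user     : 'Puser',
    password : 'password',
    port     : '3306',
    database : 'database1'
  });

  connection.query('SELECT City, Address FROM Person WHERE E_Mail=?', event.email, function (err, rows) {
    // Close the connection before responding, so a reused
    // container starts the next request from a clean slate.
    connection.end(function () {
      if (err) return callback(err);
      callback(null, rows);
    });
  });
};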

PouchDB - Lazily fetch and replicate documents

TL;DR: I want a PouchDB db that acts like Ember Data: fetch from the local store first, and if not found, go to the remote. Replicate only that document in both cases.
I have a single document type called Post in my PouchDB/CouchDB servers. I want PouchDB to look at the local store, and if it has the document, return the document and start replicating. If not, go to the remote CouchDB server, fetch the document, store it in the local PouchDB instance, then start replicating only that document. I don't want to replicate the entire DB in this case, only things the user has already fetched.
I could achieve it by writing something like this:
var local = new PouchDB('local');
var remote = new PouchDB('http://localhost:5984/posts');

function getDocument(id) {
  return local.get(id).catch(function(err) {
    if (err.status === 404) {
      return remote.get(id).then(function(doc) {
        return local.put(doc);
      });
    }
    throw err;
  });
}
This doesn't handle the replication issue either, but it's the general direction of what I want to do.
I can write this code myself I guess, but I'm wondering if there's some built-in way to do this.
Unfortunately what you describe doesn't quite exist (at least as a built-in function). You can definitely fall back from local to remote using the code above (which is perfect BTW :)), but local.put() will give you problems, because the local doc will end up with a different _rev than the remote doc, which could mess with replication later on down the line (it would be interpreted as a conflict).
You should be able to use {revs: true} to fetch the doc with its revision history, then insert with {new_edits: false} to properly replicate the missing doc, while preserving revision history (this is what the replicator does under the hood). That would look like this:
var local = new PouchDB('local');
var remote = new PouchDB('http://localhost:5984/posts');

function getDocument(id) {
  return local.get(id).catch(function(err) {
    if (err.status === 404) {
      // revs: true gives us the critical "_revisions" object,
      // which contains the revision history metadata
      return remote.get(id, {revs: true}).then(function(doc) {
        // new_edits: false inserts the doc while preserving revision
        // history, which is equivalent to what replication does
        return local.bulkDocs([doc], {new_edits: false});
      }).then(function () {
        return local.get(id); // finally, return the doc to the user
      });
    }
    throw err;
  });
}
That should work! Let me know if that helps.
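For the "replicate only that document" part of the question, the doc_ids replication option could be a starting point; a sketch (this addition is not part of the answer above):

// Sketch: live two-way sync restricted to a single document,
// using the doc_ids replication option.
function syncDocument(id) {
  return PouchDB.sync(local, remote, {
    live: true,
    retry: true,
    doc_ids: [id]
  });
}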

MeteorJS - No user system, how to filter data at the client end?

The title might sound strange, but I have a website that will query some data in a Mongo collection. However, there is no user system (no logins, etc.). Everyone is an anonymous user.
The issue is that I need to query some data in the Mongo collection based on the text boxes the user fills in. Hence I cannot use this.userId to insert a row of specifications that the server end reads and then uses to send the right data to the client.
Hence:
// Code run on the server
if (Meteor.isServer)
{
  Meteor.publish("comments", function ()
  {
    return comments.find();
  });
}

// Code run on the client
if (Meteor.isClient)
{
  Template.body.helpers
  (
    {
      comments: function ()
      {
        return comments.find();
        // Add code to try to parse out the data that we don't want here
      }
    }
  );
}
It seems possible that on the client end I filter the data based on the user input. However, if I use return comments.find(), the server will be sending a lot of data to the client, and the client would then take on the job of cleaning the data.
By a lot of data, there shouldn't actually be much (10,000 rows), but let's assume there are a million rows; what should I do?
I'm very new to MeteorJS, just completed the tutorial, any advice is appreciated!
My advice is to read the docs, in particular the section on Publish and Subscribe.
By changing the signature of your publish function above to one that takes an argument, you can filter the collection on the server and limit the data transferred to what is required.
Meteor.publish("comments", function (postId)
{
  return comments.find({post_id: postId});
});
Then on the client you will need a subscribe call that passes a value for the argument.
Meteor.subscribe("comments", postId)
Ensure you have removed the autopublish package, or it will ignore this filtering.
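For input-driven filtering like the text boxes in the question, a common pattern is to re-subscribe whenever the input changes; a sketch, assuming the chosen value is kept in a Session variable named postId:

// client: re-run the subscription whenever the filter value changes
if (Meteor.isClient) {
  Tracker.autorun(function () {
    Meteor.subscribe("comments", Session.get("postId"));
  });
}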

Meteor JS: create a MongoDB database hook to store data from an API at a fixed interval

tl;dr - What is the best pattern to create a 'proprietary database' with data from an API? In this case, using Meteor JS and collections in MongoDB.
Steps
1. Ping API
2. Insert Data into Mongo at some interval
In lib/collections.js
Prices = new Mongo.Collection("prices");
Basic stock API call, in server.js:
Meteor.methods({
  getPrice: function () {
    var result = Meteor.http.call("GET", "http://api.fakestockprices.com/ticker/GOOG.json");
    return result.data;
  }
});
Assume the JSON is returned clean and tidy, and that I want to store the entire object (how you manipulate what is returned is not important; storing the return value is).
We could manipulate the data in the Meteor.method function above, but should we? In Angular, services are used to call APIs, but it's recommended to modularize and keep the API call in its own function. Let's borrow that, and Meteor.call the above getPrice.
Assume this is also done in server.js (please correct):
Meteor.call("getPrice", function(error, result) {
if (error)
console.log(error)
var price = result;
Meteor.setInterval(function() {
Prices.insert(price);
}, 1800000); // 30min
});
Once in the db, a pub/sub could be established, which I'll omit and link to this overview.
You may want to take a look at the synced-cron package.
With a cron job it's pretty easy, just call your method:
// server.js
SyncedCron.start();

SyncedCron.add({
  name: "get Price",
  schedule: function(parser){
    return parser.text('every 30 minutes');
  },
  job: function(){
    return Meteor.call("getPrice");
  }
});
Then in getPrice you can do var result = HTTP.call(/* etc */); and Prices.insert(result);. You would of course want some additional checks, as you have pointed out.
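A sketch of that getPrice (assuming the same URL as in the question, and that the whole response body is what should be stored):

// server.js: fetch the price and store the response body.
// On the server, HTTP.get can be called synchronously.
Meteor.methods({
  getPrice: function () {
    var result = HTTP.get("http://api.fakestockprices.com/ticker/GOOG.json");
    Prices.insert(result.data);
  }
});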

Meteor - Server-side API call and insert into mongodb every minute

I am in the process of learning Meteor while at the same time experimenting with the TwitchTV API.
My goal right now is to call the Twitch API every minute and then insert part of the JSON object into the Mongo database. Since MongoDB matches on _id, and Twitch uses _id as its key, I am hoping subsequent inserts will either update existing records or create new ones if the _id doesn't exist yet.
The call and the insert (at least the initial one) seem to be working fine. However, I can't seem to get the Meteor.setTimeout() function to work: the call happens when I start the app but does not keep occurring every minute.
Here is what I have in a .js file in my server folder:
Meteor.methods({
  getStreams: function() {
    this.unblock();
    var url = 'https://api.twitch.tv/kraken/streams?limit=3';
    return Meteor.http.get(url);
  },
  saveStreams: function() {
    Meteor.call('getStreams', function(err, res) {
      var data = res.data;
      Test.insert(data);
    });
  }
});
Deps.autorun(function(){
  Meteor.setTimeout(function(){ Meteor.call('saveStreams'); }, 1000);
});
Any help or advice is appreciated.
I made the changes mentioned by @richsilv and @saimeunt and it worked. Resulting code:
Meteor.startup(function(){
  // note: 1000 ms fires every second; use 60000 for once a minute
  Meteor.setInterval(function(){ Meteor.call('saveStreams'); }, 1000);
});
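One caveat on the insert-as-update hope above: a plain insert fails on a duplicate _id rather than updating the record, so an upsert is what actually matches that description. A sketch (assuming the Twitch response carries a streams array whose items have _id):

// Sketch: update each stream record in place, or create it
// if no document with that _id exists yet.
data.streams.forEach(function (stream) {
  Test.upsert({ _id: stream._id }, { $set: stream });
});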
