Firebase Realtime Database first query not responding - javascript

Hi there, and thanks for reading this.
I'm learning how to work with Dialogflow and the Firebase Realtime Database, and I like these platforms a lot.
I created a very simple DB structure on Firebase with 7 fields, and in my agent I query them with a very simple fulfillment.
It seems to work, but the first query I make each day takes about 5000 ms, so the DB doesn't respond in time; from the second query onwards it works almost in real time, so it seems to be sleeping or something.
In today's test, on the first query I read this in the Dialogflow log: "webhook_latency_ms": 4663. At least it worked this time; usually the first query doesn't.
It feels like getting data from the DB is unreliable.
Any suggestion would be very much appreciated.
The realtime database structure is this:
serviceAccount
  bitstream: "pluto"
  cloud: "paperino"
  data center: "gastone"
  datacenter: "gastone"
  ull: "bandabassotti"
  vula: "minnie"
  wlr: "pippo"
and this is how I query Firebase:
const servizi = agent.parameters.elencoServiziEntity;
return admin.database().ref("serviceAccount").once("value").then((snapshot) => {
  const accountName = snapshot.child(servizi).val();
  agent.add(`L'Account Manager del Servizio ${servizi} si chiama: ${accountName}`);
  console.log(servizi);
});

The webhook latency isn't always related to the database call - it includes the time that may be required to start the webhook itself. If you're using Firebase Cloud Functions or the Dialogflow built-in code editor (which uses Google Cloud Functions), there is a "cold start" time required to spin up the function. If your webhook is running somewhere else, on AWS Lambda for example, you may have network latency in addition to the cold start time.
There is very little you can do about this. If you're running on one of Google's Cloud Functions solutions, make sure you're running them in the us-central1 region, which is close to where Dialogflow itself runs. To avoid the cold start completely, run your own server.
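One mitigation worth noting: newer versions of the firebase-functions SDK (v3.14 and later) let you keep instances warm with the minInstances runtime option, which reduces (but doesn't fully eliminate) cold starts, at some extra cost. A minimal sketch; the export name is illustrative:

const functions = require('firebase-functions');

// Keep one instance warm so the first request of the day doesn't pay
// the full cold-start penalty. Warm instances are billed while idle.
exports.dialogflowFulfillment = functions
  .runWith({ minInstances: 1 })
  .https.onRequest((req, res) => {
    // ... existing fulfillment handler ...
    res.sendStatus(200);
  });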
Usually, however, the latency and cold start time shouldn't be that long, which suggests that your code is also taking a while to run. You may wish to look at your logs to see why execution time is so long - the call to the Firebase RTDB may be part of it, but there may be other things causing a slowdown that you don't show in your code.
One thing you are doing in your call to Firebase is pulling in the entire record instead of just the one field the user is asking for. That requires more data to be marshaled, which takes more time. (Is it a lot more time? Probably not. But milliseconds count.)
If you just need the one field from the record the user has asked for, you can get a reference to the child itself and then do the query on this reference. It might look like this:
const servizi = agent.parameters.elencoServiziEntity;
return admin.database()
  .ref("serviceAccount")
  .child(servizi)
  .once("value")
  .then((snapshot) => {
    const accountName = snapshot.val();
    agent.add(`L'Account Manager del Servizio ${servizi} si chiama: ${accountName}`);
    console.log(servizi);
  });

Related

Sync data from MongoDB to Firebase and vice versa

My current situation:
I have created an application using React, Node.js and Electron. Most of my users are effectively offline users; they use my application offline.
Next plans:
Now I am planning to create a mobile application for them, which I plan to build with React Native.
Since their database is offline, I plan to give them a "sync to Firebase" button in the desktop application. When a user clicks the button, the data in their local MongoDB should synchronize with Firebase.
My thoughts:
When a new record is added to MongoDB, I will store an extra key with that record, which will look like new: true.
When a record is updated, I will store a key named updated: true.
Similarly for delete...
Then, when the user presses "Sync to Firebase", I will search for those flagged records, add/update/delete the respective records on Firebase, and then remove those keys from the MongoDB database. (A sketch of this pass follows.)
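For concreteness, a minimal sketch of that flag-based pass, assuming a db handle from the mongodb driver and an initialized firebase-admin app; the records collection and node names are hypothetical, and error handling is omitted:

const admin = require('firebase-admin');

async function syncToFirebase(db) {
  // Find everything flagged since the last sync.
  const pending = await db.collection('records')
    .find({ $or: [{ new: true }, { updated: true }, { deleted: true }] })
    .toArray();

  for (const rec of pending) {
    const ref = admin.database().ref('records').child(String(rec._id));
    if (rec.deleted) {
      await ref.remove();
      await db.collection('records').deleteOne({ _id: rec._id });
    } else {
      // Strip the sync flags before mirroring the record to Firebase.
      const { _id, new: isNew, updated, deleted, ...fields } = rec;
      await ref.set(fields);
      await db.collection('records')
        .updateOne({ _id: rec._id }, { $unset: { new: '', updated: '' } });
    }
  }
}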
Problems in executing my thoughts:
At first glance this doesn't smell like a good approach to me; I think it is time consuming, because I will perform operations on Firebase as well as MongoDB.
Another problem with this approach is the reverse direction: when a user adds/updates/deletes a record from the React Native app, Firebase will have those new/updated/deleted keys, and when the user presses the sync button in the desktop application I will have to do the same thing in reverse.
Yet another problem: if a user accidentally uninstalls my application and then reinstalls it, what should I do?
And the biggest problem is managing all the things.
My Expectations:
So, I want a clean and maintainable approach. Does anyone have any idea on how to sync data from MongoDB to Firebase and vice versa?
Both database systems support some sort of operation log or trigger system. You can use these to push changes from one database to the other and keep them in sync almost in real time.
For MongoDB
You can use the oplog to see what changes were made to the database (insert/update/delete) and run a suitable function to sync Firebase.
oplog
A capped collection that stores an ordered history of logical writes to a MongoDB database. The oplog is the basic mechanism enabling replication in MongoDB.
There are small libraries that help you easily subscribe to these events.
Example (mongo-oplog)
import MongoOplog from 'mongo-oplog'

const oplog = MongoOplog('mongodb://127.0.0.1:27017/local', { ns: 'test.posts' })

oplog.tail();

oplog.on('op', data => {
  console.log(data);
});

oplog.on('insert', doc => {
  console.log(doc);
});

oplog.on('update', doc => {
  console.log(doc);
});

oplog.on('delete', doc => {
  console.log(doc.o._id);
});
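To connect this to the syncing use case, the handlers above could forward each change to Firebase with the Admin SDK. A minimal sketch, assuming firebase-admin is initialized elsewhere; the /posts node and the test.posts namespace are illustrative:

const admin = require('firebase-admin');
const MongoOplog = require('mongo-oplog');

const oplog = MongoOplog('mongodb://127.0.0.1:27017/local', { ns: 'test.posts' });
oplog.tail();

oplog.on('insert', doc => {
  // doc.o is the inserted document.
  const { _id, ...fields } = doc.o;
  admin.database().ref('posts').child(String(_id)).set(fields);
});

oplog.on('update', doc => {
  // doc.o2 identifies the updated document; doc.o carries the update.
  admin.database().ref('posts').child(String(doc.o2._id)).update(doc.o.$set || {});
});

oplog.on('delete', doc => {
  admin.database().ref('posts').child(String(doc.o._id)).remove();
});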
For Firebase
You can use Cloud Functions. With Cloud Functions you can listen for triggers like Cloud Firestore triggers or Realtime Database triggers and run a function that syncs the MongoDB database.
With Cloud Functions, you can handle events in the Firebase Realtime Database with no need to update client code. Cloud Functions lets you run database operations with full administrative privileges, and ensures that each change to the database is processed individually.
// Listens for new messages added to /messages/:pushId/original and creates an
// uppercase version of the message at /messages/:pushId/uppercase
// (firebase-functions v1.0+ signature: the handler receives (change, context))
const functions = require('firebase-functions');

exports.makeUppercase = functions.database.ref('/messages/{pushId}/original')
  .onWrite((change, context) => {
    // Grab the current value of what was written to the Realtime Database.
    const original = change.after.val();
    console.log('Uppercasing', context.params.pushId, original);
    const uppercase = original.toUpperCase();
    // You must return a Promise when performing asynchronous tasks inside a
    // Function, such as writing to the Firebase Realtime Database.
    // Setting an "uppercase" sibling in the Realtime Database returns a Promise.
    return change.after.ref.parent.child('uppercase').set(uppercase);
  });
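Applied to the syncing use case, the same trigger mechanism can mirror each Firebase change into MongoDB. A minimal sketch; the /records node, the records collection and the connection string are illustrative:

const functions = require('firebase-functions');
const { MongoClient } = require('mongodb');

// Reuse one client across invocations; in practice keep the URI in config.
const clientPromise = MongoClient.connect('mongodb://127.0.0.1:27017');

exports.syncToMongo = functions.database.ref('/records/{recordId}')
  .onWrite(async (change, context) => {
    const client = await clientPromise;
    const records = client.db('mydb').collection('records');
    const id = context.params.recordId;

    if (!change.after.exists()) {
      // Deleted in Firebase: remove the matching MongoDB document.
      return records.deleteOne({ firebaseKey: id });
    }
    // Created or updated in Firebase: upsert into MongoDB.
    return records.updateOne(
      { firebaseKey: id },
      { $set: change.after.val() },
      { upsert: true }
    );
  });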

Redis connection taking time to execute "mget" and "setex" commands in node.js & socket.io

I'm facing issues with Redis in Node.js. I am using socket.io with Redis (without the pub/sub model, i.e. a single connection). My Redis operations take a lot of time to execute. Whenever the number of socket connections increases, the time for Redis commands increases exponentially; sometimes it goes beyond 10 seconds.
I am struggling hard with this issue and am new to Node.js.
How do I know that Redis is taking the time? I added console logs before and after calling the Redis setex method.
P.S.: we ran the same test with a Python script and the result is acceptable. I don't know why this problem persists with Node.js.
Already tried: https://redislabs.com/blog/redis-running-slowly-heres-what-you-can-do-about-it/
Any help is appreciated, guys!
UPDATE - Added the code snippet. I am trying to make a basic quiz app using socket.io. The flow is to fetch all questions from the DB and cache them in Redis. Then, out of this entire question set, I randomly pick 5 questions and set them in Redis.
function populateUserQuestion(uid, cacheKey, languageId, cDS, cQ, totalRounds, callback) {
  // First step is to get all questions from the DB and then shuffle them for every user.
  userQ = userQ.concat(randomShuffle(filteredQuestion, numberOfQuestion));
  // Shuffling and retrieving questions from the DB doesn't take time.
  // Below is the code for setting the above questions for every user in Redis,
  // which takes up to 5 seconds when the number of concurrent connections is high.
  // (node_redis callbacks receive (err, reply), not just (reply).)
  redis.setex(cacheKey, 1800, JSON.stringify(userQ), function (err, reply) {
    callback(userQ);
  });
}
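As an aside, the before/after logging can be made more precise with console.time around the call; a minimal sketch (the label is illustrative):

// Measure how long a single setex round trip takes.
const label = 'setex:' + cacheKey;
console.time(label);
redis.setex(cacheKey, 1800, JSON.stringify(userQ), function (err, reply) {
  console.timeEnd(label); // logs e.g. "setex:quiz:42: 5012.345ms"
  callback(userQ);
});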

What is the most efficient way to make a batch request to a Firebase DB based on an array of known keys?

I need a solution that makes a Firebase DB API call for multiple items based on keys and returns the data (children) of those keys (in one response).
Since I don't need the data to come in real time, some sort of standard REST call made once (rather than a Firebase DB listener) seems like it would be ideal.
The app wouldn't have yet another listener and WebSocket connection open. However, I've looked through Firebase's API docs and it doesn't look like there is a way to do this.
Most of the answers I've seen suggest making a composite key/index of some sort and filtering on it, but that only works for searching through a range. Or they suggest just nesting the data and not worrying about redundancy and disk space (and it's quicker), instead of retrieving associated data through foreign keys.
However, the problem is that I am using Geofire, and its query method only returns the keys of the items, not the items' data. All the docs and previous answers suggest retrieving the data either via the real-time SDK, which I've tried using the once method, or by making a REST call for all items with the orderBy, startAt, endAt params and then filtering locally by the keys I need.
This could work, but the overhead of retrieving a bunch of items I don't need only to filter them out locally seems wasteful. The approach using the once listener seems wasteful too, because it's a server round trip for each item key. This approach is kind of explained in this pretty good post, but according to that explanation it's still a round trip per item (even if done asynchronously over the same connection).
This poor soul asked a similar question, but didn't get many helpful replies (that really address the cost of making n server requests).
Could someone, once and for all, explain the approaches on how this could be done and their pros/cons? Thanks.
Looks like you are looking for Cloud Functions. You can create a function triggered by an HTTP request and do all the database reads inside of it.
These functions are executed in the cloud and their results are sent back to the caller. An HTTP call is one way to trigger a Cloud Function, but you can set up other methods (a schedule, a call from the app via the Firebase SDK, a database trigger...). The data is not charged until it leaves the server (so only in your request response, or if you query a database in another region). Cloud Functions billing is based on CPU used, number of invocations and running instances; more details in the quota section.
You will get something like :
const admin = require('firebase-admin');
const functions = require('firebase-functions');

// initializeApp must run before database() is usable.
admin.initializeApp();
const database = admin.database();

exports.getAllNodes = functions.https.onRequest((req, res) => {
  let children = [ ... ]; // get your node list from req
  const promises = children.map((child) => database.ref(child).once('value'));

  Promise.all(promises)
    .then((snapshots) => {
      // Send the values, not the raw snapshot objects.
      res.status(200).send(snapshots.map((snap) => snap.val()));
    })
    .catch((error) => {
      res.status(503).send(error);
    });
});
You will then have to deploy this with the Firebase CLI.
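Once deployed, the client gets everything in a single request. A minimal sketch of the call; the project URL and the request body shape are hypothetical:

fetch('https://us-central1-my-project.cloudfunctions.net/getAllNodes', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ keys: ['item1', 'item2', 'item3'] }),
})
  .then((res) => res.json())
  .then((items) => console.log(items)); // one response with all items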
I need a solution that makes a Firebase DB API call for multiple items based on keys and returns the data (children) of those keys (in one response).
One solution might be to set up a separate server to make ALL the calls you need to your Firebase servers, aggregate them, and send it back as one response.
There exist tools that do this.
One of the more popular ones, recently spec'd by the Facebook team, is GraphQL.
https://graphql.org/
Behind the scenes, you set up your GraphQL server to map your queries; each query may fan out into separate API calls to fetch the data it needs. Once all the API calls have completed, GraphQL sends the result back as a single JSON response.
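A minimal sketch of what such a resolver might look like with the graphql-js reference implementation; the schema, the items node and the key list are illustrative, and admin is assumed to be an initialized firebase-admin app:

const { graphql, buildSchema } = require('graphql');
const admin = require('firebase-admin');

// Illustrative schema: fetch many items by key in one query.
const schema = buildSchema(`
  type Item { key: String, name: String }
  type Query { items(keys: [String!]!): [Item] }
`);

const root = {
  // One resolver fans out to n Firebase reads and aggregates them.
  items: ({ keys }) =>
    Promise.all(
      keys.map((key) =>
        admin.database().ref('items').child(key).once('value')
          .then((snap) => ({ key, ...snap.val() }))
      )
    ),
};

// Usage: one GraphQL request in, one JSON response out.
graphql({ schema, source: '{ items(keys: ["a", "b"]) { key name } }', rootValue: root })
  .then((res) => console.log(res.data));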
This is how you can do a one-time read of a node in JavaScript; hope it helps.
// Get a reference to the database service
const database = firebase.database();

// One-time read of a node (ref.get() requires a recent SDK version;
// snapshot.val() is the public API, not the internal node_/value_ fields)
database.ref("users").child("demo").get().then((snapshot) => {
  console.log("value of users/demo is", snapshot.val());
});

How to perform server validations based on query results with Firebase?

When inserting a record, I need to be able to run one or more queries on the server which will reject the insert if they find any results. Will Firebase allow me to do this? It can't be specified on the client, or it could easily be subverted.
For a more concrete example, I have a Meteor app that currently lets me do rate limiting on votes with some pretty simple code. I would like to implement this in Firebase. (Please forgive the CoffeeScript.)
@VoteFrequency =
  votesPer: (sinceDelta, sinceUnit) ->
    Votes.find(
      pollId: @pollId
      ip: @ip
      createdAt:
        $gte: moment().add(-sinceDelta, sinceUnit).toDate()
    ).count()

  withinLimits: (ip, pollId) ->
    @ip = ip
    @pollId = pollId
    # Allow x votes per y seconds
    @votesPer(10, 'seconds') < 1 &&
      @votesPer(1, 'hours') < 15 &&
      @votesPer(1, 'days') < 150
As you can see, it queries the database for previous votes matching the IP address and more recent than a timestamp (calculated using a delta from current time - interval). If it finds any results for any of these limits, it returns false, which tells the caller not to insert the new vote.
To be clear, I'm not looking for a solution where I add my own server into the mix. Once I have to do that, Firebase loses much of its appeal, to me at least.
From what I can tell so far, this doesn't appear to be something I can implement with just a browser / native client and Firebase alone.
You cannot run your own code on Firebase's servers. So trying to map an existing three-tier solution to Firebase will require more than evaluating how to port each script.
As far as I can see, you have these main options:
1. You implement the same logic in Firebase's security rules.
2. You run this code on a server of your own that acts as a middle tier between your clients and Firebase.
3. You run this code on a server of your own that acts as a "bot" against the Firebase database.
I'll assume #1 is clear, though it's certainly not trivial. For example: Firebase's security rules don't have access to the client's IP address, so you'll have to find a way to (securely) insert it into the data. Also: rate limiting is possible in Firebase security rules, but not easy; a sketch of one known pattern follows.
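For a flavor of what such a rule looks like: a known pattern stores a per-user lastVoteAt timestamp and only accepts a write that sets it to the current server time, at least 10 seconds after the previous value; the client updates this node together with the vote in a multi-location update. A minimal sketch of the rules; the node names and the 10-second window are illustrative:

{
  "rules": {
    "lastVoteAt": {
      "$uid": {
        ".write": "auth != null && auth.uid === $uid",
        ".validate": "newData.val() === now && (!data.exists() || newData.val() > data.val() + 10000)"
      }
    }
  }
}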
#2 is probably also clear, but it would keep you on your current three-tier architecture with custom middleware; you'd just be replacing your current data store with Firebase. If that's what you're looking for, this is definitely the simplest migration approach.
#3 is described in pattern 2 of this blog post. In this case you could consider letting the clients write their vote and IP address to a "staging" node. The bot script then reads them from the staging area, validates that they are within the limits, and writes the valid ones to the official node (where regular clients don't have write access). A sketch of such a bot follows.
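A minimal sketch of that bot with the Admin SDK; node names are illustrative and the limit check is left as a stub:

const admin = require('firebase-admin');
admin.initializeApp(); // plus service-account credentials in practice

const db = admin.database();

// Watch the staging area that clients are allowed to write to...
db.ref('staging/votes').on('child_added', async (snap) => {
  const vote = snap.val();
  if (await withinLimits(vote.ip, vote.pollId)) {
    // ...and promote valid votes to the node clients can only read.
    await db.ref('votes').push(vote);
  }
  await snap.ref.remove();
});

// Hypothetical port of the CoffeeScript check above, backed by whatever
// vote history the bot keeps.
async function withinLimits(ip, pollId) {
  return true; // stub
}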

MEAN.JS setInterval process for event loop (gets data from another server)

I have a MEAN.js server running that allows a user to check their profile. I want a setInterval-like process running every second which, based on a condition, retrieves data from another server and updates MongoDB (simple polling / long polling). This also updates the values that the user sees.
Q: Is this kind of loop allowed in Node.js, and if so, where does the logic go that starts the interval when the server starts? Or can events only be caused by user actions (e.g. the user clicking their profile to view the data)?
Q: What are the implications of having both ends reading and writing to the same DB? Will collisions just overwrite each other, or fault? Is there info on how much read/write traffic would overload it?
I think you can safely run a cron-style job against MongoDB that updates every x days/hours/minutes. In the case of a user profile, I assume that's not critical data that requires you to update your DB in real time.
If you do need real-time updates, then set up DB replication and point your app at a DB that is replicated in real time.
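On the first question: Node.js is perfectly happy running timers that no user action triggered; you typically start them right after the server boots. A minimal sketch; the bootstrap path, polling URL and Profile model are hypothetical, and it assumes Node 18+ for the global fetch:

// server.js (or wherever MEAN.js bootstraps the app)
const app = require('./config/express')();

app.listen(3000, () => {
  console.log('Server listening on port 3000');

  // Poll the other server every second and mirror the result into MongoDB.
  setInterval(async () => {
    try {
      const res = await fetch('https://other-server.example.com/api/status');
      const data = await res.json();
      // Profile is a hypothetical mongoose model.
      await Profile.updateOne({ _id: data.userId }, { $set: { status: data.status } });
    } catch (err) {
      console.error('poll failed', err);
    }
  }, 1000);
});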
