I tried both and both work. What is the difference?
import firebase from 'react-native-firebase';
const defaultApp = firebase.app();
defaultApp.database().ref('foobar').once('value', (snapshot) => {
    // snapshot from default app
});
vs
import firebase from 'react-native-firebase';
firebase.database().ref('foobar').once('value', (snapshot) => {
    // snapshot from default app
});
The two approaches are equivalent. The second one just relies on some hard-coded defaults, while the first is more explicit. This becomes especially apparent if you want to (for example) access two databases in a single app.
Our documentation explains this rather well, so I'll quote from there:
In most cases, you will only have to initialize a single, default app. You can access services off of that app in two equivalent ways:
// Initialize the default app
var defaultApp = firebase.initializeApp(defaultAppConfig);
console.log(defaultApp.name); // "[DEFAULT]"
// You can retrieve services via the defaultApp variable...
var defaultStorage = defaultApp.storage();
var defaultDatabase = defaultApp.database();
// ... or you can use the equivalent shorthand notation
defaultStorage = firebase.storage();
defaultDatabase = firebase.database();
Some use cases require you to create multiple apps at the same time. For example, you might want to read data from the Realtime Database of one Firebase project and store files in another project. Or you might want to authenticate one app while having another app be unauthenticated. The Firebase SDK allows you to create multiple apps at the same time, each with its own configuration information.
// Initialize the default app
firebase.initializeApp(defaultAppConfig);
// Initialize another app with a different config
var otherApp = firebase.initializeApp(otherAppConfig, "other");
console.log(firebase.app().name); // "[DEFAULT]"
console.log(otherApp.name); // "other"
// Use the shorthand notation to retrieve the default app's services
var defaultStorage = firebase.storage();
var defaultDatabase = firebase.database();
// Use the otherApp variable to retrieve the other app's services
var otherStorage = otherApp.storage();
var otherDatabase = otherApp.database();
Note: Each app instance has its own configuration options and authentication state.
You do not need to call that method unless you are using more than one Firebase app instance in your application.
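To make that concrete (assuming "that method" refers to initializeApp), here is a minimal sketch, reusing otherAppConfig from the quoted docs, of when initializeApp comes into play and how both access styles from the question still apply:
// Only needed when you want a second, non-default app:
const otherApp = firebase.initializeApp(otherAppConfig, 'other');
// The default app keeps working via the shorthand...
firebase.database().ref('foobar').once('value', (snapshot) => {
    // snapshot from the default app
});
// ...while the named app is accessed explicitly, either via the returned
// reference or via firebase.app('other').
otherApp.database().ref('foobar').once('value', (snapshot) => {
    // snapshot from the "other" app
});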
I am following the Firebase tutorial on how to implement Algolia with Firebase: https://firebase.google.com/docs/firestore/solutions/search
I am currently stuck on the indexing part of the tutorial, as I am getting errors in the Cloud Functions logs.
This is the output of the Cloud Functions log:
And this is the code I wrote:
const functions = require('firebase-functions');
const algoliasearch = require("algoliasearch");
const ALGOLIA_ID = functions.config().algolia.app;
const ALGOLIA_ADMIN_KEY = functions.config().algolia.key;
const ALGOLIA_SEARCH_KEY = functions.config().algolia.search_key;
const ALGOLIA_INDEX_NAME = 'users';
const client = algoliasearch(ALGOLIA_ID, ALGOLIA_ADMIN_KEY);
// Update the search index every time an employee document is created.
exports.onUserCreated = functions.firestore.document('organisations/40R0LMA6ALZgF7KjHJMc/employees/{userId}').onCreate((snap, context) => {
    // Get the user document
    const user = snap.data();
    // Add an 'objectID' field which Algolia requires
    user.objectID = snap.id;
    console.log(user.objectID);
    // Write to the Algolia index
    const index = client.initIndex(ALGOLIA_INDEX_NAME);
    return index.saveObject(user);
});
It seems that you are not correctly setting the different environment variables used in this example.
As explained in the doc, to get the value of the algolia.app environment variable when you do const ALGOLIA_ID = functions.config().algolia.app; you need to have set its value beforehand, as follows:
firebase functions:config:set algolia.app="THE_ALGOLIA_ID"
Since you need to set several variables, you can set them all in one command, as follows:
firebase functions:config:set algolia.app="THE_ALGOLIA_ID" algolia.key="THE_ALGOLIA_ADMIN_KEY" ...
As explained in the doc, "to inspect what's currently stored in environment config for your project, you can use firebase functions:config:get" in the CLI.
I've started to develop a desktop app with Node and Electron. It has a package which implements the connection to some API. It is structured as one base class and some derived classes, in this way:
ApiBase
ApiAuth extends ApiBase
ApiAuth.login()
ApiAuth.logout()
etc...
ApiTasks extends ApiBase
ApiTasks.getTaskList()
etc...
etc...
And now I want to make a nice and convenient way to use these classes in my app. So I need to create some entry point which will provide access to my API implementation. But I do not have much experience in getting this right.
I thought about something like this:
index.js:
const ApiAuth = require('./amazing-time-api-auth');
const ApiTasks = require('./amazing-time-api-tasks');
const apiAuth = new ApiAuth('www.sample-host.com');
const apiTasks = new ApiTasks('www.sample-host.com');
module.exports = {
    login: apiAuth.login,
    logout: apiAuth.logout,
    getTaskList: apiTasks.getTaskList,
    etc...
}
somewhere in the app:
const api = require("./lib/someApi");
// need to get task list for some reason
api.getTaskList(param1, param2)
But there are some problems with this approach that I've identified:
it is a problem to pass the host param to the constructors in index.js dynamically
I am not sure if creating these instances every time index.js is required is the right thing
So I want to know about some approaches I can use here, because I do not even know where to start researching. Thank you.
I think that you identified some of the most crucial decisions with this:
it is a problem to pass the host param to the constructors in index.js dynamically
IMO configuration and the interface are important considerations. Even though it can be refactored after the fact, an easy-to-configure and easy-to-consume interface will help drive adoption of your library. As you pointed out, the configuration is static right now and very brittle, i.e. a change to the URL will cascade to all clients and require all of them to update.
A first intuitive alternative may be to allow dynamic configuration of the current structure:
const apiAuth = new ApiAuth(process.env.API_AUTH_URL || 'www.sample-host.com');
const apiTasks = new ApiTasks(process.env.API_TASKS_URL || 'www.sample-host.com');
While this allows clients to dynamically configure the URL, the configuration is "implicit". IMO this is unintuitive and difficult to document, and it requires a client to look in the code to see the environment variables and the instantiation flow.
I would favor exposing these classes to the client directly. I would consider this approach "explicit" as it forces the client to explicitly configure/instantiate your components. I think it's like providing your clients with primitives and allowing them to compose, build, and configure them in whatever way they want:
const ApiAuth = require('./amazing-time-api-auth');
const ApiTasks = require('./amazing-time-api-tasks');
module.exports = {
    auth: ApiAuth,
    tasks: ApiTasks
}
This automatically namespaces the API behind its functional areas (auth|tasks) AND requires that the client instantiate the classes before using them:
const api = require("./lib/someApi");
const auth = new api.auth(process.env.SOMETHING, 'some-url');
This pulls the configuration further out in the architecture. It forces the client to decide how it wants to get the URL and explicitly instantiate the library. What if one of your clients doesn't use login/logout? This may be more flexible in that case.
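For example, a hypothetical client that only cares about tasks can depend on just that class (reusing the API_TASKS_URL variable from above) and never touch auth at all:
const api = require("./lib/someApi");
// This client never imports or configures auth; it only builds what it uses.
const tasks = new api.tasks(process.env.API_TASKS_URL || 'www.sample-host.com');
tasks.getTaskList(param1, param2);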
I am not sure if creating these instances every time index.js is required is the right thing
If instantiation should remain hidden, another alternative would be to provide a builder function in order to encapsulate it:
const ApiAuth = require('./amazing-time-api-auth');
const ApiTasks = require('./amazing-time-api-tasks');
module.exports = {
    auth: {
        build: (url) => {
            return new ApiAuth(url);
        }
    },
    tasks: {
        build: (url) => {
            return new ApiTasks(url);
        }
    }
}
This still hides each class but allows the client to decide how it configures each one:
const api = require("./lib/someApi");
const auth = api.auth.build('my-url');
auth.login();
I cannot find clear information on how to manage database connections (MongoDB in my case) from an Azure Function written in JavaScript.
The Microsoft document below says not to create a new connection for each invocation of the function; in C#, this is done by holding a single static connection (using the .NET Framework Data Provider for SQL Server), with pooling handled by the client connection. It does not describe how to do this in JavaScript.
https://learn.microsoft.com/en-us/azure/azure-functions/manage-connections
A solution of creating a global variable to hold the database client between invocations is described here, but the author is not confident this is the correct way to do it.
http://thecodebarbarian.com/getting-started-with-azure-functions-and-mongodb.html
Has anyone used this in production or understand if this is the correct approach?
Yes, there's a very close equivalence between C#/SQL storing a single SqlConnection instance in a static variable and JS/MongoDB storing a single Db instance in a global variable. The basic pattern for JS/MongoDB in Azure Functions is (assuming you're up to date for async/await - alternatively you can use callbacks as per your linked article):
// getDb.js
const { MongoClient } = require('mongodb');

// Assumes the connection string is supplied via an app setting.
const uri = process.env.MONGODB_URI;

let dbInstance;

module.exports = async function() {
    if (!dbInstance) {
        dbInstance = await MongoClient.connect(uri);
    }
    return dbInstance;
};

// function.js
const getDb = require('./getDb.js');

module.exports = async function(context, trigger) {
    let db = await getDb();
    // ... do stuff with db ...
};
This will mean you only instantiate one Db object per host instance. Note this isn't one per Function App: if you're using a dedicated App Service plan there will be as many host instances as you've specified in the plan, and if you're using a Consumption plan it'll vary depending on how busy your app is.
Recently I came across a slightly different problem. Here's the deal: I'm using an API that requires me to use the same instance across my whole application.
The problem is that my application runs in different tabs and different browsers at the same time, and because of that it keeps bootstrapping and creating new instances of the object that I need to use.
I've tried to create a service and inject it into the APP module, but at some point a new instance will be generated.
Now I'm trying to use local storage to save the instance, but when I retrieve my object I can't call the functions that belong to it.
let storedObject = localStorage.getItem("storedObject");
if (storedObject == null) {
    this.storeInstance();
} else {
    let instancedObj = JSON.parse(storedObject);
    instancedObj.somefunction(); // THIS DOESN'T WORK
}

storeInstance() {
    const objThatNeedsToBeTheSame = new TestObject();
    // key / value
    localStorage.setItem("storedObject", JSON.stringify(objThatNeedsToBeTheSame));
}
JSON only serializes data, not methods or prototypes, so an instance round-tripped through localStorage comes back as a plain object. I think this is a good use case for the Firebase Realtime Database, with auth by a token.
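A minimal sketch of that idea, assuming the state you actually need to share (rather than the TestObject instance itself) is serializable; firebaseConfig, authToken and the appState path are placeholders you would supply:
import firebase from 'firebase/app';
import 'firebase/auth';
import 'firebase/database';

// firebaseConfig and authToken are placeholders for your project's web config
// and a token minted by your backend.
firebase.initializeApp(firebaseConfig);

async function syncState(initialState) {
    // Each tab authenticates with the same token-based identity.
    await firebase.auth().signInWithCustomToken(authToken);

    // Write only the serializable state, not the class instance itself.
    await firebase.database().ref('appState').set(initialState);

    // Every tab/browser listens for changes and rebuilds its own instance locally,
    // since methods and prototypes can't travel over the wire.
    firebase.database().ref('appState').on('value', (snapshot) => {
        const instancedObj = Object.assign(new TestObject(), snapshot.val());
        instancedObj.somefunction(); // works: the prototype is restored locally
    });
}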
I haven't been able to find any documents outlining whether it's a good or bad idea to keep a reference to a collection so that it can be re-used after the DB connection has been established.
For instance, our database.ts is called during our server start-up. It grabs the collections and stores them in the module, where they can then be referenced throughout the project.
Example of storing the collections:
/*database.ts*/
//module to keep 1 connection alive and allow us to have 1 instance of our collections.
import {MongoClient, Db, Collection } from 'mongodb';
let uri: string = 'connection_string';
export let db: Db;
export let usrCollection: Collection;
export let bookCollection: Collection;
export function initDb(cb: any) {
    MongoClient.connect(uri, function(err, database) {
        //handle err
        db = database;
        cb(); //returns now since db is assigned.
    });
}

export function initCollections() {
    usrCollection = db.collection('users'); //non-async
    bookCollection = db.collection('book'); //non-async
}
/*app.ts*/
//in our Express app, when it starts up
import {db, initCollections, initDb} from './database';
initDb(function() {
    //this will hold up our server from starting, BUT
    //there is no reason to make it async.
    initCollections();
});
What are some of the shortcomings of this pattern? What can I improve, and what should I avoid for performance hits, specifically with managing the collections? Would this pattern keep my source code bug-free or easy to debug, and extensible for the future? Or is there an even better pattern? Please, no external libraries such as Mongoose; only solutions using the native MongoDB driver for Node.js.
IMO, this is not good practice. Users of database.ts will have to make sure that they have initialized these collections before using them. I would not even export any internal collections, since users should never care how the database is structured. You should export some operation functions like function addUser(user: User) { /* ... */ }. There is a neat code sample created by Microsoft.
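A minimal sketch of that idea, reusing the db handle exported from your database.ts; the addUser/findUserByEmail operations and the User shape are hypothetical, and error handling is omitted:
/*userOperations.ts*/
// Expose operations, not collections: callers never learn how the data is laid out.
import { db } from './database';

export interface User {
    name: string;
    email: string;
}

export function addUser(user: User) {
    return db.collection('users').insertOne(user);
}

export function findUserByEmail(email: string) {
    return db.collection('users').findOne({ email });
}
Callers then depend only on these functions, so collection names and the database layout can change without touching the rest of the project.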
// Edit: Yes, you can store collection references internally so that you don't need to type collection names every time; it's easy to introduce bugs by mixing up singular and plural names. The best practice would be:
import mongodb = require('mongodb');

const server = new mongodb.Server('localhost', 27017, { auto_reconnect: true });
const db = new mongodb.Db('mydb', server, { w: 1 });
db.open(function() {});

const userCollection = db.collection('users');
const bookCollection = db.collection('book');