Recently I came across a slightly different problem. Here's the deal: I'm using an API that requires me to use the same instance across my whole application.
The problem is that my application runs in different tabs and different browsers at the same time, so it keeps bootstrapping and creating new instances of the object that I need to use.
I've tried to create a service and inject it in the app module, but at some point a new instance gets generated.
Now I'm trying to use local storage to save the instance, but when I retrieve my object I can't call the functions that belong to it.
let storedObject = localStorage.getItem("storedObject");

if (storedObject == null) {
  this.storeInstance();
} else {
  let instancedObj = JSON.parse(storedObject);
  instancedObj.somefunction(); // THIS DOESN'T WORK
}

storeInstance() {
  const objThatNeedsToBeTheSame = new TestObject();
  // key / value
  localStorage.setItem("storedObject", JSON.stringify(objThatNeedsToBeTheSame));
}
I think this is a good use case for the Firebase Realtime Database, with auth by a token.
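A rough sketch of that idea (the path and fields here are just placeholders, using the namespaced web API): every tab or browser listens to the same database location, and since only plain data can be stored, the class instance is rebuilt locally from it.

const stateRef = firebase.database().ref('shared/appState'); // hypothetical path

stateRef.on('value', snapshot => {
  const plainState = snapshot.val() || {};
  // Methods can't be serialized, so rebuild the instance from the plain data.
  const instancedObj = Object.assign(new TestObject(), plainState);
  instancedObj.somefunction();
});

// Any tab can update the shared state; every other tab receives the new value.
stateRef.set({ someField: 'someValue' });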
I need to maintain a common variable between two functions in Twilio, but it's not working as expected.
I tried to use a variable inside Memory like this:
let memory = JSON.parse(event.Memory);

if (memory.twilio.counter === null) {
  memory.twilio.counter = 0;
} else {
  memory.twilio.counter = memory.twilio.counter + 1;
}
Is it not the correct way?
If not, is there any alternative?
Memory is an object provided by Twilio Autopilot, not Twilio Functions. If you want to share state between Functions (not using Autopilot), you need to place that state into external storage like Twilio Sync or Airtable, etc.
Sync is a good fit if there are not a lot of read/write operations. See Tips for building with Sync API below.
Use Twilio Sync to create, read, update, delete persistent data
Tips for building with Sync API
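For reference, a rough sketch of sharing a counter between Function invocations via a Sync Document; the Service SID environment variable and the Document name 'shared-counter' are assumptions, and the Document is assumed to already exist.

exports.handler = async function (context, event, callback) {
  const client = context.getTwilioClient();
  const doc = client.sync
    .services(context.SYNC_SERVICE_SID)   // assumed environment variable
    .documents('shared-counter');         // assumed, pre-created Document

  const current = await doc.fetch();                 // read the persisted state
  const counter = (current.data.counter || 0) + 1;   // shared across invocations
  await doc.update({ data: { counter } });           // write it back

  return callback(null, { counter });
};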
I cannot find clear information on how to manage database connections (MongoDB in my case) from an Azure Function written in JavaScript.
The Microsoft document below says not to create a new connection for each invocation of the function; in C# this is done by holding the connection in a static variable (using the .NET Framework Data Provider for SQL Server), with pooling handled by the client connection. It does not describe how to do this in JavaScript.
https://learn.microsoft.com/en-us/azure/azure-functions/manage-connections
A solution of creating a global variable to hold the database client between invocations is described here, but the author is not confident it is the correct way to do it.
http://thecodebarbarian.com/getting-started-with-azure-functions-and-mongodb.html
Has anyone used this in production or understand if this is the correct approach?
Yes, there's a very close equivalence between C#/SQL storing a single SqlConnection instance in a static variable and JS/MongoDB storing a single Db instance in a global variable. The basic pattern for JS/MongoDB in Azure Functions is as follows (assuming you're set up for async/await; alternatively you can use callbacks as per your linked article):
// getDb.js
const { MongoClient } = require('mongodb');

const uri = process.env.MONGODB_URI; // connection string, e.g. from an app setting
let dbInstance; // cached between invocations on the same host instance

module.exports = async function () {
  if (!dbInstance) {
    // With driver v3+, connect() returns a client; grab the Db from it.
    const client = await MongoClient.connect(uri);
    dbInstance = client.db();
  }
  return dbInstance;
};
// function.js
const getDb = require('./getDb.js');

module.exports = async function (context, trigger) {
  const db = await getDb();
  // ... do stuff with db ...
};
This means you only instantiate one Db object per host instance. Note this isn't one per Function App: if you're using a dedicated App Service Plan there will be as many instances as you've specified in the plan, and if you're using a Consumption Plan it will vary depending on how busy your app is.
I'm trying to create a tool for editing files containing an object that is related to my company's business logic. I'm using Electron to do so.
I've created a JavaScript class which represents the object, handles its internals, and provides business functions on it:
class Annotation {
  constructor() {
    this._variables = []
    this._resourceGenerators = []
  }

  get variables() {
    return this._variables
  }

  get resourceGenerators() {
    return this._resourceGenerators
  }

  save(path) {
    ...
  }

  static load(path) {
    ...
  }
};

module.exports = Annotation;
I create the object in my main process, and I have an event handler which gives renderer processes access to it:
const {ipcMain} = require('electron')
const Annotation = require('./annotation.js');

// ... do Electron window stuff here ...

var annotation = new Annotation()

ipcMain.on('getAnnotation', (event, path) => {
  event.returnValue = annotation
})
I've just found out that sending an object in reply to ipcRenderer.sendSync uses JSON.stringify to pass the annotation, meaning it loses the getters/functions on it.
I'm fairly new to web/Electron development; what is the proper way of handling this? Previously I had handlers in main for dealing with most of the functions that the renderer processes needed, but main started to become very bloated, so I'm trying to refactor it somewhat.
TL;DR: RECONSTRUCT THE OBJECT ON THE RECEIVER SIDE.
Description: Electron's architecture is multi-process by design, separating the main (Node.js) process from each renderer (Chromium) process and allowing them to communicate via the IPC mechanism. For several reasons (efficiency, performance, security, etc.), Electron's out-of-the-box IPC only allows serializable POJOs to be sent / received. Once the receiver has that data, you need to reconstruct the desired object from it.
If your intention is to share references like a true singleton, that's not available.
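For example, a minimal sketch on the renderer side, assuming annotation.js is reachable from the renderer; the path used here is just a placeholder.

const { ipcRenderer } = require('electron');
const Annotation = require('./annotation.js');

const somePath = 'annotations/example.json'; // hypothetical path, for illustration only
const plain = ipcRenderer.sendSync('getAnnotation', somePath); // arrives as a POJO, no methods
const annotation = Object.assign(new Annotation(), plain);     // the prototype restores the methods
annotation.save(somePath);                                     // class methods work again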
The first thing I would suggest is that in most cases, you don't need to transfer anything to the main process. The main process is mostly for creating windows and accessing Electron APIs which are restricted to the main process. Everything else can, and should, be done from the renderer, including access to all Node modules. You can write files, access databases, etc. all from the renderer.
Read this article about the differences between the main and renderer processes and what you should be using each for.
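As a rough illustration of that advice (assuming nodeIntegration is enabled so the renderer can use Node modules), the renderer can do the file work itself instead of round-tripping through main:

const fs = require('fs');
const Annotation = require('./annotation.js');

const annotation = new Annotation();
// Hypothetical file name; the point is that fs is available directly in the renderer.
fs.writeFileSync('annotation.json', JSON.stringify(annotation));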
I'm developing a web app backed by the Firebase Realtime Database.
The app's frontend is quite complex and there are several methods that write data to the db. I have several utils that look like this:
var utils = {
  setSomething: function(id, item) {
    var myRef = firebase.database().ref('my/path');
    myRef.set(item).then(something);
  }
}
The question here is: is it okay to create a new ref inside the method (thereby creating a new ref with each call), or should I "cache" the ref somewhere else (just like we cache jQuery objects)?
I could do something like this first:
var cachedRefs = {
  myRef: firebase.database().ref('my/path'),
  yourRef: firebase.database().ref('your/path'),
  herRef: firebase.database().ref('her/path')
}
And then the former method could be rewritten as:
var utils = {
  setSomething: function(id, item) {
    cachedRefs.myRef.set(item).then(something);
  }
}
Is there any performance gain besides having less code repetition?
firebaser here
References just contain the location in the database. They are cheap.
Adding the first listener to a reference requires that we start synchronizing the data, so that is as expensive as the data you listen to. Adding extra listeners is then relatively cheap, since we de-duplicate the data synchronization across listeners.
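To illustrate (using the path from the question): creating extra refs to the same location is cheap, the cost only starts when the first listener is attached, and further listeners reuse that synchronization.

const refA = firebase.database().ref('my/path');
const refB = firebase.database().ref('my/path'); // cheap: just a location, no I/O yet

refA.on('value', snap => console.log('A:', snap.val())); // first listener starts syncing 'my/path'
refB.on('value', snap => console.log('B:', snap.val())); // de-duplicated with the first listener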
I wanted to know if it's good practice to use it like the following, since I use a global field cacheObj.
I need to parse the data and share it between other modules; any module can take any property, but only the first module that calls this parser is responsible for providing the data to parse (I need to do this parsing just once and share the properties across different modules).
This code is from another SO post and I want to use it:
var Parser = require('myParser'),
    _ = require('lodash');

var cacheObj; // <-- singleton, will hold value and will not be reinitialized on myParser function call

function myParser(data) {
  if (!(this instanceof myParser)) return new myParser(data);

  if (!_.isEmpty(cacheObj)) {
    this.parsedData = cacheObj;
  } else {
    this.parsedData = Parser.parse(data);
    cacheObj = this.parsedData;
  }
}

myParser.prototype = {
  //remove `this.cacheObj`
  getPropOne: function () {
    return this.parsedData.propOne;
  },
  getPropTwo: function () {
    return this.parsedData.propTwo;
  }
};

module.exports = myParser;
It kind of looks like the Context Object pattern, which is used for maintaining state and for sharing information. Some consider it a bad practice and prefer a Singleton when it comes to sharing the object between layers, but if it suits your case (in the same module), my advice is to use it.
UPDATE
The main reason you shouldn't use a Context Object across your layers is that it binds all sub-systems together (one object references everything else). A Singleton, on the other hand, is not just for creating objects; it also serves as an access point that can be loaded by the corresponding sub-system. Having a Singleton represent every service access point allows for seamless vertical integration of cooperating components/modules. Simple code example:
Singleton:
// returns the "global" time
var time = Clock.getInstance().getTime();
Context object:
// allows different timezones to coexist within one application
var time = context.getTimezoneOffset().getTime();
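For completeness, a minimal Node sketch of what such a Singleton access point could look like (the Clock module here is hypothetical); the module cache guarantees every require() of the file sees the same instance:

// clock.js (hypothetical module)
class Clock {
  getTime() {
    return Date.now(); // the "global" time every sub-system agrees on
  }
}

let instance;
module.exports = {
  getInstance() {
    if (!instance) instance = new Clock(); // created once, reused by every caller
    return instance;
  }
};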