For a project, I am trying to deploy Google Cloud Functions as well as Firebase Cloud Functions from the same file. That is, I have initialized a Firebase project, and in that same project's codebase I am also trying to deploy a Google Cloud Function for the same project.
The problem I'm facing is that the config for the Google Cloud function has to be hardcoded in the code itself, because unlike Firebase Cloud Functions it cannot read it from functions.config(). Is there a way to get the config for both platforms using a single method? Or an alternative so that for Google Cloud functions we don't have to hard-code the config again and again?
An example of what the hardcoded config means is as follows.
For establishing a connection to MySQL:
let mysqlConfig; // cached config object

function getMySQLConfig() {
  if (!mysqlConfig) {
    // const config = functions.config();
    // const connectionName = config.config_server_sql.connectionName;
    mysqlConfig = {
      // config we get from functions.config()
      // host: config.config_server_sql.ip,
      // port: config.config_server_sql.port,
      // user: config.config_server_sql.dbUser,
      // password: config.config_server_sql.password,
      // database: config.config_server_sql.dbName,
      // multipleStatements: true,
      // charset: "utf8mb4_unicode_ci",

      // hard-coded config
      dbName: "config_db",
      ip: "IP_ADDRESS",
      dbUser: "root",
      port: 3306,
      connectionName: "SAMPLE_NAME",
      password: "PASSWORD",
      ...configSetting // defined elsewhere in the original file
    };
    // mysqlConfig.socketPath = `/cloudsql/${connectionName}`;
  }
  return mysqlConfig;
}
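One way to avoid duplicating the hardcoded values is to wrap the lookup in a single helper that prefers functions.config() (populated for functions deployed through the Firebase CLI) and falls back to environment variables otherwise. This is only a sketch: the environment variable names (SQL_IP, SQL_USER, and so on) are assumptions, and the config field names simply mirror the commented-out lines above.

const functions = require("firebase-functions");

function getSqlSettings() {
  let cfg = {};
  try {
    // Populated when the function was deployed with the Firebase CLI
    cfg = functions.config().config_server_sql || {};
  } catch (e) {
    // functions.config() is not available outside Firebase deployments
  }
  return {
    host: cfg.ip || process.env.SQL_IP,
    port: cfg.port || process.env.SQL_PORT || 3306,
    user: cfg.dbUser || process.env.SQL_USER,
    password: cfg.password || process.env.SQL_PASSWORD,
    database: cfg.dbName || process.env.SQL_DB_NAME,
    connectionName: cfg.connectionName || process.env.SQL_CONNECTION_NAME,
  };
}

For the plain Google Cloud function, the same values would then be supplied as environment variables at deploy time instead of being hardcoded.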
I have been working with Node.js Google Cloud Functions for a while. I have been facing a weird issue where I can't connect to database servers, and no error is logged, not even a timeout. I am using Node v14.2.0 with pg as the Postgres library. My Node.js code is:
const { Client } = require('pg');

let connectionSetting = {
  host: "host",
  user: "user",
  database: "db_name",
  password: "password",
};

const client = new Client(connectionSetting);
console.log(connectionSetting);

client.connect(err => {
  if (err) {
    console.error(`Error connecting to db -> ${err}`);
  }
  console.log("Connection established with PgSql DB ");
});
There are no console logs at all.
The same code works on other systems. The database is a remote database hosted on GCP, and I'm able to connect to it using TablePlus as a GUI client.
Any help appreciated.
I found the issue. It has to do with the Node version: I was using the current release (Node 14.2.0), so I installed Node 12 LTS instead, and everything works fine.
Thanks for all the help.
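As a side note, awaiting the connection instead of passing a callback makes failures surface as rejected promises, which can help when nothing at all is being logged. A minimal sketch using the same pg client (the connection settings are placeholders, as in the question):

const { Client } = require("pg");

const connectionSetting = {
  host: "host",
  user: "user",
  database: "db_name",
  password: "password",
};

async function connect() {
  const client = new Client(connectionSetting);
  try {
    // connect() returns a promise when no callback is supplied
    await client.connect();
    console.log("Connection established with PgSql DB");
    return client;
  } catch (err) {
    console.error(`Error connecting to db -> ${err}`);
    throw err;
  }
}

connect().catch(() => process.exit(1));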
Summary of Problem
I'm hosting my Node.js server that uses Firebase on Heroku, and when I run it on Heroku I get the error below saying it can't load my credentials.
It works perfectly when running on my local machine. I'm using the firebase-admin npm package to configure my firebase connection/instance.
Has anyone encountered this before? If so, I'd love your help!
Error from Heroku
Error: Could not load the default credentials. Browse to https://cloud.google.com/docs/authentication/getting-started for more information.
Code
Firebase Admin Config File
This is the file I'm using to configure my Firebase admin instance
const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: "https://esports-competition-2.firebaseio.com"
}); // this also allows me to use Google OAuth2 refresh token

const db = admin.firestore();
module.exports = db;
Function to save data to firebase
const db = require("../../configs/firebaseConfig");

async function firestorePush(userId, eventType, data) {
  try {
    // read database
    // if userId contains eventType singleEntry then remove from database
    const timeStamp = new Date();
    userId = userId.toString();
    const userDoc = db.collection("pushData").doc(userId);
    const pushData = await userDoc.set(
      {
        event: {
          eventType,
          data,
          timeStamp
        }
      },
      { merge: true }
    );
    console.log("Document set in FireStore", pushData);
  } catch (err) {
    console.log("error pushing to firebase", err);
  }
}
According to the documentation on admin.credential.applicationDefault():
Google Application Default Credentials are available on any Google infrastructure, such as Google App Engine and Google Compute Engine.
Since Heroku is not Google infrastructure, you will have to initialize the Admin SDK with one of the other options shown in the documentation on initializing the SDK.
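For example, one common approach on Heroku is to put the contents of a service account key file into a config var and initialize the SDK with admin.credential.cert(). A rough sketch, assuming a config var named FIREBASE_SERVICE_ACCOUNT that holds the JSON key downloaded from the Firebase console:

const admin = require('firebase-admin');

// FIREBASE_SERVICE_ACCOUNT is an assumed Heroku config var containing the
// full service account JSON for the project.
const serviceAccount = JSON.parse(process.env.FIREBASE_SERVICE_ACCOUNT);

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "https://esports-competition-2.firebaseio.com"
});

const db = admin.firestore();
module.exports = db;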
I am trying to use mup to deploy a meteor app to my DigitalOcean droplet.
What I have done so far
Followed instructions on "Meteor-Up" website http://meteor-up.com/getting-started.html.
Installed mup via "npm install --global mup"
Created ".deploy" folder in my app directory. Ran "mup init".
Configured file "mup.js" file for my app, ran "mup setup".
Here is where I ran into an error. Upon running "mup setup", I am hit with the following error: [screenshot of an error dialog reporting a syntax error in mup.js at line 10, character 5]
What I tried:
I suspected that there could have been an issue with my syntax when configuring the mup.js file. After double-checking and not finding any error, I decided to re-install mup, and try running "mup setup" without modifying the "mup.js" file. However, I still receive the same error message.
Furthermore, after running "mup init", I can no longer run "mup" either, as I receive the same error as seen above. I suspect therefore that the issue is with the mup.js file. I have attached the generic version provided by meteor-up below (which still causes the error seen above).
module.exports = {
  servers: {
    one: {
      // TODO: set host address, username, and authentication method
      host: '1.2.3.4',
      username: 'root',
      // pem: './path/to/pem'
      // password: 'server-password'
      // or neither for authenticate from ssh-agent
    }
  },

  app: {
    // TODO: change app name and path
    name: 'app',
    path: '../app',

    servers: {
      one: {},
    },

    buildOptions: {
      serverOnly: true,
    },

    env: {
      // TODO: Change to your app's url
      // If you are using ssl, it needs to start with https://
      ROOT_URL: 'http://app.com',
      MONGO_URL: 'mongodb://localhost/meteor',
    },

    // ssl: { // (optional)
    //   // Enables let's encrypt (optional)
    //   autogenerate: {
    //     email: 'email.address#domain.com',
    //     // comma separated list of domains
    //     domains: 'website.com,www.website.com'
    //   }
    // },

    docker: {
      // change to 'abernix/meteord:base' if your app is using Meteor 1.4 - 1.5
      image: 'abernix/meteord:node-8.4.0-base',
    },

    // Show progress bar while uploading bundle to server
    // You might need to disable it on CI servers
    enableUploadProgressBar: true
  },

  mongo: {
    version: '3.4.1',
    servers: {
      one: {}
    }
  }
};
Any help would be greatly appreciated!
Thank you
The error dialog you posted shows a syntax error at line 10, character 5.
If you take a look:
module.exports = {
  servers: {
    one: {
      // TODO: set host address, username, and authentication method
      host: '1.2.3.4',
      username: 'root',
      // pem: './path/to/pem'
      // password: 'server-password'
      // or neither for authenticate from ssh-agent
    }
    ^^^ This character
  },
It's a closing brace which JS was not expecting. So why was it unexpected? Let's move back to the last valid syntax:
module.exports = {
  servers: {
    one: {
      // TODO: set host address, username, and authentication method
      host: '1.2.3.4',
      username: 'root',
                      ^^^ This character
      // pem: './path/to/pem'
      // password: 'server-password'
      // or neither for authenticate from ssh-agent
    }
  },
Well, looks like a comma which isn't followed by another key-value pair. Also known as a syntax error.
Take the comma out and things should be fine again!
I faced this same issue today. The problem is that Windows is trying to execute the mup.js file as a JScript script.
Here is the solution from the Meteor Up Common Problems page:
Mup silently fails, mup.js file opens instead, or you get a Windows script error
If you are using Windows, make sure you run commands with mup.cmd instead of mup, or use PowerShell.
That is, instead of mup setup, run mup.cmd setup.
I have the following code in a file called knexfile.js
module.exports = {
  development: {
    client: 'mysql',
    connection: {
      database: 'myDatabase',
      timezone: 'Z',
      user: 'root',
      password: 'myPassword',
      host: '127.0.0.1'
    },
    pool: {
      min: 2,
      max: 10
    },
    migrations: {
      tableName: 'myMigrationTable'
    }
  }
};
myPassword in the code above is in plaintext. On my production server, I definitely don't want the password my application uses to authenticate with the database sitting in plaintext in my code. I also wouldn't want it lying around in a plaintext file on the server.
Is there a way in knex or Node to handle logging into my database securely? Should I just encrypt my password, leave it in a file on my server, and have my web app decrypt it when it's going to log in?
Best practice would be to use an environment variable.
const knex = require('knex')({
  client: 'mysql',
  connection: process.env.DATABASE_URL
});
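For example, with the dotenv package you can keep the credentials in a .env file that is listed in .gitignore and load them into process.env at startup. A sketch of the knexfile using an assumed DATABASE_URL variable (knex also accepts a connection string for the mysql client):

// .env (kept out of git)
// DATABASE_URL=mysql://root:myPassword@127.0.0.1/myDatabase

require('dotenv').config();

module.exports = {
  development: {
    client: 'mysql',
    connection: process.env.DATABASE_URL,
    pool: { min: 2, max: 10 },
    migrations: { tableName: 'myMigrationTable' }
  }
};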
I have a public git[hub] project, and am now ready to switch it from development to production. We are in the research field, so we like to share our code too!
I have a server.js file that we start with node server.js like most tutorials.
In it, there is connection information for the SQL server, and the location of the HTTPS certificates. It looks something like this:
var https = require('https');
var express = require('express');
var ... = require('...');
var fs = require('fs');

var app = express();

var Sequelize = require('sequelize'),
    // ('database', 'username', 'password');
    sequelize = new Sequelize('db', 'uname', 'pwd', {
      logging: function () {},
      dialect: 'mysql',
      …
    });
…
var secureServer = https.createServer({
  key: fs.readFileSync('./location/to/server.key'),
  cert: fs.readFileSync('./location/to/server.crt'),
  ca: fs.readFileSync('./location/to/ca.crt'),
  requestCert: true,
  rejectUnauthorized: false
}, app).listen('8443', function() {
  var port = secureServer.address().port;
  console.log('Secure Express server listening at localhost:%s', port);
});
In PHP you can keep the connection information in another file and then import that file (and therefore its variables) into scope. Is this possible for the SQL connection details (db, uname, pwd) and the file locations of the certs (just to be safe), so that we can commit the server.js file to git and ignore/not track the secrets file?
You can do this in a lot of different ways. One would be to use environment variables like MYSQL_USER=foo MYSQL_PASSWD=bar node server.js and then use process.env.MYSQL_USER in the code.
You can also read from files as you have suggested. You can do require("./config.json") and Node will automatically parse the JSON and import it as JavaScript constructs. You can then .gitignore config.json and perhaps provide an example.config.json.
If you want to support both of these at once there is at least one library that allows you to do this simply: nconf.
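A quick sketch of the nconf approach mentioned above, where command-line arguments take precedence over environment variables, which take precedence over the file (the key names are just examples):

const nconf = require('nconf');

// priority: argv > env > config.json
nconf.argv().env().file({ file: 'config.json' });

const dbUser = nconf.get('MYSQL_USER');
const dbPassword = nconf.get('MYSQL_PASSWD');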
You can always just store the configuration information in a JSON file. Node natively supports JSON files. You can simply require it:
var conf = require('./myconfig.json');
var key = fs.readFileSync(conf.ssl_keyfile);
There are also 3rd-party libraries for managing JSON config files that add various features. I personally like config.json because it lets you publish a sample config file with empty values and then, without modifying the sample, override those values using a .local.json file. That makes it easier to keep config files in repos and to publish changes to them.
Here is a great writeup about how you should organise your deployments.
Basically, all application-critical variables (db password, secret keys, etc.) should be accessible via environment variables.
You could do something like this
// config.js
const _ = require('lodash');

const env = process.env.NODE_ENV || 'development';

const config = {
  default: {
    mysql: {
      poolSize: 5,
    },
  },
  development: {
    mysql: {
      url: 'mysql://localhost/database',
    },
  },
  production: {
    mysql: {
      url: process.env.DB_URI,
    },
  },
};

// deep-merge the environment-specific settings over the defaults
module.exports = _.merge({}, config.default, config[env]);

// app.js
const config = require('./config');
// ....
const sequelize = new Sequelize(config.mysql.url);

Code is not perfect, but it should be enough to get the idea.