How to run Google Cloud SQL only when I need it? - javascript

Google Cloud SQL advertises that it's only $0.0150 per hour for the smallest machine type, and I'm being charged for every hour, not just the hours that I'm connected. Is this because I'm using a pool? How do I set up my backend so that it queries the cloud DB only when needed, so I don't get charged for every hour of the day?
const mysql = require('mysql');

const pool = mysql.createPool({
  host     : process.env.SQL_IP,
  user     : 'root',
  password : process.env.SQL_PASS,
  database : 'mydb',
  ssl      : {
    [redacted]
  }
});
function query(queryStatement, cB) {
  pool.getConnection(function(err, connection) {
    if (err) return cB(err); // guard: connection is undefined when getConnection fails
    // Use the connection
    connection.query(queryStatement, function(error, results, fields) {
      // And done with the connection.
      connection.destroy();
      // Callback
      cB(error, results, fields);
    });
  });
}

This is not so much about the pool as it is about the nature of Cloud SQL. Unlike App Engine, Cloud SQL instances are always up. I learned this the hard way one Saturday morning when I'd been away from the project for a week. :)
There's no way to spin them down when they're not being used, unless you explicitly go stop the service.
There's no way to schedule a service stop, at least within the GCP SDK. You could always write a cron job, or something like that, that runs a little gcloud sql instances patch [INSTANCE_NAME] --activation-policy NEVER command at, for example, 6pm local time, M-F. I was too lazy to do that, so I just set a calendar reminder for myself to shut down my instance at the end of my workday.
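If you do want to automate it, here's a minimal sketch of that cron-job idea in Node.js, using node-cron to shell out to the gcloud CLI. This assumes gcloud is installed and authenticated on the machine running the script, and 'my-instance' is a hypothetical instance name:
const cron = require('node-cron');
const { execFile } = require('child_process');

// Stop the Cloud SQL instance every weekday at 18:00 (server local time)
cron.schedule('0 18 * * 1-5', () => {
  execFile('gcloud', [
    'sql', 'instances', 'patch', 'my-instance', // hypothetical instance name
    '--activation-policy', 'NEVER'
  ], (err, stdout, stderr) => {
    if (err) return console.error('Failed to stop instance:', stderr);
    console.log('Instance stopped:', stdout);
  });
});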
Here's the MySQL Instance start/stop/restart page for the current SDK's docs:
https://cloud.google.com/sql/docs/mysql/start-stop-restart-instance
On an additional note, there is an ongoing Feature Request on the GCP platform to start/stop Cloud SQL (2nd Gen) instances automatically based on traffic. You can visit the link and add your suggestions/comments there as well.

I took the idea from @ingernet and created a cloud function which starts/stops the Cloud SQL instance when needed. It can be triggered via a scheduled job, so you can define when the instance goes up or down.
The details are in this GitHub Gist (inspiration taken from here). Disclaimer: I'm not a Python developer, so there might be issues in the code, but in the end it works.
Basically you need to follow these steps:
1. Create a pub/sub topic which will be used to trigger the cloud function.
2. Create the cloud function and copy in the code below.
   - Make sure to set the correct project ID (the project variable near the top of Main.py).
   - Set the trigger to Pub/Sub and choose the topic created in step 1.
3. Create a cloud scheduler job to trigger the cloud function on a regular basis.
   - Choose the frequency at which you want the cloud function to be triggered.
   - Set the target to Pub/Sub and define the topic created in step 1.
   - The payload should be set to start [CloudSQL instance name] or stop [CloudSQL instance name] to start or stop the specified instance (e.g. start my_cloudsql_instance will start the CloudSQL instance named my_cloudsql_instance).
Main.py:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import base64
from pprint import pprint

credentials = GoogleCredentials.get_application_default()
service = discovery.build('sqladmin', 'v1beta4', credentials=credentials, cache_discovery=False)

project = 'INSERT PROJECT_ID HERE'

def start_stop(event, context):
    print(event)
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    print(pubsub_message)
    command, instance_name = pubsub_message.split(' ', 1)
    if command == 'start':
        start(instance_name)
    elif command == 'stop':
        stop(instance_name)
    else:
        print("unknown command " + command)

def start(instance_name):
    print("starting " + instance_name)
    patch(instance_name, "ALWAYS")

def stop(instance_name):
    print("stopping " + instance_name)
    patch(instance_name, "NEVER")

def patch(instance, activation_policy):
    # fetch the current settings version, which has to be sent back with the patch
    request = service.instances().get(project=project, instance=instance)
    response = request.execute()

    dbinstancebody = {
        "settings": {
            "settingsVersion": response["settings"]["settingsVersion"],
            "activationPolicy": activation_policy
        }
    }

    request = service.instances().patch(
        project=project,
        instance=instance,
        body=dbinstancebody)
    response = request.execute()
    pprint(response)
requirements.txt:
google-api-python-client==1.10.0
google-auth-httplib2==0.0.4
google-auth==1.19.2
oauth2client==4.1.3

Related

Better way to schedule cron jobs based on job orders from php script

So I wrote a simple video creator script in Node.js.
It runs as a scheduled cron job.
I have a panel written in PHP; a user enters details and clicks the "Submit new Video Job" button.
The new job is saved to the DB with its details, a jobId and status="waiting".
The PHP API is responsible for returning one job at a time: it checks for status="waiting", limits the query to 1, then returns the data with the jobId when asked.
The video creation script polls that API every x seconds, asking whether a new job is available.
It has 5 tasks, guarded by a flag that starts as available=true:
1. Check if a new job order is available (with a GET request every 20 seconds); if there is a new job, set available=false.
2. Get the details (name, picture URL, etc.).
3. Create the video with the details.
4. Upload the video to FTP.
5. Post data to the API to update the details, mark that job as "done", and set available=true again.
These tasks are async, so every task has to wait for the previous task to be done.
Right now, GET- or POST-requesting the API every 20 seconds (the exact interval doesn't matter) to check whether a new job is available seems like a bad approach to me.
So is there any way / package / system to accomplish this behavior?
Code Example:
const cron = require('node-cron');
const axios = require('axios'); // this require was missing from the snippet

let available = true;

var scheduler = cron.schedule(
  '*/20 * * * * *',
  () => {
    if (available) {
      makevideo();
    }
  },
  {
    scheduled: false,
    timezone: 'Europe/Istanbul',
  }
);

let makevideo = async () => {
  available = false;
  let { data } = await axios.get('https://api/checkJob');
  if (data == 0) {
    console.log('No Job');
    available = true;
  } else {
    let jobid = data.id;
    await createvideo();
    await sendToFTP();
    await axios.post('https://api/saveJob', {
      id: jobid,
      videoPath: 'somevideopath',
    });
    available = true;
  }
};

scheduler.start();
RabbitMQ is also a good queueing system.
Why?
It's really well documented (examples for many languages, including JavaScript & PHP).
The tutorials are simple while exposing real use cases.
It has a REST API.
It ships with a monitoring UI.
How to use it to solve your problem?
On the job producer side: send messages (jobs) to a queue by following tutorial 1.
To consume jobs with your Node.js process: see RabbitMQ's tutorial 2.
Other suggestions:
Use a prefetch value of 1 and publisher confirms so you can ensure that a consumer instance will not receive new messages while a job is running (see the sketch after the package links below).
Roadmap for a quick prototype: tutorial 1, then tutorial 2. After sending and receiving messages you can explore the options you can set on queues and messages.
Nodejs package : http://www.squaremobius.net/amqp.node/
PHP package : https://github.com/php-amqplib/php-amqplib
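To make the prefetch suggestion concrete, here is a minimal consumer sketch using the amqp.node package linked above (promise API). The queue name 'video_jobs' is hypothetical, and createvideo/sendToFTP stand in for the async tasks from the question:
const amqp = require('amqplib');

(async () => {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('video_jobs', { durable: true });
  ch.prefetch(1); // no new job is delivered until the current one is acked

  ch.consume('video_jobs', async (msg) => {
    const job = JSON.parse(msg.content.toString());
    await createvideo(job); // your existing async tasks
    await sendToFTP(job);
    ch.ack(msg);            // marks the job done; the broker sends the next one
  }, { noAck: false });
})();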
While it is possible to use the database as a queue, it is commonly known as an anti-pattern (next to using the database for logging), and as you are looking for:
So any way / package / system to accomplish this behavior?
I'll take advantage of the open-ended form of your question (thanks to the placed bounty) to suggest: Beanstalk.
Beanstalk is a simple, fast work queue.
Its interface is generic, but was originally designed for reducing the latency of page views in high-volume web applications by running time-consuming tasks asynchronously.
It has client libraries in the languages you mention in your question (and many more), is easy to develop with and to run in production.
What you are describing is a very standard system design pattern, commonly implemented with Apache Kafka or any queue-based system (e.g. RabbitMQ). You can read up on Kafka/RabbitMQ, but without going into details:
There is a central queue.
When a user submits a job, the job gets added to the queue.
The video processor runs indefinitely, subscribing to the queue.
You can go ahead and look up https://www.gentlydownthe.stream/ and you will recognize the similarities with what you are doing.
Here you don't need to poll yourself; you subscribe to an event, and everything else is managed by the respective queues.
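For illustration, here is a minimal consumer sketch using the kafkajs package (my choice for the example, not something prescribed by the answer). The broker address and the 'video-jobs' topic name are hypothetical, and createvideo/sendToFTP are the tasks from the question:
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'video-worker', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'video-workers' });

const run = async () => {
  await consumer.connect();
  await consumer.subscribe({ topic: 'video-jobs' });
  // Runs indefinitely; each submitted job arrives as a message,
  // so there is no 20-second polling loop anymore.
  await consumer.run({
    eachMessage: async ({ message }) => {
      const job = JSON.parse(message.value.toString());
      await createvideo(job);
      await sendToFTP(job);
    },
  });
};

run().catch(console.error);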

Giveaway commands does not save (Heroku) discord.js

I am making a giveaway command, but whenever I restart all dynos in Heroku the giveaway seems to freeze (it never ends), and when I do !gdelete {messageid} it says there is no giveaway for {messageid}. Any idea why, and how to fix it? I have tried using quick.db but got the same result. I am quite new to Heroku and to coding Discord bots. I'm using Node.js.
const { GiveawaysManager } = require("discord-giveaways");
const manager = new GiveawaysManager(bot, {
  storage: "./giveaways.json",
  updateCountdownEvery: 10000,
  default: {
    botsCanWin: false,
    embedColor: "#FF0000",
    reaction: "🎉"
  }
});
bot.giveawaysManager = manager;
Here's the code. And here's the gstart command: https://pastebin.com/9tBjpVEY
The issue is caused by Heroku, which doesn't persist local files. Every time a dyno restarts, Heroku deletes everything and rebuilds it from your last deploy: if you save your files locally, they'll get deleted on restart.
To solve this issue you need either to switch to another service or to create some form of backup for your file.
You could also use a remote database, but I don't know how that could be implemented with the discord-giveaways package.
I had the same issue and I think it can be solved by doing this:
Instead of using quick.db, you can use quickmongo, which works just like quick.db, and discord-giveaways also has an example of it. There is one change you need to make, though: the quickmongo example also shows a local way to store the files, but instead of typing the localhost string, replace it with the MongoDB Compass connection string of your MongoDB cluster, and give the new collection the same name, giveaways.
To get the connection string, log in to your MongoDB account and create a cluster. After creating the cluster, click the Connect button on the cluster and select "Connect using MongoDB Compass". There you will see a connection string; copy and paste it in place of the localhost string. Then replace <password> with the password of your database user. Also, replace the "test" at the end with "giveaways" and you are good to go. After running the code, you should see a collection named giveaways in the Collections tab inside your cluster.
Example:
const db = new Database('connectionLink/giveaways');
db.once('ready', async () => {
  if ((await db.get('giveaways')) === null) await db.set('giveaways', []);
  console.log('Giveaway Database Loaded');
});

Real time notifications node.js

I'm developing a calendar application with Node.js, Express.js and Sequelize.
The application is simple: you can create tasks in your calendar, but you can also assign tasks to other users of the system.
I need to create a notification system with socket.io, but I don't have experience with WebSockets. My big doubt is: how can I make my server send a notification to the user a task is assigned to?
My port configuration is in a folder called bin/www, and my Express routes are defined in a file called server.js.
Any idea?
I want to introduce you to a ready-to-use backend system that enables you to easily build modern web applications with cool functionalities:
Persisted data: store your data and perform advanced searches on it.
Real-time notifications: subscribe to fine-grained subsets of data.
User management: login, logout and security rules are no longer a burden.
With this, you can focus on your main application development.
You can look at Kuzzle, which is a project I'm working on.
First, start the service:
http://docs.kuzzle.io/guide/getting-started/#running-kuzzle-automagically
Then in your calendar application you can use the JavaScript SDK.
At this point you can subscribe to changes and create documents:
const
  Kuzzle = require('kuzzle-sdk'),
  kuzzle = new Kuzzle('http://localhost:7512');

const filter = {
  equals: {
    user: 'username'
  }
};

// Subscribe to every change in the calendar collection containing a field `user` equal to `username`
kuzzle
  .collection('calendar', 'myproject')
  .subscribe(filter, function(error, result) {
    // triggered each time a document is updated/created!
    // Here you can display a message in your application, for instance
    console.log('message received from kuzzle:', result);
  });

// Each time you have to create a new task in your calendar, create a document that represents the task and persist it with kuzzle
const task = {
  date: '2017-07-19T16:07:21.520Z',
  title: 'my new task',
  user: 'username'
};

// Creating a document from another app will notify all subscribers
kuzzle
  .collection('calendar', 'myproject')
  .createDocument(task);
I think this can help you :)
Documents are served through socket.io or native WebSockets when available.
Don't hesitate to ask questions ;)
As far as I can understand you need to pass your socket.io instance to other files, right ?
var sio = require('socket.io');
var io = sio();
app.io = io;
And you simply attach it to your server in your bin/www file
var io = app.io
io.attach(server);
Or, something else I like to do is add a socket.io middleware for Express:
// Socket.io middleware
app.use((req, res, next) => {
  req.io = io;
  next();
});
So you can access it in some of your router files:
req.io.emit('newMsg', {
  success: true
});
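To notify only the user a task is assigned to (rather than broadcasting to everyone with io.emit), a common pattern is socket.io rooms. Here's a minimal sketch of that; how you identify the user on connection (a userId query parameter here) and the assignedUserId variable are assumptions for illustration:
// In bin/www, after io.attach(server): put each socket in a per-user room
io.on('connection', (socket) => {
  const userId = socket.handshake.query.userId; // however you identify users
  socket.join('user:' + userId);
});

// In a route, when a task is assigned, emit only to the assignee's room
req.io.to('user:' + assignedUserId).emit('taskAssigned', {
  title: 'my new task'
});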

RabbitMQ amqp.node integration with nodejs express

The official RabbitMQ Javascript tutorials show usage of the amqp.node client library
amqp.connect('amqp://localhost', function(err, conn) {
  conn.createChannel(function(err, ch) {
    var q = 'hello';

    ch.assertQueue(q, {durable: false});
    // Note: on Node 6 Buffer.from(msg) should be used
    ch.sendToQueue(q, new Buffer('Hello World!'));
    console.log(" [x] Sent 'Hello World!'");
  });
});
However, I find it hard to reuse this code elsewhere. In particular, I don't know how to export the channel object, since it only exists inside a callback. For example, in my Node.js/Express app:
app.post('/posts', (req, res) => {
  // -- Create a new Post
  // -- Publish a message saying that a new Post has been created
  // -- Another 'newsfeed' server consumes that message and updates the newsfeed table
  // How do I reuse the channel 'ch' object from amqp.node here?
});
Do you guys have any guidance on this one? Suggestions of other libraries are welcome (since I'm starting out, ease of use is what I consider most important).
amqp.node is a low-level API set that does minimal translation from AMQP to Node.js. It's basically a driver that should be used from a more friendly API.
If you want a DIY solution, create an API that you can export from your module, and manage the connection, channel and other objects from within that API file.
But I don't recommend doing it yourself. It's not easy to get things right.
I would suggest using a library like Rabbot (https://github.com/arobson/rabbot/) to handle this for you.
I've been using Rabbot for quite some time now, and I really like the way it works. It pushes the details of AMQP off to the side and lets me focus on the business value of my applications and the messaging patterns that I need to build features.
As explained in the comments, you could use module.exports to expose the newly created channel. Of course this will be overridden each time you create a new channel, unless you want to keep an array of channels or some other data structure.
Assuming this is in a script called channelCreator.js:
amqp.connect('amqp://localhost', function(err, conn) {
  conn.createChannel(function(err, ch) {
    var q = 'hello';

    ch.assertQueue(q, {durable: false});
    // this is where you can export the channel object
    module.exports.channel = ch;
    // moved the sending-code to some 'external script'
  });
});
In the script where you may want to use the "exported" channel:
var channelCreator = require("<path>/channelCreator.js");

// this is where you can access the channel object:
if (channelCreator.channel) {
  channelCreator.channel.sendToQueue('QueueName', new Buffer('This is Some Message.'));
  console.log(" [x] Sent 'Message'");
}
Hope this helps.
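A variation on the same idea that avoids the race above (channel still undefined when the first request arrives) is to export a promise instead. A sketch, using the same amqp.node callback API:
// channelCreator.js
var amqp = require('amqplib/callback_api');

module.exports = new Promise(function(resolve, reject) {
  amqp.connect('amqp://localhost', function(err, conn) {
    if (err) return reject(err);
    conn.createChannel(function(err, ch) {
      if (err) return reject(err);
      resolve(ch);
    });
  });
});

// elsewhere: consumers wait for the channel instead of checking if it exists yet
// require('<path>/channelCreator.js').then(function(ch) {
//   ch.sendToQueue('QueueName', new Buffer('This is Some Message.'));
// });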

running a mysql query using mysql-npm on AWS

Hi guys, I have a problem that I don't really know how to solve. It's also a bit strange :/
Basically I have created this Lambda function to connect to a MySQL DB using the Node package 'mysql'.
If I run the function from the command line on my PC using the command 'sls function run function1' and make different queries, everything is fine.
But when I call the function from a web browser using the link, I have to refresh the page 2 times to get the right result, because on the first refresh the server responds with the old result.
I have noticed that from the command line I always get a different thread ID, while from the web browser it is always the same.
Also, I don't close the connection in the Lambda function code, because everything is fine when I run the function from the command line; but from the browser I can only make 2 queries and then I get a message saying that I cannot use a closed connection.
So it seems like Lambda stores the old query result when I call it from the web browser.
Obviously I'm making some stupid mistake, but I don't know how to solve it.
Does anyone have an idea?
Thanks :)
'use strict';
//npm packages
var mysql = require('mysql');
var deasync = require('deasync');

//variables
var goNext = false;   //used to synchronize deasync
var error = false;    //becomes TRUE if an error occurred during the connection to the DB
var dataColumnTable;  //the data that you extract from the query to the DB
var errorMessage;

//----------------------------------------------------------------------------------------------------------------
//always same credentials
var connection = mysql.createConnection({
  host     : 'hostAddress',
  user     : 'Puser',
  password : 'password',
  port     : '3306',
  database : 'database1',
});

//----------------------------------------------------------------------------------------------------------------
module.exports.handler = function(event, context, callback) { // note: callback was missing from the original signature
  var Email = event.email;
  connection.query('SELECT City, Address FROM Person WHERE E_Mail=?', Email, function(err, rows) {
    if (err) {
      console.log("Cannot connect to DB");
      console.log(err);
      error = true;
      errorMessage = err;
    } else {
      console.log("data from column acquired!");
      dataColumnTable = rows;
    }
    //connection.end(function(err) {
    //  connection.destroy();
    //});
    //console.log("Connection closed!");
    goNext = true;
  });
  require('deasync').loopWhile(function(){ return goNext != true; });
  //----------------------------------------------------------------------------------------------------------------
  if (error == true)
    return callback('Error ' + errorMessage);
  else
    return callback(null, dataColumnTable); //returns a JSON file
  //end handler
};
Disclaimer: I'm not very familiar with AWS and/or AWS Lambda.
http://docs.aws.amazon.com/lambda/latest/dg/programming-model-v2.html states (emphasis mine):
Your Lambda function code must be written in a stateless style, and have no affinity with the underlying compute infrastructure. Your code should expect local file system access, child processes, and similar artifacts to be limited to the lifetime of the request. Persistent state should be stored in Amazon S3, Amazon DynamoDB, or another cloud storage service. Requiring functions to be stateless enables AWS Lambda to launch as many copies of a function as needed to scale to the incoming rate of events and requests. These functions may not always run on the same compute instance from request to request, and a given instance of your Lambda function may be used more than once by AWS Lambda.
Opening a connection and storing it in a variable outside your handler function is state. The connection will likely be closed between requests or even before your first request. Your lambda function may be reused (hence identical thread ids).
My assumption would be (and an attempt to solve this problem) that you need to create the connection on every request (i.e., inside your handler), and must not expect any variable to still hold its initialized value or its value from the last request (except for constants, probably).
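A sketch of that suggestion applied to the code above: create (and end) the connection inside the handler, drop the module-level state and the deasync loop, and return through the callback. The credentials are the same placeholders as in the question:
'use strict';
var mysql = require('mysql');

module.exports.handler = function(event, context, callback) {
  // fresh connection per invocation: no state survives between requests
  var connection = mysql.createConnection({
    host     : 'hostAddress',
    user     : 'Puser',
    password : 'password',
    port     : '3306',
    database : 'database1',
  });

  connection.query('SELECT City, Address FROM Person WHERE E_Mail=?', event.email, function(err, rows) {
    connection.end(); // always release this invocation's connection
    if (err) return callback(err);
    callback(null, rows);
  });
};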
