My Scenario
I am working on a desktop-based application where my big challenge is keeping data in an offline relational database and syncing it accordingly (the company has its own syncing algorithm). I am using Electron and Vue.js on the client side, and electron-builder to build the desktop app. I am able to write migrations with raw SQL or various ORMs.
What I want?
When the app is installed on a desktop, I want to create the database file and apply all the migrations on the client's computer. I just don't know how to do that part. I also looked into the electron-builder docs but didn't understand them. I need an example, any ideas?
Please help me. Thanks.
After doing a lot of research I found an awesome solution provided by the Sequelize team: the Umzug library (GitHub). Let's look at the implementation...
/**
 * Created by Ashraful Islam
 */
const path = require('path');
const Umzug = require('umzug');
const database = /* Imported my database config here */;

const umzug = new Umzug({
  storage: 'sequelize',
  storageOptions: {
    sequelize: database
  },
  // see: https://github.com/sequelize/umzug/issues/17
  migrations: {
    params: [
      database.getQueryInterface(), // queryInterface
      database.constructor, // DataTypes
      function () {
        throw new Error('Migration tried to use old style "done" callback. Please upgrade to "umzug" and return a promise instead.');
      }
    ],
    path: './migrations',
    pattern: /\.js$/
  },
  logging: function () {
    console.log.apply(null, arguments);
  }
});

function logUmzugEvent(eventName) {
  return function (name, migration) {
    console.log(`${name} ${eventName}`);
  };
}

function runMigrations() {
  return umzug.up();
}

umzug.on('migrating', logUmzugEvent('migrating'));
umzug.on('migrated', logUmzugEvent('migrated'));
umzug.on('reverting', logUmzugEvent('reverting'));
umzug.on('reverted', logUmzugEvent('reverted'));

module.exports = {
  migrate: runMigrations
};
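The database variable above is just a Sequelize instance. As a rough sketch of what that config might look like for an offline SQLite file created on the client's machine (the file name, the sqlite dialect, and the use of Electron's userData path are my assumptions, not part of the original answer):

// database.js - hypothetical Sequelize config pointing at a local SQLite file
const path = require('path');
const { app } = require('electron');
const Sequelize = require('sequelize');

module.exports = new Sequelize({
  dialect: 'sqlite',
  // Keep the DB file in the per-user data directory so it survives app updates
  storage: path.join(app.getPath('userData'), 'app-database.sqlite'),
  logging: false
});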
Idea behind the scenes
I clearly declare the migration directory and define the file-matching pattern. Umzug just reads the files from there and runs the DB migrations. An example migration file follows...
// 000_Initial.js
"use strict";

module.exports = {
  up: function (migration, DataTypes) {
    return migration.createTable('Sessions', {
      sid: {
        type: DataTypes.STRING,
        allowNull: false
      },
      data: {
        type: DataTypes.STRING,
        allowNull: false
      },
      createdAt: {
        type: DataTypes.DATE
      },
      updatedAt: {
        type: DataTypes.DATE
      }
    }).then(function () {
      return migration.addIndex('Sessions', ['sid']);
    });
  },
  down: function (migration, DataTypes) {
    return migration.dropTable('Sessions');
  }
};
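To actually run the migrations on the client's computer, the exported migrate() function can be called from Electron's main process before the first window opens. A minimal sketch, assuming the module above is saved as migrations-runner.js (the file name and startup flow are my assumptions):

// main.js (Electron main process) - hypothetical wiring, not from the original answer
const { app } = require('electron');
const { migrate } = require('./migrations-runner');

app.on('ready', () => {
  migrate()
    .then(() => {
      console.log('All migrations applied, database is up to date');
      // createWindow(); // continue normal startup here
    })
    .catch((err) => {
      console.error('Migration failed', err);
      app.quit();
    });
});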
Related
I'm learning Strapi and following these docs: https://docs.strapi.io/developer-docs/latest/developer-resources/database-apis-reference/rest/relations.html#connect
I expect to be able to specify the name of a relationship and then connect 1 or more records by ID during an update.
My code is as follows:
// DOES NOT WORK
/**
 * Create the file record at root folder with file info.
 */
const file = await strapi.query(FILE_MODEL_UID).create({
  data: {
    folderPath: "/",
    ...fileInfo,
  },
});

/**
 * Update content-source record with new status.
 */
await strapi.query("api::content-source.content-source").update({
  where: { uuid: jobName },
  data: {
    assets: {
      connect: [file.id],
    },
  },
});
This operation fails to a) throw an error and b) establish the connection. However, using the set operation works, though it overwrites any prior relationships.
// DOES WORK
/**
 * Create the file record at root folder with file info.
 */
const file = await strapi.query(FILE_MODEL_UID).create({
  data: {
    folderPath: "/",
    ...fileInfo,
  },
});

/**
 * Update content-source record with new status.
 */
await strapi.query("api::content-source.content-source").update({
  where: { uuid: jobName },
  data: {
    assets: {
      set: [file.id],
    },
  },
});
Is there something obvious I'm missing, or is this a bug?
I am trying to create a Dataflow job to index a BigQuery table into Elasticsearch with the Node package @google-cloud/dataflow (v1beta3).
The job works fine when it's created and launched from the Google Cloud console, but I get the following error when I try it in Node:
Error: 3 INVALID_ARGUMENT: (b69ddc3a5ef1c40b): Cannot set worker pool zone. Please check whether the worker_region experiments flag is valid. Causes: (b69ddc3a5ef1cd76): An internal service error occurred.
I tried to specify the experiments params in various ways, but I always end up with the same error.
Has anyone managed to get a similar Dataflow job working? Or do you have any information about Dataflow experiments?
Here is the code:
const { JobsV1Beta3Client } = require('@google-cloud/dataflow').v1beta3

const dataflowClient = new JobsV1Beta3Client()

const response = await dataflowClient.createJob({
  projectId: 'myGoogleCloudProjectId',
  location: 'europe-west1',
  job: {
    launch_parameter: {
      jobName: 'indexation-job',
      containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
      parameters: {
        inputTableSpec: 'bigQuery-table-gs-adress',
        connectionUrl: 'elastic-endpoint-url',
        index: 'elastic-index',
        elasticsearchUsername: 'username',
        elasticsearchPassword: 'password'
      }
    },
    environment: {
      experiments: ['worker_region']
    }
  }
})
Thank you very much for your help.
After many attempts, I managed yesterday to find how to specify the worker region.
It looks like this:
await dataflowClient.createJob({
  projectId,
  location,
  job: {
    name: 'jobName',
    type: 'Batch',
    containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
    pipelineDescription: {
      inputTableSpec: 'bigquery-table',
      connectionUrl: 'elastic-url',
      index: 'elastic-index',
      elasticsearchUsername: 'username',
      elasticsearchPassword: 'password',
      project: projectId,
      appName: 'BigQueryToElasticsearch'
    },
    environment: {
      workerPools: [
        { region: 'europe-west1' }
      ]
    }
  }
})
It's not working yet; I still need to find the correct way to provide the other parameters, but the Dataflow job now gets created in the Google Cloud console.
For anyone who is struggling with this issue, I finally found how to launch a Dataflow job from a template.
There is a function launchFlexTemplate that works the same way as job creation in the Google Cloud console.
Here is the final call, working correctly:
const { FlexTemplatesServiceClient } = require('@google-cloud/dataflow').v1beta3

const dataflowClient = new FlexTemplatesServiceClient()

const response = await dataflowClient.launchFlexTemplate({
  projectId: 'google-project-id',
  location: 'europe-west1',
  launchParameter: {
    jobName: 'job-name',
    containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
    parameters: {
      apiKey: 'elastic-api-key', // mandatory but not used if you provide username and password
      connectionUrl: 'elasticsearch endpoint',
      index: 'elasticsearch index',
      elasticsearchUsername: 'username',
      elasticsearchPassword: 'password',
      inputTableSpec: 'bigquery source table', // projectId:datasetId.table
      // parameters to upsert the elasticsearch index
      propertyAsId: 'table column to use for the elastic _id',
      usePartialUpdate: true,
      bulkInsertMethod: 'INDEX'
    }
  }
})
Consider the following code within gatsby-config.js:
module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-fetch`,
      options: {
        name: `brands`,
        type: `brands`,
        url: `${dynamicURL}`, // This is the part I need to be dynamic at run/build time.
        method: `get`,
        axiosConfig: {
          headers: { Accept: "text/csv" },
        },
        saveTo: `${__dirname}/src/data/brands-summary.csv`,
        createNodes: false,
      },
    },
  ],
}
As you can see above, the URL for the source plugin is something I need to be dynamic. The reason is that the file URL will change every time it's updated in the CMS. I need to query the CMS for that field and get its CDN URL before passing it to the plugin.
I tried adding the following to the top of gatsby-config.js but I'm getting errors.
const axios = require("axios")

let dynamicURL = ""

const getBrands = async () => {
  return await axios({
    method: "get",
    url: "https://some-proxy-url-that-returns-json-with-the-csv-file-url",
  })
}

;(async () => {
  const brands = await getBrands()
  dynamicURL = brands.data.summary.url
})()
I'm assuming this doesn't work because the config is not waiting for the request above to resolve and therefore, all we get is a blank URL.
Is there any better way to do this? I can't simply supply the source plugin with a fixed/known URL ahead of time.
Any help is greatly appreciated. I'm normally a Vue.js guy but am having to work with React/Gatsby, so I'm not entirely familiar with it.
I had a similar requirement where I needed to set the siteId of gatsby-plugin-matomo dynamically by fetching data from an async API. After searching a lot of documentation on the Gatsby build lifecycle, I found a solution.
Here is my approach:
gatsby-config.js
module.exports = {
  siteMetadata: {
    ...
  },
  plugins: [
    {
      resolve: 'gatsby-plugin-matomo',
      options: {
        siteId: '',
        matomoUrl: 'MATOMO_URL',
        siteUrl: 'GATSBY_SITE_URL',
        dev: true
      }
    }
  ]
};
Here siteId is blank because I need to set it dynamically.
gatsby-node.js
exports.onPreInit = async ({ actions, store }) => {
  const { setPluginStatus } = actions;
  const state = store.getState();
  const plugin = state.flattenedPlugins.find(plugin => plugin.name === "gatsby-plugin-matomo");

  if (plugin) {
    const matomo_site_id = await fetchMatomoSiteId('API_ENDPOINT_URL');
    plugin.pluginOptions = { ...plugin.pluginOptions, ...{ siteId: matomo_site_id } };
    setPluginStatus({ pluginOptions: plugin.pluginOptions }, plugin);
  }
};

exports.createPages = async function createPages({ actions, graphql }) {
  /* Create page code */
};
onPreInit is a Gatsby lifecycle method that executes just after the plugins are loaded from the config. The onPreInit lifecycle hook exposes some built-in helpers.
store is the Redux store where Gatsby keeps all the information required for the build process.
setPluginStatus is a Redux action by which plugin data can be modified in Gatsby's Redux store.
The important thing here is that the onPreInit lifecycle hook has to be declared as async.
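For completeness, fetchMatomoSiteId above is a helper the snippet leaves undefined; a minimal sketch of what it might look like (the endpoint and the response shape are assumptions):

// Hypothetical helper used by onPreInit above - endpoint and response shape are assumed
const axios = require('axios');

async function fetchMatomoSiteId(endpointUrl) {
  const response = await axios.get(endpointUrl);
  // Assume the API responds with something like { siteId: '42' }
  return response.data.siteId;
}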
Hope this helps someone in the future.
Another approach that may work for you is using environment variables: since, as you said, the URL is known, you can add it to a .env file rather than a CSV.
By default, Gatsby uses .env.development for the gatsby develop command and .env.production for the gatsby build command, so you will need to create two files in the root of your project.
In both .env.development and .env.production just add:
DYNAMIC_URL=https://yourUrl.com
Since your gatsby-config.js runs on your Node server, you don't need to prefix the variable with GATSBY_ as client-side variables need to be. So, in your gatsby-config.js:
module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-fetch`,
      options: {
        name: `brands`,
        type: `brands`,
        url: process.env.DYNAMIC_URL, // This is the part I need to be dynamic at run/build time.
        method: `get`,
        axiosConfig: {
          headers: { Accept: "text/csv" },
        },
        saveTo: `${__dirname}/src/data/brands-summary.csv`,
        createNodes: false,
      },
    },
  ],
}
It's important to avoid tracking those files in your Git repository since you don't want to expose this type of data.
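One caveat: as far as I know, Gatsby does not automatically expose .env.development / .env.production values to code running in gatsby-config.js, so you may also need to load the matching file explicitly with dotenv at the top of gatsby-config.js, roughly like this:

// Top of gatsby-config.js - load the .env file for the current environment (sketch)
require("dotenv").config({
  path: `.env.${process.env.NODE_ENV}`,
})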
I have created a sample.js file with the following code:
var mysql = require('mysql');
Typically, I would connect to my online database using:
var pool = mysql.createPool({
  host: 'den1.mysql5.gear.host',
  user: 'myst',
  password: 'hidden',
  database: "myst"
});
and then do
var connection = pool.getConnection(function (err, connection) {
  // do whatever, like connection.query
});
How can I create a local database file and access that, instead of using server side databases?
Edit: USING ONLY MySQL!
If you do not know, please do not answer. I am not looking for an alternative (since most alternatives cause node to delete packages needed by discord.js for some reason).
MySQL is quite heavy for implementing a database on the front end, as there are size and speed limitations. I'd prefer using it on the back end only, but if you want a database on the front end you can use db.js. IndexedDB is present in most modern browsers, and db.js is a wrapper that consumes it to implement a database on the front end. Here's the sample provided in the documentation.
<script src='/scripts/db.js'></script>
var server;

db.open({
  server: 'my-app',
  version: 1,
  schema: {
    people: {
      key: { keyPath: 'id', autoIncrement: true },
      // Optionally add indexes
      indexes: {
        firstName: {},
        answer: { unique: true }
      }
    }
  }
}).done(function (s) {
  server = s;
});
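Once the open() call resolves, the server object exposes the stores declared in the schema. A rough usage sketch based on the db.js documentation (the record values are made up):

// Add a record to the 'people' store, then query it back by the firstName index
server.people
  .add({ firstName: 'Aaron', answer: 42 })
  .done(function (items) {
    console.log('Added', items);
  });

server.people
  .query('firstName')
  .only('Aaron')
  .execute()
  .done(function (results) {
    console.log('Found', results);
  });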
I'm using Sails.js version 0.10.x, and am just starting to try out its associations support.
In my scenario I have a User who has many Documents.
so in /api/models/User.js I have:
module.exports = {
  // snipped out bcrypt stuff etc
  attributes: {
    email: {
      type: 'string',
      unique: true,
      index: true,
      required: true
    },
    documents: {
      collection: 'document',
      via: 'owner'
    },
  }
};
and in /api/models/Document.js I have:
module.exports = {
  attributes: {
    name: 'string',
    owner: {
      model: 'user'
    }
  }
};
In my DocumentController I have the following:
fileData = {
  name: file.name,
  owner: req.user
};

Document.create(fileData).exec(function (err, savedFile) {
  if (err) {
    next(err);
  } else {
    results.push({
      id: savedFile.id,
      url: '/files/' + savedFile.name,
      document: savedFile
    });
    next();
  }
});
Looking in my local Mongo database via the command line, I can see that the documents have the owner field set as "owner" : ObjectId("xxxxxxxxxxxxxxxxxxxxxxxx"), which is as expected.
However, when I inspect the req.user object later in the DocumentController via sails.log.debug("user has documents", req.user.documents); I see
debug: user has documents [ add: [Function: add], remove: [Function: remove] ]
And not an array of Document objects.
In my resulting Slim template:
if req.user.documents.length > 0
  ul
    for doc in req.user.documents
      li= doc.toString()
else
  p No Documents!
I always get "No Documents!"
I seem to be missing something obvious but I'm not sure what that is.
I worked this out by wading through the Waterline source code.
Firstly, as I hoped, both sides of the association are affected by the creation of the Document instance, and I simply needed to reload my user.
Within the controller this is as simple as User.findOne(req.user.id).populateAll().exec(...)
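Spelled out a little more (a sketch; the callback structure and variable names are my own), that reload might look like:

// Inside a controller action or policy: reload the user with all associations populated
User.findOne(req.user.id)
  .populateAll()
  .exec(function (err, freshUser) {
    if (err) { return next(err); }
    // freshUser.documents is now a real array of Document records
    req.user = freshUser;
    next();
  });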
I also modified my Passport service helpers as follows:
function findById(id, fn) {
  User.findOne(id).populateAll().exec(function (err, user) {
    if (err) return fn(null, null);
    return fn(null, user);
  });
}

function findByEmail(email, fn) {
  User.findOne({ email: email }).populateAll().exec(function (err, user) {
    if (err) return fn(null, null);
    return fn(null, user);
  });
}
Now the user, and its associations, are loaded properly per request.
I had to dig through the source to find the populateAll() method, as it's not actually documented anywhere I could find. I could also have used populate('documents') instead, but I am about to add other associations to the User, so I need populateAll() to load all the relevant associations.
Waterline associations docs
Waterline /lib/waterline/query/deferred.js#populateAll