Heroku recently posted a list of good tips for Postgres. I was most intrigued by the Track the Source of Your Queries section, and I was curious whether this is something that's possible with Sequelize. I know that Sequelize has hooks, but I wasn't sure if hooks could be used to make actual query string adjustments.
I'm curious if it's possible to use a hook or another Sequelize method to append a comment to a Sequelize query (without using .raw) to keep track of where the query was called from.
(Appending and prepending to queries would also be helpful for implementing row-level security, specifically set role / reset role)
Edit: Would it be possible to use sequelize.fn() for this?
If you want to just insert a "tag" into the SQL query, you could use Sequelize.literal() to pass a literal string to the query generator. Adding this to options.attributes.include will include it in the SELECT, but the literal also needs an alias, so you have to pass an alias value as well.
Model.findById(id, {
attributes: {
include: [
[Sequelize.literal('/* your comment */ 1'), 'an_alias'],
],
},
});
This would produce SQL along the lines of
SELECT `model`.`id`, /* your comment */ 1 as `an_alias`
FROM `model` as `model`
WHERE `model`.`id` = ???
I played around with automating this a bit, and it probably goes beyond the scope of this answer, but you could modify Sequelize.Model.prototype before you create a connection with new Sequelize() to tweak the handling of the methods. You would need to do this for every method you want to "tag".
// alias findById() so we can call it once we fiddle with the input
Sequelize.Model.prototype.findById_untagged = Sequelize.Model.prototype.findById;
// override the findById() method so we can intercept the options.
Sequelize.Model.prototype.findById = function findById(id, options) {
// get the caller somehow (I was having trouble accessing the call stack properly)
const caller = ???;
// you need to make sure it's defined and you aren't overriding settings, etc
options.attributes.include.push([Sequelize.literal('/* your comment */ 1'), 'an_alias']);
// pass it off to the aliased method to continue as normal
return this.findById_untagged(id, options);
}
// create the connection
const connection = new Sequelize(...);
Note: it may not be possible to do this automagically, as Sequelize uses strict mode ('use strict'), so the arguments.caller and arguments.callee properties are not accessible.
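A workaround is to derive the call site from a stack trace instead. Here's a minimal sketch (the frame parsing is simplistic, assumes V8-style stack traces, and the frame index may need adjusting for your call depth):
function getCaller() {
  const o = {};
  // omit getCaller() itself (and frames above it) from the captured stack
  Error.captureStackTrace(o, getCaller);
  // V8 stack lines look like "    at fnName (/path/to/file.js:10:15)"
  const line = (o.stack.split('\n')[2] || '').trim();
  return line.replace(/^at (async )?/, '');
}
This is essentially the approach the answer below takes when overriding sequelize.query().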
2nd Note: if you don't care about modifying the Sequelize.Model prototypes you can also abstract your calls to the Sequelize methods and tweak the options there.
function Wrapper(model) {
return {
findById(id, options) {
// do your stuff
return model.findById(id, options);
},
};
}
Wrapper(Model).findById(id, options);
3rd Note: You can also submit a pull request to add this functionality to Sequelize under a new option value, like options.comment, which is added at the end of the query.
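If such an option were added, usage might look something like this (purely hypothetical; options.comment is not an existing Sequelize option):
Model.findById(id, {
  comment: 'UserService.getUser', // hypothetical option, for illustration only
});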
This overrides the sequelize.query() method that Sequelize uses internally for all queries, adding a comment that shows where in the code the query came from. It also attaches the full stack trace to thrown errors.
const excludeLineTexts = ['node_modules', 'internal/process', ' anonymous ', 'runMicrotasks', 'Promise.'];
// overwrite the query() method that Sequelize uses internally for all queries so the error shows where in the code the query is from
sequelize.query = function () {
let stack;
const getStack = () => {
if (!stack) {
const o = {};
Error.captureStackTrace(o, sequelize.query);
stack = o.stack;
}
return stack;
};
const lines = getStack().split(/\n/g).slice(1);
const line = lines.find((l) => !excludeLineTexts.some((t) => l.includes(t)));
if (line) {
const methodAndPath = line.replace(/(\s+at (async )?|[^a-z0-9.:/\\\-_ ]|:\d+\)?$)/gi, '');
if (methodAndPath) {
const comment = `/* ${methodAndPath} */`;
if (arguments[0]?.query) {
arguments[0].query = `${comment} ${arguments[0].query}`;
} else {
arguments[0] = `${comment} ${arguments[0]}`;
}
}
}
return Sequelize.prototype.query.apply(this, arguments).catch((err) => {
err.fullStack = getStack();
throw err;
});
};
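With this in place, a query issued from, say, getUser() in /app/services/user.js reaches the database as something like /* getUser /app/services/user.js:42 */ SELECT ... (the exact comment depends on the matched stack line), and any query error carries the full stack trace in err.fullStack.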
In general, my goal is to restrict any update of an entity if it's bound to anything.
To be specific, I have two models: Order and Good. They have a many-to-many relation.
const Good = sequelize.define('good' , { name:Sequelize.STRING });
const Order = sequelize.define('order' , { });
//M:N relation
Good.belongsToMany(Order, { through: 'GoodsInOrders' });
Order.belongsToMany(Good, { through: 'GoodsInOrders' });
I have tried setting onUpdate: 'NO ACTION' and onUpdate: 'RESTRICT' when defining the belongsToMany associations, but it has no effect.
Here is the code to reproduce my manipulations with goods and an order:
//creating a few goods
const good1 = await Good.create({ name:'Coca-Cola' });
const good2 = await Good.create({ name:'Hamburger' });
const good3 = await Good.create({ name:'Fanta' });
//creating an order
const order = await Order.create();
//adding good1 and good2 to the order
await order.addGoods([good1,good2]);
//It's OK to update good3 since no order contains it
await good3.update( { name:'Pepsi' });
//But I need to fire an exception if I try to update any good that belongs to an order
//There should be an error because we previously added good1 to the order
await good1.update( { name:'Sandwich' });
I have no clue how to restrict this in a simple way.
Surely we could always add a beforeUpdate hook on the Good model, but I would like to avoid that kind of complication.
I'd be glad to hear any ideas.
As I said, I was looking for a simpler alternative to hooks, but all my research ultimately led nowhere.
My solution was to declare the beforeUpdate hook inside the Good model, so the initial definition
const Good = sequelize.define('good' , { name : Sequelize.STRING } );
was transformed into this
const Good = sequelize.define('good', {
name: Sequelize.STRING
}, {
hooks: {
beforeUpdate: async (instance, options) => {
if ((await instance.getOrders()).length)
  return Promise.reject(new Error('Good belongs to an order and cannot be updated'));
return Promise.resolve();
}
}
});
So here we add a hook that fires every time before a particular good is updated.
It makes an inner request for the list of orders that contain the current good:
(await instance.getOrders()).length
Depending on the result, it either fires an exception if the list isn't empty, or returns a resolved promise, meaning everything is OK and the current good can be updated.
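With the hook in place, the scenario from the question behaves as intended (a small sketch reusing the models above):
// good3 is in no order, so this still succeeds
await good3.update({ name: 'Pepsi' });
// good1 was added to the order, so the hook rejects the update
try {
  await good1.update({ name: 'Sandwich' });
} catch (err) {
  console.log(err.message); // Good belongs to an order and cannot be updated
}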
Background: I've been trying for the last 2 days to resolve this myself by looking at various examples from both this website and others, and I'm still not getting it. Whenever I try adding callbacks or async/await, I get nowhere. I know this is where my problem is, but I can't resolve it myself.
I'm not from a programming background :( I'm sure it's a quick fix for the average programmer; I am well below that level.
When I console.log(final) within the 'ready' block, it works as it should. When I escape that block, the output is 'undefined' if I console.log(final), or the GET request/server info if I use console.log(ready).
const request = require('request');
const ready =
// I know 'request' is deprecated, but given my struggle with async/await (+ callbacks) in general, when I tried switching to axios I found it more confusing.
request({url: 'https://www.website.com', json: true}, function(err, res, returnedData) {
if (err) {
throw err;
}
var filter = returnedData.result.map(entry => entry.instrument_name);
var str = filter.toString();
var addToStr = str.split(",").map(function(a) { return `"trades.` + a + `.raw", `; }).join("");
var neater = addToStr.substr(0, addToStr.length-2);
var final = "[" + neater + "]";
// * * * Below works here but not outside this block* * *
// console.log(final);
});
// console.log(final);
// returns 'final is not defined'
console.log(ready);
// returns server info of the GET endpoint, because this logs before the data has actually been returned (the request is asynchronous).
module.exports = ready;
Below is a short example of the JSON that is returned by website.com. The actual call has 200+ 'result' objects.
What I'm ultimately trying to achieve is:
1) return all values of "instrument_name"
2) perform some manipulations (adding 'trades.' to the beginning of each value and '.raw' to the end of each value)
3) place these manipulations into an array.
["trades.BTC-26JUN20-8000-C.raw","trades.BTC-25SEP20-8000-C.raw"]
4) export/send this array to another file.
5) The array will be used as part of another request in a websocket connection. The array cannot be hardcoded into this new request, as the values change daily.
{
"jsonrpc": "2.0",
"result": [
{
"kind": "option",
"is_active": true,
"instrument_name": "26JUN20-8000-C",
"expiration_timestamp": 1593158400000,
"creation_timestamp": 1575305837000,
"contract_size": 1,
},
{
"kind": "option",
"is_active": true,
"instrument_name": "25SEP20-8000-C",
"expiration_timestamp": 1601020800000,
"creation_timestamp": 1569484801000,
"contract_size": 1,
}
],
"usIn": 1591185090022084,
"usOut": 1591185090025382,
"usDiff": 3298,
"testnet": true
}
Looking at your code, we find two problems related to the final and ready variables. The first is that you're trying to console.log(final) outside of its scope.
The second problem is that request doesn't immediately return the result of your API call. The reason is pretty simple: you're performing an asynchronous operation, and the result is only delivered to your callback. Your ready variable is just a reference to the request object.
I'm not sure about the context of your code or why you want to module.exports the ready variable, but I suppose you want to export the result. If that's the case, I suggest returning an async function that resolves with the response data instead of exporting your request variable. This way you can control how to handle the response outside the module.
You can use the built-in fetch API (available globally in Node 18+) instead of the deprecated request. I changed your code so that your module exports an asynchronous function called fetchData, which you can import somewhere and execute. It will return the result, updated with your logic:
module.exports = {
  fetchData: async function fetchData() {
    try {
      // fetch returns a Response object; the JSON body must be parsed separately
      const response = await fetch("https://www.website.com/");
      const returnedData = await response.json();
      const filter = returnedData.result.map(entry => entry.instrument_name);
      const str = filter.toString();
      const addToStr = str
        .split(",")
        .map(function(a) {
          return `"trades.` + a + `.raw", `;
        })
        .join("");
      const neater = addToStr.substr(0, addToStr.length - 2);
      return "[" + neater + "]";
    } catch (error) {
      console.error(error);
    }
  }
};
I hope this helps; otherwise, please share more of your code. Much depends on where you want to display the fetched data, and on how you take care of the loading and error states.
EDIT:
I can't get responses from this website, because you need an account as well as credentials for the API. Judging by your code and your questions:
1) return all values of "instrument_name"
Your map function works:
var filter = returnedData.result.map(entry => entry.instrument_name);
2) perform some manipulations (adding 'trades.' to the beginning of each value and '.raw' to the end of each value)
3) place these manipulations into an array. ["trades.BTC-26JUN20-8000-C.raw","trades.BTC-25SEP20-8000-C.raw"]
This can be done using this function
const manipulatedData = filter.map(val => `trades.${val}.raw`);
You can now use manipulatedData in your next request. Whether you can export this variable depends on the component you use it in. To be honest, it sounds easier to me not to split this logic into two separate components, as far as the websocket is concerned.
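For completeness, a minimal usage sketch from another file (assuming the module above is saved as fetchData.js; the filename is illustrative):
const { fetchData } = require('./fetchData');
(async () => {
  const final = await fetchData();
  console.log(final); // a string like '["trades.BTC-26JUN20-8000-C.raw", ...]'
  // pass it into the websocket request here
})();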
I want to pass an object to a SQL query.
I know this works:
connection.query('SELECT * FROM projects WHERE status = ?',
['active']
)
What is the correct syntax to use named object properties as parameters instead? Something like this:
connection.query('SELECT * FROM projects WHERE status = :status ',
{ status: 'active' }
)
This possibility is not available out of the box, but the documentation of mysqljs (which promise-mysql relies on) explains how it can be achieved by assigning a custom function to connection.config.queryFormat:
If you prefer to have another type of query escape format, there's a connection configuration option you can use to define a custom format function. You can access the connection object if you want to use the built-in .escape() or any other connection function.
Here's an example of how to implement another format:
connection.config.queryFormat = function (query, values) {
if (!values) return query;
return query.replace(/\:(\w+)/g, function (txt, key) {
if (values.hasOwnProperty(key)) {
return this.escape(values[key]);
}
return txt;
}.bind(this));
};
connection.query("UPDATE posts SET title = :title", { title: "Hello MySQL" });
I am using Knex.JS migration tools. However, when creating a table, I'd like to have a column named updated_at that is automatically updated when a record is updated in the database.
For example, here is a table:
knex.schema.createTable('table_name', function(table) {
table.increments();
table.string('name');
table.timestamp("created_at").defaultTo(knex.fn.now());
table.timestamp("updated_at").defaultTo(knex.fn.now());
table.timestamp("deleted_at");
})
The created_at and updated_at columns default to the time the record is created, which is fine. But when the record is updated, I'd like the updated_at column to automatically show the new time it was updated.
I'd prefer not to write in raw postgres.
Thanks!
With Postgres, you'll need a trigger. Here's a method I've used successfully.
Add a function
If you have multiple migration files in a set order, you might need to artificially change the datestamp in the filename to get this to run first (or just add it to your first migration file). If you can't roll back, you might need to do this step manually via psql. However, for new projects:
const ON_UPDATE_TIMESTAMP_FUNCTION = `
CREATE OR REPLACE FUNCTION on_update_timestamp()
RETURNS trigger AS $$
BEGIN
NEW.updated_at = now();
RETURN NEW;
END;
$$ language 'plpgsql';
`
const DROP_ON_UPDATE_TIMESTAMP_FUNCTION = `DROP FUNCTION on_update_timestamp`
exports.up = knex => knex.raw(ON_UPDATE_TIMESTAMP_FUNCTION)
exports.down = knex => knex.raw(DROP_ON_UPDATE_TIMESTAMP_FUNCTION)
Now the function should be available to all subsequent migrations.
Define a knex.raw trigger helper
I find it more expressive not to repeat large chunks of SQL in migration files if I can avoid it. I've used knexfile.js here, but if you don't want to complicate that file, you could define the helper wherever you like.
module.exports = {
development: {
// ...
},
production: {
// ...
},
onUpdateTrigger: table => `
CREATE TRIGGER ${table}_updated_at
BEFORE UPDATE ON ${table}
FOR EACH ROW
EXECUTE PROCEDURE on_update_timestamp();
`
}
Use the helper
Finally, we can fairly conveniently define auto-updating triggers:
const { onUpdateTrigger } = require('../knexfile')
exports.up = knex =>
knex.schema.createTable('posts', t => {
t.increments()
t.string('title')
t.string('body')
t.timestamps(true, true)
})
.then(() => knex.raw(onUpdateTrigger('posts')))
exports.down = knex => knex.schema.dropTable('posts')
Note that dropping the table is enough to get rid of the trigger: we don't need an explicit DROP TRIGGER.
This all might seem like a lot of work, but it's pretty "set-and-forget" once you've done it and handy if you want to avoid using an ORM.
You can create a knex migration using timestamps:
exports.up = (knex, Promise) => {
return Promise.all([
knex.schema.createTable('table_name', (table) => {
table.increments();
table.string('name');
table.timestamps(false, true);
table.timestamp('deleted_at').defaultTo(knex.fn.now());
})
]);
};
exports.down = (knex, Promise) => {
return Promise.all([
knex.schema.dropTableIfExists('table_name')
]);
};
With timestamps, a database schema will be created that adds created_at and updated_at columns, each containing an initial timestamp.
To keep the updated_at column current, you'll need knex.raw:
table.timestamp('updated_at').defaultTo(knex.raw('CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP'));
(Note that ON UPDATE CURRENT_TIMESTAMP is MySQL syntax; on Postgres, keeping updated_at current requires a trigger, as shown in another answer.)
To skip the knex.raw solution, I suggest using a high-level ORM like Objection.js. With Objection.js you can implement your own BaseModel that updates the updated_at column:
Something.js
const BaseModel = require('./BaseModel');
class Something extends BaseModel {
constructor() {
super();
}
static get tableName() {
return 'table_name';
}
}
module.exports = Something;
BaseModel.js
const knexfile = require('../../knexfile');
const knex = require('knex')(knexfile.development);
const Model = require('objection').Model;
class BaseModel extends Model {
$beforeUpdate() {
this.updated_at = knex.fn.now();
}
}
module.exports = BaseModel;
Source: http://vincit.github.io/objection.js/#timestamps
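With the base model in place, updated_at is stamped on every update-style query; for example (patchAndFetchById is standard Objection API, the id is illustrative):
const Something = require('./Something');
(async () => {
  // $beforeUpdate runs before the patch and sets updated_at
  const row = await Something.query().patchAndFetchById(1, { name: 'renamed' });
  console.log(row.updated_at);
})();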
This is my way of doing it in MySQL 5.6+.
The reason I didn't use table.timestamps is that I use DATETIME instead of TIMESTAMP.
table.dateTime('created_on')
.notNullable()
.defaultTo(knex.raw('CURRENT_TIMESTAMP'))
table.dateTime('updated_on')
.notNullable()
.defaultTo(knex.raw('CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP'))
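For reference, the resulting column definition comes out roughly as updated_on DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, so MySQL itself keeps the column current on every update.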
This is not a feature of Knex. Knex only creates the columns; it does not keep them up to date for you.
If you use the Bookshelf ORM, however, you can specify that a table has timestamps, and it will set and update the columns as expected:
Bookshelf docs
Github issue
exports.up = (knex) => {
  return knex.raw(`
    create or replace function table_name_update() returns trigger as $$ begin new.updated_at = now(); return new; end; $$ language 'plpgsql';
    create or replace trigger tg_table_name_update before update on table_name for each row execute function table_name_update();
  `);
};
exports.down = (knex) => {
  return knex.raw(`
    drop trigger if exists tg_table_name_update on table_name;
    drop function if exists table_name_update;
  `);
};
You can directly use this function
table.timestamps()
This will create the created_at and updated_at columns by default and update them accordingly.
https://knexjs.org/#Schema-timestamps
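As used in the earlier migration example, the two boolean arguments matter: table.timestamps(false, true) keeps DATETIME columns (pass true as the first argument for TIMESTAMP columns) and makes both columns non-nullable with a default of the current timestamp.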
I have a module that does parsing (a parsing functionality); other modules should query this parser's values.
My questions are:
how should I build it (design aspects)?
which method should init the parser (the first method that calls it to get a specific value)?
This is sample code which returns two objects from the parser, but I don't think this is the right way to do it, since I may need to provide additional properties.
This is the parse module:
const parse = function (data) {
var ymlObj = ymlParser.parse(data);
return {
web: ymlObj.process_types.web,
con: ymlObj.con
}
};
If I understood you right, you can just make a simple module with getters and setters.
(parse.js)
var ymlObj = {};
function Parse() {}
Parse.prototype.setData = function (data) {
ymlObj = data;
}
Parse.prototype.getWeb = function () {
return ymlObj.process_types.web;
}
Parse.prototype.getCon = function () {
return ymlObj.con;
}
module.exports = new Parse();
(parseUser.js)
var parse = require('./parse.js');
function ParseUser() { }
ParseUser.prototype.useParse = function () {
console.log(parse.getCon());
}
module.exports = new ParseUser();
(app.js)
var parse = require('./parse.js');
var parseUser = require('./parseUser.js');
parse.setData({ ... });
parseUser.useParse();
You still have to cover the basics, like handling exceptions, but I hope this helps you understand the basic structure.
As for init, it really depends on when you want to initialize (fetch?) your data and where that data comes from. You can set a timestamp to indicate how old your data is and decide whether to still rely on it or fetch newer data. Or you can register callbacks from your user modules to deal with new data every time it's fetched.
So it's up to you how you design your module. ;)
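For instance, a minimal sketch of the timestamp idea (the max age and method names are illustrative):
(parse.js, variation with a freshness check)
var ymlObj = {};
var fetchedAt = 0;
var MAX_AGE_MS = 60 * 1000; // illustrative: treat data older than a minute as stale
function Parse() {}
Parse.prototype.setData = function (data) {
  ymlObj = data;
  fetchedAt = Date.now();
};
Parse.prototype.isStale = function () {
  return Date.now() - fetchedAt > MAX_AGE_MS;
};
module.exports = new Parse();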