I'm writing an API to try Feathers.js with its Mongoose adapter. I want my GET /books/ endpoint to only return books whose private attribute is set to false. Should I use a before hook? If so, how do I prevent users from running custom queries against my endpoint? Should I manually empty the params object?
You need to create a before hook in books.hooks.js:
const books_qry = require('../../hooks/books_qry');

module.exports = {
  before: {
    all: [],
    find: [books_qry()],
    ...
Then create /src/hooks/books_qry.js:
module.exports = function () {
  return function (context) {
    // You have two choices for changing context.params.query.

    // Choice 1: overwrite any custom query sent with the request
    context.params.query = { private: false };

    // Choice 2: add a single param to whatever query the request sent
    // context.params.query.private = false;

    // check the updated context.params.query
    console.log(context.params.query);

    return context;
  };
};
Select whichever of the two choices you need. Since I haven't used Mongoose myself, check its documentation to make sure the query is valid (the example above works for the MongoDB adapter).
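If you want clients to keep using other filters (author, title, and so on) while private: false is always enforced, the second choice can be written as a merge. A minimal sketch:

module.exports = function () {
  return function (context) {
    // keep whatever query the client sent, but private: false always wins
    context.params.query = Object.assign({}, context.params.query, { private: false });
    return context;
  };
};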
I am trying to take advantage of model instance methods, as described in the docs, so I defined my User class as follows:
class User extends Model {
  addRole(role) {
    let roles = this.roles;
    roles[role] = true;
    this.roles = roles;
    this.save();
  }

  removeRole(role) {
    let roles = this.roles;
    delete roles[role];
    this.save();
  }

  hasRole(role) {
    return this.roles[role] != null;
  }
}
User.init({
  // some attributes,
  roles: {
    type: DataTypes.JSON,
    allowNull: false,
  }
}, { sequelize });
I expected to be able to use addRole(), removeRole() and hasRole() on any User instance.
The problem is that none of these methods persist their changes to the database. (They can only read!)
// example
let user = null;

// get the first user to test
User.findAll()
  .then(users => {
    user = users[0];
    user.addRole("admin");
    console.log(user.roles); // { admin: true }
    user.save();
    // However, the changes don't appear in the database.
  });
I found the answer.
For some reason, Sequelize can't properly detect changes made inside a JSON object. Sequelize is optimized internally to skip a model.save() call when it sees no changes on the model, so Sequelize appears to ignore save() at random.
This behavior has nothing to do with instance methods, as I believed when I first ran into the problem.
To work around it, I had to use:
user.addRole("admin");
user.changed("roles", true); // <<<< look at this;
console.log(user.roles); // {admin: true}
user.save();
From the Sequelize docs: please note that this function will return false when a property of a nested (for example JSON) object was edited manually; you must call changed('key', true) yourself in those cases. Writing an entirely new object (e.g. one that was deep-cloned) will be detected.
Example:
const mdl = await MyModel.findOne();
mdl.myJsonField.a = 1;
console.log(mdl.changed()); // false
mdl.save(); // this will not save anything

mdl.changed('myJsonField', true);
console.log(mdl.changed()); // ['myJsonField']
mdl.save(); // will save
changed method usage
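Alternatively, since writing an entirely new object is detected, the instance methods can assign a fresh object instead of mutating the one Sequelize returned. A minimal sketch of addRole written that way:

class User extends Model {
  addRole(role) {
    // assigning a brand-new object lets Sequelize's change detection see the update,
    // so no manual changed('roles', true) call is needed
    this.roles = { ...this.roles, [role]: true };
    return this.save();
  }
}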
I want to pass an object to a SQL query.
I know this works:
connection.query(
  'SELECT * FROM projects WHERE status = ?',
  ['active']
);
What is the correct syntax to use named object properties as parameters instead? Something like this:
connection.query(
  'SELECT * FROM projects WHERE status = :status',
  { status: 'active' }
);
This possibility is not available out-of-the-box, but the documentation of mysqljs (which promise-mysql relies on) explains how this can be achieved by assigning a custom function to connection.config.queryFormat:
If you prefer to have another type of query escape format, there's a connection configuration option you can use to define a custom format function. You can access the connection object if you want to use the built-in .escape() or any other connection function.
Here's an example of how to implement another format:
connection.config.queryFormat = function (query, values) {
  if (!values) return query;
  return query.replace(/\:(\w+)/g, function (txt, key) {
    if (values.hasOwnProperty(key)) {
      return this.escape(values[key]);
    }
    return txt;
  }.bind(this));
};
connection.query("UPDATE posts SET title = :title", { title: "Hello MySQL" });
How do I customize a PersistedModel in LoopBack? Let's say I have two models, Post and Comment. A Post hasMany Comment, but it can have at most 3 comments. How can I implement that without using hooks? Also, I need to do it inside a transaction.
I'm coming from Java, and this is how I would do it there:
class Post {
  void addComment(Comment c) {
    if (this.comments.size() < 3)
      this.comments.add(c);
    else
      throw new DomainException("Comment count exceeded");
  }
}
Then I would write a service...
class PostService {
  @Transactional
  public void addCommentToPost(postId, Comment comment) {
    post = this.postRepository.findById(postId);
    post.addComment(comment);
    this.postRepository.save(post);
  }
}
I know I could write something like:
module.exports = function(app) {
  app.datasources.myds.transaction(async (models) => {
    post = await models.Post.findById(postId);
    post.comments.create(commentData); // ???? how do I restrict the comments array size?
  });
};
I want to be able to use it like this:
// create post
POST /post --> HTTP 201
// add comments
POST /post/id/comments --> HTTP 201
POST /post/id/comments --> HTTP 201
POST /post/id/comments --> HTTP 201
// should fail
POST /post/id/comments --> HTTP 4XX ERROR
What you are asking for here is actually one of the good use cases for operation hooks, beforeSave() in particular. See more about it here:
https://loopback.io/doc/en/lb3/Operation-hooks.html#before-save
However, I'm not so sure about the transaction part.
For that, I'd suggest using a remote method; it gives you complete freedom to use LoopBack's transaction APIs.
One thing to consider here is that you'll have to make sure all comments are created through your method only, and not through the default LoopBack methods.
You can then do something like this:
// in post-comment.js model file
module.exports = function(Postcomment) {
  Postcomment.addComments = function(data, callback) {
    // assuming data is an object containing the postId and a comments array
    const { comments, postId } = data;
    Postcomment.count({ where: { postId } }, (err, count) => {
      if (count + comments.length <= 10) {
        // initiate the transaction API, make a create call to the db, and call back
      } else {
        // return an error message in the callback
      }
    });
  };
};
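To actually expose addComments over REST, you'd also register it as a remote method. A sketch (the route and argument names here are assumptions):

// in post-comment.js, below the definition above
Postcomment.remoteMethod('addComments', {
  accepts: { arg: 'data', type: 'object', http: { source: 'body' } },
  returns: { arg: 'result', type: 'object' },
  http: { path: '/add-comments', verb: 'post' },
});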
You can use the validatesLengthOf() method, available on every model as part of the Validatable class.
For more details, refer to Loopback Validation.
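For illustration, a sketch of how that validator is declared (note it checks a property's own length, so it only fits this use case if comments is an embedded array property on Post rather than a separate related model):

// in post.js model file; the property name and limit are assumptions
Post.validatesLengthOf('comments', {
  max: 3,
  message: { max: 'Comment count exceeded' },
});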
I think I have found a solution.
Whenever you want to override the methods created by model relations, write a boot script like this:
module.exports = function(app) {
  const Post = app.models.Post;
  const old = Post.prototype.__create__comments;
  Post.prototype.__create__comments = function() {
    // **custom code**
    old.apply(this, arguments);
  };
};
I think this is the best choice.
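Filled in with the 3-comment limit from the question, the boot script might look like this sketch (the __count__comments call and its signature are assumptions based on the other relation methods LoopBack generates):

module.exports = function(app) {
  const Post = app.models.Post;
  const originalCreate = Post.prototype.__create__comments;
  Post.prototype.__create__comments = function(data, cb) {
    // count existing comments for this post before delegating to the original method
    this.__count__comments({}, (err, count) => {
      if (err) return cb(err);
      if (count >= 3) {
        const e = new Error('Comment count exceeded');
        e.statusCode = 422;
        return cb(e);
      }
      originalCreate.call(this, data, cb);
    });
  };
};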
Heroku recently posted a list of good tips for Postgres. I was most intrigued by the Track the Source of Your Queries section. I was curious whether this is something that's possible with Sequelize. I know that Sequelize has hooks, but I wasn't sure whether hooks can be used to adjust the actual query string.
I'm curious whether it's possible to use a hook or another Sequelize method to append a comment to a Sequelize query (without using .raw) to keep track of where the query was called from.
(Appending and prepending to queries would also be helpful for implementing row-level security, specifically set role / reset role.)
Edit: Would it be possible to use sequelize.fn() for this?
If you just want to insert a "tag" into the SQL query, you could use Sequelize.literal() to pass a literal string to the query generator. Adding it to options.attributes.include will include it; however, it will also need an alias, so you'll have to pass some kind of value as well.
Model.findById(id, {
  attributes: {
    include: [
      [Sequelize.literal('/* your comment */ 1'), 'an_alias'],
    ],
  },
});
This would produce SQL along the lines of
SELECT `model`.`id`, /* your comment */ 1 as `an_alias`
FROM `model` as `model`
WHERE `model`.`id` = ???
I played around with automating this a bit and it probably goes beyond the scope of this answer, but you could modify the Sequelize.Model.prototype before you create a connection using new Sequelize() to tweak the handling of the methods. You would need to do this for all the methods you want to "tag".
// alias findById() so we can call it once we fiddle with the input
Sequelize.Model.prototype.findById_untagged = Sequelize.Model.prototype.findById;

// override the findById() method so we can intercept the options
Sequelize.Model.prototype.findById = function findById(id, options) {
  // get the caller somehow (I was having trouble accessing the call stack properly)
  const caller = ???;

  // you need to make sure options.attributes.include is defined
  // and that you aren't overriding existing settings, etc.
  options.attributes.include.push([Sequelize.literal('/* your comment */ 1'), 'an_alias']);

  // pass it off to the aliased method to continue as normal
  return this.findById_untagged(id, options);
}
// create the connection
const connection = new Sequelize(...);
Note: it may not be possible to do this automagically, as Sequelize uses strict mode, so the arguments.caller and arguments.callee properties are not accessible.
2nd Note: if you don't care about modifying the Sequelize.Model prototypes, you can also abstract your calls to the Sequelize methods and tweak the options there.
function Wrapper(model) {
  return {
    findById(id, options) {
      // do your stuff
      return model.findById(id, options);
    },
  };
}
Wrapper(Model).findById(id, options);
3rd Note: You can also submit a pull request to add this functionality to Sequelize under a new option value, like options.comment, which is added at the end of the query.
This overrides the sequelize.query() method that Sequelize uses internally for all queries, adding a comment that shows the location in the code that issued the query. It also attaches the stack trace to thrown errors.
const excludeLineTexts = ['node_modules', 'internal/process', ' anonymous ', 'runMicrotasks', 'Promise.'];

// overwrite the query() method that Sequelize uses internally for all queries,
// so errors show where in the code the query came from
sequelize.query = function () {
  let stack;
  const getStack = () => {
    if (!stack) {
      const o = {};
      Error.captureStackTrace(o, sequelize.query);
      stack = o.stack;
    }
    return stack;
  };

  const lines = getStack().split(/\n/g).slice(1);
  const line = lines.find((l) => !excludeLineTexts.some((t) => l.includes(t)));
  if (line) {
    const methodAndPath = line.replace(/(\s+at (async )?|[^a-z0-9.:/\\\-_ ]|:\d+\)?$)/gi, '');
    if (methodAndPath) {
      const comment = `/* ${methodAndPath} */`;
      if (arguments[0]?.query) {
        arguments[0].query = `${comment} ${arguments[0].query}`;
      } else {
        arguments[0] = `${comment} ${arguments[0]}`;
      }
    }
  }

  return Sequelize.prototype.query.apply(this, arguments).catch((err) => {
    err.fullStack = getStack();
    throw err;
  });
};
I am using Knex.JS migration tools. However, when creating a table, I'd like to have a column named updated_at that is automatically updated when a record is updated in the database.
For example, here is a table:
knex.schema.createTable('table_name', function(table) {
  table.increments();
  table.string('name');
  table.timestamp('created_at').defaultTo(knex.fn.now());
  table.timestamp('updated_at').defaultTo(knex.fn.now());
  table.timestamp('deleted_at');
})
The created_at and updated_at column defaults to the time the record is created, which is fine. But, when that record is updated, I'd like the updated_at column to show the new time that it was updated at automatically.
I'd prefer not to write in raw postgres.
Thanks!
With Postgres, you'll need a trigger. Here's a method I've used successfully.
Add a function
If you have multiple migration files in a set order, you might need to artificially change the datestamp in the filename to get this to run first (or just add it to your first migration file). If you can't roll back, you might need to do this step manually via psql. However, for new projects:
const ON_UPDATE_TIMESTAMP_FUNCTION = `
  CREATE OR REPLACE FUNCTION on_update_timestamp()
  RETURNS trigger AS $$
  BEGIN
    NEW.updated_at = now();
    RETURN NEW;
  END;
  $$ language 'plpgsql';
`

const DROP_ON_UPDATE_TIMESTAMP_FUNCTION = `DROP FUNCTION on_update_timestamp`

exports.up = knex => knex.raw(ON_UPDATE_TIMESTAMP_FUNCTION)
exports.down = knex => knex.raw(DROP_ON_UPDATE_TIMESTAMP_FUNCTION)
Now the function should be available to all subsequent migrations.
Define a knex.raw trigger helper
I find it more expressive not to repeat large chunks of SQL in migration files if I can avoid it. I've used knexfile.js here but if you don't like to complicate that, you could define it wherever.
module.exports = {
  development: {
    // ...
  },
  production: {
    // ...
  },
  onUpdateTrigger: table => `
    CREATE TRIGGER ${table}_updated_at
    BEFORE UPDATE ON ${table}
    FOR EACH ROW
    EXECUTE PROCEDURE on_update_timestamp();
  `
}
Use the helper
Finally, we can fairly conveniently define auto-updating triggers:
const { onUpdateTrigger } = require('../knexfile')

exports.up = knex =>
  knex.schema
    .createTable('posts', t => {
      t.increments()
      t.string('title')
      t.string('body')
      t.timestamps(true, true)
    })
    .then(() => knex.raw(onUpdateTrigger('posts')))

exports.down = knex => knex.schema.dropTable('posts')
Note that dropping the table is enough to get rid of the trigger: we don't need an explicit DROP TRIGGER.
This all might seem like a lot of work, but it's pretty "set-and-forget" once you've done it and handy if you want to avoid using an ORM.
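The same helper also covers tables that already exist: a later migration can just attach the trigger (the users table here is only an example):

const { onUpdateTrigger } = require('../knexfile')

exports.up = knex => knex.raw(onUpdateTrigger('users'))
exports.down = knex => knex.raw('DROP TRIGGER users_updated_at ON users')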
You can create a knex migration using timestamps:
exports.up = (knex, Promise) => {
return Promise.all([
knex.schema.createTable('table_name', (table) => {
table.increments();
table.string('name');
table.timestamps(false, true);
table.timestamp('deleted_at').defaultTo(knex.fn.now());
})
]);
};
exports.down = (knex, Promise) => {
return Promise.all([
knex.schema.dropTableIfExists('table_name')
]);
};
With timestamps, the schema gains created_at and updated_at columns, each initialized to the current timestamp.
To keep the updated_at column current, you'll need knex.raw (note that ON UPDATE CURRENT_TIMESTAMP is MySQL syntax; on Postgres you'd need the trigger approach described above):
table.timestamp('updated_at').defaultTo(knex.raw('CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP'));
To skip the knex.raw solution, I suggest using a higher-level ORM like Objection.js. With Objection.js you can implement your own BaseModel that updates the updated_at column:
Something.js
const BaseModel = require('./BaseModel');

class Something extends BaseModel {
  constructor() {
    super();
  }

  static get tableName() {
    return 'table_name';
  }
}

module.exports = Something;
BaseModel
const knexfile = require('../../knexfile');
const knex = require('knex')(knexfile.development);
const Model = require('objection').Model;

class BaseModel extends Model {
  $beforeUpdate() {
    this.updated_at = knex.fn.now();
  }
}

module.exports = BaseModel;
Source: http://vincit.github.io/objection.js/#timestamps
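A quick usage sketch (the model and column values are assumptions): any update that goes through the model triggers $beforeUpdate, which stamps updated_at:

const Something = require('./Something');

Something.query()
  .patchAndFetchById(1, { name: 'renamed' })
  .then(row => console.log(row.updated_at)); // freshly stamped by $beforeUpdate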
This is my way of doing it in MySQL 5.6+.
The reason I didn't use table.timestamps is that I use DATETIME instead of TIMESTAMP.
table.dateTime('created_on')
.notNullable()
.defaultTo(knex.raw('CURRENT_TIMESTAMP'))
table.dateTime('updated_on')
.notNullable()
.defaultTo(knex.raw('CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP'))
This is not a feature of Knex. Knex only creates the columns; it does not keep them up to date for you.
If you use the Bookshelf ORM, however, you can specify that a table has timestamps, and it will set and update the columns as expected:
Bookshelf docs
Github issue
exports.up = (knex) => {
  return knex.raw(`
    CREATE OR REPLACE FUNCTION table_name_update() RETURNS trigger AS $$
    BEGIN
      NEW.updated_at = now();
      RETURN NEW;
    END;
    $$ LANGUAGE 'plpgsql';

    CREATE OR REPLACE TRIGGER tg_table_name_update
    BEFORE UPDATE ON table_name
    FOR EACH ROW EXECUTE PROCEDURE table_name_update();
  `);
};

exports.down = (knex) => {
  return knex.raw(`
    DROP TRIGGER IF EXISTS tg_table_name_update ON table_name;
    DROP FUNCTION IF EXISTS table_name_update;
  `);
};
You can use this function directly:
table.timestamps()
This creates the created_at and updated_at columns; as in the answers above, pass table.timestamps(false, true) to default both to the current timestamp.
https://knexjs.org/#Schema-timestamps