If I have a Schema which has an Array of references to another Schema, is there a way I can update both Documents with one endpoint?
This is my Schema:
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const CompanySchema = new Schema({
  addresses: [{
    type: Schema.Types.ObjectId,
    ref: 'Address'
  }]
});
I want to send a Company with the full Address object to /companies/:id/edit. With this endpoint, I want to edit attributes on Company and Address at the same time.
In Rails you can use something like nested attributes to do one big UPDATE call, and it will update the Company and update or add the Address as well.
Any idea how you would do this in Mongoose?
Cascade saves are not natively supported in Mongoose (issue).
But there are plugins (for example, cascading-relations) that implement this behavior on nested populate objects.
Keep in mind that MongoDB is not a fully transactional database: the "big save" is performed as a series of insert()/update() calls, and you (or the plugin) have to handle errors and roll back yourself.
Example of cascade save:
company.save()
  .then(() => Promise.all(company.addresses.map(address => {
    // update foreign keys if needed
    return address.save();
  })))
  .catch(err => console.error('something went wrong...', err));
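Building on that, a minimal sketch of the /companies/:id/edit endpoint itself might look like the following. This assumes an Express app, a Company model built from CompanySchema, and an Address model; the request body shape (company and addresses keys) is an assumption.
// Sketch only: assumes Express, a Company model using CompanySchema,
// and an Address model. The request body shape is an assumption.
app.post('/companies/:id/edit', async (req, res) => {
  try {
    const company = await Company.findById(req.params.id);
    Object.assign(company, req.body.company);

    // Update existing addresses or create new ones, keeping the ids
    company.addresses = await Promise.all(
      (req.body.addresses || []).map(attrs =>
        attrs._id
          ? Address.findByIdAndUpdate(attrs._id, attrs, { new: true })
          : Address.create(attrs)
      )
    ).then(docs => docs.map(doc => doc._id));

    await company.save();
    res.json(company);
  } catch (err) {
    // No transaction here: a failure can leave partial updates behind
    res.status(500).json({ error: err.message });
  }
});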
I have an update endpoint where, if an incoming request contains a site name that matches any site name in my job sites table, I change the status of all matching DB entries to "Pending Transfer" and essentially clear their site location data.
I've been able to make this work with the following:
async function bulkUpdate(req, res) {
  const site = req.body.data;
  const data = await knex('assets')
    // bind the value instead of interpolating it, to avoid SQL injection
    .whereRaw(`location ->> 'site' = ?`, [site.physical_site_name])
    .update({
      status: "Pending Transfer",
      location: {
        site: site.physical_site_name,
        site_loc: { first_octet: site.first_octet, mdc: '', shelf: '', unit: '' } // remove IP address
      },
      //history: ''
    }) // todo: update history as well
    .returning('*')
    .then((results) => results[0]);
  res.status(200).json({ data });
}
I also want to update history (any action we ever take on an object like a job site is stored in a JSON column, basically used as an array).
As you can see, history is commented out. Since this function essentially "sweeps" over all job sites that match the criteria and makes the change, I would also like to "push" an entry onto the existing history column here as well. I've done this in other situations by destructuring the existing history data and adding the new entry, etc. But as we are "sweeping" over the data, I'm wondering if there is a way to just push this data onto that array without having to pull each individual row's history data via destructuring?
The shape of an entry in the history column is like so:
[{"action_date":"\"2022-09-06T22:41:10.232Z\"","action_taken":"Bulk Upload","action_by":"Davi","action_by_id":120,"action_comment":"Initial Upload","action_key":"PRtW2o3OoosRK9oiUUMnByM4V"}]
So ideally I would like to "push" a new object onto this array without losing (or overwriting) the previous data.
I'm newer at this, so thank you for all the help.
I had to convert the column from json to jsonb type, but this did the trick (using the || concatenation operator):
history: knex.raw(`history || ?::jsonb`, JSON.stringify({ newObj: newObjData }))
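For context, here is a sketch of how that fits into the bulkUpdate function above. The ALTER TABLE statement and the fields of newEntry are assumptions based on the history shape shown earlier.
// One-time conversion, since || is a jsonb operator:
//   ALTER TABLE assets ALTER COLUMN history TYPE jsonb USING history::jsonb;

const newEntry = {
  action_date: new Date().toISOString(),
  action_taken: 'Pending Transfer',
  action_by: 'Davi',        // assumed: whoever performed the action
  action_by_id: 120,
  action_comment: '',
  action_key: 'some-key'    // assumed: however action keys are generated
};

await knex('assets')
  .whereRaw(`location ->> 'site' = ?`, [site.physical_site_name])
  .update({
    status: 'Pending Transfer',
    // || appends the entry to the existing jsonb array without overwriting it
    history: knex.raw(`history || ?::jsonb`, JSON.stringify(newEntry))
  });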
The official line from Facebook is that Relay is "intentionally agnostic about authentication mechanisms." In all the examples in the Relay repository, authentication and access control are a separate concern. In practice, I have not found a simple way to implement this separation.
The examples provided in the Relay repository all have root schemas with a viewer field that assumes there is one user, and that user has access to everything.
However, in reality, an application has many users, and each user has different degrees of access to each node.
Suppose I have this schema in JavaScript:
import {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLInputObjectType,
  GraphQLNonNull,
  GraphQLID,
  GraphQLString,
} from 'graphql';

export const Schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'Query',
    fields: () => ({
      node: nodeField,
      user: {
        type: new GraphQLObjectType({
          name: 'User',
          fields: () => ({
            name: {
              type: GraphQLString,
              resolve: user => {
                // Does `session` have access to this user's name?
                return user.name;
              }
            }
          })
        }),
        args: {
          // The `id` of the user being queried for
          id: { type: new GraphQLNonNull(GraphQLID) },
          // Identify the user who is querying
          session: { type: new GraphQLInputObjectType({ ... }) },
        },
        resolve: (_, { id, session }) => {
          // Given `session`, get the user with `id`
          return data.getUser({ id, session });
        }
      }
    })
  })
});
Some users are entirely private from the perspective of the querying user. Other users might only expose certain fields to the querying user. So to get a user, the client must not only provide the user ID they are querying for, but they must also identify themselves so that access control can occur.
This seems to quickly get complicated as the need to control access trickles down the graph.
Furthermore, I need to control access for every root query, like nodeField, and I need to make sure that every node implementing nodeInterface enforces the same access checks.
All of this seems like a lot of repetitive work. Are there any known patterns for simplifying this? Am I thinking about this incorrectly?
Different applications have very different requirements for the form of access control, so baking something into the basic Relay framework or GraphQL reference implementation probably doesn't make sense.
An approach that I have seen work pretty well is to bake the privacy/access control into the data model/data loader framework. Every time you load an object, you wouldn't just load it by id, but would also provide the context of the viewer. If the viewer cannot see the object, it would fail to load as if it doesn't exist, to prevent even leaking the existence of the object. The object also retains the viewer context, and certain fields might have restricted access that is checked before being returned from the object. Baking this into the lower-level data loading mechanism helps ensure that bugs in higher-level product/GraphQL code don't leak private data.
As a concrete example, I might not be allowed to see some User because he has blocked me. You might be allowed to see him in general, but not his email, since you're not friends with him.
In code something like this:
var viewer = new Viewer(getLoggedInUser());
User.load(id, viewer).then(
  (user) => console.log("User name:", user.name),
  (error) => console.log("User does not exist or you don't have access.")
);
Trying to implement the visibility at the GraphQL level has lots of potential to leak information. Think of the many ways to access a user in a GraphQL implementation like Facebook's:
node($userID) { name }
node($postID) { author { name } }
node($postID) { likers { name } }
node($otherUserID) { friends { name } }
All of these queries could load a user's name, and if the user has blocked you, none of them should return the user or its name. Putting the access control on all of these fields individually and remembering the check everywhere is a recipe for missing it somewhere.
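To illustrate, here is a sketch of what centralizing that check in the loader might look like; Viewer, fetchUser, and canSee are hypothetical names.
// Hypothetical central loader: every access path goes through this one check
class User {
  static load(id, viewer) {
    return fetchUser(id).then(data => {
      if (!data || !canSee(viewer, data)) {
        // Fail as if the user doesn't exist, to avoid leaking existence
        throw new Error("User does not exist or you don't have access.");
      }
      return new User(data, viewer);
    });
  }

  constructor(data, viewer) {
    this.name = data.name;
    this.viewer = viewer; // retained for field-level checks
  }
}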
I found that handling authentication is easy if you make use of the GraphQL rootValue, which is passed to the execution engine when the query is executed against the schema. This value is available at all levels of execution and is useful for storing an access token or whatever identifies the current user.
If you're using the express-graphql middleware, you can load the session in a middleware preceding the GraphQL middleware and then configure the GraphQL middleware to place that session into the root value:
function getSession(req, res, next) {
  loadSession(req).then(session => {
    req.session = session;
    next();
  }).catch(() => {
    res.sendStatus(400);
  });
}
app.use('/graphql', getSession, graphqlHTTP(({ session }) => ({
schema: schema,
rootValue: { session }
})));
This session is then available at any depth in the schema:
new GraphQLObjectType({
  name: 'MyType',
  fields: {
    myField: {
      type: GraphQLString,
      resolve(parentValue, args, context, info) {
        // The root value is exposed on `info` at any depth
        const { session } = info.rootValue;
        // use `session` here
      }
    }
  }
});
You can pair this with "viewer-oriented" data loading to achieve access control. Check out https://github.com/facebook/dataloader which helps create this kind of data loading object and provides batching and caching.
function createLoaders(authToken) {
return {
users: new DataLoader(ids => genUsers(authToken, ids)),
cdnUrls: new DataLoader(rawUrls => genCdnUrls(authToken, rawUrls)),
stories: new DataLoader(keys => genStories(authToken, keys)),
};
}
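One way to wire the two together (a sketch, assuming the getSession middleware above and that the session carries an auth token) is to build fresh loaders per request:
app.use('/graphql', getSession, graphqlHTTP(({ session }) => ({
  schema: schema,
  // New loaders per request, so cached results never leak across viewers
  rootValue: { session, loaders: createLoaders(session && session.token) }
})));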
If anyone has problems with this topic: I made an example repo for Relay/GraphQL/express authentication based on dimadima's answer. It saves session data (userId and role) in a cookie using express middleware and a GraphQL Mutation
I'm developing an app with Sails.js and using the Waterline ORM for the database. I'm building functionality for users to make friend requests and other similar requests to each other. I have the following URequest model for that:
module.exports = {
attributes: {
owner: {
model: 'Person'
},
people: {
collection: 'Person'
},
answers: {
collection: 'URequestAnswer'
},
action: {
type: 'json' //TODO: Consider alternative more schema consistent approach.
}
}
};
Basically, owner is an association to the Person who made the request, and people is a one-to-many association to all Persons the request is directed at. So far, fine.
Now I want a controller which returns all requests a certain user is involved in, meaning all requests where the user is either in the owner field or in people. How do I do a query like "give me all rows where there is an association to person P"? In other words, how can I know which URequest records have an association to a certain Person?
I tried something like this:
getRequests: function (req, res) {
var personId = req.param('personId');
URequest.find().where({
or: [
{people: [personId]}, //TODO: This is not correct
{owner: personId}
]
}).populateAll().then(function(results) {
res.json(results);
});
},
So I know how to do the "or" part, but how do I check if the personId is in people? I know I should somehow be able to look into the join table, but I have no idea how, and I couldn't find much in the Waterline docs relating to my situation. Also, I'm trying to keep this db-agnostic; at the moment I'm using MongoDB, but I might use Postgres later.
I have to be honest, this is a tricky one. As far as I know, what you are trying to do is not possible using Waterline, so your options are to write a native query (using .query() if you are using a SQL-based adapter, or .native() otherwise) or to do some manual filtering. Manual filtering would depend on how large a dataset you are dealing with.
My mind immediately goes to reworking your data model a bit, maybe instead of a collection you have a table that stores associations. Something like this:
module.exports = {
  attributes: {
    owner: {
      model: 'URequest'
    },
    person: {
      model: 'Person'
    }
  }
};
Using the Sails.js model methods (like beforeCreate) you could auto-create these associations as needed; the query side might then look like the sketch below.
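For example (a sketch; the join model name URequestPerson is an assumption, and its owner attribute points at the URequest as defined above):
// Find requests where the person is the owner or a participant,
// using the association table suggested above
getRequests: function (req, res) {
  var personId = req.param('personId');
  URequestPerson.find({ person: personId })
    .then(function (links) {
      var requestIds = links.map(function (link) { return link.owner; });
      return URequest.find({
        or: [
          { id: requestIds },
          { owner: personId }
        ]
      }).populateAll();
    })
    .then(function (results) { res.json(results); })
    .catch(function (err) { res.serverError(err); });
},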
Good Luck, I hope you get it working!
I'm playing around with GraphQL-JS right now, wiring it up to a MariaDB backend.
I've figured out how to return an entire result set:
const queryType = new GraphQLObjectType({
name: 'Query',
fields: () => ({
users: {
type: new GraphQLList(userType),
resolve: (root, args) => new Promise((resolve, reject) => {
db.query('select * from users', (err, rows, fields) => {
if(err) return reject(err);
resolve(rows);
});
}),
}
})
});
Which is pretty cool, but the library I'm using also lets me stream results, row-by-row.
Does GraphQL have anything to facilitate this?
As far as I can tell, GraphQLList expects a full array, and I can only resolve my result set once, not feed it via an Emitter or something.
(This is not really a complete answer, but it doesn't fit into a comment either.)
According to the GraphQL spec, it should be possible:
GraphQL provides a number of built‐in scalars, but type systems can add additional scalars with semantic meaning. For example, a GraphQL system could define a scalar called Time which, while serialized as a string, promises to conform to ISO‐8601. When querying a field of type Time, you can then rely on the ability to parse the result with an ISO‐8601 parser and use a client‐specific primitive for time. Another example of a potentially useful custom scalar is Url, which serializes as a string, but is guaranteed by the server to be a valid URL.
So the structure should look like this :
var myStream = new GraphQLScalarType({
  name: "myStream",
  serialize: ...,
  parseValue: ...,
  parseLiteral: ...
});
Now I guess you could try to return promises/streams and see what happens, but I don't think it is supported yet. Many people are asking for this, and I think I saw a comment from someone on the GraphQL team saying that they are looking into it (unfortunately I cannot find it again).
A personal opinion: if this cannot be changed without incorporating that behaviour into the core of GraphQL, it may show that GraphQL as it is today is not that flexible.
It's not possible to stream results with a basic GraphQL server because it has a client-server architecture (you send an HTTP request and get an HTTP response back).
It's possible with subscriptions, though. They were recently added to GraphQL (see the story behind).
See the GraphQL @stream directive; for an example, see https://github.com/rmosolgo/graphql-ruby-stream-defer-demo
OK, so after a ton of trial and error, I've determined that when I drop a collection and then recreate it through my app, unique doesn't work until I restart my local Node server. Here's my schema:
var mongoose = require('mongoose');
var Schema = mongoose.Schema;
var Services = new Schema ({
type : {type : String},
subscriptionInfo : Schema.Types.Mixed,
data : Schema.Types.Mixed
},{_id:false});
var Hashtags = new Schema ({
name: {type : String},
services : [Services]
},{_id:false});
var SubscriptionSchema = new Schema ({
eventId : {type: String, index: { unique: true, dropDups: true }},
hashtags : [Hashtags]
});
module.exports = mongoose.model('Subscription', SubscriptionSchema);
And here's my route:
router.route('/')
.post(function(req, res) {
var subscription = new subscribeModel();
subscription.eventId = eventId;
subscription.save(function(err, subscription) {
if (err)
res.send(err);
else
res.json({
message: subscription
});
});
})
If I drop the collection and then hit the /subscribe endpoint seen above, it will create the entry but will not honor the unique constraint. It's not until I restart the server that the constraint starts being honored. Any ideas why this is? Thanks!
What Mongoose does when your application starts and initializes itself is scan your schema definitions for the registered models and call the .ensureIndexes() method for each index defined there. This is "by design" behavior and is also covered by this statement:
When your application starts up, Mongoose automatically calls ensureIndex for each defined index in your schema. While nice for development, it is recommended this behavior be disabled in production since index creation can cause a significant performance impact. Disable the behavior by setting the autoIndex option of your schema to false.
So your general options here are:
Don't "drop" the collection and call .remove() which leaves the indexes intact.
Manually call .ensureIndexes() when you issue a drop on a collection in order to rebuild them.
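A sketch of the second option, assuming the SubscriptionSchema from the question (in newer Mongoose versions createIndexes() supersedes ensureIndexes()):
// Disable automatic index builds (recommended for production)
var SubscriptionSchema = new Schema({
  eventId: { type: String, index: { unique: true, dropDups: true } },
  hashtags: [Hashtags]
}, { autoIndex: false });

var Subscription = mongoose.model('Subscription', SubscriptionSchema);

// After dropping the collection, rebuild the indexes explicitly
Subscription.ensureIndexes(function (err) {
  if (err) console.error('index build failed:', err);
});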
The warning in the documentation is generally that creating indexes for large collections can take some time and use up server resources. If the index already exists, this is more or less a "no-op" to MongoDB, but beware of small changes to the index definition, which would result in creating "additional" indexes.
As such, it is generally best to have a deployment plan for production systems where you determine what needs to be done.
This post seems to argue that indexes are not re-built when you restart: Are MongoDB indexes persistent across restarts?