The official line from Facebook is that Relay is "intentionally agnostic about authentication mechanisms." In all the examples in the Relay repository, authentication and access control are a separate concern. In practice, I have not found a simple way to implement this separation.
The examples provided in the Relay repository all have root schemas with a viewer field that assumes there is one user, and that that user has access to everything.
In reality, however, an application has many users, and each user has a different degree of access to each node.
Suppose I have this schema in JavaScript:
import {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString,
  GraphQLID,
  GraphQLNonNull,
  GraphQLInputObjectType,
} from 'graphql';

export const Schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'Query',
    fields: () => ({
      node: nodeField,
      user: {
        type: new GraphQLObjectType({
          name: 'User',
          fields: () => ({
            name: {
              type: GraphQLString,
              resolve: user => {
                // Does `session` have access to this user's name?
                return user.name;
              },
            },
          }),
        }),
        args: {
          // The `id` of the user being queried for
          id: { type: new GraphQLNonNull(GraphQLID) },
          // Identify the user who is querying
          session: { type: new GraphQLInputObjectType({ ... }) },
        },
        resolve: (_, { id, session }) => {
          // Given `session`, get the user with `id`
          return data.getUser({ id, session });
        },
      },
    }),
  }),
});
Some users are entirely private from the perspective of the querying user. Other users might only expose certain fields to the querying user. So to get a user, the client must not only provide the user ID they are querying for, but they must also identify themselves so that access control can occur.
This seems to quickly get complicated as the need to control access trickles down the graph.
Furthermore, I need to control access for every root query, like nodeField, and I need to make sure that every node implementing nodeInterface performs the same checks.
All of this seems like a lot of repetitive work. Are there any known patterns for simplifying this? Am I thinking about this incorrectly?
Different applications have very different requirements for the form of access control, so baking something into the basic Relay framework or GraphQL reference implementation probably doesn't make sense.
An approach that I have seen work pretty well is to bake the privacy/access control into the data model/data loader framework. Every time you load an object, you wouldn't just load it by id, but would also provide the context of the viewer. If the viewer cannot see the object, it would fail to load as if it doesn't exist, to avoid even leaking the existence of the object. The object also retains the viewer context, and certain fields might have restricted access that is checked before being returned from the object. Baking this into the lower-level data loading mechanism helps ensure that bugs in higher-level product/GraphQL code don't leak private data.
As a concrete example, I might not be allowed to see some user at all, because he has blocked me. Or I might be allowed to see him in general, but not his email, since I'm not friends with him.
In code something like this:
// Wrap the logged-in user in a viewer context and pass it into every load
const viewer = new Viewer(getLoggedInUser());
User.load(id, viewer).then(
  (user) => console.log('User name:', user.name),
  (error) => console.log("User does not exist or you don't have access.")
);
Trying to implement the visibility at the GraphQL level has lots of potential to leak information. Think of the many ways to access a user in a GraphQL implementation for Facebook:
node($userID) { name }
node($postID) { author { name } }
node($postID) { likers { name } }
node($otherUserID) { friends { name } }
All of these queries could load a user's name, and if the user has blocked you, none of them should return the user or its name. Putting the access control on each of these fields separately, and never forgetting a check anywhere, is a recipe for missing the check somewhere.
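To make this concrete, here is a minimal sketch of a viewer-aware loader in the spirit of the User.load(id, viewer) call above. All names (Viewer, db.fetchUser, blockedViewerIds, friendIds) are hypothetical:

// Hypothetical sketch: visibility is checked once at load time, and the
// viewer context is retained on the object for field-level checks.
class User {
  static load(id, viewer) {
    return db.fetchUser(id).then(record => {
      if (!record || record.blockedViewerIds.includes(viewer.id)) {
        // Fail as if the object doesn't exist, so existence isn't leaked
        throw new Error("User does not exist or you don't have access.");
      }
      return new User(record, viewer);
    });
  }

  constructor(record, viewer) {
    this.record = record;
    this.viewer = viewer;
  }

  get name() {
    return this.record.name;
  }

  get email() {
    // Field-level restriction: only friends may read the email
    return this.record.friendIds.includes(this.viewer.id)
      ? this.record.email
      : null;
  }
}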
I found that handling authentication is easy if you make use of the GraphQL rootValue, which is passed to the execution engine when the query is executed against the schema. This value is available at all levels of execution and is useful for storing an access token or whatever identifies the current user.
If you're using the express-graphql middleware, you can load the session in a middleware preceding the GraphQL middleware and then configure the GraphQL middleware to place that session into the root value:
function getSession(req, res, next) {
  loadSession(req).then(session => {
    req.session = session;
    next();
  }).catch(() => {
    res.sendStatus(400);
  });
}

app.use('/graphql', getSession, graphqlHTTP(({ session }) => ({
  schema: schema,
  rootValue: { session }
})));
This session is then available at any depth in the schema:
new GraphQLObjectType({
  name: 'MyType',
  fields: {
    myField: {
      type: GraphQLString,
      resolve(parentValue, _, { rootValue: { session } }) {
        // use `session` here
      }
    }
  }
});
You can pair this with "viewer-oriented" data loading to achieve access control. Check out https://github.com/facebook/dataloader which helps create this kind of data loading object and provides batching and caching.
function createLoaders(authToken) {
  return {
    users: new DataLoader(ids => genUsers(authToken, ids)),
    cdnUrls: new DataLoader(rawUrls => genCdnUrls(authToken, rawUrls)),
    stories: new DataLoader(keys => genStories(authToken, keys)),
  };
}
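To tie the two together, you could build the loaders per request and pass them through the same root value as the session (a sketch reusing getSession and createLoaders from above; the session.authToken property is an assumption):

app.use('/graphql', getSession, graphqlHTTP(({ session }) => ({
  schema: schema,
  rootValue: {
    session,
    // One set of loaders per request, scoped to this viewer's token
    loaders: createLoaders(session && session.authToken),
  },
})));

Resolvers can then read loaders from the root value, just as session is read in the example above, so every load in the request is batched, cached, and checked against the same viewer.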
If anyone has problems with this topic: I made an example repo for Relay/GraphQL/express authentication based on dimadima's answer. It saves session data (userId and role) in a cookie using express middleware and a GraphQL mutation.
I'm trying to make an access control system based on many-to-many relationships between Users and Resources (projects, documents and others). The main problem is that everywhere I look I see role-based access control, which allows users to edit a type of resource, not specific resources selected by ID.
What I have:
Project: { id, ..., editors: [ User: { id, ... }, User: { id, ... } ] }
I want to check using casl (I'm open to other solutions than casl) if the User from the request is in the Project.editors array.
In AbilityFactory's defineAbility(userId):
export enum Action {
  Edit = 'Edit',
  View = 'View',
}

export enum Resource {
  Project = 'project',
}

const user = await this.userService.findOne(userId);
can(Action.Edit, Project, { editors: user });
In the controller (a temporary solution; I will use a guard in the future) I check it this way:
const ability = await this.abilityFactory.defineAbility(userId);
const project = await this.projectService.findOne(id);
if (!ability.can(Action.Edit, project)) {
  throw new ForbiddenException('Access denied!');
}
But this doesn't work well, always returning false...
Any solution?
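For what it's worth, CASL's mongo-style condition matcher can reach into arrays of objects with dot notation, so a rule matching on the editor ids instead of on whole User objects may be what's missing. A sketch (not verified against your exact models):

const user = await this.userService.findOne(userId);
// Match any element of `editors` whose `id` equals the requesting user's id;
// comparing against the full user object fails because of object identity.
can(Action.Edit, Project, { 'editors.id': user.id });

Note also that ability.can(Action.Edit, project) relies on CASL detecting that project is a Project; if findOne returns a plain object rather than a class instance, you may need to configure subject type detection.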
Is there a way to get the user session and profile at the same time? The way I did it was to get the user session first after login, then fetch the user profile using the id.
const [authsession, setSession] = useState(null);
const [loading, setLoading] = useState(false);
const [error, setError] = useState(false);

useEffect(() => {
  const userSession = supabase.auth.session();
  setSession(userSession);
  if (userSession) {
    getProfile(userSession.user.id);
  } else {
    setSession((s) => ({ ...s, profile: null }));
  }
  supabase.auth.onAuthStateChange((_event, session) => {
    setSession(session);
    if (session) {
      getProfile(session.user.id);
    } else {
      setSession((s) => ({ ...s, profile: null }));
    }
  });
}, []);

const getProfile = async (id) => {
  setLoading(true);
  setError(false);
  try {
    const { data } = await supabase
      .from("profiles")
      .select("*")
      .eq("id", id)
      .single();
    setSession((s) => ({ ...s, profile: data }));
  } catch (error) {
    setError(true);
  } finally {
    setLoading(false);
  }
};
(In support of @chipilov's answer, but too long for a comment.)
Solution
When you get the user session at login time, Supabase will additionally return the JSON stored in auth.users.raw_user_meta_data. So if you save the profile info there at signup time with supabase.auth.signUp(), or later with supabase.auth.updateUser(), you are all set.
Should you really store profile data in auth.users?
People seem to freak out a bit at the prospect of writing into auth.users, probably fearing that they might mess with Supabase internals. However, raw_user_meta_data is meant for this purpose: Supabase itself does not save anything into that column, only the additional user metadata that you may provide at signup or when updating a user.
Supabase maintainers do not recommend writing to auth.users with your own server-side routines (source). But we don't do that here, relying only on the Supabase-provided functions supabase.auth.signUp() and supabase.auth.updateUser().
In the Supabase docs, they even provide an example where this "additional user metadata" is used for profile information:
const { data, error } = await supabase.auth.signUp(
  {
    email: 'example@email.com',
    password: 'example-password',
    options: {
      data: {
        first_name: 'John',
        age: 27,
      }
    }
  }
)
(Source: Supabase Documentation: JavaScript [Client Library] v2.0: AUTH: Create a new user, example "Sign up with additional user metadata")
How to access this profile data server-side?
The OP uses a table public.profiles to maintain profile information. For additional profile information that you generate and write server-side, this is the recommended practice. Such a table is also recommended to make user data from auth.users accessible through the API:
GaryAustin1 (Supabase maintainer): "You could use the user meta data [field to store your additional metadata] and update it server side. The recommendation though normally is to have your own copy of key user info from the auth.users table […] [in] your own user table. This table is updated with a trigger function on auth.users inserts/updates/deletes." (source)
Instead of a set of trigger functions, you may also opt for a VIEW with SECURITY definer privileges to make auth.users data available in the public schema and thus via the API. Compared to triggers, it does not introduce redundancy, and I also think it's simpler. Such a view can also include your auth.users.raw_user_meta_data JSON split nicely into columns, as explored in this question. For security, be sure to include row access filtering via the WHERE clause of this view (instructions), because VIEWs cannot have their own row-level security (RLS) policies.
How to modify this profile data server-side?
Your own user table in the public schema can be used to store that additional profile data. It would be connected to auth.users with a foreign key relation. In the VIEW proposed above, you can then include both data from this own table and columns from auth.users.
Of course, user information from your own table will then not be returned automatically on login. If you cannot live with that, then I propose to alternatively use auth.users.raw_user_meta_data to save your additional metadata. I know I disagree with a Supabase maintainer here, but really, you're not messing with Supabase internals, just writing into a field that nothing in Supabase depends on.
You would use PostgreSQL functions (with SECURITY definer set) to provide access to specific JSON properties of auth.users.raw_user_meta_data from within the public schema. These functions can even be exposed to the API automatically (see).
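Calling such a function from the client could then look like this (a sketch; get_profile_field is a hypothetical function you would define yourself):

// Invoke a SECURITY DEFINER Postgres function through the auto-generated API.
// `get_profile_field` is hypothetical and would read one property out of
// auth.users.raw_user_meta_data for the current user.
const { data, error } = await supabase.rpc('get_profile_field', { field: 'first_name' });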
If however you also need to prevent write access by users to some of your user metadata, then the own table solution from above is usually better. You could use triggers on auth.users.raw_user_meta_data, but it seems rather complex.
The session.user object contains a property called user_metadata. This property contains the data which is present in the raw_user_meta_data column of the auth.users table.
Hence, you can set up a DB trigger on your custom user profile table which copies the data from that table to the raw_user_meta_data column of the auth.users table in JSON format anytime the data in the user profile table changes (i.e. you will need the trigger to run on INSERT/UPDATE and probably DELETE statements). This way the profile data will be automatically delivered to the client with the sign-in or token-refresh events.
IMPORTANT: This approach has potential drawbacks:
this data will also be part of the JWT sent to the user. This might be a problem because the JWT is NOT easy to revoke BEFORE its expiration time, and you might end up in a situation where a JWT which expires in 1 hour (and will be refreshed in 1 hour) still contains profile data which has already been updated on the server. The implications of this really depend on what you put in your profile data and how you use it in the client and/or the backend;
since this data is in the JWT, the data will be re-sent to the client with each refresh of the session (which is every 1 hour by default). So, (a) if the profile data is big, you would be sending this large piece of data on every token refresh even if it has NOT changed and (b) if the data changes often and you need to ensure that the client is up to date in (near) real time you will need to increase the token refresh rate;
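On the client, the data copied by such a trigger then surfaces under session.user.user_metadata. A sketch with the Supabase JS v2 client:

// Read the profile data that was delivered together with the session
const { data: { session } } = await supabase.auth.getSession();
if (session) {
  console.log(session.user.user_metadata); // e.g. { first_name: 'John', age: 27 }
}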
Is it possible to configure the Apollo Client to fetch a single cached Item from a query that returns a list of Items, in order to prefetch data when querying for a single Item?
Schema:
type Item {
  id: ID!
  name: String!
}

type Query {
  items: [Item!]!
  itemById(id: ID!): Item!
}
Query1:
query HomepageList {
  items {
    id
    name
  }
}
Query2:
query ItemDetail($id: ID!) {
  itemById(id: $id) {
    id
    name
  }
}
Given that the individual Item's data will already be in the cache, it should be possible to use the already cached data while still executing a fetch in case any data has changed.
However, the query does not utilise the cached data (by default at least), and it seems that we need to somehow tell Apollo that we know the Item is already in the cache.
Any help greatly appreciated.
This functionality exists, but it's hard to find if you don't know what you're looking for. In Apollo Client v2 you're looking for cache redirect functionality, in Apollo Client v3 this is replaced by type policies / field read policies (v3 docs).
Apollo doesn't 'know' your GraphQL schema, and that makes it easy to set up and work with in day-to-day usage. However, this implies that, given some query (e.g. getBooks), it doesn't know what the result type is going to be upfront. It does know it afterwards, as long as __typenames are enabled. This is the default behaviour and is needed for normalized caching.
Let's assume you have a getBooks query that fetches a list of Books. If you inspect the cache after this request is finished using Apollo devtools, you should find the books in the cache using the Book:123 key in which Book is the typename and 123 is the id. If it exists (and is queried!) the id field is used as identifier for the cache. If your id field has another name, you can use the typePolicies of the cache to inform Apollo InMemoryCache about this field.
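For example, if your Book type were identified by an isbn field instead of id, a minimal type policy might look like this (the isbn field is just an assumed example):

const cache = new InMemoryCache({
  typePolicies: {
    // Tell the cache that Book objects are keyed by `isbn` instead of `id`
    Book: {
      keyFields: ['isbn'],
    },
  },
});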
If you've set this up and you run a getBook query afterwards, using some id as input, you will not get any cached data. The reason is as described before: Apollo doesn't know upfront which type this query is going to return.
So in Apollo v2 you would use a cacheRedirect to 'redirect' Apollo to the right cache:
cacheRedirects: {
  Query: {
    getBook(_, args, { getCacheKey }) {
      return getCacheKey({
        __typename: 'Book',
        id: args.id,
      });
    }
  },
},
(args.id should be replaced by another identifier if you have specified another key in the typePolicy)
When using Apollo v3, you need a type policy / field read policy:
typePolicies: {
  Query: {
    fields: {
      getBook(_, { args, toReference }) {
        return toReference({
          __typename: 'Book',
          id: args.id,
        });
      }
    }
  }
}
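For completeness, here is how the v3 fragment above plugs into the client construction (a sketch; the uri is just a placeholder):

import { ApolloClient, InMemoryCache } from '@apollo/client';

const client = new ApolloClient({
  uri: '/graphql', // placeholder endpoint
  cache: new InMemoryCache({
    typePolicies: {
      Query: {
        fields: {
          getBook(_, { args, toReference }) {
            return toReference({ __typename: 'Book', id: args.id });
          },
        },
      },
    },
  }),
});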
I'm trying to initialize a user upon registration with an isUser role using custom claims and the onCreate listener. I've got it to set the correct custom claim, but the front end is aware of it only after a full page refresh.
I've been following this article, https://firebase.google.com/docs/auth/admin/custom-claims?authuser=0#logic, to notify the front end that it needs to refresh the token in order to get the latest changes on the custom claims object, but to be honest I don't quite fully understand what's going on in the article.
Would someone be able to help me successfully do this with the Firestore database?
This is my current cloud function:
exports.initializeUserRole = functions.auth.user().onCreate(user => {
  return admin.auth().setCustomUserClaims(user.uid, {
    isUser: true
  }).then(() => {
    return null;
  });
});
I've tried adapting the Realtime Database example provided in the article above to Firestore, but I've been unsuccessful.
exports.initializeUserRole = functions.auth.user().onCreate(user => {
  return admin.auth().setCustomUserClaims(user.uid, {
    isUser: true
  }).then(() => {
    // get the user with the updated claims
    return admin.auth().getUser(user.uid);
  }).then(user => {
    user.metadata.set({
      refreshTime: new Date().getTime()
    });
    return null;
  })
});
I thought I could simply set refreshTime on the user metadata but there's no such property on the metadata object.
In the linked article, does the metadataRef example provided not actually live on the user object, but instead somewhere else in the database?
const metadataRef = admin.database().ref("metadata/" + user.uid);
If anyone could at least point me in the right direction on how to adapt the Realtime Database example in the article to work with Firestore, that would be of immense help.
If my description doesn't make sense or is missing vital information let me know and I'll amend it.
Thanks.
The example is using data stored in the Realtime Database at a path of the form metadata/[userID]/refreshTime.
To do the same thing in Firestore you will need to create a Collection named metadata and add a Document for each user. The Document ID will be the value of user.uid. Those documents will need a timestamp field named refreshTime.
After that, all you need to do is update that field on the corresponding Document after the custom claim has been set for the user. On the client side, you will subscribe to changes for the user's metadata Document and update in response to that.
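On the client, that subscription might look like this (a sketch using the namespaced web SDK, with the metadata collection from above):

// Watch the user's metadata document and force a token refresh whenever
// refreshTime changes, so the new custom claims become visible client-side.
firebase.auth().onAuthStateChanged(user => {
  if (!user) return;
  firebase.firestore().collection('metadata').doc(user.uid)
    .onSnapshot(snapshot => {
      if (snapshot.exists) {
        // Passing `true` forces an ID token refresh
        user.getIdToken(true);
      }
    });
});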
Here is an example of how I did it in one of my projects. My equivalent of the metadata collection is named userTokens. I use a transaction to prevent partial database changes in the case that any of the steps fail.
Note: My function uses some modern JavaScript syntax that is being transpiled with Babel before uploading.
// Standard Cloud Functions setup (assumed):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();
const firestore = admin.firestore();

exports.initializeUserData = functions.auth.user().onCreate(async user => {
  await firestore.collection('userTokens').doc(user.uid).set({ accountStatus: 'pending' })
  const tokenRef = firestore.collection('userTokens').doc(user.uid)
  const userRef = firestore.collection('users').doc(user.uid)
  const permissionsRef = firestore.collection('userPermissions').doc(user.email)
  await firestore.runTransaction(async transaction => {
    const permissionsDoc = await transaction.get(permissionsRef)
    const permissions = permissionsDoc.data();
    const customClaims = {
      admin: permissions ? permissions.admin : false,
      hasAccess: permissions ? permissions.hasAccess : false,
    };
    transaction.set(userRef, { name: user.displayName, email: user.email, getEmails: customClaims.hasAccess })
    await admin.auth().setCustomUserClaims(user.uid, customClaims)
    transaction.update(tokenRef, { accountStatus: 'ready', refreshTime: admin.firestore.FieldValue.serverTimestamp() })
  });
})
If I have a Schema which has an Array of references to another Schema, is there a way I can update both Documents with one endpoint?
This is my Schema:
const { Schema } = require('mongoose');

const CompanySchema = new Schema({
  addresses: [{
    type: Schema.Types.ObjectId,
    ref: 'Address'
  }]
});
I want to send a Company with the full Address object to /companies/:id/edit. With this endpoint, I want to edit attributes on Company and Address at the same time.
In Rails you can use something like nested attributes to do one big UPDATE call, and it will update the Company and update or add the Address as well.
Any idea how would you do this in Mongoose?
Cascade saves are not natively supported in Mongoose (issue).
But there are plugins (example: cascading-relations) that implement this behavior on nested populate objects.
Bear in mind that MongoDB is not a fully transactional database; the "big save" is achieved with a series of insert()/update() calls, and you (or the plugin) have to handle errors and roll back.
Example of cascade save:
company.save()
  .then(() => Promise.all(company.addresses.map(address => {
    /* update foreign keys if needed */
    return address.save();
  })))
  .catch(err => console.error('something went wrong...', err));