I'm playing around with GraphQL-JS right now, wiring it up to a MariaDB backend.
I've figured out how to return an entire result set:
const queryType = new GraphQLObjectType({
  name: 'Query',
  fields: () => ({
    users: {
      type: new GraphQLList(userType),
      resolve: (root, args) => new Promise((resolve, reject) => {
        db.query('select * from users', (err, rows, fields) => {
          if (err) return reject(err);
          resolve(rows);
        });
      }),
    }
  })
});
Which is pretty cool, but the library I'm using also lets me stream results, row-by-row.
Does GraphQL have anything to facilitate this?
As far as I can tell, GraphQLList expects a full array; I can only resolve my result set once, not feed it incrementally with an EventEmitter or something similar.
(This is not really a complete answer, but doesn't fit into a comment either)
According to the GraphQL spec, it should be possible:
GraphQL provides a number of built‐in scalars, but type systems can
add additional scalars with semantic meaning. For example, a GraphQL
system could define a scalar called Time which, while serialized as a
string, promises to conform to ISO‐8601. When querying a field of type
Time, you can then rely on the ability to parse the result with an
ISO‐8601 parser and use a client‐specific primitive for time. Another
example of a potentially useful custom scalar is Url, which serializes
as a string, but is guaranteed by the server to be a valid URL.
So the structure should look like this:
var myStream = new GraphQLScalarType({
  name: "myStream",
  serialize: ...
  parseValue: ...
  parseLiteral: ...
});
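To make the quoted spec passage concrete, here is a minimal sketch of a custom Time scalar that serializes as an ISO-8601 string; the validation is deliberately simplified and only illustrative:
const { GraphQLScalarType } = require('graphql');
const { Kind } = require('graphql/language');

const GraphQLTime = new GraphQLScalarType({
  name: 'Time',
  description: 'A point in time, serialized as an ISO-8601 string',
  // internal value (Date or timestamp) -> wire format (string)
  serialize: value => new Date(value).toISOString(),
  // value supplied through query variables -> internal Date
  parseValue: value => new Date(value),
  // value written inline in the query document -> internal Date
  parseLiteral: ast => (ast.kind === Kind.STRING ? new Date(ast.value) : null),
});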
Now I guess you could try to return promises/streams and see what happens, but I don't think it is supported yet. Many people are asking for this, and I think I saw a comment from someone on the GraphQL team saying that they are looking into it (unfortunately I cannot find it again).
A personal opinion: if this behaviour cannot be added without changing the core of GraphQL, that may show that GraphQL as it stands today is not all that flexible.
It's not possible to stream results with a basic GraphQL server because it has a client-server architecture (you send an HTTP request and get an HTTP response back).
It is possible with subscriptions, though. They were recently added to GraphQL (see the story behind them).
See also the GraphQL @stream directive; for an example, see https://github.com/rmosolgo/graphql-ruby-stream-defer-demo
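Since the answers above mention subscriptions, here is a hedged sketch of how rows could be pushed to the client one at a time, assuming the graphql-subscriptions PubSub helper and an event-style streaming query (the exact row-streaming API depends on which MariaDB client you use; userType is the type from the question):
const { GraphQLObjectType } = require('graphql');
const { PubSub } = require('graphql-subscriptions');

const pubsub = new PubSub();

// add this type to your GraphQLSchema alongside queryType
const subscriptionType = new GraphQLObjectType({
  name: 'Subscription',
  fields: () => ({
    userAdded: {
      type: userType,
      // each published payload is delivered to subscribed clients as one event
      subscribe: () => pubsub.asyncIterator('USER_ADDED'),
      resolve: payload => payload,
    },
  }),
});

// elsewhere, the streaming query publishes one event per row:
db.query('select * from users')
  .on('result', row => pubsub.publish('USER_ADDED', row))
  .on('error', err => console.error(err));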
I am making a GET request with additional params options. I am using the request for a filter, so the params describe which filters to apply to the data I get back:
const res = await axios.get("http://localhost:3000/getsomedata", {
  params: {
    firstFilter: someObject,
    secondFilter: [someOtherObject, someOtherObject]
  }
});
The request goes through just fine. On the other end, when I console.log(req.query), I see the following:
{
  firstFilter: 'someObject',
  'secondFilter[]': ['{someOtherObject}', '{someOtherObject}'],
}
req.query.firstFilter works just fine, but req.query.secondFilter does not; to get the data I have to use req.query["secondFilter[]"]. Is there a way to avoid this and get my array of data with req.query.secondFilter?
My workaround for now is to do:
const filter = {
  firstFilter: req.query.firstFilter,
  secondFilter: req.query["secondFilter[]"]
}
It works, of course, but I don't like it; I am surely missing something.
Some tools for parsing query strings expect arrays of data to be encoded as array_name=1&array_name=2.
This is ambiguous when there is only one item: the parser cannot tell whether it should produce a one-element array or a plain string.
To avoid that problem, PHP requires arrays of data to be encoded as array_name[]=1&array_name[]=2 and discards all but the last item if you leave the [] out (so you always get a string).
A lot of client libraries that generate data for submission over HTTP chose to do so in a way that is compatible with PHP (largely because PHP was, and is, very common).
So you need to either:
Change the backend to be able to parse PHP style
Change your call to axios so it doesn't generate PHP style
Backend
The specifics depend what backend you are using, but it looks like you might be using Express.js.
See the query parser application setting.
You can turn on extended (PHP-style) query parsing by setting it to "extended" (although that is already the default in Express 4):
const app = express()
app.set("query parser", "extended");
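To see the difference this setting makes, here is a small sketch comparing the two parsers Express can use (qs for "extended", Node's built-in querystring module for "simple"):
const qs = require('qs');
const querystring = require('querystring');

const raw = 'secondFilter[]=a&secondFilter[]=b';

// "extended" parsing (qs) understands the PHP-style brackets:
console.log(qs.parse(raw));
// { secondFilter: [ 'a', 'b' ] }

// "simple" parsing (querystring) treats the brackets as part of the key:
console.log(querystring.parse(raw));
// { 'secondFilter[]': [ 'a', 'b' ] }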
Frontend
The axios documentation says:
// `paramsSerializer` is an optional function in charge of serializing `params`
// (e.g. https://www.npmjs.com/package/qs, http://api.jquery.com/jquery.param/)
paramsSerializer: function (params) {
return Qs.stringify(params, {arrayFormat: 'brackets'})
},
So you can override that:
const res = await axios.get("http://localhost:3000/getsomedata", {
  params: {
    firstFilter: someObject,
    secondFilter: [someOtherObject, someOtherObject]
  },
  paramsSerializer: (params) => Qs.stringify(params, {arrayFormat: 'repeat'})
});
My example requires the qs module.
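For reference, a quick sketch of what the different qs arrayFormat options produce, so you can pick the one your backend expects (encode: false is only used here to keep the output readable):
const qs = require('qs');
const params = { secondFilter: ['a', 'b'] };

qs.stringify(params, { arrayFormat: 'brackets', encode: false }); // 'secondFilter[]=a&secondFilter[]=b'
qs.stringify(params, { arrayFormat: 'repeat', encode: false });   // 'secondFilter=a&secondFilter=b'
qs.stringify(params, { arrayFormat: 'indices', encode: false });  // 'secondFilter[0]=a&secondFilter[1]=b'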
This has to do with params not being serialized correctly for the HTTP GET method. Remember that GET has no "body" params like POST; everything is encoded into the URL as text.
For more information, I refer you to this answer, which provides more detailed info with code snippets.
Thanks for looking at my question. It should be easy for anyone who has used Meteor in production; I am still at the learning stage.
My Meteor setup has a bunch of documents with ownedBy _ids reflecting which user owns each document (https://github.com/rgstephens/base/tree/extendDoc is the full GitHub repo; note that it is the extendDoc branch and not the master branch).
I now want to modify my API so that I can display the real name of each document's owner. On the server side I can access this with Meteor.users.findOne({ownedBy}), but on the client side I have discovered that I cannot, due to Meteor's security protocols (a user doesn't have access to another user's data).
So I have two options:
somehow modify the result of what I am publishing to include the user's real name on the server side
somehow push the full user data to the client side and do the mapping of the _id to the real names on the client side
What is the best practice here? I have tried both; here are my results so far:
For option 1, I have failed. This is very 'Node' thinking, I know. I can access the user data on the client side, but Meteor insists that my publications return cursors and not JSON objects. How do I transform JSON objects into cursors, or otherwise circumvent this publish restriction? Google is strangely silent on this topic.
Meteor.publish('documents.listAll', function docPub() {
  let documents = Documents.find({}).fetch();
  documents = documents.map((x) => {
    const userobject = Meteor.users.findOne({ _id: x.ownedBy });
    const x2 = x;
    if (userobject) {
      x2.userobject = userobject.profile;
    }
    return x2;
  });
  return documents; // this causes an error because it is not a cursor
});
For option 2, I have succeeded, but I suspect at the cost of a massive security hole. I simply modified my publish to return an array of cursors, as below:
Meteor.publish('documents.listAll', function docPub() {
  return [Documents.find({}),
    Meteor.users.find({}),
  ];
});
I would really like to do option 1 because I sense there is a big security hole in option 2, but please advise on how I should do it. Thanks very much.
Yes, you are right not to want to publish full user objects to the client. But you can certainly publish a subset of the full user object, using the "fields" key in the options object, which is the 2nd argument of find(). On my project, I created a "public profile" area on each user; that makes it easy to know which things about a user we can publish to other users.
There are several ways to approach getting this data to the client. You've already found one: returning multiple cursors from a publish.
In the example below, I'm returning all the documents, and a subset of each user object that owns those documents. This example assumes that the user's name, and whatever other info you decide is "public", is in a field called publicInfo that's part of the Meteor.user object:
Meteor.publish('documents.listAll', function() {
  let documentCursor = Documents.find({});
  let ownerIds = documentCursor.map(function(d) {
    return d.ownedBy;
  });
  let uniqueOwnerIds = _.uniq(ownerIds);
  let profileCursor = Meteor.users.find(
    {
      _id: {$in: uniqueOwnerIds}
    },
    {
      fields: {publicInfo: 1}
    });
  return [documentCursor, profileCursor];
});
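On the client, you then subscribe and do the join yourself; a minimal sketch (the helper name is illustrative, and publicInfo matches the field published above):
// e.g. in a template's onCreated callback
Meteor.subscribe('documents.listAll');

// join each document with its owner's public info
function documentsWithOwners() {
  return Documents.find({}).map(function (doc) {
    const owner = Meteor.users.findOne({ _id: doc.ownedBy });
    doc.ownerName = owner && owner.publicInfo && owner.publicInfo.name;
    return doc;
  });
}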
In the MeteorChef Slack channel, #distalx responded thusly:
Hi, you are using fetch, and fetch returns all matching documents as an array.
I think if you just use find - without fetch - it will do it.
Meteor.publish('documents.listAll', function docPub() {
  let cursor = Documents.find({});
  let DocsWithUserObject = cursor.filter((doc) => {
    const userobject = Meteor.users.findOne({ _id: doc.ownedBy });
    if (userobject) {
      doc.userobject = userobject.profile;
      return doc;
    }
  });
  return DocsWithUserObject;
});
I am going to try this.
If I have a Schema which has an Array of references to another Schema, is there a way I can update both Documents with one endpoint?
This is my Schema:
const CompanySchema = new Schema({
  addresses: [{
    type: Schema.Types.ObjectId,
    ref: 'Address'
  }]
});
I want to send a Company with the full Address object to /companies/:id/edit. With this endpoint, I want to edit attributes on Company and Address at the same time.
In Rails you can use something like nested attributes to do one big UPDATE call, and it will update the Company and update or add the Address as well.
Any idea how would you do this in Mongoose?
Cascade saves are not natively supported in Mongoose (issue).
But there are plugins (example: cascading-relations) that implement this behavior on nested populate objects.
Keep in mind that MongoDB is not a fully transactional database, so the "big save" is achieved with multiple insert()/update() calls; you (or the plugin) have to handle errors and roll back yourself.
Example of a cascade save:
company.save()
  .then(() => Promise.all(company.addresses.map(address => {
    /* update foreign keys if needed */
    return address.save();
  })))
  .catch(err => console.error('something went wrong...', err));
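For the /companies/:id/edit endpoint itself, here is a hedged sketch of how a manual "nested attributes" update might look; the Company and Address models and the request body shape are assumptions based on the schema above, not a prescribed API:
// assumed body: { name: '...', addresses: [{ _id?, street, city, ... }] }
app.put('/companies/:id/edit', async (req, res) => {
  try {
    const company = await Company.findById(req.params.id);
    if (!company) return res.sendStatus(404);

    // update existing addresses or create new ones, collecting their ids
    const addressIds = await Promise.all(req.body.addresses.map(async (addr) => {
      if (addr._id) {
        await Address.findByIdAndUpdate(addr._id, addr);
        return addr._id;
      }
      const created = await Address.create(addr);
      return created._id;
    }));

    // then update the company's own attributes and its references
    company.set({ name: req.body.name, addresses: addressIds });
    await company.save();
    res.json(company);
  } catch (err) {
    // no transactions here, so a failure can leave a partial update behind
    res.status(500).json({ error: err.message });
  }
});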
TL;DR: I want a PouchDB db that acts like Ember Data: fetch from the local store first, and if not found, go to the remote. Replicate only that document in both cases.
I have a single document type called Post in my PouchDB/CouchDB servers. I want PouchDB to look at the local store, and if it has the document, return the document and start replicating. If not, go to the remote CouchDB server, fetch the document, store it in the local PouchDB instance, then start replicating only that document. I don't want to replicate the entire DB in this case, only things the user has already fetched.
I could achieve it by writing something like this:
var local = new PouchDB('local');
var remote = new PouchDB('http://localhost:5984/posts');
function getDocument(id) {
  return local.get(id).catch(function(err) {
    if (err.status === 404) {
      return remote.get(id).then(function(doc) {
        return local.put(doc);
      });
    }
    throw err;
  });
}
This doesn't handle the replication issue either, but it's the general direction of what I want to do.
I can write this code myself I guess, but I'm wondering if there's some built-in way to do this.
Unfortunately what you describe doesn't quite exist (at least as a built-in function). You can definitely fall back from local to remote using the code above (which is perfect BTW :)), but local.put() will give you problems, because the local doc will end up with a different _rev than the remote doc, which could mess with replication later on down the line (it would be interpreted as a conflict).
You should be able to use {revs: true} to fetch the doc with its revision history, then insert with {new_edits: false} to properly replicate the missing doc, while preserving revision history (this is what the replicator does under the hood). That would look like this:
var local = new PouchDB('local');
var remote = new PouchDB('http://localhost:5984/posts');
function getDocument(id) {
  return local.get(id).catch(function(err) {
    if (err.status === 404) {
      // revs: true gives us the critical "_revisions" object,
      // which contains the revision history metadata
      return remote.get(id, {revs: true}).then(function(doc) {
        // new_edits: false inserts the doc while preserving revision
        // history, which is equivalent to what replication does
        return local.bulkDocs([doc], {new_edits: false});
      }).then(function () {
        return local.get(id); // finally, return the doc to the user
      });
    }
    throw err;
  });
}
That should work! Let me know if that helps.
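To cover the "replicate only that document" part of the question (not addressed above), a hedged sketch using PouchDB's doc_ids replication option, which restricts a live replication to the listed ids:
function getAndSyncDocument(id) {
  return getDocument(id).then(function (doc) {
    // keep just this document in sync instead of replicating the whole DB
    local.sync(remote, { doc_ids: [id], live: true, retry: true });
    return doc;
  });
}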
The official line from Facebook is that Relay is "intentionally agnostic about authentication mechanisms." In all the examples in the Relay repository, authentication and access control are a separate concern. In practice, I have not found a simple way to implement this separation.
The examples provided in the Relay repository all have root schemas with a viewer field that assumes there is one user. And that user has access to everything.
However, in reality, an application has many users and each user has a different degree of access to each node.
Suppose I have this schema in JavaScript:
export const Schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'Query',
    fields: () => ({
      node: nodeField,
      user: {
        type: new GraphQLObjectType({
          name: 'User',
          fields: () => ({
            name: {
              type: GraphQLString,
              resolve: user => {
                // Does `session` have access to this user's name?
                return user.name;
              }
            }
          })
        }),
        args: {
          // The `id` of the user being queried for
          id: { type: new GraphQLNonNull(GraphQLID) },
          // Identifies the user who is querying
          session: { type: new GraphQLInputObjectType({ ... }) },
        },
        resolve: (_, { id, session }) => {
          // Given `session`, get the user with `id`
          return data.getUser({ id, session });
        }
      }
    })
  })
});
Some users are entirely private from the perspective of the querying user. Other users might only expose certain fields to the querying user. So to get a user, the client must not only provide the user ID they are querying for, but they must also identify themselves so that access control can occur.
This seems to quickly get complicated as the need to control access trickles down the graph.
Furthermore, I need to control access for every root query, like nodeField, and I need to make sure that every node implementing nodeInterface performs the same checks.
All of this seems like a lot of repetitive work. Are there any known patterns for simplifying this? Am I thinking about this incorrectly?
Different applications have very different requirements for the form of access control, so baking something into the basic Relay framework or GraphQL reference implementation probably doesn't make sense.
An approach that I have seen pretty successful is to bake the privacy/access control into the data model/data loader framework. Every time you load an object, you wouldn't just load it by id, but also provide the context of the viewer. If the viewer cannot see the object, it would fail to load as if it doesn't exist to prevent even leaking the existence of the object. The object also retains the viewer context and certain fields might have restricted access that are checked before being returned from the object. Baking this in the lower level data loading mechanism helps to ensure that bugs in higher level product / GraphQL code doesn't leak private data.
As a concrete example, I might not be allowed to see some User because he has blocked me. You might be allowed to see him in general, but not his email, since you're not friends with him.
In code something like this:
var viewer = new Viewer(getLoggedInUser());
User.load(id, viewer).then(
  (user) => console.log("User name:", user.name),
  (error) => console.log("User does not exist or you don't have access.")
);
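A hedged sketch of what such a viewer-aware loader might look like; the Viewer class, the canSee/canSeeEmail checks and db.users.findById are all illustrative placeholders, not part of any particular library:
class User {
  constructor(data, viewer) {
    this._data = data;
    this._viewer = viewer;
  }

  static load(id, viewer) {
    return db.users.findById(id).then(data => {
      // fail as if the object doesn't exist, so its existence isn't leaked
      if (!data || !viewer.canSee(data)) {
        throw new Error("User does not exist or you don't have access.");
      }
      return new User(data, viewer);
    });
  }

  get name() {
    return this._data.name;
  }

  get email() {
    // field-level check: e.g. only friends of the viewer may see the email
    return this._viewer.canSeeEmail(this._data) ? this._data.email : null;
  }
}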
Trying to implement the visibility on the GraphQL level has lots of potential to leak information. Think of the many ways to access a user in a GraphQL implementation for Facebook:
node($userID) { name }
node($postID) { author { name } }
node($postID) { likers { name } }
node($otherUserID) { friends { name } }
All of these queries could load a user's name, and if the user has blocked you, none of them should return the user or its name. Having to add the access control on all these fields, without ever forgetting one, is a recipe for missing the check somewhere.
I found that handling authentication is easy if you make use of the GraphQL rootValue, which is passed to the execution engine when the query is executed against the schema. This value is available at all levels of execution and is useful for storing an access token or whatever identifies the current user.
If you're using the express-graphql middleware, you can load the session in a middleware preceding the GraphQL middleware and then configure the GraphQL middleware to place that session into the root value:
function getSession(req, res, next) {
  loadSession(req).then(session => {
    req.session = session;
    next();
  }).catch(() => {
    res.sendStatus(400);
  });
}
app.use('/graphql', getSession, graphqlHTTP(({ session }) => ({
  schema: schema,
  rootValue: { session }
})));
This session is then available at any depth in the schema:
new GraphQLObjectType({
  name: 'MyType',
  fields: {
    myField: {
      type: GraphQLString,
      resolve(parentValue, _, { rootValue: { session } }) {
        // use `session` here
      }
    }
  }
});
You can pair this with "viewer-oriented" data loading to achieve access control. Check out https://github.com/facebook/dataloader which helps create this kind of data loading object and provides batching and caching.
function createLoaders(authToken) {
  return {
    users: new DataLoader(ids => genUsers(authToken, ids)),
    cdnUrls: new DataLoader(rawUrls => genCdnUrls(authToken, rawUrls)),
    stories: new DataLoader(keys => genStories(authToken, keys)),
  };
}
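Tying the two ideas together, you might place the loaders next to the session in the root value and use them from any resolver; a sketch under the assumptions above (createLoaders, genUsers and the authToken field are the hypothetical helpers from the snippets, not a fixed API):
app.use('/graphql', getSession, graphqlHTTP(({ session }) => ({
  schema: schema,
  rootValue: {
    session,
    loaders: createLoaders(session && session.authToken),
  },
})));

// then, in any resolver, mirroring the destructuring shown earlier:
// resolve(parentValue, { id }, { rootValue: { loaders } }) {
//   return loaders.users.load(id); // batched and cached per request/viewer
// }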
If anyone has problems with this topic: I made an example repo for Relay/GraphQL/express authentication based on dimadima's answer. It saves session data (userId and role) in a cookie using express middleware and a GraphQL mutation.