I have two GraphQL schemas: one is written in JavaScript and the other is written in Java. Is there any way for me to combine the two into a single GraphQL implementation?
For example, GraphQL server A can run the following query:
{
  cats
}
GraphQL server B can run the following query:
{
  dogs
}
I want my GraphQL server to combine the two so that it can run:
{
  cats
  dogs
}
I want to know whether something already exists that does this, or whether I have to do it myself. If I have to do it myself, where should I start?
There are a few ways to do this; the term the community has settled on for it is "schema stitching".
Apollo graphql-tools has a solution <-- I would start with this one
These guys have a solution
And I created a simple solution that I would not recommend using unless you have a really strong opinion about it and are willing to contribute to supporting it if I decide not to.
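For a rough idea of what the graphql-tools approach looks like, here is a minimal sketch of stitching two remote servers together; the URLs and port numbers are placeholder assumptions:

const fetch = require('node-fetch');
const { HttpLink } = require('apollo-link-http');
const {
  introspectSchema,
  makeRemoteExecutableSchema,
  mergeSchemas,
} = require('graphql-tools');

async function buildStitchedSchema() {
  // Links to the two existing servers (URLs are placeholders)
  const catsLink = new HttpLink({ uri: 'http://localhost:4001/graphql', fetch });
  const dogsLink = new HttpLink({ uri: 'http://localhost:4002/graphql', fetch });

  // Introspect each remote server and wrap it as an executable schema
  const catsSchema = makeRemoteExecutableSchema({
    schema: await introspectSchema(catsLink),
    link: catsLink,
  });
  const dogsSchema = makeRemoteExecutableSchema({
    schema: await introspectSchema(dogsLink),
    link: dogsLink,
  });

  // The merged schema can resolve { cats dogs } in a single query
  return mergeSchemas({ schemas: [catsSchema, dogsSchema] });
}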
I have a local container created with Docker, with MongoDB and an Express Node server.
What is the recommended way to populate it with new data?
1) Use the CLI
2) Use Mongoose
3) Use a GUI such as Compass
Thanks!
I don't know if there's a 'correct' way to do this, but I've run a couple of 'seeds' files for my projects:
https://github.com/rmgreenstreet/surfShop/blob/master/seeds.js
https://github.com/rmgreenstreet/yelpcamp/blob/master/seeds.js
https://github.com/rmgreenstreet/custom-forms/blob/master/seeds.js
I almost wish there was some kind of niche field/need/position for being able to generate fake data!
The point is that you'll need to set and understand the structure of your data and essentially go through a bunch of nested loops for any connected/dependent data types.
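As a minimal sketch of what such nested seeding can look like (the User and Post models and the connection string here are hypothetical; substitute your own):

const mongoose = require('mongoose');
const User = require('./models/user'); // hypothetical model paths
const Post = require('./models/post');

async function seed() {
  await mongoose.connect('mongodb://localhost:27017/myapp');

  // Clear existing data so the seed script is repeatable
  await User.deleteMany({});
  await Post.deleteMany({});

  // Nested loops: every dependent Post references the User created
  // in the outer loop
  for (let i = 0; i < 10; i++) {
    const user = await User.create({ name: `user${i}` });
    for (let j = 0; j < 3; j++) {
      await Post.create({ title: `post ${j} by user${i}`, author: user._id });
    }
  }

  await mongoose.disconnect();
}

seed().catch(console.error);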
Now if you're working with a SQL database, I'm totally clueless. That's next on my "things to learn" once I feel more comfortable with JavaScript/NoSQL.
This would depend on the use case. My answer would be:
Mongoose: if you are planning to deploy an application using Express, since Mongoose goes hand in hand with Express (https://medium.com/@SigniorGratiano/mongoose-and-express-68994fcfdeff), as in stacks like MEAN and MERN.
A GUI like Compass: when you have to visualise the data or do one-time operations.
I have the following react-apollo-wrapped GraphQL query:
user(id: 1) {
  name
  friends {
    id
    name
  }
}
As semantically represented, it fetches the user with ID 1, returns its name, and returns the id and name of all of its friends.
I then render this in a component structure like the following:
graphql(ParentComponent)
-> UserInfo
-> ListOfFriends (with the list of friends passed in)
This is all working for me. However, I wish to be able to refetch the list of friends for the current user.
I can do this.props.data.refetch() on the parent component and the updates will be propagated; however, I'm not sure this is the best practice, given that my GraphQL query looks more like this:
user(id: 1) {
  name
  foo1
  foo2
  foo3
  foo4
  foo5
  ...
  friends {
    id
    name
  }
}
The only thing I wish to refetch, however, is the list of friends.
What is the best way to cleanly architect this? I'm thinking along the lines of binding an initially skipped GraphQL fetcher to the ListOfFriends component, which can be triggered as necessary, but would like some guidance on how this should be best done.
Thanks in advance.
I don't know why your question is downvoted, because I think it is a very valid question to ask. One of GraphQL's selling points is "fetch less and more at once". A client can decide very granularly what it needs from the backend. Deeply nested, graph-like queries that previously required multiple endpoints can now be expressed in a single query, and at the same time over-fetching can be avoided.

Now you find yourself with a big query: everything loads at once and there are no n+1 query waterfalls. But you know that a few fields in your big query change now and then, and you want to actively update the cache with new data from the server. Apollo offers the refetch function, but it reloads the whole query, which is exactly the kind of over-fetching that GraphQL was sold to me as eliminating. Let me offer some solutions:
Premature Optimisation?
The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming. - Donald Knuth
Sometimes we try to optimise too much without measuring first. Write it the easy way first and then see if it is really an issue. What exactly is slow? The network? A particular field in the query? The sheer size of the query?
After you have analysed what exactly is slow, we can start looking into improving it:
Refetch and include/skip directives
Using directives you can exclude fields from a query depending on variables. The refetch function can specify different variables than the initial query. This way you can exclude fields when you refetch the query.
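A minimal sketch of that idea, reusing the field names from the question and assuming graphql-tag and react-apollo (the $withDetails variable is illustrative):

import gql from 'graphql-tag';

const USER_QUERY = gql`
  query User($id: ID!, $withDetails: Boolean!) {
    user(id: $id) {
      name
      foo1 @include(if: $withDetails)
      foo2 @include(if: $withDetails)
      friends {
        id
        name
      }
    }
  }
`;

// Initial load fetches everything:
//   variables: { id: 1, withDetails: true }
// Refetch overrides the variable so the heavy fields are skipped:
//   this.props.data.refetch({ withDetails: false });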
Splitting up Queries
Single-page apps are a great idea: the HTML is generated client side and the page does not have to make expensive trips to the server to render a new page. But soon SPAs got too big and code splitting became an issue, and now we are basically back to server-side rendering and splitting the app into pages. The same might apply to GraphQL. Sometimes queries are too big and should be split. You could split up the queries for UserInfo and ListOfFriends; inside of the cache the fields will be merged. With query batching, both queries will be sent in the same request, and a GraphQL server that implements per-request resource caching correctly (e.g. with DataLoader) will barely notice a difference.
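Sketched out, the split could look like this (again assuming graphql-tag; the normalized cache merges both results under the same user):

import gql from 'graphql-tag';

const USER_INFO_QUERY = gql`
  query UserInfo($id: ID!) {
    user(id: $id) {
      id
      name
    }
  }
`;

const FRIENDS_QUERY = gql`
  query Friends($id: ID!) {
    user(id: $id) {
      id
      friends {
        id
        name
      }
    }
  }
`;

// Bind FRIENDS_QUERY to ListOfFriends and USER_INFO_QUERY to UserInfo;
// refetching FRIENDS_QUERY alone leaves the other fields untouched.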
Subscriptions
Maybe you are ready to use subscriptions already. Subscriptions send updates from the server for fields that have changed. This way you could subscribe to a user's friends and get updates in real time. The good news is that Apollo Client, Relay and many server implementations already offer support for subscriptions. The bad news is that they need WebSockets, which usually put different requirements on your technology stack than pure HTTP.
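A sketch, assuming the server defined a (hypothetical) friendsChanged subscription field:

import gql from 'graphql-tag';

// friendsChanged is a hypothetical field; your schema must define it
const FRIENDS_SUBSCRIPTION = gql`
  subscription FriendsChanged($userId: ID!) {
    friendsChanged(userId: $userId) {
      id
      name
    }
  }
`;

// With an ApolloClient configured for WebSockets:
// client.subscribe({ query: FRIENDS_SUBSCRIPTION, variables: { userId: 1 } })
//   .subscribe({ next: ({ data }) => { /* merge into the UI/cache */ } });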
withApollo() -> this.props.client.query
This should only be your last resort! Using react-apollo's withApollo higher-order component you can directly inject the ApolloClient instance. You can then execute queries using this.props.client.query(). { user(id: 1) { friendlist { ... } } } can be used to fetch just the friend list and update the cache, which will lead to an update of your component. This might look like what you want, but it can haunt you in later stages of the app.
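A sketch of that last resort, with a friends-only query (component and query names are illustrative):

import React from 'react';
import { withApollo } from 'react-apollo';
import gql from 'graphql-tag';

const FRIENDS_ONLY_QUERY = gql`
  query FriendsOnly($id: ID!) {
    user(id: $id) {
      id
      friends {
        id
        name
      }
    }
  }
`;

class ListOfFriends extends React.Component {
  refreshFriends = () => {
    // Bypasses the cache, then writes the result back into it, which
    // re-renders any component watching these fields
    this.props.client.query({
      query: FRIENDS_ONLY_QUERY,
      variables: { id: 1 },
      fetchPolicy: 'network-only',
    });
  };

  render() {
    return <button onClick={this.refreshFriends}>Refresh friends</button>;
  }
}

export default withApollo(ListOfFriends);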
I recently followed a tutorial to create a Node.js server connecting to an Orchestrate.io database. The problem is that I now want to point the server at a MongoDB database hosted on MongoLab. Currently I am declaring a variable:
var db = require('orchestrate')(APIKEY);
which allows me to retrieve data using something like:
db.get('collection', key)
  .then(function (result) {
    console.log(result.body);
  });
My question is: is there any way I can switch the value of 'db' to point at a MongoLab database without changing the structure of the get request?
I work at Orchestrate and we do not believe in data lock-in. I hope you'll reconsider using our service, but here's some advice if you choose to leave...
It sounds like your code is fairly minimal, so you may be best off recreating your Node server with another tutorial specific to Mongo.
That said, if you are using simple key-value storage, it should be as easy as rewriting the db.get Orchestrate lines to be db.find functions from MongoDB. If you've loaded a lot of data you could export it from Orchestrate, then import into Mongo (either manually, or using another tool).
If you're using some advanced, built-in Orchestrate features, such as full-text search, relation graphing, time-series data, and geographic look-ups, it may take some more effort (and MongoDB experience) to switch. If you'd like these features in a highly scalable database-as-a-service that you don't have to maintain, you know where to find us.
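For the simple key-value case, a rough equivalent using the MongoDB Node driver (2.x-era API; the connection string and the choice of _id as the key field are illustrative assumptions) might look like:

const MongoClient = require('mongodb').MongoClient;

// Stand-in for Orchestrate's db.get('collection', key)
function get(db, collectionName, key) {
  return db.collection(collectionName).findOne({ _id: key });
}

MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
  if (err) throw err;
  get(db, 'collection', 'some-key').then(function (result) {
    console.log(result); // note: no result.body wrapper, unlike Orchestrate
    db.close();
  });
});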
I came across this question and was quite baffled; I could not understand the underlying thoughts behind it. I have done some API integration using AngularJS, using $http and $resource when it's RESTful, but these two questions were something of a puzzle. I want to understand them in detail.
Does the JavaScript framework you choose support a model abstraction with REST integration? If so, what schema does it expect the JSON replies to use?
Can anyone explain these two questions to me?
Some libraries expect your REST API to return a specifically structured result (HAL, JSONP, HATEOAS, ...).
By default, $resource works best with HAL, but it can easily be extended to support other types of return formats (https://github.com/jmarquis/angular-hateoas)
Maybe the question is asking about something like jQuery.map(), which allows you to convert the JSON object from the server to your own internal object (abstracted model).
If you use the object from the server throughout your code, and that object's schema changes (e.g. email string changes to emails array), you may have to change your code in many places. But if you've mapped the server data to a local object, you may only need to change the mapping (e.g. set internal email to first value from server emails).
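For example, a minimal sketch of such a mapping (the field names are hypothetical):

// All knowledge of the server's schema lives in this one mapper
function toInternalUser(serverUser) {
  return {
    id: serverUser.id,
    name: serverUser.name,
    // The server changed `email` (string) to `emails` (array); only
    // this line had to change, the rest of the app still reads .email
    email: Array.isArray(serverUser.emails) ? serverUser.emails[0]
                                            : serverUser.email
  };
}

var response = { users: [{ id: 1, name: "Ada", emails: ["ada@example.com"] }] };
var users = $.map(response.users, toInternalUser);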
My current project uses JSON as its data interchange format. The front-end and back-end teams agree upon a JSON structure before starting to integrate a service. At times, unannounced changes to the JSON structure by the back-end team break the front-end code.
Is there any external library that we could use to compare a mock JSON (fixture) with the server's JSON response? Basically it should assert the whole JSON object and throw an error if there is any violation of the server's JSON format.
Additional info: the app is built on jQuery, consuming REST JSON services.
I would recommend a schema for your JSON objects.
I use Kwalify but you can also use Rx if you like that syntax better.
I've been using QUnit (http://docs.jquery.com/QUnit) recently for a lot of my JS code.
Its asyncTest (http://docs.jquery.com/QUnit/asyncTest) could be used pretty effectively to test JSON structure.
Example:
asyncTest("Test JSON API 1", 1, function() {
  $.getJSON("http://test.com/json", function(data) {
    equals(data.expected, "what you expected", "Found it");
    start(); // resume QUnit once the async callback has run
  });
});
Looks like you're trying to solve the problem from the other end. Why should you, as a front-end developer, be bothered with testing the back-end developers' work?
JSON that is generated on the server is better tested on the server, using a standard approach, i.e. functional tests in xUnit. You could also look at acceptance-test frameworks like FitNesse if you want tests and a documentation wiki all in one.
If you still get invalid JSON even after introducing tests on the server, it is a problem in human communication, not in tests.
Since there is no answer, I'll put my two cents in.
If the problem is that you are dealing with shifting requirements from the back-end, then what you need to do is isolate yourself from those changes: put an abstraction between the front-end and the back-end.
Maybe you can call this abstraction the JSON Data Format Interchange.
So when unit-testing the GUI (hopefully you are TDDing your web GUI) you will have a mock for the JSON DIF. Then, when the time comes to integrate the back-end with the front-end*, any software changes will be done in the abstraction-layer implementation. And of course you already have tests for those, based upon the agreed-upon JSON structure.
Oh, and by the way, I think that the server-side team should have responsibility for specifying the protocol to be used against the server.
*Why does this remind me of the joke that my butt and your face could be twins?
https://github.com/skyscreamer/JSONassert may be helpful in eliminating false positives: if the order of fields returned by the server changes but the overall response is the same, it doesn't trigger a failure.