I've made a basic implementation of ApolloClient:
const gqlClient = new ApolloClient({
  connectToDevTools: true,
  link: new HttpLink({
    uri: "/api",
  }),
  cache: new InMemoryCache(),
  resolvers: {
    Group: {
      icon: () => "noIcon",
    },
  },
  typeDefs: gql`
    extend type Group {
      icon: String!
    }
  `,
});
The only fancy thing is the one resolver and type def - both of which are to support an icon field for groups (an upcoming feature).
I then try to query the server with the following:
gqlClient.query({
  query: gql`{
    groups {
      name
      icon
    }
  }`,
})
.then(console.log);
and get a big ol' error:
bundle.esm.js:63 Uncaught (in promise) Error: GraphQL error: Cannot query field "icon" on type "Group".
at new ApolloError (bundle.esm.js:63)
at bundle.esm.js:1247
at bundle.esm.js:1559
at Set.forEach (<anonymous>)
at bundle.esm.js:1557
at Map.forEach (<anonymous>)
at QueryManager../node_modules/apollo-client/bundle.esm.js.QueryManager.broadcastQueries (bundle.esm.js:1555)
at bundle.esm.js:1646
at Object.next (Observable.js:322)
at notifySubscription (Observable.js:135)
Running the same query without asking for icon works perfectly. I'm not quite sure what I'm doing wrong. How can I mock icon and fix this error?
I'm only running Apollo Client - do I need to run Apollo Server to get all of the features? The outgoing request doesn't seem to have any of my type def information, so I'm not sure how having Apollo Server would make a difference.
Handling @client fields with resolvers
gqlClient.query({
  query: gql`
    {
      groups {
        name
        icon @client
      }
    }
  `,
});
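With the @client directive in place, Apollo Client strips the icon field from the document it sends over the wire and resolves it with the local Group.icon resolver instead, so the server never sees a field it doesn't know about. No Apollo Server is needed for this; client-side typeDefs and resolvers never leave the client.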
Related
I'm trying to create a Dataflow job to index a BigQuery table into Elasticsearch with the Node package @google-cloud/dataflow (v1beta3).
The job works fine when it's created and launched from the Google Cloud console, but I get the following error when I try it in Node:
Error: 3 INVALID_ARGUMENT: (b69ddc3a5ef1c40b): Cannot set worker pool zone. Please check whether the worker_region experiments flag is valid. Causes: (b69ddc3a5ef1cd76): An internal service error occurred.
I tried to specify the experiments param in various ways, but I always end up with the same error.
Has anyone managed to get a similar Dataflow job working? Or do you have any information about Dataflow experiments?
Here is the code:
const { JobsV1Beta3Client } = require('@google-cloud/dataflow').v1beta3
const dataflowClient = new JobsV1Beta3Client()
const response = await dataflowClient.createJob({
  projectId: 'myGoogleCloudProjectId',
  location: 'europe-west1',
  job: {
    launch_parameter: {
      jobName: 'indexation-job',
      containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
      parameters: {
        inputTableSpec: 'bigQuery-table-gs-adress',
        connectionUrl: 'elastic-endpoint-url',
        index: 'elastic-index',
        elasticsearchUsername: 'username',
        elasticsearchPassword: 'password'
      }
    },
    environment: {
      experiments: ['worker_region']
    }
  }
})
Thank you very much for your help.
After many attempts, I managed yesterday to figure out how to specify the worker region.
It looks like this:
await dataflowClient.createJob({
  projectId,
  location,
  job: {
    name: 'jobName',
    type: 'Batch',
    containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
    pipelineDescription: {
      inputTableSpec: 'bigquery-table',
      connectionUrl: 'elastic-url',
      index: 'elastic-index',
      elasticsearchUsername: 'username',
      elasticsearchPassword: 'password',
      project: projectId,
      appName: 'BigQueryToElasticsearch'
    },
    environment: {
      workerPools: [
        { region: 'europe-west1' }
      ]
    }
  }
})
It's not working yet; I still need to find the correct way to provide the other parameters, but the Dataflow job is now created in the Google Cloud console.
For anyone struggling with this issue, I finally found out how to launch a Dataflow job from a template.
There is a function launchFlexTemplate that works the same way as job creation in the Google Cloud console.
Here is the final function working correctly:
const { FlexTemplatesServiceClient } = require('@google-cloud/dataflow').v1beta3
const dataflowClient = new FlexTemplatesServiceClient()

const response = await dataflowClient.launchFlexTemplate({
  projectId: 'google-project-id',
  location: 'europe-west1',
  launchParameter: {
    jobName: 'job-name',
    containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
    parameters: {
      apiKey: 'elastic-api-key', // mandatory but not used if you provide username and password
      connectionUrl: 'elasticsearch endpoint',
      index: 'elasticsearch index',
      elasticsearchUsername: 'username',
      elasticsearchPassword: 'password',
      inputTableSpec: 'bigquery source table', // projectId:datasetId.table
      // parameters to upsert the elasticsearch index
      propertyAsId: 'table index use for elastic _id',
      usePartialUpdate: true,
      bulkInsertMethod: 'INDEX'
    }
  }
})
I have the following template.yaml from a SAM application
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: >
  image-resizing-lambda-js
  Sample SAM Template for image-resizing-lambda-js

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
    MemorySize: 1536

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello-world/
      Handler: app.lambdaHandler
      Runtime: nodejs10.x
      Architectures:
        - x86_64
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: post
  imagemagicklambdalayer:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:145266761615:applications/image-magick-lambda-layer
        SemanticVersion: 1.0.0

Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt HelloWorldFunctionRole.Arn
The function code itself is shown further below.
Now, I have read that to use ImageMagick with Node on AWS Lambda I need to do the following:
Install the custom layer https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:145266761615:applications~image-magick-lambda-layer
Link any lambda function using ImageMagick with that custom layer
It is the "link any Lambda function using ImageMagick with that custom layer" part that confuses me. Do I need to do something different in my app.js code that points the ImageMagick call in my code to the layer somehow? I am not entirely sure what a layer is, but my understanding is that it is needed for ImageMagick to work.
Any help would be greatly appreciated
const axios = require("axios");
// const url = 'http://checkip.amazonaws.com/';
//const sharp = require("sharp");
const gm = require("gm");
const imageMagick = gm.subClass({ imageMagick: true });
let response;
/**
 *
 * Event doc: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html#api-gateway-simple-proxy-for-lambda-input-format
 * @param {Object} event - API Gateway Lambda Proxy Input Format
 *
 * Context doc: https://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-context.html
 * @param {Object} context
 *
 * Return doc: https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-lambda-proxy-integrations.html
 * @returns {Object} object - API Gateway Lambda Proxy Output Format
 *
 */
exports.lambdaHandler = async (event, context) => {
  try {
    // const ret = await axios(url);
    const parsedBody = JSON.parse(event.body);
    response = {
      statusCode: 200,
      body: JSON.stringify({
        message: parsedBody.imagePath,
        // location: ret.data.trim()
      }),
    };
    const WEB_WIDTH_MAX = 420;
    const WEB_Q_MAX = 85;
    const url = "https://miro.medium.com/max/512/1*V395S0MUwmZo8dX2aezpMg.png";
    const data = imageMagick(url)
      .resize(WEB_WIDTH_MAX)
      .quality(WEB_Q_MAX)
      // .gravity('Center')
      .strip()
      // .crop(WEB_WIDTH_MAX, WEB_HEIGHT_MAX)
      .toBuffer("png", (err, buffer) => {
        if (err) {
          console.log("An error occurred while saving IM to buffer: ", err);
          return false; /* stop the remaining sequence and prevent sending an empty or invalid buffer to AWS */
        } else {
          console.log("buffer", buffer);
        }
      });
    // gmToBuffer(data).then(console.log);
  } catch (err) {
    console.log(err);
    return err;
  }
  return response;
};
Currently I get the following error when running sam build
Plugin 'ServerlessAppPlugin' raised an exception: 'NoneType' object has no attribute 'get'
Before adding the imagemagicklambdalayer section, I was able to run sam build and have ImageMagick run with the following error under sam local start-api after hitting the endpoint:
An error occurred while saving IM to buffer: Error: Stream yields empty buffer
I have got it to build with the following as the layer in the template.yaml
Layers:
  - "arn:aws:serverlessrepo:us-east-1:145266761615:applications/image-magick-lambda-layer"
Now I get an error like the following when running the function:
File "/usr/local/Cellar/aws-sam-cli/1.46.0/libexec/lib/python3.8/site-packages/botocore/regions.py", line 230, in _endpoint_for_partition
raise NoRegionError()
botocore.exceptions.NoRegionError: You must specify a region.
I'm not sure where I'm supposed to specify this region.
UPDATE:
OK, two things: I set the region in my AWS config file, and I set the ARN to the deployed layer. I have gotten the following output with something that hasn't finished building in about 5 minutes, so we shall see if it ever does.
It tries to build the layer when the function is invoked
Invoking app.lambdaHandler (nodejs12.x)
arn:aws:lambda:us-east-1:396621406187:layer:image-magick:1 is already cached. Skipping download
Image was not found.
Building image...................................................................
UPDATE:
So it seemed to finally build, but the module for ImageMagick could not be found. I have seen other examples use this layer and call ImageMagick from a module called gm, which is what I am doing in my code.
Does including this layer not give access to the gm module so I can use ImageMagick?
{"errorType":"Runtime.ImportModuleError","errorMessage":"Error: Cannot find module 'gm'\nRequire stack:\n- /var/task/app.js\n- /var/runtime/UserFunction.js\n- /var/runtime/index.js","stack":["Runtime.ImportModuleError: Error: Cannot find module 'gm'","Require stack:","- /var/task/app.js","- /var/runtime/UserFunction.js","- /var/runtime/index.js"," at _loadUserApp (/var/runtime/UserFunction.js:100:13)",
This is my first discussion post here. I have learned Apollo + GraphQL through Odyssey. Currently, I am building my own project using Next.js, which requires fetching data from 2 GraphQL endpoints.
My problem: How can I fetch data from multiple GraphQL endpoints with ApolloClient?
Below is my code for my first endpoint:
import { ApolloClient, InMemoryCache, createHttpLink } from "@apollo/client";

const client = new ApolloClient({
  ssrMode: true,
  link: createHttpLink({
    uri: "https://api.hashnode.com/",
    credentials: "same-origin",
    headers: {
      Authorization: process.env.HASHNODE_AUTH,
    },
  }),
  cache: new InMemoryCache(),
});

export default client;
What you are trying to accomplish is kinda against Apollo's "One Graph" approach.
Take a look at gateways and federation - https://www.apollographql.com/docs/federation/
With that being said, a hacky solution is possible, but you will need to maintain a more complex structure and specify the endpoint in every query, which undermines the built-in mechanism and might cause optimization issues.
// Declare your endpoints
const endpoint1 = new HttpLink({
  uri: 'https://api.hashnode.com/graphql',
  ...
})
const endpoint2 = new HttpLink({
  uri: 'endpoint2/graphql',
  ...
})

// Pass them to the apollo-client config
const client = new ApolloClient({
  link: ApolloLink.split(
    operation => operation.getContext().clientName === 'endpoint2',
    endpoint2, // if above
    endpoint1
  )
  ...
})

// Pass the client name in the query/mutation
useQuery(QUERY, { variables, context: { clientName: 'endpoint2' } })
This package seems to do what you want: https://github.com/habx/apollo-multi-endpoint-link
Also, check the discussion here: https://github.com/apollographql/apollo-client/issues/84
Encountered the same problem today. I wanted to have it dynamic, so this is what I came up with:
export type DynamicLinkClientName = "aApp" | "bApp" | "graphqlApp";

type Link = RestLink | HttpLink;
type DynamicLink = { link: Link; name: DynamicLinkClientName };

const LINK_MAP: DynamicLink[] = [
  { link: aRestLink, name: "aApp" },
  { link: bAppRestLink, name: "bApp" },
  { link: graphqlAppLink, name: "graphqlApp" },
];

const isClientFromContext = (client: string) => (op: Operation) =>
  op.getContext().client === client;

const DynamicApolloLink = LINK_MAP.reduce<ApolloLink | undefined>(
  (prevLink, nextLink) => {
    // When no name is specified, fallback to defaultLink.
    if (!prevLink) {
      return ApolloLink.split(
        isClientFromContext(nextLink.name),
        nextLink.link,
        defaultLink
      );
    }
    return ApolloLink.split(
      isClientFromContext(nextLink.name),
      nextLink.link,
      prevLink
    );
  },
  undefined
) as ApolloLink;
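For completeness, a rough sketch of how the resulting link might be wired into a client and targeted from a query; defaultLink, the per-endpoint links, and SOME_QUERY are placeholders assumed to be defined elsewhere:

import { ApolloClient, InMemoryCache, useQuery } from "@apollo/client";

const client = new ApolloClient({
  link: DynamicApolloLink,
  cache: new InMemoryCache(),
});

// Inside a React component: route this operation to the "bApp" endpoint by naming it
// in the query context; anything without a matching name falls through to defaultLink.
const { data } = useQuery(SOME_QUERY, { context: { client: "bApp" } });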
I'm trying to upload a file with GraphQL. While uploading, I get the following error message:
Variable "$file" got invalid value { resolve: [function], reject:
[function], promise: {}, file: { filename:
"insung-yoon-TPvE8qPfMr0-unsplash.jpg", mimetype: "image/jpeg",
encoding: "7bit", createReadStream: [function createReadStream] } };
Upload value invalid.
The only solution I found was to disable uploads in Apollo Server and add graphql-upload:
new ApolloServer({ schema, context, uploads: false })
app.use(graphqlUploadExpress({ maxFileSize: 10000, maxFiles: 10 }))
I had already added this setting, but the issue is still there.
My mutation looks like:
@Mutation(() => ReturnType)
uploadFileAndData(
  @Ctx() ctx: Context,
  @Arg('data', { description: 'File Upload' }) data: MyInputType,
): Promise<ReturnType> {
  return functionWithMagic({ ctx, data })
}
and my InputType looks like:
import { FileUpload, GraphQLUpload } from 'graphql-upload'
...
@InputType({ description: 'Upload data input' })
export class MyInputType {
  @Field(() => GraphQLUpload)
  @Joiful.any()
  file: FileUpload
}
After a lot of searching, I finally found my issue. We are using a monorepo and had installed two different versions of file-upload in two packages. When I changed both packages to the same version, the error was gone.
To add: if you're using the Altair GraphQL client, this error can originate from the client itself.
Try:
1. Close all tabs and start Altair again.
2. Redo step one, then close the query tab and rewrite the query.
I'm using the GitHub GraphQL API and I wrote the following code with react-apollo, but when I paginate, after many requests I get the following errors on the console:
You are using the simple (heuristic) fragment matcher, but your queries contain union or interface types. Apollo Client will not be able to accurately map fragments. To make this error go away, use the IntrospectionFragmentMatcher as described in the docs: https://www.apollographql.com/docs/react/recipes/fragment-matching.html
WARNING: heuristic fragment matching going on!
Missing field name in {
"__typename": "Organization"
}
Missing field avatarUrl in {
"__typename": "Organization"
}
Missing field repositories in {
"__typename": "Organization"
}
Here is the code I wrote:
const SEARCH_USER = gql`
  query($username: String!, $nextPage: String) {
    search(query: $username, type: USER, first: 100, after: $nextPage) {
      pageInfo {
        hasNextPage
        endCursor
      }
      edges {
        node {
          ... on User {
            name
            avatarUrl
            repositories {
              totalCount
            }
          }
        }
      }
    }
  }
`;
handleSubmit = ({ username }) => {
  const { client } = this.props;
  this.setState({
    loading: true,
    searchTerm: username,
  });
  client
    .query({
      query: SEARCH_USER,
      variables: {
        username
      }
    })
    .then(({ data }) => {
      this.setState({
        loading: false,
        searchList: data.search.edges,
        pagination: data.search.pageInfo,
      });
    })
    .catch(err => {
      console.warn(err);
    });
};
Because Apollo does not have enough information about your GraphQL schema, you need to provide it somehow. Apollo has well-written documentation on that topic.
It describes using a script to introspect your GraphQL server in order to get the missing information about unions and interfaces.
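For reference, a minimal sketch of what that introspection-based setup looks like with Apollo Client 2.x, assuming the script has written its result to a fragmentTypes.json file:

import { InMemoryCache, IntrospectionFragmentMatcher } from "apollo-cache-inmemory";
// Generated by the introspection script (or by the codegen plugin mentioned below).
import introspectionQueryResultData from "./fragmentTypes.json";

const fragmentMatcher = new IntrospectionFragmentMatcher({
  introspectionQueryResultData,
});

const cache = new InMemoryCache({ fragmentMatcher });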
To make the process even easier, I wrote a plugin for GraphQL Code Generator that automates everything. There's a chapter called "Fragment Matcher" that I recommend reading.
Either the first, manual solution or the second should fix your problem :)