I want to build a Gatsby + Stripe serverless checkout and I have added products on Stripe, but I can't get gatsby-source-stripe or gatsby-source-stripe-products installed and working. The error:
error UNHANDLED EXCEPTION
Error: /...../node_modules/gatsby-source-stripe/gatsby-node.js:4
exports.sourceNodes = async ({ actions }, { objects = [], secretKey = "" }) => {
^^^^^^^^^^^^
SyntaxError: Invalid shorthand property initializer
My gatsby-config.js entry:
{
  resolve: `gatsby-source-stripe`,
  options: {
    objects: [
      'balance',
      'customers',
      'products',
      'applicationFees',
      'skus',
      'subscriptions'
    ],
    secretKey: process.env.STRIPE_SECRET
  }
},
I have a .env file.
Make sure you have entered test SKUs in Stripe; you can do so by toggling "View test data".
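The SyntaxError itself usually means the Node version running Gatsby is too old to support default values in destructured function parameters, so upgrading Node is worth trying first. Separately, make sure gatsby-config.js actually loads your .env file before the plugin reads process.env.STRIPE_SECRET. A minimal sketch, assuming dotenv is installed:

// gatsby-config.js: load .env before the plugin array reads process.env
require("dotenv").config()

module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-stripe`,
      options: {
        objects: ['products', 'skus'],
        secretKey: process.env.STRIPE_SECRET
      }
    }
  ]
}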
I am trying to create a Dataflow job to index a BigQuery table into Elasticsearch with the Node package @google-cloud/dataflow (v1beta3).
The job works fine when it is created and launched from the Google Cloud console, but I get the following error when I try it in Node:
Error: 3 INVALID_ARGUMENT: (b69ddc3a5ef1c40b): Cannot set worker pool zone. Please check whether the worker_region experiments flag is valid. Causes: (b69ddc3a5ef1cd76): An internal service error occurred.
I tried to specify the experiments parameter in various ways, but I always end up with the same error.
Has anyone managed to get a similar Dataflow job working? Or do you have any information about Dataflow experiments?
Here is the code:
const { JobsV1Beta3Client } = require('@google-cloud/dataflow').v1beta3

const dataflowClient = new JobsV1Beta3Client()

const response = await dataflowClient.createJob({
  projectId: 'myGoogleCloudProjectId',
  location: 'europe-west1',
  job: {
    launch_parameter: {
      jobName: 'indexation-job',
      containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
      parameters: {
        inputTableSpec: 'bigQuery-table-gs-adress',
        connectionUrl: 'elastic-endpoint-url',
        index: 'elastic-index',
        elasticsearchUsername: 'username',
        elasticsearchPassword: 'password'
      }
    },
    environment: {
      experiments: ['worker_region']
    }
  }
})
Thank you very much for your help.
After many attempts, I managed yesterday to find out how to specify the worker region.
It looks like this:
await dataflowClient.createJob({
  projectId,
  location,
  job: {
    name: 'jobName',
    type: 'Batch',
    containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
    pipelineDescription: {
      inputTableSpec: 'bigquery-table',
      connectionUrl: 'elastic-url',
      index: 'elastic-index',
      elasticsearchUsername: 'username',
      elasticsearchPassword: 'password',
      project: projectId,
      appName: 'BigQueryToElasticsearch'
    },
    environment: {
      workerPools: [
        { region: 'europe-west1' }
      ]
    }
  }
})
It's not working yet (I still need to find the correct way to provide the other parameters), but the Dataflow job is now created in the Google Cloud console.
For anyone struggling with this issue: I finally found out how to launch a Dataflow job from a template.
There is a function launchFlexTemplate that works the same way as job creation in the Google Cloud console.
Here is the final, correctly working code:
const { FlexTemplatesServiceClient } = require('@google-cloud/dataflow').v1beta3

const dataflowClient = new FlexTemplatesServiceClient()

const response = await dataflowClient.launchFlexTemplate({
  projectId: 'google-project-id',
  location: 'europe-west1',
  launchParameter: {
    jobName: 'job-name',
    containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
    parameters: {
      apiKey: 'elastic-api-key', // mandatory but not used if you provide username and password
      connectionUrl: 'elasticsearch endpoint',
      index: 'elasticsearch index',
      elasticsearchUsername: 'username',
      elasticsearchPassword: 'password',
      inputTableSpec: 'bigquery source table', // projectId:datasetId.table
      // parameters to upsert the Elasticsearch index
      propertyAsId: 'table index use for elastic _id',
      usePartialUpdate: true,
      bulkInsertMethod: 'INDEX'
    }
  }
})
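If you then want to track the job from Node, the JobsV1Beta3Client in the same package can poll its state. A rough sketch (the exact shape of the launch response is an assumption here):

const { JobsV1Beta3Client } = require('@google-cloud/dataflow').v1beta3

const jobsClient = new JobsV1Beta3Client()

// response comes from the launchFlexTemplate call above; response.job.id is assumed
const [job] = await jobsClient.getJob({
  projectId: 'google-project-id',
  location: 'europe-west1',
  jobId: response.job.id,
})
console.log(job.currentState) // e.g. JOB_STATE_RUNNING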
I am creating a Kubernetes Job from a NodeJS class. After importing the library @kubernetes/client-node, I created an object to use the BatchV1Api module inside a function, which I export to another class where I have defined the body of the Kubernetes Job like this:
// listJobs.js
import { post } from '../kubeClient.js';

const kubeRoute = async (ctx) => {
  const newJob = {
    metadata: {
      name: 'countdown',
    },
    spec: {
      template: {
        metadata: {
          name: 'countdown',
        },
      },
      spec: {
        containers: [
          {
            name: 'counter',
            image: 'centos:7',
            command: 'bin/bash, -c, for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done',
          }],
        restartPolicy: 'Never',
      },
    },
  };
  const kubeClient = post();
  kubeClient.createNamespacedJob('default', newJob);
  ctx.body = {
    // listConfigMap: (await kubeClient.listConfigMapForAllNamespaces()).body,
    listJobs: (await kubeClient.listJobForAllNamespaces()).body,
    // listService: (await kubeClient.listServiceForAllNamespaces()).body,
  };
};

export default kubeRoute;
Then I created a router to handle the POST request:
import Router from 'koa-router'; // assuming koa-router, since ctx is Koa-style
import post from './listJobs.js';

const apiRouter = new Router();
apiRouter.post('/api/v1/newJob', post);
When executing the application and sending a POST request to localhost:3000/api/v1/newJob in Postman, the VS Code terminal shows status code 422 (with some very long output, as in the screenshot) and Postman shows some Kubernetes information in the response body, but no Job or Pod is created.
Does anyone have any idea why there is a 422 code at the end?
Status code 422 Unprocessable Entity means that the server understands the content type and the syntax of the request is correct, but it was unable to process the contained instructions.
In your case, though, the Job manifest looks off.
I'm not an expert in the JavaScript Kubernetes client, but the newJob body looks wrong. The resulting YAML should look like this:
apiVersion: batch/v1
kind: Job
metadata:
  name: countdown
spec:
  template:
    spec:
      containers:
        - name: counter
          image: centos:7
          command: ['/bin/bash', '-c', 'for i in {9..1} ; do echo $i ; done'] # fixed this one for you
      restartPolicy: Never
In your case, the second spec is a child of the outer spec. It should be a child of template, so:
{
  "apiVersion": "batch/v1",
  "kind": "Job",
  "metadata": {
    "name": "countdown"
  },
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "counter",
            "image": "centos:7",
            "command": ["/bin/bash", "-c", "for i in {9..1} ; do echo $i ; done"]
          }
        ],
        "restartPolicy": "Never"
      }
    }
  }
}
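Translated back to the JavaScript client, a corrected sketch of the Job body in listJobs.js (assuming kubeClient wraps BatchV1Api as in the question) would be:

const newJob = {
  apiVersion: 'batch/v1',
  kind: 'Job',
  metadata: { name: 'countdown' },
  spec: {
    template: {
      spec: {
        containers: [
          {
            name: 'counter',
            image: 'centos:7',
            // command must be an array of strings, not one comma-separated string
            command: ['/bin/bash', '-c', 'for i in {9..1} ; do echo $i ; done'],
          },
        ],
        restartPolicy: 'Never',
      },
    },
  },
};

// await the call so API errors (like the 422) surface instead of being swallowed
await kubeClient.createNamespacedJob('default', newJob);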
What am I doing wrong or missing? When trying to connect with my project ID from https://infura.io/, I am getting the following error after running:
$ npx hardhat run scripts/deploy.js --network mumbai
Error:
ProviderError: project ID does not have access to polygon l2
Here is my hardhat.config.js:
require("#nomiclabs/hardhat-waffle");
require("dotenv").config();
const privateKey = process.env.PRIVATE_KEY;
const projectId = process.env.PROJECT_ID;
if (privateKey.error) {
throw privateKey.error;
}
if (projectId.error) {
throw projectId.error;
}
module.exports = {
  networks: {
    hardhat: {
      chainId: 1337,
    },
    mumbai: {
      url: `https://polygon-mumbai.infura.io/v3/${projectId}`,
      accounts: [privateKey],
    },
    mainnet: {
      url: `https://arbitrum-mainnet.infura.io/v3/${projectId}`,
      accounts: [privateKey],
    },
    matic: {
      url: "https://rpc-mainnet.maticvigil.com",
      accounts: [privateKey],
    },
  },
  solidity: "0.8.4",
};
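For reference, the .env file this config reads just holds those two values (placeholders here, not real credentials):

# .env (keep this file out of version control)
PRIVATE_KEY=your-wallet-private-key
PROJECT_ID=your-infura-project-id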
Actually, I figured it out: it turned out I needed to enable the Polygon beta inside my account settings and add my billing information on infura.io.
Alchemy (https://dashboard.alchemyapi.io/) is an alternative for interacting with the Polygon network, and it needs no billing information.
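For example, the mumbai entry in hardhat.config.js could point at an Alchemy endpoint instead (a sketch; ALCHEMY_API_KEY is an assumed env variable):

mumbai: {
  url: `https://polygon-mumbai.g.alchemy.com/v2/${process.env.ALCHEMY_API_KEY}`, // ALCHEMY_API_KEY is assumed
  accounts: [privateKey],
},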
I'm new to both Strapi and Mongoose, so I apologise if this is a stupid question.
Following the docs (https://strapi.io/documentation/developer-docs/latest/development/backend-customization.html) I'm trying to create a custom query in Strapi that returns the whole people collection sorted by name descending. But when I hit the endpoint I get a 500 error, and the terminal shows CastError: Cast to ObjectId failed for value "alldesc" at path "_id" for model "people".
Here's my code:
services/people.js
module.exports = {
  findByNameDesc() {
    const result = strapi
      .query("people")
      .model.find()
      .sort({ name: "descending" });
    return result.map((entry) => entry.toObject());
  },
};
controllers/people.js
const { sanitizeEntity } = require("strapi-utils"); // needed for sanitizeEntity below

module.exports = {
  async alldesc(ctx) {
    const entities = await strapi.services.people.findByNameDesc(ctx);
    return entities.map((entity) =>
      sanitizeEntity(entity, { model: strapi.models.people })
    );
  },
};
config/routes.json
{
"routes": [
...
{
"method": "GET",
"path": "/people/alldesc",
"handler": "people.alldesc",
"config": {
"policies": []
}
}
]
}
What am I doing wrong?
UPDATE: even when removing .sort({ name: "descending" }); from the query, the error is still there, so I'm thinking that maybe there's something wrong in the way I use the service in the controller?
The problem was in routes.json. It seems Strapi doesn't like the slash: most likely the default /people/:id route matches first, so "alldesc" gets cast to an ObjectId, which explains the CastError. Instead of /people/alldesc I tried /people-alldesc and it worked.
Also, in the service there is no need for return result.map((entry) => entry.toObject()); (that causes another error); simply doing return result works.
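Putting both fixes together, a working sketch looks like this:

// services/people.js
module.exports = {
  findByNameDesc() {
    return strapi
      .query("people")
      .model.find()
      .sort({ name: "descending" });
  },
};

And the route entry in config/routes.json, with the dash instead of the slash:

{
  "method": "GET",
  "path": "/people-alldesc",
  "handler": "people.alldesc",
  "config": {
    "policies": []
  }
}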
I've made a basic implementation of ApolloClient:
import { ApolloClient } from "apollo-client"; // imports assumed from the Apollo Client 2.x packages
import { HttpLink } from "apollo-link-http";
import { InMemoryCache } from "apollo-cache-inmemory";
import gql from "graphql-tag";

const gqlClient = new ApolloClient({
  connectToDevTools: true,
  link: new HttpLink({
    uri: "/api",
  }),
  cache: new InMemoryCache(),
  resolvers: {
    Group: {
      icon: () => "noIcon",
    },
  },
  typeDefs: gql`
    extend type Group {
      icon: String!
    }
  `,
});
The only fancy thing is the one resolver and type def - both of which are to support an icon field for groups (an upcoming feature).
I then try to query the server with the following:
gqlClient.query({
  query: gql`{
    groups {
      name
      icon
    }
  }`,
})
.then(console.log);
and get a big ol' error:
bundle.esm.js:63 Uncaught (in promise) Error: GraphQL error: Cannot query field "icon" on type "Group".
at new ApolloError (bundle.esm.js:63)
at bundle.esm.js:1247
at bundle.esm.js:1559
at Set.forEach (<anonymous>)
at bundle.esm.js:1557
at Map.forEach (<anonymous>)
at QueryManager../node_modules/apollo-client/bundle.esm.js.QueryManager.broadcastQueries (bundle.esm.js:1555)
at bundle.esm.js:1646
at Object.next (Observable.js:322)
at notifySubscription (Observable.js:135)
Running the same query without asking for icon works perfectly. I'm not quite sure what I'm doing wrong. How can I mock icon and fix this error?
I'm only running Apollo Client; do I need to run Apollo Server to get all of the features? The outgoing request doesn't seem to include any of my type def information, so I'm not sure how having Apollo Server would make a difference.
Handling @client fields with resolvers
Any field backed by a local resolver has to be marked with the @client directive in the query; otherwise Apollo Client sends it to the server, which rejects it because icon is not in the server's schema:
gqlClient.query({
  query: gql`
    {
      groups {
        name
        icon @client
      }
    }
  `,
});
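With the directive in place, icon is resolved locally by the Group.icon resolver defined in the client setup, so the query succeeds without running Apollo Server; each group in the result should come back with icon: "noIcon".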