What am I doing wrong or missing? When trying to connect with my project ID from https://infura.io/, I get the following error after running:
$ npx hardhat run scripts/deploy.js --network mumbai
error:
ProviderError: project ID does not have access to polygon l2
Here is my hardhat.config.js:
require("#nomiclabs/hardhat-waffle");
require("dotenv").config();
const privateKey = process.env.PRIVATE_KEY;
const projectId = process.env.PROJECT_ID;
if (privateKey.error) {
throw privateKey.error;
}
if (projectId.error) {
throw projectId.error;
}
module.exports = {
  networks: {
    hardhat: {
      chainId: 1337,
    },
    mumbai: {
      url: `https://polygon-mumbai.infura.io/v3/${projectId}`,
      accounts: [privateKey],
    },
    mainnet: {
      url: `https://arbitrum-mainnet.infura.io/v3/${projectId}`,
      accounts: [privateKey],
    },
    matic: {
      url: "https://rpc-mainnet.maticvigil.com",
      accounts: [privateKey],
    },
  },
  solidity: "0.8.4",
};
Actually, I figured it out: it turned out I needed to enable the Polygon beta inside my account settings on infura.io and add my billing information.
Alchemy (https://dashboard.alchemyapi.io/) is an alternative way to interact with the Polygon network, and it does not require billing information.
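For reference, switching the mumbai network to Alchemy would only change the URL; this is a minimal sketch, assuming a hypothetical ALCHEMY_API_KEY entry in .env and Alchemy's usual polygon-mumbai endpoint format:

// hardhat.config.js (sketch): Mumbai via Alchemy instead of Infura.
// ALCHEMY_API_KEY is an assumed .env variable, not from the original post;
// privateKey is the same value read from .env as in the config above.
const alchemyApiKey = process.env.ALCHEMY_API_KEY;

module.exports = {
  networks: {
    mumbai: {
      url: `https://polygon-mumbai.g.alchemy.com/v2/${alchemyApiKey}`,
      accounts: [privateKey],
    },
  },
  solidity: "0.8.4",
};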
I am trying to create a Dataflow job to index a BigQuery table into Elasticsearch with the Node package @google-cloud/dataflow (v1beta3).
The job works fine when it is created and launched from the Google Cloud console, but I get the following error when I try it in Node:
Error: 3 INVALID_ARGUMENT: (b69ddc3a5ef1c40b): Cannot set worker pool zone. Please check whether the worker_region experiments flag is valid. Causes: (b69ddc3a5ef1cd76): An internal service error occurred.
I tried to specify the experiments parameter in various ways, but I always end up with the same error.
Has anyone managed to get a similar Dataflow job working? Or do you have any information about Dataflow experiments?
Here is the code:
const { JobsV1Beta3Client } = require('@google-cloud/dataflow').v1beta3

const dataflowClient = new JobsV1Beta3Client()

const response = await dataflowClient.createJob({
  projectId: 'myGoogleCloudProjectId',
  location: 'europe-west1',
  job: {
    launch_parameter: {
      jobName: 'indexation-job',
      containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
      parameters: {
        inputTableSpec: 'bigQuery-table-gs-adress',
        connectionUrl: 'elastic-endpoint-url',
        index: 'elastic-index',
        elasticsearchUsername: 'username',
        elasticsearchPassword: 'password'
      }
    },
    environment: {
      experiments: ['worker_region']
    }
  }
})
Thank you very much for your help.
After many attempts, I managed yesterday to find out how to specify the worker region.
It looks like this:
await dataflowClient.createJob({
  projectId,
  location,
  job: {
    name: 'jobName',
    type: 'Batch',
    containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
    pipelineDescription: {
      inputTableSpec: 'bigquery-table',
      connectionUrl: 'elastic-url',
      index: 'elastic-index',
      elasticsearchUsername: 'username',
      elasticsearchPassword: 'password',
      project: projectId,
      appName: 'BigQueryToElasticsearch'
    },
    environment: {
      workerPools: [
        { region: 'europe-west1' }
      ]
    }
  }
})
It's not working yet; I still need to find the correct way to provide the other parameters, but the Dataflow job now gets created in the Google Cloud console.
For anyone who is struggling with this issue, I finally found out how to launch a Dataflow job from a template.
There is a function launchFlexTemplate that works the same way as the job creation in the Google Cloud console.
Here is the final function working correctly:
const { FlexTemplatesServiceClient } = require('@google-cloud/dataflow').v1beta3

const dataflowClient = new FlexTemplatesServiceClient()

const response = await dataflowClient.launchFlexTemplate({
  projectId: 'google-project-id',
  location: 'europe-west1',
  launchParameter: {
    jobName: 'job-name',
    containerSpecGcsPath: 'gs://dataflow-templates-europe-west1/latest/flex/BigQuery_to_Elasticsearch',
    parameters: {
      apiKey: 'elastic-api-key', // mandatory, but not used if you provide a username and password
      connectionUrl: 'elasticsearch endpoint',
      index: 'elasticsearch index',
      elasticsearchUsername: 'username',
      elasticsearchPassword: 'password',
      inputTableSpec: 'bigquery source table', // projectId:datasetId.table
      // parameters to upsert the elasticsearch index
      propertyAsId: 'table column to use for the elastic _id',
      usePartialUpdate: true,
      bulkInsertMethod: 'INDEX'
    }
  }
})
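If you also want to check on the job afterwards, the same package exposes a JobsV1Beta3Client. The following is an untested sketch; the 'launched-job-id' placeholder stands for the id of the job that launchFlexTemplate created:

// Sketch: poll the state of a launched Dataflow job (assumes @google-cloud/dataflow v1beta3).
const { JobsV1Beta3Client } = require('@google-cloud/dataflow').v1beta3

const jobsClient = new JobsV1Beta3Client()

const [job] = await jobsClient.getJob({
  projectId: 'google-project-id',
  location: 'europe-west1',
  jobId: 'launched-job-id' // id of the job created by launchFlexTemplate
})
console.log(job.currentState) // e.g. JOB_STATE_RUNNING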
I am trying to set up Strapi and Cloudinary.
I watched two videos and followed them exactly, but I get an error. Can you tell me why?
Middleware "strapi::session": App keys are required. Please set
app.keys in config/server.js (ex: keys: ['myKeyA', 'myKeyB'])
plugins.js
module.exports = ({ env }) => ({
  // ...
  upload: {
    config: {
      provider: 'cloudinary',
      providerOptions: {
        cloud_name: env('CLOUDINARY_NAME'),
        api_key: env('CLOUDINARY_KEY'),
        api_secret: env('CLOUDINARY_SECRET'),
      },
      actionOptions: {
        upload: {},
        delete: {},
      },
    },
  },
  // ...
});
.env
HOST=0.0.0.0
PORT=1337
CLOUDINARY_NAME="my data"
CLOUDINARY_KEY="my data"
CLOUDINARY_SECRET="my data"
JWT_SECRET=my data
server.js
module.exports = ({ env }) => ({
  host: env('HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  app: {
    keys: env.array('APP_KEYS'),
  },
});
middlewares.js
module.exports = [
  'strapi::errors',
  'strapi::security',
  'strapi::cors',
  'strapi::poweredBy',
  'strapi::logger',
  'strapi::query',
  'strapi::body',
  'strapi::session',
  'strapi::favicon',
  'strapi::public',
];
admin.js
module.exports = ({ env }) => ({
  auth: {
    secret: env('ADMIN_JWT_SECRET'),
  },
  apiToken: {
    salt: env('API_TOKEN_SALT'),
  },
});
It looks like you don't have APP_KEYS defined in your .env file (assuming you're sharing the full file), along with some other environment variables you're accessing through Strapi's built-in env utility.
In your .env you should have a value that follows this format:
APP_KEYS=['key1','key2']
You can read more about environment config from Strapi's development docs:
https://docs.strapi.io/developer-docs/latest/setup-deployment-guides/configurations/optional/environment.html#strapi-s-environment-variables
I wrote a quick Kotlin script to generate the necessary environment variables for Strapi. For security reasons you should run it locally, but you can of course run it on play.kotlin.org. You can put the output of this script in your .env file.
import java.security.SecureRandom
import java.util.Base64

fun main() {
    val rnd = SecureRandom()

    val appKeys = List(4) {
        val buffer = ByteArray(16)
        rnd.nextBytes(buffer)
        Base64.getEncoder().encodeToString(buffer)
    }

    val buffer = ByteArray(16)
    rnd.nextBytes(buffer)
    var jwtSecret = buffer.joinToString("") { String.format("%02x", it) }
    jwtSecret = jwtSecret.substring(0, 8) + "-" + jwtSecret.substring(8, 12) + "-" +
        jwtSecret.substring(12, 16) + "-" + jwtSecret.substring(16)

    rnd.nextBytes(buffer)
    val apiTokenSalt = buffer.joinToString("") { String.format("%02x", it) }

    println("APP_KEYS=" + appKeys.joinToString(","))
    println("JWT_SECRET=$jwtSecret")
    println("API_TOKEN_SALT=$apiTokenSalt")
}
To resolve the issue, create a .env file, which should be a copy of .env.example. Depending on your version of Strapi, you may need to add more environment variables. Then use Node in the terminal to run crypto.randomBytes(16).toString('base64'); each run prints a generated key to use in the .env.
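For instance, a small Node script along these lines (a sketch using only the built-in crypto module; the variable names mirror the .env example below) prints every secret at once:

// generate-keys.js (sketch): prints freshly generated secrets to paste into .env
const crypto = require('crypto');

const key = () => crypto.randomBytes(16).toString('base64');

console.log(`APP_KEYS=['${key()}', '${key()}']`);
console.log(`JWT_SECRET=${key()}`);
console.log(`API_TOKEN_SALT=${key()}`);
console.log(`ADMIN_JWT_SECRET=${key()}`);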
Here's an example of a working .env file; just swap each generatedKey placeholder for one of your generated keys.
HOST=0.0.0.0
PORT=1337
APP_KEYS=['generatedKey1', 'generatedKey2']
JWT_SECRET=generatedKey3
API_TOKEN_SALT=generatedKey4
ADMIN_JWT_SECRET=generatedKey5
I have been working with AWS CDK and I think it's a great way to work with AWS. Recently I ran into a problem which I am unable to resolve. I went over documentation and other resources, but none explained how to do this in CDK. I have two CodePipelines, and each pipeline deploys to either staging or production. Now I want a manual approval stage before the code gets deployed to production. I'll show my simple code below for reference:
import * as cdk from '@aws-cdk/core';
import { AppsPluginsCdkStack } from './apps-plugins-services/stack';
import {
  CodePipeline,
  ShellStep,
  CodePipelineSource
} from '@aws-cdk/pipelines';
class ApplicationStage extends cdk.Stage {
  constructor(scope: cdk.Construct, id: string) {
    super(scope, id);
    // Instantiate the stack imported above
    new AppsPluginsCdkStack(this, 'cdkStack');
  }
}
class ProductionPipelineStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props);
    // Create the CDK production pipeline
    const prodPipeline = new CodePipeline(this, 'ProductionPipeline', {
      pipelineName: 'ProdPipeline',
      synth: new ShellStep('ProdSynth', {
        // Use a connection created using the AWS console to authenticate to GitHub
        input: CodePipelineSource.connection(
          'fahigm/cdk-repo',
          'develop',
          {
            connectionArn: 'AWS-CONNECTION-ARN' // Created using the AWS console
          }
        ),
        commands: ['npm ci', 'npm run build', 'npx cdk synth']
      })
    });
    prodPipeline.addStage(new ApplicationStage(this, 'Production'));
  }
}
class StagingPipelineStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props);
    // Create the CDK staging pipeline
    const stagePipeline = new CodePipeline(this, 'StagingPipeline', {
      pipelineName: 'StagePipeline',
      synth: new ShellStep('StageSynth', {
        // Use a connection created using the AWS console to authenticate to GitHub
        input: CodePipelineSource.connection(
          'fahigm/cdk-repo',
          'master',
          {
            connectionArn: 'AWS-CONNECTION-ARN' // Created using the AWS console
          }
        ),
        commands: ['npm ci', 'npm run build', 'npx cdk synth']
      })
    });
    stagePipeline.addStage(new ApplicationStage(this, 'Staging'));
  }
}
const app = new cdk.App();

new ProductionPipelineStack(app, 'ProductionCDKPipeline', {
  env: { account: 'ACCOUNT', region: 'REGION' }
});
new StagingPipelineStack(app, 'StagingCDKPipeline', {
  env: { account: 'ACCOUNT', region: 'REGION' }
});

app.synth();
Now I don't know where to go from here. The documentation only talks about how to do it from the console, but I want to add it in code. I would really appreciate any help!
The CDK documentation doesn't actually talk about how to do it from the console; it talks about how to do it with CDK, and provides examples. Here's an example straight from the docs:
The following example shows both an automated approval in the form of a ShellStep, and a manual approval in the form of a ManualApprovalStep added to the pipeline. Both must pass in order to promote from the PreProd to the Prod environment:
declare const pipeline: pipelines.CodePipeline;

const preprod = new MyApplicationStage(this, 'PreProd');
const prod = new MyApplicationStage(this, 'Prod');

pipeline.addStage(preprod, {
  post: [
    new pipelines.ShellStep('Validate Endpoint', {
      commands: ['curl -Ssf https://my.webservice.com/'],
    }),
  ],
});
pipeline.addStage(prod, {
  pre: [
    new pipelines.ManualApprovalStep('PromoteToProd'),
  ],
});
Here's the documentation about the manual approval step specifically: https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_pipelines.ManualApprovalStep.html
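Applied to the ProductionPipelineStack from the question, this boils down to one extra import and an options object on addStage; a sketch, assuming the same construct names as above:

// Sketch: gate the Production stage behind a manual approval.
// Add ManualApprovalStep to the existing import from '@aws-cdk/pipelines'.
import { ManualApprovalStep } from '@aws-cdk/pipelines';

prodPipeline.addStage(new ApplicationStage(this, 'Production'), {
  pre: [
    new ManualApprovalStep('PromoteToProd'), // the pipeline pauses here until approved
  ],
});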
I need to create a new API in order to send an email with SendGrid. I followed the official docs and other examples, so I did:
config/plugins.js
module.exports = ({ env }) => ({
  email: {
    provider: 'sendgrid',
    providerOptions: {
      apiKey: env('SENDGRID_API_KEY'),
    },
    settings: {
      defaultFrom: 'juliasedefdjian@strapi.io',
      defaultReplyTo: 'juliasedefdjian@strapi.io',
    },
  },
});
Then I created a new folder named email in the api folder.
api/email/config/routes.json
{
  "routes": [
    {
      "method": "POST",
      "path": "/email",
      "handler": "email.index",
      "config": {
        "policies": []
      }
    }
  ]
}
Finally, under api/email/controllers/email.js:
module.exports = {
  index: async (ctx) => {
    // build the email with data from ctx.request.body
    // (the global strapi instance is available inside controllers)
    await strapi.plugins['email'].services.email.send({
      to: 'email@email.com',
      from: 'email@email.com',
      replyTo: 'email@email.com',
      subject: 'test',
      text: 'test',
    });
    ctx.send('Email sent!');
  },
};
The real problem is that the /email API returns a 403, even though I enabled the route from the dashboard.
I have built many APIs with Strapi, but I have never sent emails with it.
Is there a way to add permissions from code? I have to say that if I use a GET method it works, but I need to do it with a POST method, which doesn't. Did I miss something?
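One avenue worth checking (a sketch, not a verified fix: the model and field names below follow Strapi v3's users-permissions conventions and the file layout above, so double-check them against your version): route permissions live in the database, so they can be enabled programmatically from config/functions/bootstrap.js.

// config/functions/bootstrap.js (sketch): enable POST /email for the Public role.
// 'email' / 'index' are assumed from api/email/controllers/email.js above.
module.exports = async () => {
  const publicRole = await strapi
    .query('role', 'users-permissions')
    .findOne({ type: 'public' });

  await strapi.query('permission', 'users-permissions').update(
    { role: publicRole.id, type: 'application', controller: 'email', action: 'index' },
    { enabled: true }
  );
};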
I want to build a Gatsby Stripe serverless checkout. I added products on Stripe, but I can't install gatsby-source-stripe or gatsby-source-stripe-products. The error:
error UNHANDLED EXCEPTION
Error: /...../node_modules/gatsby-source-stripe/gatsby-node.js:4
exports.sourceNodes = async ({ actions }, { objects = [], secretKey = "" }) => {
^^^^^^^^^^^^
SyntaxError: Invalid shorthand property initializer
gatsby-config.js
{
  resolve: `gatsby-source-stripe`,
  options: {
    objects: [
      'balance',
      'customers',
      'products',
      'applicationFees',
      'skus',
      'subscriptions'
    ],
    secretKey: process.env.STRIPE_SECRET
  }
},
I have a .env file.
Make sure you have entered test SKUs in Stripe. You can do so by toggling the View test data switch.