I'm trying to create an S3 Bucket and a corresponding resource policy in the same serverless.yml so that both are established when the new stack is created.
However, I am running into an error on build:
Unresolved resource dependencies [CUSTOM-BUCKETNAME] in the Resources block of the template
Is there a way to create the policy synchronously so that it waits for the bucket to be created first? I'm setting this up in the resources section of my yml:
resources:
  Resources:
    Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: CUSTOM-BUCKETNAME
    BucketPolicy:
      Type: AWS::S3::BucketPolicy
      Properties:
        Bucket:
          Ref: CUSTOM-BUCKETNAME
        PolicyDocument:
          Statement:
            - Principal:
                Service: "ses.amazonaws.com"
              Action:
                - s3:PutObject
              Effect: Allow
              Sid: "AllowSESPuts"
              Resource:
                Fn::Join: ['', ['arn:aws:s3:::', Ref: "CUSTOM-BUCKETNAME", '/*'] ]
Above is a small snippet of my yml configuration.
After using DependsOn, I'm still getting the same error. Worth noting: the resource dependency refers to the dynamic name (CUSTOM-BUCKETNAME) of the bucket.
resources:
  Resources:
    Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: CUSTOM-BUCKETNAME
    BucketPolicy:
      Type: AWS::S3::BucketPolicy
      DependsOn: Bucket
      Properties:
        Bucket:
          Ref: CUSTOM-BUCKETNAME
        PolicyDocument:
          Statement:
            - Principal:
                Service: "ses.amazonaws.com"
              Action:
                - s3:PutObject
              Effect: Allow
              Sid: "AllowSESPuts"
              Resource:
                Fn::Join: ['', ['arn:aws:s3:::', Ref: "CUSTOM-BUCKETNAME", '/*'] ]
CUSTOM-BUCKETNAME is never explicitly hardcoded in the yml itself; it's a dynamically generated name built with template literals.
The issue is occurring in your policy because your bucket name is a literal value (BucketName: CUSTOM-BUCKETNAME), not a referenced parameter. That means you're not referencing the actual resource in the policy statement when you use Bucket: Ref: CUSTOM-BUCKETNAME, since Ref expects a logical resource ID or parameter name and nothing in the template is named CUSTOM-BUCKETNAME.
Instead, either change the bucket name to reference the same parameter (BucketName: Ref: CUSTOM-BUCKETNAME) or change the policy to reference the resource: Bucket: Ref: Bucket.
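For example, keeping the generated name on the bucket and pointing everything else at the logical resource ID (a sketch of the second option; Ref on an AWS::S3::Bucket resource returns the bucket name, so the ARN join works too):

resources:
  Resources:
    Bucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: CUSTOM-BUCKETNAME
    BucketPolicy:
      Type: AWS::S3::BucketPolicy
      Properties:
        Bucket:
          Ref: Bucket # logical ID of the resource above, not the bucket name
        PolicyDocument:
          Statement:
            - Sid: "AllowSESPuts"
              Effect: Allow
              Principal:
                Service: "ses.amazonaws.com"
              Action:
                - s3:PutObject
              Resource:
                Fn::Join: ['', ['arn:aws:s3:::', Ref: Bucket, '/*']]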
The CloudFormation DependsOn attribute should solve your problem:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-dependson.html
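Note that DependsOn takes the logical resource ID (Bucket here), never the physical bucket name, so a minimal sketch against the template above would be:

BucketPolicy:
  Type: AWS::S3::BucketPolicy
  DependsOn: Bucket # the logical ID, not CUSTOM-BUCKETNAME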
I am having an issue while creating an AWS::AppSync::Resolver with CloudFormation in the Serverless Framework, using a JavaScript resolver.
My JavaScript resolver code lives at /src/resolvers/jsResolver.js in the root dir, and I have attached it to the AWS::AppSync::Resolver CloudFormation resource in the code block below. I have also installed the npm package for the AppSync utils in my package.json.
import { util } from '@aws-appsync/utils';

export function request(ctx) {
  const { source, args } = ctx;
  return {
    operation: 'Invoke',
    payload: { field: ctx, arguments: args, source },
  };
}

export function response(ctx) {
  // raise the error only when the invocation actually failed;
  // an unconditional util.error() would abort every request
  if (ctx.error) {
    util.error("Failed to fetch relatedPosts", "LambdaFailure", ctx.prev.result);
  }
  return ctx.result;
}
My AWS::AppSync::Resolver CloudFormation is below in the YML file. I have used Code since it is mandatory once the runtime is declared as APPSYNC_JS.
AppSyncJsResolver:
  Type: AWS::AppSync::Resolver
  Properties:
    ApiId: !GetAtt Graphql.ApiId
    Kind: PIPELINE
    Code: ./src/resolvers/jsResolver.js # <-- here my code is breaking with "contains one or more errors"
    TypeName: Query
    FieldName: getInfo
    Runtime:
      Name: APPSYNC_JS
      RuntimeVersion: 1.0.0
    PipelineConfig:
      Functions:
        - !GetAtt AppSyncFunction.FunctionId
I tried the above code as per the AWS CloudFormation documentation for AppSync, where the following property of AWS::AppSync::Resolver is documented for creating a JavaScript resolver, and which I have included in my AWS::AppSync::Resolver:
Code
The resolver code that contains the request and response functions. When code is used, the runtime is required. The runtime value must be APPSYNC_JS.
Required: No
Type: String
So I've tried this and can't find enough solutions regarding JavaScript resolvers; everything available is specific to VTL templates.
With the above code my CloudFormation build failed with the following error:
An error occurred: AppSyncJSResolver- The code contains one or more errors. (Service: AWSAppSync; Status Code: 400; Error Code: BadRequestException; Request ID: 0245d64d-...; Proxy: null)
AppSyncJSResolver is my AWS::AppSync::Resolver in the code above, and its Code property is what is giving me the error. I have checked multiple sources and I cannot find any errors in my JavaScript resolver file /src/resolvers/jsResolver.js, which I declared in the Code property of AppSyncJSResolver. I am not sure why I'm getting this error; any help would be great.
To answer my own question: I resolved it in two ways.
1. You can write the whole resolver code in the YML CloudFormation, inside the Code property, like below. Make sure your resolver code is nested under the Code property, and use the "|" block scalar indicator (multi-line code) right after the Code key.
AppSyncJsResolver:
  Type: AWS::AppSync::Resolver
  Properties:
    ApiId: !GetAtt Graphql.ApiId
    Kind: PIPELINE
    Code: |
      import { util } from '@aws-appsync/utils';

      export function request(ctx) {
        const { source, args } = ctx;
        return {
          operation: 'Invoke',
          payload: { field: ctx, arguments: args, source },
        };
      }

      export function response(ctx) {
        // raise the error only when the invocation actually failed
        if (ctx.error) {
          util.error("Failed to fetch relatedPosts", "LambdaFailure", ctx.prev.result);
        }
        return ctx.result;
      }
    TypeName: Query
    FieldName: getInfo
    Runtime:
      Name: APPSYNC_JS
      RuntimeVersion: 1.0.0
    PipelineConfig:
      Functions:
        - !GetAtt AppSyncFunction.FunctionId
2. If you want to keep your business logic out of the YML file, you can use the CodeS3Location property in your JavaScript resolver instead, like below.
First create a bucket in S3 and store your JavaScript resolver file (containing your resolver code) in the bucket. Make sure you give AppSync enough IAM permission to access your S3 bucket; a sketch of the read permission follows the template below.
After the above step you can rewrite your YML CloudFormation like this:
AppSyncJsResolver:
  Type: AWS::AppSync::Resolver
  Properties:
    ApiId: !GetAtt Graphql.ApiId
    Kind: PIPELINE
    CodeS3Location: s3://my-bucket-name/my-filename.js
    TypeName: Query
    FieldName: getInfo
    Runtime:
      Name: APPSYNC_JS
      RuntimeVersion: 1.0.0
    PipelineConfig:
      Functions:
        - !GetAtt AppSyncFunction.FunctionId
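For the IAM permission mentioned above, a minimal sketch of the kind of read statement required (the exact role it attaches to depends on your setup; the bucket and file names are the same placeholders as in the template):

# hypothetical policy document granting read access to the resolver code object
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Effect: Allow
      Action:
        - s3:GetObject
      Resource: arn:aws:s3:::my-bucket-name/my-filename.js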
Hope this helps others; I will contribute more about JavaScript resolvers so it is easier for others to find solutions to more complex problems and gather more resources on the topic. Thanks to @Graham Hesketh for your suggestions.
I have built a GitHub Action using the JavaScript "method".
I have the following action.yaml file:
name: 'xxx Action'
author: 'xxx'
description: 'Runs `xxx` in your repository.'
branding:
  icon: 'code'
  color: 'purple'
inputs:
  token:
    description: 'Txxxxunt'
    required: true
  groupId:
    description: 'xxxx'
    required: true
runs:
  using: 'node16'
  main: './build/index.js'
This is what my build folder looks like:
My index.js must use the keytar.node file. Is there a way to include this extra file in the action as well?
I know that GitHub will run the build/index.js file, but it will fail because the extra file will be missing.
Is there a way to include this file with the JavaScript method only? I know I could do the same with the Docker method.
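For what it's worth, a hedged sketch of one possible approach (an assumption, not a confirmed answer): a node16 action runs from the files committed in the action repository, so a native binary copied next to build/index.js during the build, and committed, ships with the action. The copy step and keytar path below are assumptions about a typical node-gyp layout, e.g. cp node_modules/keytar/build/Release/keytar.node build/ at the end of the build script. The entry point can then load the addon relative to itself:

// build/index.js (hypothetical): load the native addon from the action's own folder
const path = require('path');

// require() can load a compiled .node addon directly; keytar.node is assumed
// to have been copied into build/ alongside index.js and committed
const keytar = require(path.join(__dirname, 'keytar.node'));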
I am trying to run my Swagger file and I keep getting this error:
Error: ptr must be a JSON Pointer
at pathFromPtr (/Users/salma/Desktop/swaggerIntegration/node_modules/json-refs/index.js:1128:11)
at /Users/salma/Desktop/swaggerIntegration/node_modules/json-refs/index.js:293:45
at process.runNextTicks [as _tickCallback] (internal/process/next_tick.js:47:5)
at Function.Module.runMain (internal/modules/cjs/loader.js:800:11)
at executeUserCode (internal/bootstrap/node.js:526:15)
at startMainThreadExecution (internal/bootstrap/node.js:439:3)
And here is my Swagger file:
swagger: "2.0"
info:
version: "0.0.1"
title: employees DB
# during dev, should point to your local machine
host: localhost:10010
# basePath prefixes all resource paths
basePath: /
#
schemes:
# tip: remove http to make production-grade
- http
- https
# format of bodies a client can send (Content-Type)
consumes:
- application/json
# format of the responses to the client (Accepts)
produces:
- application/json
paths:
/employees:
# binds a127 app logic to a route
x-swagger-router-controller: employees
get:
description: Returns 'Hello' to the caller
# used as the method name of the controller
operationId: index
parameters:
- name: name
in: query
description: The name of the person to whom to say hello
required: false
type: string
responses:
"200":
description: Success
schema:
# a pointer to a definition
$ref: "#/definitions/employeesListBody"
# responses may fall through to errors
default:
description: Error
schema:
$ref: "#/definitions/ErrorResponse"
/swagger:
x-swagger-pipe: swagger_raw
# complex objects have schema definitions
definitions:
employeesListBody:
required:
- employees
properties:
employees:
type: array
items:
$ref: "#definitions/Employee"
Employee:
required:
- name
- email
- position
properties:
name:
type: string
email:
type: string
position:
type: string
age:
type: integer
minimum: 20
ErrorResponse:
required:
- message
properties:
message:
type: string
Any idea how to solve that?
Is there an easier way to prettify the Swagger file? I get many parsing errors.
Also, does anyone have a good example of using Swagger with Express and MongoDB?
Many thanks.
One of the references is missing a / after #:
$ref: "#definitions/Employee"
Change it to:
$ref: "#/definitions/Employee"
# ^
If you paste your definition into http://editor.swagger.io, it shows where exactly the error is.
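As for the "many parsing errors" point, you can also validate the file locally before running it. A minimal sketch (not from the original answer) using the @apidevtools/swagger-parser package, with a hypothetical file path:

// validate.js: validate the Swagger file and print any $ref/pointer errors
const SwaggerParser = require('@apidevtools/swagger-parser');

SwaggerParser.validate('./api/swagger/swagger.yaml')
  .then(() => console.log('Swagger file is valid'))
  .catch((err) => console.error(err.message));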
REST API services are available in a Node.js Express app, and the routes are specified properly. I am planning to use Node.js to auto-generate the Swagger YAML (OpenAPI) specification. Writing Swagger YAML manually is tough when you have many endpoints, and every newly added endpoint needs updates everywhere.
Example: consider a REST API service containing two basic endpoints, specified in the following YAML:
swagger: "2.0"
info:
version: "0.0.1"
title: Hello World App
# during dev, should point to your local machine
host: localhost:10010
# basePath prefixes all resource paths
basePath: /
#
schemes:
# tip: remove http to make production-grade
- http
- https
# format of bodies a client can send (Content-Type)
consumes:
- application/json
# format of the responses to the client (Accepts)
produces:
- application/json
paths:
/hello:
# binds a127 app logic to a route
x-swagger-router-controller: hello_world
get:
description: Returns 'Hello' to the caller
# used as the method name of the controller
operationId: hello
parameters:
- name: name
in: query
description: The name of the person to whom to say hello
required: false
type: string
responses:
"200":
description: Success
schema:
# a pointer to a definition
$ref: "#/definitions/HelloWorldResponse"
# responses may fall through to errors
default:
description: Error
schema:
$ref: "#/definitions/ErrorResponse"
/try:
x-swagger-router-controller: hello_world
get:
description: Returns 'Try' to the caller
operationId: try
parameters:
- name: name
in: query
description: The content to try
required: false
type: string
responses:
"200":
description: Success
schema:
$ref: "#/definitions/HelloWorldResponse"
default:
description: Error
schema:
$ref: "#/definitions/ErrorResponse"
/swagger:
x-swagger-pipe: swagger_raw
# complex objects have schema definitions
definitions:
HelloWorldResponse:
required:
- message
properties:
message:
type: string
ErrorResponse:
required:
- message
properties:
message:
type: string
My doubt is: how can I generate this YAML in Node.js? Is there a specific NPM package or Swagger module available to generate this kind of YAML, or is writing the code on my own preferable?
I am trying to work out the better way to do this. I appreciate all suggestions, and thanks in advance!
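For illustration only, a minimal sketch of one commonly used approach, swagger-jsdoc plus js-yaml (both package choices and the file layout are assumptions; the question does not name any package). Route files carry JSDoc @swagger annotations, and the spec is assembled and dumped to YAML:

// generate-swagger.js (hypothetical): build the spec from JSDoc annotations
const swaggerJSDoc = require('swagger-jsdoc');
const yaml = require('js-yaml');
const fs = require('fs');

const options = {
  // base document; paths and definitions are collected from the scanned files
  swaggerDefinition: {
    swagger: '2.0',
    info: { title: 'Hello World App', version: '0.0.1' },
    host: 'localhost:10010',
    basePath: '/',
  },
  // files to scan for @swagger JSDoc comments (assumed project layout)
  apis: ['./routes/*.js'],
};

fs.writeFileSync('swagger.yaml', yaml.dump(swaggerJSDoc(options)));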
I am looking for help debugging the message "Field Errors" I receive as a browser popup when trying to upload an image through the Keystone CMS.
I am using the npm package keystone-storage-adapter-s3. For some context, I am trying to upload images to an AWS S3 bucket and later retrieve them as part of a website's content using the Keystone CMS. I am pretty new to AWS S3, but trying.
Here is the image model in question.
const keystone = require('keystone');
const Types = keystone.Field.Types;

const Image = new keystone.List('Image');

const storage = new keystone.Storage({
  adapter: require('keystone-storage-adapter-s3'),
  s3: {
    key: process.env.S3_KEY, // required; defaults to process.env.S3_KEY
    secret: process.env.S3_SECRET, // required; defaults to process.env.S3_SECRET
    bucket: process.env.S3_BUCKET, // required; defaults to process.env.S3_BUCKET
    region: process.env.S3_REGION, // optional; defaults to process.env.S3_REGION, or if that's not specified, us-east-1
    uploadParams: { // optional; add S3 upload params; see below for details
      ACL: 'public-read',
    },
  },
  schema: {
    bucket: true, // optional; store the bucket the file was uploaded to in your db
    etag: true, // optional; store the etag for the resource
    path: true, // optional; store the path of the file in your db
    url: true, // optional; generate & store a public URL
  },
});

Image.add({
  name: { type: String },
  file: { type: Types.File, storage: storage },
});

Image.register();
I believe I've filled out the region, bucket name, secret (random secure string), and even created a new key that's stored securely as well in a .env file.
Here is the error I receive in the browser console.
packages.js:33 POST http://localhost:3000/keystone/api/images/5bf2c27e05ba79178cd7d2be 500 (Internal Server Error)
a # packages.js:33
i # packages.js:33
List.updateItem # admin.js:22863
updateItem # admin.js:15021
r # packages.js:16
a # packages.js:14
s # packages.js:14
d # packages.js:14
v # packages.js:14
r # packages.js:17
processEventQueue # packages.js:14
r # packages.js:16
handleTopLevel # packages.js:16
i # packages.js:16
perform # packages.js:17
batchedUpdates # packages.js:16
i # packages.js:16
dispatchEvent # packages.js:16
These are the permission settings of my S3 bucket.
Block new public ACLs and uploading public objects: False
Remove public access granted through public ACLs: False
Block new public bucket policies: True
Block public and cross-account access if bucket has public policies: True
These are similar questions, but I believe they have to do with Keystone's previous implementation using Knox.
"Field errors"
Field errors in s3 file upload
I found the debug package in use within node_modules/keystone/fields/types/file/FileType.js and enabled it. I received the following debug messages when attempting to upload an image.
$ DEBUG=keystone:fields:file node keystone.js
------------------------------------------------
KeystoneJS v4.0.0 started:
keystone-s3 is ready on http://0.0.0.0:3000
------------------------------------------------
GET /keystone/images/5bf2c27e05ba79178cd7d2be 200 17.446 ms
GET /keystone/api/images/5bf2c27e05ba79178cd7d2be?drilldown=true 304 3.528 ms
keystone:fields:file [Image.file] Validating input: upload:File-file-1001 +0ms
keystone:fields:file [Image.file] Validation result: true +1ms
keystone:fields:file [Image.file] Uploading file for item 5bf2c27e05ba79178cd7d2be: { fieldname: 'File-file-1001',
originalname: 'oof.PNG',
encoding: '7bit',
mimetype: 'image/png',
destination: 'C:\\Users\\Dylan\\AppData\\Local\\Temp',
filename: '42c161c1c36a84a244a2cf09d327afd4',
path:
'C:\\Users\\Dylan\\AppData\\Local\\Temp\\42c161c1c36a84a244a2cf09d327afd4',
size: 6684 } +0ms
POST /keystone/api/images/5bf2c27e05ba79178cd7d2be 500 225.027 ms
This message looks promising, so I will keep looking through this to see if I can debug any more information.
Edit: Progress! I searched the Keystone package for "Field errors" and found where the error message is set. Debugging that location revealed another error.
"InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records."
The search continues.
I was mixing up my "key" and "secret".
Per the keystone-storage-adapter-s3 package, your "key" and "secret" are required. Being inexperienced with AWS (though having some web development experience), I thought the secret was a random secure string (like you would sign a cookie with) and the key was my secret key.
Wrong:
  "key": secret key
  "secret": random secure string

Correct:
  "key": key ID
  "secret": secret key
Turns out I was wrong. The "key" is my key ID, and the "secret" is my secret key. Setting those correctly in my .env file allowed me to upload a file to the S3 bucket.
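In other words, a hypothetical .env sketch (the variable names come from the model above; the values are placeholders):

# "key" is the AWS access key ID, "secret" is the AWS secret access key
S3_KEY=AKIAXXXXXXXXXXXXXXXX
S3_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
S3_BUCKET=my-bucket-name
S3_REGION=us-east-1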