I am looking for help debugging the message "Field Errors" I receive as a browser popup when trying to upload an image through the Keystone CMS.
I am using the npm package keystone-storage-adapter-s3. For some context, I am trying to upload images to an AWS S3 bucket and later retrieve them as part of a website's content using the Keystone CMS. I am pretty new to AWS S3, but trying.
Here is the image model in question.
const keystone = require('keystone');
const Types = keystone.Field.Types;

const Image = new keystone.List('Image');

const storage = new keystone.Storage({
  adapter: require('keystone-storage-adapter-s3'),
  s3: {
    key: process.env.S3_KEY, // required; defaults to process.env.S3_KEY
    secret: process.env.S3_SECRET, // required; defaults to process.env.S3_SECRET
    bucket: process.env.S3_BUCKET, // required; defaults to process.env.S3_BUCKET
    region: process.env.S3_REGION, // optional; defaults to process.env.S3_REGION, or if that's not specified, us-east-1
    uploadParams: { // optional; add S3 upload params; see below for details
      ACL: 'public-read',
    },
  },
  schema: {
    bucket: true, // optional; store the bucket the file was uploaded to in your db
    etag: true, // optional; store the etag for the resource
    path: true, // optional; store the path of the file in your db
    url: true, // optional; generate & store a public URL
  },
});

Image.add({
  name: { type: String },
  file: { type: Types.File, storage: storage },
});

Image.register();
I believe I've filled out the region, bucket name, and secret (a random secure string), and I've even created a new key; all of these are stored securely in a .env file.
Here is the error I receive in the browser console.
packages.js:33 POST http://localhost:3000/keystone/api/images/5bf2c27e05ba79178cd7d2be 500 (Internal Server Error)
a # packages.js:33
i # packages.js:33
List.updateItem # admin.js:22863
updateItem # admin.js:15021
r # packages.js:16
a # packages.js:14
s # packages.js:14
d # packages.js:14
v # packages.js:14
r # packages.js:17
processEventQueue # packages.js:14
r # packages.js:16
handleTopLevel # packages.js:16
i # packages.js:16
perform # packages.js:17
batchedUpdates # packages.js:16
i # packages.js:16
dispatchEvent # packages.js:16
These are the permission settings of my S3 bucket.
Block new public ACLs and uploading public objects: False
Remove public access granted through public ACLs: False
Block new public bucket policies: True
Block public and cross-account access if bucket has public policies: True
These are similar questions, but I believe they relate to Keystone's previous use of Knox:
"Field errors"
Field errors in s3 file upload
I found the debug package in use within node_modules/keystone/fields/types/file/FileType.js and enabled it. I received the following debug messages when attempting to upload an image.
$ DEBUG=keystone:fields:file node keystone.js
------------------------------------------------
KeystoneJS v4.0.0 started:
keystone-s3 is ready on http://0.0.0.0:3000
------------------------------------------------
GET /keystone/images/5bf2c27e05ba79178cd7d2be 200 17.446 ms
GET /keystone/api/images/5bf2c27e05ba79178cd7d2be?drilldown=true 304 3.528 ms
keystone:fields:file [Image.file] Validating input: upload:File-file-1001 +0ms
keystone:fields:file [Image.file] Validation result: true +1ms
keystone:fields:file [Image.file] Uploading file for item 5bf2c27e05ba79178cd7d2be: { fieldname: 'File-file-1001',
originalname: 'oof.PNG',
encoding: '7bit',
mimetype: 'image/png',
destination: 'C:\\Users\\Dylan\\AppData\\Local\\Temp',
filename: '42c161c1c36a84a244a2cf09d327afd4',
path:
'C:\\Users\\Dylan\\AppData\\Local\\Temp\\42c161c1c36a84a244a2cf09d327afd4',
size: 6684 } +0ms
POST /keystone/api/images/5bf2c27e05ba79178cd7d2be 500 225.027 ms
This output looks promising, so I will keep digging through it to see if I can surface any more information.
Edit: Progress! I searched the Keystone package for "Field errors" and found where the error message is set. Debugging that location revealed another error.
"InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records."
The search continues.
I was mixing up my "key" and "secret".
As per the keystone-storage-adapter-s3 package, both a "key" and a "secret" are required. Being inexperienced with AWS (though less so with web development), I thought the secret was a random secure string (like one you would sign a cookie with) and that the key was my secret key.
Wrong:
"key": secret key
"secret": random secure string
Correct:
"key": access key ID
"secret": secret access key
Turns out I was wrong. The "key" is my access key ID, and the "secret" is my secret access key. Setting those correctly in my .env file allowed me to upload a file to the S3 bucket.
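For anyone who lands here with the same mix-up, here is a sketch of what the corrected .env entries look like; the credential values below are AWS's documented example credentials, not real ones:
# .env consumed by the storage config above via process.env
# S3_KEY is the AWS Access Key ID; S3_SECRET is the AWS Secret Access Key
S3_KEY=AKIAIOSFODNN7EXAMPLE
S3_SECRET=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
S3_BUCKET=my-image-bucket
S3_REGION=us-east-1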
Related
I am trying to set up a subdomain to point to a CloudFront distribution, which in turn points to an S3 bucket where my site is hosted. My domain is managed through GoDaddy, and my AWS infrastructure is managed through the AWS CDK.
Here is my current setup in CDK:
const myFrontendBucket = new s3.Bucket(this, 'My-Frontend', {
  removalPolicy: RemovalPolicy.DESTROY,
  websiteIndexDocument: 'index.html'
});

const oia = new OriginAccessIdentity(this, 'OIA', {
  comment: 'Created by CDK'
});
myFrontendBucket.grantRead(oia);

new cloudfront.CloudFrontWebDistribution(
  this,
  'CDKmyFrontendStaticDistribution',
  {
    originConfigs: [
      {
        s3OriginSource: {
          s3BucketSource: myFrontendBucket,
          originAccessIdentity: oia
        },
        behaviors: [
          {
            isDefaultBehavior: true,
            pathPattern: '/*'
          }
        ]
      },
      {
        customOriginSource: {
          domainName: 'app.myapp.io'
        },
        behaviors: [
          {
            pathPattern: '/*',
            compress: true
          }
        ]
      }
    ],
    defaultRootObject: 'index.html'
  }
);
I have followed what documentation I could find online to reach the above configuration. The first item in the CloudFront originConfigs configures the website's S3 bucket, where the website assets are stored. The second item is my attempt at listing which custom domain(s) are allowed to point to this distribution; in my case that is app.myapp.io.
Finally, in GoDaddy I have set up a CNAME record that looks like this:
Type: CNAME
Name: app
Data: abc123.cloudfront.net.
I was hoping this would have the following effect in routing the domain to the website assets:
User visits app.myapp.io -> is taken to my CloudFront distribution (abc123.cloudfront.net.) -> the CloudFront distribution forwards the request on to the S3 bucket where my website assets are stored, and therefore displays the website.
The above configuration gives me the following error when I visit app.myapp.io:
403 error – the request could not be satisfied
Where might I have gone wrong in my setup? The documentation around pointing a GoDaddy-managed domain at CloudFront is a bit sparse.
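One possible culprit, though I can't confirm it from the question alone: with CloudFrontWebDistribution, a custom domain is normally declared on the distribution itself as an alternate domain name (CNAME) backed by an ACM certificate, not listed as a second origin. A sketch of that approach, reusing myFrontendBucket and oia from above and assuming const acm = require('@aws-cdk/aws-certificatemanager'); the certificate ARN below is hypothetical, and CloudFront requires the certificate to live in us-east-1:
// Hypothetical certificate for app.myapp.io, previously created in us-east-1.
const cert = acm.Certificate.fromCertificateArn(
  this,
  'SiteCert',
  'arn:aws:acm:us-east-1:111111111111:certificate/example-cert-id'
);

new cloudfront.CloudFrontWebDistribution(this, 'CDKmyFrontendStaticDistribution', {
  originConfigs: [
    {
      s3OriginSource: {
        s3BucketSource: myFrontendBucket,
        originAccessIdentity: oia
      },
      behaviors: [{ isDefaultBehavior: true }]
    }
  ],
  // Declares app.myapp.io as an alternate domain name (CNAME) on the distribution,
  // so requests arriving via the GoDaddy CNAME are accepted rather than rejected with a 403.
  viewerCertificate: cloudfront.ViewerCertificate.fromAcmCertificate(cert, {
    aliases: ['app.myapp.io']
  }),
  defaultRootObject: 'index.html'
});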
I have a use case where I have to read a ZIP file and pass it to the creation of a Lambda function as a template.
Now I want to read the ZIP file from a public S3 bucket. How can I read the file from the public bucket?
The ZIP file I am reading lives at https://lambda-template-code.s3.amazonaws.com/LambdaTemplate.zip
const zipContents = 'https://lambda-template-code.s3.amazonaws.com/LambdaTemplate.zip';

var params = {
  Code: {
    // here is where I have to pass the zip file, reading it from the public S3 bucket
    ZipFile: zipContents,
  },
  FunctionName: 'testFunction', /* required */
  Role: 'arn:aws:iam::149727569662:role/ROLE', /* required */
  Description: 'Created with template',
  Handler: 'index.handler',
  MemorySize: 256,
  Publish: true,
  Runtime: 'nodejs12.x',
  Timeout: 15,
  TracingConfig: {
    Mode: "Active"
  }
};

lambda.createFunction(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});
The above code gives the error Could not unzip uploaded file. Please check your file, then try to upload again.
How can I read the file from that URL and pass it in the params?
Can anyone help me here?
Using the createFunction docs and specifically the Code docs, you can see that ZipFile expects
The base64-encoded contents of the deployment package. AWS SDK and AWS CLI clients handle the encoding for you.
and not a URL pointing to it. Instead, you need to use S3Bucket and S3Key.
It is not clear from the docs that public buckets are allowed for this purpose, but the docs do say
An Amazon S3 bucket in the same AWS Region as your function. The bucket can be in a different AWS account.
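Under that reading of the docs, here is a sketch of the corrected params, with the bucket name and key lifted from the URL in the question (other fields abridged):
var params = {
  Code: {
    // Reference the deployment package by bucket and key instead of passing a URL;
    // per the docs quoted above, the bucket must be in the same region as the function.
    S3Bucket: 'lambda-template-code',
    S3Key: 'LambdaTemplate.zip',
  },
  FunctionName: 'testFunction',
  Role: 'arn:aws:iam::149727569662:role/ROLE',
  Handler: 'index.handler',
  Runtime: 'nodejs12.x',
};

lambda.createFunction(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data); // successful response
});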
I am trying to run my swagger file and I keep getting this error.
Error: ptr must be a JSON Pointer
at pathFromPtr (/Users/salma/Desktop/swaggerIntegration/node_modules/json-refs/index.js:1128:11)
at /Users/salma/Desktop/swaggerIntegration/node_modules/json-refs/index.js:293:45
at process.runNextTicks [as _tickCallback] (internal/process/next_tick.js:47:5)
at Function.Module.runMain (internal/modules/cjs/loader.js:800:11)
at executeUserCode (internal/bootstrap/node.js:526:15)
at startMainThreadExecution (internal/bootstrap/node.js:439:3)
And here is my swagger file:
swagger: "2.0"
info:
  version: "0.0.1"
  title: employees DB
# during dev, should point to your local machine
host: localhost:10010
# basePath prefixes all resource paths
basePath: /
#
schemes:
  # tip: remove http to make production-grade
  - http
  - https
# format of bodies a client can send (Content-Type)
consumes:
  - application/json
# format of the responses to the client (Accepts)
produces:
  - application/json
paths:
  /employees:
    # binds a127 app logic to a route
    x-swagger-router-controller: employees
    get:
      description: Returns 'Hello' to the caller
      # used as the method name of the controller
      operationId: index
      parameters:
        - name: name
          in: query
          description: The name of the person to whom to say hello
          required: false
          type: string
      responses:
        "200":
          description: Success
          schema:
            # a pointer to a definition
            $ref: "#/definitions/employeesListBody"
        # responses may fall through to errors
        default:
          description: Error
          schema:
            $ref: "#/definitions/ErrorResponse"
  /swagger:
    x-swagger-pipe: swagger_raw
# complex objects have schema definitions
definitions:
  employeesListBody:
    required:
      - employees
    properties:
      employees:
        type: array
        items:
          $ref: "#definitions/Employee"
  Employee:
    required:
      - name
      - email
      - position
    properties:
      name:
        type: string
      email:
        type: string
      position:
        type: string
      age:
        type: integer
        minimum: 20
  ErrorResponse:
    required:
      - message
    properties:
      message:
        type: string
Any idea how to solve that?
Is there an easier way to prettify the swagger file? I get many parsing errors.
Also, does anyone have a good example of using Swagger with Express and MongoDB?
Many thanks.
One of the references is missing a / after #:
$ref: "#definitions/Employee"
Change it to:
$ref: "#/definitions/Employee"
# ^
If you paste your definition into http://editor.swagger.io, it shows where exactly the error is.
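Regarding the parsing errors: a suggestion beyond the fix above, using the swagger-parser npm package, is to validate the file programmatically, which reports exactly which pointer is malformed (the file path below is a placeholder for wherever your spec lives):
// npm install swagger-parser
const SwaggerParser = require('swagger-parser');

// Validate the spec, resolving every $ref; the promise rejects with a
// descriptive error when a pointer such as "#definitions/Employee" is malformed.
SwaggerParser.validate('./api/swagger/swagger.yaml')
  .then(function (api) {
    console.log('Valid API: %s v%s', api.info.title, api.info.version);
  })
  .catch(function (err) {
    console.error(err.message);
  });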
REST API services are available in a Node.js Express app, and the routes are specified properly. I am planning to use Node.js to auto-generate the Swagger YAML (OpenAPI) specification. Writing Swagger YAML manually is tough when you have many endpoints, and every new endpoint means updating the file everywhere.
Example: consider a REST API service with two basic endpoints, specified in the following YAML:
swagger: "2.0"
info:
  version: "0.0.1"
  title: Hello World App
# during dev, should point to your local machine
host: localhost:10010
# basePath prefixes all resource paths
basePath: /
#
schemes:
  # tip: remove http to make production-grade
  - http
  - https
# format of bodies a client can send (Content-Type)
consumes:
  - application/json
# format of the responses to the client (Accepts)
produces:
  - application/json
paths:
  /hello:
    # binds a127 app logic to a route
    x-swagger-router-controller: hello_world
    get:
      description: Returns 'Hello' to the caller
      # used as the method name of the controller
      operationId: hello
      parameters:
        - name: name
          in: query
          description: The name of the person to whom to say hello
          required: false
          type: string
      responses:
        "200":
          description: Success
          schema:
            # a pointer to a definition
            $ref: "#/definitions/HelloWorldResponse"
        # responses may fall through to errors
        default:
          description: Error
          schema:
            $ref: "#/definitions/ErrorResponse"
  /try:
    x-swagger-router-controller: hello_world
    get:
      description: Returns 'Try' to the caller
      operationId: try
      parameters:
        - name: name
          in: query
          description: The content to try
          required: false
          type: string
      responses:
        "200":
          description: Success
          schema:
            $ref: "#/definitions/HelloWorldResponse"
        default:
          description: Error
          schema:
            $ref: "#/definitions/ErrorResponse"
  /swagger:
    x-swagger-pipe: swagger_raw
# complex objects have schema definitions
definitions:
  HelloWorldResponse:
    required:
      - message
    properties:
      message:
        type: string
  ErrorResponse:
    required:
      - message
    properties:
      message:
        type: string
My question is: how can I generate this YAML in Node.js? Is there a specific npm package or Swagger module available to generate this kind of YAML, or is writing the code on my own preferable?
I am trying to work out the best way to do this. I appreciate all suggestions, and thanks in advance!
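One commonly used option, offered as a sketch rather than a definitive recommendation, is the swagger-jsdoc package: you keep a small base definition in code, annotate each route with a JSDoc comment, and generate the full spec from those annotations. A minimal sketch, assuming an Express app whose route files live under ./routes (the glob and metadata are placeholders):
const swaggerJSDoc = require('swagger-jsdoc');
const yaml = require('js-yaml');
const fs = require('fs');

const options = {
  // Base document; paths and definitions are collected from annotations.
  swaggerDefinition: {
    swagger: '2.0',
    info: { title: 'Hello World App', version: '0.0.1' },
    host: 'localhost:10010',
    basePath: '/',
  },
  // Files to scan for @swagger JSDoc comments (placeholder glob).
  apis: ['./routes/*.js'],
};

// Build the spec object, then serialize it to YAML.
const spec = swaggerJSDoc(options);
fs.writeFileSync('swagger.yaml', yaml.dump(spec));
Each route file then carries /** @swagger ... */ comment blocks describing its parameters and responses, so adding an endpoint and documenting it happen in the same place.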
I am new to JavaScript. I am trying to implement OAuth 2.0 for server-to-server applications, and for that I am using this library. So while I was doing this:
googleAuth.authenticate(
  {
    email: 'my.gserviceaccount.com',
    keyFile: fs.readFileSync("./accesstoken/key.pem"),
    scopes: ['https://www.googleapis.com/auth/drive.readonly']
  },
  function (err, token) {
    console.log(token);
    console.log("err:" + err);
  });
it gave me the following exception:
ENOENT: no such file or directory, open '-----BEGIN PRIVATE KEY-----asdasxxx---END PRIVATE KEY-----
My key.pem file is in the same directory as my JS file.
There is no need for fs.readFileSync:
keyFile: fs.readFileSync("./accesstoken/key.pem"),
Just give a simple path to the file:
keyFile: "./key.pem", // if the file is in the same folder
As given in the original docs:
// the path to the PEM file to use for the cryptographic key (ignored if 'key' is also defined)
// the key will be used to sign the JWT and validated by Google OAuth
keyFile: 'path/to/key.pem',
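In other words, keyFile expects a path and the library reads the file itself; the quoted docs also show a key option for passing the PEM contents directly. A sketch of both variants, assuming the authenticate signature from the question (the paths are placeholders):
// Variant 1: let the library read the PEM file from disk.
googleAuth.authenticate(
  {
    email: 'my.gserviceaccount.com',
    keyFile: './accesstoken/key.pem', // path only; no fs.readFileSync
    scopes: ['https://www.googleapis.com/auth/drive.readonly']
  },
  function (err, token) {
    if (err) return console.error(err);
    console.log(token);
  });

// Variant 2: read the contents yourself and pass them as 'key'
// ('keyFile' is ignored when 'key' is defined, per the docs above).
const fs = require('fs');
googleAuth.authenticate(
  {
    email: 'my.gserviceaccount.com',
    key: fs.readFileSync('./accesstoken/key.pem', 'utf8'),
    scopes: ['https://www.googleapis.com/auth/drive.readonly']
  },
  function (err, token) {
    if (err) return console.error(err);
    console.log(token);
  });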