Simple CDK CloudFront and GoDaddy config giving 403 - javascript

I am trying to set up a subdomain to point to a CloudFront distribution, which then points to an S3 bucket where my site is hosted. My domain is managed through GoDaddy and my AWS infrastructure is managed through the AWS CDK.
Here is my current setup in CDK:
const myFrontendBucket = new s3.Bucket(this, 'My-Frontend', {
  removalPolicy: RemovalPolicy.DESTROY,
  websiteIndexDocument: 'index.html'
});

const oia = new OriginAccessIdentity(this, 'OIA', {
  comment: 'Created by CDK'
});
myFrontendBucket.grantRead(oia);
new cloudfront.CloudFrontWebDistribution(
  this,
  'CDKmyFrontendStaticDistribution',
  {
    originConfigs: [
      {
        s3OriginSource: {
          s3BucketSource: myFrontendBucket,
          originAccessIdentity: oia
        },
        behaviors: [
          {
            isDefaultBehavior: true,
            pathPattern: '/*'
          }
        ]
      },
      {
        customOriginSource: {
          domainName: 'app.myapp.io'
        },
        behaviors: [
          {
            pathPattern: '/*',
            compress: true
          }
        ]
      }
    ],
    defaultRootObject: 'index.html'
  }
);
I have followed what documentation I could find online to reach the above configuration. The first item in the CloudFront originConfigs configures the website's S3 bucket where the assets are stored. The second item is my attempt at listing which custom domain(s) are allowed to point to this distribution – in my case, app.myapp.io.
Then, finally, in GoDaddy I have set up a CNAME record that looks like this:
Type: CNAME
Name: app
Data: abc123.cloudfront.net.
What I was hoping is that this would route the domain to the website assets as follows:
User visits app.myapp.io -> is taken to my CloudFront distribution (abc123.cloudfront.net.) -> the CloudFront distribution forwards the request on to the S3 bucket where my website assets are stored, and the website is displayed.
Instead, the above configuration gives me the following error when I visit app.myapp.io:
403 error – the request could not be satisfied
Where might I have gone wrong in my setup? The documentation around pointing a GoDaddy-managed domain to CloudFront is a bit bare.
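One thing that stands out: in CloudFrontWebDistribution, origins are backends, not a list of domains allowed to reach the distribution. Alternate domain names (CNAMEs) are normally attached through the viewerCertificate property, backed by an ACM certificate in us-east-1. A minimal sketch, not a verified fix, with a placeholder certificate ARN:

// Hypothetical: an existing ACM certificate covering app.myapp.io
// (ACM certificates used by CloudFront must live in us-east-1)
const cert = acm.Certificate.fromCertificateArn(this, 'Cert',
  'arn:aws:acm:us-east-1:123456789012:certificate/placeholder');

new cloudfront.CloudFrontWebDistribution(this, 'CDKmyFrontendStaticDistribution', {
  originConfigs: [
    {
      s3OriginSource: {
        s3BucketSource: myFrontendBucket,
        originAccessIdentity: oia
      },
      behaviors: [{ isDefaultBehavior: true }]
    }
  ],
  // Registers app.myapp.io as an alternate domain name (CNAME) on the distribution
  viewerCertificate: cloudfront.ViewerCertificate.fromAcmCertificate(cert, {
    aliases: ['app.myapp.io']
  }),
  defaultRootObject: 'index.html'
});

With the alias registered, the distribution accepts requests whose Host header is app.myapp.io, which is what the GoDaddy CNAME sends it.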

Related

AWS EC2 IAM Role Credentials not passed into Node.js application

I am developing a Node.js application which is deployed on an EC2 instance, using the AWS SDK for JavaScript. I have tried both v2 and v3. My problem is that whenever I make a call to any AWS service in my application, I get an error saying that the credentials are missing. However, according to the documentation, assigning an IAM role to the EC2 instance should enable the SDK to retrieve the credentials automatically: AWS Documentation. I believe that I have correctly added an IAM role with sufficient permissions to the EC2 instance, so I don't understand why the requests are not going through. I do not want to use environment variables, as I would then have to handle them manually in my code. Any suggestions on how to debug this issue, or thoughts on what the problem might be, are greatly appreciated.
For example, a call is made as follows:
// AWS SDK for JavaScript v3
const { CloudFormationClient, ListStacksCommand } = require("@aws-sdk/client-cloudformation");

const client = new CloudFormationClient({ region: "eu-central-1" });
const params = {
  StackStatusFilter: [
    "CREATE_IN_PROGRESS"
  ]
};
const command = new ListStacksCommand(params);
client.send(command).then(
  (data) => {
    console.log(data);
  },
  (error) => {
    console.log(error);
  }
);
The error is simply: Error: Credential is missing.
This originates from the console.log(error).
I have tried with multiple roles, but even with AdministratorAccess the same error occurs. For reference, the permissions are:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
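One way to narrow this down (a debugging sketch, not a confirmed fix): skip the default credential chain and read credentials from the instance metadata service explicitly, using fromInstanceMetadata from @aws-sdk/credential-providers. If this also fails, the problem is likely between the process and IMDS (for example, the IMDSv2 hop limit when the app runs inside a container) rather than in the role itself.

const { CloudFormationClient, ListStacksCommand } = require("@aws-sdk/client-cloudformation");
const { fromInstanceMetadata } = require("@aws-sdk/credential-providers");

const client = new CloudFormationClient({
  region: "eu-central-1",
  // Read credentials directly from the EC2 instance metadata service (IMDS)
  credentials: fromInstanceMetadata({ timeout: 5000, maxRetries: 3 }),
});

const command = new ListStacksCommand({ StackStatusFilter: ["CREATE_IN_PROGRESS"] });
client.send(command).then(
  (data) => console.log(data.StackSummaries),
  (error) => console.error("IMDS credential lookup failed:", error)
);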

Electron + Vue API requests fall to app://

I've built a Vue app and added Electron to it. I used Vue CLI Plugin Electron Builder
It's OK in development mode: all API requests go to the address configured in my vue.config.js:
proxy: {
  '^/api': {
    target: 'http://my-api:3000',
    changeOrigin: true
  },
},
For example, an Axios POST request to /api/open_session falls through to http://my-api:3000/api/open_session as needed.
When I build the project, it creates an app:// protocol to open the index.html file. But it also makes all URL paths begin with app://, including API requests.
My background.js:
if (process.env.WEBPACK_DEV_SERVER_URL) {
// Load the url of the dev server if in development mode
await win.loadURL(process.env.WEBPACK_DEV_SERVER_URL)
if (!process.env.IS_TEST) win.webContents.openDevTools()
}
else {
createProtocol('app');
// Load the index.html when not in development
win.loadURL('app://./index.html');
}
I want these paths to be directed to my API, while opening all my files as usual (via the app protocol).
Well, it's been a long time and I coped with this on my own. However, here's an answer I came across on some forums, for those who are struggling with the same issue:
Firstly, I modified my vue.config.js:
proxy: {
  '^/api': {
    target: 'http://127.0.0.1:3000',
    changeOrigin: true
  },
},
Then, I made some changes in main.js - added a session variable:
// session is imported from 'electron' alongside BrowserWindow
const sesh = session.defaultSession;
sesh.webRequest.onBeforeSendHeaders({
  urls: ['*://*/*']
}, (details, callback) => {
  // Rewrite the Host header to match the request's real target
  // eslint-disable-next-line prefer-destructuring
  details.requestHeaders.Host = details.url.split('://')[1].split('/')[0]
  callback({
    requestHeaders: details.requestHeaders
  })
})
which defines the app's behavior when requests are made. Also, I've added the session value to webPreferences:
const win = new BrowserWindow({
  width: 1500,
  height: 700,
  title: "Title",
  webPreferences: {
    session: sesh,
    nodeIntegration: true,
    webSecurity: false
  }
})
And, finally, I load my index.html via the app protocol:
createProtocol('app');
win.loadURL('app://./index.html');
As a result, all my requests got redirected to my server.
Forgive me for not knowing the source, if the author of the code is reading this, you can surely mark yourself in comments :)

How to redirect pages/app folder to subdomain in next.js

I searched a lot for this on the internet, but I didn't find any article related to it.
I have a folder called pages in the root of my project, and the tree below shows its files.
| 404.js
| auth.js
| index.js
| _app.js
| _error.js
\---app
index.js
Next.js's default behavior is that when someone opens project.local:3000 it will open index.js, and project.local:3000/app will open app/index.js. But I want that when someone opens app.project.local:3000 it will open app/index.js.
My Hosts file
127.0.0.1 project.local
127.0.0.1 app.project.local
In short
I want to redirect the pages/app folder to app.project.local or app.example.com in Next.js.
Most updated solution
I found the solution while exploring the documentation on redirects.
In your Next.js project's root folder, create a vercel.json file and insert your redirects as objects inside the redirects array, like so:
{
  "redirects": [
    { "source": "/blog", "destination": "https://blog.example.com" }
  ]
}
This will only work in the production environment, but there it should work as intended.
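If the goal is to have the subdomain serve the pages/app folder rather than redirect to a separate site, host-based rewrites in next.config.js may be the closer fit. A sketch, assuming a Next.js version that supports the has field (10.1+); the hostname is the one from the question's hosts file:

// next.config.js
module.exports = {
  async rewrites() {
    return [
      {
        // Rewrite every path on the app subdomain into the pages/app folder
        source: '/:path*',
        has: [{ type: 'host', value: 'app.project.local' }],
        destination: '/app/:path*',
      },
    ];
  },
};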
I'm still a noobie in next.js (2nd day learning), but I was searching for subdomain support and I found three solutions on this Github issue: https://github.com/vercel/next.js/issues/5682
Using zones (still no idea how it works)
"Vercel will implement subdomain routing in the near future" (I don't expect to use Vercel in the near future)
(my preferred, but not yet tested) An example using custom servers in Next.js: https://github.com/dcangulo/nextjs-subdomain-example
For #3, see how it was implemented in the server.js file
With the new "middleware" feature of Next.js, you can rewrite using a function instead of the rewrites object and keep getStaticProps working.
// middleware.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';
import { getValidSubdomain } from '#/utils/subdomain';

// RegExp for public files
const PUBLIC_FILE = /\.(.*)$/; // Files

export async function middleware(req: NextRequest) {
  // Clone the URL
  const url = req.nextUrl.clone();

  // Skip public files
  if (PUBLIC_FILE.test(url.pathname) || url.pathname.includes('_next')) return;

  const host = req.headers.get('host');
  const subdomain = getValidSubdomain(host);
  if (subdomain) {
    // Subdomain available, rewriting
    console.log(`>>> Rewriting: ${url.pathname} to /${subdomain}${url.pathname}`);
    url.pathname = `/${subdomain}${url.pathname}`;
  }

  return NextResponse.rewrite(url);
}
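The getValidSubdomain helper isn't part of Next.js; a minimal hypothetical sketch of what it could look like, assuming 'www' and localhost-style hosts should not count as subdomains:

// utils/subdomain.ts (hypothetical sketch of the helper imported above)
export const getValidSubdomain = (host?: string | null): string | null => {
  if (!host || !host.includes('.')) return null;
  const candidate = host.split('.')[0];
  // Treat 'www' and localhost-style hosts as "no subdomain"
  if (!candidate || candidate === 'www' || candidate.includes('localhost')) return null;
  return candidate;
};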
You can take a look at the Next.js docs about middleware, and I've also written a Medium article with some related content that might help.
NextJS now supports Locales: https://nextjs.org/docs/advanced-features/i18n-routing.
You can specify a locale in your config, e.g. admin, and map a domain to it like this:
// next.config.js
module.exports = {
  i18n: {
    // These are all the locales you want to support in
    // your application
    locales: ['en-US', 'admin', 'nl-NL'],
    // This is the default locale you want to be used when visiting
    // a non-locale prefixed path e.g. `/hello`
    defaultLocale: 'en-US',
    // This is a list of locale domains and the default locale they
    // should handle (these are only required when setting up domain routing)
    // Note: subdomains must be included in the domain value to be matched e.g. "fr.example.com".
    domains: [
      {
        domain: 'example.com',
        defaultLocale: 'en-US',
      },
      {
        domain: 'example.nl',
        defaultLocale: 'nl-NL',
      },
      {
        domain: 'admin.example.com',
        defaultLocale: 'admin',
        // an optional http field can also be used to test
        // locale domains locally with http instead of https
        http: true,
      },
    ],
  },
}

How to set proxy for different API server using Nuxt?

So I have 2 applications:
an Adonis API server accessible via http://10.10.120.4:3333
A SSR app using Nuxt.js accessible via http://10.10.120.4:80
The Nuxt.js app is accessible from outside via the URL http://my-website.com. I have the axios module with this config:
axios: {
  baseURL: '0.0.0.0:3333/api',
  timeout: 5000
}
Now the problem is: when I request data within asyncData it works, but when the request is made outside asyncData, say in created() for example, it throws an error saying the URL http://0.0.0.0:3333 is missing, which is true since the code is now running in the browser and not on the server.
The first solution I tried was to change the baseURL of the axios module to this:
axios: {
  baseURL: 'http://my-website.com/api',
  timeout: 5000
}
But it seems the Nuxt server can't find it, so I think the solution is to set up a proxy, and I installed @nuxtjs/proxy.
And this is my proxy config in nuxt.config.js
{
  proxy: {
    '/api': 'http://my-website.com:3333',
  }
}
and then I just changed my axios baseURL to
http://my-website.com/api
But again it didn't work.
My question is, how do you deal with this kind of scenario? Accessing different server from browser?
When using the proxy in a Nuxt project, you need to remove baseURL and set proxy to true, as seen below.
axios: {
  // Do away with the baseUrl when using proxy
  proxy: true
},
proxy: {
  // Simple proxy
  "/api/": {
    target: "https://test.com/",
    pathRewrite: { "^/api/": "" }
  }
},
When making a call to your endpoint, do:
// append /api/ to your endpoints
const data = await $axios.$get('/api/users');
Check out Shealan's article.
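For completeness, a sketch of how these pieces sit together in nuxt.config.js, assuming both @nuxtjs/axios and @nuxtjs/proxy are installed (the target is the same placeholder as above):

// nuxt.config.js
export default {
  modules: ['@nuxtjs/axios', '@nuxtjs/proxy'],
  axios: {
    // No baseURL here; routing is handled by the proxy
    proxy: true
  },
  proxy: {
    // Forward /api/* to the API server, stripping the /api prefix
    '/api/': {
      target: 'https://test.com/',
      pathRewrite: { '^/api/': '' }
    }
  }
}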

Field Errors after using keystone-storage-adapter-s3

I am looking for help debugging the message "Field Errors" I receive as a browser popup when trying to upload an image through the Keystone CMS.
I am using the npm package keystone-storage-adapter-s3. For some context, I am trying to upload images to an AWS S3 bucket and later retrieve them as part of a website's content using the Keystone CMS. I am pretty new to AWS S3, but trying.
Here is the image model in question.
const keystone = require('keystone');
const Types = keystone.Field.Types;

const Image = new keystone.List('Image');

const storage = new keystone.Storage({
  adapter: require('keystone-storage-adapter-s3'),
  s3: {
    key: process.env.S3_KEY, // required; defaults to process.env.S3_KEY
    secret: process.env.S3_SECRET, // required; defaults to process.env.S3_SECRET
    bucket: process.env.S3_BUCKET, // required; defaults to process.env.S3_BUCKET
    region: process.env.S3_REGION, // optional; defaults to process.env.S3_REGION, or if that's not specified, us-east-1
    uploadParams: { // optional; add S3 upload params; see below for details
      ACL: 'public-read',
    },
  },
  schema: {
    bucket: true, // optional; store the bucket the file was uploaded to in your db
    etag: true, // optional; store the etag for the resource
    path: true, // optional; store the path of the file in your db
    url: true, // optional; generate & store a public URL
  },
});

Image.add({
  name: { type: String },
  file: { type: Types.File, storage: storage },
});

Image.register();
I believe I've filled out the region, bucket name, secret (random secure string), and even created a new key that's stored securely as well in a .env file.
Here is the error I receive in the browser console.
packages.js:33 POST http://localhost:3000/keystone/api/images/5bf2c27e05ba79178cd7d2be 500 (Internal Server Error)
a # packages.js:33
i # packages.js:33
List.updateItem # admin.js:22863
updateItem # admin.js:15021
r # packages.js:16
a # packages.js:14
s # packages.js:14
d # packages.js:14
v # packages.js:14
r # packages.js:17
processEventQueue # packages.js:14
r # packages.js:16
handleTopLevel # packages.js:16
i # packages.js:16
perform # packages.js:17
batchedUpdates # packages.js:16
i # packages.js:16
dispatchEvent # packages.js:16
These are the permission settings of my S3 bucket.
Block new public ACLs and uploading public objects: False
Remove public access granted through public ACLs: False
Block new public bucket policies: True
Block public and cross-account access if bucket has public policies: True
These are similar questions, but I believe they have to do with Keystone's previous Knox-based implementation.
"Field errors"
Field errors in s3 file upload
I found the debug package in use within node_modules/keystone/fields/types/file/FileType.js and enabled it. I received the following debug messages when attempting to upload an image.
$ DEBUG=keystone:fields:file node keystone.js
------------------------------------------------
KeystoneJS v4.0.0 started:
keystone-s3 is ready on http://0.0.0.0:3000
------------------------------------------------
GET /keystone/images/5bf2c27e05ba79178cd7d2be 200 17.446 ms
GET /keystone/api/images/5bf2c27e05ba79178cd7d2be?drilldown=true 304 3.528 ms
keystone:fields:file [Image.file] Validating input: upload:File-file-1001 +0ms
keystone:fields:file [Image.file] Validation result: true +1ms
keystone:fields:file [Image.file] Uploading file for item 5bf2c27e05ba79178cd7d2be: { fieldname: 'File-file-1001',
originalname: 'oof.PNG',
encoding: '7bit',
mimetype: 'image/png',
destination: 'C:\\Users\\Dylan\\AppData\\Local\\Temp',
filename: '42c161c1c36a84a244a2cf09d327afd4',
path:
'C:\\Users\\Dylan\\AppData\\Local\\Temp\\42c161c1c36a84a244a2cf09d327afd4',
size: 6684 } +0ms
POST /keystone/api/images/5bf2c27e05ba79178cd7d2be 500 225.027 ms
This message looks promising, so I will keep looking through this to see if I can debug any more information.
Edit: Progress! I searched the Keystone package for "Field errors" and found where the error message is set. Debugging that location revealed another error.
"InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records."
The search continues.
I was mixing up my "key" and "secret".
As per the keystone-storage-adapter-s3 package, your "key" and "secret" are required. Being inexperienced with AWS, and somewhat with web development, I thought the secret was a random secure string (like you would sign a cookie with) and that the key was my secret key.
Wrong:
"key": secret key
"secret": random secure string
Correct:
"key": key ID
"secret": secret key
Turns out I was wrong. The "key" is my access key ID, and the "secret" is my secret key. Setting those correctly in my .env file allowed me to upload a file to the S3 bucket.
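Concretely, the .env file ends up looking like this (values are placeholders; the variable names match the model above):

# .env
S3_KEY=AKIAXXXXXXXXXXXXXXXX        # AWS access key ID
S3_SECRET=xxxxxxxxxxxxxxxxxxxxxxxx # AWS secret access key
S3_BUCKET=my-bucket
S3_REGION=us-east-1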
