I am facing the same issue as this GitHub issue, but for a different use case: I am trying to add multiple instances of the application (example code here).
With the domain added, I am getting the same error:
Domain www.app.com was not found in your AWS account.
So to achieve multiple instances, I tried a hack: changing the env-prod file to bucket="prod-appname". This gets deployed, but when I then add an env-stage file with bucket="stage-appname", a new bucket is created, yet it deploys to the same CloudFront URL. Is there a way to fix either approach so that I can run multiple instances?
Thanks in advance
nextjsapp:
  component: "@sls-next/serverless-component@1.18.0"
  inputs:
    bucketName: < S3 bucket name >
Use this in serverless.yml; the S3 bucket should be created in us-east-1 (N. Virginia).
Then deploy: a CloudFront distribution is created with your S3 bucket as its origin. Afterwards, note the ID of that CloudFront distribution and change serverless.yml as follows:
nextjsapp:
  component: "@sls-next/serverless-component@1.18.0"
  inputs:
    bucketName: < S3 bucket name >
    cloudfront:
      distributionId: < CloudFront distribution ID >
I am trying to upload a GitHub Actions workflow via the GitHub v3 API.
I am trying to do the following to upload the main.yaml file to .github/workflows/main.yaml:
await this.put(`https://api.github.com/repos/${this.ownerName}/${this.repoName}/contents/.github/workflows/main.yaml`, {
  message: title,
  content,
  sha,
  branch: newBranch
})
It seems as though including the .github in the file path of the URL returns a 404. Is it possible to upload to a hidden directory? Maybe I need to escape the . somehow?
It might be that the personal access token (PAT) being used has insufficient permissions.
It is slightly unintuitive, but to upload workflow files the PAT needs the "workflow" scope enabled too, not just write access to the repo.
The setting is on GitHub's personal access tokens page (Settings → Developer settings → Personal access tokens).
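For reference, a minimal sketch of the same request made with fetch (Node 18+), assuming a token that has the workflow scope; the owner, repo, branch name and workflow contents below are hypothetical placeholders:
// Minimal sketch: create .github/workflows/main.yaml via the GitHub contents API.
// GITHUB_PAT is assumed to be a personal access token with the "repo" and "workflow" scopes;
// the owner, repo, branch and workflow source below are hypothetical.
const owner = 'my-org';
const repo = 'my-repo';
const newBranch = 'add-ci-workflow';
const yamlSource = 'name: CI\non: push\n';

const response = await fetch(
  `https://api.github.com/repos/${owner}/${repo}/contents/.github/workflows/main.yaml`,
  {
    method: 'PUT',
    headers: {
      'Accept': 'application/vnd.github+json',
      'Authorization': `token ${process.env.GITHUB_PAT}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      message: 'Add CI workflow',
      content: Buffer.from(yamlSource).toString('base64'), // the API expects base64-encoded content
      branch: newBranch
      // include "sha" only when updating an existing file
    })
  }
);
if (!response.ok) {
  throw new Error(`GitHub API returned ${response.status}`);
}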
With Meteor, so in JavaScript, I am trying to display an image from Amazon S3.
In the src property of the image, I put the access URL to the image provided by the AWS console, and it works if my image is public.
If I make it non-public in Amazon S3, it does not work.
So I asked S3 to send me the correct URL and provided this URL to the src property of my image:
var s3 = new AWS.S3();
var params = {Bucket: 'mybucket', Key: 'photoPage06_4.jpg', Expires: 6000};
var url = s3.getSignedUrl('getObject', params);
console.log("URL", url);
document.getElementById('photoTest').src = url;
And there I have the error:
Failed to load resource: the server responded with a status of 400
(Bad Request)
Does somebody have an idea?
This is the correct behaviour. If an image is set as private on S3, you can't view it publicly. This means you won't be able to view the image by setting it as the src property of an image tag or by pasting it into the address bar in your browser.
This is intentional since many people and companies use it to back up files, share files internally, etc. So they need the option to keep their files private. The way they create and access their private files is through the AWS APIs, not directly over the web.
So to answer your question, you need to make any images you want to display on your site public in S3.
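If you keep uploading through the API but want individual images to be publicly viewable, a minimal sketch of that (AWS SDK for JavaScript v2, e.g. from a Node script; the bucket name is a placeholder) would be:
// Minimal sketch: upload an image as a publicly readable object (AWS SDK for JS v2).
// 'my-bucket' is a placeholder; credentials are assumed to be configured already.
var AWS = require('aws-sdk');
var fs = require('fs');

var s3 = new AWS.S3();
s3.putObject({
  Bucket: 'my-bucket',
  Key: 'photoPage06_4.jpg',
  Body: fs.readFileSync('photoPage06_4.jpg'),
  ContentType: 'image/jpeg',
  ACL: 'public-read' // makes this one object readable without a signed URL
}, function (err, data) {
  if (err) {
    console.error('Upload failed', err);
  } else {
    console.log('Upload succeeded, object is now publicly readable', data);
  }
});
Note that the bucket's Block Public Access settings also have to allow ACLs for this to take effect.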
I have created EC2 instances, with a load balancer and auto scaling as described in the documentation for AWS from the following link:
http://docs.aws.amazon.com/gettingstarted/latest/wah-linux/getting-started-application-server.html
I would like to store all of the user images and files in an S3 bucket, but I'm not sure of the best way to connect it to my web app.
The web-app has an API coded in PHP, which would need to upload files to the bucket. The front end is coded in JavaScript which would need to be able to access the files. These files will be on every page, and range from user images to documents.
Currently all the media is loaded locally, and nothing is stored on the S3 bucket. The front end and the API are both stored on EC2 instances and would need to access the S3 bucket contents.
What is the most efficient way to load the data stored on the S3 bucket? I have looked into the AWS SDK for JavaScript, but it doesn't seem to be the best way of getting the files. Here is the code that seems relevant:
var s3 = new AWS.S3();
var params = {Bucket: 'myBucket', Key: 'myKey'};
s3.getSignedUrl('getObject', params, function (err, url) {
  console.log("The URL is", url);
});
If this is the way to go, would I have to do this for each image?
Thank you
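For illustration only, a minimal sketch of what the snippet above implies if you go the signed-URL route (AWS SDK for JavaScript v2; the bucket name and keys are hypothetical), generating one URL per object:
// Minimal sketch: generate a time-limited signed URL for each image key.
// 'myBucket' and the keys are hypothetical placeholders.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var keys = ['users/avatar1.jpg', 'docs/report.pdf'];
var signedUrls = keys.map(function (key) {
  return s3.getSignedUrl('getObject', {
    Bucket: 'myBucket',
    Key: key,
    Expires: 3600 // URL stays valid for one hour
  });
});

// The front end can then use these URLs directly, e.g. as <img src> values.
console.log(signedUrls);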
I am attempting to load a series of images from a shared Dropbox folder like so:
function getSprite(raw) {
  var sprt = new Image();
  sprt.crossOrigin = '';
  sprt.src = 'https://dl.dropboxusercontent.com/s/k1v7iv85vntx107/AABOD-CfE3A5sQo0RPPmRmmJa/ground1.png' + (raw ? '?raw=1' : '');
  return sprt;
}
The folder is shared, and Dropbox says that 'People with Link can View'. I have tried to do the same with Google Drive, but I get a Cross Origin Error there.
EDIT: I just tried to share one of the files individually, and it worked. Do I now have to go through and do this for each file in the folder? I thought that if I just share the folder, I should have access to all its contents.
ERROR MESSAGE:
GET https://dl.dropboxusercontent.com/s/k1v7iv85vntx107/AABOD-CfE3A5sQo0RPPmRmmJa/characters/triggerman/up.png?raw=1 403 (Forbidden)
It looks like the original shared link you had was:
https://www.dropbox.com/sh/k1v7iv85vntx107/AABOD-CfE3A5sQo0RPPmRmmJa?dl=0
This is a shared link for a folder. Note that you can't just modify it directly to get shared links for individual files inside that folder though, which is what you appear to be trying in your question.
To get the individual files, you have a few options:
Manually get the shared links for each file via the Dropbox web site, as you mentioned.
Use the API to individually but programmatically get shared links for each file: https://www.dropbox.com/developers/documentation/http/documentation#sharing-create_shared_link_with_settings
Use the API to download the files in the original shared link by specifying the path inside the folder: https://www.dropbox.com/developers/documentation/http/documentation#sharing-get_shared_link_file This is likely closest to what you're looking for.
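To illustrate that last option, here is a minimal sketch of calling sharing/get_shared_link_file with fetch; the access token placeholder and the example file path are assumptions, and the shared link is the one from the question:
// Minimal sketch: download one file from inside a shared folder link via the Dropbox API.
// DROPBOX_TOKEN is a placeholder for a real access token.
const DROPBOX_TOKEN = '<your access token>';
const sharedLink = 'https://www.dropbox.com/sh/k1v7iv85vntx107/AABOD-CfE3A5sQo0RPPmRmmJa?dl=0';

async function downloadFromSharedFolder(pathInsideFolder) {
  const response = await fetch('https://content.dropboxapi.com/2/sharing/get_shared_link_file', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${DROPBOX_TOKEN}`,
      // This endpoint takes its arguments in a header, not in the request body.
      'Dropbox-API-Arg': JSON.stringify({ url: sharedLink, path: pathInsideFolder })
    }
  });
  if (!response.ok) {
    throw new Error(`Dropbox API returned ${response.status}`);
  }
  return response.blob(); // the file contents
}

// Example: fetch one sprite and use it as an image source.
downloadFromSharedFolder('/characters/triggerman/up.png').then(function (blob) {
  var img = new Image();
  img.src = URL.createObjectURL(blob);
});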
I don't think this has much to do with JavaScript. Open the URL in an incognito window and take a look, because all I can see is a 403 error from my browser.
I am using an S3 bucket as a static web solution to host a single-page React application.
The React application works fine when I hit the root domain s3-bucket.amazon.com, and the HTML5 history API works fine: every time I click on a link, the new URL looks right, e.g. http://s3-bucket.amazon.com/entities/entity_id.
The problem happens when I use permalinks to access the application. If I type that same URL (http://s3-bucket.amazon.com/entities/entity_id) directly into the browser, I get the following error from Amazon S3:
404 Not Found
Code: NoSuchKey
Message: The specified key does not exist.
Key: users
RequestId: 3A4B65F49D07B42C
HostId: j2KCK4tvw9DVQzOHaViZPwT7+piVFfVT3lRs2pr2knhdKag7mROEfqPRLUHGeD/TZoFJBI4/maA=
Is it possible to make Amazon S3 work nicely with permalinks and the HTML5 history API? Maybe it can act as a proxy?
Thank you
Solution using AWS CloudFront:
Step 1: Go to CloudFront
Click on the distribution ID
Go to the Error Pages tab
Click on Create Custom Error Response
Step 2: Fill in the form as follows:
HTTP Error Code: 404
TTL: 0
Custom Error Response: Yes
Response Page Path: /index.html
HTTP Response Code: 200
source: https://medium.com/@nikhilkapoor17/deployment-of-spa-without-location-strategy-61a190a11dfc
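For reference, a sketch of how the same custom error response looks inside a CloudFront DistributionConfig (for example when updating the distribution with the AWS SDK for JavaScript); only this fragment is shown, the rest of the config and the update call are omitted:
// Minimal sketch: the custom error response from the steps above, as it appears
// inside a CloudFront DistributionConfig (the rest of the config is omitted).
var customErrorResponses = {
  Quantity: 1,
  Items: [
    {
      ErrorCode: 404,                   // HTTP Error Code
      ErrorCachingMinTTL: 0,            // TTL
      ResponsePagePath: '/index.html',  // Response Page Path
      ResponseCode: '200'               // HTTP Response Code (a string in this API)
    }
  ]
};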
Sadly, S3 does not support the wildcard routing required for single-page apps (basically you want everything after the host in the URL to serve index.html, but preserve the route).
So www.blah.com/hello/world would actually serve www.blah.com/index.html but pass the route to the single page app.
The good news is you can do this with a CDN such as Fastly (Varnish) in front of S3. Setting up a rule such as:
if (req.url.ext == "") {
  set req.url = "/index.html";
}
This will rewrite all non-asset requests (anything without a file extension) to index.html on the domain.
I have no experience running SPA on Amazon S3, but this seems to be a problem of url-rewriting.
It is one thing to have the history (HTML5) API rewrite your URL when running your application / site.
But making rewritten URLs accessible when refreshing or cold-surfing to your site definitely needs URL rewriting at the server level.
I'm thinking web.config (IIS), .htaccess (Apache), or an nginx site configuration.
It seems the same question already got asked some time ago: https://serverfault.com/questions/650704/url-rewriting-in-amazon-s3
Specify the same file name for both the index document and the error document under the bucket's "Static website hosting" properties.
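For example, the same setting applied programmatically (a sketch using the AWS SDK for JavaScript v2; the bucket name is a placeholder):
// Minimal sketch: point both the index and the error document at index.html so
// unknown paths fall back to the single-page app. The bucket name is a placeholder.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

s3.putBucketWebsite({
  Bucket: 'my-spa-bucket',
  WebsiteConfiguration: {
    IndexDocument: { Suffix: 'index.html' },
    ErrorDocument: { Key: 'index.html' }
  }
}, function (err, data) {
  if (err) console.error(err);
  else console.log('Static website hosting updated', data);
});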
Old question, but the simplest way would be to use hash routing, so instead of mydomain.com/somepage/subpage it would be mydomain.com/#/somepage/subpage.
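A minimal sketch of the idea in plain JavaScript (the element id and routes are hypothetical); everything after the # stays in the browser, so S3 only ever has to serve /index.html:
// Minimal sketch: hash-based routing in plain JavaScript. The fragment after '#'
// never reaches S3, so no server-side rewriting is needed.
function renderRoute() {
  var route = window.location.hash.slice(1) || '/'; // e.g. '/somepage/subpage'
  document.getElementById('app').textContent = 'Current route: ' + route;
}

window.addEventListener('hashchange', renderRoute); // fires on #/... navigation
window.addEventListener('load', renderRoute);       // render the initial route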