Programmatically upload HTML files to a web server with Node.js - javascript

I have a discord.js bot with a ticket system (a system where users can create a private channel and ask staff for help). When a channel is deleted, my bot saves all messages and creates an HTML file with the messages that were in that channel. I want to upload that file to a web server so users can review it without having to download it, but I am unsure how to do it. The variable attachment below holds the HTML file/string to be uploaded.
const attachment = await discordTranscripts.createTranscript(channel, {
    limit: -1, // Max amount of messages to fetch.
    returnType: 'attachment', // Valid options: 'buffer' | 'string' | 'attachment'. Default: 'attachment'
    fileName: fileName, // Only valid with returnBuffer false. Name of attachment.
    minify: true // Minify the result? Uses html-minifier
});

This is quite an opinion-based question. There are a lot of ways to do it.
SSH
You can use node-ssh to log into your server via SSH and upload the file; a minimal sketch follows.
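For example, a rough sketch with the node-ssh package, assuming key-based SSH access and that you generate the transcript as a buffer or string (the host, user, and paths below are placeholders):
const fs = require('fs/promises');
const { NodeSSH } = require('node-ssh');

async function uploadTranscript(transcript, fileName) {
    // Write the transcript (Buffer or string) to a temporary local file first.
    const localPath = `/tmp/${fileName}`;
    await fs.writeFile(localPath, transcript);

    const ssh = new NodeSSH();
    await ssh.connect({
        host: 'example.com',                                    // your web server
        username: 'deploy',                                     // placeholder user
        privateKey: await fs.readFile('/home/bot/.ssh/id_rsa', 'utf8')
    });

    // Copy the file into the web server's document root.
    await ssh.putFile(localPath, `/var/www/html/transcripts/${fileName}`);
    ssh.dispose();
}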
REST API
Host a simple and secure REST API on the server that writes the HTML file, and call it from your main application; a minimal sketch of such an endpoint follows.
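For example, a sketch of such an endpoint using Express (the route, API-key check, and target directory are assumptions); the bot would POST the HTML to it:
const express = require('express');
const fs = require('fs/promises');
const path = require('path');

const app = express();
// Transcripts arrive as raw HTML in the request body.
app.use(express.text({ type: 'text/html', limit: '10mb' }));

app.post('/transcripts/:name', async (req, res) => {
    // Very basic shared-secret check; use something stronger in practice.
    if (req.headers['x-api-key'] !== process.env.TRANSCRIPT_API_KEY) {
        return res.sendStatus(401);
    }
    try {
        const fileName = path.basename(req.params.name); // strip any path components
        await fs.writeFile(path.join('/var/www/html/transcripts', fileName), req.body);
        res.json({ url: `https://example.com/transcripts/${fileName}` });
    } catch (err) {
        res.sendStatus(500);
    }
});

app.listen(3000);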
AWS S3
If you are using AWS, create an S3 bucket, make it publicly readable, and enable static website hosting. You can then put your HTML files into the bucket using the AWS SDK and access them via the bucket URL; a rough sketch follows.
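A rough sketch with the AWS SDK for JavaScript (v2), assuming the transcript is generated as a buffer or string and that the bucket name below is a placeholder:
const AWS = require('aws-sdk');

const s3 = new AWS.S3(); // credentials come from the environment or an IAM role

async function uploadTranscript(transcript, fileName) {
    await s3.putObject({
        Bucket: 'my-transcripts-bucket',        // placeholder bucket name
        Key: `transcripts/${fileName}`,
        Body: transcript,                       // Buffer or string returned by createTranscript
        ContentType: 'text/html'                // so browsers render it instead of downloading it
    }).promise();

    // With public read / static website hosting enabled, the object is now reachable at:
    return `https://my-transcripts-bucket.s3.amazonaws.com/transcripts/${fileName}`;
}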
If you are using another cloud platform like Azure or GCP, check out their own storage services.

Related

Uploading to Amazon S3 gives 403 error - following example photo album guide

I'm trying to upload a simple image to an S3 bucket. I always get a 403 error when I try to upload, but the list and createAlbum methods work.
I've followed this guide: Uploading Photos to Amazon S3 from a Browser - AWS SDK for JavaScript
Since the upload from the tutorial doesn't work for me, I've tried to use the S3.upload method, without success, as follows:
S3.upload({
    Key: 'my-album/aqui.png',
    Body: file,
    ACL: 'public-read',
    Bucket: SOUNDS_BUCKET_NAME
}, function(err, data) {
    if (err) {
        alert('fail');
    } else {
        alert('Successfully Uploaded!');
    }
});
I believe this error is caused by a configuration issue on my S3 bucket or my identity pool, but my configuration is the same as in the tutorial:
CORS
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
Cognito and S3 settings: (screenshots omitted)
What am I doing wrong?
This is Node.js code to upload files to S3.
Setting up the Environment
AWS Credentials
To get started you first need to generate AWS security credentials (an Access Key ID and a Secret Access Key). To do so, log in to your AWS Management Console.
Click on your username:
Select Access Keys -> Create New Access Key:
After that you can either copy the Access Key ID and Secret Access Key from this window or you can download it as a .CSV file:
Creating an S3 Bucket
Now let's create an AWS S3 bucket with proper access. We can do this using the AWS Management Console or with Node.js.
To create an S3 bucket using the management console, go to the S3 service by selecting it from the service menu:
Select "Create Bucket" and enter the name of your bucket and the region that you want to host your bucket. If you already know from which region the majority of your users will come from, it's wise to select a region as close to their's as possible. This will ensure that the files from the server will be served in a more optimal timeframe.
The name you select for your bucket must be unique among all AWS users, so try another one if the name is not available:
Follow through the wizard and configure permissions and other settings per your requirements.
To create the bucket using Node.js, we'll first have to set up our development environment.
Development Environment
Get started with our example by configuring a new Node.js project:
$ npm init
To start using any AWS cloud services in Node.js, we have to install the AWS SDK (Software Development Kit).
Install it using your preferred package manager - we'll use npm:
$ npm i --save aws-sdk
Implementation
Creating an S3 Bucket
If you have already created a bucket manually, you may skip this part. But if not, let's create a file, say, create-bucket.js in your project directory.
Import the aws-sdk library to access your S3 bucket:
const AWS = require('aws-sdk');
Now, let's define three constants to store ID, SECRET, and BUCKET_NAME. These are used to identify and access our bucket:
// Enter copied or downloaded access ID and secret key here
const ID = '';
const SECRET = '';
// The name of the bucket that you have created
const BUCKET_NAME = 'test-bucket';
Now we need to initialize the S3 interface by passing our access keys:
const s3 = new AWS.S3({
    accessKeyId: ID,
    secretAccessKey: SECRET
});
Source: Uploading Files to AWS S3 with Node.js
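The excerpt above stops at initializing the s3 client. For completeness, here is a minimal upload sketch using that client (not quoted from the tutorial; the file name and object key are placeholders):
const fs = require('fs');

const uploadFile = (fileName) => {
    // Read the local file and upload it to the bucket.
    const fileContent = fs.readFileSync(fileName);

    const params = {
        Bucket: BUCKET_NAME,
        Key: 'my-file.html',      // placeholder object key
        Body: fileContent
    };

    s3.upload(params, (err, data) => {
        if (err) throw err;
        console.log(`File uploaded successfully at ${data.Location}`);
    });
};

uploadFile('my-file.html');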

reading in a file from ubuntu (AWS EC2) on local machine?

I have a Python script which I'm running on AWS (an EC2 instance with Ubuntu). This Python script outputs a JSON file daily to a directory in /home/ubuntu:
import json

with open("/home/ubuntu/bandsintown/sf_events.json", "w") as writeJSON:
    file_str = json.dumps(allEvents, sort_keys=True)
    file_str = "var sf_events = " + file_str
    writeJSON.write(file_str)
All works as expected here. My issue is that I'm unsure how to read this JSON (which lives on the Ubuntu machine) into a JavaScript file that I'm running on my local machine.
JavaScript can't find the file if I reference it by its Ubuntu path:
<script src="/home/ubuntu/bandsintown/sf_events.json"></script>
In other words, I'd like to read the JSON that I've created in the cloud into a file that exists on my local machine. Should I output the JSON somewhere other than /home/ubuntu? Or can my local file somehow recognize /home/ubuntu as a file location?
Thanks in advance.
The problem occurs because the file does not exist on your local machine, only on the running EC2 instance.
A possible solution is to upload the JSON file from the EC2 instance to S3 and afterwards download it to your local machine as /home/ubuntu/bandsintown/sf_events.json.
First, install the AWS CLI on the running EC2 instance and run the following commands in the terminal:
aws configure
aws s3 cp /home/ubuntu/bandsintown/sf_events.json s3://mybucket/sf_events.json
Or install the Python AWS SDK (boto3) and upload it via Python:
import boto3

s3 = boto3.resource('s3')

def upload_file_to_s3(s3_path, local_path):
    bucket = s3_path.split('/')[2]  # bucket name is the element right after "s3://" (index 2)
    file_path = '/'.join(s3_path.split('/')[3:])
    response = s3.Object(bucket, file_path).upload_file(local_path)
    return response

s3_path = "s3://mybucket/sf_events.json"
local_path = "/home/ubuntu/bandsintown/sf_events.json"
upload_file_to_s3(s3_path, local_path)
Then, on your local machine, download the file from S3 via the AWS CLI:
aws configure
aws s3 cp s3://mybucket/sf_events.json /home/ubuntu/bandsintown/sf_events.json
Or, if you prefer the Python SDK:
import boto3

s3 = boto3.resource('s3')

def download_file_from_s3(s3_path, local_path):
    bucket = s3_path.split('/')[2]  # bucket name is the element right after "s3://" (index 2)
    file_path = '/'.join(s3_path.split('/')[3:])
    s3.Object(bucket, file_path).download_file(local_path)

s3_path = "s3://mybucket/sf_events.json"
local_path = "/home/ubuntu/bandsintown/sf_events.json"
download_file_from_s3(s3_path, local_path)
Or use the JavaScript SDK running inside the browser, but I would not recommend this, because you would have to make your bucket public and also deal with browser compatibility issues.
You can use AWS S3.
You can run one Python script on your instance that uploads the JSON file to S3 whenever the JSON is generated, and another Python script on your local machine that either listens to an SQS queue and downloads the file, or simply downloads the latest file uploaded to the S3 bucket.
Case 1:
Whenever the JSON file is uploaded to S3, you get a message in the SQS queue saying that the file has been uploaded, and the file is then downloaded to your local machine.
Case 2:
Whenever the JSON file is uploaded to S3, you run the download script, which downloads the latest JSON file.
upload.py:
import boto3
import os

def upload_files(path):
    session = boto3.Session(
        aws_access_key_id='your access key id',
        aws_secret_access_key='your secret key id',
        region_name='region'
    )
    s3 = session.resource('s3')
    bucket = s3.Bucket('bucket name')

    # Walk the directory tree and upload every file, keyed by its path relative to `path`.
    for subdir, dirs, files in os.walk(path):
        for file in files:
            full_path = os.path.join(subdir, file)
            print(full_path[len(path):])
            with open(full_path, 'rb') as data:
                bucket.put_object(Key=full_path[len(path):], Body=data)

if __name__ == "__main__":
    upload_files('your path, which in your case is /home/ubuntu/')
Your other script on the local machine:
download1.py, using an SQS queue:
import boto3
import botocore
from logzero import logger

s3_resource = boto3.resource('s3')
sqs_client = boto3.client('sqs')

### Queue URL
queue_url = 'queue url'

### AWS S3 bucket
bucketName = "your bucket-name"

### Receive the message from the SQS queue
response_message = sqs_client.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    MessageAttributeNames=['All'],
)

message = response_message['Messages'][0]
receipt_handle = message['ReceiptHandle']
messageid = message['MessageId']
filename = message['Body']

try:
    s3_resource.Bucket(bucketName).download_file(filename, filename)
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == '404':
        logger.info("The object does not exist.")
    else:
        raise

logger.info("File Downloaded")
download2.py, downloading the latest file from S3:
import boto3

### S3 connection
s3_resource = boto3.resource('s3')
s3_client = boto3.client('s3')

bucketName = 'your bucket-name'

# Find the most recently modified object in the bucket.
response = s3_client.list_objects_v2(Bucket=bucketName)
all_objects = response['Contents']
latest = max(all_objects, key=lambda x: x['LastModified'])
key = latest['Key']

print("downloading file")
s3_resource.Bucket(bucketName).download_file(key, key)
print("file downloaded")
You basically need to copy a file from a remote machine to your local one. The simplest way is to use scp. In the following example it just copies to your current directory. If you are on Windows, open PowerShell; if you are on Linux, scp should already be installed.
scp <username>@<your ec2 instance host or IP>:/home/ubuntu/bandsintown/sf_events.json ./
Run the command and enter your password, and you are done. This is the same way you use ssh to connect to your remote machine. (I believe your username would be ubuntu.)
A more advanced method would be to mount your remote directory via SSHFS. It is a little cumbersome to set up, but then you have instant access to the remote files as if they were local.
And if you want to do it programmatically from Python, see this question.
Copying files from local to EC2
Your private key must not be publicly visible. Run the following command so that only you (the file owner) can read it.
chmod 400 yourPublicKeyFile.pem
To copy files between your computer and your instance you can use an SFTP client like FileZilla or the command scp. "scp" means "secure copy"; it can copy files between computers on a network. You can use this tool in a terminal on a Unix/Linux/Mac system.
To use scp with a key pair use the following command:
scp -i /directory/to/abc.pem /your/local/file/to/copy user@ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:path/to/file
You need to specify the correct Linux user. From Amazon:
For Amazon Linux, the user name is ec2-user.
For RHEL, the user name is ec2-user or root.
For Ubuntu, the user name is ubuntu or root.
For CentOS, the user name is centos.
For Fedora, the user name is ec2-user.
For SUSE, the user name is ec2-user or root.
Otherwise, if ec2-user and root don’t work, check with your AMI provider.
To use it without a key pair, just omit the flag -i and type in the password of the user when prompted.
Note: You need to make sure that the user "user" has permission to write to the target directory. In this example, if ~/path/to/file was created by user "user", it should be fine.
Copying files from EC2 to local
To use scp with a key pair use the following command:
scp -i /directory/to/abc.pem user@ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:path/to/file /your/local/directory/files/to/download
Hack 1: When downloading files from EC2, download a whole folder by archiving it first.
zip -r squash.zip /your/ec2/directory/
Hack 2: You can then download all archived files from EC2 with the command below.
scp -i /directory/to/abc.pem user@ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:~/* /your/local/directory/files/to/download
Have you thought about using EFS for this? You can mount EFS on EC2 as well as on your local machine over a VPN or AWS Direct Connect. Could you save the file on EFS so both sides can access it?
Hope this helps.

How to use .pem file inside Amazon Lambda to access EC2 instance

I'm currently working on a project that takes place inside the AWS environment. I have configured an S3 bucket to receive mails (the mails come from SES, but that's not relevant).
What I want to do is create a Lambda function that can access an EC2 instance and launch a Python script. So far I have the code below. The problem is that when I created my EC2 instance, I didn't create any username or password to connect via SSH; I only have a .pem file (certificate file) to authenticate to the instance.
I did some research but I couldn't find anything useful.
var SSH = require('simple-ssh');

var ssh = new SSH({
    host: 'localhost',
    user: 'username',
    pass: 'password'
});

ssh.exec('python3.6 path/to/my/python/script.py', {
    out: function(stdout) {
        console.log(stdout);
    }
}).start();
I've been thinking of several solutions, but I'm not sure at all:
find an SSH library in JavaScript that handles .pem files
converting the .pem into a string (not secure at all, in my opinion)
maybe create a new SSH user in EC2?
Thank you for your time.
A better option would be to use AWS Systems Manager (Run Command) to remotely run commands on your Amazon EC2 instances; a rough sketch follows.
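A rough sketch of that approach with the AWS SDK's SSM client, assuming the instance runs the SSM agent and has an instance profile that allows Systems Manager (the instance ID and script path are placeholders):
const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

exports.handler = async () => {
    // Ask Systems Manager to run a shell command on the target instance.
    const result = await ssm.sendCommand({
        DocumentName: 'AWS-RunShellScript',                   // built-in SSM document
        InstanceIds: ['i-0123456789abcdef0'],                 // placeholder instance ID
        Parameters: { commands: ['python3.6 path/to/my/python/script.py'] }
    }).promise();

    console.log('Command ID:', result.Command.CommandId);
};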
If you still choose to use simple-ssh, then you need to supply an SSH key in config.key when creating your SSH object. You can store the private key in Parameter Store or Secrets Manager and retrieve it within the Lambda; a sketch of that approach follows. In this case, you should definitely use passwordless SSH (with a key pair).
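A minimal sketch of the simple-ssh route, assuming the private key is stored as a SecureString parameter named /ec2/ssh-key (a hypothetical name) and the Lambda can reach the instance over SSH:
const AWS = require('aws-sdk');
const SSH = require('simple-ssh');

const ssm = new AWS.SSM();

exports.handler = async () => {
    // Fetch the private key from Parameter Store (stored as a SecureString).
    const { Parameter } = await ssm.getParameter({
        Name: '/ec2/ssh-key',           // hypothetical parameter name
        WithDecryption: true
    }).promise();

    return new Promise((resolve, reject) => {
        const ssh = new SSH({
            host: 'ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com',  // your instance
            user: 'ubuntu',
            key: Parameter.Value                                 // private key contents instead of a password
        });

        ssh.exec('python3.6 path/to/my/python/script.py', {
            out: (stdout) => console.log(stdout),
            exit: (code) => (code === 0 ? resolve(code) : reject(new Error(`exit code ${code}`)))
        }).start();
    });
};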

How to hide credentials in a NodeJS/Express app?

I am trying to create a Node.js server that runs locally on the client's machine. I have all the credentials to access the DB in a config file, and I have created an exe file using the pkg module so that my client can run the exe and have the server running on their machine.
I do not want my client to get hold of the credentials, but the source code of the exe file contains them. How can I safeguard the database credentials?
Store your credentials encrypted and have your program decrypt them on startup, either directly or indirectly (through a compiled helper, to further hide your logic); a sketch follows.
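For instance, a minimal sketch using Node's built-in crypto module, assuming the credentials were encrypted with AES-256-GCM and the decryption key is supplied from outside the executable (e.g. an environment variable provided at install time); if the key ships inside the binary, this is only obfuscation:
const crypto = require('crypto');

function decryptCredentials(encryptedBase64, keyHex) {
    // Layout assumed here: 12-byte IV, 16-byte auth tag, then the ciphertext.
    const buf = Buffer.from(encryptedBase64, 'base64');
    const iv = buf.subarray(0, 12);
    const tag = buf.subarray(12, 28);
    const ciphertext = buf.subarray(28);

    // keyHex must decode to 32 bytes for AES-256.
    const decipher = crypto.createDecipheriv('aes-256-gcm', Buffer.from(keyHex, 'hex'), iv);
    decipher.setAuthTag(tag);
    const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);
    return JSON.parse(plaintext.toString('utf8')); // e.g. { user, password, host }
}

// Hypothetical usage: ciphertext baked into the config, key supplied at runtime.
const dbConfig = decryptCredentials(process.env.DB_CONFIG_ENC, process.env.DB_CONFIG_KEY);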

Amazon S3 serving file only to app user in react-native

I am building a social network in react-native / Node.js and using the Amazon S3 service to handle users' personal photos (upload, serve).
But I can't seem to wrap my head around how to serve those images. My question is: how do I serve those uploaded images to only the application's users and not the whole world?
At first I tried to explicitly fetch the image myself; this allowed me to directly put in the S3 credentials, but it doesn't seem practical.
Is there a way to make every GET call made by an app authorized to fetch from my bucket ?
What you need are S3 signed URLs: http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
Instead of fetching images, create unique signed URLs to these images (with a custom expiration time, say 1 week) and pass those to your application. This way you can close your S3 bucket to the world, but your application will still be able to obtain the images with these private links.
Thanks to @sergey I made it to what fits my needs: the getSignedUrl method.
Here is the code that worked for me :
import AWS from 'aws-sdk/dist/aws-sdk-react-native';

const credentials = new AWS.Credentials({ accessKeyId: '', secretAccessKey: '' });
const s3 = new AWS.S3({ credentials, signatureVersion: 'v4', region: '' });

// and there it is.
const url = s3.getSignedUrl('getObject', { Bucket: 'your bucket name', Key: 'the filename' });
Now, each time I loop through an array containing references to my photos, I create a single pre-signed URL for each item and put it in my component.
You can use the new AWS Amplify library to accomplish this: https://github.com/aws/aws-amplify
There is an Auth module for getting user credentials and establishing identities, both in an authenticated and unauthenticated state, as well as a Storage module which has public and private access profiles.
Install via npm:
npm install --save aws-amplify-react-native
You will need to link the project if using Cognito User Pools:
react-native link amazon-cognito-identity-js
More info here: https://github.com/aws/aws-amplify/blob/master/media/quick_start.md#react-native-development
Then pull in the modules:
import Amplify, {Auth, Storage} from 'aws-amplify-react-native'
Amplify.configure('your_config_file');
Storage.configure({level: 'private'});
Storage.get('myfile');
