How to use .pem file inside Amazon Lambda to access EC2 instance - javascript

I'm currently working on a project that takes place inside the AWS environment. I have configured an S3 bucket to receive emails (the emails come from SES, but that's not relevant here).
What I want to do is create a Lambda function that can access an EC2 instance and launch a Python script. So far I have the code below. The problem is that when I created my EC2 instance, I didn't create any username or password to connect via SSH. I only have a .pem file (certificate file) to authenticate to the instance.
I did some research but I couldn't find anything useful.
var SSH = require('simple-ssh');

var ssh = new SSH({
    host: 'localhost',
    user: 'username',
    pass: 'password'
});

ssh.exec('python3.6 path/to/my/python/script.py', {
    out: function(stdout) {
        console.log(stdout);
    }
}).start();
I've been thinking of several solutions, but I'm not sure about any of them:
find an SSH library in JavaScript that handles .pem files
convert the .pem into a string (not secure at all, in my opinion)
maybe create a new SSH user on the EC2 instance?
Thank you for your time.

A better option would be to use AWS Systems Manager to remotely run commands on your Amazon EC2 instances.
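For example, from the Lambda you could run the script through Run Command with the AWS SDK. This is only a minimal sketch, assuming the instance runs the SSM agent and has an instance profile allowing Systems Manager; the instance ID and script path below are placeholders:

const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

exports.handler = async () => {
    // AWS-RunShellScript is a managed document that runs shell commands on the instance
    const result = await ssm.sendCommand({
        DocumentName: 'AWS-RunShellScript',
        InstanceIds: ['i-0123456789abcdef0'], // placeholder instance ID
        Parameters: { commands: ['python3.6 /path/to/my/python/script.py'] }
    }).promise();
    console.log('Command ID:', result.Command.CommandId);
};

The Lambda's execution role would also need ssm:SendCommand permission.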
If you still choose to use simple-ssh, then you need to supply an SSH private key in config.key when creating your SSH object. You can store the private key in Parameter Store or Secrets Manager and retrieve it within the Lambda. In this case, you should definitely use passwordless SSH (with a key pair).
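A rough sketch of that approach, assuming the .pem contents are stored as a SecureString parameter (the parameter name, host, and user below are placeholders, and the Lambda must have network access to the instance):

const AWS = require('aws-sdk');
const SSH = require('simple-ssh');

const ssmClient = new AWS.SSM();

exports.handler = async () => {
    // Fetch the .pem contents from Parameter Store (hypothetical parameter name)
    const param = await ssmClient.getParameter({
        Name: '/my-app/ec2-ssh-key',
        WithDecryption: true
    }).promise();

    const ssh = new SSH({
        host: 'ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com', // your instance's address
        user: 'ubuntu', // or 'ec2-user', depending on the AMI
        key: param.Parameter.Value // contents of the .pem file
    });

    // Wait for the script to finish before the Lambda returns
    await new Promise((resolve, reject) => {
        ssh.exec('python3.6 path/to/my/python/script.py', {
            out: stdout => console.log(stdout),
            exit: () => resolve()
        }).start({ fail: reject });
    });
};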

Related

Programatically upload html files to a web server with nodejs

I have a Discord.js bot with a ticket system (a system where users can create a private channel and ask staff for help). When a channel is deleted, my bot saves all messages and creates an HTML file with the messages that were in that channel. I want to upload that file to a web server so users can review it without having to download it, but I am unsure how to do it. The variable attachment below holds the HTML file/string to be uploaded.
const attachment = await discordTranscripts.createTranscript(channel, {
    limit: -1, // Max amount of messages to fetch.
    returnType: 'attachment', // Valid options: 'buffer' | 'string' | 'attachment' Default: 'attachment'
    fileName: fileName, // Only valid with returnBuffer false. Name of attachment.
    minify: true // Minify the result? Uses html-minifier
});
This is quite an opinion-based question. There are a lot of ways to do it.
SSH
You can use Node SSH to log into your server via SSH and upload the file.
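A minimal sketch with the node-ssh package; the host, username, key path, and remote web root are placeholders, and this assumes the transcript was generated with returnType: 'string' so html is plain text:

const fs = require('fs');
const { NodeSSH } = require('node-ssh');

async function uploadTranscript(html, fileName) {
    const ssh = new NodeSSH();
    await ssh.connect({
        host: 'your.server.com',           // placeholder host
        username: 'deploy',                // placeholder user
        privateKeyPath: '/path/to/id_rsa'  // placeholder key file
    });

    // Write the transcript locally, then copy it into the web root
    const localPath = `/tmp/${fileName}`;
    fs.writeFileSync(localPath, html);
    await ssh.putFile(localPath, `/var/www/html/transcripts/${fileName}`);
    ssh.dispose();
}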
Rest API
Host a simple and secure Rest API in the server to write the HTML file and call it from your main application.
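For instance, a tiny Express endpoint on the web server could accept the HTML body and write it into the web root. This is only a sketch: the route, directory, and port are made up, and you would want to add authentication before exposing it.

const express = require('express');
const fs = require('fs');
const path = require('path');

const app = express();
app.use(express.text({ type: 'text/html', limit: '5mb' }));

// Hypothetical endpoint: the bot POSTs the transcript HTML here
app.post('/transcripts/:name', (req, res) => {
    const target = path.join('/var/www/transcripts', path.basename(req.params.name) + '.html');
    fs.writeFileSync(target, req.body);
    res.sendStatus(201);
});

app.listen(3000);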
AWS S3
If you are using AWS, create an S3 bucket, make it publicly readable and enable static website hosting. Now you can put your HTML files into the S3 bucket using the AWS SDK and access them via the bucket's URL.
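A minimal sketch with the AWS SDK for JavaScript; the bucket name is a placeholder, and it assumes the transcript was generated with returnType: 'string':

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

async function uploadTranscript(fileName, html) {
    await s3.putObject({
        Bucket: 'my-transcripts-bucket', // placeholder bucket
        Key: fileName,
        Body: html,
        ContentType: 'text/html'
    }).promise();
    // With the bucket publicly readable, the file can be fetched at its object URL
    return `https://my-transcripts-bucket.s3.amazonaws.com/${fileName}`;
}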
If you are using some other cloud platform like Azure or GCP, check out their own storage service.

reading in a file from ubuntu (AWS EC2) on local machine?

I have a Python script which I'm running on AWS (an EC2 instance with Ubuntu). This Python script outputs a JSON file daily, to a directory in /home/ubuntu:
with open("/home/ubuntu/bandsintown/sf_events.json", "w") as writeJSON:
    file_str = json.dumps(allEvents, sort_keys=True)
    file_str = "var sf_events = " + file_str
All works as expected here. My issue is that I'm unsure how to read this JSON (which lives on the Ubuntu machine) into a JavaScript file that I'm running on my local machine.
JavaScript can't find the file if I reference it by its Ubuntu path:
<script src="/home/ubuntu/bandsintown/sf_events.json"></script>
In other words, I'd like to read the JSON that I've created in the cloud into a file that exists on my local machine. Should I output the JSON somewhere other than /home/ubuntu? Or can my local file somehow recognize /home/ubuntu as a file location?
Thanks in advance.
The problem occurs because the file does not exist on your local machine, only on the running EC2 instance.
A possible solution is to upload the JSON file from the EC2 instance to S3 and afterward download it to /home/ubuntu/bandsintown/sf_events.json on your local machine.
First, install the AWS CLI on the running EC2 instance and run the following commands in the terminal:
aws configure
aws s3 cp /home/ubuntu/bandsintown/sf_events.json s3://mybucket/sf_events.json
Or install the Python AWS SDK (boto3) and upload it via Python:
import boto3

s3 = boto3.resource('s3')

def upload_file_to_s3(s3_path, local_path):
    bucket = s3_path.split('/')[2]  # bucket is always the third element, since paths look like s3://bucket/...
    file_path = '/'.join(s3_path.split('/')[3:])
    response = s3.Object(bucket, file_path).upload_file(local_path)
    return response

s3_path = "s3://mybucket/sf_events.json"
local_path = "/home/ubuntu/bandsintown/sf_events.json"
upload_file_to_s3(s3_path, local_path)
Then, on your local machine, download the file from S3 via the AWS CLI:
aws configure
aws s3 cp s3://mybucket/sf_events.json /home/ubuntu/bandsintown/sf_events.json
Or, if you prefer the Python SDK:
import boto3

s3 = boto3.resource('s3')

def download_file_from_s3(s3_path, local_path):
    bucket = s3_path.split('/')[2]  # bucket is always the third element, since paths look like s3://bucket/...
    file_path = '/'.join(s3_path.split('/')[3:])
    s3.Object(bucket, file_path).download_file(local_path)

s3_path = "s3://mybucket/sf_events.json"
local_path = "/home/ubuntu/bandsintown/sf_events.json"
download_file_from_s3(s3_path, local_path)
Or use the JavaScript SDK running inside the browser, but I would not recommend this because you would have to make your bucket public and also take care of browser compatibility issues.
You can use AWS S3.
You can run one Python script on your instance which uploads the JSON file to S3 whenever the JSON gets generated, and another Python script on your local machine that either listens to an SQS queue and downloads the file from S3, or simply downloads the latest file uploaded to the S3 bucket.
Case 1:
Whenever the JSON file gets uploaded to S3, you will get a message in the SQS queue that the file has been uploaded, and then the file gets downloaded to your local machine.
Case 2:
Whenever the JSON file gets uploaded to S3, you can run the download script, which downloads the latest JSON file.
upload.py:
import boto3
import os

def upload_files(path):
    session = boto3.Session(
        aws_access_key_id='your access key id',
        aws_secret_access_key='your secret key id',
        region_name='region'
    )
    s3 = session.resource('s3')
    bucket = s3.Bucket('bucket name')

    for subdir, dirs, files in os.walk(path):
        for file in files:
            full_path = os.path.join(subdir, file)
            print(full_path[len(path):])
            with open(full_path, 'rb') as data:
                bucket.put_object(Key=full_path[len(path):], Body=data)

if __name__ == "__main__":
    upload_files('/home/ubuntu/')  # your path, which in your case is /home/ubuntu/
Your other scripts on the local machine:
download1.py, with an SQS queue:
import boto3
import botocore.exceptions
from logzero import logger

s3_resource = boto3.resource('s3')
sqs_client = boto3.client('sqs')

### Queue URL
queue_url = 'queue url'

### AWS S3 bucket
bucketName = "your bucket-name"

### Receive the message from the SQS queue
response_message = sqs_client.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=1,
    MessageAttributeNames=['All'],
)

message = response_message['Messages'][0]
receipt_handle = message['ReceiptHandle']
messageid = message['MessageId']
filename = message['Body']

try:
    s3_resource.Bucket(bucketName).download_file(filename, filename)
except botocore.exceptions.ClientError as e:
    if e.response['Error']['Code'] == '404':
        logger.info("The object does not exist.")
    else:
        raise
logger.info("File Downloaded")
download2.py, which downloads the latest file from S3:
import boto3

### S3 connection
s3_resource = boto3.resource('s3')
s3_client = boto3.client('s3')

bucketName = 'your bucket-name'

response = s3_client.list_objects_v2(Bucket=bucketName)
all = response['Contents']
latest = max(all, key=lambda x: x['LastModified'])
key = latest['Key']

print("downloading file")
s3_resource.Bucket(bucketName).download_file(key, key)
print("file downloaded")
You basically need to copy a file from the remote machine to your local one. The simplest way is to use scp. In the following example it just copies the file to your current directory. If you are on Windows, open PowerShell; if you are on Linux, scp should already be installed.
scp <username>@<your ec2 instance host or IP>:/home/ubuntu/bandsintown/sf_events.json ./
Run the command, enter your password, and you are done. It works the same way as using ssh to connect to your remote machine. (I believe your username would be ubuntu.)
A more advanced method would be mounting your remote directory via SSHFS. It is a little cumbersome to set up, but then you have instant access to the remote files as if they were local.
And if you want to do it programmatically from Python, see this question.
Copying files from local to EC2
Your private key must not be publicly visible. Run the following command so that only your user can read the file.
chmod 400 yourPrivateKeyFile.pem
To copy files between your computer and your instance you can use an FTP client like FileZilla or the scp command. “scp” means “secure copy”; it can copy files between computers on a network. You can use this tool in a terminal on a Unix/Linux/Mac system.
To use scp with a key pair use the following command:
scp -i /directory/to/abc.pem /your/local/file/to/copy user@ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:path/to/file
You need to specify the correct Linux user. From Amazon:
For Amazon Linux, the user name is ec2-user.
For RHEL, the user name is ec2-user or root.
For Ubuntu, the user name is ubuntu or root.
For Centos, the user name is centos.
For Fedora, the user name is ec2-user.
For SUSE, the user name is ec2-user or root.
Otherwise, if ec2-user and root don’t work, check with your AMI provider.
To use it without a key pair, just omit the flag -i and type in the password of the user when prompted.
Note: You need to make sure that the user “user” has the permission to write in the target directory. In this example, if ~/path/to/file was created by user “user”, it should be fine.
Copying files from EC2 to local
To use scp with a key pair use the following command:
scp -i /directory/to/abc.pem user@ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:path/to/file /your/local/directory/files/to/download
Hack 1: When downloading a folder from EC2, archive it first:
zip -r squash.zip /your/ec2/directory/
Hack 2: You can download all archived files from EC2 with the command below:
scp -i /directory/to/abc.pem user@ec2-xx-xx-xxx-xxx.compute-1.amazonaws.com:~/* /your/local/directory/files/to/download
Have you thought about using EFS for this? You can mount EFS on EC2 as well as on your local machine over a VPN or Direct Connect. Could you save the file on EFS so both sources can access it?
Hope this helps.

Can't access EC2 instance IAM Role Credentials from NodeJS Javascript

I've successfully added an AWS Role to an EC2 instance. When I ssh into the instance and curl http://169.254.169.254/latest/meta-data/iam/security-credentials/, I can see the temporary credentials just fine.
On this EC2 instance, I have an NGINX reverse proxy serving a Node.js web app. I want to be able to access DynamoDB in the JavaScript of the web app, but AWS.config.credentials is null.
From what I've read, shouldn't these credentials be loaded automatically since the role was applied to the EC2 instance?
Is there some way to pass those credentials into the web app that I'm missing?
DynamoDB is being set up like this:
private dynamoDB = new AWS.DynamoDB();
private dynamoDBClient = new AWS.DynamoDB.DocumentClient({ service: this.dynamoDB, convertEmptyValues: true });

Using private key from GoDaddy on Nodejs

We purchased a domain name and SSL certificate on GoDaddy, but our server is not on GoDaddy. We run LAMPP and Node.js on our server, and we are trying to set up SSL with both. There is no problem with LAMPP: the private key and certificate from GoDaddy are working. But when I try the same files with Node.js, it fails.
This is my JS script:
ssl = {
    key: fs.readFileSync("./key.pem", 'utf8'),
    cert: fs.readFileSync("./cert.crt", 'utf8'),
    ca: [
        fs.readFileSync('./g1.crt', 'utf8'),
        fs.readFileSync('./g2.crt', 'utf8'),
        fs.readFileSync('./g3.crt', 'utf8')
    ]
};

server = require('https').createServer(ssl, app);
This is the error:
_tls_common.js:104
c.context.setKey(options.key, options.passphrase);
^
Error: error:0909006C:PEM routines:get_name:no start line
After some googling, I have tried several solutions: adding "utf8", splitting the gd bundle, using Notepad++ to fix the files. None of them helped.
However, Node.js can use my self-signed key and certificate files. So I would like to ask: did I generate my key incorrectly? Should I manually generate the private key/CSR locally and request a new certificate on GoDaddy? Or is there something wrong in my code?
This error message would mean that those files are wrong, corrupt, or were requested for other OS environments. So we have some options.
Resolution in the code (import the file system library and use the full path):
const fs = require('fs');

let yourKey = fs.readFileSync('./folderOne/folderTwo/initial.key').toString();
let yourCertificate = fs.readFileSync('./folderOne/folderTwo/certificate.crt').toString();

var credentials = { key: yourKey, cert: yourCertificate };
Resolution by requesting OS-compatible certificates:
Request new certificates with a note about the OS (Linux, Windows, etc.), sending the provider the initial key that was sent to you.
Important: you only need the .crt file and the private key.

How to connect to elasticache in my case?

I am trying to connect to AWS ElastiCache from my app.
I know the endpoint and the port but for some reason I can't connect to it.
I used this npm package:
https://www.npmjs.com/package/node-memcached-client
code:
const Memcached = require('node-memcached-client');

const client = new Memcached({
    host: 'mycache.aa11c.0001.use2.cache.amazonaws.com', // fake aws cache endpoint
    port: 11211
});

console.log(client); // I can see it outputs stuff

client.connect()
    .then(c => {
        console.log('connected');
        console.log(c);
    }).catch(function(err) {
        console.log('error connecting');
        console.log(err);
    });
For some reason when I run the code, all I see is
[Memcached] INFO: Nothing any connection to mycache.aa11c.0001.use2.cache.amazonaws.com:11211, created sid:1
No errors or 'connected' message in the console. Am I doing something wrong here?
Thanks!
You may want to go over the document from AWS below on accessing ElastiCache resources from outside AWS:
Access AWS Elasticache from outside
I would recommend setting up a local memcached instance for development and debugging, and connect to Elasticache from an EC2 instance in test and production environments.
The ROI for trying to set up NAT and map the IP addresses is not justifiable for dev/test unless absolutely necessary.
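A minimal sketch of that setup with the same node-memcached-client package, assuming the endpoint comes from a hypothetical MEMCACHED_HOST environment variable so the same code runs against local memcached in development and ElastiCache elsewhere:

const Memcached = require('node-memcached-client');

// Local memcached in development, the ElastiCache endpoint (or an SSH tunnel) in other environments
const host = process.env.MEMCACHED_HOST || '127.0.0.1';
const client = new Memcached({
    host: host,
    port: 11211
});

client.connect()
    .then(() => console.log('connected to', host))
    .catch(err => console.error('error connecting', err));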
I ended up using port forwarding to work around the issue of my local machine not being able to connect to ElastiCache, by using:
ssh -L 11211:{your elasticache endpoint}:11211 {your ec2 instance ip}
