How to download a PDF from S3 in Lambda using Node.js?

I created a React application using AWS Amplify, and part of the functionality that I need to implement is a post-processing Lambda function that runs after a user has made a purchase on the site.
The function needs to (1) retrieve a PDF file from S3, (2) alter the file in a few ways, and then (3) upload the file back to S3 under a new name. I am getting stuck in that first part when it comes to downloading the file as a blob in Lambda.
I followed the Amplify documentation, which suggests the following code:

    import { Storage } from "@aws-amplify/storage"
    await Storage.get('test.txt', {
        level: 'public'
    });
I realized that the function AWS created uses CommonJS modules rather than ES modules, so I cannot import packages that way. I need to use const something = require('something'); instead.
I tried following the documentation that I listed above, but I can't seem to find a good path forward. Is Lambda not supposed to be able to run a quick pipeline and retrieve a file from S3?
Thanks for the help!

You can use the AWS SDK for Node.js to retrieve the object from S3 and upload the updated file; the Amplify modules are targeted at browser use. Your function will also need an execution role that permits it to getObject and putObject in the appropriate S3 bucket.
For Node.js, I would recommend using v3 of the AWS SDK. Specifically, look at the GetObjectCommand.
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/index.html
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/getobjectcommand.html
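To make the three steps concrete, here is a minimal sketch of a Lambda handler using SDK v3's GetObjectCommand and PutObjectCommand. The bucket and key names are placeholders, and the "alter the PDF" step is left as a comment since it depends on your PDF library of choice:

```javascript
// Sketch, assuming the AWS SDK for JavaScript v3 (bundled with the
// nodejs18.x Lambda runtime). Bucket and key names are placeholders.
const { S3Client, GetObjectCommand, PutObjectCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({});

exports.handler = async (event) => {
  // 1. Download the PDF as a Buffer
  const getResult = await s3.send(new GetObjectCommand({
    Bucket: "my-bucket",          // placeholder
    Key: "public/test.pdf",       // placeholder
  }));
  const pdfBytes = Buffer.from(await getResult.Body.transformToByteArray());

  // 2. ...alter pdfBytes with your PDF library of choice...

  // 3. Upload the result under a new name
  await s3.send(new PutObjectCommand({
    Bucket: "my-bucket",
    Key: "public/test-processed.pdf",
    Body: pdfBytes,
    ContentType: "application/pdf",
  }));
};
```

The execution role attached to the function must allow s3:GetObject and s3:PutObject on the bucket, as noted above.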

Related

Single API to load JSON file in Browser and NodeJS?

Is there an existing API or library that can be used to load a JSON file in both the browser and Node?
I'm working on a module that I intend to run both from the command-line in NodeJS, and through the browser. I'm using the latest language features common to both (and don't need to support older browsers), including class keywords and the ES6 import syntax. The class in question needs to load a series of JSON files (where the first file identifies others that need to be loaded), and my preference is to access them as-is (they are externally defined and shared with other tools).
The "import" command looks like it might work for the first JSON file, except that I don't see a way of using that to load a series of files (determined by the first file) into variables.
One option is to pass in a helper function to the class for loading files, which the root script would populate as appropriate for NodeJS or the browser.
Alternatively, my current leading idea (though still not ideal in my mind) is to define a separate module with an async function loadFile(fn) that can be imported, and to set the paths so that a different version of that file loads for the browser vs. NodeJS.
This seems like something that should have a native option, or that somebody else would have already written a module for, but I've yet to find either.
For node, install the node-fetch module from npm.
Note that browser fetch can't talk directly to your filesystem -- it requires an HTTP server on the other side of the request. Node can talk to your filesystem, as well as making HTTP calls to servers.
It sounds like as of now, there is no perfect solution here. The fetch API is the most promising, but only once Node implements it natively. (Node 18 and later do ship a global fetch, which narrows this gap.)
In the meantime I've settled for a simple solution that works seamlessly with minimal dependencies, requiring only a little magic with my ExpressJS server paths to point the served web instance to a different version of utils.js.
Note: To use the ES-style import syntax for includes in NodeJS (v14+) you must set "type":"module" in your package.json. See https://nodejs.org/api/esm.html#esm_package_json_type_field for details. This is necessary for true shared code bases.
Module using it (NodeJS + browser running the same file):

    import * as utils from "../utils.js";
    ...
    var data = await utils.loadJSON(filename);
    ...
utils.js for the browser:

    async function loadJSON(fn) {
        return $.getJSON(fn); // Only because I'm using another jQuery-dependent lib
        /* Or natively, something like:
        let response = await fetch(fn);
        return response.json();
        */
    }
    export { loadJSON };
utils.js for NodeJS:

    import * as fs from 'fs';
    async function loadJSON(fn) {
        return JSON.parse(await fs.promises.readFile(fn));
    }
    export { loadJSON };
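If a single shared file is acceptable, a runtime check can fold both versions into one module. This is a sketch under the same assumptions as above (Node 14+ with "type": "module" set); the dynamic import of fs keeps the browser from ever evaluating the Node branch:

```javascript
// utils.js shared by NodeJS and the browser.
// fs is imported lazily so browsers never touch it.
const isNode =
  typeof process !== "undefined" && process.versions && process.versions.node;

async function loadJSON(fn) {
  if (isNode) {
    const fs = await import("fs");
    return JSON.parse(await fs.promises.readFile(fn, "utf8"));
  }
  // In the browser, fn must be a URL reachable over HTTP
  const response = await fetch(fn);
  return response.json();
}

export { loadJSON };
```

Bundlers may still warn about the conditional import of fs, so the two-file approach above remains the more robust option when a build step is involved.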

aws-sdk: use only the needed package, e.g. cloudwatchlogs, and config (with credentials)

I want to use the cloudwatchlog client from AWS SDK (JS) and also set the credentials. So that I am not including the whole AWS SDK bundle inside my Application, because it is very large and slows down the page. Is there a way to configure the credentials and then only use the needed client from the AWS SDK?
So far I have tried this, but it doesn't work with the config; TypeScript says the update method doesn't exist on Config:
    import {Config} from 'aws-sdk/lib/core';
    import {CloudWatchLogs} from 'aws-sdk';
Luckily I have just done this the other day to shrink down my bundle size as well.
First I would recommend getting the aws-sdk global configuration object with:

    let AWS = require('aws-sdk/global');

To get just the individual CloudWatchLogs client, require it directly:

    let CloudWatchLogs = require('aws-sdk/clients/cloudwatchlogs');

After this you can configure the credentials like you would before, and to create a new CloudWatchLogs client you can do: let cloudwatch = new CloudWatchLogs();
Hopefully this helps some.
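Putting the two pieces together, a minimal sketch might look like this (aws-sdk v2; the region and credential values are placeholders):

```javascript
// Load only the global config and the one client you need,
// instead of the entire aws-sdk bundle.
const AWS = require('aws-sdk/global');
const CloudWatchLogs = require('aws-sdk/clients/cloudwatchlogs');

AWS.config.update({
  region: 'us-east-1', // placeholder
  credentials: new AWS.Credentials('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY'), // placeholders
});

const cloudwatchlogs = new CloudWatchLogs();
cloudwatchlogs.describeLogGroups({}, (err, data) => {
  if (err) console.error(err);
  else console.log(data.logGroups);
});
```

Because the client is required from aws-sdk/clients/, a bundler can tree-shake away everything else in the SDK, which is what shrinks the bundle.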

Getting GitHub Repo Information

Is there a way to get simple repository information from GitHub (such as name, date uploaded, description, etc) with javascript and get it in a JSON file?
I am trying to get my repo information to import into my portfolio site.
Thanks in advance.
Try using the GitHub API; to get repository information, use GET /repos/:owner/:repo
As for javascript, you can use https://github.com/octokit/rest.js
After correctly importing the library (on the server side, install the package via npm and require the module; on the client side, download the browser library and insert a script tag in your HTML file), you can try something like this:
    // const Octokit = require('@octokit/rest') // server only
    const octokit = new Octokit();
    const result = await octokit.repos.get({ owner: 'owner', repo: 'repo' });
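Since the GitHub REST API just serves JSON over HTTPS, a library isn't strictly required; a bare fetch works in the browser (and in Node 18+). This is a sketch where "owner" and "repo" are placeholders for your own account and repository names:

```javascript
// Fetch basic repository info (name, description, creation date, etc.)
// straight from the GitHub REST API, no library needed.
async function getRepoInfo(owner, repo) {
  const response = await fetch(`https://api.github.com/repos/${owner}/${repo}`);
  if (!response.ok) throw new Error(`GitHub API error: ${response.status}`);
  return response.json();
}

// Usage (field names come from the GET /repos/:owner/:repo response):
// const info = await getRepoInfo("owner", "repo");
// console.log(info.name, info.description, info.created_at);
```

Note that unauthenticated requests to the GitHub API are rate-limited, which is usually fine for a portfolio site but worth knowing about.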

loopback upload file by storage component

I am trying to build a server with LoopBack that can upload and download files. While reading the docs I followed the steps, but I couldn't understand some of the descriptions in the Storage component REST API, in particular "Arguments: Container specification in POST body."
Uploading and downloading then fail. I'm not familiar with JavaScript and have been learning Node.js for only one week.
http://loopback.io/doc/en/lb2/Storage-component.html
Follow these docs. Here, a container is the folder name. After following the steps in the docs, create a folder inside server called storage and create a model named container using the slc CLI. Then check the explorer; you can see the file-handling routes inside the container section of the explorer.
Use the code below to configure the model inside datasources.json:

    "imagestorage": {
        "name": "imagestorage",
        "connector": "loopback-component-storage",
        "provider": "filesystem",
        "root": "./server/storage"
    }
Don't forget to specify your own location in root.
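For orientation, here is a sketch of how a client could hit the routes the storage component exposes once the container model is mounted. The base URL and route shapes are assumptions based on the LoopBack 2 storage docs; adjust them to your server:

```javascript
// Assumed routes for a model named "container" on a local LoopBack server:
//   POST /api/containers                        -> create a container (folder)
//   POST /api/containers/:name/upload           -> upload a file into it
//   GET  /api/containers/:name/download/:file   -> download it back
const base = "http://localhost:3000/api/containers";

// Create a container, i.e. a folder under ./server/storage
async function createContainer(name) {
  const res = await fetch(base, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name }),
  });
  return res.json();
}

// Upload a File/Blob into an existing container via multipart form data
async function uploadFile(container, file) {
  const form = new FormData();
  form.append("file", file);
  const res = await fetch(`${base}/${container}/upload`, {
    method: "POST",
    body: form,
  });
  return res.json();
}
```

The "Container specification in POST body" from the docs is just the JSON body shown in createContainer: an object naming the container to create.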

How can I read files from a subdirectory in a deployed Meteor app?

I am currently making a Meteor app and am having trouble reading files from the private subdirectory. I have been following a couple different tutorials and managed to get it to work flawlessly when I run the meteor app locally. This question (Find absolute base path of the project directory) helped me come up with using process.env.PWD to access the root directory, and from there I use .join() to access the private folder and the pertinent file inside. However, when I deployed this code, the website crashes on startup. I am very confident that it is an issue with process.env.PWD, so I am wondering what the proper method of getting Meteor's root directory on a deployed app is.
    // code to run on the server at startup
    var path = Npm.require('path');
    // I also tried using the line below (recommended in another Stack Overflow answer) to no avail
    // var meteor_root = Npm.require('fs').realpathSync(process.cwd() + '/../');
    var apnagent = Meteor.require("apnagent"),
        agent = new apnagent.Agent();
    agent.set('cert file', path.join(process.env.PWD, "private", "certificate-file.pem"));
    agent.set('key file', path.join(process.env.PWD, "private", "devkey-file.pem"));
In development mode the file structure is different than after bundling, so you should never rely on it. In particular, you should not access your files directly with path methods the way you're doing.
Loading private assets is described in this section of Meteor's documentation. It mostly boils down to this method:

    Assets.getBinary("certificate-file.pem");

and its getText counterpart.
As for configuring the APN agent, see this section of its documentation. You don't have to configure the agent by passing a file path as the cert file param. Instead, you may pass the raw data returned by the Assets methods directly as cert. The same holds for the key file / key pair and other settings.
As an alternative, you would need to submit your files independently to the production server to a different folder than your Meteor app and use their global path. This, however, would not be possible for cloud providers like Heroku, so it's better to use assets in the intended way.
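Applied to the code from the question, the startup block would become something like the sketch below. It assumes the .pem files live in the app's private/ directory and that the apnagent version in use accepts raw cert/key data, as its docs describe:

```javascript
// Server startup code: load the certificates through Meteor's Assets API
// instead of resolving filesystem paths, so it survives bundling.
var apnagent = Meteor.require("apnagent"),
    agent = new apnagent.Agent();

// Pass raw asset data as "cert"/"key" rather than paths as "cert file"/"key file"
agent.set("cert", Assets.getBinary("certificate-file.pem"));
agent.set("key", Assets.getBinary("devkey-file.pem"));
```

Because no path is ever computed, this works identically in development and in the deployed bundle.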