AWS Lambda fs.readFile issue - JavaScript

I'm packaging some files that I need in my Lambda package. I've used some examples floating around to nearly get it working.
I'm able to verify the path of a file OK
const deviceCert = path.resolve(certType + "-deviceCert.key");
which logs out to
"message": "Resolved path to TEST-deviceCert.key: /var/task/TEST-deviceCert.key"
when I attempt to read the file using
fs.readFile(deviceCert, (err, data) => {
  if (err) {
    log.error(`Verify deviceCert failure: ${err}`);
    responseBody = Helper.buildCORSResponse(502, JSON.stringify({ message: "Unable to locate file required" }));
    return callback(null, responseBody);
  }
});
I get the following error
Error: ENOENT: no such file or directory, open '/var/task/TEST-deviceCert.key'
If I can verify the path, then why can't I read it?
Any ideas??

Copied from the node.js path.resolve() API documentation:
The path.resolve() method resolves a sequence of paths or path segments into an absolute path.
In other words, resolve concatenates a sequence of strings into one string, formatted as an absolute path. However, it does not check whether or not there is a file at this location. You can use either fs.stat() or fs.access() to verify the presence and access of the file.

I eventually confirmed that Serverless was packaging the files I needed.
Using fs.readdir I was able to debug the issue and find the path that the packaging process was creating in the Lambda package:
/var/task/src//Certs/
Hope this helps someone in the future!!

Related

Node converting relative path to absolute, not desired

I have the following issue: in the function sendDownload(downlodable, objPathArray, responseObject) I am receiving the parameter objPathArray like this:
[{"pathToFile":"./REPORTS/portfolio/onDemand/Portfolio_report_HP_17.08.2021.xlsx","file":"Portfolio_report_HP_17.08.2021.xlsx"}]
The function:
function sendDownload(downlodable, objPathArray, responseObject) {
  if (downlodable) {
    responseObject.download(objPathArray[0].pathToFile, objPathArray[0].file);
    console.log('HERE ' + JSON.stringify(objPathArray));
  }
}
but when I call the function in my app I receive this error: Error: ENOENT: no such file or directory, stat 'C:\Work\reporting-server\REPORTS\portfolio\onDemand\Portfolio_report_HP_17.08.2021.xlsx', and that's because Node is converting my relative path for the file into an absolute one.
What is the best option for me to solve this issue?
Thank you~
Node.js must resolve the path to find the file.
If you give it a relative path, then it has to start somewhere, and that somewhere is the current working directory.
If you want it to read the file from somewhere else, resolve it to an absolute path yourself.
Generally speaking, the tools for this are path.resolve and something to tell it where to start from, such as a configuration file, an environment variable, or the directory the module is in.

Use Octokit or the GitHub Rest API to upload multiple files

I'm trying to write a method which takes files from a path and uploads them to a GitHub repo. The files have to remain intact and separate (can't zip them). This is what I've got so far:
addFiles(branch) {
  const filePath = this.filePath;
  fs.readdirSync(filePath).forEach((file, index) => {
    if (file.match('.txt')) {
      const fileData = fs.readFileSync(path.resolve(filePath, file));
      this.octokit.repos.createOrUpdateFile({
        owner,
        repo,
        path: `test/${file}`,
        branch,
        message: `Commit ${index}`,
        content: encode(fileData)
      })
        .catch(err => console.log(err));
    }
  });
}
This works to a point but it will only upload one file and then fails with the following error:
PUT /path/to/repo/contents/test/example.txt - 201 in 1224ms
PUT /path/to/repo/contents/test/example-2.txt - 409 in 1228ms
{ HttpError: is at 90db2dadca8d061e77ca06fe7196197ada6f6687 but expected b7933883cbed4ff91cc2762e24c183b797db0b74
at response.text.then.message (/project/node_modules/@octokit/request/dist-node/index.js:66:23)
Even if this worked fine, though, it still wouldn't be ideal, as this project is likely to scale to the point where hundreds of files are being uploaded at once. Is there a way to just upload a directory, or upload multiple files per commit? Failing that, can anyone solve my error?
I worked it out in the end. createOrUpdateFile isn't the right way to go; it should be done with createTree: https://octokit.github.io/rest.js/#octokit-routes-git-create-tree
It wasn't as simple as creating just the tree though, you also have to create a commit and then update the reference, I followed this GitHub issue for guidance:
https://github.com/octokit/rest.js/issues/1308
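For reference, a sketch of that tree → commit → ref flow, assuming @octokit/rest's `git` namespace; the `octokit` instance, `owner`, `repo`, and `branch` are assumed to exist, and error handling is omitted:

```javascript
// Pure helper: turn { path: content } pairs into Git tree entries.
function buildTreeEntries(files) {
  return Object.entries(files).map(([filePath, content]) => ({
    path: filePath,
    mode: '100644', // regular file
    type: 'blob',
    content,        // inline text content; large/binary files need createBlob
  }));
}

// One commit containing every file, assuming the Git Data API via @octokit/rest.
async function commitFiles(octokit, { owner, repo, branch, message, files }) {
  // 1. Find the current tip of the branch and its tree.
  const { data: ref } = await octokit.git.getRef({ owner, repo, ref: `heads/${branch}` });
  const parentSha = ref.object.sha;
  const { data: parent } = await octokit.git.getCommit({ owner, repo, commit_sha: parentSha });

  // 2. Create one tree containing every file.
  const { data: tree } = await octokit.git.createTree({
    owner, repo, base_tree: parent.tree.sha, tree: buildTreeEntries(files),
  });

  // 3. Create a single commit pointing at that tree.
  const { data: commit } = await octokit.git.createCommit({
    owner, repo, message, tree: tree.sha, parents: [parentSha],
  });

  // 4. Move the branch reference to the new commit.
  await octokit.git.updateRef({ owner, repo, ref: `heads/${branch}`, sha: commit.sha });
}
```

Because all files land in one tree and one commit, the 409 conflict from racing createOrUpdateFile calls goes away.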

How to read a symlink in Node.js

I want to read a symlink, and get the details of the link itself, not the contents of the linked file. How do I do that in Node, in a cross-platform way?
I can detect symlinks easily using lstat, no problem. Once I know the path of the file, and that it is a symlink though, how can I read it? fs.readFile always reads the target file, or throws an error for reading a directory for links to directories.
There is a fs.constants.O_SYMLINK constant, which in theory solves this on OSX, but it seems to be undefined on both Ubuntu & Windows 10.
If you have determined that the file is a symlink try this:
fs.readlink("./mysimlink", function (err, linkString) {
  // .. do some error handling here ..
  console.log(linkString);
});
Confirmed as working on Linux.
You could then use fs.realpath() to turn it into a full path. Be aware, though, that linkString can be just a filename or a relative path as well as a fully qualified path, so you may have to take the symlink's own directory and prefix it to linkString before using fs.realpath() on the result.
I've just faced the same issue: sometimes fs.readlink returns a relative path, sometimes it returns an absolute path.
(proper error handling not implemented to keep things simple)
const fs = require('fs');
const pathPckg = require('path');

async function getTarLinkOfSymLink(path){
  return new Promise((resolve, reject) => {
    fs.readlink(path, (err, tarPath) => {
      if (err) {
        console.log(err.message);
        return resolve('');
      }
      const baseSrcPath = pathPckg.dirname(path);
      return resolve(pathPckg.resolve(baseSrcPath, tarPath));
    });
  });
}
// usage (inside an async function):
const path = '/example/symbolic/link/path';
const tarPath = await getTarLinkOfSymLink(path);
The code works if the symbolic link is either a file or a directory/folder - tested on Linux

AWS Lambda Error: "Cannot find module '/var/task/index'"

Node.js Alexa Task Issue
I'm currently coding a Node.js Alexa Task via AWS Lambda, and I have been trying to code a function that receives information from the OpenWeather API and parses it into a variable called weather. The relevant code is as follows:
var request = require('request');
var weather = "";

function isBadWeather(location) {
  var endpoint = "http://api.openweathermap.org/data/2.5/weather?q=" + location + "&APPID=205283d9c9211b776d3580d5de5d6338";
  var body = "";
  request(endpoint, function (error, response, body) {
    if (!error && response.statusCode == 200) {
      body = JSON.parse(body);
      weather = body.weather[0].id;
    }
  });
}

function testWeather() {
  setTimeout(function() {
    if (weather >= 200 && weather < 800)
      weather = true;
    else
      weather = false;
    console.log(weather);
    generateResponse(buildSpeechletResponse(weather, true), {});
  }, 500);
}
I ran this snippet countless times in Cloud9 and other IDEs, and it seems to be working flawlessly. However, when I zip it into a package and upload it to AWS Lambda, I get the following error:
{
  "errorMessage": "Cannot find module '/var/task/index'",
  "errorType": "Error",
  "stackTrace": [
    "Function.Module._load (module.js:276:25)",
    "Module.require (module.js:353:17)",
    "require (internal/module.js:12:17)"
  ]
}
I installed module-js, request, and many other Node modules that should make this code run, but nothing seems to fix this issue. Here is my directory, just in case:
- planyr.zip
  - index.js
  - node_modules
  - package.json
Does anyone know what the issue could be?
Fixed it! My issue was that I tried to zip the file using my Mac's built-in compression function in Finder.
If you're a Mac user, like me, you should run the following script in terminal when you are in the root directory of your project (folder containing your index.js, node_modules, etc. files).
zip -r ../yourfilename.zip *
For Windows:
Compress-Archive -LiteralPath node_modules, index.js -DestinationPath yourfilename.zip
Update to the accepted answer: when this error occurs, it means your zip file is not in the form AWS requires.
If you double-click the zip, you should see your code files directly; if instead you see a folder that contains them, Lambda won't find the handler.
To achieve this:
open terminal
cd your-lambda-folder
zip -r index.zip *
Then, upload index.zip to AWS Lambda.
Check that the file name and the handler name are the same:
That means the zip file has a bundle.js file that exports a handler function:
exports.handler = (event, context, callback) => {//...}
In my case it was because I had the handler file in inner src directory.
I had to change the 'Handler' property within Lambda from:
index.handler
to
src/index.handler
This is probably a permissions issue with files inside your deployment zip.
Try chmod 777 your files before packaging them in a zip file.
In my case the archive contained a folder "src" with index.js file, so I had to put to the handler: "src/index.handler"
In my case I had to replace
exports.handler = function eventHandler (event, context) {
with
exports.handler = function (event, context, callback) {
I got this error when I was using lambci/lambda:nodejs8.10 on Windows.
I'd tried all of the solutions listed above, but none of them helped (even though the error stack looked the same as in the question).
Here is my simple solution:
Use the --entrypoint flag to run a container and check whether the file is actually mounted into the container. It turned out I had a shared-drive issue with my Docker Desktop.
I had switched my Docker daemon the day before, and everything else worked fine except this.
Remounting my drive in Docker Desktop fixed it; you can either use the docker command or apply it through the Docker Desktop settings.
In my case this was caused by Node running out of memory. I fixed that by adding --memory-size 1500 to my aws lambda create-function ... command.

Node.js fs.ReadFile always returns error

I want Node.js to read form.html when the domain name is localhost:3000/form, but for some reason, it always gives me an error 500 page.
The content parameter in the callback function of fs.readFile gets undefined, even though the path of the file is correct.
app.get('/form', function(req, res){
  fs.readFile('/form.html', function(error, content){
    if (error) {
      // This always gets executed... I don't know why.
      // content = undefined.
      res.writeHead(500);
      res.end();
    }
    else {
      res.writeHead(200, { 'content-type' : 'text/html' });
      processFile(content);
      res.end(content, 'utf-8');
    }
  });
});
added error message:
{ [Error: ENOENT, open 'C:\form.html'] errno: 34, code: 'ENOENT',
path: 'C:\form.html' }
Do I have to specify the full path to the file...?
After I removed the / I get this path:
C:\Users\deno_000\form.html
My files are all in the same directory, and on the left side of my editor you can see it:
http://i59.tinypic.com/2eqdp2o.jpg
/ in most file systems = the root directory.
Either remove the / or add a dot in front, i.e. form.html or ./form.html.
. is the current directory
.. is the parent directory
./form.html = [current directory]/form.html
The error is similar to file not found.
The HTML file would need to be in the same folder as the .js Node file for this to work. If you have it in another path, use that path.
Note you can also use the path module. From the Node.js documentation:
This module contains utilities for handling and transforming file paths. Almost all these methods perform only string transformations. The file system is not consulted to check whether paths are valid.
Use require('path') to use this module.
http://nodejs.org/api/path.html
Had the same problem. Using __dirname fixed it.
