I'm trying to write a method which takes files from a path and uploads them to a GitHub repo. The files have to remain intact and separate (can't zip them). This is what I've got so far:
addFiles(branch) {
  const filePath = this.filePath
  fs.readdirSync(filePath).forEach((file, index) => {
    if (file.match('.txt')) {
      const fileData = fs.readFileSync(path.resolve(filePath, file));
      this.octokit.repos.createOrUpdateFile({
        owner,
        repo,
        path: `test/${file}`,
        branch,
        message: `Commit ${index}`,
        content: encode(fileData)
      })
      .catch(err => console.log(err))
    }
  })
}
This works up to a point, but it only uploads one file and then fails with the following error:
PUT /path/to/repo/contents/test/example.txt - 201 in 1224ms
PUT /path/to/repo/contents/test/example-2.txt - 409 in 1228ms
{ HttpError: is at 90db2dadca8d061e77ca06fe7196197ada6f6687 but expected b7933883cbed4ff91cc2762e24c183b797db0b74
at response.text.then.message (/project/node_modules/@octokit/request/dist-node/index.js:66:23)
Even if this worked, it still wouldn't be ideal, as this project is likely to scale to the point where hundreds of files are being uploaded at once. Is there a way to just upload a directory, or to upload multiple files per commit? Failing that, can anyone solve my error?
I worked it out in the end. createFile isn't the right way to go; it should be done with createTree: https://octokit.github.io/rest.js/#octokit-routes-git-create-tree
It wasn't as simple as creating just the tree, though: you also have to create a commit and then update the reference. I followed this GitHub issue for guidance:
https://github.com/octokit/rest.js/issues/1308
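For anyone landing here, a rough sketch of that flow (blob → tree → commit → ref) looks something like the code below. It reuses the owner, repo, and this.filePath from the question, and the exact method names can differ slightly between octokit versions, so treat it as an outline rather than a drop-in solution:
async addFilesInOneCommit(branch) {
  const dir = this.filePath;
  const files = fs.readdirSync(dir).filter(f => f.endsWith('.txt'));

  // Find the current tip of the branch; it becomes the parent of the new commit
  const { data: ref } = await this.octokit.git.getRef({ owner, repo, ref: `heads/${branch}` });
  const { data: parentCommit } = await this.octokit.git.getCommit({ owner, repo, commit_sha: ref.object.sha });

  // One blob per file, all collected into a single new tree
  const treeEntries = await Promise.all(files.map(async file => {
    const content = fs.readFileSync(path.resolve(dir, file), 'utf8');
    const { data: blob } = await this.octokit.git.createBlob({ owner, repo, content, encoding: 'utf-8' });
    return { path: `test/${file}`, mode: '100644', type: 'blob', sha: blob.sha };
  }));

  const { data: newTree } = await this.octokit.git.createTree({
    owner, repo,
    base_tree: parentCommit.tree.sha,
    tree: treeEntries
  });

  // One commit covering every file, then move the branch ref to it
  const { data: newCommit } = await this.octokit.git.createCommit({
    owner, repo,
    message: `Add ${files.length} files`,
    tree: newTree.sha,
    parents: [parentCommit.sha]
  });

  await this.octokit.git.updateRef({ owner, repo, ref: `heads/${branch}`, sha: newCommit.sha });
}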
I'm working on an automated testing project using PuppeteerJS in headless Chrome, and I'm trying to integrate the existing screenshot functionality with the AWS SDK to upload images to an AWS S3 bucket on test failure.
The problem I'm having is that the subdirectories in the screenshots folder and the image file names are generated randomly in another file, based on the current date and test environment, each time a test runs. The format of the generated directories/files is "screenshots/year/month/day/randomname.png".
The next step in the test is that, after the screenshots are generated, the folder containing the newly created images should be uploaded to AWS. I've tried to achieve this using a glob to get every subdirectory and every file with a .png extension, like "screenshots/**/**/**/*.png", but I get a "no such file or directory" error. The folder and file names will be different every time the tests run.
I've just started using AWS and I haven't been able to find a specific answer to my problem while researching.
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { s3Client } from "../libs/s3Client.js";
import path from "path";
import fs from "fs";

const file = "../../screenshots/**/**/**/*.png";
const fileStream = fs.createReadStream(file);

// Set the parameters
export const uploadParams = {
  Bucket: "bucket-name",
  Key: path.basename(file),
  // Add the required 'Body' parameter
  Body: fileStream,
};

// Upload file to specified bucket.
export const run = async () => {
  try {
    const data = await s3Client.send(new PutObjectCommand(uploadParams));
    console.log("Success", data);
    return data; // For unit tests.
  } catch (err) {
    console.log("Error", err);
  }
};
run();
Worked this out with the help of Jarmod. I needed to use the Node.js fs module to get the file paths recursively, which returns strings that can be passed into the AWS fileStream variable so each file can be uploaded to AWS. Jarmod shared the WebMound article, and I found the Coder Rocket Fuel article helpful as well.
https://www.webmound.com/nodejs-get-files-in-directories-recursively/
https://coderrocketfuel.com/article/recursively-list-all-the-files-in-a-directory-using-node-js
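For completeness, a minimal sketch of that approach is below. It reuses the s3Client helper and the "bucket-name" placeholder from the question, and walks the screenshots directory with plain fs instead of a glob; names and paths are illustrative only:
import { PutObjectCommand } from "@aws-sdk/client-s3";
import { s3Client } from "../libs/s3Client.js";
import path from "path";
import fs from "fs";

// Recursively collect the full paths of every .png under a directory
const listPngs = (dir) =>
  fs.readdirSync(dir, { withFileTypes: true }).flatMap((entry) => {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) return listPngs(full);
    return entry.name.endsWith(".png") ? [full] : [];
  });

export const run = async (root = "screenshots") => {
  for (const file of listPngs(root)) {
    const uploadParams = {
      Bucket: "bucket-name", // placeholder
      Key: path.relative(root, file).split(path.sep).join("/"), // keeps the year/month/day structure
      Body: fs.createReadStream(file),
    };
    const data = await s3Client.send(new PutObjectCommand(uploadParams));
    console.log("Uploaded", uploadParams.Key, data.ETag);
  }
};

run();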
I'm packing some files that I need into my Lambda package. I've used some examples floating around to nearly get it working.
I'm able to verify the path of a file OK
const deviceCert = path.resolve(certType + "-deviceCert.key");
which logs out to
"message": "Resolved path to TEST-deviceCert.key: /var/task/TEST-deviceCert.key"
when I attempt to read the file using
fs.readFile(deviceCert, (err, data) => {
  if (err) {
    log.error(`Verify deviceCert failure: ${err}`);
    responseBody = Helper.buildCORSResponse(502, JSON.stringify({ message: "Unable to locate file required" }));
    return callback(null, responseBody);
  }
});
I get the following error
Error: ENOENT: no such file or directory, open '/var/task/TEST-deviceCert.key'"
If I can verify the path, then why can't I read the file? Any ideas?
Copied from the node.js path.resolve() API documentation:
The path.resolve() method resolves a sequence of paths or path segments into an absolute path.
In other words, resolve concatenates a sequence of strings into one string, formatted as an absolute path. However, it does not check whether or not there is a file at this location. You can use either fs.stat() or fs.access() to verify the presence and access of the file.
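As a rough sketch (the file name here is just a placeholder), the check could look like this:
const fs = require('fs');
const path = require('path');

// path.resolve() builds the string whether or not the file exists
const deviceCert = path.resolve('TEST-deviceCert.key');

// fs.access() actually checks that the file is there and readable
fs.access(deviceCert, fs.constants.R_OK, (err) => {
  if (err) {
    console.error(`File is missing or unreadable: ${deviceCert}`);
    return;
  }
  fs.readFile(deviceCert, (readErr, data) => {
    if (readErr) throw readErr;
    console.log(`Read ${data.length} bytes`);
  });
});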
I eventually confirmed that Serverless was packaging the files I needed.
Using fs.readdir, I was able to debug the issue and find the path that the packaging process was creating in the Lambda package:
/var/task/src//Certs/
Hope this helps someone in the future!!
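If it helps anyone doing the same debugging, a quick recursive listing from inside the handler will show exactly where the packager put everything (the /var/task root is standard for Lambda; the rest is illustrative):
const fs = require('fs');
const path = require('path');

// Print every file in the deployed package so the real layout is visible in the logs
const walk = (dir) => {
  for (const name of fs.readdirSync(dir)) {
    if (name === 'node_modules') continue; // skip the noisy bits
    const full = path.join(dir, name);
    console.log(full);
    if (fs.statSync(full).isDirectory()) walk(full);
  }
};

walk('/var/task');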
Node.js Alexa Task Issue
I'm currently coding a Node.js Alexa Task via AWS Lambda, and I have been trying to code a function that receives information from the OpenWeather API and parses it into a variable called weather. The relevant code is as follows:
var request = require('request');
var weather = "";

function isBadWeather(location) {
  var endpoint = "http://api.openweathermap.org/data/2.5/weather?q=" + location + "&APPID=205283d9c9211b776d3580d5de5d6338";
  var body = "";
  request(endpoint, function (error, response, body) {
    if (!error && response.statusCode == 200) {
      body = JSON.parse(body);
      weather = body.weather[0].id;
    }
  });
}
function testWeather() {
  setTimeout(function() {
    if (weather >= 200 && weather < 800)
      weather = true;
    else
      weather = false;
    console.log(weather);
    generateResponse(buildSpeechletResponse(weather, true), {});
  }, 500);
}
I ran this snippet countless times in Cloud9 and other IDEs, and it seems to be working flawlessly. However, when I zip it into a package and upload it to AWS Lambda, I get the following error:
{
  "errorMessage": "Cannot find module '/var/task/index'",
  "errorType": "Error",
  "stackTrace": [
    "Function.Module._load (module.js:276:25)",
    "Module.require (module.js:353:17)",
    "require (internal/module.js:12:17)"
  ]
}
I installed module-js, request, and many other Node modules that should make this code run, but nothing seems to fix this issue. Here is my directory, just in case:
- planyr.zip
  - index.js
  - node_modules
  - package.json
Does anyone know what the issue could be?
Fixed it! My issue was that I tried to zip the file using my Mac's built-in compression function in Finder.
If you're a Mac user, like me, you should run the following command in the terminal while you are in the root directory of your project (the folder containing your index.js, node_modules, etc.):
zip -r ../yourfilename.zip *
For Windows:
Compress-Archive -LiteralPath node_modules, index.js -DestinationPath yourfilename.zip
Update to the accepted answer: when this error occurs, it means your zip file is not in the form AWS requires.
If you double-click the zip, you will find a folder inside it that contains your code files, but Lambda wants the code files to be at the top level, so that double-clicking the zip shows them directly.
To achieve this:
open terminal
cd your-lambda-folder
zip -r index.zip *
Then, upload index.zip to AWS Lambda.
Check that the file name and the handler name are the same:
That means the zip file has a bundle.js file that exports a handler function:
exports.handler = (event, context, callback) => {//...}
In my case it was because I had the handler file in an inner src directory.
I had to change the 'Handler' property within Lambda from:
index.handler
to
src/index.handler
This is probably a permissions issue with files inside your deployment zip.
Try running chmod 777 on your files before packaging them in a zip file.
In my case the archive contained a folder "src" with the index.js file, so I had to set the handler to "src/index.handler".
In my case I had to replace
exports.handler = function eventHandler (event, context) {
with
exports.handler = function (event, context, callback) {
I got this error when I was using lambci/lambda:nodejs8.10 on Windows.
I had tried all of the solutions listed above, but none of them helped with my issue (even though the error stack looked the same as in the question).
Here is my simple solution:
Use the --entrypoint flag to run a container and find out whether the file is actually mounted into the container. It turned out I had a shared drive issue with my Docker Desktop.
I had switched my Docker daemon the day before, and everything worked fine except for this problem.
Anyway, remounting my drive in Docker Desktop fixed it; you can do that with the docker command or just open the Docker Desktop settings to apply it.
In my case this was caused by Node running out of memory. I fixed that by adding --memory-size 1500 to my aws lambda create-function ... command.
I have been looking around for a way to digest and export SAS files using NodeJS. I guess it can be done by means of:
C++ extension to NodeJS
Some sort of JavaScript framework
I couldn't find anything ready-made on the Internet, and I haven't tried to cook one up myself. I am not considering other options such as getting SAS to export CSVs. I assume that SAS itself is not available to Node.js.
Does anyone know of a ready-made way to make Node.js work with xport and sas7bdat files?
Regards,
Vasilij
I just made this for sas7bdat files: https://github.com/dumbmatter/sas7bdat-js
It's a pure JS module for reading sas7bdat files in NodeJS. Install with:
npm install sas7bdat
Then load the module:
const SAS7BDAT = require('sas7bdat');
SAS7BDAT.createReadStream returns a stream that emits individual rows, one at a time:
const stream = SAS7BDAT.createReadStream('test.sas7bdat');
stream.on('data', row => console.log(row));
stream.on('end', () => console.log('Done!'));
stream.on('error', err => console.log(err));
SAS7BDAT.parse returns a promise that resolves to an array containing all the rows:
SAS7BDAT.parse('test.sas7bdat')
.then(rows => console.log(rows))
.catch(err => console.log(err));
As my title explains, I am getting the following error:
{
  "errorMessage": "Cannot find module 'index'",
  "errorType": "Error",
  "stackTrace": [
    "Function.Module._resolveFilename (module.js:338:15)",
    "Function.Module._load (module.js:280:25)",
    "Module.require (module.js:364:17)",
    "require (module.js:380:17)"
  ]
}
I have tried both solutions provided in creating-a-lambda-function-in-aws-from-zip-file and simple-node-js-example-in-aws-lambda
My config currently looks like:
and my file structure is:
and my index.js handler function looks like:
exports.handler = function(event, context) {
What else could be causing this issue, aside from what was stated in those two answers? I have tried both solutions, and I have also allocated more memory to the function just in case that's why it couldn't run.
EDIT -
For the sake of trying, I created an even simpler version of my original code and it looked like this:
var Q = require('q');
var AWS = require('aws-sdk');
var validate = require('lambduh-validate');
var Lambda = new AWS.Lambda();
var S3 = new AWS.S3();

theHandler = function (event, context) {
  console.log =('nothing');
}

exports.handler = theHandler();
And yet it still does not work, failing with the same error?
Try zipping and uploading the contents of the folder lambda-create-timelapse, not the folder itself.
If this was unclear for anyone else, here are the steps:
Step 1
Navigate to your project's folder and open it so that you are inside the folder.
Step 2
Select all of the files you want to upload to Lambda.
Step 3
Right-click and compress the files you have selected.
This will give you a .zip file, which is the file you need to upload to Lambda.
There are a lot of ways to automate this, but this is the manual procedure.
I ran into this problem a few times myself, and this indeed has to do with zipping the folder instead of just the contents like you're supposed to.
For those working from the terminal...
While INSIDE of the directory where the .js files are sitting, run the following:
zip -r ../zipname.zip *
The * instructs the client to zip all the contents of this folder; ../zipname.zip tells it to name the file zipname.zip and place it just outside the current directory.
I had the same problem some time ago; I reformatted the code.
function lambdafunc1(event, context) {
  ...
  ...
  ...
}

exports.handler = lambdafunc1
The problem occurs when the handler cannot be located at the first level of the zip. So any time you see such an error, make sure that the file is at the first level of the exploded folder.
To fix this, zip the files, not the folder that contains them.
A correct Lambda function declaration can look like this:
var func = function(event, context) {
  ...
};

exports.handler = func;
You may also have other syntax errors that prevent the index.js file from being run properly. Try running your code locally from another file, requiring index.js as a module of your own.
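For instance, a minimal local check along these lines (file names are only illustrative) will surface syntax errors right away:
// test-local.js - run with: node test-local.js
const { handler } = require('./index'); // throws here if index.js has a syntax error

handler({ test: true }, {}, (err, result) => {
  if (err) return console.error('Handler error:', err);
  console.log('Handler result:', result);
});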
Make sure the following code is added in your handler:
exports.handler = (event, context, callback) => {
  ...
}
Another reason this can occur is if you don't do an npm install in the folder before packaging and deploying.