Creating a lambda function in AWS from zip file - javascript

I am trying to create a simple lambda function, and I'm running into an error.
My code is basically
console.log('Loading function');

exports.handler = function(event, context) {
    console.log('value1 =', event.key1);
    console.log('value2 =', event.key2);
    console.log('value3 =', event.key3);
    context.succeed(event.key1); // Echo back the first key value
    // context.fail('Something went wrong');
};
in a helloworld.js file. I zip that up and upload it in the 'Create a Lambda function' section of the console, and I keep getting this error:
{
    "errorMessage": "Cannot find module 'index'",
    "errorType": "Error",
    "stackTrace": [
        "Function.Module._resolveFilename (module.js:338:15)",
        "Function.Module._load (module.js:280:25)",
        "Module.require (module.js:364:17)",
        "require (module.js:380:17)"
    ]
}
Does anyone have any ideas?

The name of your file needs to match the module name in the Handler configuration. In this case, your Handler should be set to helloworld.handler, where helloworld is the file that would be require()'d and handler is the exported function. Then it should work with the same zip file.
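If you manage the function with the AWS CLI instead of the console, the same fix looks like this (a sketch; your_lambda_name is a placeholder for your function's name):
$ aws lambda update-function-configuration --function-name your_lambda_name --handler helloworld.handler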

Make sure your index.js is in the root of the zipfile and not in a subdirectory.
In my case the module name matched the file name and the exported handler; the real problem was macOS and the zip program, which creates a folder inside the zip file, so when the archive is uncompressed by the AWS Lambda engine, index.js ends up in a subdirectory.
Using Finder
Don't right-click and compress the directory. Instead, select the individual files (index.js, package.json and the node_modules directory) and right-click to compress; you will end up with a file called Archive.zip in the same directory. The name of the zip file is not going to be fancy, but at least it will work when you submit it to AWS Lambda.
Using the command line
You can make the same mistake on the command line: zip -r function.zip function creates a zip file with a directory called function in it. Instead, run zip from inside the project directory:
$ zip function.zip index.js package.json node_modules
  adding: index.js (deflated 47%)
  adding: package.json (deflated 36%)
  adding: node_modules/ (stored 0%)
(Note that without -r only the empty node_modules/ entry is stored; pass -r as well if your function actually depends on anything inside node_modules.)
How to verify your zip file
Using finder, if you double click the zip file and it uncompresses in a subdirectory then Lambda won't be able to see the file as index.js lives in that subdirectory.
Using the command line and zipinfo:
$ zipinfo function.zip | grep index.js | more
-rw-r--rw- 2.1 unx 1428 bX defN 27-Jul-16 12:21 function/index.js
Notice how index.js ended up inside the function subdirectory: this zip will not work in AWS Lambda.
$ zipinfo function.zip | grep index.js | more
-rw-r--rw- 3.0 unx 1428 tx defN 27-Jul-16 12:21 index.js
Notice that index.js is not inside a subfolder; this zip file will work in AWS Lambda.
Leveraging npm scripts to zip the function
I added a script to my package.json that zips the project files for me when I run npm run zip:
{
    "name": "function",
    "version": "1.0.0",
    "description": "",
    "main": "index.js",
    "scripts": {
        "zip": "zip function.zip package.json *.js node_modules"
    },
    "dependencies": {
        "aws-sdk": "^2.4.10"
    }
}
$ npm run zip
> function#1.0.0 zip
> zip function.zip package.json *.js node_modules
adding: package.json (deflated 41%)
adding: index.js (deflated 47%)
adding: local.js (deflated 42%)
adding: node_modules/ (stored 0%)

Here is a more advanced way, using the AWS CLI. It will save you time in the long run.
First of all, you should install and configure the AWS CLI:
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
1) Create an archive
$ zip -r lambda *
This creates a lambda.zip file with all the folders and files in the current location.
2) Get role ARN
$ aws iam list-roles | grep "your_role"
This returns the ARN of the role to use with the Lambda function. You need to have created the role beforehand.
3) Create our lambda
$ aws lambda create-function --function-name "your_lambda_name" --zip-file fileb://lambda.zip --handler index.handler --runtime nodejs6.10 --timeout 15 --role COPY_HERE_YOUR_ARN_FROM_THE_STEP_2
We are done!
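For subsequent deployments there is no need to recreate the function; re-zip and push the new code (same placeholders as above):
$ zip -r lambda *
$ aws lambda update-function-code --function-name "your_lambda_name" --zip-file fileb://lambda.zip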

Automation - using Grunt
Complete AWS Lambda Seed project is available on Git.
Step 1: Init npm module
npm init
Step 2: Install Grunt
npm install --save-dev grunt grunt-cli
Step 3: Install grunt-aws-lambda
npm install --save-dev grunt-aws-lambda
Step 4: Create Folder for Lambda service
# Create directory
mkdir lambdaTest
# Jump into folder
cd lambdaTest
# Create service file
touch lambdaTest.js
# Initialize npm
npm init
Keep your logic/code in lambdaTest.js:
'use strict';

exports.handler = (event, context, callback) => {
    console.log("Hello, it looks like it's working");
    callback(null, 'done'); // signal successful completion
};
Step 5: Create Gruntfile.js
Navigate back to root folder
touch Gruntfile.js
'use strict';

module.exports = function (grunt) {
    grunt.initConfig({
        lambda_invoke: {
            lambdaTest: {
                options: {
                    file_name: "lambdaTest/lambdaTest.js",
                    event: "lambdaTest/test.json",
                }
            }
        },
        lambda_package: {
            lambdaTest: {
                options: {
                    package_folder: 'lambdaTest/'
                }
            }
        },
        lambda_deploy: {
            lambdaTest: {
                arn: 'arn:aws:lambda:eu-central-1:XXXXXXXX:function:lambdaTest',
                options: {
                    credentialsJSON: 'awsCredentials.json',
                    region: "eu-central-1"
                },
            }
        },
    });

    grunt.loadNpmTasks('grunt-aws-lambda');

    grunt.registerTask('ls-deploy', ['lambda_package:lambdaTest', 'lambda_deploy:lambdaTest']);
};
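The lambda_invoke config above references lambdaTest/test.json, which the steps don't create; a minimal sample event for local testing (contents are an assumption, shape them to match your handler):
{
    "key1": "value1"
}
With that in place you can run the function locally with grunt lambda_invoke:lambdaTest before deploying.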
Step 6: Create awsCredentials.json
Create an AWS IAM user with a custom policy. The custom policy should have access to lambda:GetFunction, lambda:UploadFunction, lambda:UpdateFunctionCode, lambda:UpdateFunctionConfiguration and iam:PassRole.
{
    "accessKeyId": "XXXXXXXXXXXXXXXXXXXX",
    "secretAccessKey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
}
Step 7: Create a zip and deploy to AWS Lambda.
ls-deploy is the custom task registered in the Gruntfile above; it creates a zip of the source code and deploys it to Lambda.
grunt ls-deploy

Let's take a folder named 'sample' as an example, which we want to zip. Assume there are some subfolders or files within the sample folder.
Q. What do you have to do?
A: Follow these steps:
Go inside the folder 'sample'.
Select all required files or subfolders.
Right-click on any one and select Send to > Compressed (zipped) folder.
You will see a zip file created; save it anywhere you want.
Upload this zip as the AWS Lambda function.
Q. What should you not do?
A: Do not zip the 'sample' folder itself. It won't work.

The same error also occurs when you have selected the wrong runtime language for the function.

It's because exports.handler is not referencing the index function. This can be solved in a simpler way.
Try this:
console.log('Loading function');

exports.handler = function index(event, context) {
    console.log('value1 =', event.key1);
    console.log('value2 =', event.key2);
    console.log('value3 =', event.key3);
    context.succeed(event.key1); // Echo back the first key value
    // context.fail('Something went wrong');
};

Related

gulp doesn't create any directory

My workspace is a directory called "Super gulp" and below it are the directories for my files. The problem is that I was converting my .pug files into HTML files and putting them in the "goal" directory, but when I run "dev" nothing comes out. I've tried the method from (Gulp doesn't create folder?) and the result didn't change.
Example of converting pug files into HTML files and putting a watcher on them.
Step: 1 ->
First install npm packages for compiling pug and watch for changes
npm i -S gulp-pug gulp-watch
Step: 2 ->
Then create and config your gulpfile.js.
First import the npm modules, then create the compile task and the watcher.
gulpfile.js
const gulp = require('gulp');
const pug = require('gulp-pug');
const watch = require('gulp-watch');

// Compile task: compile every .pug file in src into ./src/goal/
gulp.task('pug', () => {
    return gulp.src('./src/*.pug')
        .pipe(pug({
            doctype: 'html',
            pretty: false
        }))
        .pipe(gulp.dest('./src/goal/'));
});

// Watcher: recompile changed .pug files into the same destination
gulp.task('watch', () => {
    return watch('./src/*.pug', { ignoreInitial: false })
        .pipe(pug({ doctype: 'html', pretty: false }))
        .pipe(gulp.dest('./src/goal/'));
});
Step: 3 -> Run the command below to run the task:
gulp watch
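As an aside, if you are on gulp 4 you can drop gulp-watch entirely and use the built-in gulp.watch, which re-runs a task function on change; a minimal sketch of the same setup (assumes gulp 4):
const gulp = require('gulp');
const pug = require('gulp-pug');

// Compile ./src/*.pug into ./src/goal/
function pugTask() {
    return gulp.src('./src/*.pug')
        .pipe(pug({ doctype: 'html', pretty: false }))
        .pipe(gulp.dest('./src/goal/'));
}

// Run once, then recompile on every change
exports.watch = gulp.series(pugTask, function watchTask() {
    return gulp.watch('./src/*.pug', pugTask);
});
Then gulp watch behaves the same way, without the extra dependency.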

NodeJS environment variables undefined

I'm trying to create some environment variables, but when I create the file and run the server they seem to be undefined. I'm using nodemon. I have restarted my server with no luck.
UPDATED
.env
MONGO_ATLAS_PW = "xxxx";
JWT_KEY = "secret_this_should_be_longer";
package.json
...
"scripts": {
...
"start:server": "nodemon ./server/server.js"
}
app.js
require('dotenv').config();
...
console.log(process.env.JWT_KEY); //undefined
I believe the nodemon.json file is only for setting nodemon specific configuration. If you look at the nodemon docs for a sample nodemon.json file, the only env variable they mention setting is NODE_ENV.
Have you considered putting these environment variables for your app in a .env file instead? There is a package called dotenv that is helpful for managing env variables in Node.
First, install dotenv using the command npm install dotenv
Then, create a file called .env in the root directory with the following:
MONGO_ATLAS_PW=xxxxx
JWT_KEY=secret_this_should_be_longer
Finally, inside your app.js file after your imports add the following line:
require('dotenv').config()
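Putting it together, a minimal app.js sketch (only the load order matters: config() must run before anything reads process.env):
// app.js
require('dotenv').config();

console.log(process.env.MONGO_ATLAS_PW); // xxxxx
console.log(process.env.JWT_KEY);        // secret_this_should_be_longer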
I believe you're referring to the dotenv package. To configure it, first create a file called .env with your keys and values stored like so:
MONGO_ATLAS_PW=xxxxx
JWT_KEY=secret_this_should_be_longer
Then, in your server.js, add this near the top:
require("dotenv").config();
Then the process.env variable will be an object containing the values in .env.
This needed to be in the root directory of my project.
nodemon.json
{
    "env": {
        "MONGO_ATLAS_PW": "xxxx",
        "JWT_KEY": "secret_this_should_be_longer"
    }
}
The env variables must not have spaces around the equals sign; also remove the quotes and the semicolons. Change
MONGO_ATLAS_PW = "xxxx";
JWT_KEY = "secret_this_should_be_longer";
to
MONGO_ATLAS_PW=xxxx
JWT_KEY=secret_this_should_be_longer
and restart the server
Or you can also try using nodemon.json: create a new file called nodemon.json in your root directory
{
    "env": {
        "MONGO_ATLAS_PW": "xxxx",
        "JWT_KEY": "secret_this_should_be_longer"
    }
}
and restart the server.
For accessing the variables:
process.env.MONGO_ATLAS_PW
process.env.JWT_KEY

Running an executable before index.js

I have written a file that needs to execute before index.js, since it uses commander to require the user to pass information to the index file. I have it placed in a bin directory, but I'm not sure how to make it run. I can cd into the directory and run node <file_name>, passing it the values needed, and it runs fine (I export the index, import it into the file, and call it at the end), but is there not a way to add it to package.json so it runs with an easier command?
Executable:
#!/usr/bin/env node
const program = require('commander');
const index = require('../src/index.js');

program
    .version('0.0.1')
    .option('-k, --key <key>')
    .option('-s, --secret <secret>')
    .option('-i, --id <id>')
    .parse(process.argv);

let key = program.key;
let secret = program.secret;
let publicId = program.id;

index(key, secret, publicId);
When a Node.js script is supposed to run as an executable, it's specified via the package.json bin option:
To use this, supply a bin field in your package.json which is a map of command name to local file name. On install, npm will symlink that file into prefix/bin for global installs, or ./node_modules/.bin/ for local installs.
It can be located in src or elsewhere:
{
  ...
  "bin": { "foo": "src/bin.js" },
  ...
}
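During development you can try the bin entry without publishing: npm link symlinks the mapped command onto your PATH (foo here is just the example name from the map above):
$ npm link
$ foo -k mykey -s mysecret -i 12345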

How do I get Bower to install a file to a specified path and name?

I have the following bower.json:
{
    "name": "myname",
    "dependencies": {
        "stripe": "https://js.stripe.com/v2/"
    }
}
This grabs the javascript at the associated url and creates the following file:
/bower_components/stripe/index
Note that the file is not index.js, but simply index. This is problematic, as my Brocfile refuses to use the index file, insisting that it has to be index.js. If I manually change the name to index.js, then the application works fine. Obviously, this isn't a satisfactory solution.
So is there a way to get bower to install the file as index.js rather than index?
If you need to set a different folder for bower you can create a .bowerrc file with the following:
{
"directory": "public/bower"
}
I'm not exactly sure of your environment, but, for example, if you have Node.js you can create a gulp.js setup which would do the rename before whatever other processes you need to run.
A quasi example gulpfile.js:
var gulp = require('gulp');
var rename = require('gulp-rename');

gulp.task('prep', function () {
    return gulp.src('public/bower/stripe/index', {
            base: 'public/bower/stripe'
        })
        .pipe(rename('index.js'))
        .pipe(gulp.dest('./'));
});

How to auto-reload files in Node.js?

Any ideas on how I could implement an auto-reload of files in Node.js? I'm tired of restarting the server every time I change a file.
Apparently Node.js' require() function does not reload files if they already have been required, so I need to do something like this:
var sys = require('sys'),
    http = require('http'),
    posix = require('posix'),
    json = require('./json');

var script_name = '/some/path/to/app.js';
this.app = require('./app').app;

process.watchFile(script_name, function(curr, prev){
    posix.cat(script_name).addCallback(function(content){
        process.compile( content, script_name );
    });
});

http.createServer(this.app).listen( 8080 );
And in the app.js file I have:
var file = require('./file');

this.app = function(req, res) {
    file.serveFile( req, res, 'file.js');
};
But this also isn't working - I get an error in the process.compile() statement saying that 'require' is not defined. process.compile is evaling the app.js, but has no clue about the node.js globals.
A good, up to date alternative to supervisor is nodemon:
Monitor for any changes in your node.js application and automatically restart the server - perfect for development
To use nodemon with versions of Node without npx (v8.1 and below, not advised):
$ npm install nodemon -g
$ nodemon app.js
Or to use nodemon with versions of Node with npx bundled in (v8.2+):
$ npm install nodemon
$ npx nodemon app.js
Or as a devDependency with an npm script in package.json:
"scripts": {
"start": "nodemon app.js"
},
"devDependencies": {
"nodemon": "..."
}
node-supervisor is awesome
usage to restart on save for old Node versions (not advised):
npm install supervisor -g
supervisor app.js
usage to restart on save for Node versions that come with npx:
npm install supervisor
npx supervisor app.js
or directly call supervisor in an npm script:
"scripts": {
"start": "supervisor app.js"
}
I found a simple way:
delete require.cache['/home/shimin/test2.js']
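Note that the cache key must be the resolved absolute path, and the deletion only takes effect on the next require. A small sketch using require.resolve so the path isn't hardcoded:
function requireFresh(modulePath) {
    const resolved = require.resolve(modulePath);
    delete require.cache[resolved]; // drop the cached module
    return require(resolved);       // re-read and re-evaluate it from disk
}

var test2 = requireFresh('/home/shimin/test2.js');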
If somebody still comes to this question and wants to solve it using only the standard modules, here is a simple example:
var process = require('process');
var cp = require('child_process');
var fs = require('fs');

var server = cp.fork('server.js');
console.log('Server started');

// fs.watchFile passes the current and previous stat objects, not an event name
fs.watchFile('server.js', function (curr, prev) {
    server.kill();
    console.log('Server stopped');
    server = cp.fork('server.js');
    console.log('Server started');
});

process.on('SIGINT', function () {
    server.kill();
    fs.unwatchFile('server.js');
    process.exit();
});
This example is only for one file (server.js), but it can be adapted to multiple files using an array of files, a for loop to get all the file names, or by watching a directory:
fs.watch('./', function (event, filename) { // subdirectory changes are not seen
    console.log(`restart server`);
    server.kill();
    server = cp.fork('server.js');
});
This code was made for the Node.js 0.8 API; it is not adapted to specific needs, but it will work in some simple apps.
UPDATE: This functionality is implemented in my module simpleR, GitHub repo
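As a follow-up to the fs.watch variant above: if you do need subdirectory changes, fs.watch also accepts a recursive option (long supported on macOS and Windows, and on Linux in recent Node versions):
fs.watch('./', { recursive: true }, function (event, filename) {
    console.log('restart server');
    server.kill();
    server = cp.fork('server.js');
});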
nodemon came up first in a google search, and it seems to do the trick:
npm install nodemon -g
cd whatever_dir_holds_my_app
nodemon app.js
nodemon is a great one. I just add more parameters for debugging and watching options.
package.json
"scripts": {
"dev": "cross-env NODE_ENV=development nodemon --watch server --inspect ./server/server.js"
}
The command: nodemon --watch server --inspect ./server/server.js
Where:
--watch server Restart the app when changing .js, .mjs, .coffee, .litcoffee, and .json files in the server folder (including subfolders).
--inspect Enable remote debug.
./server/server.js The entry point.
Then add the following config to launch.json (VS Code) and start debugging anytime.
{
    "type": "node",
    "request": "attach",
    "name": "Attach",
    "protocol": "inspector",
    "port": 9229
}
Note that it's better to install nodemon as a dev dependency of the project, so your team members don't need to install it or remember the command arguments; they just npm run dev and start hacking.
See more on nodemon docs: https://github.com/remy/nodemon#monitoring-multiple-directories
nodemon has been the go-to for restarting a server on file changes for a long time. Now Node.js 19 has introduced a --watch flag (experimental), which does the same. Docs
node --watch index.js
node-dev works great. npm install node-dev
It even gives a desktop notification when the server is reloaded, reporting success or errors in the message.
Start your app on the command line with:
node-dev app.js
There is node-supervisor, which you can install with
npm install supervisor
see http://github.com/isaacs/node-supervisor
You can use nodemon from npm.
And if you are using Express generator, then you can use this command inside your project folder:
nodemon npm start
or using debug mode:
DEBUG=yourapp:* nodemon npm start
You can also run it directly:
nodemon your-app-file.js
Hope this helps.
There was a recent (2009) thread about this subject on the node.js mailing list. The short answer is no, it's currently not possible to auto-reload required files, but several people have developed patches that add this feature.
With Node.js 19 you can monitor file changes with the --watch option. After a file is changed, the process is restarted automatically, reflecting new changes.
node --watch server.js
Yet another solution for this problem is using forever:
Another useful capability of Forever is that it can optionally restart
your application when any source files have changed. This frees you
from having to manually restart each time you add a feature or fix a
bug. To start Forever in this mode, use the -w flag:
forever -w start server.js
Here is a blog post about Hot Reloading for Node. It provides a github Node branch that you can use to replace your installation of Node to enable Hot Reloading.
From the blog:
var requestHandler = require('./myRequestHandler');

process.watchFile('./myRequestHandler', function () {
    module.unCacheModule('./myRequestHandler');
    requestHandler = require('./myRequestHandler');
});

var reqHandlerClosure = function (req, res) {
    requestHandler.handle(req, res);
};

http.createServer(reqHandlerClosure).listen(8000);
Now, any time you modify myRequestHandler.js, the above code will notice and replace the local requestHandler with the new code. Any existing requests will continue to use the old code, while any new incoming requests will use the new code. All without shutting down the server, bouncing any requests, prematurely killing any requests, or even relying on an intelligent load balancer.
I am working on making a rather tiny node "thing" that is able to load/unload modules at-will (so, i.e. you could be able to restart part of your application without bringing the whole app down).
I am incorporating a (very stupid) dependency management, so that if you want to stop a module, all the modules that depends on that will be stopped too.
So far so good, but then I stumbled into the issue of how to reload a module. Apparently, one could just remove the module from the "require" cache and have the job done. Since I'm not keen on changing the Node source code directly, I came up with a very hacky hack: search the stack trace for the last call to the "require" function, grab a reference to its "cache" field, and... well, delete the reference to the module:
var args = arguments;
while (!args['1'] || !args['1'].cache) {
    args = args.callee.caller.arguments;
}
var cache = args['1'].cache;
util.log('remove cache ' + moduleFullpathAndExt);
delete( cache[ moduleFullpathAndExt ] );
Even easier, actually:
var deleteCache = function (moduleFullpathAndExt) {
    delete( require.cache[ moduleFullpathAndExt ] );
};
Apparently, this works just fine. I have absolutely no idea what that arguments["1"] means, but it's doing its job. I believe the Node guys will implement a reload facility someday, so I guess that for now this solution is acceptable too.
(btw. my "thing" will be here: https://github.com/cheng81/wirez , go there in a couple of weeks and you should see what I'm talking about)
Solution at: http://github.com/shimondoodkin/node-hot-reload
Notice that you have to take care of the references used yourself: if you did var x = require('foo'); y = x; z = x.bar; and then hot reloaded it, you have to replace the references stored in x, y and z in the hot reload callback function.
Some people confuse hot reload with auto restart; my nodejs-autorestart module also has upstart integration to enable auto start on boot.
If you have a small app, auto restart is fine, but when you have a large app, hot reload is more suitable, simply because hot reload is faster.
I also like my node-inflow module.
Here's a low tech method for use in Windows. Put this in a batch file called serve.bat:
@echo off
:serve
start /wait node.exe %*
goto :serve
Now instead of running node app.js from your cmd shell, run serve app.js.
This will open a new shell window running the server. The batch file will block (because of the /wait) until you close the shell window, at which point the original cmd shell will ask "Terminate batch job (Y/N)?" If you answer "N" then the server will be relaunched.
Each time you want to restart the server, close the server window and answer "N" in the cmd shell.
my app structure:
NodeAPP (folder)
|-- app (folder)
|-- all other file is here
|-- node_modules (folder)
|-- package.json
|-- server.js (my server file)
First install reload with this command:
npm install [-g] [--save-dev] reload
Then change package.json:
"scripts": {
"start": "nodemon -e css,ejs,js,json --watch app"
}
Now use reload in your server file:
var express = require('express');
var reload = require('reload');
var app = express();

app.set('port', process.env.PORT || 3000);

var server = app.listen(app.get('port'), function() {
    console.log('server is running on port ' + app.get('port'));
});

reload(server, app);
And as a last change, at the end of your response, send this script:
<script src="/reload/reload.js"></script>
Now start your app with:
npm start
You can do it with browser-refresh. Your Node app restarts automatically, and your result page in the browser also refreshes automatically. The downside is that you have to put a js snippet on the generated page. Here's the repo for the working example.
const http = require('http');

const hostname = 'localhost';
const port = 3000;

const server = http.createServer((req, res) => {
    res.statusCode = 200;
    res.setHeader('Content-Type', 'text/html; charset=UTF-8');
    res.write('Simple refresh!');
    res.write(`<script src=${process.env.BROWSER_REFRESH_URL}></script>`);
    res.end();
});

server.listen(port, hostname, () => {
    console.log(`Server running at http://${hostname}:${port}/`);
    if (process.send) {
        process.send({ event: 'online', url: `http://${hostname}:${port}/` });
    }
});
It's not necessary to use nodemon or other tools like that. Just use the capabilities of your IDE.
Probably the best one is IntelliJ WebStorm with its hot reload feature (automatic server and browser reload) for Node.js.
I have tried pm2: installation is easy, and it is easy to use too; the result is satisfying. However, we have to take care of which edition of pm2 we want: pm2 runtime is the free edition, whereas pm2 plus and pm2 enterprise are not free.
As for Strongloop, my installation failed or was not complete, so I couldn't use it.
If you're talking about server-side Node.js hot-reloading, let's say you wish to have a JavaScript file on the server which has an express route described, and you want this JavaScript file to hot reload rather than the server restarting on file change, then razzle can do that.
An example of this is basic-server:
https://github.com/jaredpalmer/razzle/tree/master/examples/basic-server
The file https://github.com/jaredpalmer/razzle/blob/master/examples/basic-server/src/server.js will hot-reload if it is changed and saved; the server does not restart.
This means you can program a REST server which can hot-reload using razzle.
It's quite simple to just do this yourself without any dependency... the built-in file watcher has matured enough that it doesn't suck as much as before.
You don't need any complicated child process to spawn/kill and pipe std in/out... you just need a simple worker, that's all! A Web Worker is also what I would have used in browsers... so stick to web techniques! The worker will also log to the console.
import { watch } from 'node:fs/promises'
import { Worker } from 'node:worker_threads'

let worker = new Worker('./app.js')

async function reloadOnChange (dir) {
  const watcher = watch(dir, { recursive: true })
  for await (const change of watcher) {
    if (change.filename.endsWith('.js')) {
      worker.terminate()
      worker = new Worker('./app.js')
    }
  }
}

// All the folders to watch for changes
['./src', './lib', './test'].map(reloadOnChange)
This might not be the best solution if you use anything other than JavaScript or depend on some build process.
Use this:
var fs = require('fs');
var path = require('path');
var _ = require('underscore'); // or lodash; the snippet assumes _ is in scope

function reload_config(file) {
    if (!(this instanceof reload_config))
        return new reload_config(file);
    var self = this;
    self.path = path.resolve(file);
    fs.watchFile(file, function(curr, prev) {
        delete require.cache[self.path];
        _.extend(self, require(file));
    });
    _.extend(self, require(file));
}
All you have to do now is:
var config = reload_config("./config");
And config will automatically get reloaded :)
loaddir is my solution for quick, recursive loading of a directory.
It can return
{ 'path/to/file': 'fileContents...' }
or
{ path: { to: { file: 'fileContents'} } }
It has a callback which will be called when a file is changed.
It handles situations where files are large enough that watch gets called before they're done writing.
I've been using it in projects for a year or so, and just recently added promises to it.
Help me battle test it!
https://github.com/danschumann/loaddir
You can use auto-reload to reload the module without shutting down the server.
install
npm install auto-reload
example
data.json
{ "name" : "Alan" }
test.js
var fs = require('fs');
var reload = require('auto-reload');
var data = reload('./data', 3000); // reload every 3 secs

// print data every sec
setInterval(function() {
    console.log(data);
}, 1000);

// update data.json every 3 secs (writeFile needs a callback in current Node)
setInterval(function() {
    var data = '{ "name":"' + Math.random() + '" }';
    fs.writeFile('./data.json', data, function (err) {
        if (err) throw err;
    });
}, 3000);
Result:
{ name: 'Alan' }
{ name: 'Alan' }
{ name: 'Alan' }
{ name: 'Alan' }
{ name: 'Alan' }
{ name: '0.8272748321760446' }
{ name: '0.8272748321760446' }
{ name: '0.8272748321760446' }
{ name: '0.07935990858823061' }
{ name: '0.07935990858823061' }
{ name: '0.07935990858823061' }
{ name: '0.20851597073487937' }
{ name: '0.20851597073487937' }
{ name: '0.20851597073487937' }
Another simple solution is to use fs.readFile instead of require: you can save a text file containing a JSON object, and create an interval on the server to reload this object, as sketched after this list.
pros:
no need to use external libs
relevant for production (reloading config file on change)
easy to implement
cons:
you can't reload a module - just a JSON containing key-value data
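A minimal sketch of that approach (file name and interval are arbitrary):
const fs = require('fs');

let config = {};

function reloadConfig() {
    fs.readFile('./config.json', 'utf8', function (err, text) {
        if (err) return console.error('config reload failed:', err);
        try {
            config = JSON.parse(text); // swap in the fresh key-value data
        } catch (e) {
            console.error('invalid JSON in config.json:', e);
        }
    });
}

reloadConfig();
setInterval(reloadConfig, 3000); // re-read every 3 seconds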
For people using Vagrant and PHPStorm, the file watcher is a faster approach. Disable immediate sync of the files so you run the command only on save, then create a scope for the *.js files and working directories and add this command:
vagrant ssh -c "/var/www/gadelkareem.com/forever.sh restart"
where forever.sh is like:
#!/bin/bash

cd /var/www/gadelkareem.com/ && forever $1 -l /var/www/gadelkareem.com/.tmp/log/forever.log -a app.js
I recently came to this question because the usual suspects were not working with linked packages. If you're like me and are taking advantage of npm link during development to effectively work on a project that is made up of many packages, it's important that changes that occur in dependencies trigger a reload as well.
After having tried nodemon and pm2, even following their instructions for additionally watching the node_modules folder, they still did not pick up changes. Although there are some custom solutions in the answers here, for something like this a separate package is cleaner. I came across node-dev today and it works perfectly without any options or configuration.
From the Readme:
In contrast to tools like supervisor or nodemon it doesn't scan the filesystem for files to be watched. Instead it hooks into Node's require() function to watch only the files that have been actually required.
const cleanCache = (moduleId) => {
    const module = require.cache[moduleId];
    if (!module) {
        return;
    }

    // 1. clean parent
    if (module.parent) {
        module.parent.children.splice(module.parent.children.indexOf(module), 1);
    }

    // 2. clean self (delete the entry so the next require() re-loads the
    // module; assigning null can leave a stale entry on newer Node versions)
    delete require.cache[moduleId];
};
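A usage sketch for the helper above: clean the entry, then the next require re-evaluates the file (the module name is a placeholder):
const id = require.resolve('./myModule');
cleanCache(id);
const fresh = require('./myModule'); // re-loaded from disk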
