Heroku - Headless Chrome - Connection Refused

I am currently working with the Heroku buildpack for headless Chrome:
https://github.com/heroku/heroku-buildpack-google-chrome/
I'm encountering an infuriating error where my Node script (shown below) cannot connect to the Chrome instance. The error is pretty definitive:
{ Error: connect ECONNREFUSED 127.0.0.1:30555
    at Object.exports._errnoException (util.js:1018:11)
    at exports._exceptionWithHostPort (util.js:1041:20)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1090:14)
  code: 'ECONNREFUSED',
  errno: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 30555 }
My super simple Node script:
const CDP = require('chrome-remote-interface');

CDP((client) => {
  // extract domains
  // const {Network, Page} = client;
  const Network = client.Network;
  const Page = client.Page;
  // set up handlers
  Network.requestWillBeSent((params) => {
    console.log(params.request.url);
  });
  Page.loadEventFired(() => {
    client.close();
  });
  // enable events, then start!
  Promise.all([
    Network.enable(),
    Page.enable()
  ]).then(() => {
    return Page.navigate({url: 'https://www.something.com/'});
  }).catch((err) => {
    console.error(err);
    client.close();
  });
}).on('error', (err) => {
  // cannot connect to the remote endpoint
  console.error(err);
});
Has anyone had any luck getting this type of thing to work?

My Procfile looks like this to first start Chrome, then my Node.js server:
web: /app/.apt/usr/bin/google-chrome & node app/server.js
(Used in Scraping Service, a REST API for scraping dynamic websites. It uses headless Chrome and Cheerio.)

Alright, I figured it out. When deploying to Heroku, I was using two different procs in the Procfile: one for web, which launched the Node script, and another for launching the headless Chrome daemon.
On Heroku, those two procs don't even share the same dyno. Meaning they were on totally separate "boxes" - at least in theory. This resulted in them having different ports set in their ENVs (not that it even mattered at that point - they might as well have been on different continents).
Solution:
Have the Node script start the actual headless Chrome as a child process, then connect to that child process using the CDP interface.
Also - if you're here and curious about documentation for the CDP interface for Node - it doesn't exist at the moment. Your best option, which is actually pretty good, is: https://chromedevtools.github.io/debugger-protocol-viewer/
Happy hunting.
Edit:
Example of how we handled launching the Chrome child process from the application source:
const spawn = require('child_process').spawn

spawn('/path/to/chrome/binary', [`--remote-debugging-port=${process.env.PORT}`]) // PORT is set by Heroku
  .on('close', () => console.log('CHROME_PROCESS_CLOSE'))
  .on('error', e => console.log('CHROME_PROCESS_ERROR', e))
  .on('exit', (code, signal) => console.log('CHROME_PROCESS_EXIT', code, signal))
// note: ChildProcess emits no 'data' event; read child.stdout/stderr instead
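For reference, here's a minimal sketch (mine, not from the original answer) of then connecting to that spawned Chrome over CDP, assuming the chrome-remote-interface package; the connectWithRetry helper is hypothetical, and the retry loop exists because Chrome takes a moment to start listening on the debugging port:
const CDP = require('chrome-remote-interface');

// Hypothetical helper: keep retrying until Chrome's debugger endpoint accepts connections.
function connectWithRetry(port, retriesLeft = 20) {
  return CDP({ port }).catch((err) => {
    if (retriesLeft <= 0) throw err;
    return new Promise((resolve) => setTimeout(resolve, 500))
      .then(() => connectWithRetry(port, retriesLeft - 1));
  });
}

connectWithRetry(Number(process.env.PORT)).then((client) => {
  // same Network/Page wiring as the script at the top of this question
});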

Related

node ebusy when spawning .exe

I'm working with a temp file that's downloaded from a server. When I ran it on macOS, it ran fine and did what it was supposed to do, but when I run it on Windows, it keeps giving me an EBUSY error when trying to spawn the child.
I tried delaying the start of the file. I tried removing the chmod so it just runs on Linux and macOS. But I'm still getting the EBUSY error. Am I doing something wrong?
Note:
I can launch the binary from another Node instance outside, like from a cmd, but launching it from the Node instance that created it leads to an EBUSY error.
const temp = require('temp');
const fs = require('fs');
const cp = require('child_process');

temp.open('', (err, info) => {
  if (err) throw err;
  console.log('File: ', info.path);
  console.log('Filedescriptor: ', info.fd);
  var data = fs.createWriteStream(info.path);
  res.data.pipe(data); // res is the HTTP response carrying the binary
  data.on('close', async () => {
    fs.chmodSync(info.path, 0o755);
    await delay(1000);
    var child = cp.spawn(info.path, [key, jwt], { stdio: ['inherit', 'inherit', 'inherit', 'ipc'] });
  });
});
Error:
Error: spawn EBUSY
    at ChildProcess.spawn (node:internal/child_process:415:11)
    at Object.spawn (node:child_process:707:9)
    at WriteStream.<anonymous>
    at WriteStream.emit (node:events:394:28)
    at emitCloseNT (node:internal/streams/destroy:138:10)
    at processTicksAndRejections (node:internal/process/task_queues:82:21) {
  errno: -4082,
  code: 'EBUSY',
  syscall: 'spawn'
}
Edit:
I created a new module to try to spawn the child that way; it gets forked from the main process. But I'm still getting the same error in the fork.
// This module is forked from the main process; process.argv[2] is the
// path to the binary, and everything after it is passed through as arguments.
const cp = require('child_process')
const childpath = process.argv[2]
const argv = process.argv.slice(3)
console.log(argv)
var child = cp.spawn(childpath, argv, {
  stdio: ['inherit', 'inherit', 'inherit', 'ipc']
})
// relay IPC messages, errors and the exit code back to the parent
child.on('message', (msg) => {
  if (process.send) process.send(msg)
  else process.emit('message', msg)
})
child.on('error', (msg) => {
  console.log(msg)
  process.emit('error', msg)
})
child.on('close', (code) => {
  process.exit(code)
})
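For context, here is a minimal sketch (my assumption, not from the original post) of how the main process might fork the module above; the file name spawn-worker.js and the binaryPath variable are placeholders:
const cp = require('child_process');
const path = require('path');

// Arguments after the module path become process.argv[2..] inside the fork.
const worker = cp.fork(
  path.join(__dirname, 'spawn-worker.js'), // hypothetical file name
  [binaryPath, key, jwt]
);
worker.on('message', (msg) => console.log('from worker:', msg));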
Update:
I've noticed that I cannot run the file until the process that created it has ended. Meaning the process that needs to use it is the one holding it, but I cannot do the thing I want to do with it, i.e. spawn it.
Update 2:
The next thing I tried was to create a symbolic link with Node. Still nothing, but I noticed that the file doesn't become runnable until the main process has ended, which means the process I'm running has something to do with it. So I need to be able to unlink it from the process that's running. It seems like Windows needs to do some initialization after the file is made, and because the file is still connected to the main process in some way, it's not able to. I'm guessing this is why, when I ended the process, the Node icon showed up on the symlink and I was able to run it manually.
Final Update:
I'm going to work on a file system module that acts like temp files but uses regular files that the process tracks. This gives me the functionality of temp files without the file being unable to be executed. It seems like making it a temp file meant it could not be executed by the same process that created it - a Windows file system thing and how it handles temp file permissions.
The way temp files are handled on Windows is different than on macOS. To my best estimate, temp files on Windows are linked to the process that created them and can't be changed or accessed by that process. The problem lies in trying to execute these temp files, which Windows has already assigned to a process, making them unassignable. Node.js needs to be able to access the file to launch it, and Windows permissions will not allow it until the main process is killed or ended, which unlinks the file from that process and makes it accessible to the rest of the system.
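A minimal sketch of that idea - ordinary files in a scratch directory, tracked by the process and cleaned up on exit. The makeTrackedFile helper and the directory layout are my assumptions, not the poster's actual module:
const fs = require('fs');
const os = require('os');
const path = require('path');
const crypto = require('crypto');

const tracked = [];
const scratchDir = fs.mkdtempSync(path.join(os.tmpdir(), 'scratch-'));

// Hypothetical helper: create a regular file we track ourselves,
// instead of a temp-library-managed file.
function makeTrackedFile(ext = '') {
  const p = path.join(scratchDir, crypto.randomBytes(8).toString('hex') + ext);
  tracked.push(p);
  return p;
}

// Best-effort cleanup when the process ends ('exit' handlers must stay synchronous).
process.on('exit', () => {
  for (const p of tracked) {
    try { fs.unlinkSync(p); } catch (e) { /* ignore */ }
  }
});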

How to run daemon within a js file

I'm trying to start a server daemon from within my JS code, then access it from the same program. However, when I use the execFile() method from the child_process module, it blocks the entire program: the execFile() call never returns, and I am unable to access the server. However, I know the daemon has started, since I see a process of the same name appear in my Activity Monitor (the macOS equivalent of Task Manager).
I've also tried exec() and spawn() from the same module, and they gave the same results.
What I'd want to be able to do is to start the daemon as a separate process, forget about it, then stop it when I'm done using it. Is there any way I could do at least the first two?
Here is my code (the runArduino function is where I start the daemon, and the main() function is where I access it):
const grpcLib = require('grpc');
const protoLoader = require('@grpc/proto-loader');
const pathLib = require("path");
const utilLib = require('util');
const exec = utilLib.promisify(require('child_process').execFile);
const RPC_PATH = pathLib.join(__dirname, "arduino-cli/rpc")
var PROTO_PATH = pathLib.join(RPC_PATH, "/commands/commands.proto");
const options = {
  keepCase: true,
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true,
  includeDirs:
  [
    RPC_PATH
  ]
}
const packageDefinition = protoLoader.loadSync(PROTO_PATH, options);
const arduinoCli = grpcLib.loadPackageDefinition(packageDefinition).cc.arduino.cli.commands;
function runArduino()
{
  exec(__dirname + "/arduino-cli_macos", ['daemon'], function(err, data)
  {
    console.log(err);
    console.log(data);
  });
}
function main()
{
  var client = new arduinoCli.ArduinoCore('localhost:50051', grpcLib.credentials.createInsecure());
  client.Version({}, function(err, response){
    console.log("Running version: ", response); // returns a version number
  });
}
runArduino();
main();
The first time I run it, this is what I get (execution doesn't stop):
Running version: undefined
Once the daemon is up and I run it, I get this (I am able to access the server now and execution does end):
Running version: { version: '0.11.0' }
Error: Command failed: /Users/Herve/Desktop/MyStuff/ArduinoX/ArduinoX/arduino-cli_macos daemon
Failed to listen on TCP port: 50051. Address already in use.
    at ChildProcess.exithandler (child_process.js:303:12)
    at ChildProcess.emit (events.js:315:20)
    at maybeClose (internal/child_process.js:1021:16)
    at Socket.<anonymous> (internal/child_process.js:443:11)
    at Socket.emit (events.js:315:20)
    at Pipe.<anonymous> (net.js:674:12) {
  killed: false,
  code: 5,
  signal: null,
  cmd: '/Users/Herve/Desktop/MyStuff/ArduinoX/ArduinoX/arduino-cli_macos daemon'
}
I believe you should either await the exec or run main() in exec's callback. Right now your main() executes before the child process has started.
exec(__dirname+"/arduino-cli_macos", ['daemon'],function(err, data)
{
console.log(err);
console.log(data);
});
This is why on the first run you're getting undefined. The child process is not killed automatically, I guess, which is why on the second run you can do the RPC but can't start the child process again, as it's already running (and occupying port 50051).
If your application is starting the child process, I believe it has to take care of killing it as well:
var childProcessHandler = exec(__dirname + "/arduino-cli_macos", ['daemon'], function(err, data)
{
  console.log(err);
  console.log(data);
});
// ... and later in your code:
childProcessHandler.kill()
This way you can start/stop the application without having to worry about cleaning up the processes. The only thing you have to consider is taking care of the cleanup in case an exception occurs.
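A minimal sketch of that cleanup (mine, not from the original answer), reusing the childProcessHandler from above; which process events to hook is a judgment call:
function cleanup() {
  if (childProcessHandler && !childProcessHandler.killed) {
    childProcessHandler.kill();
  }
}

// Kill the daemon when the parent exits, crashes, or is interrupted.
process.on('exit', cleanup);
process.on('uncaughtException', (err) => { console.error(err); cleanup(); process.exit(1); });
process.on('SIGINT', () => { cleanup(); process.exit(130); });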
Edit: okay, it appears that to start the process as a daemon, you'd have to use spawn with the detached option:
const { spawn } = require('child_process');
const child = spawn(__dirname + "/arduino-cli_macos", ['daemon'], {
  detached: true,
  stdio: 'ignore' // don't keep the child tied to the parent's stdio
});
child.unref();

Create React App not reusing existing tab

I have been developing a web app (for a few months) that is built on top of CRA. Everything has been working as intended until this morning, when I realized that the npm (or yarn) start script is no longer reusing the existing tab I have open in Chrome. Instead it opens a new tab at localhost:3000 even if there are existing tabs at localhost:3000. I have been investigating this for a few hours but have yet to find a solution. I added a log statement in the CRA script (within my node_modules) that handles the reuse of existing tabs on startup of the app.
function startBrowserProcess(browser, url) {
  // If we're on OS X, the user hasn't specifically
  // requested a different browser, we can try opening
  // Chrome with AppleScript. This lets us reuse an
  // existing tab when possible instead of creating a new one.
  const shouldTryOpenChromeWithAppleScript =
    process.platform === 'darwin' &&
    (typeof browser !== 'string' || browser === OSX_CHROME);
  if (shouldTryOpenChromeWithAppleScript) {
    try {
      // Try our best to reuse existing tab
      // on OS X Google Chrome with AppleScript
      execSync('ps cax | grep "Google Chrome"');
      execSync('osascript openChrome.applescript "' + encodeURI(url) + '"', {
        cwd: __dirname,
        stdio: 'ignore',
      });
      return true;
    } catch (err) {
      console.log(err);
      // Ignore errors.
    }
  }
This is the output of the log statement:
{ Error: Command failed: osascript openChrome.applescript "http://localhost:3000/"
    at checkExecSyncError (child_process.js:621:11)
    at execSync (child_process.js:658:13)
    at startBrowserProcess (/Users/***/Desktop/WorkSpace/React/***/node_modules/react-dev-utils/openBrowser.js:78:7)
    at openBrowser (/Users/***/Desktop/WorkSpace/React/***/node_modules/react-dev-utils/openBrowser.js:122:14)
    at Server.devServer.listen.err (/Users/***/Desktop/WorkSpace/React/***/node_modules/react-scripts/scripts/start.js:100:7)
    at Server.returnValue.listeningApp.listen (/Users/***/Desktop/WorkSpace/React/***/node_modules/webpack-dev-server/lib/Server.js:604:10)
    at Object.onceWrapper (events.js:273:13)
    at Server.emit (events.js:182:13)
    at emitListeningNT (net.js:1328:10)
    at process.internalTickCallback (internal/process/next_tick.js:72:19)
  status: 1,
  signal: null,
  output: [ null, null, null ],
  pid: 2691,
  stdout: null,
  stderr: null }
It seems to have an issue running the AppleScript command that handles this, but I am unsure why. One of the other developers working on the same app locally is not having this issue. One change I made recently was upgrading to the new macOS Mojave, but the other developer just upgraded as well and is not having this issue.
Does anyone know what could be wrong?
To avoid opening a new tab, create an .env file in the root of your project and add this line:
BROWSER=none
(as described in the Create React App advanced configuration docs)
Turns out that after I upgraded to Mojave, I had denied my terminal access to control Google Chrome, so it was unable to run the "reuse tab" AppleScript. (This permission can be re-enabled under System Preferences > Security & Privacy > Privacy > Automation.)

Node.js - execFile throws spawn UNKNOWN

I am trying to spawn a service created with PyInstaller from an Electron application. I am using the following code for that:
return new Promise((resolve, reject) => { // note: executor args must be (resolve, reject), not the reverse
  var exec = require('child_process').execFile;
  exec(path.join(install_path, 'myService.exe'), ['--startup=auto', 'install'], function(err, data) {
    if (err) {
      reject(err);
      return;
    }
    console.log(data.toString());
    exec(path.join(install_path, 'myService.exe'), ['start'], function(err, data) {
      if (err) {
        reject(err);
        return;
      }
      resolve(data.toString());
    });
  });
});
Unfortunately, this throws an
Uncaught Error: spawn UNKNOWN
on a testing system, which does not have node installed and is running Windows 10 x64. On my machine it is working fine.
Does anyone have tips on how I could investigate this further? I am especially curious how this error is uncaught, because the callback functions obviously contain simple error handling.
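One way to narrow it down (a sketch of mine, not from the thread, reusing path and install_path from the question's code): execFile returns a ChildProcess, so you can also listen for spawn-level failures on its 'error' event and dump every property of the error object, not just its message:
var exec = require('child_process').execFile;

var child = exec(path.join(install_path, 'myService.exe'), ['--startup=auto', 'install'], function(err, data) {
  if (err) {
    // log everything - code/errno/syscall/path - not just err.message
    console.error('execFile callback error:', JSON.stringify(err, Object.getOwnPropertyNames(err)));
    return;
  }
  console.log(data.toString());
});

// spawn-level failures also fire on the ChildProcess itself
child.on('error', (err) => console.error('child error event:', err));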
Okay, after I built in better error handling thanks to Keith's help and rebuilt the project, the testers could not reproduce the issue anymore. I am still not sure whether that actually fixed the problem or whether the testers pulled an old version the last time.
Anyway, this is solved.

AWS Lambda Function "Process exited before completing request" caused by hanging?

I have developed a Node application which I would like to have invoked as an AWS Lambda function.
The application works as intended as an AWS Lambda; however, my CloudWatch logs always finish with the following error: Process exited before completing request.
I wrote some code to ensure that my context.succeed() and context.fail() calls were taking place, and they are. However, when run locally, I also noted a lag between my logging of success in start.js and the command prompt reappearing, making me believe there could be some Node work still taking place after those calls have been made. Could that be causing the error, and if so, what is a good way to triage and resolve the issue?
The relevant code is below:
lambda-handle.js
import log from './log';
import database from './database';
import User from './database/models/user';

export function handle(event, context) {
  log.info('CS Blogs Feed Aggregator Started');
  database.sync()
    .then(() =>
      User.findAll({
        attributes: ['id', 'blogFeedURI']
      }))
    .then(users => {
      users.forEach(user => {
        log.info({ user }, 'User loaded from database');
      });
    })
    .then(() => {
      // context.succeed() called so AWS knows the function completed successfully
      context.succeed();
    })
    .catch(error => {
      context.fail(error);
    });
}
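If lingering async work is the suspicion, one thing worth trying - a sketch of mine, not from the original post, and assuming database is a Sequelize-style instance exposing close() - is to release the connection pool before signalling completion, so no open handles outlive the handler:
database.sync()
  .then(() => User.findAll({ attributes: ['id', 'blogFeedURI'] }))
  .then(users => {
    users.forEach(user => log.info({ user }, 'User loaded from database'));
  })
  .then(() => database.close()) // release pooled connections before succeeding
  .then(() => context.succeed())
  .catch(error => {
    // close first, then fail either way
    database.close().then(() => context.fail(error), () => context.fail(error));
  });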
start.js (used to test context.succeed/fail being called)
// This function invokes the AWS Lambda Handle as AWS would
// but allows you to do it from your local machine for development
// or from a non-AWS server
import { handle } from './lambda-handle';
import log from './log';

handle({}, {
  succeed: result => {
    log.info({ result: result || 'No result returned' }, 'Process succeeded');
  },
  fail: error => {
    log.error({ error: error || 'No error returned' }, 'Process failed');
  }
});
The code is being transpiled by Babel before being deployed. However, I suspect it is more of a logic issue, so I have shown the original source code.
If any more information is required, the repository is available here: https://github.com/csblogs/feed-downloader/tree/fix/lambda-implementation-details
Thanks
I am pretty sure this is caused by at least one native module dependency in bunyan (dtrace-provider). Native modules need to be built/installed on the system they will run on, so in the case of Lambda you need to run npm install on a Linux EC2 instance (or possibly Vagrant) to get the right version of dtrace-provider built.
See:
Cross-compile node module with native bindings with node-gyp
https://aws.amazon.com/blogs/compute/nodejs-packages-in-lambda/ (scroll to Native Modules)
You could probably just remove bunyan to verify it works, and then go down the EC2/Vagrant compile route if that fixes it.
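One quick way to test that theory - a hedged sketch, and the drop-in replacement of './log' is my assumption about the project layout - is to temporarily swap the bunyan-backed logger for a plain console shim and redeploy:
// log.js - temporary console-backed stand-in for the bunyan logger,
// to check whether the dtrace-provider native module is the culprit
export default {
  info: (...args) => console.log('[info]', ...args),
  error: (...args) => console.error('[error]', ...args),
};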
