Why is Puppeteer not displayed as a child process on IIS?

I have a Node.js server that spawns a child process whenever a user makes a request.
The child process is created with Node's 'child_process' module and runs a Puppeteer script.
The whole flow works as expected locally.
The problem shows up on the server: the process IS executed and logs are printed, BUT the Puppeteer browser window never appears, even though it is launched with headless: false.
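Roughly, the setup looks like this (a minimal sketch only; the file names, route, and Express wiring are hypothetical, not taken from the real project):

// server.js
const express = require('express');
const { fork } = require('child_process');
const path = require('path');

const app = express();
app.post('/run-scraper', (req, res) => {
  // Fork the Puppeteer script as a separate Node process on each request
  const child = fork(path.join(__dirname, 'puppeteer-script.js'));
  child.on('exit', (code) => res.json({ exitCode: code }));
});
app.listen(3000);

// puppeteer-script.js
const puppeteer = require('puppeteer');
(async () => {
  // headless: false should open a visible browser window; it does locally
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log('page loaded:', await page.title());
  await browser.close();
})();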
The server uses IIS, and I suspect that may be related.
P.S. I also tried the npm package 'execa' to execute the child process, and nothing changed.
Any ideas?
Thanks

Related

Serving files from Vite server.fs does not work in node 18+

We've been using Node.js 16.16.0 with Vite (our monorepo is managed with Rush, with pnpm under the hood), and we were serving some files using the server.fs configuration (https://vitejs.dev/config/server-options.html#server-fs-allow).
In Node 16.16 everything works more or less as expected: both the allow and deny options serve their purpose. However, after upgrading Node to 18.13, serving files from the filesystem stops working entirely, and there is very little to go on.
Every request for a specific file that works in Node 16.16 now fails with Error: connect ECONNREFUSED 127.0.0.1:5001, where 5001 is our dev port. Host and port are set correctly, and there are no suspicious error messages in the console.
It looks like the files are not being served under these URLs at all. Using deny changes nothing; the response is still ECONNREFUSED (in 16.16 it's 403 Restricted).
I upgraded Vite to the newest version, but the error is still the same.
I'm slowly running out of ideas and would appreciate any help.
OK, I found the issue... It's silly and has nothing to do with Vite. In Node 17+, IPv4 is no longer treated as the default; the OS configuration is taken into account instead.
Node.js no longer re-sorts results of IP address lookups and returns them as-is (i.e. it no longer ignores how your OS has been configured). You can change the behaviour via the verbatim option to dns.lookup() or set the --dns-result-order command line option to change the default.
It means that an application on this Node version may no longer serve anything on 127.0.0.1 (IPv4) but on ::1 (IPv6) instead. Switching from '127.0.0.1' to 'localhost' did the trick in my case.
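If you need to keep targeting 127.0.0.1, one possible workaround (a sketch, assuming Node 17+ and a standard vite.config.js; the 5001 port is the dev port mentioned above) is to pin the DNS result order back to IPv4-first:

// vite.config.js
import dns from 'node:dns';
import { defineConfig } from 'vite';

// Restore the pre-Node-17 behaviour of returning IPv4 addresses first;
// this is the in-config counterpart of --dns-result-order=ipv4first.
dns.setDefaultResultOrder('ipv4first');

export default defineConfig({
  server: {
    host: 'localhost', // or 127.0.0.1 once the result order is pinned
    port: 5001,
  },
});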

Understanding how HTTP requests work in Mocha

I understand that Mocha uses superagent under the hood, but how does it work when I haven't started a local server? Does Mocha start a server by itself and send HTTP requests to it, or am I misunderstanding how Mocha works in the first place?
For clarity: I checked, and I currently do not have a local server running, which is what confused me about how Mocha tests work under the hood.
Mocha runs JavaScript code with Node, just like the npm script that runs your local server. You just don't see it on the console the way you do when you enter npm run debug or whatever your script is. So under the hood it's just Node running the JavaScript.
That is why you don't need to start a server: it's already running while the tests are executing. You just aren't seeing the console logs you're used to, except for what Mocha prints, of course. See their page for more details.
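For what it's worth, the pattern the question is probably thinking of is supertest (which wraps superagent): the test file imports the app directly, and supertest binds it to an ephemeral port per request, so nothing has to be started beforehand. A rough sketch, assuming an Express app exported from app.js:

// test/health.test.js
const request = require('supertest');
const app = require('../app'); // the same app object your npm start script listens with

describe('GET /health', function () {
  it('responds with 200', function () {
    // supertest starts the app on an ephemeral port for this one request,
    // so no server needs to be running before the tests execute.
    return request(app).get('/health').expect(200);
  });
});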

Intern test case execution hangs forever after importing snapsvg-cjs in ts file

After importing snapsvg-cjs in a .ts file (in a Dojo 2 project), the Intern test run gets stuck after executing all the test cases.
Any idea what I am doing wrong?
When Intern stays open after the test cases have finished, it typically means that a file or socket is still open. Intern is done, but the Node process won't shut down while handles remain open. This can happen if, say, an asynchronous process was started in a test without returning a Promise.
One way to diagnose what's preventing the Intern process from exiting is to run it with wtfnode. When wtfnode is used to run a Node program and the program is manually killed (e.g. with Ctrl-C), wtfnode will display any open resources. Run Intern with wtfnode, press Ctrl-C when it appears to be hung, then see what's still open.
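A rough sketch of the programmatic variant, assuming wtfnode is installed as a dev dependency (the CLI form, wrapping whatever command normally starts Intern, works the same way):

// Somewhere in the Node-side test entry / Intern config:
const wtf = require('wtfnode');

process.on('SIGINT', () => {
  // When the run hangs, hit Ctrl-C: wtfnode lists the sockets, timers and
  // file handles that are still keeping the process alive.
  wtf.dump();
  process.exit(1);
});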

How to fork a child process with electron

I have a simple Node.js app with a function that scrapes file metadata. Since scraping metadata can be intensive, I made the app run it as a child process using fork.
const metaParser = child.fork( fe.join(__dirname, 'parse-metadata.js'), [jsonLoad]);
Everything worked great until I ported this to Electron. When run from main.js, the process is created successfully but exits immediately. I added some logging to parse-metadata.js and found that it starts up, runs the first few lines of code, and then exits.
How do I get Electron to fork parse-metadata.js and keep it alive until it finishes?
I'm using Electron v1.4.15 and Node v6.
When using the detached option to start a long-running process, the process will not stay running in the background unless it is provided with a stdio configuration that is not connected to the parent.
It also seems to be related to the env the child process inherits.
Look at this: https://github.com/electron/electron/issues/6868
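A rough sketch of what that could look like, adapted from the fork call in the question (the option values are assumptions to illustrate the idea, not a confirmed fix; note that fork always keeps an IPC channel to the parent):

const { fork } = require('child_process');
const path = require('path');

const jsonLoad = JSON.stringify({ files: [] });  // placeholder for the question's [jsonLoad] argument

const metaParser = fork(path.join(__dirname, 'parse-metadata.js'), [jsonLoad], {
  detached: true,                                // run the child in its own process group
  stdio: ['ignore', 'ignore', 'ignore', 'ipc'],  // nothing shares the parent's stdio except the required IPC channel
  env: Object.assign({}, process.env),           // the linked issue suggests the inherited env matters under Electron
});
metaParser.unref();                              // don't keep the parent's event loop waiting on the child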

MONGO_URL for running multiple Meteor apps on one server

I have one Meteor application running on my Ubuntu server (Digital Ocean). I use Meteor Up (MUP) to deploy and keep the app running. Everything works fine.
However, when I try to deploy a second app on the same server, something goes wrong when connecting to MongoDB. I get a long, unreadable error message that starts with "Invoking deployment process: FAILED" and ends with:
Waiting for MongoDB to initialize. (5 minutes)
connected
myapp start/running, process 25053
Waiting for 15 seconds while app is booting up
Checking is app booted or not?
myapp stop/waiting
myapp start/running, process 25114
And the app refuses to run. I have tried a number of things to fix this and will edit this post if more info is requested, but I'm not sure what's relevant. Essentially, I don't understand the error message, so I need to know what the heck is going on.
EDIT:
I want to add that my app runs fine if I go into the project folder and use the "meteor" command. Everything runs as expected. It is only when I try to deploy it for long-running production use with MUP that I get this error.
EDIT:
I moved on to trying mupx instead of mup. This time I can't even get past the installation process; I get the following error message:
[Neal] x Installing MongoDB: FAILED
-----------------------------------STDERR-----------------------------------
Error response from daemon: no such id: mongodb
Error: failed to remove containers: [mongodb]
Error response from daemon: Cannot start container c2c538d34c15103d1d07bcc60b56a54bd3d23e50ae7a8e4f9f7831df0d77dc56: failed to create endpoint mongodb on network bridge: Error starting userland proxy: listen tcp 127.0.0.1:27017: bind: address already in use
But I don't understand why! mongod is clearly already running on port 27017, and a second application should just add a new database to that instance, correct? I don't know what I'm missing here or why MUP can't access MongoDB.
It's tricky to see what's going on here without your mup.json. Given what you said, it looks like your second app's deployment tries to boot MongoDB over the first one, which is locked; the MongoDB environment fails to boot, and that causes the failure. You can tackle this in a few different ways:
If your objective is to share your MongoDB, point MONGO_URL in your second mup.json at your first MongoDB instance. It generally listens on one of the 2701x ports. As it's a shared DB server, activity in one database could affect the other (see the mup.json sketch after this answer).
meteor-up handles deploying your app from a nice-to-test Meteor setup to a Node + MongoDB environment. Alternatively, you can spawn another mongod instance with:
mongod --port 2701X --dbpath /your/dbpath --fork --logpath /log/path on your DO server and then point MONGO_URL there.
Last but not least, mupx has Docker under the hood. Using mupx for your deployments should isolate the two apps from each other.
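As a rough illustration of the first option (the app name, port, and database name here are made up), the env block of the second app's mup.json could point at the already-running instance while using its own database, with setupMongo disabled so MUP doesn't try to install MongoDB again:

{
  "appName": "secondapp",
  "setupMongo": false,
  "env": {
    "ROOT_URL": "http://secondapp.example.com",
    "PORT": 3100,
    "MONGO_URL": "mongodb://127.0.0.1:27017/secondapp"
  }
}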
