How to run a child process in the MEAN stack - javascript

I have a MEAN application which uses Node.js, AngularJS and Express.
I call my server from the Angular controller as below.
Angular Controller.js
$http.post('/sample', $scope.sample).then(function (response) {
    // ...
});
and in Server.js as below
app.post('/sample', userController.postsample);
In that postsample handler I do my MongoDB operations.
I am stuck on the calculation part: I have a big calculation that takes a long time (assume 1 hour) to complete, and I trigger it from the client side via my Angular controller.
My problem is that the calculation should run separately, so that the UI and the operations of other pages are not interrupted.
I have seen child_process in Node.js, but I didn't understand how to trigger or exec it as a child process from the controller, and whether, while app.post is handling the request, it is still possible to access other pages.
EDIT:
I plan to spawn a child_process, but I have a follow-up problem.
Let's say the application has 3 users and 2 of them are using it at the same time.
Suppose the first person triggers the child_process (call it the 1st operation) and it is still running at the moment the second person needs to trigger the process (call it the 2nd operation), because he also needs to calculate.
Here are my questions:
What happens if another person starts the spawn command? Does it hang, get queued, or do both execute in parallel?
If the 2nd operation is queued, when will it start?
If the 2nd operation is queued, how can I know how many operations are in the queue at a given point in time?
Can anyone help me solve this?

Note: the question was edited - see updates below.
You have a few options.
The most straightforward way would be to spawn the child process from your Express controller and return the response to the client once the calculation is done. This will not block your server or the client (as long as you don't use a "Sync" function on the server or synchronous AJAX on the client), but with a calculation this long you may run into socket timeouts, and the connection would hang open for the entire duration.
Another option would be to use WebSocket or Socket.io for those requests. The client could post a message to the server saying it wants some computation started; the server could spawn the child process, go on doing other things, and when the child returns, send a message back to the client. The disadvantage is a new way of communicating, but at least there would be no problems with timeouts.
To see how to combine WebSocket or Socket.io with Express, see this answer that has examples for both WebSocket and Socket.io - it's very simple actually:
Differences between socket.io and websockets
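For illustration, here is a minimal sketch of that Socket.io variant, assuming Express and socket.io; the event names ('start-calculation', 'calculation-done', 'calculation-error') and the ./calculate.js worker script are made up for this example:
const app = require('express')();
const http = require('http').createServer(app);
const io = require('socket.io')(http);
const { execFile } = require('child_process');

io.on('connection', (socket) => {
  socket.on('start-calculation', (params) => {
    // Run the long calculation in a child process so the event loop stays free.
    execFile('node', ['./calculate.js', JSON.stringify(params)], (err, stdout) => {
      if (err) return socket.emit('calculation-error', err.message);
      socket.emit('calculation-done', stdout);
    });
  });
});

http.listen(3344);
The client just emits 'start-calculation' and listens for 'calculation-done'; no HTTP connection is held open while the child runs.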
Either way, to spawn a child process you can use:
spawn
exec
execFile
fork
from the core child_process module. Just make sure never to use any function with "Sync" in its name for this, because those would block your server from serving other requests for the entire time it waits for the child to finish (an hour in your case, but even a second could ruin the concurrency completely).
See the docs:
https://nodejs.org/api/child_process.html
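For example, a minimal fork-based sketch with IPC messaging could look like this (./worker.js and heavyCalculation are hypothetical names used only for illustration):
// parent.js - spawns the worker and talks to it over IPC
const { fork } = require('child_process');

const child = fork('./worker.js');
child.send({ input: 42 });          // hand the job to the child
child.on('message', (result) => {   // fires when the child sends back its result
  console.log('calculation result:', result);
});

// worker.js - runs the heavy computation in its own process
process.on('message', (job) => {
  const result = heavyCalculation(job.input); // hypothetical long-running function
  process.send(result);
  process.exit(0);
});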
Update
An update for the edited question. Consider this example shell script:
#!/bin/sh
sleep 5
date -Is
It waits for 5 seconds and prints the current time. Now consider this example Node app:
let child_process = require('child_process');
let app = require('express')();

app.get('/test', (req, res) => {
  child_process.execFile('./script.sh', (err, data) => {
    if (err) {
      return res.status(500).send('Error');
    }
    res.send(data);
  });
});

app.listen(3344, () => console.log('Listening on 3344'));
Or using ES2017 syntax:
let child_process = require('mz/child_process');
let app = require('express')();

app.get('/test', async (req, res) => {
  try {
    res.send((await child_process.execFile('./script.sh'))[0]);
  } catch (err) {
    res.status(500).send('Error');
  }
});

app.listen(3344, () => console.log('Listening on 3344'));
It runs that shell script for requests on GET /test and returns the result.
Now start a few requests at the same time:
curl localhost:3344/test & curl localhost:3344/test & curl localhost:3344/test &
and see what happens. If the returned times differ by 5 seconds and you get one response after another at 5-second intervals, then the operations are queued. If you get all responses at roughly the same time with more or less the same timestamp, then they all ran in parallel.
Sometimes it's best to make an experiment like this to see what happens.

Related

How does single-threaded Node.js handles requests concurrently?

I am currently digging deep into the Node.js platform. As we know, Node.js is single-threaded, and if it executes a blocking operation (for example fs.readFileSync), the thread has to wait for that operation to finish. I decided to run an experiment: I created a server that responds with a huge amount of data from a file on each request:
const { createServer } = require('http');
const fs = require('fs');

const server = createServer();

server.on('request', (req, res) => {
  const data = fs.readFileSync('./big.file');
  res.end(data);
});

server.listen(8000);
I also launched 5 terminals to make parallel requests to the server. I expected that while one request was being handled, the others would have to wait for the blocking operation of the first request to finish. However, the other 4 requests were answered concurrently. Why does this happen?
What you're likely seeing is one of two things: either some asynchronous part of the implementation inside res.end() actually sending your large amount of data, or all the data being sent very quickly and serially while the clients can't process it fast enough to display it serially. Because the clients are each in their own separate process, they "appear" to show it arriving concurrently just because they react too slowly to show the actual arrival sequence.
One would have to use a network sniffer to see which of these is actually occurring, run some different tests, put some logging inside the implementation of res.end(), or tap into some logging inside the client's TCP stack to determine the actual order of packet arrival among the different requests.
If you have one server and it has one request handler doing synchronous I/O, then you will not get multiple requests processed concurrently. If you believe that is happening, then you will have to document exactly how you measured it or concluded it (so we can help you clear up your misunderstanding), because that is not how node.js works when using blocking, synchronous I/O such as fs.readFileSync().
node.js runs your JS single-threaded, and when you use blocking, synchronous I/O, it blocks that one single thread of Javascript. That's why you should never use synchronous I/O in a server, except perhaps in startup code that only runs once during startup.
What is clear is that fs.readFileSync('./big.file') is synchronous, so your second request will not start processing until the first fs.readFileSync() is done. And calling it on the same file over and over again will be very fast because of OS disk caching.
But res.end(data) is non-blocking and asynchronous. res is a stream and you're giving the stream some data to process. It will send out as much as it can over the socket, but if it gets flow-controlled by TCP, it will pause until there's more room to send on the socket. How much that happens depends on all sorts of things about your computer, its configuration and the network link to the client.
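As an illustrative sketch only (not the actual internals of res.end()), this is what that flow control looks like when writing to a response stream by hand:
// res is an http.ServerResponse, which is a writable stream.
function sendLargeBuffer(res, data) {
  const ok = res.write(data); // returns false once the internal buffer is full
  if (!ok) {
    // TCP flow control kicked in; finish only after the socket drains
    res.once('drain', () => res.end());
  } else {
    res.end();
  }
}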
So, what could be happening is this sequence of events:
First request arrives and does fs.readFileSync() and calls res.end(data). That starts sending data to the client, but returns before it is done because of TCP flow control. This sends node.js back to its event loop.
Second request arrives and does fs.readFileSync() and calls res.end(data). That starts sending data to the client, but returns before it is done because of TCP flow control. This sends node.js back to its event loop.
At this point, the event loop might start processing the third or fourth request, or it might service some more events (from inside the implementation of res.end(), or from the writeStream of the first request) to keep sending more data. If it services those events, it could give the appearance, from the client's point of view, of true concurrency between the different requests.
Also, the client could be causing it to appear sequenced. Each client is reading a different buffered socket, and if they are all in different terminals, they are multi-tasked. So if there is more data on each client's socket than it can read and display immediately (which is probably the case), each client will read some, display some, read some more, display some more, and so on. If the delay between sending each client's response on your server is smaller than the delay in reading and displaying on the client, then the clients (each in its own separate process) are able to run concurrently.
When you are using asynchronous I/O such as fs.readFile(), properly written node.js Javascript code can have many requests "in flight" at the same time. They don't actually run concurrently at exactly the same instant; rather, one can run, do some work, launch an asynchronous operation, then give way to let another request run. With properly written asynchronous I/O there can be an appearance, from the outside world, of concurrent processing, even though it's more akin to sharing the single thread whenever a request handler is waiting for an asynchronous I/O request to finish. But the server code you show is not this cooperative, asynchronous I/O.
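For comparison, here is a minimal sketch of the same server rewritten with the asynchronous fs.readFile(), which keeps the single thread free while the file is being read:
const { createServer } = require('http');
const fs = require('fs');

const server = createServer((req, res) => {
  // Non-blocking: other requests can be serviced while this read is pending.
  fs.readFile('./big.file', (err, data) => {
    if (err) {
      res.statusCode = 500;
      return res.end('Error reading file');
    }
    res.end(data);
  });
});

server.listen(8000);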
Maybe this is not directly related to your question, but I think it's useful:
You can use a stream instead of reading the full file into memory, for example:
const { createServer } = require('http');
const fs = require('fs');

const server = createServer();

server.on('request', (req, res) => {
  const readStream = fs.createReadStream('./big.file'); // create the readable stream
  readStream.pipe(res); // pipe the readable stream into the res writable stream
});

server.listen(8000);
The point of doing this is:
It looks nicer.
You don't store the full file in RAM.
It works better because it is non-blocking, and the res object is already a stream, which means the data will be transferred in chunks.
OK, so streams = chunked.
Why read chunks from the file and send them in real time, instead of reading a really big file into memory and dividing it into chunks afterwards?
And why does this really matter on a real production server?
Because every time a request is received, your code loads that big file into RAM. On top of that, this is concurrent, so you can expect to serve multiple files at the same time. Let's do the most advanced math my poor education allows:
1 request for a 1 GB file = 1 GB in RAM
2 requests for a 1 GB file = 2 GB in RAM
etc.
That clearly doesn't scale nicely, right?
Streams let you decouple that data from the current state of the function (inside that scope), so in simple terms it's going to be (with the default chunk size of 64 KB for fs read streams):
1 request for a 1 GB file = 64 KB in RAM
2 requests for a 1 GB file = 128 KB in RAM
etc.
Also, the OS is already passing a stream to Node (fs), so it works with streams end to end.
Hope it helps :D.
PS: Never use sync (blocking) operations inside async (non-blocking) operations.

How to run simultaneous Node child processes

TL;DR: I have an endpoint on an Express server that runs some CPU-bound logic in a child_process. The problem is that if the server gets more than one request for that endpoint, it won't run both requests simultaneously: it queues them up and runs them one at a time. Is there a way to use Node child_process so that my server performs multiple child processes simultaneously?
Long version: The major downfall of Node is that it is single-threaded, and a logic-heavy (CPU-bound) request can stop the server dead in its tracks so that it can't take any more requests until that logic finishes running. I thought I could work around this using child_process, which works great in freeing up my server to take other requests. BUT it will only execute child processes one at a time, creating a queue that can get pretty backed up. I also have a Node cluster setup, so my server is split into 8 separate "virtual servers" (8-core machine); I guess I can technically run 8 of these child processes at once, but I want to handle more traffic than that. I'm looking for a solution that still lets me use Node and Express; please only suggest different technologies if you are absolutely sure this can't be done efficiently in my current environment. Thanks in advance for the help!
Endpoint:
app.get('/cpu-exec-file', function(req, res) {
  child_process.execFile('node', ['./blocking_tasks/mathCruncher.js'], {timeout: 30000}, function(err, stdout, stderr) {
    var data = JSON.parse(stdout);
    res.send(data);
  });
});
mathCruncher.js:
var obj = {};

function myLoop(i) {
  setTimeout(function () {
    obj[i] = Math.random() * 100;
    if (--i) {
      myLoop(i);
    } else {
      var string = JSON.stringify(obj);
      console.log(string); // goes to stdout
    }
  }, 1000);
}

myLoop(10);
Is there a way to use Node child_process so that my server will perform multiple child processes simultaneously?
Use a message queue and a back-end process.
I do exactly what you're wanting, using RabbitMQ. There are several other great messaging systems out there, like ZeroMQ, and even Redis with some pub/sub libraries on top of it.
The gist of it is to send a request to your queueing system and have another process pick up the message and do the work.
If you need a response from the worker, you can use bi-directional messaging with either a Request/Reply setup, or use status messages for really long-running things.
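For a rough idea, here is a minimal sketch of the worker side of a Request/Reply setup using the amqplib package (the 'math_jobs' queue name and crunchNumbers are made up; it assumes a RabbitMQ server on localhost):
// worker.js - picks up jobs one at a time and replies with the result
const amqp = require('amqplib');

async function startWorker() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('math_jobs');
  ch.prefetch(1); // this worker handles one job at a time
  ch.consume('math_jobs', (msg) => {
    const job = JSON.parse(msg.content.toString());
    const result = crunchNumbers(job); // hypothetical CPU-bound function
    // reply on the queue named in the message's replyTo property
    ch.sendToQueue(msg.properties.replyTo,
      Buffer.from(JSON.stringify(result)),
      { correlationId: msg.properties.correlationId });
    ch.ack(msg);
  });
}

startWorker().catch(console.error);
You can run as many copies of this worker as you like, on as many machines as you like; the broker distributes jobs among them, which is how you scale past the 8 cluster workers.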
If you're interested in the RabbitMQ side of things, I have a free email course on various patterns with RabbitMQ, including Request/Reply and status messages: http://derickbailey.com/email-courses/rabbitmq-patterns-for-applications/
And if you're interested in ground-up training on RMQ with Node, check out my training course at http://rabbitmq4devs.com

Node.js httpserver.listen method ambiguity

I have been working in Node.js and I am wondering what exactly the listen method does, in terms of the event loop. If I had a long-running request, does it mean the server will never listen, since it can only do one piece of work at a time?
var http = require('http');

function handleRequest(request, response) {
  response.end('Some Response at ' + request.url);
}

var server = http.createServer(handleRequest);

server.listen(8083, function() {
  console.log('Listening...');
});
Is server.listen listening to some event?
You can think of server.listen() as starting your web server so that it is actually listening for incoming requests at the TCP level. From the node.js http documentation for .listen():
Begin accepting connections on the specified port and hostname.
The callback passed to server.listen() is optional. It is only called once to indicate that the server has been successfully started and is now listening for incoming requests. It is not what is called on every new incoming request. The callback passed to .createServer() is what is called for every new incoming request.
Multiple incoming requests can be in process at the same time, though due to the single-threaded nature of node.js only one request is actually executing JS code at any instant.
But a long-running request is generally idle most of the time (e.g. waiting for database I/O, disk I/O or network I/O), so other requests can be processed during that idle time. This is the async nature of node.js, and it is why asynchronous I/O is so important: it allows other requests to run while node.js is just waiting for I/O.
Yes, it basically binds an event listener to that port, similar to how event listeners work in your own code. Going more in depth would involve sockets, etc.
https://nodejs.org/api/net.html#net_server_listen_port_host_backlog_callback
The other answers are essentially correct, but I wanted to add more detail.
When you call createServer, the handler you pass in is what gets called on every incoming HTTP connection. But that merely sets it up: it does not actually start the server or start listening for those connections. That doesn't happen until you call listen.
The (optional) callback for listen is just what gets called when the server has successfully started and is now listening for connections. Most of the time it's simply used to log to the console that the server has started. You could also use it to record the server start time for uptime monitoring. That callback is NOT invoked for every HTTP request: only once at server startup.
You don't even have to supply the callback for listen; it works fine without it. Here are some common variations (note that it's a good practice to let the port be specified by an environment variable, usually PORT; if that environment variable isn't set, there is a default):
// all in one line, no startup message
var server = http.createServer(handler).listen(process.env.PORT || 8083);

// two lines, no startup message
var server = http.createServer(handler); // server NOT started
server.listen(process.env.PORT || 8083); // server started, no confirmation

// most typical variation
var server = http.createServer(handler);
server.listen(process.env.PORT || 8083, function() {
  // server started, startup confirmed - note that this only gets called once
  console.log('server started at ' + Date.now());
});

Meteor - How to Run Multiple Server Processes Simultaneously?

My Meteor app needs to run 13 separate server processes, each on a setInterval. Essentially, I am pinging 13 different external APIs for new data, and performing calculations on the response and storing the results in Mongo. Each process looks something like this:
Meteor.setInterval(function () {
  try {
    var response = Meteor.http.call("GET", <url>, {
      params: <params>,
      timeout: <timeout>
    });
  }
  catch (err) {
    console.log(err);
  }
  if (response.statusCode === 200) {
    // handle the response
    ...
  }
}, 10000);
Unfortunately, Meteor chokes up after only three of these interval functions are turned on and running side by side. I start getting socket hang-up errors and "JS Allocation Failed" errors in the console. I presume this has something to do with Node's single-threading. Does anybody know the solution for this? I've looked long and hard... I'm really wondering if I have to split the back-end from 1 Meteor app with 13 processes (which doesn't seem to run) into 13 Meteor (or Node.js) apps, each with 1 process. Thanks!
Try https://atmospherejs.com/vsivsi/job-collection
Benefits:
Jobs can be added to a queue, and you have granular control over when they succeed or fail; failed jobs can easily be re-queued (see the sketch after this list).
It automatically clusters across all of your Meteor processes that are tied to the same collection.
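A rough usage sketch, based on the package's documented API (the 'pollApi' job type and the data shape are made up here, so treat the details as an approximation):
// on the server
const myJobs = new JobCollection('myJobQueue');
myJobs.startJobServer();

// producer: queue one job per external API to poll
new Job(myJobs, 'pollApi', { url: 'https://example.com/api' }).save();

// worker: process jobs of that type
myJobs.processJobs('pollApi', function (job, callback) {
  // ...call the API using job.data.url, store results in Mongo...
  job.done();   // or job.fail(err), so it can be re-queued
  callback();   // signal readiness for the next job
});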
Status update: a large part of the problem has to do with Node being single-threaded. I solved the CPU limitation by splitting this monolithic Meteor app into 13 microservice Meteor apps, all connected to the same MongoDB replica set.
This way, all the cores of the CPU are utilized, rather than Meteor trying to handle all requests and processes on just one.

Node/Express pending request

I'm a bit new on the Node.js front and so far it's awesome. I've encountered a small problem while running Node (with Express) locally: every request after the 10th one hangs and is marked as Pending in the Chrome Inspect Network panel.
As for modules, I'm using less-middleware, express, jade and MySQL, and I only do one SQL query (using mysql.createPool). Why is the request still Pending, and how can I fix it?
Since I'm new to Node I'm not sure if I've tried everything, so any help would be appreciated!
It sounds like you're not releasing the MySQL connections you're getting back from the pool. If you don't, the pool runs out of free connections and starts waiting for one to become available (and until then, stalls the request). The pool's default connectionLimit is 10, which is why it's exactly the 11th request that hangs.
So your code should look similar to this:
var pool = mysql.createPool(...);
...
// in your request handler:
pool.getConnection(function(err, connection) {
  if (err) ...handle error...;
  connection.query('SELECT ...', function(err, results) {
    // release the connection back to the pool
    connection.release();
    // handle results
    ...
    // send back a response
    res.send(...);
  });
});
