I have a Node.js daemon application that runs on my Debian home server 24/7.
I would like it to process triggers generated by Motion, a program installed on the same machine that monitors the video signal from cameras. Motion can execute a command on certain events, for example when motion is detected or the camera connection is lost.
I could write a script that processes these events and records them in a database, and my daemon could continuously poll that database. But that would be highly inefficient, right?
What would be the optimal way to process external triggers in Node.js applications?
Have a look at dnode. It allows you to do exactly what you are looking for.
In your daemon you will have something like this.
var dnode = require('dnode');

var server = dnode({
    transform : function (eventObject, cb) {
        // handle the event
        cb(callbackDataHere);
    }
});
server.listen(5004);
You will then need to create the command that Motion will call
var dnode = require('dnode');

var d = dnode.connect(5004);
d.on('remote', function (remote) {
    var eventDataToSend = {};
    remote.transform(eventDataToSend, function (s) {
        // Do stuff with arguments sent back from the callback on the server
    });
});
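For completeness, a rough sketch of how the two pieces could be wired together. The on_event_start option and the %t conversion specifier are assumptions about your motion.conf (check your Motion version's documentation for the exact names); the d.end() call is there so the short-lived client process exits once the daemon has acknowledged the event.
// Hypothetical motion.conf entry (option name and specifier may differ per Motion version):
//   on_event_start node /path/to/notify.js motion_detected %t
var dnode = require('dnode');

var d = dnode.connect(5004);
d.on('remote', function (remote) {
    // process.argv[0] is "node" and [1] is the script path,
    // so the values supplied by Motion start at index 2
    var eventDataToSend = {
        event  : process.argv[2],  // e.g. "motion_detected"
        camera : process.argv[3]   // %t - camera / thread number
    };
    remote.transform(eventDataToSend, function (s) {
        console.log('daemon acknowledged:', s);
        d.end(); // close the connection so this short-lived process can exit
    });
});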
The code on the server side sends a message immediately after the connection is opened (it sends an initial configuration/greeting to the client).
And the following code is on client-side:
var sock = new WebSocket(url);
sock.addEventListener('error', processError);
sock.addEventListener('close', finish);
sock.addEventListener('message', processMessage);
I worry about losing this first configuration/greeting message from the server. Theoretically, nothing prevents it from arriving before the message event handler is set.
On the other hand, in practice it has never happened to me. And AFAIK the JavaScript WebSocket API has no countermeasures against this theoretical issue: the WebSocket constructor neither allows a message event handler to be set, nor allows the WebSocket to be created in a suspended state.
So:
Either I am missing something, and losing a message with the above code is impossible even in theory.
Or it is a bug in the design of the JavaScript WebSocket API.
Or everyone is simply happy because message loss is practically impossible.
Or such behavior (the server sending a message right on connection) is for some reason considered bad practice, so no one bothers to make it theoretically correct.
?
P.S.: Do such simple-but-theoretical questions fit better on Stack Overflow or on Programmers Stack Exchange?
Don't worry.
Your code is running within a single threaded event loop.
The line var sock = new WebSocket(url); doesn't perform the actual WebSocket connection synchronously. The spec says the connection must be established only after the WebSocket object has been returned, in parallel with the thread handling the event loop your code is running on:
Return a new WebSocket object, but continue these steps in parallel.
That alone wouldn't be sufficient, but all subsequent WebSocket events for that socket are scheduled inside the same single-threaded event loop that is running your code. Here's what the spec says about receiving a message:
When a WebSocket message has been received with type type and data data, the user agent must queue a task to follow these steps
That task is queued on the same event loop. That means that the task to process the message cannot be run until the task where you created your WebSocket has run to completion. So your code will finish running before the event loop will process any connection related messages.
Even if you're running your code in a browser that uses many threads, the specific code will run on a single threaded event loop and each event loop will be independent.
Different event loops can and do communicate by pushing tasks into each other's task-queues. But these tasks will be executed within the single-threaded event-loop that received the task, keeping your code thread-safe.
The task "handle this event" will be handled by the single threaded event loop finding the appropriate event handler and calling its callback... but this will only happen once the task is already being handled.
To be clearer:
I'm not claiming that each event loop actually handles the I/O itself, but the I/O scheduler will send your code events, and these events will run sequentially within a single thread (sort of: there is priority management that uses different "task queues").
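In other words, the code in the question is already safe, as long as nothing between the constructor and the handler assignments yields control. A minimal sketch of that guarantee, using the question's processError/finish/processMessage handlers and url, here assigned as properties rather than via addEventListener (see the note below):
// Handlers are assigned in the same task that constructed the socket,
// so no task queued by the browser (open, message, ...) can run in between.
var sock = new WebSocket(url);   // the actual connection continues "in parallel"
sock.onerror   = processError;
sock.onclose   = finish;
sock.onmessage = processMessage;
// Only when this task has run to completion can the event loop start
// delivering the queued WebSocket tasks, including the first 'message'.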
EDIT: client code concerns
It should be noted that the WebSocket API wasn't designed around the DOM's addEventListener function.
Instead, the WebSocket API follows the older HTML4-style paradigm, where event callbacks are set as object properties (rather than added to an EventListener collection), i.e.:
// altered DOM API:
sock.addEventListener('message', processMessage);
// original WebSocket API:
sock.onmessage = processMessage;
Both APIs work correctly on all the browsers I tested (including safe delivery of the first message). The difference in approaches is probably handled by the HTML4 compatibility layer.
However the specification regarding event scheduling is different, so the use of addEventListener should probably be avoided.
EDIT 2: Testing the Theory
Regarding Bronze Man's answer concerning failed message responses...
I couldn't reproduce the claimed issue, even after writing a test with a small Ruby application and a small JavaScript client.
The Ruby application starts up a Websocket echo server with a welcome message (I'm using plezi.io).
The Javascript client contains a busy-wait loop that causes the Javascript thread to hang (block) for the specified amount of time (2 seconds in my tests).
The onmessage callback is set only after the block is released (after 2 seconds) - so the welcome message from the server will arrive at the browser before the callback is defined.
This allows us to test if the welcome message is lost on any specific browser (which would be a bug in the browser).
The test is reliable since the server is a known quantity and will send the message to the socket as soon as the upgrade is complete (I wrote the Iodine server backend in C as well as the plezi.io framework and I chose them because of my deep knowledge of their internal behavior).
The Ruby application:
# run from terminal using `irb`, after `gem install plezi`
require 'plezi'

class WebsocketEcho
  def index
    "Use Websockets"
  end

  def on_message data
    # simple echo
    write data
  end

  def on_open
    # write a welcome message
    # will this message be lost?
    write "Welcome to the WebSocket echo server."
    puts "New Websocket connection opened, welcome message was sent."
  end
end

# adds mixins to the class and creates a route
Plezi.route("/", WebsocketEcho)

# running the server from the terminal
Iodine.threads = 1
Iodine::Rack.app = Plezi.app
Iodine.start
The Javascript Client:
function Client(milli) {
    this.ws = new WebSocket("ws" + window.document.location.href.slice(4, -1));
    this.ws.client = this;
    this.onopen = function (e) { console.log("Websocket opened", e); };
    this.ws.onopen = function (e) { e.target.client.onopen(e); };
    this.onclose = function (e) { console.log("Websocket closed", e); /* reconnect? */ };
    this.ws.onclose = function (e) { e.target.client.onclose(e); };
    if (milli) { // busy wait, blocking the thread.
        var start = new Date();
        var now = null;
        do {
            now = new Date();
        } while (now - start < milli);
    }
    this.onmessage = function (e) { console.log(e.data); };
    // DOM API alternative for testing:
    // this.ws.addEventListener('message', function (e) { e.target.client.onmessage(e); });
    // WebSocket API for testing:
    this.ws.onmessage = function (e) { e.target.client.onmessage(e); };
}

// a 2 second window
cl = new Client(2000);
Results on my machine (MacOS):
Safari 11.01 initiates the WebSocket connection only after creation of the new Client is complete (after the thread is done processing the code, as indicated by the Ruby application's delayed output). The message obviously arrived once the connection was made.
Chrome 62.0 initiates the Websocket connection immediately. The message arrives once the 2 second window ends. Message wasn't lost even though it arrived before the onmessage handler was set.
FireFox 56.0 behaves the same as Chrome, initiating the Websocket connection immediately. The message arrives once the 2 second window ends. Message wasn't lost.
If someone could test on Windows and Linux, that would be great... but I don't think the browsers will have implementation issues with the event scheduling. I believe the specifications can be trusted.
Your theory is correct, and the problem is real.
I actually ran into this situation with Chrome 62 on Ubuntu 14.04, when my Chrome extension's background page opened a WebSocket connection to a server on 127.0.0.1. My server sends several messages to the app right away, and the first few messages were sometimes lost and sometimes not. The bug does not happen with Chrome 62 on my Mac. I think this is what a data race looks like: it may never happen, but it can happen in theory, so we need to prevent it from happening.
Here is what my client code looks like:
var ws = new WebSocket(url);
var lastConnectTime = new Date();
ws.onerror = processError;
ws.onclose = finish;
ws.onmessage = processMessage;
Solution
The solution is that the server must wait for the client's first message (even if it does not carry any information) and only then send messages to the client.
Here is my solution in the client JS code:
var ws = new WebSocket(url);
var lastConnectTime = new Date();
ws.onerror = processError;
ws.onclose = finish;
ws.onmessage = processMessage;
ws.onopen = function(){
ws.send("{}");
};
Here is my solution in the Go server:
func (s *GoServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    fmt.Println("WebsocketServeHttp recv connect", r.RemoteAddr)
    conn, err := websocket.Upgrade(w, r, nil, 10240, 10240)
    if err != nil {
        panic(err)
    }
    // wait for the client's first message before sending anything
    _, _, err = conn.ReadMessage()
    if err != nil {
        panic(err)
    }
    // ... (you can send messages to the client now)
}
Confirming that the problem does exist (as a rare but real situation) on Chrome 62 and 63 on Ubuntu: occasional loss of the first message from the server. I confirmed with tcpdump that there is indeed a handshake packet and then the packet for the first message. In the client, the first message even shows up in the Networking tab as the first frame on the websocket. Then the onopen callback is called, but onmessage is NOT.
I agree that it doesn't seem possible; looking at WebKit's implementation of WebSocket it doesn't seem possible either, and I've never seen it on Chrome for Mac or in Firefox, so my only guess is that Chrome on Ubuntu introduced a race condition with some optimization.
You can definitely lose messages! The accepted answer is misleading. All that has to happen is that you do an operation that relinquishes the thread of control between the open event and configuring a message listener.
Is that likely to happen? Who knows, it depends on your application. Here's the situation that led me to waste too much time debugging this (the api sucks!) using the ws library on the server:
On the server:
async handleOpen(socket, request) {
    const session = await getSession(cookie.parse(request.headers.cookie).id);
    const user = new User(session.user.id, socket);
    user.socket.addEventListener('message', this.handleMessage.bind(this, user));
}
See that await? That relinquishes control and allows events to be lost. For what it's worth, the session was stored in memcached, so it was not immediately available.
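A minimal sketch of one way to avoid this with the same ws-style server code (getSession, User and handleMessage are the same helpers as above): attach a buffering listener before the first await, then swap in the real handler and replay anything that arrived in the meantime.
async handleOpen(socket, request) {
    // Attach a listener synchronously, before anything yields the thread,
    // so messages that arrive during the await are buffered instead of lost.
    const pending = [];
    const buffer = (event) => pending.push(event);
    socket.addEventListener('message', buffer);

    const session = await getSession(cookie.parse(request.headers.cookie).id);
    const user = new User(session.user.id, socket);

    // Swap in the real handler and replay anything that was buffered.
    socket.removeEventListener('message', buffer);
    socket.addEventListener('message', this.handleMessage.bind(this, user));
    for (const event of pending) {
        this.handleMessage(user, event);
    }
}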
I have done a little research into the patterns supported by ZeroMQ. I would like to describe a problem I found with the PUB/SUB pattern, although I probably also ran into it with the PUSH/PULL pattern in my recent project. I use the Node.js ZeroMQ implementation.
I prepared two examples (server.js & client.js). I noticed that the first message from server.js is lost every time I restart the server (a message is sent every second); client.js doesn't get the first message. It is probably caused by too short a delay before sending messages: when I start sending messages only after some time (e.g. 1 second), everything works fine. I think ZeroMQ needs some time to initialize the connection between publisher and subscriber.
I would like to know when the producer (server) is ready to send messages to subscribed clients. How do I get this information?
I don't understand why client.js, which is connected and subscribed to messages, doesn't get them just because the server is not yet ready to handle subscriptions after a restart.
Maybe it works like this by design.
server.js:
var zmq = require('zmq');
console.log('server zmq: ' + zmq.version);

var publisher = zmq.socket('pub');
publisher.bindSync("tcp://*:5555");

var i = 0;
var msg = "get_status OK ";

function sendMsg () {
    console.log(msg + i);
    publisher.send(msg + i);
    i++;
    setTimeout(sendMsg, 1000);
}
sendMsg();

process.on('SIGINT', function() {
    publisher.close();
    process.exit();
});
client.js:
var zmq = require('zmq');
console.log('client zmq: ' + zmq.version);

var subscriber = zmq.socket('sub');
subscriber.subscribe("get_status");

subscriber.on('message', function(data) {
    console.log(data.toString());
});

subscriber.connect("tcp://127.0.0.1:5555");

process.on('SIGINT', function() {
    subscriber.close();
    process.exit();
});
The node zmq lib repo states the supported monitoring events. Subscribing to these will allow you to monitor your connection, in this case via the accept event. However, don't forget that you also have to call the monitor() function on the socket to activate monitoring.
You should end up with something like:
var publisher = zmq.socket('pub');

publisher.on('accept', function(fd, ep) {
    sendMsg();
});

publisher.monitor(100, 0);
publisher.bindSync("tcp://*:5555");
I am currently forced to create a new RabbitMQ connection every time a user loads a page on my website.
This creates a new TCP connection every time. I'm trying to reduce the number of TCP connections I make to Rabbit with the Node.js AMQP plugin. Here is what I have:
var ex_conn = get_connection(uri); //http:rabbitm.com

if (ex_conn == false) {
    var tempConn = amqp.createConnection({
        url: uri
    });
    connections.push({
        host: uri,
        obj: tempConn
    });
}
else {
    var tempConn = ex_conn.obj;
}
The issue I'm running into is that if I try to do:
tempConn.on('ready', function() {
});
Then the ready function does not get triggered. I'm assuming that is because the ready callback has already fired and is not going to be re-triggered. What I'm looking to do is bind a new queue by doing:
tempConn.queue('', {});
Any thoughts on how to get around this issue are much appreciated.
Thanks.
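A minimal sketch of one way around this, assuming the node-amqp API used above (amqp.createConnection(), the 'ready' event, and connection.queue()): cache each connection per URI together with a flag recording whether 'ready' has already fired, so later page loads can reuse it immediately instead of waiting for an event that will not fire again.
var amqp = require('amqp');

// uri -> { obj: connection, ready: boolean }
var connections = {};

function withConnection(uri, callback) {
    var entry = connections[uri];
    if (!entry) {
        entry = connections[uri] = {
            obj: amqp.createConnection({ url: uri }),
            ready: false
        };
        entry.obj.on('ready', function () {
            entry.ready = true;
        });
    }
    if (entry.ready) {
        // 'ready' already fired for this cached connection, so call back directly
        callback(entry.obj);
    } else {
        entry.obj.once('ready', function () {
            callback(entry.obj);
        });
    }
}

// Usage on every page load: bind a queue without opening a new TCP connection
withConnection(uri, function (conn) {
    conn.queue('', {}, function (q) {
        // ... bind / subscribe here ...
    });
});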
From inside a Node.js process, how can I listen for events coming from bash?
For example
NodeJS side
obj.on("something", function (data) {
console.log(data);
});
Bash side
$ do-something 'Hello World'
Then the "Hello World" message should appear on the Node.js process's stdout.
How can I do this?
I guess it's related to signal events.
The problem with using signals is that you can't pass arguments and most of them are reserved for system use already (I think SIGUSR2 is really the only safe one for node since SIGUSR1 starts the debugger and those are the only two that are supposed to be for user-defined conditions).
Instead, the best way that I've found to do this is by using UNIX sockets; they're designed for inter process communication.
The easiest way to set up a UNIX socket in node is to create a standard net server with net.createServer() and then simply pass a file path to server.listen() to create the socket at the path you specified. Note: it's important that a file at that path doesn't already exist, otherwise you'll get an EADDRINUSE error.
Something like this:
var net = require('net');

var server = net.createServer(function(connection) {
    connection.on('data', function(data) {
        // data is a Buffer, so we'll .toString() it for this example
        console.log(data.toString());
    });
});

// This creates a UNIX socket in the current directory named "nodejs_bridge.sock"
server.listen('nodejs_bridge.sock');

// Make sure we close the server when the process exits so the file it created is removed
process.on('exit', function() {
    server.close();
});

// Call process.exit() explicitly on ctrl-c so that we actually get that event
process.on('SIGINT', function() {
    process.exit();
});

// Resume stdin so that we don't just exit immediately
process.stdin.resume();
Then, to actually send something to that socket in bash, you can pipe to nc like this:
echo "Hello World" | nc -U nodejs_bridge.sock
What about using FIFOs?
NodeJS code:
process.stdin.on('readable', function() {
    var chunk = process.stdin.read();
    if (chunk !== null) {
        process.stdout.write('data: ' + chunk);
    }
});
NodeJS startup (the 3>/tmp/... redirection is a trick to keep the FIFO open):
mkfifo /tmp/nodeJsProcess.fifo
node myProgram.js </tmp/nodeJsProcess.fifo 3>/tmp/nodeJsProcess.fifo
Bash linkage:
echo Hello >/tmp/nodeJsProcess.fifo
The signals described in the page that you've linked are used to send some specific "command" to processes. This is called "Inter Process Communication". You can see here a first definition of IPC.
You can instruct your node.js code to react to a specific signal, as in this example:
// Start reading from stdin so we don't exit.
process.stdin.resume();

process.on('SIGUSR1', function() {
    console.log('Got SIGUSR1. Here you can do something.');
});
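From bash, the signal itself is then sent with the standard kill utility, for example kill -USR1 <pid>, where <pid> is the process ID of your Node.js daemon.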
Please note that the signal is sent to the process, and not to a specific object in the code.
If you need to communicate in a more specific way to the node.js daemon you can listen on another port too, and use it to receive (and eventually send) control commands.
I am working on a node.js application that will connect to a UNIX socket (on a Linux machine) and facilitate communication between a web page and that socket. So far, I have been able to create socket and communicate back and forth with this code in my main app.js:
var net = require('net');
var fs = require('fs');

var socketPath = '/tmp/mysocket';

fs.stat(socketPath, function(err) {
    if (!err) fs.unlinkSync(socketPath);
    var unixServer = net.createServer(function(localSerialConnection) {
        localSerialConnection.on('data', function(data) {
            // data is a buffer from the socket
        });
        // write to socket with localSerialConnection.write()
    });
    unixServer.listen(socketPath);
});
This code causes node.js to create a UNIX socket at /tmp/mysocket and I am getting good communication by testing with nc -U /tmp/mysocket on the command line. However...
I want to establish a connection to an already existing UNIX socket from my node.js application. With my current code, if I create a socket from the command line (nc -Ul /tmp/mysocket) and then run my node.js application, there is no communication between the socket and my application (the 'connect' event is not fired by the node.js server object).
Any tips on how to go about accomplishing this? My experiments with node.js function net.createSocket instead of net.createServer have so far failed and I'm not sure if that's even the right track.
The method you're looking for is net.createConnection(path):
var net = require('net');

var client = net.createConnection("/tmp/mysocket");

client.on("connect", function() {
    // ... do something when you connect ...
});

client.on("data", function(data) {
    // ... do stuff with the data ...
});
I was just trying to get this to work with Linux's abstract sockets and found them to be incompatible with node's net library. Instead, the following code can be used with the abstract-socket library:
const abstract_socket = require('abstract-socket');

let client = abstract_socket.connect('\0my_abstract_socket');

client.on("connect", function() {
    // ... do something when you connect ...
});

client.on("data", function(data) {
    // ... do stuff with the data ...
});
You can also connect to a socket like this:
http://unix:/path/to/my.sock:
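That unix: URL form is understood by some HTTP clients (for example the request module); with Node's built-in http module the equivalent is the socketPath option. A minimal sketch, assuming an HTTP server is actually listening on that socket and that /route is a placeholder path:
var http = require('http');

var req = http.request({
    socketPath: '/path/to/my.sock', // connect over the UNIX socket instead of host/port
    path: '/route',                 // hypothetical HTTP path served over that socket
    method: 'GET'
}, function (res) {
    res.on('data', function (chunk) {
        console.log(chunk.toString());
    });
});
req.end();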