I am trying to set an API rate limit on my app using express-rate-limit. It works when requests come from the same IP address: I get the error message once a client reaches the maximum of 5 requests. However, it fails when the requests come from different IP addresses/computers. Any idea how I can fix this? I tried hardcoding 127.0.0.1 as the key, so the limit would apply regardless of IP address, but that failed as well.
Below is my code:
// Rate Limit
var RateLimit = require('express-rate-limit');
app.enable('trust proxy');
var limiter = new RateLimit({
    windowMs: 365 * 24 * 60 * 60 * 1000, // 1 year
    max: 5, // limit each key to 5 requests per windowMs
    delayMs: 365 * 24 * 60 * 60 * 1000, // 1 year
    message: "Sorry, the maximum limit of 50 letters sent has been reached. Thank you for participating!",
    keyGenerator: function (req) {
        req.ip = "127.0.0.1"; // fixed key so the limit applies regardless of client IP
        // req.ip = "ip address";
        return req.ip;
    }
});
app.use('/api/letter', limiter);
The memory store implementation used by express-rate-limit uses setTimeout() to clear the store after windowMs milliseconds.
According to the Node.js documentation for setTimeout(),
When delay is larger than 2147483647 or less than 1, the delay will be set to 1.
In your case the delay, namely 31536000000 milliseconds, is larger than that limit, so it gets clamped to 1 and the store never holds any data for more than 1 ms.
To solve this, you will probably have to implement your own store (see the store option), or look for an alternative rate limiter that doesn't have this limitation (it seems to me that with such long expiry times, you'll need some sort of persistent storage anyway).
I think it's perfectly reasonable to call this "rate limiting". Just because the time period is long (a year) doesn't mean it's not a limit per time period.
https://www.ratelim.it/documentation/once_and_only_once takes this even further and lets you do N times per "infinite", which is super useful.
You should be able to use this service to do 5 per year. (Disclosure: I run ratelim.it.)
The rate-limiter-flexible package backed by Mongo can enforce a rate limit over 1 year:
const { RateLimiterMongo } = require('rate-limiter-flexible');
const { MongoClient } = require('mongodb');

const mongoOpts = {
    useNewUrlParser: true,
    reconnectTries: Number.MAX_VALUE, // Never stop trying to reconnect
    reconnectInterval: 100, // Reconnect every 100ms
};

const mongoConn = MongoClient.connect(
    'mongodb://localhost:27017',
    mongoOpts
);

const opts = {
    mongo: mongoConn,
    points: 5, // Number of points
    duration: 365 * 24 * 60 * 60, // Per 1 year (in seconds)
};

const rateLimiter = new RateLimiterMongo(opts);

app.use('/api/letter', (req, res, next) => {
    rateLimiter.consume(req.ip)
        .then(() => {
            next();
        })
        .catch((rejRes) => {
            res.status(429).send('Too Many Requests');
        });
});
It is also recommended to set up insuranceLimiter and block strategy. Read more here
The problem is that the rate limit is not enforced for the amount of time I specify: instead of lasting 35 minutes, it lasts only about 20 seconds. Also, if I keep making requests, the limit stays enforced, so repeated requests seem to refresh the time limit, which I think is also unexpected.
Apart from these issues, it works as expected, limiting the number of requests I specify in "max" as long as I make them quickly enough. I have tested locally and on a Heroku server.
Here is the relevant code:
app.js
var express = require('express');
var dbRouter = require('./routes/db');
var limiter = require('express-rate-limit');

var app = express();
app.set('trust proxy', 1);

// This is a global limiter, not the one I'm having issues with.
// I've tried removing it, but the issue remained.
app.use(limiter({
    windowMs: 10000,
    max: 9
}));

app.use('/db', dbRouter);

module.exports = app;
db.js
var express = require('express');
var router = express.Router();
var level_controller = require('../controllers/levelController');
var limiter = require('express-rate-limit');

var level_upload_limiter = limiter({
    windowMS: 35 * 60 * 1000,
    max: 1,
    message: 'Too many level uploads. Please try again in about 30 minutes.'
});

router.post('/level/create', level_upload_limiter, level_controller.level_create_post);

module.exports = router;
levelController.js
exports.level_create_post = [
    (req, res, next) => {
        // ...
    }
];
It's the typo you made in your settings: windowMS should be windowMs. With the misspelled key, the option is ignored and the limiter falls back to its default window.
I have the following options set when connecting to Redis:
var Redis = require('ioredis');

var client = new Redis({
    port: 63xx, // Redis port
    host: REDISHOST, // Redis host
    family: 4, // 4 (IPv4) or 6 (IPv6)
    db: 0,
    lazyConnect: true,
    // The milliseconds before a timeout occurs during the initial connection to the Redis server.
    connectTimeout: 3000,
    retryStrategy: function (times) {
        if (times > 3) {
            logger.error("redisRetryError", 'Redis reconnect exhausted after 3 retries.');
            return null;
        }
        return 200;
    }
});
Later on, I export this client throughout the project for Redis queries.
The issue: when Request 1 comes in and there is some issue with Redis, the client tries to auto-connect 4 times (+1 for the initial try), then throws an error, which is handled. At this point the times variable (used in retryStrategy()) has the value 4.
When Request 2 comes in and we see Redis is disconnected, we reconnect using the client.connect() method:
static async getData(key) {
    try {
        // if the connection has ended, try to reconnect
        if (client.status === 'end') {
            await logger.warning(`reconnectingRedis`, 'Redis is not connected. Trying to reconnect to Redis!');
            await client.connect();
        }
        let output = await client.get(key);
        return JSON.parse(output);
    } catch (error) {
        ApiError.throw(error, errorCode.REDIS_GET_ERROR_CODE);
    }
}
This time Redis tries to reconnect, but it doesn't reset the times variable used in retryStrategy(), so the variable is now 5.
And if this attempt fails too, retryStrategy() just gives up immediately because times > 3.
So effectively Request 1 gets 4 tries and Request 2 gets only 1.
How can I fix this, so that Request 2 also gets 4 tries?
To fix this issue I changed the retryStrategy function used when creating the Redis client:
retryStrategy: function (times) {
    if (times % 4 === 0) {
        logger.error("redisRetryError", 'Redis reconnect exhausted after 3 retries.');
        return null;
    }
    return 200;
}
Notice I take times mod 4, which always yields a value in the range 0-3.
So for Request 2, when times is 5, 5 mod 4 gives 1 and the retry proceeds; next time times is 6, 6 mod 4 is 2 and it retries again, and so on, until times reaches 8, where mod 4 gives 0 and the retries stop.
This fixed the issue for me.
I create an empty array, push the value of the name key from a separate object into it, serve it on localhost by creating a server in Node.js, then loop back to pick up new information from another object and push it to the array. But when the loop reaches the server-hosting step the second time, I get an error: "throw error, address already in use at 127.0.0.1:3000".
I don't know what alternatives to try to deliver continuously updated information to a running server.
var p = 0, repeat = 4;
var indices = [];

function f() {
    // example of array information which is fed into the loop
    var mc = [{ name: "Henry", output: -30 }, { name: "Kevin", output: -15 }, { name: "Jeremy", output: -40 }, { name: "Steven", output: 43 }];
    p++;
    if (p < repeat) {
        setTimeout(f, 100);
    }
    var open = mc[0].name;
    indices.push(open);

    // this is where the error occurs on the second loop: the server is already
    // running from the first loop, so the address is in use
    var toserver = JSON.stringify(indices);
    const http = require('http');
    const hostname = '127.0.0.1';
    const port = 3000;
    const server = http.createServer(function (req, res) {
        res.statusCode = 200;
        res.setHeader('Content-Type', 'text/plain');
        res.end(toserver);
    });
    server.listen(port, hostname, function () {
        console.log('Server running at http://' + hostname + ':' + port + '/');
    });
}

f()
I get "throw error, address already in use at 127.0.0.1:3000". I hope someone can show me how to continually update this server. The information in the array updates continuously and can be seen with console.log; it's getting the updates to the server, so I can view them in the browser, that is the issue. Thanks for your kind consideration.
On every cycle you try to create a new server, so the second run fails and throws because you are already listening on that port.
Start the server just once, and use websockets or the other techniques Jaromanda suggested.
I have pretty high traffic peaks, so I'd like to override the DynamoDB retry limit and retry policy. Somehow I'm not able to find the right config property to override them.
My code so far:
var aws = require( 'aws-sdk');
var table = new aws.DynamoDB({params: {TableName: 'MyTable'}});
aws.config.update({accessKeyId: process.env.AWS_ACCESS_KEY_ID, secretAccessKey: process.env.AWS_SECRET_KEY});
aws.config.region = 'eu-central-1';
I found the following Amazon variables and code snippets, but I'm not sure how to wire them up with the config:
retryLimit: 15,
retryDelays: function retryDelays() {
    var retryCount = this.numRetries();
    var delays = [];
    for (var i = 0; i < retryCount; ++i) {
        if (i === 0) {
            delays.push(0);
        } else {
            delays.push(60 * 1000 * i); // Retry every minute instead
            // Amazon default: delays.push(50 * Math.pow(2, i - 1));
        }
    }
    return delays;
}
The config is pretty limited, and the only retry parameter you can set on it is maxRetries.
maxRetries (Integer) — the maximum amount of retries to attempt with a request. See AWS.DynamoDB.maxRetries for more information.
You should set the maxRetries to a value that is appropriate to your use case.
aws.config.maxRetries = 20;
The retryDelays private API internally uses the maxRetries config setting, so setting that parameter globally, as in the code above, should work. The retryLimit property is not used at all; forget about it.
The number of retries can be set through configuration, but it seems there is no elegant way to set the retry delay/backoff strategy.
The only way to manipulate those is to listen to the retry event and adjust the retry delay (and related behavior) in an event handler callback:
aws.events.on('retry', function (resp) {
    // Enable or disable retries completely.
    // Disabling is equivalent to setting maxRetries to 0.
    if (resp.error) resp.error.retryable = true;

    // Retry all requests with a 2s delay (if they are retryable)
    if (resp.error) resp.error.retryDelay = 2000;
});
Be aware that there is an exponential backoff strategy that runs internally, so the retryDelay is not literally 2s for subsequent retries. If you look at the internal service.js file you will see how the function looks:
retryDelays: function retryDelays() {
    var retryCount = this.numRetries();
    var delays = [];
    for (var i = 0; i < retryCount; ++i) {
        delays[i] = Math.pow(2, i) * 30;
    }
    return delays;
}
I don't think it's a good idea to modify internal APIs, but you could do it by modifying the prototype of the Service class:
aws.Service.prototype.retryDelays = function () { /* ... */ };
However, this will affect all services, and after looking in depth at this stuff, it's clear their API wasn't built to cover your use case elegantly through configuration.
The JavaScript AWS SDK does not allow the DynamoDB service to override retryDelayOptions, and thus does not allow a customBackoff to be defined. These configurations work for the rest of the services, but for some reason not for DynamoDB.
This page notes that :
Note: This works with all services except DynamoDB.
Therefore, if you want to define a customBackoff function, i.e. determine the retryDelay, it is not possible through configuration. The only way I have found is to overwrite the private method retryDelays of the DynamoDB object (aws-sdk-js/lib/services/dynamodb.js).
Here is an example where exponential backoff with jitter is implemented:
// base and cap (in ms) must be defined by the caller; example values shown
const base = 50;
const cap = 10000;

AWS.DynamoDB.prototype.retryDelays = (retryCount) => {
    // Exponential backoff with jitter
    let temp = Math.min(cap, base * Math.pow(2, retryCount));
    let sleep = Math.random() * temp + 1;
    return sleep;
};
Max retries (the retry limit) can be set through the maxRetries property of the DynamoDB configuration object:
let dynamodb = new AWS.DynamoDB({
    region: 'us-east-1',
    maxRetries: 30
});
See Also :
https://github.com/aws/aws-sdk-js/issues/402
https://github.com/aws/aws-sdk-js/issues/1171
https://github.com/aws/aws-sdk-js/issues/1100
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff
https://www.awsarchitectureblog.com/2015/03/backoff.html
I have more than 500,000 records in my database, and I want to loop over them and request some information from another server. I've successfully written all the functions except the delays between requests: if I send all the requests at once, the remote server goes down or can't answer each request. So I need to create a queue for the loop, but my code still sends all the requests very fast.
Here is my code:
var http = require('http'),
    mysql = require('mysql');
var querystring = require('querystring');
var fs = require('fs');
var url = require('url');

var client = mysql.createClient({
    user: 'root',
    password: ''
});

client.useDatabase('database');

client.query("SELECT * from datatable",
    function (err, results, fields) {
        if (err) throw err;
        for (var index in results) {
            username = results[index].username;
            setInterval(function () {
                requestinfo(username);
            }, 5000);
        }
    }
);

client.end();
Your problem lies in the for loop, since you set all the requests to go off every 5 seconds: after 5 seconds, all the requests fire more or less simultaneously, and because setInterval is used, they fire again every 5 seconds.
You can solve it in 2 ways.
The first choice is to set an interval that creates a new request every 5 seconds, advancing a numeric index; so instead of a for loop you do something like:
var index = 0;
var timer = setInterval(function () {
    if (index >= results.length) {
        clearInterval(timer); // stop once every record has been handled
        return;
    }
    requestinfo(results[index].username);
    index++;
}, 5000);
The second choice is to set all the requests up front with an increasing timeout. Note that each callback must capture its own username; otherwise every timeout sees only the last value of the loop variable:
var timeout = 0;
for (var index in results) {
    (function (username) {
        setTimeout(function () {
            requestinfo(username);
        }, timeout);
    })(results[index].username);
    timeout += 5000;
}
This will set timeouts for 0,5,10,15... etc seconds so every 5 seconds a new request is fired.
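A self-contained sketch of the increasing-timeout pattern with dummy data and a shortened delay (the names and the 10 ms spacing are illustrative only, standing in for the database rows and the 5-second gap):

```javascript
// Dummy stand-ins for the database rows and the remote call
var results = [{ username: 'a' }, { username: 'b' }, { username: 'c' }];
var fired = [];
function requestinfo(username) { fired.push(username); }

var timeout = 0;
results.forEach(function (row) {
    setTimeout(function () {
        requestinfo(row.username); // each callback closes over its own row
    }, timeout);
    timeout += 10; // 10 ms here instead of 5000 ms, just for the demo
});

setTimeout(function () {
    console.log(fired); // [ 'a', 'b', 'c' ], fired in order, evenly spaced
}, 100);
```

Using forEach sidesteps the shared-loop-variable pitfall entirely, since each iteration gets its own row binding.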