My code:
const limit = require('simple-rate-limiter')
const request = limit(require('request').defaults({
  gzip: true
})).to(15).per(10 * 1000) // 15 requests per 10 seconds
request(API_ENDPOINT, callback) // call API thousands of times
Libraries: simple-rate-limiter and the well-known request library.
I need to call a 3rd-party API thousands of times, but it only allows 15 requests every 10 seconds. The above code doesn't limit my requests correctly, so the server sends a 429 Too Many Requests HTTP status code.
I can send 15 requests at once, but then the program has to wait 10 seconds before it can send any more, or it gets a 429 response again.
I think this is because the connection to the server takes anywhere from hundreds of milliseconds to a few seconds (300 ms-2 s), so the time at which I send a request differs from the time at which the server receives it.
Responses from the server contain a Date header. Can that be used somehow to limit the requests correctly? Is there a library that makes this easy? And even with correct rate limiting in place, if you do get a 429 response, is there a simple way to retry the failed request too?
OK, so the approach is to check how much time passes between the start and the end of each call. Since 15 requests per 10 seconds works out to one request per ~667 milliseconds, I will assume it is safe to call the API every 667 milliseconds.
var startTime, endTime;

function timeDiff() {
  return endTime.getTime() - startTime.getTime();
}

function startRequest() {
  startTime = new Date();
  request(API_ENDPOINT, requestCallback);
}

function requestCallback(error, response, body) {
  // do what you please with the response body
  endTime = new Date();
  var diff = timeDiff();
  if (diff < 667) {
    // Too early to call the API again, need to wait out the rest of the interval.
    setTimeout(startRequest, 667 - diff);
  } else {
    // It is fine to start the next request now.
    setTimeout(startRequest, 0);
  }
}

startRequest();
This code keeps calling the API as soon as each call finishes, unless the call took less than 667 milliseconds, in which case it waits out the remainder of the interval.
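The question also asked about retrying after a 429. A minimal sketch of that, as a variant of requestCallback above; the Retry-After handling is an assumption (not every API sends that header), and the 10-second fallback is simply the length of the rate-limit window:

function requestCallbackWithRetry(error, response, body) {
  endTime = new Date();
  var diff = timeDiff();
  if (!error && response.statusCode === 429) {
    // Honor the server's Retry-After header (in seconds) if present,
    // otherwise wait out a full 10-second window before retrying.
    var retryAfterSec = Number(response.headers['retry-after']) || 10;
    setTimeout(startRequest, retryAfterSec * 1000);
    return;
  }
  // do what you please with the response body
  setTimeout(startRequest, Math.max(667 - diff, 0));
}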
Using the PerformanceResourceTiming API, the duration value returned includes the resource scheduling time too.
Here is an example: the data observed using a PerformanceObserver (screenshot omitted).
The values match the network panel, but they correspond to the total time, and that total time includes the resource scheduling time.
Is there any way to get the duration from the API excluding the resource scheduling time? The API normally folds this time into the total duration of the request.
Here is the entry in the network panel table (screenshot omitted).
As you can see in the photos above, 244.13 ms is the sum of 240 ms (~resource in-flight time) + 4 ms (~resource scheduling time).
As noted above, the value logged is the sum of the stalled time and the time shown in the network table entry, which means it is not exclusively the in-flight time; the in-flight time is what I am looking for.
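For reference, the observer that produces those duration values might look something like this minimal sketch (it simply logs the name and duration of every resource entry):

// Log the duration of each resource entry as it is recorded.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.name, entry.duration); // duration includes scheduling time
  }
});
observer.observe({ type: 'resource', buffered: true });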
You can calculate that time yourself:
var startTime = performance.now()
doSomething() // <---- measured code goes between startTime and endTime
var endTime = performance.now()
console.log(`Call to doSomething took ${endTime - startTime} milliseconds`)
And if you want to know the time your request took before the server received it, take one timestamp when you send the request and another at the beginning of the handler function on the server, then subtract them as shown in the example above. Now you know the time it took for the server to receive your request. (Note that performance.now() is measured from each process's own time origin, so the two timestamps are only comparable if you use a shared clock, e.g. Date.now() on synchronized system clocks.)
This code gives you the total duration with and without latency, excluding the resource scheduling time, as requested.
const resources = performance.getEntriesByType('resource').reduce((o, entry, i) => {
  const {
    name, requestStart, responseStart,
    domainLookupEnd, domainLookupStart,
    connectEnd, connectStart,
    secureConnectionStart, responseEnd
  } = entry;
  const dnslookup = domainLookupEnd - domainLookupStart;
  const TCPhandshake = connectEnd - connectStart;
  const secureConn = (secureConnectionStart > 0) ? (connectEnd - secureConnectionStart) : 0;
  const wl = (responseEnd - requestStart) + (dnslookup + TCPhandshake + secureConn);    // with latency
  const whl = (responseStart - requestStart) + (dnslookup + TCPhandshake + secureConn); // without latency
  const l = wl - whl; // network latency
  o[i] = {
    url: name,
    dnslookup,
    TCPhandshake,
    secureConn,
    withLatency: wl,
    withoutLatency: whl,
    networkLatency: l
  };
  return o;
}, {}) // {} as the initial accumulator; without it, reduce would misuse the first entry
console.dir(resources)
The image above (omitted) shows a response time without latency of 43.43 ms (green) and with latency of 45.26 ms (sky blue).
Latency is the time the network needs to carry the resource from the server to the client; it varies with your network speed.
I would like to understand DoS (denial-of-service) attacks better, and I would like to know what my options are for learning about them with an example.
I have a basic Express server:
app.get('/ping', (req, res) => {
  res.send({ pong: 'pong', time: new Date().valueOf(), memory: process.memoryUsage() })
})
I will separately create some JavaScript code that will make multiple requests to the server, but I don't know how to devise strategies to try and bring down the server (consider that this is all running on localhost).
I want to see what the upper limit on requests is when testing this locally. I am experiencing what is described here: Sending thousands of fetch requests crashes the browser. Out of memory.
The suggestions in that thread are more along the lines of "the browser is running out of memory" and "you should throttle requests", but I am actively trying to max out the number of requests the browser can make without crashing. So far my observation is that the server does not have any difficulty (so maybe I should also make requests from my phone and tablet?).
The code I have run in the browser isn't much more than:
const makeRequestAndLogTime = () => {
  const startTime = new Date().valueOf();
  fetch('http://localhost:4000/ping')
    .then(async (response) => {
      const { time, memory } = await response.json();
      console.log({
        startTime: 0, // baseline; the two fields below are offsets from startTime
        processTime: time - startTime,             // until the server handled it
        endTime: new Date().valueOf() - startTime, // full round trip
        serverMemory: memory,
        browserMemory: performance['memory']
      })
    })
}

for (let x = 0; x < 100; x++) {
  makeRequestAndLogTime()
}
Depending on the value I put in for the number of loop iterations, performance gets slower and the browser eventually crashes (as expected). But is there a way I could automate determining the upper limit of requests that my browsers can make?
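For what it's worth, one way to approximate that limit without crashing on the first try is to ramp up the batch size and stop once requests start failing. A rough sketch (the starting size, the doubling step, and the all-or-nothing success criterion are arbitrary choices, not a standard benchmark):

// Send batches of increasing size; report where failures begin.
const tryBatch = async (size) => {
  const results = await Promise.allSettled(
    Array.from({ length: size }, () => fetch('http://localhost:4000/ping'))
  );
  return results.every((r) => r.status === 'fulfilled');
};

(async () => {
  let size = 100;
  while (await tryBatch(size)) {
    console.log(`${size} requests succeeded`);
    size *= 2; // double until something fails
  }
  console.log(`failures started somewhere between ${size / 2} and ${size}`);
})();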
Look at the title: I don't know how to do a ping, specifically, in JavaScript. But I made this itty bitty snippet that returns the time the request was made, the time the server received the request (done from the server side), and the time the server sent the response.
async function getResponseTimeOnce() {
  const times = {};
  times.beforeRequest = Date.now();
  const response = await fetch("https://randobytes.yimmee.repl.co/ping");
  times.afterRequest = Date.now();
  // Await the body here: in the original, the un-awaited .json() could resolve
  // after the function returned, leaving times.serverReceive unset.
  times.serverReceive = await response.json();
  return times;
}
All I'm asking is: which values am I supposed to subtract to get the ping time?
In standard ping programs, the latency measurement is always the round-trip-time (RTT). You should subtract beforeRequest from afterRequest to get the difference.
Source: https://developer.mozilla.org/en-US/docs/Glossary/Round_Trip_Time_(RTT)
$ ping example.com
PING example.com (216.58.194.174): 56 data bytes
64 bytes from 216.58.194.174: icmp_seq=0 ttl=55 time=25.050 ms
64 bytes from 216.58.194.174: icmp_seq=1 ttl=55 time=23.781 ms
64 bytes from 216.58.194.174: icmp_seq=2 ttl=55 time=24.287 ms
64 bytes from 216.58.194.174: icmp_seq=3 ttl=55 time=34.904 ms
64 bytes from 216.58.194.174: icmp_seq=4 ttl=55 time=26.119 ms
--- example.com ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 23.781/26.828/34.904/4.114 ms
For this particular application, you may also consider using the Performance API to get a high resolution timestamp.
Ping uses RTD/RTT (Round-trip delay/time). The RTD should be calculated from the time you sent the request to the time you receive the response.
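For instance, a fetch-based RTT measurement using the Performance API could look like this sketch (the cache: 'no-store' option keeps a cached response from shortening the measured time; the URL is a placeholder):

// Measure round-trip time to an HTTP endpoint with a high-resolution timer.
async function measureRtt(url) {
  const start = performance.now();
  await fetch(url, { cache: 'no-store' }); // resolves once response headers arrive
  return performance.now() - start;
}

measureRtt('https://example.com/ping').then((rtt) =>
  console.log(`RTT: ${rtt.toFixed(1)} ms`)
);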
I have a game app (Electron) and a web app (testing on Android Chrome) that pass messages via a WebSocket server. I want to coordinate a countdown between the two processes. It doesn't have to be perfect, but what I've observed is that in a low-latency environment it's fine, while the more lag there is in the system, the further ahead of the web app the Electron app tries to start. I've tested all my math and it should synchronize, but it just doesn't.
First, the web app initiates the countdown by passing a starting time to the game app:
const timeToGameStart:number = peerConnection.timeToGameStart(); // 3 x (the longest time it took to pass a previous msg from game app to web app)
const currUnixTime:number = peerConnection.currUnixTime();
const startGameTime:number = currUnixTime + timeToGameStart;
const startGame:StartGame = <StartGame>{
  msg_data_type: Msg_Data_Type.StartGame,
  game_start_time: startGameTime
}
peerConnection.passMsg(startGame);
setTimeout(timer.start, timeToGameStart);
Below is the game-app portion of the code, which responds to the message relayed by the server:
const gameStartTime:number = (<StartGame> msgData).game_start_time;
const currUnixTime:number = ServerConnection.currUnixTime();
// if we are on time, wait till the right moment; if we are late, start at the next whole second of 3, 2, 1
const countDownLength:number = 3;
if (currUnixTime <= gameStartTime) {
  // delay must be gameStartTime - currUnixTime; the reverse is negative and fires immediately
  setTimeout(() => startCountDown(countDownLength), gameStartTime - currUnixTime);
} else {
  const timeWeAreLateBy:number = currUnixTime - gameStartTime;
  const timeWeAreLateByInSec:number = Math.ceil(timeWeAreLateBy / 1000);
  const shortCountDownLen:number = Math.max(countDownLength - timeWeAreLateByInSec, 0);
  const timeToNextSec:number = Math.max((1000 * timeWeAreLateByInSec) - timeWeAreLateBy, 0);
  setTimeout(() => startCountDown(shortCountDownLen), timeToNextSec);
}
The problem was that these two processes run on separate OSs, each of which has a different notion of time; i.e. (new Date()).getTime() returns different numbers on each machine. The difference was 2 seconds, so the controller thought there was no lag in the connection and was telling the app to start ASAP.
The solution was to define a consistent measurement of time. After each app connects to the server, it syncs its time with the server by requesting the time the server has.
I didn't need a super precise time sync, so I used a simple algorithm that got the job done. The algorithm estimates how far the process's clock is off from the server's. The formula I used was server_time_diff = server_time - (received_time - RTT/2), where RTT is the time it took to request the time from the server and received_time is when the response arrived. To get the unified (server) time, you then compute Date.now() + server_time_diff (Date.now(), not new Date(), so the addition stays numeric).
Any ideas for improving my algorithm are welcome.
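A minimal sketch of that sync step, assuming a hypothetical /server-time endpoint that returns the server's Unix time in milliseconds:

let serverTimeDiff = 0; // offset between the server clock and this process's clock

async function syncClockWithServer() {
  const sentAt = Date.now();
  const res = await fetch('/server-time'); // hypothetical endpoint
  const { serverTime } = await res.json(); // assumed response shape
  const receivedAt = Date.now();
  const rtt = receivedAt - sentAt;
  // Assume the server read its clock halfway through the round trip.
  serverTimeDiff = serverTime - (receivedAt - rtt / 2);
}

// Unified time, comparable across processes once each has synced.
function unifiedNow() {
  return Date.now() + serverTimeDiff;
}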
We are building a chatroom with our own notification system, using a service worker + SSE instead of depending on GCM.
On desktop it is fine, but not in the mobile Android app (which uses cordova-crosswalk, Chromium 53).
The long-running notification connection becomes stuck after 20-30 minutes, even while the app is the foreground activity.
It doesn't die with an error; it just stops receiving data. No error at all, which is very weird. There is no way to know when to reconnect, since we cannot tell whether the connection is dead at all.
What would be the cleanest way to handle this? Restarting the connection every 5 minutes is one idea, but it is not clean.
Code:
runEvtSource(url, fn) {
  if (this.get('session.content.isAuthenticated') === true) {
    var evtSource = new EventSource(url, {
      withCredentials: true
    });
    return evtSource; // the reconnect code below expects the EventSource back
  }
}
Aggressive reconnect code:
var evtSource = this.runEvtSource(url, fn)
var evtSourceErrorHandler = (event) => {
  var txt;
  switch (event.target.readyState) {
    case EventSource.CONNECTING:
      txt = 'Reconnecting...';
      break;
    case EventSource.CLOSED:
      txt = 'Reinitializing...';
      evtSource = this.runEvtSource(url, fn)
      evtSource.onerror = evtSourceErrorHandler;
      break;
  }
  console.log(txt);
} // closing brace was missing in the original
evtSource.onerror = evtSourceErrorHandler
I normally add a keep-alive layer on top of the SSE connection. It doesn't happen that often, but sockets can die without dying properly, so your connection just goes quiet and you never get an error.
So, one way is to do this inside your data-handling function:
if (timer) clearTimeout(timer);           // data arrived: cancel the pending time-out
timer = setTimeout(reconnect, 30 * 1000); // no data for 30 s => assume the socket is dead
// ...process the data
In other words, if it is over 30 seconds since you last got data, reconnect. Choose a value based on the frequency of the data you send: if 10% of the time there is a 60 second gap between data events, but never a 120 second gap, then setting the time-out to something higher than 120 seconds makes sense.
You might also want to keep things alive by pushing regular messages from the server to client. This is a good idea if the frequency of messages from the server is very irregular. E.g. I might have the server send the current timestamp every 30 seconds, and use a keep-alive time-out of 45 seconds on the client.
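On the server, that heartbeat could be as simple as this Express sketch (the /events route and the 30-second interval are illustrative):

// SSE endpoint that pushes the current timestamp every 30 s as a keep-alive.
app.get('/events', (req, res) => {
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive'
  });
  res.flushHeaders();
  const heartbeat = setInterval(
    () => res.write(`data: ${Date.now()}\n\n`), // any message resets the client's keep-alive timer
    30 * 1000
  );
  req.on('close', () => clearInterval(heartbeat)); // stop when the client disconnects
});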
As an aside, since this is a mobile app, weigh whether the user will appreciate the benefit of lower-latency chat messages against the downside of reduced battery life.