JavaScript: How to get an accurate Resource Timing API entry from the browser

Using the PerformanceResourceTiming API, the duration value returned includes the resource scheduling time.
Here is an example:
Here is the data observed using a PerformanceObserver:
The values match the Network panel, but they correspond to the total time, and this total time includes the resource scheduling time.
Is there any way to get the duration from the API excluding the resource scheduling time? The API adds this time into the total duration of the request.
Here is the entry in the Network panel table.
As you can see in the photos above, 244.13 ms is the sum of 240 ms (~resource in-flight time) + 4 ms (~resource scheduling time).
As noted above, the value logged is the sum of the stalled time and the time logged in the Network table entry. In other words, it is not exclusively the in-flight time, which is what I am looking for.
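For reference, the relationship between the timestamps can be sketched as follows; the helper name is hypothetical, and the synthetic entry only illustrates the arithmetic (note that cross-origin entries need a Timing-Allow-Origin header to expose these fields):

```javascript
// entry.duration spans startTime -> responseEnd, which includes any
// scheduling/stalled time before the request is actually sent. Measuring
// from requestStart instead approximates the in-flight time alone.
function inflightTime(entry) {
  return entry.responseEnd - entry.requestStart;
}

// In a browser, real entries would come from a PerformanceObserver, e.g.:
// new PerformanceObserver((list) => {
//   for (const entry of list.getEntries()) {
//     console.log(entry.name, inflightTime(entry), entry.duration);
//   }
// }).observe({ type: 'resource', buffered: true });

// Synthetic entry: 4 ms of scheduling followed by 240 ms in flight.
const sample = { startTime: 100, requestStart: 104, responseEnd: 344, duration: 244.13 };
console.log(inflightTime(sample)); // 240
```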

You can calculate that time yourself:
var startTime = performance.now()
doSomething() // <---- measured code goes between startTime and endTime
var endTime = performance.now()
console.log(`Call to doSomething took ${endTime - startTime} milliseconds`)
And if you want to know how long your request took before the server received it, take a timestamp when you send the request and another at the beginning of the handler on the server, then subtract them as shown in the example above. Now you know the time it took for the server to receive your request. (Note that performance.now() is relative to each process's own time origin, so across machines you would compare wall-clock timestamps such as Date.now() and account for clock skew.)

This code gives you the total duration with and without latency, i.e. excluding the resource scheduling time, as you requested.
const resources = performance.getEntriesByType('resource').reduce((o, entry, i) => {
  const {
    name,
    requestStart,
    responseStart,
    domainLookupEnd,
    domainLookupStart,
    connectEnd,
    connectStart,
    secureConnectionStart,
    responseEnd
  } = entry;
  const dnsLookup = domainLookupEnd - domainLookupStart;
  const tcpHandshake = connectEnd - connectStart;
  const secureConn = secureConnectionStart > 0 ? connectEnd - secureConnectionStart : 0;
  const withLatency = (responseEnd - requestStart) + (dnsLookup + tcpHandshake + secureConn);
  const withoutLatency = (responseStart - requestStart) + (dnsLookup + tcpHandshake + secureConn);
  o[i] = {
    url: name,
    dnsLookup,
    tcpHandshake,
    secureConn,
    withLatency,
    withoutLatency,
    networkLatency: withLatency - withoutLatency
  };
  return o;
}, {}); // seed the accumulator; without it, reduce uses the first entry as the accumulator and skips it
console.dir(resources);
The image above shows the response time without latency, 43.43 ms (green), and with latency, 45.26 ms (sky blue).
Latency is the time the network needs to fetch the resource from the server to the client; it varies with your network speed.

Related

How do I prevent players from spamming move messages (Node.js)?

Essentially, I want to learn the industry standard and proper practices for preventing players from spamming movement packets.
To demonstrate my current dilemma, I have made a 1-dimensional example using the key concepts from https://www.gabrielgambetta.com/client-server-game-architecture.html, except for interpolation. As such, my movement messages consist of the direction (-1 or 1) and the client's timestamp (used for reconciliation).
// Client move function, dir = -1 or 1
function move(dir, dt) {
  localxPos += dir;
  last_ts_local = Date.now();
  socket.emit('move', {
    ts: last_ts_local,
    dir: dir
  });
}
Since the server simply adds the direction to the players position for EVERY movement packet sent, a flurry of movement messages can be sent to move faster.
...
// Server receive move
socket.on('move', function(msg) {
  clients[socket.id].process_queue_moves.push(msg);
});
...
// Server processes movement queue (run every 1 second - very slow for example purposes)
for (var i = 0; i < clientIDs.length; i++) {
  clientID = clientIDs[i];
  // process movement queue (dir is normalized to -1 or 1 before applying it)
  for (var j = 0; j < clients[clientID].process_queue_moves.length; j++) {
    clients[clientID].xPos += clients[clientID].process_queue_moves[j].dir / Math.abs(clients[clientID].process_queue_moves[j].dir);
  }
  if (clients[clientID].process_queue_moves.length > 0) {
    clients[clientID].last_ts = clients[clientID].process_queue_moves[clients[clientID].process_queue_moves.length - 1].ts;
  } else {
    clients[clientID].last_ts = -1; // was `==`, a comparison with no effect
  }
  // clear movement queue
  clients[clientID].process_queue_moves = [];
}
I initially thought that I could base the client's frame rate - or packet rate - on the number of packets they send. However, I quickly realised that if a client sends 2 packets, it doesn't mean they have 2 FPS; they could simply be standing still and have moved for only 2 frames.
After this realisation, I discovered that I could send the move packet even when the player is not moving - more like an input packet. This way, the client sends the move message with a direction of 0 when not moving.
This removes the malicious potential, since if the player sends 1000 packets, the server can infer that the player simply has 1000 FPS, and limit the movement accordingly.
Now, I'm not sure if this is the best way to handle this, or if sending a message every frame is too intensive. If there is a better way to do this, could you please let me know :).
Thanks
I have solved this by sending fragments of input (to keep the packet count down - I have it at 20 ticks per second). The input packets also record when the player is not moving, to allow accurate validation. If you'd like more info on how I solved this, PM me :).
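To make the validation concrete, a server-side cap along these lines can bound how many inputs are applied per tick; the helper name, the 60 FPS cap, and the 1-second tick are assumptions for illustration:

```javascript
// Cap how many input packets a client can have applied per server tick,
// based on an assumed maximum client frame rate (hypothetical numbers).
function clampMoves(queue, maxFps = 60, tickSeconds = 1) {
  const budget = Math.floor(maxFps * tickSeconds);
  return queue.slice(0, budget); // drop (or flag) anything over budget
}

// A client that floods 1000 move packets in one tick gets only 60 applied.
const flood = Array.from({ length: 1000 }, () => ({ dir: 1 }));
console.log(clampMoves(flood).length); // 60

// An honest client sending 45 inputs per second is untouched.
console.log(clampMoves(Array(45).fill({ dir: 0 })).length); // 45
```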

Syncing Events between 2 JS apps

I have a game app (Electron) and a web app (tested on Android Chrome) that pass messages via a WebSocket server. I want to coordinate a countdown between the two processes. It doesn't have to be perfect, but what I've observed is that in an environment with low latency it's fine, while the more lag there is in the system, the earlier the Electron app tries to start relative to the web app. I've tested all my math and it should synchronize, but it just doesn't.
First, the web app initiates the countdown by passing a starting time to the game app:
const timeToGameStart: number = peerConnection.timeToGameStart(); // 3 x (the longest time it took to pass a previous msg from game app to web app)
const currUnixTime: number = peerConnection.currUnixTime();
const startGameTime: number = currUnixTime + timeToGameStart;
const startGame: StartGame = <StartGame>{
  msg_data_type: Msg_Data_Type.StartGame,
  game_start_time: startGameTime
};
peerConnection.passMsg(startGame);
setTimeout(timer.start, timeToGameStart);
Below is the portion of the app code that responds to the message passed from the server:
const gameStartTime: number = (<StartGame> msgData).game_start_time;
const currUnixTime: number = ServerConnection.currUnixTime();
// if we are on time, wait till the right time; if we are late, start at the next whole second of 3, 2, 1
const countDownLength: number = 3;
if (currUnixTime <= gameStartTime) {
  setTimeout(() => startCountDown(countDownLength), gameStartTime - currUnixTime); // delay must be positive
} else {
  const timeWeAreLateBy: number = currUnixTime - gameStartTime;
  const timeWeAreLateByInSec: number = Math.ceil(timeWeAreLateBy / 1000);
  const shortCountDownLen: number = Math.max(countDownLength - timeWeAreLateByInSec, 0);
  const timeToNextSec: number = Math.max((1000 * timeWeAreLateByInSec) - timeWeAreLateBy, 0);
  setTimeout(() => startCountDown(shortCountDownLen), timeToNextSec);
}
The problem is that these two processes run on separate OSs, each with its own notion of time: (new Date()).getTime() returns different numbers on each. In my case the difference was 2 seconds, so the controller thought there was no lag in the connection and told the app to start ASAP.
The solution was to define a consistent measurement of time. After each app connects to the server, it syncs its clock with the server by requesting the server's time.
I didn't need a super-precise time sync, so I used a simple algorithm that got the job done. The algorithm estimates the difference between the process's clock and the server's: server_time_diff = server_time - (received_time - RTT/2), where RTT is the time it took to request the time from the server. To get unified (server) time, you then compute new Date().getTime() + server_time_diff.
Any ideas for improving my algorithm are welcome.
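The offset formula above can be sketched as a pure function (the function and parameter names are mine; sendTime and receiveTime are client-clock readings taken just before the request and just after the response):

```javascript
// server_time_diff = server_time - (received_time - RTT/2), assuming the
// server stamped its clock halfway through the round trip.
function serverTimeDiff(serverTime, sendTime, receiveTime) {
  const rtt = receiveTime - sendTime;
  return serverTime - (receiveTime - rtt / 2);
}

// Unified "server time" as seen from the client.
function toServerTime(clientNow, diff) {
  return clientNow + diff;
}

// Example: client clock 2000 ms behind the server, 100 ms round trip.
const diff = serverTimeDiff(7000, 4950, 5050);
console.log(diff);                     // 2000
console.log(toServerTime(5050, diff)); // 7050
```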

Can service worker "Content Download" be longer than the actual fetch?

I'm trying to understand what Chromium actually does when serving content from a service worker, because I'm seeing really strange behavior.
Here is the test case: I created a very simple app that exposes a link. When the user clicks the link, it fetches a 2 MB JavaScript file (previously stored in the cache during the service worker's install phase). The service worker intercepts the fetch and serves the file from the cache.
I added a console.log in the main thread to measure how long the fetch takes to respond:
function fetchScript() {
  const t = performance.now();
  fetch("portal.js")
    .then((response) => console.log("took", performance.now() - t, response.text()));
}
And I compared this with the Network tab in the devtools:
If we open the details of one of the requests in the Network tab, we see that the time is spent on Content Download, which in the official documentation refers to "The browser is receiving the response."
How can the Content Download phase be longer than the actual fetch? I was expecting the console log to show a larger time than the one in the Network tab (or at least an equal one). Does anyone actually know what happens during Content Download?
So it appears that the Content Download phase is the time it takes for the response body to be read (from when the headers are available to when the body has been fully read).
function fetchScript() {
  const t = performance.now();
  fetch("portal.js")
    .then((response) => console.log("took", performance.now() - t,
      response.text()));
}
The fetch promise resolves when the headers are available, not when the body has been read. That's why the logged time can be smaller than the Content Download time in the Network tab. To include the Content Download time in the logged time, we need to read the response first:
function fetchScript() {
  const t = performance.now();
  fetch("portal.js")
    .then(response => response.text())
    .then(text => console.log("took", performance.now() - t));
}
(However, Content Download is a browser-side measure: it doesn't account for the event loop, specifically the time it takes the event loop to process the microtask enqueued after the response has been read, i.e. the console.log callback. As a consequence, the Network tab and the console.log will not measure exactly the same time.)
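A minimal way to see the body read as a separate step, runnable in Node 18+ (which provides the same Response class as browsers); the function name is mine:

```javascript
// Reading the body is a separate asynchronous step after the Response
// object itself is available, and that step is what Content Download covers.
async function timedRead(response) {
  const t = performance.now();
  const body = await response.text(); // resolves only once the body is fully read
  return { body, elapsed: performance.now() - t };
}

// With a tiny in-memory Response the elapsed time is negligible, but on a
// real 2 MB network response this is where Content Download shows up.
timedRead(new Response('hello')).then(({ body, elapsed }) => {
  console.log(body, elapsed >= 0); // hello true
});
```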

Chrome on Android : connection become dead after 30 minutes

We are building a chatroom with our own notification system, without depending on GCM, using a service worker + SSE.
On desktop it is fine, but on the mobile Android app (which uses cordova-crosswalk, Chromium 53), the long-running notification connection becomes stuck after 20-30 minutes, even in a foreground activity.
It doesn't die with an error; it just stops receiving data. No error at all, which is very weird, and there is no way to reconnect since we don't know whether the connection is dead at all.
What would be the cleanest way to handle this? Restarting the connection every 5 minutes is one idea, but it is not clean.
Code:
runEvtSource(url, fn) {
  if (this.get('session.content.isAuthenticated') === true) {
    var evtSource = new EventSource(url, {
      withCredentials: true
    });
    return evtSource; // the reconnect code below relies on this return value
  }
}
Aggressive reconnect code:
var evtSource = this.runEvtSource(url, fn);
var evtSourceErrorHandler = (event) => {
  var txt;
  switch (event.target.readyState) {
    case EventSource.CONNECTING:
      txt = 'Reconnecting...';
      evtSource.onerror = evtSourceErrorHandler;
      break;
    case EventSource.CLOSED:
      txt = 'Reinitializing...';
      evtSource = this.runEvtSource(url, fn);
      evtSource.onerror = evtSourceErrorHandler;
      break;
  }
  console.log(txt);
};
evtSource.onerror = evtSourceErrorHandler;
I normally add a keep-alive layer on top of the SSE connection. It doesn't happen that often, but sockets can die without dying properly, so your connection just goes quiet and you don't get an error.
So, one way is, inside your data handler:
if (timer) clearTimeout(timer);
timer = setTimeout(reconnect, 30 * 1000);
// ...process the data
In other words, if it has been over 30 seconds since you last got data, reconnect. Choose a value based on the frequency of the data you send: if 10% of the time there is a 60-second gap between data events, but never a 120-second gap, then setting the time-out to something higher than 120 seconds makes sense.
You might also want to keep things alive by pushing regular messages from the server to client. This is a good idea if the frequency of messages from the server is very irregular. E.g. I might have the server send the current timestamp every 30 seconds, and use a keep-alive time-out of 45 seconds on the client.
As an aside, if this is a mobile app, bear in mind that the user will weigh the benefit of lower-latency chat messages against the downside of reduced battery life.
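The keep-alive idea above can be factored into a small watchdog; the names and the 30-second value are illustrative:

```javascript
// Detects a silently dead stream: if kick() is not called again within
// timeoutMs, onDead fires. Call kick() on every incoming SSE message.
function makeWatchdog(timeoutMs, onDead) {
  let timer = null;
  return {
    kick() {
      if (timer) clearTimeout(timer);
      timer = setTimeout(onDead, timeoutMs);
    },
    stop() {
      if (timer) clearTimeout(timer);
    }
  };
}

// Usage sketch: reconnect if the stream is quiet for 30 seconds.
// const dog = makeWatchdog(30 * 1000, () => reconnect());
// evtSource.onmessage = (e) => { dog.kick(); handleData(e.data); };
```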

Rate limiting requests to a 3rd-party API

My code:
const limit = require('simple-rate-limiter');
const request = limit(require('request').defaults({
  gzip: true
})).to(15).per(10 * 1000); // 15 requests per 10 seconds
request(API_ENDPOINT, callback); // call API thousands of times
Libraries: simple-rate-limiter and the well-known request library.
I need to call a 3rd-party API thousands of times, and it only allows 15 requests every 10 seconds. The code above doesn't limit my requests correctly, so the server responds with a 429 Too Many Requests HTTP status code.
I can send 15 requests at once, but then the program has to wait 10 seconds before it can send any more, or it gets a 429 response again.
I think this is because the connection to the server takes anywhere from hundreds of milliseconds to a few seconds (300 ms-2 s), so the time I send a request differs from the time the server receives it.
Responses from the server contain a Date header. Can that be used somehow to limit the requests correctly? Is there a library that makes this easy? And even with correct rate limiting, if you do get a 429 response, is there a simple way to retry?
OK, so the approach is to check how much time has passed from the start to the end of each call. I will assume that it is safe to call the API every 667 milliseconds (15 requests per 10 seconds is roughly one request per 667 ms).
var startTime, endTime;

function timeDiff() {
  return endTime.getTime() - startTime.getTime();
}

function startRequest() {
  startTime = new Date();
  request(API_ENDPOINT, requestCallback);
}

function requestCallback(data) {
  // do what you please with data
  endTime = new Date();
  var diff = timeDiff();
  if (diff < 667) {
    // Too early to call the API again, need to wait.
    setTimeout(startRequest, 667 - diff);
  } else {
    // It is fine to start the next request now.
    setTimeout(startRequest);
  }
}

startRequest();
This code keeps calling the API as soon as each call finishes, unless the call finished in under 667 milliseconds, in which case it waits out the remainder.
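The spacing rule can also be expressed as a pure function, which makes the arithmetic easy to check (the function name and parameters are mine):

```javascript
// How long to wait before starting the next request, given when the last
// one started: at most one request per minInterval milliseconds.
function delayBeforeNext(lastStartMs, nowMs, minInterval = 667) {
  return Math.max(0, minInterval - (nowMs - lastStartMs));
}

console.log(delayBeforeNext(0, 300)); // 367 - still too early, wait 367 ms
console.log(delayBeforeNext(0, 900)); // 0 - enough time has passed, go now
```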
