Essentially, I want to learn the industry-standard, proper practices for preventing players from spamming movement packets.
To demonstrate my current dilemma, I have made a one-dimensional example using the key concepts from https://www.gabrielgambetta.com/client-server-game-architecture.html, except for interpolation. As such, my movement messages consist of the direction (-1 or 1) and the client's timestamp (used for reconciliation).
// Client move function, dir = -1 or 1
function move(dir, dt) {
    localxPos += dir;
    last_ts_local = Date.now();
    socket.emit('move', {
        ts: last_ts_local,
        dir: dir
    });
}
Since the server simply adds the direction to the player's position for EVERY movement packet received, a flurry of movement messages can be sent to move faster.
...
// Server receive move
socket.on('move', function(msg) {
    clients[socket.id].process_queue_moves.push(msg);
});
...
// Server processes movement queue (run every 1 second - deliberately slow for example purposes)
for (var i = 0; i < clientIDs.length; i++) {
    clientID = clientIDs[i];
    // process movement queue, normalising dir so oversized values still move one unit
    for (var j = 0; j < clients[clientID].process_queue_moves.length; j++) {
        clients[clientID].xPos += Math.sign(clients[clientID].process_queue_moves[j].dir);
    }
    if (clients[clientID].process_queue_moves.length > 0) {
        clients[clientID].last_ts = clients[clientID].process_queue_moves[clients[clientID].process_queue_moves.length - 1].ts;
    } else {
        clients[clientID].last_ts = -1;
    }
    // clear movement queue
    clients[clientID].process_queue_moves = [];
}
I initially thought that I could infer the client's framerate - or packet rate - from the number of packets they send. However, I quickly realised that if a client sends out 2 packets, it doesn't mean they have 2 FPS. They could simply be standing still and have moved for only 2 frames.
After this realisation, I discovered that I could send the move packet even when the player is not moving - more like an input packet. This way, the client sends the move message with a direction of 0 when not moving.
This eliminates the malicious potential, since if the player sends 1000 packets, the server can infer that the player simply has 1000 FPS and limit the movement accordingly, as sketched below.
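For example, the server-side check could look something like this (a rough sketch; processMoves and MAX_UNITS_PER_TICK are made-up names, not part of my code above):
// Sketch: treat each packet as an input sample and cap the net movement
// the queue may produce in one server tick.
var MAX_UNITS_PER_TICK = 5; // hypothetical speed limit

function processMoves(client) {
    var distance = 0;
    for (var j = 0; j < client.process_queue_moves.length; j++) {
        var dir = Math.sign(client.process_queue_moves[j].dir); // -1, 0 or 1
        if (Math.abs(distance + dir) <= MAX_UNITS_PER_TICK) {
            distance += dir;
        }
    }
    client.xPos += distance;
    client.process_queue_moves = [];
}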
Now, I'm not sure if this is the best way to handle this, or if sending a message every frame is too intensive. If there is a better way to do this, could you please let me know :).
Thanks
I have solved this by sending fragments of input (to keep the packet count down - I have it at 20 tps). The input packets also record when the player is not moving, to allow accurate validation. If you'd like more info on how I solved this, pm me :).
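For anyone curious, a minimal sketch of what the batching looks like (inputBuffer and SEND_RATE are illustrative names, not my actual code):
// Client-side sketch: record one input per frame (including dir = 0 when idle),
// flush the batch 20 times per second instead of one packet per frame.
var SEND_RATE = 20; // packets per second
var inputBuffer = [];

function recordInput(dir) { // called every frame, dir is -1, 0 or 1
    inputBuffer.push({ dir: dir, ts: Date.now() });
}

setInterval(function () {
    if (inputBuffer.length > 0) {
        socket.emit('inputs', inputBuffer);
        inputBuffer = [];
    }
}, 1000 / SEND_RATE);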
When using the PerformanceResourceTiming API, the duration value returned includes the resource scheduling time too.
Here is an example: the data observed using a PerformanceObserver (screenshot omitted).
The values match the network panel, but they correspond to the total time, and this total time includes the resource scheduling time.
Is there any way to get the duration from the API excluding the resource scheduling time? By default, the API adds this time into the total duration of the request.
Here is the entry in the network panel table (screenshot omitted).
As those screenshots show, 244.13ms is the sum of 240ms (~resource in-flight time) and 4ms (~resource scheduling time).
As noted above, the value logged is the sum of the stalled time and the time logged in the network table entry. That means it is not exclusively the in-flight time, which is what I am looking for.
You can calculate that time:
var startTime = performance.now()
doSomething() // <---- measured code goes between startTime and endTime
var endTime = performance.now()
console.log(`Call to doSomething took ${endTime - startTime} milliseconds`)
And if you want to know the time your request took before the server received it, record a timestamp when you send the request and another at the beginning of the handler on the server, then subtract them as shown in the example above. Now you know the time it took for the server to receive your request. (One caveat: performance.now() values are relative to each machine's own time origin, so timestamps from two different machines cannot be subtracted directly; you would need a shared clock such as Date.now(), plus some clock-sync step.)
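As a sketch of a variant that avoids comparing clocks on two machines, you can measure the round trip from the client alone ('/api/endpoint' is a placeholder URL; halving the round trip only estimates the one-way time):
// Measure the full round trip with performance.now() on one machine.
const t0 = performance.now();
fetch('/api/endpoint').then(() => {
    const t1 = performance.now();
    console.log(`Round trip took ${t1 - t0} ms, one-way estimate ${(t1 - t0) / 2} ms`);
});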
This code gives you the total duration with and without latency, which is what you asked for: the duration excluding the resource scheduling time.
const resources = performance.getEntriesByType('resource').reduce((o, entry, i) => {
    const {
        name,
        requestStart,
        responseStart,
        domainLookupEnd,
        domainLookupStart,
        connectEnd,
        connectStart,
        secureConnectionStart,
        responseEnd
    } = entry;
    const dnslookup = domainLookupEnd - domainLookupStart;
    const TCPhandshake = connectEnd - connectStart;
    const secureConn = (secureConnectionStart > 0) ? (connectEnd - secureConnectionStart) : 0;
    const wl = (responseEnd - requestStart) + (dnslookup + TCPhandshake + secureConn);
    const whl = (responseStart - requestStart) + (dnslookup + TCPhandshake + secureConn);
    const l = wl - whl;
    o[i] = {
        url: name,
        dnslookup,
        TCPhandshake,
        secureConn,
        withLatency: wl,
        withoutLatency: whl,
        networkLatency: l
    };
    return o;
}, {}); // empty object as the initial accumulator
console.dir(resources);
The image (not reproduced here) showed the response time without latency as 43.43 (green) and with latency as 45.26 (sky blue).
Latency is the time the network needs to fetch the resource from the server to the client; it varies with your network speed.
I have a game app (Electron) and a web app (testing on Android Chrome) that pass messages via a WebSocket server. I want to coordinate a countdown between the two processes. It doesn't have to be perfect, but here is what I've observed: in an environment with low latency it's fine, yet the more lag there is in the system, the earlier the Electron app tries to start relative to the web app. I've tested all my math and it should synchronize, but it just doesn't.
First, the web app initiates the countdown by passing a starting time to the game app:
const timeToGameStart:number = peerConnection.timeToGameStart(); // 3 x (the longest time it took to pass a previous msg from game app to web app)
const currUnixTime:number = peerConnection.currUnixTime();
const startGameTime:number = currUnixTime + timeToGameStart;
const startGame:StartGame = <StartGame>{
    msg_data_type: Msg_Data_Type.StartGame,
    game_start_time: startGameTime
};
peerConnection.passMsg(startGame);
setTimeout(timer.start, timeToGameStart);
Below is the game-app portion of the code that responds to the message relayed by the server:
const gameStartTime:number = (<StartGame> msgData).game_start_time;
const currUnixTime:number = ServerConnection.currUnixTime();
// if we are on time, wait till the right time; if we are late, start at the next whole second: 3, 2, 1
const countDownLength:number = 3;
if (currUnixTime <= gameStartTime) {
    // wait the remaining time until gameStartTime
    setTimeout(() => startCountDown(countDownLength), gameStartTime - currUnixTime);
} else {
    const timeWeAreLateBy:number = currUnixTime - gameStartTime;
    const timeWeAreLateByInSec:number = Math.ceil(timeWeAreLateBy / 1000);
    const shortCountDownLen:number = Math.max(countDownLength - timeWeAreLateByInSec, 0);
    const timeToNextSec:number = Math.max((1000 * timeWeAreLateByInSec) - timeWeAreLateBy, 0);
    setTimeout(() => startCountDown(shortCountDownLen), timeToNextSec);
}
It turned out the problem was that these two separate processes run on separate OSs, which each have a different notion of time: (new Date()).getTime() returns different numbers on each machine. The difference was 2 seconds, so the controller thought there was no lag in the connection and told the app to start ASAP.
The solution was to define a consistent measurement of time. After each app connects to the server, it syncs its clock with the server by requesting the server's current time.
I didn't need a super precise time sync, so I used a simple algorithm that got the job done. The algorithm estimates how far the process's clock is off from the server's. The formula I used was server_time_diff = server_time - (received_time - RTT/2), where RTT is the time it took to request the time from the server. To get the unified (server) time, you then just compute Date.now() + server_time_diff.
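A minimal sketch of that algorithm, assuming a hypothetical /server-time endpoint that returns the server's Date.now() as JSON:
// Sketch: estimate the offset between this machine's clock and the server's.
// '/server-time' is a placeholder endpoint returning { time: <server Date.now()> }.
let serverTimeDiff = 0;

async function syncClock() {
    const sentAt = Date.now();
    const res = await fetch('/server-time');
    const { time: serverTime } = await res.json();
    const receivedAt = Date.now();
    const rtt = receivedAt - sentAt;
    // assume the response spent half the round trip in flight
    serverTimeDiff = serverTime - (receivedAt - rtt / 2);
}

function unifiedTime() {
    return Date.now() + serverTimeDiff;
}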
Any ideas for improving my algorithm are welcome.
I am having trouble consuming the response from my WebFlux server via JavaScript's new Streams API.
I can see via curl (with the help of --limit-rate) that the server slows down as expected, but when I try to consume the body in Google Chrome (64.0.3282.140), it is not slowing down like it should. In fact, Chrome downloads and buffers about 32 megabytes from the server even though only about 187 kB have been passed to write().
Is there something wrong with my JavaScript?
async function fetchStream(url, consumer) {
    const response = await fetch(url, {
        headers: {
            "Accept": "application/stream+json"
        }
    });
    const decoder = new TextDecoder("utf-8");
    let buffer = "";
    await response.body.pipeTo(new WritableStream({
        async write(chunk) {
            // stream: true keeps multi-byte characters split across chunks intact
            buffer += decoder.decode(chunk, { stream: true });
            const blocks = buffer.split("\n");
            if (blocks.length === 1) {
                return;
            }
            const indexOfLastBlock = blocks.length - 1;
            for (let index = 0; index < indexOfLastBlock; index++) {
                const block = blocks[index];
                const item = JSON.parse(block);
                await consumer(item);
            }
            buffer = blocks[indexOfLastBlock];
        }
    }));
}
According to the specification for Streams,
If no strategy is supplied, the default behavior will be the same as a
CountQueuingStrategy with a high water mark of 1.
So it should slow down if the promise returned by consumer(item) resolves very slowly, right?
Looking at the backpressure support in the Streams API, it seems that backpressure information is communicated within the stream chain and not over the network. In this case, we can assume an unbounded queue somewhere, and this would explain the behavior you're seeing.
This other GitHub issue suggests that the backpressure information indeed stops at the TCP level: browsers just stop reading from the TCP socket, which, depending on the current TCP window size/TCP configuration, means the buffers fill up until TCP flow control kicks in. As that issue states, browsers can't set the window size manually and have to let the TCP stack handle things from there.
HTTP/2 supports flow control at the protocol level, but I don't know whether the browser implementations leverage that with the Streams API.
I can't explain the behavior difference you're seeing, but I think you might be reading too much into the backpressure support here, and that this works as expected according to the spec.
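If you want the high water mark to be explicit rather than relying on the default, the WritableStream constructor accepts a queuing strategy as its second argument. A sketch (this only makes the default visible; per the discussion above, it does not shrink the browser's network-level buffering):
// Sketch: bound the stream's internal queue explicitly with CountQueuingStrategy.
async function fetchWithExplicitStrategy(url, consumer) {
    const response = await fetch(url);
    await response.body.pipeTo(new WritableStream({
        async write(chunk) {
            await consumer(chunk); // next write waits for this promise (backpressure)
        }
    }, new CountQueuingStrategy({ highWaterMark: 1 })));
}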
We are building a chatroom with our own notification system, not depending on GCM but using a service worker + SSE.
On desktop it is fine, but on the mobile Android app (which uses cordova-crosswalk, Chromium 53), the long-running notification connection becomes stuck after 20-30 minutes, even while in a foreground activity.
It doesn't die with an error; it just stops receiving data. No error at all, which is very weird, and there is no way to know we should reconnect, since we can't tell whether the connection is dead at all.
What would be the cleanest way to handle this? Restarting the connection every 5 minutes is one idea, but it is not clean.
Code:
runEvtSource(url, fn) {
    if (this.get('session.content.isAuthenticated') === true) {
        var evtSource = new EventSource(url, {
            withCredentials: true
        });
        return evtSource; // returned so callers can attach handlers to it
    }
}
Aggressive reconnect code:
var evtSource = this.runEvtSource(url, fn);
var evtSourceErrorHandler = (event) => {
    var txt;
    switch (event.target.readyState) {
        case EventSource.CONNECTING:
            txt = 'Reconnecting...';
            evtSource.onerror = evtSourceErrorHandler;
            break;
        case EventSource.CLOSED:
            txt = 'Reinitializing...';
            evtSource = this.runEvtSource(url, fn);
            evtSource.onerror = evtSourceErrorHandler;
            break;
    }
    console.log(txt);
};
evtSource.onerror = evtSourceErrorHandler;
I normally add a keep-alive layer on top of the SSE connection. It doesn't happen that often, but sockets can die without dying properly, so your connection just goes quiet and you don't get an error.
So, one way is, inside your data handler:
if (timer) clearTimeout(timer);
timer = setTimeout(reconnect, 30 * 1000);
// ...process the data
In other words: if it is over 30 seconds since you last got data, reconnect. Choose the value based on the frequency of the data you send: if 10% of the time there is a 60-second gap between data events, but never a 120-second gap, then setting the time-out to something higher than 120 seconds makes sense.
You might also want to keep things alive by pushing regular messages from the server to the client. This is a good idea if the frequency of messages from the server is very irregular. E.g. I might have the server send the current timestamp every 30 seconds and use a keep-alive time-out of 45 seconds on the client; a sketch of that server side follows.
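As a sketch of that server-side heartbeat in Node.js (the port and event name are arbitrary choices, not from the question's stack):
// Node.js sketch: an SSE endpoint that sends a heartbeat every 30 seconds,
// so the client's keep-alive timer always has something to reset on.
const http = require('http');

http.createServer((req, res) => {
    res.writeHead(200, {
        'Content-Type': 'text/event-stream',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive'
    });
    const heartbeat = setInterval(() => {
        res.write('event: heartbeat\ndata: ' + Date.now() + '\n\n');
    }, 30 * 1000);
    req.on('close', () => clearInterval(heartbeat));
}).listen(8080);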
As an aside, if this is a mobile app, bear in mind that the user may weigh the benefit of lower-latency chat messages against the downside of reduced battery life.
I'm trying to write a piece of JavaScript that switches between two videos at timed intervals (don't ask). To make matters worse, each video has to start at a specific place (about ten seconds in - and again, don't ask).
I got the basics working by using the YUI Async library to switch the videos at intervals:
YUI().use('async-queue', function (Y) {
    // AsyncQueue is available and ready for use.
    var cumulativeTime = 0;
    var q = new Y.AsyncQueue();
    for (var x = 0; x < settings.length; x++) {
        cumulativeTime = cumulativeTime + (settings[x].step * 1000);
        q.add({
            fn: runVideo,
            args: settings[x].mainWindow,
            timeout: cumulativeTime
        });
    }
    q.run();
});
So far, so good. The problem is that I can't seem to get the video to start at ten seconds in.
I'm using this code to do it:
function runVideo(videoToPlay) {
    console.log('We are going to play -> ' + videoToPlay);
    var video = document.getElementById('mainWindow');
    video.src = '/video?id=' + videoToPlay;
    video.addEventListener('loadedmetadata', function() {
        this.currentTime = 10; // <-- Offending line!
        this.play();
    });
}
The problem is that this.currentTime refuses to hold any value I set it to. I'm running it in Chrome (the file is served from Google Storage behind a Google App Engine instance), and when the debugger steps past that line, the value is always zero.
Is there some trick I'm missing in order to set this value?
Thanks in advance.
Try using an Apache server.
Setting currentTime does not work with some simple servers, e.g. the Python or PHP built-in development servers.
The HTTP server must support partial content responses (HTTP status 206) for seeking to work.
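One quick way to check this, as a sketch (the URL is a placeholder): request a byte range and see whether the server answers 206 Partial Content.
// If this logs 200 instead of 206, the server ignores Range requests,
// and assignments to video.currentTime will snap back to 0.
fetch('/video?id=test', { headers: { Range: 'bytes=0-99' } })
    .then(res => console.log(res.status === 206
        ? 'Partial content supported - seeking should work'
        : 'No partial content support (status ' + res.status + ')'));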