I have a game app (Electron) and a web app (testing on Android Chrome) that pass messages via a WebSocket server. I want to coordinate a countdown between the two processes. It doesn't have to be perfect, but what I've observed is that in a low-latency environment it's fine; the more lag there is in the system, the earlier the Electron app tries to start relative to the web app. I've tested all my math and it should synchronize, but it just doesn't.
First the web app initiates the countdown by passing a starting time to the game app:
const timeToGameStart:number = peerConnection.timeToGameStart(); // time to game start = 3 x (the longest time it took to pass a previous msg from game app to web app)
const currUnixTime:number = peerConnection.currUnixTime();
const startGameTime:number = currUnixTime + timeToGameStart;
const startGame:StartGame = <StartGame>{
msg_data_type:Msg_Data_Type.StartGame,
game_start_time:startGameTime
}
peerConnection.passMsg(startGame);
setTimeout(timer.start, timeToGameStart);
Below is the portion of the app's code that responds to the message passed through the server:
const gameStartTime:number = (<StartGame> msgData).game_start_time;
const currUnixTime:number = ServerConnection.currUnixTime();
// if we are on time, wait until the right time; if we are late, start at the next whole second of 3, 2, 1
const countDownLength:number = 3;
if (currUnixTime <= gameStartTime) {
setTimeout(()=>startCountDown(countDownLength), gameStartTime - currUnixTime);
} else {
const timeWeAreLateBy:number = currUnixTime - gameStartTime;
const timeWeAreLateByInSec:number = Math.ceil(timeWeAreLateBy / 1000);
const shortCountDownLen:number = Math.max(countDownLength - timeWeAreLateByInSec, 0);
const timeToNextSec:number = Math.max((1000 * timeWeAreLateByInSec) - timeWeAreLateBy, 0);
setTimeout(()=>startCountDown(shortCountDownLen), timeToNextSec);
}
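For reference, here is how the catch-up branch above works out for a message that arrives 1200 ms after game_start_time (a worked example with made-up numbers):

```javascript
// Worked example: this client learns of the start 1200 ms late.
const countDownLength = 3;
const timeWeAreLateBy = 1200; // ms past game_start_time
const timeWeAreLateByInSec = Math.ceil(timeWeAreLateBy / 1000); // 2
const shortCountDownLen = Math.max(countDownLength - timeWeAreLateByInSec, 0); // 1
const timeToNextSec = Math.max((1000 * timeWeAreLateByInSec) - timeWeAreLateBy, 0); // 800

// The on-time client shows "1" at t = 2000 ms; this client waits 800 ms
// (landing on t = 2000 ms) and counts down from 1, so both are in step.
console.log(shortCountDownLen, timeToNextSec);
```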
The problem is that these two separate processes run on separate OSs, which have different notions of time, i.e. `(new Date()).getTime()` returns different numbers on each machine. The difference was 2 seconds, so the controller thought there was no lag in the connection and told the app to start ASAP.
The solution was to define a consistent measurement of time. After each app connects to the server, it syncs its clock with the server by requesting the server's current time.
I didn't need a super precise time sync, so I used a simple algorithm that got the job done. The algorithm estimates how far the process's clock is off from the server's. The formula I used was `server_time_diff = server_time - (received_time - RTT/2)`, where RTT is the time it took to request the time from the server. To get unified (server) time, you then call `Date.now() + server_time_diff`.
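A minimal sketch of that sync step (`fetchServerTime` is a hypothetical function that asks the server for its `Date.now()` reading):

```javascript
// Estimate how far this process's clock is off from the server's:
//   server_time_diff = server_time - (received_time - RTT/2)
async function computeServerTimeDiff(fetchServerTime) {
  const sent = Date.now();
  const serverTime = await fetchServerTime(); // server's clock reading
  const received = Date.now();
  const rtt = received - sent;
  // The server sampled its clock roughly rtt/2 before we received the reply.
  return serverTime - (received - rtt / 2);
}

// Unified "server time" on either process is then:
//   const unifiedNow = Date.now() + serverTimeDiff;
```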
Any improvement ideas for my algorithm are welcome.
Using the PerformanceResourceTiming, the duration value returned includes the resource scheduling time too.
Here is an example:
Here is the data observed using a Performance Observer:
The values match the network panel, but this value corresponds to the total time, which includes the resource scheduling time.
Is there any way to get the duration from the API excluding the resource scheduling time? The API always adds this time into the total duration of the request.
Here is the entry in the network panel table.
As you can see in the photos above: 244.13 ms is the sum of 240 ms (~resource in-flight time) + 4 ms (~resource scheduling time).
As noted above, the value logged is the sum of the stalled time and the time logged in the network table entry, which means it is not exclusively the in-flight time; the in-flight time is what I am looking for.
You can calculate that time:
var startTime = performance.now()
doSomething() // <---- measured code goes between startTime and endTime
var endTime = performance.now()
console.log(`Call to doSomething took ${endTime - startTime} milliseconds`)
And if you want to know how long your request takes before the server receives it, record a timestamp when you send the request and take another at the beginning of the handler function on the server, then subtract them as shown in the example above. That gives you the time it took for the server to receive your request. (Note that `performance.now()` uses a different time origin in each process, so for a client-to-server measurement you need timestamps from a shared clock, e.g. `Date.now()` on both sides.)
This code gives you the total duration with and without latency, excluding the resource scheduling time as you requested.
const resources = performance.getEntriesByType('resource').reduce((o, entry, i) => {
  const { name, requestStart, responseStart,
    domainLookupEnd,
    domainLookupStart,
    connectEnd,
    connectStart,
    secureConnectionStart,
    responseEnd } = entry;
  const dnslookup = domainLookupEnd - domainLookupStart;
  const TCPhandshake = connectEnd - connectStart;
  const secureConn = (secureConnectionStart > 0) ? (connectEnd - secureConnectionStart) : 0;
  const wl = (responseEnd - requestStart) + (dnslookup + TCPhandshake + secureConn);
  const whl = (responseStart - requestStart) + (dnslookup + TCPhandshake + secureConn);
  const l = wl - whl;
  o[i] = {
    url: name,
    dnslookup,
    TCPhandshake,
    secureConn,
    withLatency: wl,
    withoutLatency: whl,
    networkLatency: l
  };
  return o;
}, {}); // initial value, so the accumulator starts as an empty object
console.dir(resources);
The image above shows the response time without latency, 43.43 ms (green), and with latency, 45.26 ms (sky blue).
Latency is the time the network needs to fetch the resource from the server to the client; it varies with your network speed.
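If you only need the two components, they can also be split out directly from an entry's timestamps (a sketch; treating everything before `requestStart` as "scheduling" is an assumption based on the stalled time shown in the network panel):

```javascript
// Split a PerformanceResourceTiming entry into the queueing/stall portion
// (startTime -> requestStart) and the in-flight portion
// (requestStart -> responseEnd).
function splitTimings(entry) {
  return {
    scheduling: entry.requestStart - entry.startTime,
    inFlight: entry.responseEnd - entry.requestStart,
    total: entry.responseEnd - entry.startTime
  };
}

// Hypothetical usage:
// performance.getEntriesByType('resource').forEach(e => console.log(splitTimings(e)));
```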
I would like to understand DoS (denial-of-service) attacks better, and I would like to know what my options are for learning about them with an example.
I have a basic Express server:
app.get('/ping', (req, res) => {
res.send({ pong: 'pong', time: new Date().valueOf(), memory: process.memoryUsage()})
})
I will separately create some JavaScript code that makes multiple requests to the server, but I don't know how to devise strategies to try to bring down the server (consider that this is all running on localhost).
I want to see what the upper limit on making requests is when testing locally. I am experiencing what is described here: Sending thousands of fetch requests crashes the browser. Out of memory.
... the suggestions on that thread are more along the lines of "the browser is running out of memory" and that I should "throttle requests", but I am actively trying to max out the number of requests the browser can make without crashing. So far my observation is that the server does not have any difficulty (so maybe I should also make requests from my phone and tablet?).
The code I run in the browser isn't much more than:
const makeRequestAndLogTime = () => {
  const startTime = new Date().valueOf();
  fetch('http://localhost:4000/ping')
    .then(async (response) => {
      const { time, memory } = await response.json();
      console.log({
        startTime: 0,
        processTime: time - startTime,
        endTime: new Date().valueOf() - startTime,
        serverMemory: memory,
        browserMemory: performance['memory']
      })
    })
}
for (let x = 0; x < 100; x++) {
  makeRequestAndLogTime()
}
But depending on the value I put in for the number of loop iterations, performance gets slower and eventually the browser crashes (as expected). I want to know if there is a way to automate determining the upper limit of requests my browsers can make.
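One way to automate that search is to ramp the batch size until a batch starts failing (a sketch; `makeRequest` is any promise-returning function, e.g. the fetch call above):

```javascript
// Double the batch size until some request in the batch fails, then report
// the last batch size that fully succeeded as the approximate upper limit.
async function findApproxLimit(makeRequest, start = 100, factor = 2) {
  let batch = start;
  let lastGood = 0;
  for (;;) {
    const results = await Promise.allSettled(
      Array.from({ length: batch }, () => makeRequest())
    );
    if (results.some((r) => r.status === 'rejected')) return lastGood;
    lastGood = batch;
    batch *= factor;
  }
}

// Hypothetical usage against the ping endpoint:
// findApproxLimit(() => fetch('http://localhost:4000/ping')).then(console.log);
```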
I have a simple Node.js Express server where the first request after a long idle time is extremely slow, e.g. 3-4 minutes. The same or a similar request the 2nd and 3rd time takes a few milliseconds.
I took a look at this page - https://expressjs.com/en/advanced/best-practice-performance.html and have done the following things -
Use gzip compression
Set NODE_ENV to production
But I still run into the issue where the first request is extremely slow.
The server is doing the following -
At startup I read from a large text file that contains a list of strings. Each of these strings is added to an array. The size of the array is normally around 3.5 million entries.
Users provide a string input and I loop over all the entries in the array and search for matches using indexOf and return the matches.
I have also tried increasing the memory for the server (--max-old-space-size from 4096 to 8192), but this does not help. I am new to Node.js/Express; please let me know if there is anything else I need to consider or look into.
Thanks.
Here is the source -
var compression = require('compression')
const express = require('express')
var cors = require('cors')
var bodyParser = require('body-parser')
const fs = require('fs')
// Get command line arguments
const datafile = process.argv[2];
const port = Number(process.argv[3]);
if(process.env.NODE_ENV === 'production') {
console.log("Starting in production mode");
}
// Init
const app = express()
app.use(cors());
app.use(bodyParser.text());
app.use(compression());
app.post('/', (request, response) => {
var query = JSON.parse(request.body).query;
var results = SearchData(query);
response.send(results);
})
// Init server
app.listen(port, (err) => {
if (err) {
return console.log('Something bad happened', err)
}
console.log(`server is listening on port ${port}`)
})
console.log('Caching Data');
var data = fs.readFileSync(datafile, 'utf8'); // path passed on the command line
var datalist = data.toString().split('\n');
var loc = [];
for (var i = 0; i < datalist.length; i++) {
const element = datalist[i];
const dataRaw = element.split(',');
const dataStr = dataRaw[0];
const dataloc = processData(dataRaw[1]); // processData is defined elsewhere
datalist[i] = dataStr;
loc.push(dataloc);
}
console.log('Cached ' + datalist.length + ' entries');
function SearchData (query) {
const resultsLimit = 32;
var resultsCount = 0;
var results = [];
for (var i = 0; i < datalist.length; i++) {
if (datalist[i].indexOf(query) === -1) {
continue;
}
results.push(datalist[i] + loc[i]);
resultsCount++;
if (resultsCount == resultsLimit) break;
}
return results;
}
More details after using the --trace-gc flag.
Launched the process & waited till all the strings were loaded into memory.
A request with a particular query string at 5:48 PM took around 520 ms.
The same request at 8:11 PM took around 157975 ms. The server was idle in between.
I see a lot of messages such as the following during startup -
[257989:0x3816030] 33399 ms: Scavenge 1923.7 (1956.9) -> 1921.7 (1970.9) MB, 34.2 / 0.0 ms (average mu = 0.952, current mu = 0.913) allocation failure
The last message from the gc logs showed something like this -
[257989:0x3816030] 60420 ms: Mark-sweep 1927.9 (1944.9) -> 1927.1 (1939.1) MB, 164.0 / 0.0 ms (+ 645.6 ms in 81 steps since start of marking, biggest step 123.0 ms, walltime since start of marking 995 ms) (average mu = 0.930, current mu = 0.493) finalize incremental marking via task GC in old space requested
I did not see anything else from gc when the response was really slow.
Please let me know if anyone can infer anything from these logs.
These are the nodejs and express server versions -
node --version -> 12.20.0
express --version -> 4.16.4
It seems like the server goes to sleep and takes a lot of time to wake up.
I was able to solve this problem using a Rust-based implementation, but the root cause of the behavior was not the Node.js/Express server; it was the machine where I was deploying the code.
First I moved to a Rust implementation using the actix-web framework and saw performance issues similar to those with Node.js/Express.
Then I used the Rust rayon library to process the large arrays in parallel, and this resolved the performance issues.
I think the root cause was that the server I was deploying to has a weaker processor than my developer machine, which is why I never hit the issue in development:
Server machine - Intel Core Processor 2100MHz 8 Cores 16 Threads
Dev machine - Intel Xeon Processor 3.50GHz 6 Cores 12 Threads
Using any parallel-processing approach with the Node.js/Express implementation would probably also have solved this issue.
We are building a chatroom with our own notification system, without depending on GCM, using a service worker + SSE.
On desktop it is fine, but on the mobile Android app (which uses cordova-crosswalk, Chromium 53) the long-running notification connection becomes stuck after 20-30 minutes, even while the app is in a foreground activity.
It doesn't die with an error; it just stops receiving data. No error at all, which is very weird, and we have no way to know when to reconnect since we cannot tell whether the connection is dead.
What would be the cleanest way to handle this? Restarting the connection every 5 minutes is one idea, but it is not clean.
code
runEvtSource(url, fn) {
  if (this.get('session.content.isAuthenticated') === true) {
    var evtSource = new EventSource(url, {
      withCredentials: true
    });
    return evtSource; // the caller below relies on getting the EventSource back
  }
}
Aggressive reconnect code:
var evtSource = this.runEvtSource(url, fn)
var evtSourceErrorHandler = (event) => {
  var txt;
  switch (event.target.readyState) {
    case EventSource.CONNECTING:
      txt = 'Reconnecting...';
      break;
    case EventSource.CLOSED:
      txt = 'Reinitializing...';
      evtSource = this.runEvtSource(url, fn);
      evtSource.onerror = evtSourceErrorHandler;
      break;
  }
  console.log(txt);
}
evtSource.onerror = evtSourceErrorHandler;
I normally add a keep-alive layer on top of the SSE connection. It doesn't happen that often, but sockets can die without dying properly, so your connection just goes quiet and you don't get an error.
So, one way is, inside your data handler:
if (timer) clearTimeout(timer);
timer = setTimeout(reconnect, 30 * 1000);
// ...process the data
In other words, if it has been over 30 seconds since you last got data, reconnect. Choose a value based on the frequency of the data you send: if 10% of the time there is a 60-second gap between data events, but never a 120-second gap, then setting the time-out to something higher than 120 seconds makes sense.
You might also want to keep things alive by pushing regular messages from the server to client. This is a good idea if the frequency of messages from the server is very irregular. E.g. I might have the server send the current timestamp every 30 seconds, and use a keep-alive time-out of 45 seconds on the client.
As an aside, if this is a mobile app, bear in mind whether the user will appreciate the reduced latency of receiving chat messages enough to outweigh the reduced battery life.
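The keep-alive layer can be factored into a small watchdog (a sketch with hypothetical names; the EventSource wiring is shown in comments):

```javascript
// Watchdog: call feed() whenever data arrives; if nothing arrives within
// timeoutMs, onSilent fires (e.g. to tear down and reopen the connection).
class Watchdog {
  constructor(timeoutMs, onSilent) {
    this.timeoutMs = timeoutMs;
    this.onSilent = onSilent;
    this.timer = null;
  }
  feed() {
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(this.onSilent, this.timeoutMs);
  }
  stop() {
    if (this.timer) clearTimeout(this.timer);
  }
}

// Hypothetical wiring: reconnect when the stream is quiet for 45 s.
// const dog = new Watchdog(45 * 1000, () => reconnect());
// evtSource.onmessage = (e) => { dog.feed(); handleData(e.data); };
```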
I'm trying to write a piece of JavaScript that switches between two videos at timed intervals (don't ask). To make matters worse, each video has to start at a specific place (about ten seconds in; again, don't ask).
I got the basics working by using the YUI Async library to switch the videos at intervals:
YUI().use('async-queue', function (Y) {
// AsyncQueue is available and ready for use.
var cumulativeTime = 0;
var q = new Y.AsyncQueue()
for (var x = 0; x < settings.length; x++) {
cumulativeTime = cumulativeTime + (settings[x].step * 1000)
q.add( {
fn: runVideo,
args: settings[x].mainWindow,
timeout: cumulativeTime
})
}
q.run()
});
So far, so good. The problem is that I can't seem to get the video to start at ten seconds in.
I'm using this code to do it:
function runVideo(videoToPlay) {
console.log('We are going to play -> ' + videoToPlay)
var video = document.getElementById('mainWindow')
video.src = '/video?id=' + videoToPlay
video.addEventListener('loadedmetadata', function() {
this.currentTime = 10 // <-- Offending line!
this.play();
})
}
The problem is that this.currentTime refuses to hold any value I set it to. I'm running it in Chrome (the file is served from Google Storage behind a Google App Engine instance), and when the debugger steps past the line, the value is always zero.
Is there some trick I'm missing in order to set this value?
Thanks in advance.
Try using an Apache server (or any server that supports range requests).
Setting currentTime does not work with some simple servers,
e.g. the Python and PHP built-in development servers.
The HTTP server must support partial content responses (HTTP status 206).
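You can check this quickly from the browser by sending a small Range request yourself (a sketch; the /video URL is the endpoint from the question):

```javascript
// A server that supports seeking answers a Range request with 206 Partial
// Content; a plain 200 means it ignored the Range header, and setting
// currentTime before the data is buffered may silently fail.
function supportsSeeking(status) {
  return status === 206;
}

// Hypothetical usage against the question's endpoint:
// fetch('/video?id=someId', { headers: { Range: 'bytes=0-1' } })
//   .then((res) => console.log(supportsSeeking(res.status) ? 'seekable' : 'not seekable'));
```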