Creating a max-speed clock with setInterval only yields a maximum frequency of about 250Hz, since browsers clamp the minimum interval to roughly 4ms. For the project I'm working on I need a much faster clock, but I don't want to run a 1000Hz clock in the main thread, for obvious reasons.
For now, the only plausible way I've managed to reach 1000Hz is by creating a second worker that runs a while(true) loop and sends a message to its "master" worker every time 1 millisecond has passed. This works, and the timer is accurate, but it is extremely resource-intensive: just running the loop uses about 30% of my CPU.
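For reference, the busy-wait tick loop described above can be sketched like this. The tick logic is factored into a plain function here (`runTicks` is a name I've made up) so it can be exercised outside a worker; in the real worker you would run it forever and postMessage on every tick.

```javascript
// Busy-wait tick loop: spin until at least 1ms has elapsed since the last
// tick, then fire. This is what burns ~30% CPU; it never yields to the
// event loop while spinning.
function runTicks(durationMs, onTick) {
  const start = Date.now();
  let last = start;
  let ticks = 0;
  while (Date.now() - start < durationMs) {
    const now = Date.now();
    if (now - last >= 1) { // at least 1ms since the previous tick
      last = now;
      ticks++;
      onTick(now);
    }
  }
  return ticks;
}

// Inside the worker script, roughly:
// runTicks(Infinity, () => postMessage('tick'));
```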
Is there ANY other way to get a fast clock working? It can be as hacky as possible and abuse as many APIs as it needs; it just has to be A: faster than the maximum setInterval speed and B: as resource-efficient as possible.
Related
I am essentially trying to create a web app where one person can start a timer, and everyone else's timers (on different computers/phones) will start at the exact same time. I am currently using node.js and websockets. When the "master" timer hits start, the server uses websockets to tell all the devices to start. Since all the users should be on the same LAN, I thought I would not have to compensate for latency, but the timers start a few hundred milliseconds apart, and for my purposes that is very noticeable. There isn't much delay between PCs, but mobile phones tend to be the furthest off.
What would be the best way to get everything to start at exactly the same time, within a margin of error of, let's say, 50ms? I don't mind if the timers take a few extra seconds to start, as long as the delay between them stays within 50ms.
Send the clients a timestamp telling them when to start the timer.
The accuracy is then tied to the accuracy of each system's clock.
If you can't ensure that the system clocks are accurate, another approach is to measure each client's latency and apply it as an offset.
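A minimal sketch of that latency-offset idea (the helper names here are my own, not from any library): the client pings the server, estimates the server clock offset from the round trip, then converts the server's "start at" timestamp into a local setTimeout delay.

```javascript
// Estimate how far the server clock is ahead of the local clock, given a
// ping: the client records send time t0, the server replies with its own
// timestamp, and the client records receive time t1.
function serverOffsetFromPing(clientSendTime, serverTime, clientReceiveTime) {
  const latency = (clientReceiveTime - clientSendTime) / 2; // one-way ~ half RTT
  return serverTime + latency - clientReceiveTime;
}

// Translate a server-side start timestamp into a local delay in ms.
function localDelayUntil(serverStartAt, serverOffset, localNow) {
  return Math.max(0, serverStartAt - (localNow + serverOffset));
}

// On each client, roughly:
// const offset = serverOffsetFromPing(t0, serverTimestamp, Date.now());
// setTimeout(startTimer, localDelayUntil(startAt, offset, Date.now()));
```

Because every client schedules against the same server-side timestamp, network jitter on the "start" broadcast itself no longer matters, only the quality of each client's offset estimate.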
I'm trying to build a game which uses WebSockets. The collision detection and game state is handled on the server, which runs a game loop approx every 16ms. Each iteration, it sends out a message with the updated state to all players, which update the local copy and render the game.
About half the messages arrive fine, but sometimes there will be a batch where hundreds of milliseconds of game time arrive at once.
I created a minimal test case, which sends the current timestamp every 16ms. On the client, you can see it buffer messages every couple of seconds:
I've profiled the application, and over the duration of that gif there was only one dropped frame, with it otherwise maintaining a consistent 60fps.
I'm guessing GC could be the cause of one of the delays, but as for the others, and how to resolve this, I'm pretty stuck.
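To put numbers on the batching rather than eyeballing the capture, one small sketch (`findStalls` is a made-up helper): record each message's arrival time and flag any gap well above the 16ms send interval.

```javascript
// Given arrival timestamps (ms) of messages sent every sendIntervalMs,
// report the indices where the inter-arrival gap exceeds the interval by
// the given factor - those are the spots where messages were buffered.
function findStalls(arrivalTimes, sendIntervalMs, factor = 3) {
  const stalls = [];
  for (let i = 1; i < arrivalTimes.length; i++) {
    const gap = arrivalTimes[i] - arrivalTimes[i - 1];
    if (gap > sendIntervalMs * factor) stalls.push({ index: i, gap });
  }
  return stalls;
}

// On the client: push performance.now() into an array in the onmessage
// handler, then run findStalls(times, 16) to see how often and how badly
// the stream stalls.
```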
The application itself is Vue, however the game part is implemented in plain JS + Canvas.
Is your game loop using Window.requestAnimationFrame()? I would try to run the websocket code separately, e.g. in a timer.
You can also try other values for the websocket send interval, like 36ms, 60ms, or 120ms, and see if the problem persists. Maybe there are too many requests and some buffering is going on in the browser or on the server side.
I can't reproduce your problem; you should first confirm that GC really is the cause. If it is, try to eliminate the heavy GC pressure (reuse objects/arrays instead of recreating them). You could also try running your websocket code in a web worker, but if GC blocks the main thread you would need to move the game logic into the worker as well; this is just wild speculation. Either way, you can test the communication in a web worker without the game logic, just timestamp values.
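The object-reuse suggestion above can be sketched with a tiny object pool (`makePool` is a made-up name, not a library API): per-frame state objects are recycled instead of allocated fresh, which reduces the garbage the GC has to collect.

```javascript
// Minimal object pool: acquire() reuses a released object if one is
// available (resetting it first), otherwise creates a new one.
function makePool(create, reset) {
  const free = [];
  return {
    acquire() {
      const obj = free.pop() || create();
      reset(obj);
      return obj;
    },
    release(obj) {
      free.push(obj);
    },
  };
}

// Usage: reuse 2D position objects each frame instead of allocating.
// const pool = makePool(() => ({ x: 0, y: 0 }), p => { p.x = 0; p.y = 0; });
```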
I am testing performance in my JavaScript application, a game using canvas. One problem I have is major fluctuations in FPS: dropping from 60 to 2 within milliseconds.
As you can see, there are major spikes. They are not due to painting, scripting, rendering, or loading. I think it is because requestAnimationFrame doesn't enforce a fixed FPS and might be too flexible. Should I use setTimeout instead? Is it usually more reliable in these cases because it forces the application to run at one fixed FPS?
Performance is always about specifics. Without more information on your app (e.g. the specific code that renders it), it is hard to say how you should structure your code.
Generally, you should always use requestAnimationFrame. Especially for rendering.
Optionally store the delta time and multiply your animation attributes by that delta. This will create a smooth animation when the frame rate is not consistent.
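The delta-time approach above can be sketched like this (`makeLoop` is a made-up name; the loop is parameterized over the scheduler so the same logic works with requestAnimationFrame in the browser):

```javascript
// Delta-time game loop: update() receives the elapsed time in seconds
// since the previous frame, so movement stays smooth even when the frame
// rate fluctuates.
function makeLoop(update, render, schedule) {
  let last = null;
  function frame(now) {
    const dt = last === null ? 0 : (now - last) / 1000; // seconds per frame
    last = now;
    update(dt); // e.g. player.x += player.vx * dt (frame-rate independent)
    render();
    schedule(frame);
  }
  return frame;
}

// In the browser:
// makeLoop(update, render, requestAnimationFrame)(performance.now());
```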
I've also found random frame rate changes are usually related to garbage collection. Perhaps do some memory profiling to find if there are places you can avoid recreating objects each frame.
requestAnimationFrame is superior to setTimeout in nearly every way. It won't run in a background tab. It saves battery. It gives the browser more information about the type of app you are developing, which lets the browser make many safe, performance-increasing assumptions.
I highly recommend watching this talk by Nat Duca on browser performance.
According to the documentation for requestAnimationFrame at https://developer.mozilla.org/en-US/docs/Web/API/window/requestAnimationFrame:
The number of callbacks is usually 60 times per second, but will generally match the display refresh rate
I've not managed to find a system that runs at something other than 60fps. If the Javascript is busy, then I can certainly get it to run slower, but only in that case.
Is there a system that, if there are enough resources, runs requestAnimationFrame at something other than 60fps?
I am in the middle of creating a node.js project, but I am concerned about the performance of the website once it is up and running.
I am expecting it to have a surge of maybe 2000 users for 4-5 hours over a period of one night per week.
The issue is that each user could receive a very small message once every second, e.g. a timer adjustment or a price change.
2000 users * 60 seconds = 120000 messages in total per minute.
Would this be possible? It would be extremely important that there is minimal lag, less than 1 second if possible.
Thanks for the help
You can certainly scale to that many users with socket.io, but the question is how you want to do that. It is unclear whether you have considered the cluster module, as that will significantly take the load off the single node process for that number of users and reduce latency. Of course, when you do this you need to stop using the in-memory store that socket.io uses by default and use something like redis instead, so that you don't end up with duplicate authentication handshakes. Socket.io even has a document explaining how to do this.
What you really need to do is test the performance of your application by creating 2000 clients and simulating 2000 users. The socket.io client lets you set this up in a node application (and you could perhaps sign up for the free tier of an EC2 machine to run them all).
It's worth noting that I haven't actually seen v1.0 benchmarks; really, the socket.io developers should have a page dedicated to benchmarks, as this is a common question among developers.