I am currently developing a peer-to-peer game in JavaScript using WebRTC. It treats one of the peers (i.e. the host) as the server, and any other peers who join connect to the host through a Node.js brokering server.
I am currently trying to solve an issue where the game stops updating for everyone if the host switches tabs such that the game is no longer the active tab. After doing some research, I discovered that this is because I'm using something like:
setTimeout(callback, 1000 / 60);
in my game loop. setTimeout (at least in Chrome and Firefox, which are the browsers I'm concerned with) is defined such that if the page calling it is not in the active tab, its timeout is clamped to a minimum of one second, so the callback fires at most once per second.
I read that Web Workers don't have this constraint, but in order to make that work I would need to run all of my game logic inside the worker. I tried sending my game object to the worker using JSON.stringify(), but it threw an error because the object contains a circular reference (through the game loop) and so can't be converted to JSON. I'm not sure what to do about that.
I also looked into implementing my own timer which kept running regardless of whether the page was in the active tab, but I'm not sure how to do this, either.
I don't really have a problem doing it either of these ways, or even some other way I haven't thought of yet, provided it doesn't incur a large performance overhead. Any suggestions would be greatly appreciated.
So, as I said above, Web Workers can call setTimeout() without the 1-second clamp in inactive tabs. My solution was to create a Web Worker which was only responsible for calling setTimeout() (called in its onmessage event listener). Then, at the end of each game loop I called:
this.worker.postMessage(null)
It could be argued that it would be more efficient to give the Web Worker more responsibility than just calling setTimeout(), since I've already added the overhead of passing messages between the main thread and the worker. This is something I might look at in the future.
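Put together, the pattern described above might look like the following sketch. The worker source is inlined via a Blob URL so no separate file is needed; `gameLoop` is a placeholder for your real update/render tick, not code from the question.

```javascript
// Target frame interval for a 60fps loop.
const FRAME_MS = 1000 / 60;

// The worker's only job: on each message, schedule a setTimeout and post
// back when it fires. Inside a worker, setTimeout is not clamped to 1s
// in background tabs.
const workerSource =
  'onmessage = function () {' +
  '  setTimeout(function () { postMessage(null); }, ' + FRAME_MS + ');' +
  '};';

function gameLoop() {
  // update game state and render one frame (placeholder)
}

// Browser-only wiring; guarded so the sketch is harmless elsewhere.
if (typeof Worker !== 'undefined' && typeof Blob !== 'undefined') {
  const url = URL.createObjectURL(
    new Blob([workerSource], { type: 'application/javascript' })
  );
  const worker = new Worker(url);
  worker.onmessage = function () {
    gameLoop();               // one tick on the main thread
    worker.postMessage(null); // ask the worker to schedule the next tick
  };
  worker.postMessage(null);   // kick off the loop
}
```

The main thread still does all the game work; the worker is used purely as an unthrottled timer.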
The main problem with doing it this way is compatibility with IE, which did not get support for Web Workers until version 10. This isn't really a concern for me, but I think it's worth mentioning.
Related
I'm trying to build a game which uses WebSockets. Collision detection and game state are handled on the server, which runs a game loop approximately every 16 ms. On each iteration, it sends out a message with the updated state to all players, who update their local copy and render the game.
About half the messages arrive fine, but sometimes there will be a batch where hundreds of milliseconds of game time arrive all at once.
I created a minimal test case which sends the current timestamp every 16 ms. On the client, you can see it buffer messages every couple of seconds:
I've profiled the application, and over the duration of that gif there was only one dropped frame; otherwise it maintained a consistent 60 fps.
I'm guessing GC could be the cause of one of the delays, but as for the others, and how to resolve this, I'm pretty stuck.
The application itself is Vue, however the game part is implemented in plain JS + Canvas.
Is your game loop using Window.requestAnimationFrame()? I would try to run the WebSocket code separately, e.g. on its own timer.
You can also try other values for the WebSocket send interval, like 36 ms, 60 ms, or 120 ms, and see if the problem persists. Maybe there are too many requests and some buffering is going on in the browser or on the server side.
I can't reproduce your problem, so you should first make sure the GC really is the cause. If it is, try to eliminate the heavy GC pressure (e.g. by reusing objects and arrays). You could also move your WebSocket code into a Web Worker, but if the GC runs on the main thread it will still block, so you might need to put the game logic in the worker as well. That's just speculation, though; either way, you can test the communication in a Web Worker without any game logic, using timestamp values only.
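As a first diagnostic, something like the following sketch records the gap between consecutive message arrivals, so buffering shows up as a spike well above the 16 ms send interval. The `ws://localhost:8080` endpoint is a stand-in for your real server; the stats logic is plain JS.

```javascript
// Records gaps between calls to record(); a buffered burst of messages
// shows up as one large gap followed by many near-zero ones.
function makeGapRecorder() {
  let last = null;
  const gaps = [];
  return {
    record(now) {
      if (last !== null) gaps.push(now - last);
      last = now;
    },
    max() { return gaps.length ? Math.max.apply(null, gaps) : 0; },
    gaps,
  };
}

const recorder = makeGapRecorder();

// Browser wiring (guarded so the sketch runs nowhere else by accident).
if (typeof window !== 'undefined' && typeof WebSocket !== 'undefined') {
  const ws = new WebSocket('ws://localhost:8080'); // hypothetical endpoint
  ws.onmessage = function () {
    recorder.record(performance.now());
    if (recorder.max() > 100) {
      console.log('buffered burst detected, max gap (ms):', recorder.max());
    }
  };
}
```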
What I want to be able to do, in effect, is check for messages from my Web Workers at a set point. At the moment, I tell my Web Workers to go ahead and get on with some complex calculations in the background while the main frame gets on with its own work. When the workers are done, they interrupt the main frame to run their callbacks, and this usually happens at a non-ideal time, affecting performance. So is there a way to queue the callbacks and run them only when the main frame says it's OK to do so?
The only option I can see at the moment is to set up an extra worker that the other workers post their results to (using a MessageChannel) when they are done, which the main frame can then check on its own terms!
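A simpler variant of the "check at a set point" idea needs no extra aggregator worker: the main thread's onmessage handler only enqueues results, and the animation loop drains the queue when it decides it has time. This is a sketch, with the worker wiring left to you (assign `onWorkerMessage` as each worker's onmessage handler).

```javascript
// Results posted by workers wait here until the main frame is ready.
const pending = [];

// Use as worker.onmessage = onWorkerMessage; it does no heavy work,
// it just enqueues, so a worker finishing never stutters the animation.
function onWorkerMessage(event) {
  pending.push(event.data);
}

// Call this from the main frame at a convenient point, e.g. right after
// rendering a frame, to process everything that has arrived since.
function drainPending(handleResult) {
  while (pending.length > 0) {
    handleResult(pending.shift());
  }
}
```

In a requestAnimationFrame loop you would call `drainPending(...)` after drawing, so callback work happens only when the frame budget allows it.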
Thanks...
Update
After messing around with this for a while, I've realised that what I thought was my problem is not actually my problem! What's actually happening is that when my Web Workers finish, my code slows down, and since I'm running an animation in the main frame, this shows up as a stutter each time a worker finishes! If I keep the workers going in the background with random tasks, this doesn't happen. Very confusing!
Why would this happen? It's not as if I'm calling anything when the script finishes...
Update 2
After a ridiculous amount of searching and playing around, it turns out that this is only an issue when the Chrome (62) DevTools panel is open!
I'm developing an app that receives a .CSV file, saves it, scans it, inserts the data of every record into the DB, and at the end deletes the file.
With a file of about 10,000 records there are no problems, but with a larger file the PHP script still runs correctly and all the data is saved into the DB, yet the response is "ERROR 504: The server didn't respond in time."
I'm scanning the .CSV file with the PHP function fgetcsv().
I've already edited the settings in the php.ini file (max execution time (120), etc.) but nothing changed; after 1 minute the error is still shown.
I've also tried using a JavaScript function to show an alert every 10 seconds, but the error is still shown in that case too.
Is there a solution to avoid this problem? Is it possible to send some data from the server to the client every few seconds to avoid the error?
Thanks
It's typically when scaling issues pop up that you need to start evolving your system architecture, and your application will need to work asynchronously. The problem you are having is very common (some of my team are dealing with one as I write this), and everyone needs to deal with it eventually.
Solution 1: Cron Job
The most common solution is to create a cron job that periodically scans a queue for new work to do. I won't go into the nature of the queue since everyone rolls their own; some are alright and others are really bad, but typically it involves a DB table with the relevant information and a job status (one of the bad solutions), or a solution involving Memcached. MongoDB is also quite popular.
The "problem" with this solution is, again, ultimately "scaling". Cron jobs run periodically at fixed intervals, so if a task takes particularly long, jobs are likely to overlap. This means you need to work in some kind of locking, or use a scheduler that guarantees the jobs run sequentially.
In the end, you won't run into the timeout problem, and you can typically dedicate an entire machine to running these tasks so memory isn't as much of an issue either.
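To make the locking point concrete, here is a minimal sketch (in JavaScript, with an in-memory array standing in for the DB table; these names are illustrative, not an existing API). Each cron run claims the next pending job by flipping its status before working on it, so overlapping runs never pick up the same job; in a real setup the claim would be a single atomic UPDATE in the database.

```javascript
// Stand-in for a DB table of queued work with a job status column.
const jobs = [
  { id: 1, status: 'pending' },
  { id: 2, status: 'pending' },
];

// Claim the next pending job by marking it 'running' before returning it.
// In SQL this whole function would be one atomic statement, e.g.:
//   UPDATE jobs SET status='running' WHERE status='pending' LIMIT 1
function claimNextJob(queue) {
  const job = queue.find(function (j) { return j.status === 'pending'; });
  if (!job) return null; // nothing to do this run
  job.status = 'running';
  return job;
}

// Mark a finished job so later runs skip it.
function completeJob(job) {
  job.status = 'done';
}
```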
Solution 2: Worker Delegation
I'll use Gearman as an example for this solution, but other tools implement standards such as AMQP, e.g. RabbitMQ. I prefer Gearman because it's simpler to set up, and it's designed more for work processing than messaging.
This kind of delegation has the advantage of running immediately after you call it. The server basically waits for stuff to do (not unlike an Apache server); when it gets a request, it shifts the workload from the client onto one of your "workers": scripts you've written that run indefinitely, listening to the server for work.
You can have as many of these workers as you like, each running the same or different types of tasks. This means scaling is determined by the number of workers you have, and this scales horizontally very cleanly.
Conclusion:
Crons are fine, in my opinion, for automated maintenance, but they run into problems when they need to work concurrently, which makes workers the ideal choice here.
Either way, you are going to need to change the way users receive feedback on their requests. They will need to be informed that their request is processing and to check back later for the result; alternatively, you can periodically poll the status of the running task to provide real-time feedback to the user via Ajax. That's a little tricky with cron jobs, since you would need to persist the state of the task during its execution, but Gearman has a nice built-in solution for doing just that.
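A hedged sketch of that client-side Ajax polling might look like the following. The `/job-status` endpoint, the `id` parameter, and the job's `status` field are assumptions for illustration, not an existing API; the decision logic is kept as a pure function so it's easy to test.

```javascript
// Pure decision: keep polling only while the task is still in progress.
function shouldKeepPolling(status) {
  return status === 'queued' || status === 'running';
}

// Poll a hypothetical status endpoint every intervalMs until the job
// reports completion, passing each update to the caller for display.
function pollJobStatus(jobId, onUpdate, intervalMs) {
  function tick() {
    fetch('/job-status?id=' + jobId) // hypothetical endpoint
      .then(function (res) { return res.json(); })
      .then(function (job) {
        onUpdate(job); // e.g. update a progress bar
        if (shouldKeepPolling(job.status)) {
          setTimeout(tick, intervalMs); // check again later
        }
      });
  }
  tick();
}
```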
http://php.net/manual/en/book.gearman.php
I am writing a CPU-intensive JavaScript application. I am running into a problem where the UI sometimes locks up while a CPU-intensive calculation occurs. I know that the standard approach to solving this is to call setTimeout and let the event loop respond to UI events. However, that doesn't work for me, and here's why.
When the page loads, the JavaScript VM needs to do a bunch of parsing and analyzing of chunks of data. This is truly background stuff, and I am calling setTimeout to run each chunk. However, this means that the user gets a very choppy UI experience until all chunks have completed (which can take up to 10 seconds for large files), and again on every save. This is not acceptable.
I can think of 2 solutions, neither of which I really like:
Be more granular about the chunks, thus providing more opportunities for the event loop to run. But I don't like this because the CPU-bound code is already quite complex and, though it typically runs well, calling setTimeout throughout it would make it far more complicated.
Do more work on the server. However, I am running a node server and this would simply push the problem from the client to the server, with the added problem of increased bandwidth.
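For reference, option 1 in its simplest form is something like the following sketch, where the chunk size is the knob that trades throughput against responsiveness. The function names are placeholders, not code from the application.

```javascript
// Process items in small batches, yielding to the event loop between
// batches via setTimeout so UI events can be handled in between.
function processInChunks(items, handleItem, chunkSize, done) {
  let i = 0;
  function step() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) handleItem(items[i]);
    if (i < items.length) {
      setTimeout(step, 0); // yield so the UI can respond before the next chunk
    } else {
      done();
    }
  }
  step(); // the first chunk runs synchronously, the rest asynchronously
}
```

The objection in the question stands: making real code this granular means restructuring every loop in the CPU-bound path, which is exactly the complexity cost being weighed here.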
Fixing this would be trivial on a traditional thread-based VM. What should I do for Javascript?
UPDATE:
Some points that I forgot to mention:
We are not concerned with legacy browsers and all users will be required to use a modern Firefox, Chrome, Opera, Safari, IE, etc.
Our initial prototype has the client and server co-located, but there should be nothing preventing us from moving to a remote server.
The data lives on the client (well...obviously, if the client and server are the same machine, but this will be the case even when we move to remote servers).
Web Workers might be the solution, but they still seem flaky. Does anyone have experience with them? Are they stable? Which modern browsers do not support them well? Are there any general problems with them?
Depending on whether this application will ever become public, you have to decide whether you can use Web Workers, split the data up further, or do the processing server-side. For real-world applications, the real solution is doing the heavy computation on the server, since you can't expect the user to have the latest processor; it might be a mere netbook, which will probably just cough a few times and then crash.
Web Workers would be a solution when you can be sure that users have the latest browsers that support them; if that's not the case, there's no way to shim them like most HTML5 features.
Based on what I know about your application, I'd say that you should send precomputed data to the client. Furthermore, Node.js is bad at doing hardcore computations so you might want to look into different data processing options on the server. Also, I don't think bandwidth will be a problem since you have to give the client the initial data anyway. How much bigger is the processed data?
I'm interested in adding an HTML/web-browser based "log window" to my net-enabled device. Specifically, my device has a customized web server and an event log, and I'd like to be able to leave a web browser window open to e.g. http://my.devices.ip.address/system_log and have events show up as text in the web browser window as they happen. People could then use this as a quick way to monitor what the system is doing, without needing to run any special software.
My question is, what is the best way to implement this? I've tried the obvious approach of having my device's embedded web server hold the HTTP/TCP connection open indefinitely and write the necessary text to the TCP socket when an event occurs, but the problem is that most web browsers (e.g. Safari) don't display the web page until the server has closed the TCP connection. The result is that the log data never appears in the web browser; it just acts as if the page is taking forever to load.
Is there some trick to make this work? I could implement it as a Java applet, but I'd much prefer something more lightweight and simple, using only HTML or possibly HTML + JavaScript. I'd also like to avoid having the web browser poll the server, since that would either introduce too much latency (if the reload delay were large) or put load on the system (if the delay were small).
If you don't want to use polling, you're more or less stuck writing your log viewer completely outside of a browser. However, it's quite simple to write polling that does a good job of minimizing both latency and system load (the concerns you mentioned).
The trick to this is to use Comet, which is essentially Ajax + long polling.
Well, since you do say you are willing to do it in JavaScript:
Have your process continuously write, without closing the connection
In the client (the browser), use an XMLHttpRequest object and monitor its readyState:
0 = uninitialized
1 = loading
2 = loaded
3 = interactive
4 = complete
Obviously 3 and 4 are what you are looking for. When you get output, all you have to do is write responseText to a div, and you're set. See the XMLHttpRequest documentation for its properties and usage.
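One caveat: on each readyState 3 event, responseText contains everything received so far, not just the new data, so you need to track how much you have already displayed and append only the new tail. A sketch (assuming the page has a `<div id="log">` element; the /system_log URL comes from the question):

```javascript
// Pure helper: return only the text that arrived since the last event.
function newChunk(responseText, seenLength) {
  return responseText.slice(seenLength);
}

// Browser wiring, guarded so the sketch is inert outside a browser.
if (typeof XMLHttpRequest !== 'undefined' && typeof document !== 'undefined') {
  const xhr = new XMLHttpRequest();
  let seen = 0;
  xhr.open('GET', '/system_log'); // the device's log URL
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 3 || xhr.readyState === 4) {
      // Append only the unseen portion of the stream to the log element.
      document.getElementById('log').textContent += newChunk(xhr.responseText, seen);
      seen = xhr.responseText.length;
    }
  };
  xhr.send();
}
```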
Since this is quite old now: you can use WebSockets for this.
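A minimal WebSocket version of the same log window might look like this sketch. It assumes the device exposes a WebSocket endpoint at /system_log and the page has a `<div id="log">` element; both are assumptions, not part of the original setup. The reconnect delay grows exponentially and is capped, which keeps load low if the device goes away.

```javascript
// Pure helper: exponential backoff delay in ms, capped at 30 seconds.
function nextReconnectDelay(attempt) {
  return Math.min(1000 * Math.pow(2, attempt), 30000);
}

// Open the log socket, appending each message to the log element and
// reconnecting with backoff if the device drops the connection.
function openLogSocket(url, attempt) {
  const ws = new WebSocket(url);
  ws.onopen = function () { attempt = 0; }; // reset backoff once connected
  ws.onmessage = function (event) {
    document.getElementById('log').textContent += event.data + '\n';
  };
  ws.onclose = function () {
    setTimeout(function () {
      openLogSocket(url, attempt + 1);
    }, nextReconnectDelay(attempt));
  };
}

if (typeof window !== 'undefined' && typeof WebSocket !== 'undefined') {
  openLogSocket('ws://' + location.host + '/system_log', 0);
}
```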