I am making a game. The game has one main loop:
// draws a new frame and runs game logic
function draw()
{
    player.gameTick();
    game.gameTick();
    lastTime = newTime;
    background.draw(ctx);
    player.draw(ctx);
    enemies.draw(ctx);
    setTimeout(draw, 50);
}
Normally this operates fine, and I get 20 fps reported to #console. Once in a while, however, the fps spikes to over 125 (meaning draw is being called less than 50 milliseconds after the previous call). When this happens, the game starts to lag for a couple of seconds, and then the fps goes back down. (That's also counterintuitive: why does higher fps cause lag?)
Anyway, does anyone know why this is the case?
Yes I have tried setInterval() also, and the same thing happens. =/
JavaScript is single threaded. If you're scheduling two timeouts for 50 ms independently, there's a chance that they eventually collide, or come close to it, and cause an odd processing blockage for a second before they sort themselves out and can space out again. You should probably consolidate the code and create a single main loop that calls the other two functions. That way you can ensure that they both process once each per 50 ms.
Your flow looks something like this:
Process1() -> takes some amount of time, hopefully less than 50ms, but guaranteed to be > 0ms.
Process1 sets a timeout for itself 50ms in the future. The 2nd run of Process1 will occur more than 50ms after it started the 1st time.
Process2() -> takes some amount of time, greater than 0ms. Same as Process1.
Process2 sets a timeout for itself. Same rules apply. The 2nd run of Process2 will occur more than 50ms after it started the 1st time.
If, theoretically, Process1 takes 5ms and Process2 takes 7ms, every 7 runs of Process1 or 5 runs of Process2 will cause the next timeout set for 50ms in the future to correspond exactly with the scheduled time for the next run of the other function. This type of collision will cause inconsistent behavior when the interpreter, which can only do one thing at a time, is asked to handle multiple events concurrently.
-- Edits for your revisions to the question --
You're still setting two independent timeouts for 50ms in the future. There's still no way to prevent these from colliding, and I'm still not entirely certain why you're approaching it this way. You should have something like this:
function mainLoop() {
    player.tick();
    game.tick();
    background.draw();
    player.draw();
    enemies.draw();
    setTimeout(mainLoop, 50);
}
mainLoop();
Note that there is now only a single setTimeout call. Everything happens in one go. You're creating collisions in both versions of the code you've shown.
This problem was caused by browser JavaScript timer throttling. I noticed the problem was exacerbated when I had many tabs open or when I switched between tabs. Chrome had less of an issue, most likely because its tabs run in isolated processes.
Firefox 7 seems to have fixed this problem for FF.
People usually do this:
var DBOpenRequest = window.indexedDB.open("toDoList");
DBOpenRequest.onsuccess = function(event) { /* Good */ };
If the second line of the code is not executed in a timely manner, the onsuccess will miss the event.
Well, the problem does not happen often because the delay between those two lines is usually very short. But, still, the outcome of those two lines is not deterministic. On my machine, if I simulate 270 ms delay between those two lines, the event will be missed. I think the current signature of the open() is inadequate.
The correct asynchronous design pattern is to set the event handler first, then to start the actual asynchronous operation. The open() function should take a callback as an argument.
Any comments?
Updated questions:
async function test(delay)
{
    var req = indexedDB.open("test");
    // Simulate a delay
    await new Promise(resolve => setTimeout(resolve, delay));
    req.onsuccess = function (evt) { console.log("Good"); };
}
test(1);
If the delay is 1 ms, the "Good" will be logged. If the delay is 1000 ms like test(1000), the "Good" will not be logged, meaning the event handler is not called.
Review the basics of the EventListener design pattern. indexedDB adopts the same pattern used throughout several JavaScript components, such as DOM elements and XMLHttpRequest, and assumes your familiarity with the pattern.
Review how JavaScript's event loop operates because it is important to understand asynchronous code.
Whether you bind a listener function before or after an event is dispatched is irrelevant within the context of asynchronous code. The binding that you claim occurs later, because it is written in a statement on the following line of code, does not actually occur later. Regardless of where the line is written (barring some pedantic exceptions), it occurs within the same tick of the event loop as the previous line, the call to open. The event does not fire until, at the earliest, the next event loop epoch, which will always be after the binding occurred in the previous epoch. Always. Guaranteed.
The time delay between calling the code that causes an event and the eventual occurrence and reception of that event is irrelevant. This delay depends on many other things: how powerful your machine is, how many resources are available to your PC, how busy your script is doing other things, possibly even how much junk you have loaded into the DOM, because any of those things could contribute to extending the lifetime of the current epoch of the event loop. The delay is implemented as an indefinite wait period; it is basically coded as "occur in some later event loop epoch". That could be 1 nanosecond later or 10000 seconds later; the amount of the delay is irrelevant. The only relevant concern is that the event fires in a later event loop epoch, some time after everything else occurred in the prior epoch.
The second line will always be executed in a timely manner, because the basic criteria for timely here is simply "in the same epoch of the event loop", and here, timely, again, could mean any amount of time.
The outcome is deterministic. Stop thinking of time as an amount of milliseconds elapsed. Instead, think of time as ticks of an event loop. Each tick can take a variable amount of time. Ticks are ordered. Ticks are serial. Tick 2 will occur after Tick 1. Tick 3 will occur after Tick 2. Etc. This is therefore deterministic with regard to execution order, accounting for variable amounts of time per tick. A tick is just a period of time, and despite the periods having variable lengths, you can still state that some period occurs before or after some other period. Also, no two periods ever overlap (no true concurrency, not actually multi-threaded, not strictly interleaved).
I dunno, imagine a stopwatch, or an old wristwatch, or a clock on the wall. The hand travels around the face of the watch. Let's pretend it takes 1 second for the hand to travel from the starting point, all the way around 360 degrees, and return to the starting point. Each roundtrip, let's call it an epoch, or a tick. We can then count how many roundtrips occur, by counting the number of passes back across the starting point. Basically the number of epochs, basically there is cardinality.
Now imagine two watches. The first watch still takes a full second to travel all the way around. The second watch, however, let's pretend, it is an old slow watch, takes 2 seconds to travel all the way around. Half the frequency. Now, the thing is, even though the timing is different between the two watches, we still can make claims like "watch 1 did 10 roundtrips" and "watch 2 did 5 roundtrips".
Now, take it further. Let's take a watch, and introduce random external factors. There are cosmic rays that introduce sporadic gravitational effects on the speed of the hand as it rotates around the watch face. So, some roundtrips hit little speed bumps, and take longer. So we get a distribution of roundtrip times. We still have an average round trip time, but not a constant round trip time. In roundtrip 1, our watch may make the trip in 1 second. In round 2, it may take 2.5 seconds. In round 3, it may take 1 nanosecond. The time is variable. But, this variability does not prevent us from stating things like "well there were 3 round trips when we observed it", and "well the second round trip occurred after the first one", and "no roundtrip travel ever occurred at the same time as another (round trip 1 and 2 and 3 have no overlap)".
These roundtrips are the epochs of the javascript event loop. Your bindings all take place in roundtrip #1. The event that occurs as result of opening indexedDB never takes place in roundtrip #1, no matter where you write it, in whatever order you write it, etc. The event always occurs in either roundtrip #2, or #3, or basically any epoch after roundtrip #1. Even if the event is magically technically 'ready' in #1, it still will not be exposed until #2 or later.
Because the call to open and the binding of the open listener both occur in the first epoch, it is irrelevant to talk about whether the listener is bound before or after the call to open. Because those all happen in #1, and the open event doesn't happen until at the earliest #2.
I'm troubleshooting a performance issue with my webapp. I have a reproducible bug that causes frame times of 6-9 seconds. This is obviously not ideal performance, so I'm using the Performance tab of the Chrome DevTools to look into it.
During the long frame, it appears that my function (the highest purple row) is called hundreds of times, each call lasting from 0.7ms to 5ms. (Hovering the mouse over them reveals that each one has a different time associated.) Then, over to the right, there's one much longer call (515ms), which to me seems more realistic (the function does a lot of complex stuff and I haven't optimised it yet).
I put a console.log() statement as the first line in my function and it only gets called once. If my function is only called once why does it appear so many times in the Performance chart? Am I reading this wrongly?
Thanks,
Josh
I'm setting a timer in Node.js that waits 3 hours; once that time is reached, it emits an event to all browsers that are listening.
The browsers fetch the information about the event before it happens, then calculate the time remaining and tick a countdown every second, expecting that when the clock reaches 0 the event will be triggered.
So one is using setTimeout (Node.js) and the other is using setInterval (browser) counting per second (countdown).
Can I be sure that:
By the time the countdown reaches 0, the event will be triggered with an error range of around 1 to 2 seconds. (browser).
That the Node.js setTimeout is accurate enough to be called with a less than 1 second error range.
I've read about timers being 500ms to even 1000ms inaccurate, which is fine for my needs, but I have never heard of them being used for as much time as I want here.
Are they accurate enough, or should I use a different solution? Especially on the Node.js side, which has to be the most accurate of them all.
One alternative is to run an interval in Node.js around 4 times per second that checks the current time in milliseconds against a list of scheduled events and calls any that are due.
In the browser, the alternative is an interval callback that recomputes the remaining time from the date in milliseconds on every tick, to keep the countdown synchronized.
The accuracy of the timer will be dependent upon how "busy" the event loop is at the time of the timeout.
It should be good enough if you wanted something like:
setTimeout(done, THREE_HOURS_IN_MS);
If your event loop is blocking for any length of time you have other problems.
But if you are sampling four times a second as part of the countdown, then I would expect a large inaccuracy to accrue.
So you may need to keep the two activities (sampling and countdown) separate, or maintain the elapsed time manually.
Note that when your web app does not have focus, the accuracy of timers degrades.
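For the browser-side countdown, one way to keep the elapsed time honest is to recompute the remaining time from the absolute deadline on every tick, rather than decrementing a counter. A sketch (the helper name `startCountdown` and the 250 ms sampling rate are illustrative, the latter borrowed from the question's "4 times per second" idea):

```javascript
// Hypothetical drift-free countdown: `deadline` is the absolute time
// (ms since epoch) at which the server said the event will fire.
function startCountdown(deadline, onTick, onDone) {
  const id = setInterval(() => {
    // Recompute from the clock each tick instead of decrementing a
    // counter, so setInterval jitter never accumulates.
    const remainingMs = deadline - Date.now();
    if (remainingMs <= 0) {
      clearInterval(id);
      onDone();
    } else {
      onTick(Math.ceil(remainingMs / 1000)); // whole seconds left
    }
  }, 250); // sample ~4 times per second
  return id;
}
```

Because each tick derives the remaining time from `Date.now()`, a throttled or delayed interval can only make the display update late; it cannot make the countdown drift away from the real deadline.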
In my game engine, there are objects that need to be updated periodically. For example, a scene can be lowering its alpha, so I set an interval that does it. Also, the camera sometimes needs to jiggle a bit, which requires interpolation on the rotation property.
I see that there are two ways of dealing with these problems:
Have an update() method that calls all other object's update methods. The objects track time since they were last updated and act accordingly.
Do a setInterval for each object's update method.
What is the best solution, and why?
setInterval does not keep to a clock; it just sequences events as they come in. Browsers tend to keep at least some minimal amount of time between events, so if you have 10 events that all need to fire after 100ms, you'll likely see the last one fire well into the 200ms range. (This is easy enough to test.)
Having only one event (and calling update on all objects) is in this sense better than having each object set its own interval. There may be other considerations, but for this reason alone, option 2 is unfeasible.
For more about setInterval, see: How do browsers determine what time setInterval should use?
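The "easy enough to test" claim can be checked with a small experiment along these lines (the numbers are illustrative): schedule ten callbacks for the same 100 ms deadline and record when each actually fires.

```javascript
// Schedule 10 timeouts for the same 100 ms deadline and record how
// long each one actually waited.
const start = Date.now();
const delays = [];
for (let i = 0; i < 10; i++) {
  setTimeout(() => {
    delays.push(Date.now() - start); // actual elapsed ms, not 100
    if (delays.length === 10) {
      console.log(delays); // in a busy tab the later values creep upward
    }
  }, 100);
}
```

None of the callbacks can fire early, and in a loaded environment the later ones land progressively further past the 100 ms mark.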
The best way I have found to make a good update() function, keep a good framerate, and reduce load is the following.
Have a single update() method that draws your frame by looping over some sort of queue/schedule of drawable objects, each with its own update() function, which are added to this update queue/schedule (much like event listeners).
This way you don't have to loop over objects that aren't scheduled for a redraw/update (like menu buttons or crosshairs), and you don't have an overabundance of intervals running for all drawable objects.
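A minimal sketch of that scheme, with hypothetical `schedule`/`unschedule` helpers and a `running` flag added so the loop can be stopped:

```javascript
// Hypothetical single master loop: only objects currently in the
// queue get updated, and each receives the elapsed time since the
// previous tick so it can scale its animation.
const updateQueue = new Set();

function schedule(obj) { updateQueue.add(obj); }      // e.g. a fading scene
function unschedule(obj) { updateQueue.delete(obj); } // e.g. a static menu button

let running = true;            // flip to false to stop the loop
let lastTime = Date.now();

function masterUpdate() {
  const now = Date.now();
  const dt = now - lastTime;   // ms since the previous tick
  lastTime = now;
  for (const obj of updateQueue) {
    obj.update(dt);            // each object advances by dt
  }
  if (running) setTimeout(masterUpdate, 50); // one timer for everything
}
masterUpdate();
```

Objects that have nothing to animate simply aren't in the queue, so they cost nothing per tick.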
I recommend using the update() method over the setInterval.
Also, I would guess that the timing of several setInterval timers running at once would be unreliable.
Additionally, depending on what else is happening in your game, using a bunch of separate intervals could introduce race conditions in the counting and comparing of scores, etc.
The proposed algorithms are not exclusive to either timer method. That is, you can use setInterval to call all the update methods, or you can have each object update itself by repeatedly calling setTimeout.
More to the point is that a single timer is less overhead than multiple timers (of either type). This really matters when you have lots of timers. On the other hand, only one timer may not suit because some objects might need to be updated more frequently than others, or to a different schedule, so just try to minimise them.
An advantage of setTimeout is that the interval to the next call can be adjusted to meet specific scheduling requirements, e.g. if one call is delayed, you can skip the next one or move it sooner. setInterval will slowly drift relative to a consistent clock, and one-off adjustments are more difficult.
On the other hand, setInterval only needs to be called once, so you don't have to keep re-arming the timer. You may end up with a combination.
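A sketch of such a self-adjusting setTimeout chain (the helper name is made up, and the `maxTicks` cap is an illustrative addition so the example terminates):

```javascript
// Hypothetical self-adjusting setTimeout chain: each tick aims at an
// absolute schedule, so lateness in one tick shortens the next delay
// instead of accumulating as drift.
function startAdjustingLoop(onTick, interval, maxTicks) {
  let expected = Date.now() + interval;
  let count = 0;
  function tick() {
    const drift = Date.now() - expected; // how late this tick fired
    onTick(drift);
    if (++count >= maxTicks) return;
    expected += interval;
    setTimeout(tick, Math.max(0, interval - drift)); // correct the next delay
  }
  setTimeout(tick, interval);
}
```

Each tick compares the clock against where the schedule says it should be, so the reported drift stays bounded rather than growing with every iteration as it would with a bare setInterval.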
Currently, I am rendering WebGL content using requestAnimationFrame which runs at (ideally) 60 FPS. I'm also concurrently scheduling an "update" process, which handles AI, physics, and so on using setTimeout. I use the latter because I only really need to update objects roughly 30 times per second, and it's not really part of the draw sequence; it seemed like a good idea to save the remaining CPU for actual render passes, since most of my animations are fairly hardware intensive.
My question is one of best practices. setTimeout and setInterval are not particularly kind to battery life and CPU consumption, especially when the browser is not in focus. On the other hand, using requestAnimationFrame (or tying the updates directly into the existing render phase) will potentially enforce far more updates every second than are strictly necessary, and may stop updating altogether when the browser is not in focus or at other times the browser deems unnecessary for "animation".
What is the best course of action for updating, but not rendering content?
setTimeout and setInterval are not particularly kind to battery life and CPU consumption
Let's be honest: Neither is requestAnimationFrame. The difference is that RAF automatically turns off when you leave the tab. That behavior can be emulated with setTimeout if you use the Page Visibility API, though, so in reality the power consumption problems between the two are about on par if used intelligently.
Beyond that, though, setTimeout/setInterval is perfectly appropriate for your case. The only thing you may want to be aware of is that you'll be hard pressed to get it perfectly in sync with the render loop. You'll have cases where you draw one too many times before your animation update hits, which can lead to minor stuttering. If you're rendering at 60hz and updating at 30hz it shouldn't be a big issue, but you'll want to be aware of it.
If staying perfectly in sync with the render loop is important to you, you could simply have an if (frameCount % 2) { updateLogic(); } at the top of your RAF callback, which effectively limits your updates to 30hz (every other frame) and is always in sync with the draw.
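As a sketch of that approach (the counter names are made up, the `running` flag is added so the loop can be stopped, and the setTimeout fallback is just so the snippet runs outside a browser):

```javascript
// Fall back to setTimeout outside the browser so the sketch runs anywhere.
const raf = typeof requestAnimationFrame !== 'undefined'
  ? requestAnimationFrame
  : cb => setTimeout(cb, 16);

let frameCount = 0;
let updates = 0, renders = 0; // counters just to make the ratio visible
let running = true;           // flip to false to stop the loop

function updateLogic() { updates++; /* AI, physics, ... at ~30hz */ }
function render()      { renders++; /* draw calls at ~60hz */ }

function frame() {
  frameCount++;
  if (frameCount % 2 === 0) {
    updateLogic(); // every other frame, always in sync with the draw
  }
  render();
  if (running) raf(frame);
}
raf(frame);
```

Because the update is gated inside the same callback that draws, it can never race ahead of or fall behind the render loop; it runs exactly once per two frames.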