I'm looking for the best practice approach to stop PhysicsJS bodies from being able to rotate.
I have tried cancelling the body's rotational velocity, yet this does not seem effective on its own, as the body still somehow sneaks in some rotation during the step. Combining this with setting the body's rotation manually seems to work at first:
world.on('step', function(){
    obj.state.angular.pos = 0;
    obj.state.angular.vel = 0;
    world.render();
});
However, in the past I have experienced a lot of bugs related to this method, seemingly to do with the object being allowed to rotate slightly before the step is called, which causes it to be "rotated" very quickly when its body.state.angular.pos is set back to zero. This results in objects suddenly finding themselves inside the subject, or the subject suddenly finding itself inside walls or other objects, which as you can imagine is not a desirable situation.
I also feel like setting a body's rotation so forcefully must not be the best approach, and yet I can't think of a better way. So I'm wondering whether there's some method in PhysicsJS that I haven't discovered yet that basically just states "this object cannot rotate" while still allowing the body to be treated as dynamic in all other ways.
Alternatively, what is the "safest" approach to achieving the desired effect? I would be happy even with a generic guide not tailored to PhysicsJS, just something to give me an idea of the general best practice for controlling dynamic body rotations in a simulation.
Thanks in advance for any advice!
The key to accomplishing this is to put the body to sleep and then immediately wake it up, setting the angular velocity to zero afterward.
So for example, what I've been doing to prevent bodies that have collided from rotating was:
world.on('collisions:detected', function(data, e) {
    var bodyA = data.collisions[0].bodyA,
        bodyB = data.collisions[0].bodyB;

    // Sleeping and immediately waking each body resets its internal
    // state so the zeroed angular velocity actually sticks.
    bodyA.sleep(true);
    bodyA.sleep(false);
    bodyA.state.angular.vel = 0;

    bodyB.sleep(true);
    bodyB.sleep(false);
    bodyB.state.angular.vel = 0;
});
I've also seen this accomplished by increasing the mass of the bodies in question to a ridiculously high number, but this would have possible side effects that you may not desire.
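If you need the step-handler approach from the question in more than one place, here is a minimal sketch of a reusable helper built only from the calls shown above; zeroing state.angular.acc as well is my assumption about PhysicsJS's state object, not something from the original answer:
// Sketch: pin a body's rotation on every step of a PhysicsJS world.
function lockRotation(world, body) {
    world.on('step', function() {
        body.state.angular.pos = 0;  // keep the orientation fixed
        body.state.angular.vel = 0;  // cancel any rotational velocity
        body.state.angular.acc = 0;  // assumption: angular acceleration field
    });
}

// Usage:
// lockRotation(world, obj);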
I discovered a weird behavior in nodejs/chrome/v8. It seems this code:
var x = str.charCodeAt(5);
x = str.charCodeAt(5);
is faster than this
var x = str.charCodeAt(5); // x is not greater than 170
if (x > 170) {
    x = str.charCodeAt(5);
}
At first I thought maybe the comparison is more expensive than the actual second call, but when the content inside the if block is not calling str.charCodeAt(5), the performance is the same as with a single call.
Why is this? My best guess is v8 is optimizing/deoptimizing something, but I have no idea how to exactly figure this out or how to prevent this from happening.
Here is the link to jsperf that demonstrates this behavior pretty well at least on my machine:
https://jsperf.com/charcodeat-single-vs-ifstatment/1
Background: The reason I discovered this is that I tried to optimize the token reading inside of babel-parser.
I tested and str.charCodeAt() is twice as fast as str.codePointAt(), so I thought I could replace this code:
var x = str.codePointAt(index);
with
var x = str.charCodeAt(index);
if (x >= 0xaa) {
    x = str.codePointAt(index);
}
But the second code does not give me any performance advantage because of above behavior.
V8 developer here. As Bergi points out: don't use microbenchmarks to inform such decisions, because they will mislead you.
Seeing a result of hundreds of millions of operations per second usually means that the optimizing compiler was able to eliminate all your code, and you're measuring empty loops. You'll have to look at generated machine code to see if that's what's happening.
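One common mitigation (a sketch of the general idea, not code from the question) is to accumulate the results and make them observable, so the work cannot be proven dead:
// Sketch: keep the benchmark's work observable so the optimizing
// compiler cannot eliminate it as dead code.
var str = 'some example string to scan';
var sink = 0;

for (var i = 0; i < 1e7; i++) {
    var x = str.charCodeAt(5);
    if (x >= 0xaa) {
        x = str.codePointAt(5);
    }
    sink += x;  // consume the result on every iteration
}

console.log(sink);  // printing the sum keeps the loop's effect live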
When I copy the four snippets into a small stand-alone file for local investigation, I see vastly different performance results. Which of the two is closer to your real-world use case? No idea. And that kind of makes any further analysis of what's happening here meaningless.
As a general rule of thumb, branches are slower than straight-line code (on all CPUs, and with all programming languages). So (dead code elimination and other microbenchmarking pitfalls aside) I wouldn't be surprised if the "twice" case actually were faster than either of the two "if" cases. That said, calling String.charCodeAt could well be heavyweight enough to offset this effect.
My program has multiple scenes. Only one scene is visible at any time, of course, so how do I stop the animation for all the scenes except the current one to improve performance? There are 10 scenes, and each of them has a lot of objects that strain performance.
I know it can probably be done with cancelAnimationFrame(), but how do I implement it? Supposedly this is the recommended best practice for improving performance?
The 10 scenes are held in an array, and the current scene is selected by a numerical value from user input, used as an index into the scenes[] array.
function changeScene() {
    // Set the currentScene to the user selection
    var sceneNumber = list.options[list.selectedIndex].value;
    currentScene = scenes[sceneNumber - 1];
    // Hide all scenes, then show only the current one
    for (var j = 0; j < 10; j++) {
        scenes[j].visible = false;
    }
    currentScene.visible = true;
}
I have already programmed all scenes to be hidden by default except for the currentScene. But I have found no help researching how to use cancelAnimationFrame() in my case. Also, a follow-up question: will using cancelAnimationFrame() only take effect after the page has loaded all the scenes with all their objects, or would it help with that loading time as well?
My FIDDLE with the above and related code.
Animation is the change of numeric parameters over time. In your example that means changing the spinningRig rotation angle inside your animate() method. As long as you change the rotation for the current scene only -- which you actually do -- the other scenes are not being animated.
If you're perhaps addressing rendering instead of animation, there is again no need to worry, because Three.js renders only the one scene you specify in renderer.render().
Or are you addressing something else entirely? Animation clearly isn't a problem here.
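To make that concrete, here is a minimal sketch of such a loop, reusing the currentScene variable from the question; camera and spinningRig are assumed to exist as in the fiddle:
// Sketch: animate and render only the scene the user selected.
function animate() {
    requestAnimationFrame(animate);

    // Mutate parameters of the current scene only; the other nine
    // scenes are neither updated nor rendered, so they cost nothing.
    spinningRig.rotation.y += 0.01;

    renderer.render(currentScene, camera);
}
animate();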
I'm using d3.js 3.5.6. How do we tick the force layout in our own render loop?
It seems that when I call force.start(), that automatically starts the force layout's own internal render loop (using requestAnimationFrame).
How do I prevent d3 from making a render loop, so that I can make my own render and call force.tick() myself?
This answer is plain wrong. Don't refer to it, don't use it.
I wrote a new one explaining how to do this correctly. I remember spending days digging into this as I thought I had discovered an error. And, judging by the comments and the upvotes, I have managed to trick others, even legends like Lars Kotthoff, into following me down this wrong road. Anyways, I have learned a lot from my mistake. You only have to be ashamed of your errors if you do not take the chance to learn from them.
As soon as this answer is unaccepted I am going to delete it.
At first I was annoyed by the lack of code in the question and considered the answer to be rather easy and obvious. But, as it turned out, the problem has some unexpected implications and yields some interesting insights. If you are not interested in the details, you might want to have a look at my Final thoughts at the bottom for an executable solution.
I had seen code and documentation for doing the calculations of the force layout by explicitly calling force.tick.
# force.tick()
Runs the force layout simulation one step. This method can be used in conjunction with start and stop to compute a static layout. For example:
force.start();
for (var i = 0; i < n; ++i) force.tick();
force.stop();
This code always seemed dubious to me, but I took it for granted because the documentation had it and Mike Bostock himself made a "Static Force Layout" Block using the code from the docs. As it turns out, my intuition was right, and both the Block and the documentation are wrong, or at least way off track:
1. Calling start will do a lot of initialization of your nodes and links data (see the documentation of nodes() and links()). You cannot just dismiss the call, as you have experienced yourself; the force layout won't run without it.
2. Another thing start will eventually do is fire up the processing loop by calling requestAnimationFrame or setTimeout, whichever is available, with force.tick as the callback. This results in asynchronous processing which will repeatedly call force.tick, thereby doing the calculations and calling your tick handler if provided. The only non-hacky way to break this loop is to drop alpha below the hard-coded freezing point of 0.005 by calling force.alpha(0.005) or force.stop(). This will stop the loop on the next call to tick. Unless the timer is stopped this way, it will continue looping log0.99(0.005 / 0.1) = ln(0.005 / 0.1) / ln(0.99) ≈ 298 times until alpha has dropped below the freezing point.
One should note that this is not the case in the documentation's example or the Block: the timer is never stopped this way, so the tick loop started by force.start() will continue running asynchronously and do its calculations.
The subsequent for-loop might or might not have any effect on the result of the force layout. If the timer happens to still be running in the background, this means concurrent calls to force.tick from the timer as well as from the for-loop. In any case, the calculations will stop once alpha has dropped low enough, after a total of 298 calls to tick. This can be seen from the following lines:
force.tick = function() {
    // simulated annealing, basically
    if ((alpha *= 0.99) < 0.005) {
        timer = null;
        event.end({type: "end", alpha: alpha = 0});
        return true;
    }
    // ...
}
From that point on you can call tick as often as you like without any change to the layout's outcome. The method is entered, but, because of the low value of alpha, will return immediately. All you will see is a repeated firing of end events.
To affect the number of iterations you have to control alpha.
That the layout in the Block seems static is simply because no callback for the "tick" event is registered which could update the SVG on every tick; the final result is only drawn once. And this result is ready after just 298 iterations; it won't be changed by subsequent, explicit calls to tick. The final call to force.stop() won't change anything either, as it just sets alpha to 0, which has no effect on the result because the force layout has long come to an implicit halt.
Conclusion
Item 1 could be circumvented by a clever combination of starting and stopping the layout, as in Stephen A. Thomas's great series "Understanding D3.js Force Layout", where from example 3 on he uses button controls to step through the calculations. This, however, will also come to a halt after 298 steps. To take full control of the iterations you need to:
1. Provide a tick handler and immediately stop the timer by calling force.stop() therein. All calculations of this step will have been completed by then.
2. In your own loop, calculate the new value for alpha. Setting this value via force.alpha() will restart the layout. Once the calculations of this next step are done, the tick handler will be executed, resulting in an immediate stop as seen above. For this to work you will have to keep track of alpha within your loop; a sketch follows below.
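A minimal sketch of those two steps (draw() is a placeholder for whatever updates your SVG, not part of the original code):
// Sketch: drive the layout one step per animation frame.
var alpha = 0.1;  // d3's default starting alpha

force.on('tick', function() {
    force.stop();  // step 1: halt the internal timer after every single step
    draw();        // placeholder: update the SVG with this step's positions
});

force.start();  // initialize nodes and links; computes the first step

function renderLoop() {
    alpha *= 0.99;  // step 2: manage the cooling ourselves
    if (alpha >= 0.005) {
        force.alpha(alpha);  // restart the layout for exactly one more step
        requestAnimationFrame(renderLoop);
    }
}
renderLoop();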
Final thoughts
The least invasive solution might be to call force.start() as normal and instead alter the force.tick function to immediately halt the timer. Since the timer in use is a normal d3.timer it may be interrupted by returning true from the callback, i.e. from the tick method. This could be achieved by putting a lightweight wrapper around the method. The wrapper will delegate to the original tick method, which is closed over, and will return true immediately afterwards, whereby stopping the timer.
force.tick = (function(forceTick) {
    return function() {  // This will be the wrapper around tick which returns true.
        forceTick();     // Delegate to the original tick method.
        return true;     // Truth hurts. This will end the timer.
    }
}(force.tick));  // Pass in the original method to be closed over.
As mentioned above, you are now on your own managing the decreasing value of alpha to control the slowing of your layout's movements. This, however, requires only simple arithmetic and a loop to set alpha and call force.tick as you like. There are many ways this could be done; for a simple showcase I chose a rather verbose approach:
// To run the computing steps in our own loop we need
// to manage the cooling by ourselves.
var alphaStart = 0.1;
var alphaEnd = 0.005;
var alpha = alphaStart;
var steps = n * n;
var cooling = Math.pow(alphaEnd / alphaStart, 1 / steps);
// Calling start will initialize our layout and start the timer
// doing the actual calculations. This timer will halt, however,
// on the first call to .tick.
force.start();
// The loop will execute tick() a fixed number of times.
// Throughout the loop the cooling of the system is controlled
// by decreasing alpha to reach the freezing point once
// the desired number of steps is performed.
for (var i = 0; i < steps; i++) {
    force.alpha(alpha *= cooling).tick();
}
force.stop();
To wrap this up, I forked Mike Bostock's Block to build an executable example myself.
You want a Static Force Layout as demonstrated by Mike Bostock in his Block. The documentation on force.tick() has the details:
# force.tick()
Runs the force layout simulation one step. This method can be used in conjunction with start and stop to compute a static layout. For example:
force.start();
for (var i = 0; i < n; ++i) force.tick();
force.stop();
As you have experienced yourself, you cannot just dismiss the call to force.start(). Calling .start() will do a lot of initialization of your nodes and links data (see the documentation of nodes() and links()); the force layout won't run without it. However, this will not start the force right away. Instead, it will schedule the timer to repeatedly call the .tick() method for asynchronous execution. It is important to notice that the first execution of the tick handler will not take place before all your current code has finished. For that reason, you can safely create your own render loop by calling force.tick(), as sketched below.
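A minimal sketch of that idea (draw() is a placeholder for your own rendering code, not part of the original answer):
force.start();   // initializes nodes and links; d3's internal timer is
                 // scheduled but cannot fire until this code has finished

for (var i = 0; i < n; ++i) {
    force.tick();   // advance the simulation one step, synchronously
}
force.stop();       // freeze the layout

draw();             // placeholder: render the final node/link positions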
For anyone interested in the gory details of why the scheduled timer won't run before the current code has finished I suggest thoroughly reading through:
DVK's answer (not the accepted one) to "Why is setTimeout(fn, 0) sometimes useful?".
John Resig's excellent article on How JavaScript Timers Work.
Years ago, I heard about a nice 404 page and implemented a copy.
In working with ReactJS, I intended to implement the same idea, but it is slow and jerky in its motion, and after a while Chrome gives it an "unresponsive script" warning, pinpointed to line 1226, "var position = index % repeated_tokens.length;", with a few hundred milliseconds' delay between successive calls. The script consistently goes beyond making the page unresponsive to bringing the computer to its knees.
Obviously, they're not the same implementation, although the ReactJS version is derived from the "I am not using jQuery yet" version. But beyond that, why is it bogging down? Am I creating a deep stack of closures? Why is the ReactJS port slower than the bare JavaScript original?
In both cases the work is driven by minor arithmetic and there is nothing particularly interesting about the code or what it is doing.
--UPDATE--
I see I've gotten a downvote and three close votes...
This appears to have gotten responses from people who are (a) saying something sensible and (b) contradicting what Pete Hunt and other people have said.
What is claimed, among other things, by Hunt and Facebook's ReactJS video, is that the synthetic DOM is lightning-fast, enough to pull 60 frames per second on a non-JIT iPhone. And they've left an optimization hook to say "Ignore this portion of the DOM in your fast comparison," which I've used elsewhere to disclaim jurisdiction of a non-ReactJS widget.
@EdBallot's suggestion is that it's "an extreme (and unnecessary) amount of work to create and render an element, and do a single document.getElementById." Now I'm factoring out that last bit; DOM manipulation is slow. But the responses here are hard to reconcile with what Facebook has been saying about performant ReactJS. There is a "Crunch all you want; we'll make more" attitude about (theoretically) throwing away the DOM and making a new one, which is lightning-fast because it's done in memory without talking to the real DOM.
In many cases I want something more surgical and can attempt to change the smallest area possible, but the letter and spirit of the ReactJS videos I've seen are squarely behind "Crunch all you want; we'll make more."
Off to trying suggested edits to see what they will do...
I didn't look at all the code, but for starters, this is rather inefficient
var update = function() {
    React.render(React.createElement(Pragmatometer, null),
                 document.getElementById('main'));
    for (var instance in CKEDITOR.instances) {
        CKEDITOR.instances[instance].updateElement();
    }
    save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
It is doing an extreme (and unnecessary) amount of work and it is being done every 100ms. Among other things, it is calling:
React.createElement
React.render
document.getElementById
Probably, with the number of JS objects being created and released, your update function plus garbage collection is taking longer than 100ms, effectively bringing the computer to its knees.
At the very least, I'd recommend caching as much as you can outside of the interval callback. Also, there is no need to call React.render multiple times. Once the component is rendered into the DOM, use setProps or forceUpdate to make it render changes.
Here's an example of what I mean:
// React.render returns the mounted component instance,
// which is what forceUpdate must be called on.
var mainComponent = React.render(
    React.createElement(Pragmatometer, null),
    document.getElementById('main')
);
var update = function() {
    mainComponent.forceUpdate();
    for (var instance in CKEDITOR.instances) {
        CKEDITOR.instances[instance].updateElement();
    }
    save('Scratchpad', document.getElementById('scratchpad').value);
};
var update_interval = setInterval(update, 100);
Beyond that, I'd also recommend moving the setInterval code into whatever React component is rendering that stuff (the Scratchpad component?).
A final comment: one of the downsides of using setInterval is that it doesn't wait for the callback function to complete before queuing the next callback. An alternative is to use setTimeout, with the callback setting up the next setTimeout, like this:
var update = function() {
    // do some stuff

    // when the work is done, schedule the next run
    setTimeout(update, 100);
};
setTimeout(update, 100);
I'm using a 2d array to represent the grid of cells. When a cell on an edge or in a corner checks neighbors that would be out of bounds, it treats them as permanently dead.
function getCell(row, column) {
    if (row === -1 || row === cellMatrix.length ||
        column === -1 || column === cellMatrix[0].length) {
        return 0;
    }
    else return cellMatrixCopy[row][column];
}
I'd just like to get rid of behavior like gliders halting and turning into blocks when they reach the edge of the grid. How would you "get rid" of the edges of the array?
You can check out the full implementation here. Thanks in advance.
How to fake an “infinite” 2d plane?
To create an infinite game board, use a sparse matrix representation where the row and column indices are arbitrary-precision integers. Roughly as follows:
std::map<std::pair<BigInt, BigInt>, Cell> matrix;

Cell& get_cell(BigInt x, BigInt y)
{
    return matrix[std::make_pair(x, y)];
}
The Game of Life is impossible to fake being infinite. Say a structure you create tries to expand past the current bounds: what is to stop it from expanding forever? Like MitchWheat said, you could wrap the edges, which is your best bet (for the more natural-looking behavior), but since the Game of Life is Turing complete you can't fake it being infinite in every situation without infinite memory.
Since the Game of Life is Turing complete, if something "goes off the edge" it is impossible to tell in the general case whether it will come back (this is related to the halting problem). That means any heuristic you use to decide when something has gone off the edge will have a degree of inaccuracy; if that is acceptable then you should consider that approach too, though IMO simulating the Game of Life incorrectly kinda defeats the purpose.
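If you do go with wrapped edges, a minimal sketch of a toroidal variant of the getCell from the question (reusing cellMatrixCopy) might look like this:
// Sketch: wrap out-of-bounds lookups around to the opposite edge,
// turning the grid into a torus.
function getCell(row, column) {
    var rows = cellMatrixCopy.length;
    var cols = cellMatrixCopy[0].length;
    // Adding rows/cols before the modulo keeps negative indices positive.
    return cellMatrixCopy[(row + rows) % rows][(column + cols) % cols];
}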
Another way to go would be to purposely simulate a larger area than you show, so objects appear to go off the edge.
This question is also here with some good answers.
The simplest modification to your simulation (and one not mentioned on the other question) would be to triple the area's length and width but only show what you currently show. The only catch would be if something went off-screen, hit the wall, and then came back. You could try playing with the game rules at the boundary to make it more likely for a cell to die, so that a glider could get a little further in before becoming a block that could mess things up; for example, a live cell with three neighbors could die if it were within two steps of the edge.
One of the other answers (and ours) can be summarized as: keep track of the live cells, not the entire screen. Then you would be able to do cool stuff like zoom and pan as if it were Google Maps.
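A minimal sketch of that live-cell approach in JavaScript (my illustration, not code from any of the answers):
// Sketch: represent the board as a Set of live cells keyed "x,y",
// so the grid is unbounded and cost scales with population, not area.
function step(liveCells) {
    var counts = {};  // neighbour counts for every cell that matters

    liveCells.forEach(function(key) {
        var parts = key.split(',');
        var x = +parts[0], y = +parts[1];
        for (var dx = -1; dx <= 1; dx++) {
            for (var dy = -1; dy <= 1; dy++) {
                if (dx === 0 && dy === 0) continue;
                var nKey = (x + dx) + ',' + (y + dy);
                counts[nKey] = (counts[nKey] || 0) + 1;
            }
        }
    });

    var next = new Set();
    Object.keys(counts).forEach(function(key) {
        var n = counts[key];
        // Standard B3/S23 rules: birth on 3, survival on 2 or 3.
        if (n === 3 || (n === 2 && liveCells.has(key))) {
            next.add(key);
        }
    });
    return next;
}

// Usage: advance a glider one generation.
var cells = new Set(['1,0', '2,1', '0,2', '1,2', '2,2']);
cells = step(cells);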