EaselJS with 200+ vector shapes: performance and aesthetics - javascript

I'm having a huge perf issue when using EaselJS vs the native canvas methods: 2.2 s vs 0.01 s (even though EaselJS does map to the native canvas methods...).
I wrote a canvas app which draws a tree*. In order to animate the growth of the tree, it seems cleaner and handier to tween an EaselJS graphics object than to write a custom drawing function.
*(actually it'll look more like a Virginia creeper)
1) I must have done something wrong, as the performance with EaselJS is awful.
The significant piece of code:
var g = new createjs.Graphics();
g.bs(branchTexture);
for (var i = 0; i < arbre.length; i++) {
    var step = arbre[i];
    for (var t = 0; t < step.length; t++) {
        var d = step[t];
        var c = d[5];
        var s = new createjs.Shape(g);
        s.graphics.setStrokeStyle(7 * (Math.pow(.65, d[7] - 1)), 'round').mt(c[0], c[1]).qt(c[2], c[3], c[4], c[5]);
    }
    mainStage.addChild(s);
}
mainStage.update();
Here's a jsfiddle showing the comparison :
http://jsfiddle.net/xxsL683x/29/
I tried removing the texture and simplifying the moveTo() operations, but the result is still very slow. Caching doesn't apply, as it's the first rendering pass.
What have I done wrong ?
2) Additionally, I need to find a way to orient the filling pattern in the direction of the growth, so I think I'll get rid of the vector shapes and interpolate some bitmap between two "growth states". If anyone has a better idea, I would be glad to read it.
(But it would be nice to have an answer to the first question though, it's probably not the last time I'll be attempting to tween complex objects.)
3) Another reason for getting rid of EaselJS is that antialiasing doesn't apply. Why? That also has to be investigated...
Hope this question will help others avoid some errors, if I've made any, or at least lead to a better understanding of the way EaselJS handles vector shapes.
PS: The tree does have leaves ;)... but I pulled them out for obvious clarity reasons.

I've updated your Fiddle, and it's now much faster. The problem is that you're creating a new Shape on every loop iteration, which is pretty slow.
for (var t = 0; t < step.length; t++) {
    // loop
    g.setStrokeStyle(7 * (Math.pow(.65, d[7] - 1)), 'round').mt(c[0], c[1]).qt(c[2], c[3], c[4], c[5]);
}
// then just create a Shape after all your graphic commands are built
var s = new createjs.Shape(g);
mainStage.addChild(s);
http://jsfiddle.net/xxsL683x/31/
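For completeness, applying that change to the question's nested loops would look roughly like this (a sketch assuming the same arbre data layout as above; not run against the fiddle):
var g = new createjs.Graphics();
g.bs(branchTexture); // beginStroke with the texture, as in the question

for (var i = 0; i < arbre.length; i++) {
    var step = arbre[i];
    for (var t = 0; t < step.length; t++) {
        var d = step[t];
        var c = d[5];
        // accumulate every segment into the single Graphics instance
        g.setStrokeStyle(7 * Math.pow(.65, d[7] - 1), 'round').mt(c[0], c[1]).qt(c[2], c[3], c[4], c[5]);
    }
}

// one Shape wrapping all the commands, added to the stage once
var s = new createjs.Shape(g);
mainStage.addChild(s);
mainStage.update();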

Related

WebAudio FDN Lossless Prototype unstable when using individual feedback gains

I'm trying to build an FDN Reverberator in WebAudio by following this article.
There is a simplified implementation of a Householder FDN which uses a common feedback gain for all delays and seems pretty stable.
However, when I try to implement the more general case, where the feedback is mixed by a matrix, I cannot seem to make it stable.
I have inlined most of the code to narrow down the issue, and put it in a JSFiddle.
EDIT: Warning, high volume in the unstable case.
The difference comes down to this:
var feedback = context.createGain();
feedback.gain.value = gainValue;

for (var i = 0; i < n; i++) {
    this.delays[i].connect(feedback);
    feedback.connect(this.delays[i]);
}
Compared to:
for (var i = 0; i < n; i++) {
    for (var o = 0; o < n; o++) {
        var feedback = context.createGain();
        feedback.gain.value = gainValue;
        this.delays[i].connect(feedback);
        feedback.connect(this.delays[o]);
    }
}
When I use a common feedback GainNode for all delays, it works fine. When I create individual feedback GainNodes for all delays, using the same gainValue, it becomes unstable.
What am I doing wrong?
EDIT: Clarification from the article.
As mentioned in §3.4, an "ideal" late reverberation impulse response should resemble exponentially decaying noise [314]. It is therefore useful when designing a reverberator to start with an infinite reverberation time (the "lossless case") and work on making the reverberator a good "noise generator". Such a starting point is [often] referred to as a lossless prototype [153,430]. Once smooth noise is heard in the impulse response of the lossless prototype, one can then work on obtaining the desired reverberation time in each frequency band (as will be discussed in §3.7.4 below).
The gain nodes add volume on top of each other. If you use multiple gain nodes, you need to split their value by the number of active gain nodes.
So if you have 10 gain nodes playing simultaneously, the value for each gain node would be value / 10 (the number of active gain nodes). You also need to update the existing gain nodes to the new value, so it's best to store all the gain nodes in an array and loop over it.
I didn't try it, but I think it should work. It is physically complete nonsense, because if you have ten people crying in the same room the dB meter is still about as loud as if one were crying. Think of gain nodes in a more mathematical way: they add their signals up on top of each other.
Your Reverb is dope by the way.
In code it means:
for (var i = 0; i < n; i++) {
    for (var o = 0; o < n; o++) {
        var feedback = context.createGain();
        feedback.gain.value = gainValue / 9;
        this.delays[i].connect(feedback);
        feedback.connect(this.delays[o]);
    }
}
I may reuse this code someday, right??? If I set n to 30 I get kind of a cymbal sound.
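Generalizing the hard-coded /9 and following the suggestion above to keep the gain nodes in an array so their values can be rescaled later, a sketch might look like this (the divisor assumes one GainNode per delay pair, i.e. n * n of them, which matches the /9 above if n was 3 in the original fiddle):
this.feedbackGains = []; // keep references so the values can be updated later
var numActiveGains = n * n; // one GainNode per (source delay, destination delay) pair

for (var i = 0; i < n; i++) {
    for (var o = 0; o < n; o++) {
        var feedback = context.createGain();
        feedback.gain.value = gainValue / numActiveGains; // split the gain across all nodes
        this.delays[i].connect(feedback);
        feedback.connect(this.delays[o]);
        this.feedbackGains.push(feedback);
    }
}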

canvas - change perspective of the camera in a 2d setup

TLDR:
I need to change the perspective over an object in a 2D canvas.
Let's say I take a photo of my desk while I sit next to it. I want to change that image to look as if I were seeing it from above. Basically the view would change like in the image below (yes, my paint skills are awesome :)).
I am using canvas and easeljs (createjs).
I am totally aware that this is not a 3D object, and that I don't have a stage and a camera. I am also aware that easeljs doesn't support this as a feature.
What I am actually trying to do is emulate this (I am also not interested in the quality of the result at this point).
What I tried, and it worked (I noticed I am not the first to try to do this and didn't actually find a real answer, so this is my current approach):
I take the original image and divide it into a high number of images (the higher the better), as in the image below.
Then I scale each of the "mini-images" on the x axis with a different factor (that I compute depending on the angle the picture was taken at). Part of the relevant code below (excuse the hardcoding and creepy code - it's from a proof of concept I made in about 10 minutes):
for (var i = 0; i < 400; i++) {
    crops[i] = new createjs.Bitmap(baseImage);
    crops[i].sourceRect = new createjs.Rectangle(0, i, 700, i + 1);
    crops[i].regX = 350;
    crops[i].y = i - 1;
    crops[i].x = 100;
    crops[i].scaleX = (400 - i / 2) * angleFactor;
    stage.addChild(crops[i]);
}
Then I crop again, keeping only the relevant part.
Ok... so this works, but the performance is terrible: you basically generate 400 images (in this case) and then put them onto a canvas. Yes, I know it can be optimized a bit, but it's still pretty bad. So I was wondering if you have any other (preferably better) approaches.
I also tried combining a skew transformation with a scale and a rotation, but I could not achieve the right result (I actually still think there may be something here... I just can't put my finger on it, and my "real" math is a bit rusty).
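One variation worth sketching (not from the original post, and untested): the same strip-scaling idea can be done without creating hundreds of Bitmap instances by drawing each row straight from the source image with drawImage's source/destination rectangles on a plain 2D context:
var ctx = document.getElementById('canvas').getContext('2d'); // hypothetical canvas element

for (var i = 0; i < 400; i++) {
    var scale = (400 - i / 2) * angleFactor; // same per-row factor as above
    var destWidth = 700 * scale;
    var destX = 100 - 350 * scale; // mirrors regX = 350 pinned at x = 100
    // copy one 1px-high row of the source, scaled horizontally
    ctx.drawImage(baseImage, 0, i, 700, 1, destX, i, destWidth, 1);
}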

Object pooling (isometric tiling) on phaser and camera positioning

I am new to Phaser and I am currently having a hard time generating a tilemap with the aid of the Phaser isometric plugin. I am also having trouble understanding some of the concepts related to the Phaser world, game, and camera, which makes generating the tilemap correctly even harder. I have noticed that this problem seems to be an obstacle for Phaser newbies like myself, and having it correctly explained would certainly help.
Getting to the matter:
I have generated a tilemap using a for loop with single grass sprites. The for loop is working correctly, and I have also implemented a function where I can specify the number of tiles I want to generate.
{
    spawnBasicTiles: function(half_tile_width, half_tile_height, size_x, size_y) {
        var tile;
        for (var xx = 0; xx < size_x; xx += half_tile_width) {
            for (var yy = 0; yy < size_y; yy += half_tile_height) {
                tile = game.add.isoSprite(xx, yy, 0, 'tile', 0, tiles_group);
                tile.checkWorldBounds = true;
                tile.outOfBoundsKill = true;
                tile.anchor.set(0.5, 0);
            }
        }
    }
}
So the process of generating the static grass tiles is not a problem. The problem, and one of the reasons I am trying to get object pooling to work correctly, is that when the tile count goes above 80 it has a dramatic impact on game performance. Since I aim to make HUGE, auto-generated maps that are rendered according to the player character's position, object pooling is essential. I have created a group for these tiles and added the properties I thought would be required for the out-of-bounds tiles not to be rendered (physics, world bounds...). However, after many attempts I concluded that even the tiles that are out of bounds are still being generated. I also tried another method instead of add.isoSprite to generate the tiles, namely .create, but it made no difference. I guess no tiles are being "killed".
Is this process correct? What do I need to do in order to ONLY generate the tiles that appear on camera (I assume the camera is the visible game rectangle), and to generate the rest of them when the character moves to another area, assuming the camera is already tracking the character's movement?
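A rough sketch of that kind of camera culling over tiles_group might look like the following (assuming Phaser 2.x with the isometric plugin, that the plugin keeps each tile's projected world position in tile.x / tile.y, and an arbitrary padding value; untested):
cullTiles: function(padding) {
    var view = game.camera.view; // the rectangle of the world currently on screen
    tiles_group.forEach(function(tile) {
        var visible = tile.x > view.x - padding && tile.x < view.right + padding &&
                      tile.y > view.y - padding && tile.y < view.bottom + padding;
        if (visible && !tile.alive) {
            tile.revive(); // reuse the pooled sprite instead of creating a new one
        } else if (!visible && tile.alive) {
            tile.kill(); // stop rendering/updating it, but keep it in the group
        }
    }, this, false); // checkExists = false so killed tiles are iterated too
}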
Besides that, I am looking to spawn my character in the middle of the world, which is the middle of the generated grass tilemap. However, I am having a hard time doing that too. I think the following are the concepts I should play with in order to achieve it, even though I haven't managed to yet:
.anchor.set()
game.world.setBounds() (especially this one... it doesn't seem to set the bounds where I tell it to);
Phaser.game (I set its size to window.width and window.height; not having much trouble with this...)
Summing up:
Using this for-loop method of generating tiles, how can I make infinite/almost-infinite maps that are generated as the camera that follows my character moves to another region? And besides that, what is the proper way to always spawn my character in the middle of my generated grass map, and to have the camera start where the character is?
I would appreciate the help a lot and hope this might be valuable for people in situations similar to mine. For any extra information just ask, and if you want the function for generating the tiles by a specific number I will gladly post it. Sorry for any language mistakes.
Thank you very much.
EDIT:
I was able to spawn my character always in the middle of the grass by setting its X and Y to (size/width) and (size/height). Size is the measure of X and Y in px.

Speed up simplex algorithm

I am playing around with a great simplex algorithm I found here: https://github.com/JWally/jsLPSolver/
I have created a jsfiddle where I set up a model and solve the problem using the algorithm above: http://jsfiddle.net/Guill84/qds73u0f/
The model is basically a long array of variables and constraints. You can think of it as trying to find the cheapest means of transporting passengers between different hubs (countries), where each country has a minimum demand for passengers and a maximum supply of passengers, and each connection has a price. I don't care where passengers go, I just want to find the cheapest way to distribute them. To achieve this I use the following minimising objective:
model = {
    "optimize": "cost",
    "opType": "min",
    "constraints": { // etc...
I am happy with the model and the answer provided by the algorithm... but the latter takes a very long time to run (> 15 seconds). Is there any possible way I can speed up the calculation?
Kind regards and thank you.
G.
It sounds as though you have a minimum-cost flow problem. There's a reasonable-looking TopCoder tutorial on min-cost flow by Zealint, who covers the cycle-canceling algorithm that would be my first recommendation (assuming there's no quick optimization to be had for your LP solver). If that's still too slow, there's a whole literature out there.
Since you're determined to solve this problem with an LP solver, my suggestion would be to write a simpler solver that is fast and greedy but suboptimal, and use its output as a starting point for the LP by expressing the LP in terms of the difference from that starting point.
#Noobster, I'm glad that someone other than me is getting use out of my simplex library. I went through it, looked at it, and was getting around the same runtime as you (10-20 seconds). There was a piece of code that was needlessly transposing an array to turn the RHS from a 2D array into a 1D array. With your problem, this killed performance, eating up 60 ms every time it happened (137 times for your problem).
I've corrected this in the repo and am now seeing runtimes of around 2 seconds. There are probably a ton of code clean-up optimizations like this that need to happen, but the problem sets I built this for (http://mathfood.com) are so small that I never knew this was an issue. Thanks!
For what it's worth, I took the simplex algorithm out of a college textbook and turned it into code; the MILP piece came from Wikipedia.
Figured it out. The most expensive piece of the code was the pivoting operation, which it turns out was doing a lot of work to update the matrix by adding 0. Doing a little logic up front to prevent this dropped my runtime on node from ~12 seconds to ~0.5.
for (i = 0; i < length; i++) {
    if (i !== row) {
        pivot_row = tbl[i][col];
        for (j = 0; j < width; j++) {
            // No point in doing math if you're just adding
            // zero to the thing
            if (pivot_row !== 0 && tbl[row][j] !== 0) {
                tbl[i][j] += -pivot_row * tbl[row][j];
            }
        }
    }
}

Performance concerns when storing data in large arrays with Javascript

I have a browser-based visualization app where there is a graph of data points, stored as an array of objects:
data = [
    {x: 0.4612451, y: 1.0511},
    // ... etc
]
This graph is being visualized with d3 and drawn on a canvas (see that question for an interesting discussion). It is interactive and the scales can change a lot, meaning the data has to be redrawn, and the array needs to be iterated through quite frequently, especially when animating zooms.
From the back of my head and from reading other JavaScript posts, I have a vague idea that optimizing dereferences in JavaScript can lead to big performance improvements. Firefox is the only browser on which my app runs really slowly (compared to IE9, Chrome, and Safari), and it needs to be improved. Hence, I'd like to get a firm, authoritative answer to the following:
How much slower is this:
// data is an array of 2000 objects with {x, y} attributes
var n = data.length;
for (var i = 0; i < n; i++) {
    var d = data[i];
    // Draw a circle at scaled values on canvas
    var cx = xs(d.x);
    var cy = ys(d.y);
    canvas.moveTo(cx, cy);
    canvas.arc(cx, cy, 2.5, 0, twopi);
}
compared to this:
// data_x and data_y are length-2000 arrays preprocessed once from data
var n = data_x.length;
for (var i = 0; i < n; i++) {
    // Draw a circle at scaled values on canvas
    var cx = xs(data_x[i]);
    var cy = ys(data_y[i]);
    canvas.moveTo(cx, cy);
    canvas.arc(cx, cy, 2.5, 0, twopi);
}
xs and ys are d3 scale objects; they are functions that compute the scaled positions. I mentioned above that this code may need to run up to 60 frames per second, and it can lag badly on Firefox. As far as I can see, the only difference is array dereferencing vs object property access. Which one runs faster, and is the difference significant?
It's pretty unlikely that any of these loop optimizations will make any difference. 2000 times through a loop like this is not much at all.
I tend to suspect the possibility of a slow implementation of canvas.arc() in Firefox. You could test this by substituting a canvas.lineTo() call which I know is fast in Firefox since I use it in my PolyGonzo maps. The "All 3199 Counties" view on the test map on that page draws 3357 polygons (some counties have more than one polygon) with a total of 33,557 points, and it loops through a similar canvas loop for every one of those points.
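A quick way to check that hypothesis might be a sketch like the one below, reusing the question's variables ("canvas" is the 2D context, as in the question); note it times path construction plus a single stroke, not a full animation frame:
// Time the arc version
console.time('arc');
canvas.beginPath();
for (var i = 0; i < n; i++) {
    var cx = xs(data_x[i]), cy = ys(data_y[i]);
    canvas.moveTo(cx, cy);
    canvas.arc(cx, cy, 2.5, 0, twopi);
}
canvas.stroke();
console.timeEnd('arc');

// Time a lineTo version: tiny segments instead of circles
console.time('lineTo');
canvas.beginPath();
for (var i = 0; i < n; i++) {
    var cx = xs(data_x[i]), cy = ys(data_y[i]);
    canvas.moveTo(cx, cy);
    canvas.lineTo(cx + 2.5, cy);
}
canvas.stroke();
console.timeEnd('lineTo');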
Thanks to the suggestion to use jsPerf, I implemented a quick test. I would be grateful if anyone else would add their results here:
http://jsperf.com/canvas-dots-testing (results as of 3/27/13)
I have observed the following so far:
Whether arrays or objects are better seems to depend on the browser and OS. For example, Chrome was the same speed on Linux, but objects were faster on Windows. For many combinations they are almost identical.
Firefox is just the tortoise of the bunch, and this also helps confirm Michael Geary's hypothesis that its canvas.arc() is just super slow.
