I'd like to know how WebKitCSSMatrix actually works - javascript

Apple's official documentation says:
WebKitCSSMatrix objects represent a 4x4 homogeneous matrix for 3D transforms or a vector for 2D transforms. You can use these objects to manipulate matrices in JavaScript. For example, you can multiply, translate, and scale matrices.
I'm a glorified designer, not an engineer, so I'm assuming that's the reason why I can't make any sense of that description. Please, can somebody point me in the right direction to understand how this matrix and/or vectors work?

Whew, this is the most difficult question I've attempted to answer. The short answer is that, as web designers, we don't have the vocabulary to express 3D transformations. In order to explain it to you in a comprehensible way, I'd have to use math concepts that I don't understand myself.
If you'd like to investigate further you can take a look at:
http://www.eleqtriq.com/2010/05/css-3d-matrix-transformations/
But, I can explain it visually.
http://duopixel.com/stack/webkitmatrix/ (you'll have to view this under Safari 5 w/Snow Leopard, or an iPad, of course).
What you're seeing is just an interface to the 16 values of WebKitCSSMatrix. The sliders that seem to do nothing are related to the z axis, whose effect I suspect would be visible if we had more objects in the 3D canvas.
Edit: after studying the link I posted above, I noticed the original author has already built the same kind of example, doh! http://www.eleqtriq.com/wp-content/static/demos/2010/css3d/matrix3dexplorer.html
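For the curious, here is a minimal sketch of poking at those 16 values directly from JavaScript (WebKit-only at the time; current browsers expose the same interface as DOMMatrix). The box element is just an assumed DOM node:

var m = new WebKitCSSMatrix();        // starts as the identity matrix
m = m.translate(100, 50, 0)           // move 100px right, 50px down
     .rotate(0, 0, 45)                // rotate 45 degrees around the z axis
     .scale(2, 2, 1);                 // double the size in x and y

// The 16 values are exposed as m11..m44 (m41/m42 hold the 2D translation):
console.log(m.m11, m.m22, m.m41, m.m42);

// toString() produces a value you can assign straight to CSS:
var box = document.getElementById('box');   // assumed element
box.style.webkitTransform = m.toString();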

Even though it's for ActionScript, check out Understanding the Transformation Matrix in Flash 8. It's got pretty pictures, too :)
Before getting into how transformation matrices (matrices is plural of matrix) work, it is important to understand what a matrix is. A matrix is a rectangular array (or table) of numbers consisting of any number of rows and columns. A matrix consisting of m rows and n columns is known as an m x n matrix. This represents the matrix's dimensions. You'll commonly see matrices with numbers in rows and columns surrounded by two large bracket symbols.
...
Affine transformations are transformations that preserve collinearity and relative distancing in a transformed coordinate space. This means points on a line will remain in a line after an affine transformation is applied to the coordinate space in which that line exists. It also means parallel lines remain parallel and that relative spacing or distancing, though it may scale, will always be maintained at a consistent ratio. Affine transformations allow for repositioning, scaling, skewing and rotation. Things they cannot do include tapering or distorting with perspective. If you've ever worked with transforming symbols in Flash, you probably recognize these qualities.
(source: senocular.com)
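To make that concrete, here is a tiny sketch of what a 2D affine matrix does to a point; the six values a, b, c, d, tx, ty are the same ones used by Flash's Matrix class and CSS matrix():

// Apply a 2D affine matrix to a point:
// x' = a*x + c*y + tx,  y' = b*x + d*y + ty
function applyMatrix(m, p) {
  return {
    x: m.a * p.x + m.c * p.y + m.tx,
    y: m.b * p.x + m.d * p.y + m.ty
  };
}

var scale2x = { a: 2, b: 0, c: 0, d: 2, tx: 0, ty: 0 };   // uniform 2x scale
console.log(applyMatrix(scale2x, { x: 10, y: 5 }));       // { x: 20, y: 10 }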


How to detect collision in a body that is not easily divided into polygons

Say we are coding something in JavaScript and we have a body, say an apple, and want to detect the collision of a rock being thrown at it: that's easy, because we can simply treat the apple as a circle.
But what if we have, for example, a "very complex" fractal? Then there is no polygon similar to it, and we also cannot break it into smaller polygons without a herculean amount of effort. Is there any way to detect perfect collision in this case, as opposed to making something that "kind of" works, like treating the fractal as a polygon (not perfect, because collisions would be detected even in blank spaces)?
You can use a physics editor
https://www.codeandweb.com/physicseditor
It'll work with most game engines. You'll have to figure out how to make it work in JS.
Here's a tutorial from the site using TypeScript, which is closely related to JS:
http://www.gamefromscratch.com/post/2014/11/27/Adventures-in-Phaser-with-TypeScript-Physics-using-P2-Physics-Engine.aspx
If you have the coordinates of the polygons, you can compute the intersection of the subject and clip polygons using Javascript Clipper.
The question doesn't provide much information about the collision objects, but usually anything can be represented as polygon(s) to a certain precision.
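As a rough sketch (the coordinates are made up, and Clipper works on integer coordinates, so floats are scaled up first), a collision test with Javascript Clipper looks something like this:

var scale = 100;   // precision for float-to-integer scaling
var subject = [[{ X: 10, Y: 10 }, { X: 110, Y: 10 }, { X: 60, Y: 110 }]];
var clip    = [[{ X: 50, Y: 50 }, { X: 150, Y: 50 }, { X: 100, Y: 150 }]];
ClipperLib.JS.ScaleUpPaths(subject, scale);
ClipperLib.JS.ScaleUpPaths(clip, scale);

var cpr = new ClipperLib.Clipper();
cpr.AddPaths(subject, ClipperLib.PolyType.ptSubject, true);
cpr.AddPaths(clip, ClipperLib.PolyType.ptClip, true);

var solution = new ClipperLib.Paths();
cpr.Execute(ClipperLib.ClipType.ctIntersection, solution,
            ClipperLib.PolyFillType.pftNonZero,
            ClipperLib.PolyFillType.pftNonZero);

// A non-empty solution means the outlines overlap, i.e. a collision.
var collision = solution.length > 0;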
EDIT:
It should be fast enough for real-time rendering (depending on the complexity of the polygons). If the polygons are complex (many self-intersections and/or many points), there are several methods to speed up the intersection detection:
reduce the point count using ClipperLib.JS.Lighten(). It removes points that have no effect on the outline (e.g. duplicate points and points lying on an edge)
first get the bounding rectangles of the polygons using ClipperLib.JS.BoundsOfPath() or ClipperLib.JS.BoundsOfPaths(). If the bounding rectangles don't overlap, there is no need to run the intersection operation at all. This check is very fast, because it just takes the min/max of x and y (see the sketch after this list)
If the polygons are static (i.e. their geometry/point data doesn't change during the animation), you can lighten them, compute their bounds, and add them to Clipper before the animation starts. Then, during each frame, only minimal work is needed to get the actual intersections.
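A sketch of those speedups; polygonA and polygonB are placeholder names for the two objects' scaled-up Clipper paths, and the tolerance is only an example value:

var tolerance = 10;   // in scaled-up integer units
polygonA = ClipperLib.JS.Lighten(polygonA, tolerance);
polygonB = ClipperLib.JS.Lighten(polygonB, tolerance);

// Cheap bounding-rectangle rejection before the full intersection test
// (scale factor 1 keeps the bounds in the scaled-up units):
var b1 = ClipperLib.JS.BoundsOfPaths(polygonA, 1);
var b2 = ClipperLib.JS.BoundsOfPaths(polygonB, 1);
var boundsOverlap = !(b1.right  < b2.left || b2.right  < b1.left ||
                      b1.bottom < b2.top  || b2.bottom < b1.top);
if (boundsOverlap) {
  // only now run the full ClipperLib intersection shown earlier
}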
EDIT2:
If you are worried about the framerate, you could consider using the experimental floating point (double) Clipper, which is 4.15x faster than the IntPoint version, and 8.37x faster in the cases where the IntPoint version needs big integers. The final speedup is actually a bit higher, because the IntPoint Clipper needs the coordinates to be scaled up (to integers) and then scaled back down (to floats), and this scaling time is not included in the above measurements. However, the float version is not fully tested and should be used with care in production environments.
The code of experimental float version: http://jsclipper.sourceforge.net/6.1.3.4b_fpoint/clipper_unminified_6.1.3.4b_fpoint.js
Demo: http://jsclipper.sourceforge.net/6.1.3.4b_fpoint/main_demo3.html
Playground: http://jsbin.com/sisefo/1/edit?html,javascript,output
EDIT3:
If you don't have polygon point coordinates for your objects and the objects are bitmaps (e.g. PNG/canvas), you first have to trace the bitmaps, e.g. using the Marching Squares algorithm. One implementation is at
https://github.com/sakri/MarchingSquaresJS.
That gives you an array of outline points, but because the array contains a huge number of unneeded points (e.g. a straight line can easily be represented by just its start and end points), you can reduce the point count using e.g. ClipperLib.JS.Lighten() or http://mourner.github.io/simplify-js/.
After these steps you have very light polygonal representations of your bitmap objects, which are fast to run through the intersection algorithm.
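A rough sketch of that pipeline; the MarchingSquaresJS entry point and its return format are assumptions, so check the library's README before relying on them:

// Trace the outline of the non-transparent pixels of a canvas
// (assumed API; assumed to return a flat [x0, y0, x1, y1, ...] array):
var outline = MarchingSquaresJS.getBlobOutlinePoints(sourceCanvas);

// Convert the flat array into {x, y} points:
var points = [];
for (var i = 0; i < outline.length; i += 2) {
  points.push({ x: outline[i], y: outline[i + 1] });
}

// Drop redundant points with simplify-js (tolerance in pixels):
var light = simplify(points, 1.5, true);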
You can create bitmaps that indicate the area occupied by your objects in pixels. If there is intersection between the bitmaps, then there is a collision.
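For example, a brute-force per-pixel sketch, assuming both objects have been drawn at their current positions onto two same-sized offscreen canvases:

function bitmapsCollide(canvasA, canvasB) {
  var w = canvasA.width, h = canvasA.height;
  var a = canvasA.getContext('2d').getImageData(0, 0, w, h).data;
  var b = canvasB.getContext('2d').getImageData(0, 0, w, h).data;
  // The alpha channel is every 4th byte; if both are non-zero,
  // both objects occupy that pixel.
  for (var i = 3; i < a.length; i += 4) {
    if (a[i] !== 0 && b[i] !== 0) return true;
  }
  return false;
}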

How do I handle faces with more than four vertex indices in a practical way?

I'm working on a way to turn the information from 3D files into THREE geometries. I receive them formatted like this:
{"N":"name of the block","V":[[0,1,2]..[N1,N2,N3]],"F":[[0,1,2]..[M1,M2,M3]],"P":[[O1,O2,P3,..,Op]..[..]]}
N should be obvious. It's the name of the geometry.
V is an array of vertices.
F is an array of triangular faces.
So far so good. That's easy to convert into THREE geometries. P is the tricky part: it's an array of polygons. A polygon in this case is a face consisting of more than four vertex indices.
There's no actual restriction on how many vertex indices a polygon may hold, apart from the minimum of five.
Is there any workable way to convert a structure like this for THREE.js?
three.js supports a Face3 class only. It used to have a Face4 class, but you need something completely different to handle polygons. The short answer is no, three.js does not handle this out of the box.
A simple way to tackle it is to create a fan out of each polygon by fixing one vertex and looping through the rest, but this will only work on convex polygons.
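A minimal sketch of that fan approach against the legacy THREE.Geometry / THREE.Face3 API, assuming geometry.vertices has already been filled from V and poly is one entry of P (an array of vertex indices):

function addConvexPolygon(geometry, poly) {
  // Fix the first index and walk around the rest, emitting the triangles
  // (0,1,2), (0,2,3), (0,3,4), ... -- valid only for convex polygons.
  for (var i = 1; i < poly.length - 1; i++) {
    geometry.faces.push(new THREE.Face3(poly[0], poly[i], poly[i + 1]));
  }
}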
For non-convex polygons you need proper polygon triangulation:
https://en.wikipedia.org/wiki/Polygon_triangulation
Not a trivial problem.

HTML5 Canvas Creative Alpha-Blending

So I have an animation that I'm coding in javascript and HTML5 (no libraries, no plugins, no nothing and I'd like it to stay that way). The animation uses physics (basically a bunch of unusual springs attached to masses) to simulate a simple liquid. The output from this part of the program is a grid (2d-array) of objects, each with a z value. This works quite nicely. My problem arises when drawing the data to an HTML5 Canvas.
That's what it looks like. Trust me, it's better when animated.
For each data point, the program draws one circle with a color determined by the z value. Just drawing these points, however, the grid pattern is painfully obvious and it is difficult to see the fluid that it represents. To solve this, I made the circles larger and more transparent so that they overlapped each other and the colors blended, creating a simple convolution blur. The result was both fast and beautiful, but for one small flaw:
As the circles are drawn in order, their color values don't stack equally, and so later-drawn circles obscure the earlier-drawn ones. Mathematically, the renderer is taking repeated weighted averages of the color-values of the circles. This works fine for two circles, giving each a value of 0.5*alpha_n, but for three circles, the renderer takes the average of the newest circle with the average of the other two, giving the newest circle a value of 0.5*alpha_n, but the earlier circles each a value of 0.25*alpha_n. As more circles overlap, the process continues, creating a bias toward newer circles and against older ones. What I want, instead, is for each of three or more circles to get a value of 0.33*alpha_n, so that earlier circles are not obscured.
Here's an image of alpha-blending in action. Notice that the later blue circle obscures earlier drawn red and green ones:
Here's what the problem looks like in action. Notice the different appearance of the left side of the lump.
To solve this problem, I've tried various methods:
Using different canvas "blend-modes". "Multiply" (as seen in the above image) did the trick, but created unfortunate color distortions.
Lumping together drawing calls. Instead of making each circle a separate canvas path, I tried lumping them all together into one. Unfortunately, this is incompatible with having separate fill colors and, what's more, the path did not blend with itself at all, creating a matte, monotone silhouette.
Interlacing drawing-order. Instead of drawing the circles in 0 to n order, I tried drawing first the evens and then the odds. This only partially solved the problem, and created an unsightly layering pattern in which the odds appeared to float above the evens.
Building my own blend mode using putImageData. I tried creating a manual pixel-shader to suit my needs using javascript, but, as expected, it was far too slow.
At this point, I'm a bit stuck. I'm looking for creative ways of solving or circumventing this problem, and I welcome your ideas. I'm not very interested in being told that it's impossible, because I can figure that out for myself. How would you elegantly draw a fluid from such data points?
If you can decompose your circles into two groups (evens and odds), such that there is no overlap among circles within a group, the following sequence should give the desired effect:
Clear the background
Draw the evens with an alpha of 1.0 (opaque)
Draw the odds with an alpha of 1.0 (opaque)
Draw the evens with an alpha of 0.5
Places which are covered by neither evens nor odds will show the background. Those which are covered only by evens will show the evens at 100% opacity. Those covered by odds will show the odds with 100% opacity. Those covered by both will show a 50% blend.
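For reference, a minimal sketch of that sequence on a 2D context (ctx, evens, odds, and drawCircle are placeholder names; drawCircle is assumed to fill a single circle in its own color):

ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);  // clear the background

ctx.globalAlpha = 1.0;                                     // opaque passes
evens.forEach(function (c) { drawCircle(ctx, c); });
odds.forEach(function (c) { drawCircle(ctx, c); });

ctx.globalAlpha = 0.5;                                     // 50/50 blend where they overlap
evens.forEach(function (c) { drawCircle(ctx, c); });
ctx.globalAlpha = 1.0;                                     // restore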
There are other approaches one can use to try to blend three or more sets of objects, but doing it "precisely" is complicated. An alternative approach if one has three or more images that should be blended uniformly according to their alpha channel is to repeatedly draw all of the images while the global alpha decays from 1 to 0 (note that the aforementioned procedure actually does that, but it's numerically precise when there are only two images). Numerical rounding issues limit the precision of this technique, but even doing two or three passes may substantially reduce the severity of ordering-caused visual artifacts while using fewer steps than would be required for precise blending.
Incidentally, if the pattern of blending is fixed, it may be possible to speed up rendering enormously by drawing the evens and odds on separate canvases, not as circles but as opaque rectangles, and subtracting from the alpha channel of one of the canvases the contents of a fixed "cookie-cutter" canvas or fill pattern. If one properly computes the contents of the cookie-cutter canvases, this approach may be used for more than two sets of canvases. Determining the contents of the cookie-cutter canvases may be somewhat slow, but it only needs to be done once.
Well, thanks for all the help, guys. :) But, I understand, it was a weird question and hard to answer.
I'm putting this here in part so that it will provide a resource to future viewers. I'm still quite interested in other possible solutions, so I hope others will post answers if they have any ideas.
Anyway, I figured out a solution on my own: before drawing the circles, I did a comb sort on them to put them in order by z-value, then drew them in reverse. The result was that the highest-valued objects (which should be closer to the viewer) were drawn last, and so were not obscured by other circles. The obscuring effect is still there, but it now happens in a way that makes sense with the geometry. Here is what the simulation looks like with this correction; notice that it is now symmetrical:
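In code it's just a sort before the draw loop (a plain Array sort stands in for the comb sort here; circles and drawCircle are placeholders for the flattened grid and the existing drawing routine):

circles.sort(function (a, b) { return a.z - b.z; });     // lowest z first
circles.forEach(function (c) { drawCircle(ctx, c); });   // highest z drawn last, on top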

Javascript spline/arc interpolation for dummies

I'm hitting a wall in some work I'm doing. I've searched here through many, many threads regarding numerical interpolation and have found that they either contain too much math for me to interpret, or that their coding solutions are too specific to generalize to the task I'm working on.
I have sets of coordinates (currently float x, y distances around a 0,0 origin point) which I am, via Javascript, transposing to latitude, longitude coordinates. (The transposition is easy, so don't worry about that; I'm just telling you this to make the application more clear.)
For the rest, refer to the below graphic:
The dots are the coordinates. (They are generated algorithmically.) The blue line shows a simple, linear interpolation between the points. What I want is something more like the red line. It's not quite an ellipse — you can see that around the first coordinates, it forms arcs that are almost like a perfect circle.
Note that some of the points are negative in various places. Note also that the lines between them must be drawn sequentially; an algorithm that generates the points out of sequence will make things much harder for this application.
What I'd like is to have a Javascript function that would let me do the following: specify two sequential points from this series (x1,y1; x2,y2), specify a number of interpolated steps in between (say, 5 to 10), and then output an array of coordinates that would, when linked linearly (that is, when a straight line is drawn between them), look something like the red line above (with the degree of curviness obviously constrained by the number of steps).
Of all of the many spline functions out there, which of them satisfies these requirements? The mathematical precision of the spline function is less important to me than the simplicity of adapting it to this purpose, and its aesthetic output. I would also be fine with manually setting the eccentricity/circle-ness of each individual set of coordinates (so the first ones really should be very circle-like, but the later ones should not be).
Put another way, I am looking for a simple function for getting the interior coordinates of an arc between any two sets of coordinates. EDIT: to clarify, I'm fine with there being a third variable that sets the inclination of the arc (positive or negative) and its eccentricity or whatever. The function doesn't necessarily have to know where it is on the diagram above, as I will know that. I'm just looking for something that can help me interpolate the arc points.
I think I understand the parameters of the problem; what I'm bad at is geometry and turning mathematical answers into usable Javascript. (Because I don't really understand the math.)
I have already looked at the midpoint circle algorithm and found it difficult to adapt to this purpose (because of the need for sequentiality and non-integer coordinates); I've also looked at a variety of spline interpolation methods and found them way too complicated for my dummy-self to make sense of.
Any pointers, help, and code would be appreciated!
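For what it's worth, a minimal sketch of one function shape that matches these requirements: a quadratic Bezier whose control point is pushed off the segment's midpoint along the perpendicular. The bulge parameter is invented here; 0 gives a straight line, positive and negative values bow the arc to opposite sides, and a larger magnitude gives a rounder arc:

function arcPoints(x1, y1, x2, y2, steps, bulge) {
  var mx = (x1 + x2) / 2, my = (y1 + y2) / 2;      // midpoint of the segment
  var dx = x2 - x1, dy = y2 - y1;
  var len = Math.sqrt(dx * dx + dy * dy) || 1;
  var cx = mx - (dy / len) * bulge;                // control point, offset along
  var cy = my + (dx / len) * bulge;                // the perpendicular direction
  var pts = [];
  for (var i = 1; i <= steps; i++) {               // interior points only
    var t = i / (steps + 1), u = 1 - t;
    pts.push({
      x: u * u * x1 + 2 * u * t * cx + t * t * x2,
      y: u * u * y1 + 2 * u * t * cy + t * t * y2
    });
  }
  return pts;
}

// e.g. 7 interior points bowing to one side between (0,0) and (100,0):
var curve = arcPoints(0, 0, 100, 0, 7, -30);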

How to use GradientEntry in xfl?

<LinearGradient>
<matrix>
<Matrix a="0.0262451171875" d="0.009765625" tx="218.45" ty="83"/>
</matrix>
<GradientEntry color="#E63426" ratio="0.00392156862745098"/>
<GradientEntry color="#CA271E" ratio="0.36078431372549"/>
<GradientEntry color="#B31D19" ratio="0.749019607843137"/>
<GradientEntry color="#AB1917" ratio="1"/>
</LinearGradient>
This is the relevant part of the XFL file that is needed to fill a shape with colors using GradientEntry.
The matrix values above are supposed to somehow help me get the start and end coordinates for the gradient. Does anyone know how to extract the start and end coordinates? I did a similar thing not long ago using the EaselJS Matrix2D class with its decompose function to determine scaling, rotation, skewing and translation (displacement).
What I'm trying to do is draw an XFL picture in HTML5 with canvas.
I'm a bit new at programming, so maybe my question is not so well formulated! Sorry about that.
I have been looking into this for a good while, but I haven't figured it out entirely yet.
It's a typical transformation matrix, found a lot in XFL files, but what exactly it transforms is unknown to me. I do know that if you pull [0,0] through the transformation matrix and consider the local transformation space of the layer (i.e., subtract the transformation point), you get the center of the gradient.
If I transform [0,1], [1,0] or [1,1], however, the results barely differ from [0,0] because the values in the transformation matrix are always extremely small. It does seem that [1,0] at least points in the right direction, though. If I put [1000,0] in it, I get about 1/2 of the entire length of the gradient.
So just based on sight, I would say that the gradient would run from [-1000,0] to [1000,0]. But that's just an empirical estimation. If anyone's got a better estimation, or perhaps a reason why they did it this way, I'd love to know it.
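A small sketch of that estimate in canvas terms: apply the gradient's matrix to the two estimated endpoints and use the results with createLinearGradient, with the GradientEntry ratios as color stops. Missing matrix attributes (b and c here) are assumed to default to 0. (For what it's worth, the SWF format's gradient square is usually cited as spanning about ±819.2 px, which is in the same ballpark as the ±1000 estimate.)

// Apply an XFL 2D matrix {a, b, c, d, tx, ty} to a point:
function transformPoint(m, x, y) {
  return { x: m.a * x + m.c * y + m.tx,
           y: m.b * x + m.d * y + m.ty };
}

var m = { a: 0.0262451171875, b: 0, c: 0, d: 0.009765625, tx: 218.45, ty: 83 };
var start = transformPoint(m, -1000, 0);   // roughly { x: 192.2, y: 83 }
var end   = transformPoint(m,  1000, 0);   // roughly { x: 244.7, y: 83 }

// In canvas these become the gradient line, with the ratios as stops:
// var g = ctx.createLinearGradient(start.x, start.y, end.x, end.y);
// g.addColorStop(0.00392156862745098, '#E63426'); // ...and so on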
