Most efficient way to implement mouse smoothing - javascript

I'm finishing up a drawing application that uses OpenGL ES 2.0 (WebGL) and JS. Things work pretty well unless I draw with very quick movements. See the image below:
This loop was drawn with a smooth motion, but because JS only receives mouse readings at discrete intervals, the result is faceted. This happens to a certain degree in Photoshop if you have mouse smoothing turned off, though obviously much less, because PS is able to poll at a much higher rate.
So, I would like to implement some mouse smoothing, but I'm concerned about making it very efficient so that it doesn't bog down the actual pixel drawing operations. I was originally thinking of using the mouse locations JS is able to grab to generate splines, and interpolating between readings to give a smoother result. I'm not sure this is the best approach, though. If it is, how do I make sure I sample the correct locations on the intermediate spline? Most of the spline equations I've found don't produce uniformly spaced points for t in [0, 1].
Any help/guidance/advice would be very appreciated. Thanks!

Catmull-Rom might be a good one to try, if you haven't already.
http://www.mvps.org/directx/articles/catmull/
I'd pick a minimum segment length and divide any segment longer than that into 1 + segmentLength / minSegmentLength sub-segments.
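A minimal sketch of that suggestion, assuming each mouse sample is an {x, y} object (the helper names are mine, not from the linked article):

function catmullRom(p0, p1, p2, p3, t) {
  // Standard Catmull-Rom basis; the curve runs from p1 to p2.
  const t2 = t * t, t3 = t2 * t;
  const blend = (a, b, c, d) =>
    0.5 * (2 * b + (-a + c) * t +
           (2 * a - 5 * b + 4 * c - d) * t2 +
           (-a + 3 * b - 3 * c + d) * t3);
  return { x: blend(p0.x, p1.x, p2.x, p3.x),
           y: blend(p0.y, p1.y, p2.y, p3.y) };
}

// Subdivide the p1-p2 segment as suggested above:
// roughly 1 + segmentLength / minSegmentLength sub-segments.
function smoothSegment(p0, p1, p2, p3, minSegmentLength) {
  const len = Math.hypot(p2.x - p1.x, p2.y - p1.y);
  const steps = Math.max(1, Math.ceil(1 + len / minSegmentLength));
  const points = [];
  for (let i = 0; i <= steps; i++) {
    points.push(catmullRom(p0, p1, p2, p3, i / steps));
  }
  return points;
}

Feed it the four most recent mouse samples as they arrive; the sub-segment count scales with segment length, so fast strokes get extra interpolated points exactly where the faceting shows.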

Related

Smooth all data-points in a chart in nodejs/javascript

I have a chart whose line I wish to make as smooth as possible. The line should still keep the overall pattern and stay as close to the original line as possible - but I need to be able to smooth all "bumps" 100% away / to whatever degree I wish.
When I say "100% smooth" - I mean something like this (try to draw a curved line in the square): http://soswow.github.io/fit-curve/demo/
The line must only go up or down (while the main trend is up/down-wards) - e.g. like a sine curve. Now imagine you added a lot of noise/bumps of different sizes/frequencies to the sine curve - but you'd like to "restore" the curve without changing its overall pattern. That is exactly my need. The ideal: if I could filter away exactly the selected level of noise/frequency I wish to remove from the main trend.
SMA (a simple moving average) is lagging in nature, and I need something that stays a lot closer to the actual data-points in time.
I know the lagging nature of SMA is normally accepted - but I don't accept it ;) I strongly believe it is possible to do better than that :) DMA (a displaced moving average) can shift the data-points themselves - but it has no effect on the data-points' values in real time, which is what I'm looking for as well... I know I have to hack/compensate, and I can also come up with 100s of ways myself (mixing all the algos I know, running them multiple times, etc.). But I guess someone out there is way smarter than me and this has already been solved - and I would definitely wonder if a standard algorithm for exactly this issue doesn't exist.
I have looked into many different algorithms - but none of them worked satisfactorily (moving averages, median filters, polynomial regression, Savitzky-Golay etc.). Either the result is still way too "bumpy" and "pixelated", or it again becomes too lagging.
Lastly I have found cubic and quadratic Beziers, which seem pretty interesting, but I don't know how to apply them to all my data-points, and I can't find a suitable NPM package (I can only find libraries like this: https://www.npmjs.com/package/bezier-easing which only handle a single curve segment, which is not what I'm looking for).
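In case it helps: the fit-curve demo linked above appears to be backed by an npm package of the same name ("fit-curve"), which does take a whole point series. A minimal sketch, assuming its fitCurve(points, maxError) signature; the maxError value here is arbitrary:

import fitCurve from 'fit-curve';

// Each input point is an [x, y] pair.
const points = [[0, 0], [10, 12], [20, 9], [30, 15], [40, 14]];

// maxError controls how aggressively the fit smooths:
// larger values tolerate more deviation, i.e. remove more bumps.
const curves = fitCurve(points, 50);

// Each element is one cubic Bezier: [start, control1, control2, end].
for (const [start, c1, c2, end] of curves) {
  console.log(start, c1, c2, end);
}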
Savitzky-Golay is better than a regular MA - but I still believe it lags too much when it is as smooth as I consider acceptable.
The task is pre-processing and noise-reduction of temperature, price and similar charts in real time, before they are handed over to an AI which looks for abnormalities (too much noise seems to confuse the AI and is also unnecessary for the most part). The example with the drawing was only an example - just as my mentioning a "sine curve" was (to illustrate my point). The chart is in general very arbitrary and doesn't follow any pre-defined patterns.
I'd like to emphasize again that the primary prerequisite for the selected algorithm/procedure is that it generates a chart-line which keeps lag behind the main chart's overall trend to an absolute minimum, and at the same time makes it possible to adjust at what level the noise-reduction should take place :-)
I have also made this small drawing in Paint - just so you can easily understand my point :-) screencast.com/t/jFq2sCAOu The algo should remove and replace all instances/areas in a given chart which match the selected frequency - the drawing only shows one of each, but normally there would be many different areas of the chart with the same level of noise.
Please let me know if all this makes sense to you guys - otherwise please pinpoint what I need to elaborate on.
All help, ideas and suggestions are highly appreciated.

How to check if two convex polyhedrons intersect with each other in Three.js?

I'm working on a problem that needs to randomly generate convex polyhedra and place them into a cube/cylinder container at randomly chosen points without overlapping. I'm using three.js to get graphical output.
A demo.
When placing a polyhedron, how do I check whether it intersects with the other polyhedra?
The convex polyhedra involved are simply tetrahedra or hexahedra, constructed using THREE.ConvexGeometry. As I need a precise check, a bounding-box test is not enough; I only use it as a quick pre-filter to rule out pairs that definitely don't intersect.
I've done a lot of research and found many complicated theories and methods; what I need is a Boolean result that tells whether an intersection exists between two convex polyhedra. SAT (Separating Axis Theorem) in 3D would be good enough, but Three.js doesn't seem to be capable of doing this out of the box. Can anyone tell me how to do this kind of check in a simple way, or just explain how to do it with SAT in 3D?
You can take a look at http://www.realtimerendering.com/intersections.html. Even though the site is from 2011, the intersection algorithms have not changed in recent years. From the demo it seems that once the polyhedra are placed in the cube, they don't move, so the SAT algorithm may not be the best fit, as it is mainly used for moving polyhedra.
Gilbert–Johnson–Keerthi (GJK) is a powerful algorithm that lets you measure the distance between, and check for intersection of, convex polyhedra. Still, I believe it is better used on simple polyhedra; otherwise the computation in the support function might take some time. A possible drawback is that you need functions that measure the distance between a point and another point/segment/triangle; I do not know whether any are available in three.js.
http://en.wikipedia.org/wiki/Gilbert%E2%80%93Johnson%E2%80%93Keerthi_distance_algorithm
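For reference, a minimal SAT sketch along the lines the question asks about. The input layout is an assumption: you would extract vertices, face normals and edge directions from your THREE.ConvexGeometry yourself; duplicated edge directions only cost time, not correctness.

import * as THREE from 'three';

// a and b: { vertices: THREE.Vector3[], faceNormals: THREE.Vector3[],
//            edges: THREE.Vector3[] }, all in world space.
function project(vertices, axis) {
  let min = Infinity, max = -Infinity;
  for (const v of vertices) {
    const d = v.dot(axis);
    if (d < min) min = d;
    if (d > max) max = d;
  }
  return [min, max];
}

function overlapOnAxis(a, b, axis) {
  const [minA, maxA] = project(a.vertices, axis);
  const [minB, maxB] = project(b.vertices, axis);
  return maxA >= minB && maxB >= minA;
}

function convexIntersectSAT(a, b) {
  // Candidate separating axes: the face normals of both polyhedra...
  for (const n of a.faceNormals) if (!overlapOnAxis(a, b, n)) return false;
  for (const n of b.faceNormals) if (!overlapOnAxis(a, b, n)) return false;
  // ...plus the cross product of every pair of edge directions.
  for (const ea of a.edges) {
    for (const eb of b.edges) {
      const axis = new THREE.Vector3().crossVectors(ea, eb);
      if (axis.lengthSq() < 1e-10) continue; // near-parallel edges, skip
      if (!overlapOnAxis(a, b, axis)) return false;
    }
  }
  return true; // no separating axis found => the polyhedra intersect
}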

How could I compare two data sets of X,Y coordinates to look for similarities?

Pretty abstract question, but I'm conducting some research on gesture-based recognition. I've managed to get a gesture output as a series of X,Y coordinates that I can view as a scatter graph:
Here's my problem: I'm unsure how to proceed. What is the best way to compare two data sets of X,Y coordinates and give a confidence percentage for how similar they are?
I'm currently using JavaScript and would ideally like to keep using it.
Reading about handwriting-recognition software, it seems that the early phases, such as recognising the strokes, might be helpful: decompose the gesture into a number of elements (lines or curves), then apply some matching algorithms.
There are a couple of other questions to consider here, for example: if we have two identical gestures, but one is much slower than the other and takes ten times as long, would they be considered similar?
Anyway, for starters I would look, at each moment in time, at the positions of the cursor in both gestures and determine the geometric distance between them. Then you could compute a 'deviation' number of one gesture from the other; if the number is big, the gestures are probably not similar. This could be a starting point.
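A minimal sketch of that starting point, assuming each gesture is an array of {x, y} samples. Resampling both paths to the same number of points first is an addition of mine, so that index i lines up between the two gestures:

// Resample a path to n points evenly spaced along its arc length.
function resample(points, n) {
  const dists = [0];
  for (let i = 1; i < points.length; i++) {
    dists.push(dists[i - 1] + Math.hypot(points[i].x - points[i - 1].x,
                                         points[i].y - points[i - 1].y));
  }
  const total = dists[dists.length - 1];
  const out = [];
  let j = 0;
  for (let k = 0; k < n; k++) {
    const target = (total * k) / (n - 1);
    while (j < points.length - 2 && dists[j + 1] < target) j++;
    const span = dists[j + 1] - dists[j] || 1;
    const t = (target - dists[j]) / span;
    out.push({ x: points[j].x + t * (points[j + 1].x - points[j].x),
               y: points[j].y + t * (points[j + 1].y - points[j].y) });
  }
  return out;
}

// Mean point-to-point distance: a small value means similar gestures.
function deviation(gestureA, gestureB, n = 64) {
  const a = resample(gestureA, n);
  const b = resample(gestureB, n);
  let sum = 0;
  for (let i = 0; i < n; i++) {
    sum += Math.hypot(a[i].x - b[i].x, a[i].y - b[i].y);
  }
  return sum / n;
}

Normalising both gestures for translation and scale before comparing (e.g. centring on the centroid and dividing by the bounding-box size) would make the score position- and size-invariant.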

Infinite Zoom with Javascript, Jquery or HTML5 Canvas

I've seen "The Scale of the Universe 2" and I just want to know if this can be done with JavaScript or jQuery or with HTML5 Canvas.
If you click an item (for example the "Human"), an info box pops up beside it.
I searched here for 3 days to see if someone had asked a similar question, but I only saw Google-Maps-like behaviour where you can zoom in at the map cursor position.
Actually I want to make a "Timeline"-like effect, or something like the "Time Machine" recovery on Mac OS X.
A fixed zoom position - not like a Google Maps zoom, where you can pan and zoom anywhere.
Can I put the images and text (for example, "the Human") in a div?
Are there available articles/tutorials about this?
Options:
Javascript
jQuery
HTML5 Canvas and CSS3 transforms, scrolled along the Z-axis so you can zoom in/out
Flash/Flex (well, I don't want to use lots of CPU resources, because I need it at a large resolution or in full screen)
It is possible to implement an infinite zoom in HTML canvas; this is the source code of a proof of concept that I implemented, and here you can test it live.
It is actually quite tricky to implement, and most probably you'll need to use big decimals.
The approach I followed uses the same coordinate space as d3-zoom. Basically, you have three coordinates: x, y and k. k is the zoom factor; if k is 2, everything is drawn at double size.
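To make that concrete: a world point (wx, wy) lands on screen at (wx * k + x, wy * k + y). A tiny sketch (the function name is mine, but the math matches d3-zoom's transform):

// How an {x, y, k} transform maps a world point to the screen.
function applyTransform({ x, y, k }, [wx, wy]) {
  return [wx * k + x, wy * k + y];
}

// With k = 2 everything is drawn at double size:
applyTransform({ x: 0, y: 0, k: 2 }, [10, 10]); // => [20, 20]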
Now, if you want to do infinite zoom, it is very easy to reach k = 2^64, but that is already beyond float64 precision, and you'll get a lot of artifacts where translating the image is not smooth, or you don't zoom in exactly where you want.
One way to avoid those artifacts is to use coordinates that have infinite length, for example BigIntegers. To make my implementation easy and compatible with d3-zoom, I used big decimals instead, but I had to implement my own library of BigDecimals: basically, infinite precision on the integer part and 32 bits of precision on the decimal part. Of course, you also need to adapt the zooming library to support BigDecimals.

Moreover, in the case of d3-zoom, a lot of calculations were done in the initial coordinate space (k = 1), but division of floats always carries an error, and it too becomes perceivable once you are deep enough. To avoid that, you need to do all computations locally.
It might sound like a lot of hassle to insist on using the d3-zoom library, but zooming UX is actually tricky, especially since you need to handle scrolling, zooming on the phone, double tapping and so on, all combined at different k.
In case you want to use SVG transformations, you'll need to fake it: introduce nodes once they are barely visible and allow them to scale. However, most probably you'll also need to fake it when they get too big, to avoid artifacts at that end.
There is no infinite zoom. However, you can zoom in/out of an SVG image in HTML5 canvas.
SVG supports affine transformations. You can set the required zoom/pan in the affine transform and show the relevant areas. The behavior/listener can be implemented in JavaScript, and the SVG can be rendered on an HTML5 canvas.
As a starting point you can look at this example: http://www.html5canvastutorials.com/labs/html5-canvas-scaling-a-drawing-with-plus-and-minus-buttons/
This is totally doable in HTML5. Actually, any system able to display and zoom images should be able to do it. It's not one big image being zoomed; it's a large number of images being zoomed (for instance, the initial human is an image, which is scaled and moved out when you zoom in or out). The idea is splendid, but I don't really see any great technical feat in it. As long as you correctly limit the number of images being resized and bitmapped at any one time, it should keep a decent FPS rate.

HTML5 Canvas Collision Detection "globalCompositeOperation" performance

Morning,
Over the past few months I have been tinkering with the HTML5 Canvas API and have had quite a lot of fun doing so.
I've gradually created a number of small games, purely for teaching myself the do's and don'ts of game development. I am at a point where I can carry out basic collision detection, i.e. collisions between circles and platforms (fairly simple for most out there, but it felt like quite an achievement when I first got it working, and even better once I understood what was actually going on). I know pixel collision detection is not for every game, purely because in many scenarios you can achieve good-enough results using the methods discussed above, and this method is obviously quite expensive on resources.
But I just had a brainwave (it is more than likely that somebody else has thought of this and I am way behind the field, but I've googled it and found nothing)... so here goes...
Would it be possible to use/harness the "globalCompositeOperation" feature of canvas? My initial thought was to set its mode to "xor" and then check all the pixels on the canvas for transparency; if a transparent pixel is found, there must be a collision. Right? Obviously at this point you would need to work out which objects occupy the pixel in question and how to react, but you would have to do that with other techniques too.
That said, is the canvas already doing this kind of collision detection behind the scenes in order to work out when shapes are overlapping? Would it be possible to build upon that?
Any ideas?
Gary
The canvas doesn't do this automatically (probably because the API is still in its infancy). EaselJS takes this approach for mouse enter/leave events, and it is extremely inefficient. I am using an algorithmic approach to determining bounds, and I then use that to see whether the mouse is inside or outside of the shape. In theory, to implement hit detection this way, all you have to do is take all the points of both shapes and see if any of them are ever inside the other shape. If you want to see some of my code, just let me know.
However, I will say that, although your way is very inefficient, it is globally applicable to any shape.
I made a demo on codepen which does the collision detection using an off screen canvas with globalCompositeOperation set to xor as you mentioned. The code is short and simple, and should have OK performance with smallish "collision canvases".
http://codepen.io/sakri/pen/nIiBq
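For reference, a minimal sketch of that xor technique - not the codepen's exact code. drawA and drawB are hypothetical callbacks that each render one sprite into the given context; the overlap is detected by comparing opaque-pixel counts, since xor makes pixels covered by both sprites cancel out:

function countOpaquePixels(ctx, w, h) {
  const data = ctx.getImageData(0, 0, w, h).data;
  let count = 0;
  for (let i = 3; i < data.length; i += 4) {
    if (data[i] > 0) count++; // alpha channel
  }
  return count;
}

function collides(drawA, drawB, w, h) {
  const canvas = document.createElement('canvas'); // off-screen
  canvas.width = w;
  canvas.height = h;
  const ctx = canvas.getContext('2d');

  // Pixels covered by each sprite on its own.
  drawA(ctx);
  const countA = countOpaquePixels(ctx, w, h);
  ctx.clearRect(0, 0, w, h);
  drawB(ctx);
  const countB = countOpaquePixels(ctx, w, h);

  // Composite both with xor: overlapping coverage cancels, so the
  // count drops below countA + countB iff some pixel is in both.
  ctx.clearRect(0, 0, w, h);
  ctx.globalCompositeOperation = 'source-over';
  drawA(ctx);
  ctx.globalCompositeOperation = 'xor';
  drawB(ctx);
  return countOpaquePixels(ctx, w, h) < countA + countB;
}

Note that antialiased fringes count as coverage, so barely-touching sprites may register as colliding; thresholding the alpha value helps if that turns out to be too sensitive.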
If you use xor mode fullscreen, the second step is to getImageData the whole screen, which is a high-cost step, and the next step is to find out which objects were involved in the collision.
No need to benchmark: it will be too slow.
I'd suggest instead that you use the 'classical' bounding-box test, then a test on the inner bounding boxes of the objects, and only after that go for pixels, locally.
By inner bounding box, I mean a rectangle that is guaranteed to lie completely inside your object - the reddish part in this example:
So use this mixed strategy:
- Do a test on the bounding boxes of your objects.
- If there's a collision between two bounding boxes, perform an inner-bounding-box test: we are sure there's a collision if the sprites' inner bounding boxes overlap.
- Then keep the pixel-perfect test only for the really problematic cases; you only need to draw both sprites on a temporary canvas the size of the bigger sprite, so you'll be able to perform a much, much faster getImageData. At this step, you already know which objects are involved in the collision.
Notice that you can draw the sprites at a reduced scale, on a smaller canvas, to get a faster getImageData at the cost of a lower resolution.
Be sure to disable smoothing; I think an 8×8 canvas should already be enough (it depends on average sprite speed, in fact - if your sprites are slow, increase the resolution).
That way the data is 8 × 8 × 4 = 256 bytes, and you can keep a good frame rate.
Note also that, when deciding how to compute the inner bounding box, you can allow a given number of empty pixels into it, trading accuracy for speed.
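A minimal sketch of that tiered strategy, assuming each sprite carries precomputed bbox and innerBBox rectangles ({x, y, w, h}) and that pixelTest is the slow canvas-based check described above:

function rectsOverlap(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

function collide(spriteA, spriteB, pixelTest) {
  // 1. Cheap reject: the outer bounding boxes don't even touch.
  if (!rectsOverlap(spriteA.bbox, spriteB.bbox)) return false;
  // 2. Cheap accept: inner bounding boxes overlap => certain hit.
  if (rectsOverlap(spriteA.innerBBox, spriteB.innerBBox)) return true;
  // 3. Ambiguous ring: fall back to the pixel-perfect test.
  return pixelTest(spriteA, spriteB);
}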
