I have a very complicated site built on CSS3 that has HTML elements 3D-transformed, rotated, flipped, flopped, and just generally distorted.
I'm trying to figure out the on-screen location of one of these elements and don't see any way to do so. I was wondering if anyone has any ingenious ideas.
Alternatively, if anyone can explain the math behind -webkit-perspective, I can figure out the position as that's the only thing I'm not sure how to model.
Have you tried using getBoundingClientRect()?
I've used it successfully in the past to calculate the dimensions of elements that have been transformed with the transform property.
The problem is that CSS3 transformations don't actually change the position of the elements in any way. Of course the browser "knows" they are repositioned, because it renders them, but this information is not exposed back through the DOM API.
The only thing I can think of is to calculate the positions from the transformations yourself, since these are "simple" matrix transformations.
Unfortunately, algebra class was too long ago for me to tell you exactly how to do it; I only know that it is possible.
Using getBoundingClientRect is a good idea, but it will only give you the rectangle that contains your transformed shape, not the exact coordinates of its four corners (top-left, top-right, bottom-right, bottom-left).
You'd only be able to get those by taking each of the non-transformed corner coordinates and applying the transform yourself in JavaScript, as in the sketch below.
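A minimal sketch of that idea, assuming DOMMatrix/DOMPoint support, the default transform-origin of 50% 50%, and no transformed ancestors (a complete solution would walk the ancestor chain and multiply the matrices together):

    function transformedCorners(el) {
      const w = el.offsetWidth, h = el.offsetHeight;
      const t = getComputedStyle(el).transform;            // "none", "matrix(...)" or "matrix3d(...)"
      const m = t === "none" ? new DOMMatrix() : new DOMMatrix(t);

      // Corner points expressed relative to the transform origin (the element's centre)
      const corners = [
        new DOMPoint(-w / 2, -h / 2),
        new DOMPoint( w / 2, -h / 2),
        new DOMPoint( w / 2,  h / 2),
        new DOMPoint(-w / 2,  h / 2),
      ];

      return corners.map(p => {
        const q = m.transformPoint(p);
        // For perspective transforms, divide by w to project onto the screen plane
        return { x: q.x / q.w, y: q.y / q.w };
      });
    }

The results are relative to the element's untransformed centre; to turn them into page coordinates, add that centre's position (accumulate offsetLeft/offsetTop up the offsetParent chain and add half the width/height). Note also that a perspective set on a parent (the -webkit-perspective the question mentions) is not part of the element's own computed transform and would have to be folded in as an additional matrix.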
In my SVG application, I have images which the user can extend. I'd like to give controls to the user so they can bend them from the edges as well.
Here is a visual example of what a user could do.
Would love to hear suggestions on how to go about doing this.
One possible idea is to put the image inside a path and add controls for stretching the edges of the path. However, I wonder how to stretch the image inside the path so that it fills the whole available space.
There is no easy way to do what you want. However, I can suggest two possible tricky ways:
(1) Convert it to a chain of images and apply an appropriate skew transform to each one (a rough sketch follows below).
(2) Use an <feDisplacementMap> filter to displace the pixels to the appropriate place.
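Here is a rough sketch of option (1), assuming an existing <svg> element; the function name, slice count and angle step are illustrative only. Each strip clips out one vertical band of the image and skews it a little more than the previous one, so the whole picture appears to bend (adjacent strips may show small seams):

    const NS = "http://www.w3.org/2000/svg";

    function makeBentImage(svg, href, w, h, slices, maxSkewDeg) {
      const sliceW = w / slices;
      for (let i = 0; i < slices; i++) {
        // clipPath that exposes only this vertical strip of the image
        const clip = document.createElementNS(NS, "clipPath");
        clip.id = "slice-clip-" + i;
        const rect = document.createElementNS(NS, "rect");
        rect.setAttribute("x", i * sliceW);
        rect.setAttribute("y", 0);
        rect.setAttribute("width", sliceW);
        rect.setAttribute("height", h);
        clip.appendChild(rect);
        svg.appendChild(clip);

        const img = document.createElementNS(NS, "image");
        img.setAttribute("href", href);          // older browsers want xlink:href instead
        img.setAttribute("width", w);
        img.setAttribute("height", h);
        img.setAttribute("clip-path", "url(#slice-clip-" + i + ")");
        // Skew grows from 0 on the leftmost strip to maxSkewDeg on the rightmost one
        const angle = maxSkewDeg * i / Math.max(1, slices - 1);
        img.setAttribute("transform", "skewY(" + angle + ")");
        svg.appendChild(img);
      }
    }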
This is more of a generic question to be honest, just wondering if anyone has done any sort of research on the subject.
Basically, I am adding event support to a small game engine I am creating for my own personal use. I would like pixel-perfect hover events for 2D objects and am thinking about the best way to do it. Realistically, it would be faster for me personally just to draw my objects onto a transparent canvas and check whether the mouse x/y is over a transparent pixel or not, since I don't have to build a set of points defining the outline of each object. This would also allow me to have holes in an object, and it would still correctly detect whether I hovered over it or not.
What I am wondering is, compared with the methods shown here: How can I determine whether a 2D Point is within a Polygon?
How much slower would my method be than the methods shown there?
I'm currently still learning, so it's not easy for me to implement all of this and just test it myself; it would probably take me ages to get it working correctly and to measure the speeds.
Side note: I would still have a basic bounding box to save it from redrawing and testing every single time.
Checking if a point is in a polygon will 99.999999% of the time be vastly, vastly faster.
To be slower the polygon would need to be extremely complex.
To do the other method you need to use getImageData, and getting image data on the canvas is very slow.
Point-in-polygon algorithms do properly account for holes. Make sure you use one that obeys the non-zero winding rule, because that is what canvas uses (as opposed to the even-odd rule), and you may want compatibility with paths constructed in the canvas, either now or later.
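For comparison, here are minimal sketches of both approaches (the shape data, the offscreen canvas and the drawObject callback are assumptions). isPointInPath uses the non-zero winding rule by default, matching how canvas fills paths:

    // Geometric hit test: rebuild the outline as a path and ask the context.
    function hitTestPath(ctx, points, x, y) {
      ctx.beginPath();
      ctx.moveTo(points[0].x, points[0].y);
      for (let i = 1; i < points.length; i++) ctx.lineTo(points[i].x, points[i].y);
      ctx.closePath();
      return ctx.isPointInPath(x, y);     // pure geometry, no pixel reads
    }

    // Pixel hit test: draw the object alone on an offscreen canvas and read the
    // alpha under the cursor. Handles arbitrary shapes and holes, but getImageData
    // is comparatively slow.
    function hitTestPixels(offCtx, drawObject, x, y) {
      offCtx.clearRect(0, 0, offCtx.canvas.width, offCtx.canvas.height);
      drawObject(offCtx);                                  // caller-supplied draw routine
      return offCtx.getImageData(x, y, 1, 1).data[3] > 0;  // alpha > 0 means a hit
    }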
If I have a canvas with a circle that changes color when clicked, I can use a click event on the canvas element and handle the math for that (distance-formula calculation <= radius). But what if I have two circles that overlap (like a Venn diagram), and I click in the overlap, where only the top circle should change color? If the distance check for the first circle is applied in this case, both circles would change color.
How would I deal with events in the canvas in terms of overlapping objects like the example above? With hopefully a fast/efficient algorithm?
You might want a framework like EaselJS that has a better API for what you're trying to do. A bare-bones canvas 2D context doesn't provide much in terms of display-object / sprite behavior.
Responses above also mention some sort of list to represent layers. I don't think the implementation would be very difficult, just another condition to check for along with the radius.
Canvas isn't really like Flash or like a DOM tree, where things have sort orders or z-indexes. It's a bit more like a flat rasterized image, and you have to rely on other logic in your JavaScript to remember the sequence and stacking order of the things you have drawn.
If you need this kind of interactivity, I've always found it best to use a third-party library (unless it really is just a case of one or two circles which don't do much).
For interactive shape-based JavaScript graphics I would suggest Raphael.js or D3, which are really SVG tools rather than canvas ones, so maybe they're not for you, but they are simple and cross-browser.
There's also processing.js (a JS port of the Processing Java library), which feels a bit like Flash and, again, can track all of the levels and objects. There are a ton of others, but that's another topic.
If it's super simple, the options might be:
Hold the coordinates of all shapes/elements composited on the canvas inside an object or array which also tracks their z-index/sort sequence, thereby letting you know what's on top (sketched after this list).
Use the image data at the mouse coordinates of the click to establish what has been clicked.
Use multiple canvases composited on each other and let the DOM do the work for the click events.
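A minimal sketch of the first option, for the two-circle example (the canvas variable and the redraw routine that repaints the array in order are assumptions):

    const circles = [
      { x: 120, y: 100, r: 50, color: "tomato" },    // drawn first, so it sits at the bottom
      { x: 170, y: 100, r: 50, color: "steelblue" }  // drawn last, so it sits on top
    ];

    canvas.addEventListener("click", function (e) {
      const rect = canvas.getBoundingClientRect();
      const mx = e.clientX - rect.left, my = e.clientY - rect.top;
      // Walk from the topmost shape (last drawn) downwards and stop at the first hit
      for (let i = circles.length - 1; i >= 0; i--) {
        const dx = mx - circles[i].x, dy = my - circles[i].y;
        if (dx * dx + dy * dy <= circles[i].r * circles[i].r) {
          circles[i].color = "gold";   // only the topmost circle under the click changes
          redraw();                    // assumed routine that repaints the array in order
          break;
        }
      }
    });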
I've seen "The Scale of the Universe 2" and I just want to know whether it can be done with JavaScript, jQuery, or HTML5 canvas.
If you click an item (for example, the "Human"), an info box pops out beside it.
I searched here for three days to see whether someone had asked a similar question, but I only found Google Maps-like behavior, where you can zoom in at the cursor position.
Actually, I want to make a "timeline"-like effect, like the Time Machine recovery interface on Mac OS X.
A fixed zoom position, not like a Google Maps zoom where you can pan and zoom anywhere.
Can I put the images and text (for example, the "Human") in a div?
Are there available articles/tutorials about this?
Options:
JavaScript
jQuery
HTML5 canvas with CSS3 transforms, scrolling along the Z-axis so you can zoom in/out
Flash/Flex (though I don't want to use a lot of CPU, because I need it at a large resolution or in full screen)
It is possible to implement an infinite zoom in an HTML canvas; I implemented a proof of concept whose source code is available, and you can test it live.
It is actually quite tricky to implement, and most probably you'll need to use big decimals.
The approach I followed uses the same coordinate space as d3-zoom. Basically, you have three coordinates: x, y and k. k is the zoom factor; if k is 2, everything is doubled in size.
Now, if you want to do infinite zoom, it is very easy to reach a k of 2^64 (64 doublings), but that is already outside float64 precision and you'll get a lot of artifacts: translating the image is not smooth, or you don't zoom in exactly where you want.
One way to avoid those artifacts is to use coordinates that have unbounded length, for example BigIntegers. To make my implementation easy and compatible with d3-zoom, I used big decimals instead, but I had to implement my own BigDecimal library: essentially infinite precision on the integer part and 32 bits of precision on the fractional part. Of course, you also need to adapt the zooming library to support BigDecimals. Moreover, in the case of d3-zoom, a lot of calculations were done in the initial coordinate space (k = 1), but division of floats always carries an error, and it too becomes perceivable once you are deep enough; to avoid that, you need to do all computations locally.
It might sound like a lot of hassle to insist on using the d3-zoom library, but zooming UX is actually tricky, especially when you combine interactions at different k: you'll need to consider scrolling, zooming on the phone, double tapping...
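For reference, here is the basic x/y/k coordinate space driven by d3-zoom on a canvas (plain float64 here; the BigDecimal layer described above would replace the arithmetic inside draw() once k grows large). This assumes d3 v6+ and an existing <canvas> element:

    const canvas = d3.select("canvas");
    const ctx = canvas.node().getContext("2d");

    canvas.call(d3.zoom().on("zoom", (event) => draw(event.transform)));

    function draw(t) {                // t has .x, .y and .k
      ctx.setTransform(1, 0, 0, 1, 0, 0);
      ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
      ctx.translate(t.x, t.y);        // pan
      ctx.scale(t.k, t.k);            // zoom: k = 2 means everything is doubled
      // ...draw the scene in world coordinates here...
      ctx.fillRect(0, 0, 100, 100);
    }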
If you want to use SVG transformations instead, you'll need to fake it: introduce nodes only once they are barely visible and let them scale from there. Most probably you'll also need to fake it at the other end, removing nodes when they become too big, to avoid artifacts there.
There is no true infinite zoom. However, you can zoom in/out of an SVG image in an HTML5 canvas.
SVG supports affine transformations. You can set the required zoom/pan in the affine transform and show the relevant areas. The behavior/listeners can be implemented in JavaScript, and the SVG can be rendered on an HTML5 canvas.
As a starting point you can look at this example: http://www.html5canvastutorials.com/labs/html5-canvas-scaling-a-drawing-with-plus-and-minus-buttons/
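A tiny sketch of that idea with the fixed zoom point the question asks about: scale the context about one anchor (the canvas centre here) and redraw an SVG image with drawImage. The file name is a placeholder, and the SVG should have an explicit width/height so drawImage knows its size:

    const img = new Image();
    img.src = "universe.svg";   // placeholder SVG file

    // Redraw at zoom factor k, scaling about the fixed anchor point (cx, cy)
    function drawZoomed(ctx, k) {
      const cx = ctx.canvas.width / 2, cy = ctx.canvas.height / 2;
      ctx.setTransform(1, 0, 0, 1, 0, 0);                      // reset, then clear
      ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
      ctx.setTransform(k, 0, 0, k, cx - k * cx, cy - k * cy);   // scale about (cx, cy)
      ctx.drawImage(img, 0, 0);
    }

    // e.g. call from a wheel or button handler once the image has loaded:
    // img.onload = () => drawZoomed(myCanvas.getContext("2d"), 2);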
This is totally doable in HTML5. Actually, any system able to display and zoom images should be able to do it. It's not one big image being zoomed; it's a large number of images being zoomed (for instance, the initial human is an image, which is scaled and moved out when you zoom in or out). The idea is splendid, but I don't really see any great technical feat in it. As long as you correctly limit the number of images being resized and bitmapped, it should keep a decent FPS rate.
The second part of the question is: which JavaScript library is better/easier for manipulating images? I won't actually be drawing any shapes or anything. Other info: I'll be using jQuery and don't need to support all browsers, just WebKit.
Edit:
More information: the current design is to layout/draw several rows/columns of images in a grid-like layout, with the image in the center being in "focus" (a little larger, with a border or something and some text next to it). The tricky thing is that we want the whole canvas of images to appear to slide/glide over to bring another random image into focus. So obviously the number of images in this grid needs to exceed what is visible in the viewport so that when the transition occurs there are always images occupying the canvas. Other than moving the images around, I won't be blurring them or otherwise modifying them. Eventually we will add user interactions like clicking/touching on a visible image to bring it to focus manually.
Let me know if this is not clear or still confusing.
I ran across scripty2 which seems like an alternative to using canvas/SVG for my purposes. I also started farting around with EaselJS last night, and it seems like this might work, but I'm wondering if it'll end up being more work/complex than just using standard HTML/CSS and a tool like Scripty2 to aid with animations and click/touch events. Just looking for any suggestions. Thanks!
The answer depends on your manipulation and animation.
If it's just translations, CSS wins for speed compared to canvas. I haven't tested, but I feel confident it easily beats SVG for the same sort of thing.
If you're going to be doing non-affine transformations or otherwise messing with the images (e.g. blurring them) you clearly want Canvas.
If you need event handlers per object, you clearly want a retained-mode drawing system like SVG or HTML+CSS. I haven't done enough CSS3 transforms to say how they compare in terms of speed to SVG, but they clearly do not have the robust transformation DOM of SVG.
This is a rather subjective question (or suite of questions) and you haven't yet given sufficient information for a clear answer to be possible.
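For what it's worth, a minimal sketch of the CSS-translation option from the comparison above: keep the whole grid of images in one container and let a transition on its transform do the glide. The selector, tile size and grid dimensions are placeholders:

    const grid = document.querySelector("#image-grid");   // assumed container holding the grid of images
    grid.style.transition = "transform 0.6s ease-in-out";

    // Slide the grid so the tile at (col, row) lands in the centre of the viewport.
    function focusTile(col, row, tileSize) {
      const x = window.innerWidth  / 2 - (col + 0.5) * tileSize;
      const y = window.innerHeight / 2 - (row + 0.5) * tileSize;
      grid.style.transform = "translate3d(" + x + "px, " + y + "px, 0)";
    }

    // e.g. glide a random tile into focus
    focusTile(Math.floor(Math.random() * 10), Math.floor(Math.random() * 10), 200);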