A couple of questions came to mind while studying interaction with Three.js:
1) What are viewport coordinates?
2) How do they differ from client coordinates?
3) How do we arrive at this formula?
var vpx = ( eltx / div.offsetWidth ) * 2 - 1;
var vpy = - ( elty / div.offsetHeight ) * 2 + 1;
// vp -> viewport, eltx/elty -> client coords, div -> the div the WebGL renderer was appended to.
4) Why do we set the third coordinate in the viewport system to a value from 0.5 to 1 when constructing the vector?
I would be really grateful if you could explain these questions and the underlying concepts in detail, or suggest a book to read. It would be best if some 3D diagrams were available for the first question.
Would be really, really thankful.
The 3D scene is rendered inside a canvas container. It can be any size and located anywhere in the layout of an HTML page. Very often WebGL examples and apps are made full screen, so that the canvas fills the whole screen and is effectively the same size as the HTML layout. But that's not always the case: a WebGL scene can be embedded alongside other HTML content, much like an image.
Three.js generally doesn't know or care how the 3D canvas relates to the coordinates and size of the whole HTML page. Inside the WebGL canvas a completely different coordinate system is used, totally independent of screen pixels.
When you want to handle clicks inside the 3D canvas, unfortunately the browser only gives you pixel values counted from the top left corner of the HTML page (eltx and elty). So you first need to convert the HTML mouse coordinates to WebGL coordinates (a vector usable in Three.js). In WebGL coordinates, (0, 0) is the center of the canvas, (-1, +1) is the top left, (+1, -1) the bottom right, and so on, no matter what the size and position of the canvas is.
First you need to take the position of the canvas and subtract it from the mouse coordinates. Your formula does not take that into account; instead it assumes the WebGL container is located at the top left corner of the page (canvas position 0px 0px). That's fine, but if the container is moved, or the HTML body has CSS padding, for example, it won't be accurate anymore.
Second, you need to take the absolute mouse pixel position (adjusted in the previous step) and convert it to a relative position inside the canvas. That's what your formula does. If the mouse x position is 50px and your canvas is 100px wide, the formula gives (50 / 100) * 2 - 1 = 0, which is the screen-space center of the canvas viewport.
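Putting the two steps together, a minimal sketch might look like this (toNdc is just a name I made up; getBoundingClientRect takes care of the canvas position from step one automatically):
function toNdc(event, canvas) {
    // Step 1: subtract the canvas position on the page.
    var rect = canvas.getBoundingClientRect();
    var x = event.clientX - rect.left;
    var y = event.clientY - rect.top;
    // Step 2: scale to the [-1, +1] range; y is flipped so +1 is the top.
    return new THREE.Vector2(
        (x / rect.width) * 2 - 1,
        -(y / rect.height) * 2 + 1
    );
}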
Now you have coordinates that make sense in the Three.js 3D scene.
Related
I have an editor that opens a JXG.Board which I can drag around, etc. The editor maintains the board's aspect ratio, which means I can make the board larger or smaller inside the editor while the original stays the same size. When the size of the board INSIDE the editor matches the size of the actual canvas element, there are no problems.
But as soon as there is a resolution mismatch, the coordinates don't get updated properly, because the coordinates in the editor differ from the actual canvas coordinates. As an extreme example, scaling down so the board is 50% of the original (400 x 300 vs. 800 x 600) results in the origin being offset towards the upper left corner each time the board is loaded into the editor: the origin is taken from the editor and is 50% of the original's, so the origin drifts.
I'm a bit confused about how the coordinate system works. I've tried all sorts of things, such as applying a scale factor, but I just cannot seem to get it working. I figure I need to be using board.origin.usrCoords and somehow mapping that to board.origin.scrCoords, but I cannot seem to get it right!
Basically, what is the correct way to do this mapping? So far I've only used board.origin.scrCoords, and the docs aren't exactly clear on how to work with the board coordinates.
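For reference, this is the kind of mapping I mean; a sketch based on my reading of the JXG.Coords docs (I'm not sure this is even the right class to use):
// Convert a screen/pixel position inside the editor into board (user) coords.
var coords = new JXG.Coords(JXG.COORDS_BY_SCREEN, [pixelX, pixelY], board);
var userX = coords.usrCoords[1]; // usrCoords is homogeneous: [w, x, y]
var userY = coords.usrCoords[2];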
Thank you!
I would like to do the opposite of this tutorial which aligns HTML elements to ones in 3D space by projecting the 3D coordinates back onto the screen.
I am able to use unproject and some vector math to get pretty good results with on-screen locations, provided they are within the camera frustum. This no longer works when I want to use the positions of HTML elements that are off-screen (i.e. places on the page the user would need to scroll to), due to the perspective camera distortion.
How can I fix this?
Given a 2D coordinate (in particular, given an HTML element.offsetTop) and a perspective camera, how do I convert it into 3D space?
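For reference, this is roughly what I'm doing now (a sketch; ndcX and ndcY stand for the element position already converted to normalized device coordinates):
// Unproject a point at z = 0.5 in NDC and intersect the resulting ray
// with the z = 0 plane. Works inside the frustum, breaks off-screen.
var point = new THREE.Vector3(ndcX, ndcY, 0.5);
point.unproject(camera);
var dir = point.sub(camera.position).normalize();
var t = -camera.position.z / dir.z; // distance along the ray to z = 0
var world = camera.position.clone().add(dir.multiplyScalar(t));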
Thank you!
Currently I have an image that needs to be manipulated so that it matches the scale, position, and rotation of a template.
The grey rectangle with a circle in the middle is the template.
The orange rectangle and circle represent the user's input. They need to be rotated, scaled and aligned so they match the grey one. I'm currently stumped on how to proceed. I've no code other than the following.
function align_image()
{
// clever transform alignment code here
}
Bad dog, no biscuit!
The process of aligning the images would normally be done by manual input and judged by eye. I'm hoping to automate this step and align the image to its respective size and position, but leaving the comfort and safety of Photoshop, I'm not sure how to proceed, or even whether this is a trivial matter or one best left alone. The project is web based, currently using JavaScript and three.js.
So if anyone can give me some pointers I'd appreciated it.
I don't code JavaScript, so I can only talk about the algorithm. Generally the best tools for registration are feature-matching methods (SIFT, SURF, ...), but your image is not the kind that has strong features. Now, if you're always dealing with rectangles and circles in your images: find the edges of the rectangle with a Hough transform, compute the angle of those edges (lines), then rotate the image by that angle in the opposite direction.
Then, with the help of a Hough circle detector, find the centers of the circles in the middle of the images, calculate the distance between them, and move the target rectangle to the source circle's position. After the movement, by comparing the radii of the circles, you can resize the target image to match the source rectangle.
All of this is conveniently doable with OpenCV.
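A rough sketch of those steps with opencv.js (the browser build of OpenCV); the canvas id and the Canny/Hough thresholds are placeholders that would need tuning for your images:
let src = cv.imread('inputCanvas'); // placeholder canvas id
let gray = new cv.Mat();
cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY);
// 1. Find the rectangle's edges and take the angle of the strongest line.
let edges = new cv.Mat();
cv.Canny(gray, edges, 50, 150);
let lines = new cv.Mat();
cv.HoughLines(edges, lines, 1, Math.PI / 180, 80, 0, 0, 0, Math.PI);
let theta = lines.data32F[1]; // each line is stored as [rho, theta]
// 2. Find the circle's center and radius.
let circles = new cv.Mat();
cv.HoughCircles(gray, circles, cv.HOUGH_GRADIENT, 1, gray.rows / 2,
                100, 30, 0, 0);
let cx = circles.data32F[0], cy = circles.data32F[1], r = circles.data32F[2];
// Rotate by -theta, translate so the circle centers coincide, then scale by
// the ratio of the two radii (repeat the same measurements on the template).
src.delete(); gray.delete(); edges.delete(); lines.delete(); circles.delete();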
I'm currently involved in a project where it'd be useful to display geographical points as overlays on top of a VIDEO element (size 512x288).
The VIDEO element contains a live broadcast, and I also have a real time feed of bearing, latitude, longitude and altitude that's fed into the site as JavaScript variables.
I also have an array of POI's (Points of Interest) that's included in the site. The POI's are in the following format:
var points = [['Landmark name', 'latitude', 'longitude'], […]];
Every five seconds or so, I want to loop through the array of POI's and check whether any of them are within the video's current viewport; if so, overlay them on top of the VIDEO element.
Could someone point me in the right direction as to what I should be looking at? I assume I have to map the points to a 2D plane using e.g. Mercator projection.
But I'm a bit lost when it comes to mapping the POI's relative pixel position to the video's.
Looking forward to getting some tips!
Having done this before, I'd say the most critical element is to determine the field of view of the camera accurately (at least to the hundredth of a degree) in either the vertical or horizontal direction. Then use the aspect ratio of the video (512/288 ≈ 1.78) to determine the other angle (if needed) via the atan formula. Do not make the common mistake of multiplying the vertical field of view by the aspect ratio to get the horizontal field of view: field of view is angular, aspect ratio is linear. Think of it in terms of setting up a camera in OpenGL, for example, except that your camera is in the real world. Instead of picking the field of view and camera orientation, you are going to have to measure them.
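For example, if you have measured the vertical field of view, the horizontal one has to go through the tangent, because the aspect ratio applies to the image plane rather than to the angles themselves; a quick sketch:
// Horizontal FOV from vertical FOV and the (linear) aspect ratio.
// All angles are in radians.
var aspect = 512 / 288; // ≈ 1.78
function horizontalFov(verticalFov) {
    return 2 * Math.atan(aspect * Math.tan(verticalFov / 2));
}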
You will need to know the attitude of the camera (pan/tilt or pitch/roll/yaw) in order to overlay graphics properly.
You won't need a Mercator projection. I am assuming the field of view of the camera is relatively small (i.e. around 40° horizontally), so you can usually treat the projected surface as a rectangle (technically, it is a small patch of a sphere).
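Once you have the field of view and the camera attitude, placing a POI horizontally reduces to a pinhole projection. A minimal sketch, ignoring camera roll and assuming you have already computed the POI's bearing from the latitude/longitude feed (poiScreenX is just a name I made up):
// Horizontal pixel position of a POI in the video (pinhole camera model).
// All angles are in radians; returns a value in [0, videoWidth] when the
// POI is inside the horizontal field of view.
function poiScreenX(poiBearing, cameraBearing, hFov, videoWidth) {
    var delta = poiBearing - cameraBearing; // angle off the optical axis
    return videoWidth / 2 * (1 + Math.tan(delta) / Math.tan(hFov / 2));
}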
I'm trying to place a circle at 50% of the width of the paper using RaphaelJS. Is this possible without first doing the math (0.5 * pixel width)? I simply want to be able to place an element at 50% of its container's width; is this even possible with the current Raphael API?
Raphael claims to be able to draw vector graphics, and yet it seems everything in the API is pixel-based. How can you draw a vector image using pixels? That seems 100% contradictory.
Likewise, as I understand vector art, it retains its proportions regardless of actual size. Is this not one of the primary reasons to use vector graphics, that it doesn't matter whether it's for screen, print or whatever, it will always be the same scale? Thus, I'm further confused by the need for something like ScaleRaphael; such functionality just seems part and parcel of creating vector graphics. But perhaps I just don't understand vector graphics?
It just doesn't seem like an image created with absolute pixel dimensions, and that cannot be resized natively, qualifies as a vector image. That, or I'm missing a very large chunk of the API here.
Thanks in advance for any help. I've now attempted twice to post this to the RaphaelJS Google Group, but I guess they are censoring it for whatever reason: I've been waiting for it to appear since last week and there's still no sign of my posts (although other new posts are showing up).
Using pixel values to define shape positions/dimensions does not make it a non-vector shape. Take, for instance, Adobe Illustrator: it is a vector product, and yet the properties for each object still show positions and dimensions in pixels.
A basic explanation of vector graphics would be like this, taking a rectangle as an example:
A vector rectangle will have a number of properties such as x, y, width and height, and these properties can be specified in pixels. The difference with vector (as opposed to raster) is that these pixel properties are only used to determine how the shape is drawn, so when the display is rendered (refreshed) the "system" can redraw the shape from the same properties without affecting the quality after a resize. A raster image, however, holds a lot more information (i.e. the exact value of each pixel used to form the shape/rectangle).
If the word "pixel" makes you think it is contradictory, just remember that everything on a computer screen is rendered in pixels. Even vector graphics have to be converted to raster at some point in the graphics pipeline.
If you are worried about having to use a calculation (0.5 * width), just remember that something has to do that calculation, and personally I would happily handle this simple step myself.
After all that, you should just calculate size and position in pixels based on the size of your "paper" element and feed those values into Raphael when creating the shape.
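For example, a minimal sketch ('holder' is a placeholder element id):
// Read the container's size, then compute 50% of the width yourself.
var holder = document.getElementById('holder');
var w = holder.offsetWidth, h = holder.offsetHeight;
var paper = Raphael(holder, w, h);
paper.circle(w * 0.5, h * 0.5, 20); // circle centered at 50% of the paper width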