I'm using Cesium Earth to develop an application for satellite tracking.
Now, the satellite coordinates are in the Earth-Fixed frame and that works OK.
However, I also need to show them in the ECI (inertial) frame, and for that I have to make the Earth rotate.
How can I do that?
I'll start by mentioning that Cesium often uses the name ICRF as a synonym or replacement for ECI, so if you're searching the documentation you'll have better luck looking for ICRF.
The CZML Sandcastle Demo shows some satellites orbiting the Earth with paths shown in the Inertial frame. This is done in the CZML file by doing two things:
1. Set the value "referenceFrame":"INERTIAL" in the position section.
2. All of the actual position values must themselves be expressed in the Inertial frame, not the Fixed frame.
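For example, a minimal CZML packet using the inertial frame might look like the following sketch (the id, epoch, and cartesian values here are made-up placeholders, not taken from the demo):

var czml = [
  { id: "document", name: "Inertial orbit sample", version: "1.0" },
  {
    id: "MySatellite",
    position: {
      referenceFrame: "INERTIAL",   // 1) declare the inertial frame
      epoch: "2012-03-15T10:00:00Z",
      // 2) time-tagged X, Y, Z values, already expressed in the inertial frame
      cartesian: [
        0, 1715000.0, 6500000.0, 0.0,
        60, 1710000.0, 6490000.0, 350000.0
      ]
    }
  }
];
// viewer.dataSources.add(Cesium.CzmlDataSource.load(czml));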
You can tell the path is in Inertial because it is an ellipse. If it were being shown in Earth-fixed, it would look like a spiral, looping crazily around the Earth. As time passes the orbit ellipse should of course remain in the Inertial frame with the stars, not remaining fixed above any one landmass on the Earth.
However, I need to show them also in ECI coordinate frame and for that I have to make the Earth rotate.
Those are two separate problems. In Cesium, the Earth's fixed frame is already rotating (internally) with respect to the ICRF frame. But the camera stays in Earth-fixed (ECF) by default. So the user sees the Earth appear stationary, and the stars and satellite orbits appear to rotate around the Earth. This is actually a valid way to view the system, as if the camera were just stuck on a very tall pole that was attached to the Earth, sweeping through different orbits.
To make the Earth visually rotate on-screen as time passes, you have to update the Camera's position to keep it stationary in the ICRF frame, as opposed to the default fixed frame.
The Camera Sandcastle Demo has a live example of this. Click the dropdown and select View in ICRF from the list. The code for this begins around line 119 in the live-edit window on the left side:
function icrf(scene, time) {
  // Only meaningful in 3D mode; 2D and Columbus View don't rotate this way.
  if (scene.mode !== Cesium.SceneMode.SCENE3D) {
    return;
  }
  // May return undefined if the data needed for the transform isn't loaded yet.
  var icrfToFixed = Cesium.Transforms.computeIcrfToFixedMatrix(time);
  if (Cesium.defined(icrfToFixed)) {
    var camera = viewer.camera;
    var offset = Cesium.Cartesian3.clone(camera.position);
    // Re-orient the camera's reference frame so it stays fixed in ICRF.
    var transform = Cesium.Matrix4.fromRotationTranslation(icrfToFixed);
    camera.lookAtTransform(transform, offset);
  }
}
viewer.scene.postUpdate.addEventListener(icrf);
This code just updates the camera's position as time passes, such that the camera appears to be stationary in the ICRF frame with the stars and the satellite orbits, and the Earth itself is shown to rotate.
Edit 2: I found what I was looking for here. The near and far planes, together with the field of view, bound the camera's visible volume (the view frustum).
I have a Perspective Camera and multiple sets of points (SphereGeometry) in the space, and I want to define a bounding box according to what is visible in the field of view, so that I only need to load the points that are currently visible according to the camera settings.
My problem when defining the bounding box is that I don't have a reference point between the camera position and what I'm currently looking at. For example, if the camera is at position v1(20, 20, 20) and the midpoint of my set of points is at v2(5, 5, 5), then I can define the initial point of my bounding box as this midpoint v2(5, 5, 5) and create a bounding box (square) with size (v1-v2)/2. But once I start moving the camera I lose this initial reference point, and I don't know how to obtain a distance parameter to define a proper bounding box of "what is currently visible".
Every time I move the camera I need to know the size of the bounding box and its position, so that it represents "what is currently visible" in the field of view as accurately as possible.
One possible solution could be to translate the initial point v2(5, 5, 5) along with every camera movement. Is there any way to do this, or something similar?
Edit:
So far I've been able to replicate @TheJim01's code in my component, but either this solution is not what I was looking for or I'm not able to understand it properly. I'm adding some images below to better explain what is going on.
In the next image I have a set of points of various colors; the 8 brown points represent the worldBoxCorners before performing a worldToLocal transformation, which fits the whole space as it should.
Next, I render worldBoxCorners after the transformation. I do not understand how the bounding box is moved or how I can use it.
What I'm looking for is, for example if I zoom in on the greener zone of the space, to obtain a bounding box like the one in the last image.
The point is to only load the points that are within the field of view, but for that I need to define a bounding box. When I say loading the points, I mean that only the points that are visible should be loaded from the backend (not talking about rendering here).
This solution provides a perfect bounding box for the currently loaded space. What I'm missing is how to define a new bounding box for that space when the camera parameters change (zoom in/out or rotation).
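Based on the frustum idea from Edit 2, here is a minimal sketch of how the visibility test could be recomputed whenever the camera changes (it assumes a THREE.PerspectiveCamera named camera and an array allPoints of objects with a position Vector3, which are placeholders for your own names):

var frustum = new THREE.Frustum();
var projScreenMatrix = new THREE.Matrix4();

function visiblePoints() {
  camera.updateMatrixWorld();
  // Combine projection and view matrices to get the camera's frustum in world space.
  projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
  // In older Three.js releases this method is called setFromMatrix instead.
  frustum.setFromProjectionMatrix(projScreenMatrix);
  return allPoints.filter(function (p) {
    return frustum.containsPoint(p.position);
  });
}

// Call visiblePoints() after every zoom or rotation, and request only those
// points from the backend.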
I am working on rendering a 3D world in a 2D space. I found this article: https://en.m.wikipedia.org/wiki/3D_projection. In the perspective projection sub-section, it talks about "the viewer's position relative to the display surface", which is represented by e. Where is e? Is it where the viewer is looking (the center of the screen), the viewer's actual position relative to the screen (and if so, how is that obtained), or something completely different?
The position of e depends on the coordinate system (space) we consider the camera to be in.
In world space e can have any coordinates; in view space or screen space it is always located at the origin.
But the thing is that in computer graphics there's no such thing as a camera (the same goes for the viewer, the eye, or e from your article). Transforming (rotating, translating, or scaling) the camera actually means applying the same transformations to the whole scene, just with opposite values. For instance, to rotate the camera around the y axis by alpha radians, you rotate the scene around the same axis by -alpha radians. That way the camera always stays in the same position, emulating a real-world camera where the scene stays in place and the camera keeps moving.
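Here is a minimal sketch of that idea in plain JavaScript, with no library assumed (the names and values are just for illustration):

// A "camera" rotated by +alpha about the y axis is equivalent to rotating
// every scene point by -alpha about the same axis and leaving the camera
// (and e) at the origin.
function rotateY(p, angle) {
  var c = Math.cos(angle), s = Math.sin(angle);
  return { x: c * p.x + s * p.z, y: p.y, z: -s * p.x + c * p.z };
}

var alpha = Math.PI / 6;                  // how much the camera "rotates"
var worldPoint = { x: 1, y: 0, z: 2 };

// View-space position of the point: apply the opposite rotation to the scene.
var viewPoint = rotateY(worldPoint, -alpha);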
I'm trying to achieve a permanent head-coupled perspective without using the full headtrackr library. My head won't be moving, but it won't be directly in front of the screen.
I have a little demo that you can download and run with python -m SimpleHTTPServer 8000
The code is adapted mainly from this headtrackr example and part of the headtrackr source.
My expectations are based off this diagram:
In the third image, I imagine slightly swiveling my monitor counter-clockwise from above. This should be equivalent to reducing Z and making X less than zero. I expect my monitor to show the middle image, but instead I see something like this:
I think the "window" I'm looking through is the XY-plane, but shouldn't it stretch like the middle orange rectangle in the first diagram? Here's another window that stays fixed: http://kode80.com/2012/04/09/holotoy-perspective-in-webgl/ to see what I mean by "window."
Are off-axis perspective and head-tracking unrelated? How do I get a convincing illusion of off-axis perspective in THREE.js?
I think you can accomplish what you're aiming for with setViewOffset. Your diagram looks a little off to me. Perhaps it's because there is no cube in the off-axis projection, but I think the point is that the frustum should remain framed on a fixed point without rotating the camera, which would introduce perspective distortion.
To accomplish this with setViewOffset, I would set the fullWidth and fullHeight to some extra-large size. The view offset will then be a window into that oversized view. As the user moves, that window is offset in the opposite direction from the viewer.
http://threejs.org/docs/#Reference/Cameras/PerspectiveCamera
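Here is a hedged sketch of that idea (it assumes an existing THREE.PerspectiveCamera named camera and a normalized head position in [-1, 1]; the oversize factor of 3 is arbitrary):

var fullWidth = window.innerWidth * 3;    // oversized virtual view
var fullHeight = window.innerHeight * 3;

function updateViewOffset(headX, headY) {
  // Move the window opposite to the viewer so the scene appears to stay put.
  var offsetX = (1 - headX) * window.innerWidth;
  var offsetY = (1 - headY) * window.innerHeight;
  camera.setViewOffset(fullWidth, fullHeight,
                       offsetX, offsetY,
                       window.innerWidth, window.innerHeight);
}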
3D Projection mapping can be broken into a corner-pinning texture step and a perspective adjustment step. I think they're somewhat unrelated.
In Three.js you can make a quad surface (composed of two Face3 triangles) and map a texture onto it, then move the corners of the quad in XY (not Z). I don't think this step introduces perspective artifacts beyond what's necessary for minor deformations of a quad texture; I'm talking about banding and nearest-neighbor artifacts, not 3D perspective errors. The size of these artifacts depends on how the projector is shining on the object. If the projector is nicely perpendicular to the surface, very little corner-pinning is necessary. If you're using a monitor instead of a projector, no corner-pinning is necessary at all.
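As a rough sketch of that quad (newer Three.js versions use BufferGeometry rather than the Face3 API mentioned above, and texture here stands for whatever THREE.Texture you project):

var corners = [
  new THREE.Vector3(-1, -1, 0),   // bottom-left
  new THREE.Vector3( 1, -1, 0),   // bottom-right
  new THREE.Vector3( 1,  1, 0),   // top-right
  new THREE.Vector3(-1,  1, 0)    // top-left
];
var geometry = new THREE.BufferGeometry();
geometry.setFromPoints(corners);
geometry.setIndex([0, 1, 2, 0, 2, 3]);   // two triangles forming the quad
geometry.setAttribute('uv', new THREE.Float32BufferAttribute([0, 0, 1, 0, 1, 1, 0, 1], 2));
var mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));

// Corner-pinning: nudge a corner in XY only (never Z) and flag the update.
var pos = geometry.getAttribute('position');
pos.setX(2, pos.getX(2) + 0.1);           // move the top-right corner slightly
pos.needsUpdate = true;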
Next is the perspective adjustment step. You have to adjust the content of the texture based on where the user is in relation to the real-life surface. I believe you can do this with an XYZ distance from the physical viewer to the center of the screen surface, a scaling factor between pixels and real-life size, and the pixel dimensions of the surface.
In my demo, the blueish faces of my cubes point in the positive Z direction. When I rotate my monitor in the real-life world, they continue to point in the positive Z direction of their monitor world. The diagram that I posted is a little misleading because the orange box in the middle picture is locally rotated to compensate for the rotating of the real-life monitor world. That orange box's front face is no longer pointing exactly in the positive Z direction of its monitor world.
Outside of Three.js, in Processing there are some techniques for projection mapping.
This one may be the simplest, although I haven't tried it myself: http://blogs.bl0rg.net/netzstaub/2008/08/24/wiimote-headtracking-in-processing/
SurfaceMapper for Processing has support for real-life curved surfaces (not just flat rectangles), but it only works for Processing before Processing 2.0.
If anyone develops a SurfaceMapper library for Three.js that would be really cool! I'd love to design a virtual world, put cameras in the world, have each camera consider real-life viewer perspective, and then put those rendered textures on real-life displays.
You need to adjust the perspective matrix. It's normally built with symmetric -left/+right and -bottom/+top frustum bounds; making those bounds asymmetric will produce the off-axis effect you are looking for.
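A hedged sketch of such an asymmetric (off-axis) frustum, assuming the viewer position eye is measured relative to the screen center and screenHalfW/screenHalfH are the screen's half extents in the same units as near (in very old Three.js versions the call is makeFrustum(left, right, bottom, top, near, far) instead):

function offAxisProjection(camera, eye, screenHalfW, screenHalfH, near, far) {
  // Project the screen edges onto the near plane as seen from the eye.
  var scale = near / eye.z;
  var left   = (-screenHalfW - eye.x) * scale;
  var right  = ( screenHalfW - eye.x) * scale;
  var bottom = (-screenHalfH - eye.y) * scale;
  var top    = ( screenHalfH - eye.y) * scale;
  camera.projectionMatrix.makePerspective(left, right, top, bottom, near, far);
  // Note: calling camera.updateProjectionMatrix() afterwards would overwrite this.
}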
A couple of questions came up while studying interaction with Three.js:
1) What are viewport coordinates?
2) How do they differ from client coordinates?
3) How do we arrive at this formula?
var vpx = ( eltx / div.offsetWidth ) * 2 - 1;
var vpy = - ( elty / div.offsetHeight ) * 2 + 1;
// vp->viewport, eltx->client coords,div->The div where webGL renderer has been appended.
4) Why do we take the 3rd coordinate in the viewport system as 0.5 to 1 when constructing the vector?
I would be really grateful if you could explain these questions and the underlying concepts in detail, or suggest a book to read. 3D diagrams would be especially helpful for the first question.
Would be really really thankful.
The 3D scene is rendered inside a canvas container. It can be any size, and located anywhere in the layout of an HTML page. Very often WebGL examples and apps are made full screen, so that the canvas fills the whole screen and is effectively the same size as the HTML layout. But that's not always the case, and a WebGL scene can be embedded alongside other HTML content, much like an image.
Three.js generally doesn't care or know about how the 3D canvas relates to the coordinates and size of the whole HTML page. Inside the WebGL canvas a completely different coordinate system, totally independent of screen pixels, is used.
When you want to handle clicks inside the 3D canvas, unfortunately the browser only gives you pixel values counted from the top left corner of the HTML page (eltx and elty). So you need to first convert the HTML mouse coordinates to WebGL viewport coordinates (a vector usable in Three.js). In these coordinates, 0,0 is the center of the canvas, -1,-1 is the bottom left, +1,+1 the top right, and so on, no matter what the size and position of the canvas is.
First you need to take the position of the canvas and subtract that from the mouse coordinates. Your formula does not take that into account, but instead assumes that the WebGL container is located at the top left corner of the page (canvas position 0px 0px). That's OK, but if the container is moved or the HTML body has CSS padding, for example, it won't be accurate anymore.
Second, you need to take the absolute mouse pixel position (adjusted in the previous step) and convert it to a relative position inside the canvas. That's what your formula does. If the mouse x position is 50px and your canvas is 100px wide, the formula gives (50/100) * 2 - 1 = 0, which is the horizontal center of the canvas viewport.
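Putting both steps together, a minimal sketch could look like this (renderer.domElement is assumed to be the WebGL canvas):

renderer.domElement.addEventListener('click', function (event) {
  // Step 1: mouse position relative to the canvas, not the page.
  var rect = renderer.domElement.getBoundingClientRect();
  var eltx = event.clientX - rect.left;
  var elty = event.clientY - rect.top;

  // Step 2: convert to viewport coordinates in [-1, 1].
  var vpx = (eltx / rect.width) * 2 - 1;
  var vpy = -(elty / rect.height) * 2 + 1;

  var vector = new THREE.Vector2(vpx, vpy);   // usable e.g. with Raycaster.setFromCamera
});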
Now you have coordinates that make sense in the Three.js 3D scene.
I'm currently involved in a project where it'd be useful to display geographical points as overlays on top of a VIDEO element (size 512x288).
The VIDEO element contains a live broadcast, and I also have a real time feed of bearing, latitude, longitude and altitude that's fed into the site as JavaScript variables.
I also have an array of POI's (Points of Interest) that's included in the site. The POI's are in the following format:
var points = [['Landmark name', 'latitude', 'longitude'], […]];
Every five seconds or so, I want to loop through the array of POI's, and check to see if any of them are within the video's current viewport - and if true, overlay them on top of the VIDEO element.
Could someone point me in the right direction as to what I should be looking at? I assume I have to map the points to a 2D plane using e.g. Mercator projection.
But I'm a bit lost when it comes to mapping a POI's relative pixel position onto the video.
Looking forward to getting some tips!
Having done this before, the most critical element is to determine the field of view of the camera accurately (at least to the hundredth of a degree) in either the vertical or horizontal direction. Then use the aspect ratio of the video (512/288 = 1.78) to determine the other angle, if needed, using the tan/atan relationship (do not make the common mistake of multiplying the vertical field of view by the aspect ratio to get the horizontal field of view: field of view is angular, aspect ratio is linear). Think of it as setting up a camera in OpenGL, for example, except your camera is in the real world. Instead of picking the field of view and camera orientation, you are going to have to measure them.
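One common form of that relationship, sketched in JavaScript (the 30° vertical field of view is just an assumed example):

var aspect = 512 / 288;                        // ≈ 1.78
var vFov = 30 * Math.PI / 180;                 // measured vertical FOV, in radians
var hFov = 2 * Math.atan(Math.tan(vFov / 2) * aspect);   // horizontal FOV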
You will need to know the attitude of the camera (pan/tilt or pitch/roll/yaw) in order to overlay graphics properly.
You won't need a Mercator projection. I am assuming that the field of view of the camera is relatively small (i.e. around 40° horizontal), so you can usually treat the projected surface as a rectangle (technically, it is a small patch of a sphere).
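As a rough sketch of the horizontal placement under those assumptions (a level camera, a small field of view, and placeholder names like camLat, bearingDeg, and hFov for your live-feed variables):

function poiToPixelX(camLat, camLon, bearingDeg, hFov, poiLat, poiLon) {
  var toRad = Math.PI / 180;
  // Initial bearing from the camera to the POI (standard great-circle formula).
  var dLon = (poiLon - camLon) * toRad;
  var y = Math.sin(dLon) * Math.cos(poiLat * toRad);
  var x = Math.cos(camLat * toRad) * Math.sin(poiLat * toRad) -
          Math.sin(camLat * toRad) * Math.cos(poiLat * toRad) * Math.cos(dLon);
  var poiBearing = Math.atan2(y, x);

  // Angle between where the camera points and the POI, wrapped to [-pi, pi].
  var delta = poiBearing - bearingDeg * toRad;
  delta = Math.atan2(Math.sin(delta), Math.cos(delta));
  if (Math.abs(delta) > hFov / 2) return null;    // outside the viewport

  // Map the angle onto the 512px-wide video with a perspective (tan) mapping.
  return 256 + 256 * Math.tan(delta) / Math.tan(hFov / 2);
}

// The vertical pixel position would be computed the same way from the camera's
// tilt and the POI's altitude and distance.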