Canvas 3D viewer's perspective - javascript

I am working on rendering a 3D world in a 2D space. I found this article: https://en.m.wikipedia.org/wiki/3D_projection. In the perspective projection sub-section, it talks about "the viewer's position relative to the display surface", which is represented by e. Where is e? Is it where the viewer is looking (the center of the screen), the viewer's actual position relative to the screen (and if so, how is that obtained), or something completely different?

The position of e depends on which coordinate system (space) we consider the camera to be in. In world space e can have any coordinates, but in view space or screen space it is always located at the origin.
The thing is, in computer graphics there is no such thing as a camera (the same goes for the viewer, the eye, or e from your article). Transforming (rotating, translating, or scaling) the camera actually means applying the same transformations to the whole scene, just with opposite values. For instance, to rotate the camera around the y axis by alpha radians, you rotate the scene around the same axis by -alpha radians. The camera thus always stays in the same position, emulating a real-world camera, where the scene stays in place while the camera keeps transforming.
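As a minimal sketch of that idea (plain JavaScript; the point format and function name here are just illustrative, not from the question):

    // Rotating the "camera" by +alpha around the y axis is implemented by
    // rotating every scene point by -alpha around y; the camera never moves.
    function rotateSceneY(points, alpha) {
        var c = Math.cos(-alpha);
        var s = Math.sin(-alpha);
        return points.map(function (p) {
            return { x: c * p.x + s * p.z, y: p.y, z: -s * p.x + c * p.z };
        });
    }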

Related

Cesium Earth: Show satellites in ECI coordinate system

I'm using Cesium Earth to develop an application for satellite tracking.
Currently the satellite coordinates are in the Earth-fixed system, and that works fine.
However, I need to show them in the ECI coordinate frame as well, and for that I have to make the Earth rotate.
How to do that?
I'll start by mentioning that Cesium often uses the name ICRF as a synonym or replacement for ECI, so if you're searching the documentation you'll have better luck looking for ICRF.
The CZML Sandcastle Demo shows some satellites orbiting the Earth with paths shown in the Inertial frame. This is done in the CZML file by doing two things:
Set the value "referenceFrame":"INERTIAL" in the position section
All of the actual position values must themselves be expressed in Inertial, not Fixed frame.
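For reference, a minimal CZML sketch of those two points (the id, epoch, and sample values below are placeholders, not taken from the demo):

    [
        { "id": "document", "version": "1.0" },
        {
            "id": "sample-satellite",
            "position": {
                "referenceFrame": "INERTIAL",
                "epoch": "2012-08-04T10:00:00Z",
                "cartesian": [
                    0,  4000000.0, 5500000.0,      0.0,
                    60, 3900000.0, 5560000.0, 120000.0
                ]
            },
            "path": { "show": true }
        }
    ]

Each cartesian sample is [seconds-after-epoch, x, y, z], and because of the referenceFrame setting those x/y/z values are interpreted as inertial coordinates.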
You can tell the path is in Inertial because it is an ellipse. If it were being shown in Earth-fixed, it would look like a spiral, looping crazily around the Earth. As time passes the orbit ellipse should of course remain in the Inertial frame with the stars, not remaining fixed above any one landmass on the Earth.
However, I need to show them also in ECI coordinate frame and for that I have to make the Earth rotate.
Those are two separate problems. In Cesium, the Earth's fixed frame is already rotating (internally) with respect to the ICRF frame. But the camera stays in Earth-fixed (ECF) by default. So the user sees the Earth appear stationary, and the stars and satellite orbits appear to rotate around the Earth. This is actually a valid way to view the system, as if the camera were just stuck on a very tall pole that was attached to the Earth, sweeping through different orbits.
To make the Earth visually rotate on-screen as time passes, you have to update the Camera's position to keep it stationary in the ICRF frame, as opposed to the default fixed frame.
The Camera Sandcastle Demo has a live example of this. Click the dropdown and select View in ICRF from the list. The code for this begins around line 119 in the live-edit window on the left side:
function icrf(scene, time) {
    if (scene.mode !== Cesium.SceneMode.SCENE3D) {
        return;
    }
    // Rotation from ICRF to Earth-fixed; undefined until the required data is loaded
    var icrfToFixed = Cesium.Transforms.computeIcrfToFixedMatrix(time);
    if (Cesium.defined(icrfToFixed)) {
        var camera = viewer.camera;
        var offset = Cesium.Cartesian3.clone(camera.position);
        var transform = Cesium.Matrix4.fromRotationTranslation(icrfToFixed);
        // Re-express the camera's position in the rotated frame on every frame
        camera.lookAtTransform(transform, offset);
    }
}
viewer.scene.postUpdate.addEventListener(icrf);
This code just updates the camera's position as time passes, such that the camera appears to be stationary in the ICRF frame with the stars and the satellite orbits, and the Earth itself is shown to rotate.

Projection math in software 3d engine

I'm working on writing a software 3d engine in Javascript, rendering to 2d canvas. I'm totally stuck on an issue related to the projection from 3d world to 2d screen coordinates.
So far, I have:
A camera Projection and View matrix.
Transformed Model vertices (v x ModelViewProj).
Take the result and divide both x and y by the z (perspective) coordinate, giving viewport coords.
Scale the resulting 2D vector by the viewport size.
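In code, that pipeline looks roughly like this (a sketch under my own naming; mvp and the multiply helper are assumptions, not from the question):

    // Transform a vertex by the combined ModelViewProjection matrix,
    // divide by the depth coordinate, then scale to viewport pixels.
    function projectVertex(v, mvp, viewportW, viewportH) {
        var t = multiplyMatrixVector(mvp, v); // hypothetical 4x4-by-vector helper
        var ndcX = t.x / t.z;                 // the perspective divide described above
        var ndcY = t.y / t.z;
        return {
            x: (ndcX + 1) * 0.5 * viewportW,
            y: (1 - ndcY) * 0.5 * viewportH   // flip y so +y points up in world space
        };
    }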
I'm drawing a plane, and everything works when all 4 vertices (2 tris) are on screen. When I fly my camera over the plane, at some point the off-screen vertices transform to the top of the screen. That point seems to coincide with the perspective coordinate going above 1. I have an example of what I mean here - press forward to see it flip:
http://davidgoemans.com/phaser3d/
The code isn't minified, so web dev tools can inspect it easily, but I've also put the source here:
https://github.com/dgoemans/phaser3dtest/tree/fixcanvas
Thanks in advance!
Note: I'm using Phaser, not really to do anything at the moment, but my plan is to mix 2D and 3D. It shouldn't have any effect on the 3D math.
When projecting points which lie behind the virtual camera, the results are projected in front of it, mirrored: x/z is just the same as (-x)/(-z).
In rendering pipelines, this problem is addressed by clipping algorithms, which intersect the primitives with the clipping planes. In your case a single clipping plane, lying somewhere in front of the camera, is enough (rendering pipelines usually use six clipping planes to describe a complete viewing volume). You must prevent the situation where a single primitive has at least one point in front of the camera and at least one behind it (and you must discard primitives which lie completely behind the camera, but that is rather trivial). Clipping must be done before the perspective divide, which is also why the space the projection matrix transforms into is called clip space.
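A minimal sketch of such a near-plane clip for one edge (plain JavaScript; assumes view-space points where visible geometry has z > near, matching the x/z divide above):

    // Clip segment a-b against the plane z = near, before the perspective divide.
    // Returns null if the segment is entirely behind the plane.
    function clipToNearPlane(a, b, near) {
        var aIn = a.z >= near;
        var bIn = b.z >= near;
        if (!aIn && !bIn) return null;      // completely behind: discard
        if (aIn && bIn) return [a, b];      // completely in front: keep as-is
        var t = (near - a.z) / (b.z - a.z); // parametric intersection with the plane
        var hit = {
            x: a.x + t * (b.x - a.x),
            y: a.y + t * (b.y - a.y),
            z: near
        };
        return aIn ? [a, hit] : [hit, b];
    }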

Three.js rotate everything except camera

I have given up trying to orbit a camera around my scene in Three.js and have now decided to revert to doing what I used to do in XNA: just rotate everything except the camera.
The reason I have given up is because I cannot get the camera to orbit properly 360 degrees in all the axis, it starts inverting after going over the top or under the bottom. Using THREE.OrbitControls does not solve this because it merely restricts rotation in the problematic axis instead of fixing the problem.
My problem is now getting this other rotation approach working. What I have done is put all objects except the camera into another object, "rotSection", and I now just rotate that object. This works, but rotation is always performed around the rotation object's local (0, 0, 0) position, which seems to stay in one corner, whereas I would like to rotate around the centre of my world and not around the edge. I have tried to centre rotSection relative to the scene, but it still rotates around its corner and not its centre. Any idea how I can rotate an Object3D around a certain point?
"The engines don't move the ship at all. The ship stays where it is and the engines move the universe around it." (Futurama)
The camera in 3D technically never rotates; everything else is rotated and moved in order to bring it into the camera's local space. You don't have to do any tricks to achieve this; it is the core of a 3D engine: setting the matrices, setting up the shaders, and doing the correct transforms. Three.js does this for you.
Perhaps you should look into quaternions, specifically the axis-angle conversion to quaternions? THREE.OrbitControls won't do what you want.
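One common approach to rotating an Object3D around an arbitrary point is to re-parent it under a pivot object placed at that point. A sketch (scene and rotSection are assumed to exist already; the centre coordinates are placeholders):

    var pivot = new THREE.Object3D();
    pivot.position.set(centerX, centerY, centerZ); // the world centre to orbit around
    scene.add(pivot);
    pivot.add(rotSection);
    // compensate so rotSection keeps its current world position
    rotSection.position.sub(pivot.position);
    // rotating the pivot now rotates everything around the chosen point
    pivot.rotation.y += 0.01;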

How to Detect collision of Cube Mesh with browser boundary in ThreeJS

I am new to 3D games and Three.js. As shown in the image above, my Mesh is at the browser's left boundary, and at that point the Mesh position is -1035 instead of 0.
That means mesh.position.x in Three.js is not a browser-window X pixel coordinate, so what is mesh.position.x? Is there a calculation to convert the browser's inner width and height into the same units as mesh.position.x? And how can I detect a collision with the browser boundary?
Three.js uses a 3D spatial coordinate system where the X and Z axes are horizontal and the Y axis is vertical. The origin (0, 0, 0) is just an arbitrary point in that space, and you can "look around" in the 3D world, so the origin might not be on the screen at all. The coordinates of an object in Three.js space are also, for all intents and purposes, in arbitrary units.
You can convert between the 2D screen space and the 3D world space using projectors. Going from 3D to 2D has already been answered, and there is also a nice tutorial on the topic.
So, to find out whether a mesh is on the screen, you should project its 3D position to 2D screen space and then check whether that is outside the size of the window or canvas.
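A sketch of that check using the Vector3.project method available in later Three.js releases (camera, mesh, and renderer are assumed to already exist):

    var pos = mesh.position.clone();
    pos.project(camera); // normalized device coordinates: on-screen x/y lie in [-1, 1]
    var halfW = renderer.domElement.width / 2;
    var halfH = renderer.domElement.height / 2;
    var screenX = pos.x * halfW + halfW;   // pixels from the left edge of the canvas
    var screenY = -pos.y * halfH + halfH;  // pixels from the top (canvas y grows downward)
    var onScreen = Math.abs(pos.x) <= 1 && Math.abs(pos.y) <= 1 && pos.z < 1;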

Relative position of latlon points

I'm currently involved in a project where it'd be useful to display geographical points as overlays on top of a VIDEO element (size 512x288).
The VIDEO element contains a live broadcast, and I also have a real time feed of bearing, latitude, longitude and altitude that's fed into the site as JavaScript variables.
I also have an array of POI's (Points of Interest) that's included in the site. The POI's are in the following format:
var points = [['Landmark name', 'latitude', 'longitude'], […]];
Every five seconds or so, I want to loop through the array of POI's, and check to see if any of them are within the video's current viewport - and if true, overlay them on top of the VIDEO element.
Could someone point me in the right direction as to what I should be looking at? I assume I have to map the points to a 2D plane using e.g. Mercator projection.
But I'm a bit lost when it comes to mapping the POIs' relative pixel positions onto the video.
Looking forward to getting some tips!
Having done this before, the most critical element is to determine the field of view of the camera accurately (at least to the hundredth of a degree) in either the vertical or horizontal direction. Then use the aspect ratio of the video (512/288 = 1.78) to determine the other angle (if needed) using the atan formula below. Do not make the common mistake of multiplying the vertical field of view by the aspect ratio to get the horizontal field of view: field of view is angular, aspect ratio is linear. Think of it in terms of setting up a camera, for example in OpenGL, except your camera is in the real world. Instead of picking the field of view and camera orientation, you are going to have to measure them.
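Concretely, the relationship goes through the tangents of the half-angles (a sketch, angles in radians):

    // Derive the horizontal field of view from the vertical one and the aspect ratio.
    function horizontalFov(verticalFov, aspect) {
        return 2 * Math.atan(Math.tan(verticalFov / 2) * aspect);
    }
    // e.g. a 30 deg vertical FOV with aspect 512/288 gives about 50.9 deg horizontal,
    // not the 53.3 deg a naive 30 * 1.78 multiply would suggest.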
You will need to know the attitude of the camera (pan/tilt or pitch/roll/yaw) in order to overlay graphics properly.
You won't need a Mercator projection. I am assuming that the field of view of the camera is relatively small (i.e. 40 degrees horizontal or so), so you can usually assume the projected surface is a rectangle (technically, it is a small patch of a sphere), as in the sketch below.
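As a rough sketch of that rectangle assumption (a pinhole mapping; the names are mine, and the bearing from the camera to the POI must be computed separately from the lat/lon pairs):

    // Horizontal pixel position of a POI, given the camera's bearing and
    // horizontal field of view (both needed, per the discussion above).
    function poiScreenX(camBearingDeg, poiBearingDeg, hFovRad, videoWidth) {
        var d = (poiBearingDeg - camBearingDeg) * Math.PI / 180; // angular offset
        // offsets beyond +/- hFov/2 fall outside the viewport
        return videoWidth / 2 + (Math.tan(d) / Math.tan(hFovRad / 2)) * (videoWidth / 2);
    }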
