I'm currently involved in a project where it'd be useful to display geographical points as overlays on top of a VIDEO element (size 512x288).
The VIDEO element contains a live broadcast, and I also have a real-time feed of bearing, latitude, longitude, and altitude that's fed into the site as JavaScript variables.
I also have an array of POIs (Points of Interest) that's included in the site. The POIs are in the following format:
var points = [['Landmark name', 'latitude', 'longitude'], […]];
Every five seconds or so, I want to loop through the array of POIs and check whether any of them are within the video's current viewport; if so, overlay them on top of the VIDEO element.
Could someone point me in the right direction as to what I should be looking at? I assume I have to map the points to a 2D plane using, e.g., a Mercator projection.
But I'm a bit lost when it comes to mapping a POI's relative pixel position to the video's.
Looking forward to getting some tips!
Having done this before, I'd say the most critical element is to determine the field of view of the camera accurately (to at least a hundredth of a degree) in either the vertical or horizontal direction. Then use the aspect ratio of the video (512/288 ≈ 1.78) to derive the other angle, if needed, using the atan formula. Do not make the common mistake of multiplying the vertical field of view by the aspect ratio to get the horizontal field of view: field of view is angular, aspect ratio is linear. Think of it in terms of setting up a camera in, for example, OpenGL, except your camera is in the real world. Instead of picking the field of view and camera orientation, you are going to have to measure them.
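For instance, a quick sketch of the conversion (the measured vertical FOV here is a made-up value):

// Derive horizontal FOV from a measured vertical FOV and the aspect ratio.
var aspect = 512 / 288;  // ≈ 1.78
var vFovDeg = 22.5;      // hypothetical measured value
var vFovRad = vFovDeg * Math.PI / 180;
// Half-angle tangents scale linearly with aspect ratio; the angles do not.
var hFovRad = 2 * Math.atan(Math.tan(vFovRad / 2) * aspect);
var hFovDeg = hFovRad * 180 / Math.PI;  // ≈ 38.9°, not the naive 22.5 × 1.78 ≈ 40°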
You will need to know the attitude of the camera (pan/tilt or pitch/roll/yaw) in order to overlay graphics properly.
You won't need a Mercator projection. I am assuming that the field of view of the camera is relatively small (i.e. around 40° horizontal), so you can usually treat the projected surface as a rectangle (technically, it is a small patch of a sphere).
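Once you have the field of view and the camera attitude, the mapping to pixels is plain perspective projection of the angular offsets. A rough sketch, ignoring camera roll and reusing vFovDeg/hFovDeg from above (bearingTo, elevationTo, normalize, and rad are hypothetical helpers you would write against your lat/long/altitude feed):

// All angles in degrees. normalize() wraps to (-180, 180]; rad() converts
// to radians. bearingTo()/elevationTo() compute the great-circle bearing
// and the elevation angle from the camera's position/altitude to the POI.
var dAz = normalize(bearingTo(cam, poi) - cam.bearing); // + is right of centre
var dEl = elevationTo(cam, poi) - cam.pitch;            // + is above centre
// Project onto the 512x288 view plane.
var px = 256 + 256 * Math.tan(rad(dAz)) / Math.tan(rad(hFovDeg / 2));
var py = 144 - 144 * Math.tan(rad(dEl)) / Math.tan(rad(vFovDeg / 2));
var inView = Math.abs(dAz) < 90 && px >= 0 && px < 512 && py >= 0 && py < 288;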
Related
I would like to do the opposite of this tutorial, which aligns HTML elements to ones in 3D space by projecting the 3D coordinates back onto the screen.
I am able to use unproject and some vector math to get pretty good on-screen placement, provided I'm using locations within the camera frustum. This no longer works when I want to use the positions of off-screen HTML elements (i.e. places on the page the user would need to scroll to), due to the perspective camera's distortion.
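Roughly, the working on-screen path looks like this (a sketch; screenX, screenY, and distance are stand-ins for my real inputs):

// Convert a screen position to normalized device coordinates, cast a ray
// from the camera through it, and walk out to the desired distance.
var ndc = new THREE.Vector3(
    (screenX / window.innerWidth) * 2 - 1,
    -(screenY / window.innerHeight) * 2 + 1,
    0.5 // any depth in (0, 1) yields the same ray direction
);
ndc.unproject(camera); // NDC -> world space
var dir = ndc.sub(camera.position).normalize();
var worldPos = camera.position.clone().add(dir.multiplyScalar(distance));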
How can I fix this?
Given a 2D coordinate (in particular, given an HTML element.offsetTop) and a perspective camera, how do I convert it into 3D space?
Thank you!
I'm using Cesium Earth to develop an application for satellite tracking.
Now, the satellite coordinates are in the Earth-Fixed system, and it works OK.
However, I need to show them also in the ECI coordinate frame, and for that I have to make the Earth rotate.
How can I do that?
I'll start by mentioning that Cesium often uses the name ICRF as a synonym or replacement for ECI, so if you're searching the documentation you'll have better luck looking for ICRF.
The CZML Sandcastle Demo shows some satellites orbiting the Earth with paths shown in the Inertial frame. This is done in the CZML file by doing two things:
- Set the value "referenceFrame":"INERTIAL" in the position section
- Express all of the actual position values themselves in the Inertial frame, not the Fixed frame
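For example, a position packet along these lines (abbreviated; the id, epoch, and coordinate values are made up):

{
  "id": "mySatellite",
  "position": {
    "referenceFrame": "INERTIAL",
    "epoch": "2012-08-04T16:00:00Z",
    "cartesian": [
      0.0,   4650397.5, -3390535.5, 3930650.3,
      300.0, 3169722.1, -4331235.7, 4620700.3
    ]
  }
}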
You can tell the path is in Inertial because it is an ellipse. If it were shown in Earth-fixed, it would look like a spiral, looping crazily around the Earth. As time passes, the orbit ellipse should of course remain in the Inertial frame with the stars, not stay fixed above any one landmass on the Earth.
However, I need to show them also in the ECI coordinate frame, and for that I have to make the Earth rotate.
Those are two separate problems. In Cesium, the Earth's fixed frame is already rotating (internally) with respect to the ICRF frame. But the camera stays in Earth-fixed (ECF) by default. So the user sees the Earth appear stationary, and the stars and satellite orbits appear to rotate around the Earth. This is actually a valid way to view the system, as if the camera were just stuck on a very tall pole that was attached to the Earth, sweeping through different orbits.
To make the Earth visually rotate on-screen as time passes, you have to update the Camera's position to keep it stationary in the ICRF frame, as opposed to the default fixed frame.
The Camera Sandcastle Demo has a live example of this. Click the dropdown and select View in ICRF from the list. The code for this begins around line 119 in the live-edit window on the left side:
function icrf(scene, time) {
    // Only applies in 3D mode (not 2D or Columbus view).
    if (scene.mode !== Cesium.SceneMode.SCENE3D) {
        return;
    }
    // May be undefined while the required ICRF orientation data loads.
    var icrfToFixed = Cesium.Transforms.computeIcrfToFixedMatrix(time);
    if (Cesium.defined(icrfToFixed)) {
        var camera = viewer.camera;
        var offset = Cesium.Cartesian3.clone(camera.position);
        var transform = Cesium.Matrix4.fromRotationTranslation(icrfToFixed);
        // Re-aim the camera each frame so it stays put in the ICRF frame
        // while the Earth rotates beneath it.
        camera.lookAtTransform(transform, offset);
    }
}
viewer.scene.postUpdate.addEventListener(icrf);
This code just updates the camera's position as time passes, such that the camera appears to be stationary in the ICRF frame with the stars and the satellite orbits, and the Earth itself is shown to rotate.
I am working on trying to create a basic, grid-based, but performant weather-arrow visualization system.
EDIT 2:
An up-to-date version of the system, using the workflow described below, is here: (Mapbox Tracker)
Usage Instructions:
- Click on Wind icon (on the left)
- Wait for triangles to occupy screen
- Pan time-slider (at the bottom)
As you will observe (especially at larger resolutions, or when panning the time slider quickly), there is quite a performance hit when drawing the triangles.
I would greatly appreciate any advice on where to start: either something in the current API that would help, or ideas on how to tap into the current graphics pipeline with some type of custom buffer, where I would only need to rotate, scale, and recolor triangles already populated in screen space.
I feel as though my specific use-case would greatly benefit from something like this, I really just don't know how to approach it.
I have a naive implementation running using this workflow:
- Create a GeoJSON FeatureCollection source
- Create a fill layer using the data-driven property fill-color
- Data function:
  - Get the map bounds
  - Project sw & ne into screen points (map.project(LatLng))
  - Divide the width and height into portions
  - Loop through the width and height portions:
    - Look up the data
    - Access the data's rotation property
    - Create vertices based on the center point + size
    - Rotate the vertices
    - Create Point objects for the vertices
    - Unproject and wrap each Point: map.unproject(Point).wrap()
    - Create a Feature object and assign the data-driven color
    - Assign the unprojected LatLngs as the Polygon geometry's coordinates
    - Add it to the feature array for the collection
- Call setData on the source (a condensed sketch of this loop follows below)
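Condensed sketch of that loop (lookupRotation(), the grid step, and the triangle size are made-up placeholders for the real data access):

function updateArrows(map) {
    var bounds = map.getBounds();
    var sw = map.project(bounds.getSouthWest());
    var ne = map.project(bounds.getNorthEast());
    var step = 40, size = 10; // px per grid cell and triangle size (assumed)
    var features = [];
    // Screen y grows downward, so ne.y (top) is smaller than sw.y (bottom).
    for (var x = sw.x; x < ne.x; x += step) {
        for (var y = ne.y; y < sw.y; y += step) {
            var rot = lookupRotation(x, y); // hypothetical data lookup, radians
            var c = Math.cos(rot), s = Math.sin(rot);
            // Rotate triangle vertices around the cell centre, then
            // unproject each screen point back to geographic coordinates.
            var ring = [[0, -size], [size * 0.8, size], [-size * 0.8, size]]
                .map(function (v) {
                    var p = new mapboxgl.Point(x + v[0] * c - v[1] * s,
                                               y + v[0] * s + v[1] * c);
                    var ll = map.unproject(p).wrap();
                    return [ll.lng, ll.lat];
                });
            ring.push(ring[0]); // close the polygon ring
            features.push({
                type: 'Feature',
                properties: { color: '#3887be' }, // feeds fill-color
                geometry: { type: 'Polygon', coordinates: [ring] }
            });
        }
    }
    map.getSource('arrows').setData({ type: 'FeatureCollection', features: features });
}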
So while this works, I'm looking for advice for a more performance friendly approach.
What I'm thinking here is whether I can somehow create a custom layer where I only need to draw in screen coordinates to represent the data relative to its LatLng point, so that I can draw colored, scaled, rotated triangles in screen space and then have them update from the data at the new relative LatLng position.
E.g. update some type of mesh on screen instead of having to unproject, update the FeatureCollection source with map.getSource('arrows').setData(d), call requestAnimationFrame(function), etc.
I've done something similar in three.js in other projects, but I would much rather use something more Mapbox-native. Does this sound feasible? Am I going to see a decent performance boost if so?
I've not dealt with raw GL calls before, so I might need a pointer or two in the right direction if it's going to need to get that low level.
EDIT:
Previous implementation using gmaps / three.js: volvooceanrace
(Wait for the button on the left to go from grey to black, then click the top button, which shows a 'wind' label when hovered over, and slide the red time bar underneath to change the data.)
Added screenshot of current working implementation
Mapbox GL Arrows
Not sure what was available in 2016, but a reasonable approach these days might be to use a symbol layer with the icon-rotate data-driven property to rotate each icon based on a property of its data point.
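A minimal sketch, assuming each grid point carries bearing (degrees) and speed properties, and an arrow sprite has already been registered with map.addImage('arrow-icon', ...):

map.addLayer({
    id: 'wind-arrows',
    type: 'symbol',
    source: 'arrows', // the GeoJSON source of grid points
    layout: {
        'icon-image': 'arrow-icon',
        'icon-rotate': ['get', 'bearing'],  // data-driven rotation
        'icon-rotation-alignment': 'map',   // rotate with the map, not the screen
        'icon-allow-overlap': true,
        // scale the arrow with wind speed instead of rebuilding geometry
        'icon-size': ['interpolate', ['linear'], ['get', 'speed'], 0, 0.4, 30, 1.2]
    }
});

Since rotation and size come from feature properties, a time-slider change only needs map.getSource('arrows').setData(...) with updated properties; there is no per-frame unprojection loop in JavaScript.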
I've been trying to find more information about setting the listener orientation using the Web Audio API. I've checked the API documentation, but I'm not completely clear on how it should be used. https://docs.webplatform.org/wiki/apis/webaudio/AudioListener/setOrientation
Describes which direction the listener is pointing in the 3D cartesian coordinate space. Both a front vector and an up vector are provided. In human terms, the front vector represents which direction the person's nose is pointing. The up vector represents the direction the top of a person's head is pointing. These values are expected to be linearly independent (at right angles to each other). The x, y, z parameters represent a front direction vector in 3D space, with the default value being (0,0,-1). The xUp, yUp, zUp parameters represent an up direction vector in 3D space, with the default value being (0,1,0).
I need to use the listener's orientation to help the user determine whether the source sound is behind or in front of them; how could I turn the listener's orientation 90 degrees to the right or left?
Many thanks
First, make sure that you understand that you're rotating the listener, not the source, by doing this. You're basically telling the software to correct for the user not facing their screen (assuming a standard configuration where the screen is "front").
According to the spec, rotating the user 90° would mean changing the "nose" vector (forwardX, forwardY, forwardZ) to 1,0,0 (listener facing right) or -1,0,0 (listener facing left). Facing 45° left would be:
context.listener.forwardX.value = -1; // halfway between front (0, 0, -1)
context.listener.forwardY.value = 0;
context.listener.forwardZ.value = -1; // ...and left (-1, 0, 0)
(the vector will be normalized, of course; length doesn't matter).
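To generalize, a small sketch for an arbitrary rotation in the horizontal plane (this assumes the AudioParam-based listener API; older implementations only expose listener.setOrientation()):

// angle is in radians, positive to the right of the default facing (0, 0, -1).
function faceAngle(listener, angle) {
    listener.forwardX.value = Math.sin(angle);  // +PI/2 -> (1, 0, 0), facing right
    listener.forwardY.value = 0;
    listener.forwardZ.value = -Math.cos(angle); // 0 -> (0, 0, -1), the default
    // The up vector stays at the default (0, 1, 0) for a flat turn.
}
// 90° right: faceAngle(context.listener, Math.PI / 2);
// 45° left:  faceAngle(context.listener, -Math.PI / 4); // ≈ (-0.71, 0, -0.71)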
I'm new to three.js and am trying to set up what amounts to a 2D visualization (for an assortment of layered sprites) using these 3D tools. I'd like some guidance on the PerspectiveCamera() arguments and camera.position.set() arguments. I already have a nudge in the right direction from this answer to a related question, which said to set the z coordinate equal to 0 in camera.position.set(x,y,z).
Below is the snippet of code I'm modifying from one of stemkoski's three.js examples. The parts hanging me up for the moment are the values for VIEW_ANGLE, x, and y. Assuming I want a flat camera view on a plane the size of the screen, how should I assign these variables? I've tried a range of values, but it's hard to tell from the visualization what is happening. Thanks in advance!
var SCREEN_WIDTH = window.innerWidth, SCREEN_HEIGHT = window.innerHeight;
var VIEW_ANGLE = ?, ASPECT = SCREEN_WIDTH / SCREEN_HEIGHT, NEAR = 0.1, FAR = 20000;
camera = new THREE.PerspectiveCamera( VIEW_ANGLE, ASPECT, NEAR, FAR);
scene.add(camera);
var x = ?, y = ?, z = 0;
camera.position.set(x,y,z);
camera.lookAt(scene.position);
UPDATE - perspective vs orthographic camera:
Thanks @GuyGood, I realize I need to make a design choice between the perspective camera and the orthographic camera. I now see that the PerspectiveCamera(), even in this 2D context, would allow for things like parallax, whereas the OrthographicCamera() would render sizes literally (no diminishing with distance) no matter what layer a 2D element is on. I'm inclined to think I'll still use the PerspectiveCamera() for effects such as small amounts of parallax between the sprite layers (so I guess my project is not purely 2D!).
It seems, then, that the main thing is to keep all the sprite layers parallel to the viewing plane, and that camera.position.set() defines the orthogonal viewing line to the center of the field of view. This must be basic for so many folks here; it is such a new world to me!
I think I still have a hard time wrapping my head around the roles of VIEW_ANGLE, x, and y, and the distance between the camera and the near and far viewing planes in a 2D visualization. With the orthographic camera this is pretty immaterial: you just need enough depth to include all the layers you want and a viewing plane that suits the scale of your sprites. With the perspective camera, however, depth and field of view influence the effect of parallax, but are there other considerations as well?
UPDATE 2 - Angle of view and other variables:
After a bit more tooling around in pursuit of how to think about the angle of view (field of view, or FOV) for the camera and the x, y, z arguments for the camera position, I came across this helpful video summary of the role of field of view in game design (a close enough analog to answer my questions for my 2D visualization). Along with this field-of-view tutorial for photographers, which I also found helpful (if maybe a touch cheesy ;), these two resources helped me get a sense of how to choose a field of view for my project and what happens with very wide or very narrow fields of view (measured in degrees out of 360). The best results are a mix of what feels like a natural field of vision for a human, depending on the distance of the screen or projection from their face, and are keenly tied to the relative scale of things in the foreground versus the background: wider fields of view make the background look smaller, while narrower fields of view magnify it (similar to, though not as pronounced as, the effect of an orthographic camera). I hope you find this as helpful as I did!
UPDATE 3 - Further reading
For anyone zesting for more detail about camera specifications in a range of uses, you may find chapter 13 of Computer Graphics Principles and Practice as useful as I have for addressing my above questions and much more.
UPDATE 4 - Considerations for the z dimension in the Orthographic camera
As I've continued my project, I decided to use the orthographic camera so that I could increment the z dimensions of my sprites to avoid z-fighting, yet not have them appear to recede progressively into the distance. Conversely, if I want a sprite to appear to recede into the distance, I can simply adjust its size. However, today I ran across a silly mistake that I want to point out to save others the same trouble. Although the orthographic camera does not shrink objects as they get more distant, take care that there is still a back frustum plane past which objects are culled from view. Today I accidentally incremented the z values of several of my objects past that plane and could not figure out why they were not showing up on screen. It is easy to forget this factor about the z coordinate while working with the orthographic camera.
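For instance, with a window-sized orthographic frustum (numbers assumed for illustration):

var camera = new THREE.OrthographicCamera(
    SCREEN_WIDTH / -2, SCREEN_WIDTH / 2,   // left, right
    SCREEN_HEIGHT / 2, SCREEN_HEIGHT / -2, // top, bottom
    0.1, 1000                              // near, far
);
camera.position.set(0, 0, 500);
// Visible z runs from 500 - 0.1 = 499.9 down to 500 - 1000 = -500.
// A sprite nudged to z = -600 is culled, even though the orthographic
// camera would have drawn it at exactly the same size as one at z = 0.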
What is your goal? If you do not need perspective distortion, use the orthographic camera.
Also just check the documentation:
https://threejs.org/docs/#api/en/cameras/PerspectiveCamera
View angle/field of view is self-explanatory; if you don't know what it is, read up on it.
http://www.incgamers.com/wp-content/uploads/2013/05/6a0120a85dcdae970b0120a86d9495970b.png
Concerning the x, y, and z values: this depends on the size of your plane and its distance from the camera. You can either set the camera position, or set the plane's position and keep the camera at (0,0,0).
Just imagine a plane in 3D space. You can calculate the position of the camera from the size of your plane, or just go by trial and error...
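For instance, a sketch of that calculation for the perspective case (planeHeight is a stand-in for your plane's height in world units):

// Place the camera so the plane exactly fills the viewport vertically.
var vFovRad = VIEW_ANGLE * Math.PI / 180;
var distance = (planeHeight / 2) / Math.tan(vFovRad / 2);
camera.position.set(0, 0, distance);
camera.lookAt(scene.position);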
For using the orthographic camera, see this post:
Three.js - Orthographic camera