I'm having trouble picking points in a 3D scene in WebGL. I found an example where an ID is attached to each point through the RGBA components. It works, but the point that gets picked is usually far from the one I clicked. I have a point cloud of about 1.5 million points. readPixels() seems to return slightly different RGBA components within the same click area, so the decoded ID differs too. Maybe the problem is that the shaders use the [0,1] interval for color components while readPixels() returns unsigned bytes, or maybe too many points fall within one pixel. In the picture, the red point is where I clicked and the green point is where the ID (RGBA components) was found. I tried setting a bigger gl_PointSize, for example 30 pixels, in the ID RGBA layer; then I could find the right point, but it is not accurate.
In the picture below the position was correct because readPixels() returned the correct RGBA components (ID).
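For reference, a minimal sketch of the read-back side of this technique (assumptions: mouseX/mouseY are the click coordinates in canvas pixels, and the ID pass encodes each byte of the ID as byte/255.0 per channel, which round-trips exactly through the unsigned bytes readPixels() returns). Note that an antialiased/multisampled framebuffer can blend neighboring IDs at point edges into values that decode to a completely different ID, so the ID pass should be rendered without antialiasing:

var pixel = new Uint8Array(4);
// readPixels() uses a bottom-left origin, so flip the mouse Y coordinate.
gl.readPixels(mouseX, gl.drawingBufferHeight - mouseY - 1, 1, 1,
              gl.RGBA, gl.UNSIGNED_BYTE, pixel);
// Decode the 32-bit ID (assumes r = id & 255, g = (id >> 8) & 255, etc.);
// multiplication avoids the sign problems of << on the high byte.
var id = pixel[0] + pixel[1] * 256 + pixel[2] * 65536 + pixel[3] * 16777216;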
I have a set of data I'm pulling from an API and I want to visualize it in a web application with a color representation. My initial approach was to build a heatmap in vue.js, but I want the colors to interpolate between the different values. The problem with a heatmap in my case is that it won't fill the gaps between the points; it just draws a circle at each one.
Basically, say I have five points at different x and y locations on a canvas, each point representing a color. I want to interpolate the color between the points and fill the canvas based on the point colors.
I have a few ideas for doing this:
- three.js, where I guess I could achieve this with a shader lerping over each point
- is it possible to create this with CSS?
- D3.js?
- or another library
Thanks for your input
Unfortunately I cannot find an image to visualize it exactly, but here is an attempt (sorry for the bad quality, I only have Paint on this computer).
Every point is represented by a white circle. Each dot holds a color I want to lerp to and from; this example would have seven colors, one per circle. The fill then interpolates between the points.
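For reference, here is a rough sketch of what I mean by filling the canvas from the points. It uses inverse-distance weighting on a plain 2D canvas; the points, colors, and the canvas id "c" are made-up examples, and the same math could presumably run per-fragment in a three.js shader instead:

var canvas = document.getElementById('c');
var ctx = canvas.getContext('2d');
var points = [
  { x: 50,  y: 60,  color: [255, 0, 0] },
  { x: 300, y: 80,  color: [0, 0, 255] },
  { x: 180, y: 220, color: [0, 255, 0] }
];

var img = ctx.createImageData(canvas.width, canvas.height);
for (var y = 0; y < canvas.height; y++) {
  for (var x = 0; x < canvas.width; x++) {
    var r = 0, g = 0, b = 0, wSum = 0;
    for (var k = 0; k < points.length; k++) {
      var p = points[k];
      var dx = x - p.x, dy = y - p.y;
      var w = 1 / (dx * dx + dy * dy + 1e-6); // inverse-distance weight
      r += w * p.color[0]; g += w * p.color[1]; b += w * p.color[2];
      wSum += w;
    }
    var i = (y * canvas.width + x) * 4;
    img.data[i]     = r / wSum; // Uint8ClampedArray rounds and clamps for us
    img.data[i + 1] = g / wSum;
    img.data[i + 2] = b / wSum;
    img.data[i + 3] = 255;
  }
}
ctx.putImageData(img, 0, 0);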
How can I get a real measurement between two points in a 360° picture of an interior using the a-frame.io framework?
We tried converting a-frame's unit system to centimeters, took two points whose real dimensions were known, and set that as the default, estimating that any other points we took would be relatively correct, but they aren't.
Any other suggestions or a formula that could help?
Thank you
That can't work, at least not unless you have a depth image as well. What you can easily get from a single 360° image are two angles, pan and tilt. If you add a third value, the distance from the camera (also called depth), you have so-called spherical coordinates, which can be converted to Cartesian coordinates (x, y, z).
Without knowing that distance you can only reconstruct a ray, not a single point. You need one more piece of information to determine where along that ray the point lies, and that is exactly what you need to know for any measurement in the image.
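For completeness, the conversion looks like this (a sketch, assuming pan and tilt in radians, a y-up coordinate system, and tilt measured from the horizon; axis conventions vary):

function sphericalToCartesian(pan, tilt, depth) {
  return {
    x: depth * Math.cos(tilt) * Math.sin(pan),
    y: depth * Math.sin(tilt),
    z: depth * Math.cos(tilt) * Math.cos(pan)
  };
}

// With two such points, the real distance is simply the Euclidean norm:
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}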
I have a PointCloud called "cloud" centered at (0,0,0) with around 1000 vertices. The vertices' positions are updated using a vertex shader. I now would like to print out each vertex's position to the console every few seconds from the render loop.
If I inspect the point cloud's vertices using
console.log(cloud.geometry.vertices[100])
in the render loop, I always get a vertex with all zeros, which I can see from all the particles zipping around is not true.
I looked at this similar post: How to get the absolute position of a vertex in three.js? and tried
var vector = cloud.geometry.vertices[100].clone();
vector.applyMatrix4( cloud.matrixWorld );
which still gave me a vector of all zeros (for all of the vertices). Using scene.matrixWorld in place of cloud.matrixWorld did not work either.
I also tried using cloud.updateMatrixWorld() and scene.updateMatrixWorld(). Lastly, I tried setting cloud.geometry.vertexNeedsUpdate = true.
So far, all of the advice I've seen suggests the above, but none of it works for me. I have a feeling the array just isn't being updated with the correct values, but I don't know why. Any ideas?
That is because the vertices never change their properties on the CPU, only on the GPU (in the vertex shader).
This is a one-way ticket: the communication goes CPU -> GPU, not the other way around.
If you need to work with the vertex positions, you have to do the calculations on the CPU and send the vertex batch back to the GPU every time something changes, as in the sketch below.
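A sketch of what that looks like, assuming the legacy THREE.Geometry API (the question's cloud.geometry.vertices) and an existing renderer, scene, and camera:

function animate(time) {
  requestAnimationFrame(animate);

  for (var i = 0; i < cloud.geometry.vertices.length; i++) {
    var v = cloud.geometry.vertices[i];
    v.y = Math.sin(time * 0.001 + i) * 10; // placeholder motion, done on the CPU
  }
  cloud.geometry.verticesNeedUpdate = true; // re-upload the batch to the GPU

  // Now the logged vertex reflects its real position:
  // console.log(cloud.geometry.vertices[100]);
  renderer.render(scene, camera);
}
requestAnimationFrame(animate);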
After setting a certain colour as the fillStyle of the canvas and drawing a rectangle with fillRect, the colour of the rectangle sometimes differs slightly from the one provided (getImageData returns different values; usually one of them is lower by 1). It seems to happen only with rgba colours (not with rgb), but I actually do need the alpha channel.
I've made a simple test suite on js fiddle for anyone who would like to look into this issue:
http://jsfiddle.net/LaPdP/1/
Any ideas on why this is happening and whether there is a way to fix it? If it at least always happened on the same value, I would just bypass it by increasing it by 1, but it seems quite random to me.
Update from 2017: I had completely forgotten about this answer, but the cause is related to premultiplying the data when getting/setting it. As the numbers in a bitmap are always integers, there are rounding errors, since premultiplying naturally tends to produce non-integer numbers.
There is unfortunately no convenient way to fix this.
Just to clarify about gamma below: gamma (via a gamma setting or an ICC profile) will affect images directly, but for shapes drawn directly to the canvas this should not be a problem per se, as only the display gamma is adjusted on top, not the data itself.
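A small demo of the rounding (not a fix): an alpha below 1 forces a premultiply on write and an un-premultiply on read-back, and the two roundings to integers can shift a channel by 1 depending on the value:

var ctx = document.createElement('canvas').getContext('2d');
ctx.fillStyle = 'rgba(120, 120, 120, 0.5)';
ctx.fillRect(0, 0, 1, 1);

var d = ctx.getImageData(0, 0, 1, 1).data;
console.log(d[0], d[1], d[2], d[3]); // channels may read back as 119-121, not exactly 120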
Old answer:
What you are experiencing is probably the result of only a partial implementation of the color and gamma correction section of the canvas standard.
The reason for the varying color values, at least where images containing ICC profiles are concerned, is the built-in color and gamma correction in browsers:
4.8.11.1 Color spaces and color correction
The canvas APIs must perform color correction at only two points: when
rendering images with their own gamma correction and color space
information onto the canvas, to convert the image to the color space
used by the canvas (e.g. using the 2D Context's drawImage() method
with an HTMLImageElement object), and when rendering the actual canvas
bitmap to the output device.
Source: w3.org
However, it also states in section 4.8.11.1:
Note: Thus, in the 2D context, colors used to draw shapes onto the
canvas will exactly match colors obtained through the getImageData()
method.
As the spec is a work in progress at the time of writing, my guess is that browsers have a "lazy" implementation of color and gamma correction which currently also affects shapes, or that all color information from the canvas is corrected to the display profile, as in the latter point of the first quote. This will perhaps not change until the standard becomes final.
I'm trying to place a circle at 50% of the width of the paper using RaphaelJS. Is this possible without first doing the math (0.5 * pixel width)? I want to simply place an element at 50% of its container's width. Is this even possible with the current Raphael API?
Raphael claims to be able to draw vector graphics, and yet everything in the API seems to be pixel-based. How can you draw a vector image using pixels? That seems 100% contradictory.
Likewise, as I understand vector art, it looks the same regardless of actual size. Isn't this one of the primary reasons to use vector graphics, that it doesn't matter whether it's for screen, print, or whatever, it will always scale cleanly? Thus, I'm further confused by the need for something like ScaleRaphael; such functionality seems part and parcel of creating vector graphics. But perhaps I just don't understand vector graphics?
It just doesn't seem like an image created with absolute pixel dimensions, and unable to be resized natively, qualifies as a vector image. That, or I'm missing a very large chunk of the API here.
Thanks in advance for any help. I've attempted to post this twice now to the RaphaelJS Google Group, but I guess they are censoring it for whatever reason because I've been waiting for it to appear since last week and still no sign of my posts (although other new posts are showing up).
Using pixel values to define shape positions and dimensions does not make it a non-vector shape. Take Adobe Illustrator, for instance: it is a vector product, and yet the properties for each object still show positions and dimensions in pixels.
A basic explanation of vector graphics would be like this, taking a rectangle as an example:
A vector rectangle will have a number of properties such as x, y, width and height. These properties can be specified in pixels. The difference with vector (as opposed to raster) is that these pixel properties are only used to determine how the shape is drawn. So when the display is rendered (refreshed), the "system" can redraw the shape from the same properties without affecting quality at the new size. A raster image, however, holds far more information (i.e. the exact value of each pixel used to form the shape/rectangle).
If the word "pixel" makes you think it is contradictory, just remember that everything on a computer screen is rendered in pixels. Even vector graphics have to be converted to raster at some point in the graphics pipeline.
If you are worried about having to do a calculation (0.5 * width), just remember that something has to do that calculation, and personally I would happily handle this simple step myself.
After all that: calculate the size and position in pixels based on the size of your "paper" element and feed those values into Raphael when creating the shape.
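A sketch under the question's constraints (Raphael takes pixel values, so compute the percentage yourself; the "container" element id, the sizes, and the circle radius here are made-up examples):

var width = 400, height = 300;
var paper = Raphael('container', width, height);
var circle = paper.circle(width * 0.5, height * 0.5, 20); // 50% of the paper's width
circle.attr({ fill: '#f00' });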