I have a slider/gauge/bar component to select a value. As the slider is dragged, it updates a value to give the user feedback:
currentValue = Math.round((componentHeight - draggedValue) / componentHeight * vesselVolume);
The slider gives its value from the top position of an absolutely positioned div inside an overflow: hidden parent div, giving the illusion of a gauge that can be raised and lowered.
This all works fine.
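For context, the feedback above is driven by the inner div's offset inside its clipped parent, roughly like the following sketch, where the element ids and the drag plumbing are placeholders and only the formula matches the component.

// Rough sketch of the existing wiring; ids and drag handling are placeholders.
const gauge = document.getElementById('gauge');      // the overflow: hidden parent
const fill = document.getElementById('gauge-fill');  // the absolutely positioned child

function updateValue(vesselVolume) {
  const componentHeight = gauge.clientHeight;
  const draggedValue = fill.offsetTop;               // top position of the inner div
  return Math.round((componentHeight - draggedValue) / componentHeight * vesselVolume);
}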
The problem
I'm using the gauge to estimate the volume of liquid in a vessel. If the vessel has level, straight edges, fine, but it may not. A barrel, for example, will bulge out then back in.
I can use a mask over my gauge to simulate the vessel being filled, but it made me wonder: is the task of extracting some sort of filled-volume figure for the vessel shown ridiculously complicated, or is it approachable?
My current approach
I'll be using my own SVG graphics which could provide simple line data. I'm looking into whether there's a clipping or boolean operation possible on SVGs in vanilla javascript where I can clip the graphic where the top of the gauge reaches.
I presume I could then calculate the area of the resulting graphic and present a volume of liquid, provided I'm given the volume of liquid that would fit in the entire vessel.
I'm providing no depth information but it's currently safe for me to assume all vessels are symmetrical around the Y axis.
I'm a bit clueless at math involving shapes; is this task feasible? As a compromise, I would be OK with using crude formulas that emulate various vessels if they approximately simulate edges diverging at a certain angle up to a certain height, etc.
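For what it's worth, since the vessels are symmetric around the Y axis, the clipped-area idea can be pushed a step further and treated as a solid of revolution: sample the vessel's outline into height/half-width pairs (for an SVG path, getPointAtLength() can do the sampling) and sum thin discs up to the fill height. A minimal sketch, where the profile format and the function names are only assumptions for illustration:

// Approximate the filled volume as a stack of thin discs (a solid of
// revolution around the Y axis): V ≈ Σ π·r²·dy up to the fill height.
// `profile` is an array of { y, r } pairs sorted by ascending y
// (height from the bottom, half-width of the vessel at that height).
function filledVolume(profile, fillHeight) {
  let volume = 0;
  for (let i = 1; i < profile.length; i++) {
    const y0 = profile[i - 1].y;
    const y1 = Math.min(profile[i].y, fillHeight);
    if (y1 <= y0) break;                                // slice is above the liquid level
    const r = (profile[i - 1].r + profile[i].r) / 2;    // mean radius of the slice
    volume += Math.PI * r * r * (y1 - y0);
  }
  return volume;
}

// Scale to real units using the known capacity of the whole vessel.
function litresAtHeight(profile, fillHeight, totalHeight, vesselVolume) {
  const full = filledVolume(profile, totalHeight);
  return full === 0 ? 0 : vesselVolume * filledVolume(profile, fillHeight) / full;
}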
Related
I have built an ASP.NET MVC application which renders a floor plan in SVG after a user selects a specific building and floor. Using Timmywil's panzoom library, I've added the ability for users to move the floor plan around and zoom in or out on it. The floor plan is initially rendered in the center of the screen and the zoom is adjusted so the whole floor plan is visible.
Via a button, users can save the floor plan in PDF format. After this button click, the SVG tag with the paths inside it is used as the input for the conversion. However, only the initial situation is saved, since the viewBox and coordinates are still the same. I've used Timmywil's samples to demonstrate my problem. Below is the initial situation. So the floor plan (in this case a lion) is nicely centered and fully visible inside the div (the black bordered box):
In the situation where a floor plan is really large and a user would only like to have a certain part of it saved (pictures 2 and 3), it should 'crop' the SVG, but I'm having trouble finding the numbers and making the calculations to achieve this. I guess it has to be done by changing the viewBox values.
Could someone help me out?
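For what it's worth, one way to get those numbers is to map the corners of the visible box from screen coordinates back into the SVG's user coordinates via the inverse of getScreenCTM(), and write the result into the viewBox of a clone used for the export. A rough sketch, assuming the panzoom transform sits on a <g> inside the <svg> (as in Timmywil's examples); the function name and arguments are placeholders:

function cropViewBoxToVisibleArea(svgEl, viewportDiv) {
  const inner = svgEl.querySelector('g');           // the panned/zoomed group
  const toLocal = inner.getScreenCTM().inverse();   // screen px -> SVG user units

  const rect = viewportDiv.getBoundingClientRect();
  const corner = (x, y) => {
    const p = svgEl.createSVGPoint();
    p.x = x;
    p.y = y;
    return p.matrixTransform(toLocal);
  };
  const tl = corner(rect.left, rect.top);
  const br = corner(rect.right, rect.bottom);

  // Write the visible region into the viewBox of a clone used for the export,
  // so the on-screen SVG stays untouched.
  const clone = svgEl.cloneNode(true);
  clone.querySelector('g').removeAttribute('transform');  // drop the pan/zoom transform
  clone.setAttribute('viewBox', tl.x + ' ' + tl.y + ' ' + (br.x - tl.x) + ' ' + (br.y - tl.y));
  return clone;                                            // feed this to the SVG-to-PDF conversion
}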
Currently I have an image that needs to be manipulated so it matches the same scale, position, and rotation as a template.
The grey rectangle with a circle in the middle is the template.
The orange rectangle and circle represent the user's input. It needs to be rotated, scaled and aligned so it matches the grey one. I'm currently stumped on how to proceed. I have no code other than the following.
function align_image()
{
// clever transform alignment code here
}
Bad dog, no biscuit!
The process of aligning the images would normally be done by manual input and judged by eye. I'm hoping to automate this step and align the image to its respective size and position, but having left the comfort and safety of the Photoshop DOM, I'm not sure how to proceed, or even whether this is a trivial matter or one best left alone. The project is web based, currently using JavaScript and three.js.
So if anyone can give me some pointers, I'd appreciate it.
I don't code JavaScript, so I can only talk about the algorithm. Generally the best tool for registration is a feature-matching method (SIFT, SURF, ...), but your image is not the kind that has strong features. Now, if you're always dealing with rectangles and circles in your images, find the "edges" of the rectangle with a Hough transform, compute the angle of those edges (lines), then rotate the image by that angle in the opposite direction.
Then, with the help of a Hough circle detector, find the centers of the circles in the middle of the images, calculate the distance between them, and move the target rectangle to the source circle's position. After the movement, by comparing the radii of the circles, you can resize the target image to make it match the source rectangle.
All of these are conveniently doable with OpenCV.
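Since the project is web based, here is a sketch of the final alignment step in JavaScript, assuming the rectangle angle, circle centers and radii have already been detected (for example with opencv.js' Hough functions) and the result is drawn onto a 2D canvas; the function and parameter names are placeholders, not an existing API:

// `detected` describes the user's image: { angle, center: { x, y }, radius }.
// `template` describes the template:     {        center: { x, y }, radius }.
function alignImage(ctx, userImage, detected, template) {
  const scale = template.radius / detected.radius;

  ctx.save();
  // Move to the template circle's center, undo the rotation, match the size,
  // then draw the user's image so its own circle center lands on the origin.
  ctx.translate(template.center.x, template.center.y);
  ctx.rotate(-detected.angle);
  ctx.scale(scale, scale);
  ctx.drawImage(userImage, -detected.center.x, -detected.center.y);
  ctx.restore();
}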
Edit 2: I found what I was looking for here. The near and far planes bound the visible volume of the camera.
I have a PerspectiveCamera and multiple sets of points (SphereGeometry) in the space, and I want to define a bounding box according to what is visible in the field of view, so that I only need to load the points that are currently visible according to the camera settings.
My problem when defining the bounding box is that I don't have a reference point between the camera position and what I'm currently looking at. For example, if the camera is at position v1(20, 20, 20), and the mid point of my set of points is at v2(5, 5, 5), then I can define the initial point of my bounding box as this mid point v2(5, 5, 5), and I could create a bounding box (square) of size (v1-v2)/2. But once I start moving the camera I lose this initial reference point, and I don't know how to obtain a distance parameter to define a proper bounding box of "what is currently visible".
Every time I move the camera, I need to know the size and position of the bounding box so that it represents "what is currently visible" in the field of view as accurately as possible.
One possible solution could be to translate the initial point v2(5, 5, 5) along with every camera movement; is there any way to do this or something similar?
Edit:
So far I've been able to replicate @TheJim01's code in my component, but I think this solution is not what I was looking for, or I'm not able to understand it properly. I'm adding some images below to better explain what is going on.
In the next image I have a set of points of various colors; the 8 brown points represent each of the worldBoxCorners before performing a worldToLocal transformation, which fits the whole space as it should.
Next, I render worldBoxCorners after the transformation. I do not understand how the bounding box is moved or how I can use it.
What I'm looking for is, for example when I zoom in on the greener zone of the space, to obtain a bounding box like the one in the last image.
The point is to only load the points that are within the field of view, but for that I need to define a bounding box. When I say loading the points, I mean that only the points that are visible should be loaded from the backend (not talking about rendering here).
This solution provides a perfect bounding box for the currently loaded space. What I'm missing is how to define a new bounding box for a given loaded space when the camera parameters change (zoom in/out or rotations).
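For reference, one way to express "what is currently visible" with three.js is to build a THREE.Frustum from the camera's matrices and either test each point against it, or unproject the eight NDC corners to get a world-space Box3 that can be sent to the backend as the query box. A sketch with illustrative function names:

import * as THREE from 'three';

// Build the camera's frustum in world space.
// (Older three.js versions call setFromProjectionMatrix "setFromMatrix".)
function getViewFrustum(camera) {
  camera.updateMatrixWorld();
  const projScreen = new THREE.Matrix4()
    .multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
  return new THREE.Frustum().setFromProjectionMatrix(projScreen);
}

// Exact per-point test: keep only the points inside the field of view.
function visiblePoints(points, camera) {            // points: array of THREE.Vector3
  const frustum = getViewFrustum(camera);
  return points.filter(p => frustum.containsPoint(p));
}

// Axis-aligned box around the visible volume (coarser, but easy to send to a
// backend query): the NDC corners (±1, ±1, ±1) unproject to the eight
// world-space corners of the frustum.
function visibleBoundingBox(camera) {
  const box = new THREE.Box3();
  for (const x of [-1, 1])
    for (const y of [-1, 1])
      for (const z of [-1, 1])
        box.expandByPoint(new THREE.Vector3(x, y, z).unproject(camera));
  return box;
}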
I am working on an API that uses shapes (including irregular shapes) to build websites. My problem is that I want to provide a div that can serve as a background to irregular shapes.
However, to do this I would need to know the maximum area the object is taking up, by having its maximum height and width.
I am aware that element.getBoundingClientRect does this, but my roadblock is that it does not consider any pseudo-elements, which is how most of these shapes are made.
I know that when working with the CSS transform property, especially using scale, the browser knows to resize the whole shape, including the pseudo-element that makes up the shape.
It also uses the border-box coordinate system.
However, the browser does not provide this information, as it comes from the user agent.
My main question is: how do I access the dimensions the user agent computes for any given element, or how do I find the proper dimensions via something like getBoundingClientRect that considers an element's pseudo-elements?
My shapes can be found in the attached links.
https://michaelodumosu57.github.io/sigma-xi-mu
https://css-tricks.com/examples/ShapesOfCSS/
I can't afford to use any other method to create my shapes because I have limited time on the project, but I do know that the browser can provide me with the information I am looking for.
Yes, I have answered my own question. What you want to do is scale the shape down to a very small size (since transform: scale() works perfectly) and place it in a grid box (this could be a small div as well). You would run document.elementsFromPoint(x, y) (notice the pluralization) on every point in the div containing your shrunken irregular shape, and from there you can find the height and width of its bounding box from the difference between the extremes of the respective coordinate values. To center it, you must place your irregular shape in the bounding box of the backdrop with the re-scaled dimensions (remember you shrank your irregular shape to find them) and have the margin of the inner shape set to zero. This works only if your real element (not the pseudo-element) is leftmost on the screen. The remaining question is how to position your background when your irregular shape is not centering properly inside it.
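A sketch of that point-scanning idea, assuming a container element to limit the scan and a step size that trades accuracy for speed (both are placeholders); document.elementsFromPoint reports the originating element for pixels painted by its ::before/::after boxes, which is what makes this work:

function measureShape(container, shape, step = 2) {
  const area = container.getBoundingClientRect();
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;

  for (let x = area.left; x <= area.right; x += step) {
    for (let y = area.top; y <= area.bottom; y += step) {
      if (document.elementsFromPoint(x, y).includes(shape)) {
        minX = Math.min(minX, x);
        maxX = Math.max(maxX, x);
        minY = Math.min(minY, y);
        maxY = Math.max(maxY, y);
      }
    }
  }
  if (minX === Infinity) return null;                // no hits: shape not visible
  return { left: minX, top: minY, width: maxX - minX, height: maxY - minY };
}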
You can use document.elementFromPoint(x, y) to get the element that exists at a specific point, but I have not tested it with all kinds of shapes.
I'm trying to place a circle at 50% of the width of the paper using RaphaelJS, is this possible without first doing the math (.5 * pixel width)? I want to simply be able to place an element at 50% of its container's width, is this even possible with the current Raphael API?
Raphael claims to be able to draw vector graphics, and yet it seems everything in the API is pixel-based. How can you draw a vector image using pixels? That seems 100% contradictory.
Likewise, as I understand vector art, it retains the same proportions regardless of actual size. Is this not one of the primary reasons to use vector graphics, that it doesn't matter whether it's for screen, print or whatever, it will always be the same scale? Thus, I'm further confused by the need for something like ScaleRaphael; such functionality just seems part and parcel of creating vector graphics. But perhaps I just don't understand vector graphics?
It just doesn't seem like an image that is created with absolute pixel dimensions and unable to be resized natively qualifies as a vector image. That, or I'm missing a very large chunk of the API here.
Thanks in advance for any help. I've attempted to post this twice now to the RaphaelJS Google Group, but I guess they are censoring it for whatever reason because I've been waiting for it to appear since last week and still no sign of my posts (although other new posts are showing up).
Using pixel values to define shape positions/dimensions does not make it a non-vector shape. Take, for instance, Adobe Illustrator: it is a vector product, and yet the properties for each object still show the positions and dimensions in pixels.
A basic explanation of vector graphics would be like this, taking a rectangle as an example:
A vector rectangle will have a number of properties such as x, y,
width and height. These properties can be specified in pixels. The
difference with vector (as opposed to raster) is that these pixel
properties are only used to determine how the shape is drawn. So when
the display is rendered (refreshed), the "system" can redraw the shape
using the same properties without affecting the quality of the resize.
A raster image however will hold a lot more information (i.e. the
exact value of each pixel used to form the shape/rectangle)
If the word "pixel" makes you think it is contradictory, just remember that everything on a computer screen is rendered in pixels. Even vector graphics have to be converted to "raster" at some point in the graphics pipeline.
If you are worried about having to use a calculation (0.5 * width) then just remember that something has to do that calculation, and personally I would happily handle this simple step myself.
After all that, you should just calculate the size and position in pixels based on the size of your "paper" element and feed those values into Raphael to create the shape.
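In practice that just means reading the container's size and doing the percentage arithmetic before handing pixel values to Raphael; something along these lines, where the container id is a placeholder:

var container = document.getElementById('canvas');  // placeholder id for your own container
var width = container.clientWidth;
var height = container.clientHeight;

var paper = Raphael(container, width, height);

// A circle centered at 50% of the paper's width and height,
// with a radius of 10% of the width.
var circle = paper.circle(width * 0.5, height * 0.5, width * 0.1);
circle.attr({ fill: '#4a90d9', stroke: 'none' });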