I have a problem with my SVG map.
I use jVectorMap to create a custom map, and I need to write the name of every field in the center of that field.
Here is an example: JSFiddle Example (zoom in on the right side to see the text).
I can find the center of every field with this function:
jvm.Map.prototype.getRegionCentroid = function(region){
    if (typeof region == "string")
        region = this.regions[region.toUpperCase()];
    var bbox = region.element.shape.getBBox(),
        xcoord = (bbox.x + bbox.width/2),
        ycoord = (bbox.y + bbox.height/2);
    return [xcoord, ycoord];
};
but my problem is that I want to rotate the text to align it with the top edge of the corresponding field.
I've tried the getCTM() function, but it always gives me the same values for every field.
How can I find the right rotation angle for every field?
Thank you to all!
Looks like squeamish ossifrage has beaten me to this one, and what they've said would be exactly my approach too...
Solution
Essentially find the longest line segment in each region's path and then orient your text to align with that line segment whilst trying to ensure that the text doesn't end up upside-down(!)
Example
Here's a sample jsfiddle
In the $(document).ready() function of the fiddle I'm adding labels to all the regions, but you will note that some of the regions have centroids that fall outside their area, or non-straight edges that cause problems - modifying your map slightly might be the easiest fix.
Explanation
Here are the 3 functions I've written to demonstrate the principles:
addOrientatedLabel(regionName) - adds a label to the named region of the map.
getAngleInDegreesFromRegion(regionName) - gets the angle of the longest edge of the region
getLengthSquared(startPt,endPt) - gets the squared length of a line segment (more efficient than getting the actual length).
addOrientatedLabel() places the label at the centroid using a translate transform and rotates the text to the same angle as the longest line segment in the region. In SVG transforms are resolved right to left so:
transform="translate(x,y) rotate(45)"
is interpreted as rotate first, then translate. This ordering is important!
It also uses text-anchor="middle" and dominant-baseline="middle" as explained by squeamish ossifrage. Failing to do this will cause the text to be misaligned within its region.
getAngleInDegreesFromRegion() is where all the work is done. It gets the SVG path of the region with a selector, then loops through every point in the path. Whenever a point is found that is part of a line segment (rather than a Move-To or other instruction), it calculates the squared length of that segment. If that squared length is the longest so far, it stores the segment's details. I use the squared length because it saves performing a square root operation (it's only used for comparison purposes, so the squared length is fine).
Note that I initialise the longestLine data to a horizontal one so that if the region has no line segments at all you'll at least get horizontal text.
Once we have the longest line, I calculate its angle relative to the x axis with Math.atan2, and convert it from radians to degrees for SVG with (angle / Math.PI) * 180. The final trick is to identify if the angle will rotate the text upside down, and if so, to rotate another 180 degrees.
Note
I've not used SVG before so my SVG code might not be optimal, but it's tested and it works on all regions that consist mostly of straight line segments - you will need to add error checking for a production application, of course!
Code
function addOrientatedLabel(regionName) {
    var angleInDegrees = getAngleInDegreesFromRegion(regionName);
    var map = $('#world-map').vectorMap('get', 'mapObject');
    var coords = map.getRegionCentroid(regionName);

    var svg = document.getElementsByTagName('g')[0]; //Get svg element
    var newText = document.createElementNS("http://www.w3.org/2000/svg","text");
    newText.setAttribute("font-size","4");
    newText.setAttribute("text-anchor","middle");
    newText.setAttribute("dominant-baseline","middle");
    newText.setAttribute('font-family', 'MyriadPro-It');
    newText.setAttribute('transform', 'translate(' + coords[0] + ',' + coords[1] + ') rotate(' + angleInDegrees + ')');

    var textNode = document.createTextNode(regionName);
    newText.appendChild(textNode);
    svg.appendChild(newText);
}
Here's my method to find the longest line segment in a given map region path:
function getAngleInDegreesFromRegion(regionName) {
    var svgPath = document.getElementById(regionName);

    /* longest edge will default to a horizontal line */
    /* (in case the shape is degenerate): */
    var longestLine = { startPt: {x: 0, y: 0}, endPt: {x: 100, y: 0}, lengthSquared: 0 };

    /* loop through all the points looking for the longest line segment: */
    for (var i = 0; i < svgPath.pathSegList.numberOfItems - 1; i++) {
        var pt0 = svgPath.pathSegList.getItem(i);
        var pt1 = svgPath.pathSegList.getItem(i + 1);
        if (pt1.pathSegType == SVGPathSeg.PATHSEG_LINETO_ABS) {
            var lengthSquared = getLengthSquared(pt0, pt1);
            if (lengthSquared > longestLine.lengthSquared) {
                longestLine = { startPt: pt0, endPt: pt1, lengthSquared: lengthSquared };
            }
        } /* end if dealing with line segment */
    } /* end loop through all pts in svg path */

    /* determine angle of longest line segment relative to x axis */
    var dY = longestLine.startPt.y - longestLine.endPt.y;
    var dX = longestLine.startPt.x - longestLine.endPt.x;
    var angleInDegrees = (Math.atan2(dY, dX) / Math.PI * 180.0);

    /* if text would be upside down, rotate through 180 degrees: */
    if ((angleInDegrees > 90 && angleInDegrees < 270) || (angleInDegrees < -90 && angleInDegrees > -270)) {
        angleInDegrees += 180;
        angleInDegrees %= 360;
    }
    return angleInDegrees;
}
Note that my getAngleInDegreesFromRegion() method only considers straight lines created with the PATHSEG_LINETO_ABS SVG command... You'll need more functionality to handle regions that don't consist of straight lines. You could approximate by treating curves as straight lines with:
if (pt1.pathSegType != SVGPathSeg.PATHSEG_MOVETO_ABS )
But there will be some corner cases, so modifying your map data might be the easiest approach.
And finally, here's the obligatory squared distance method for completeness:
function getLengthSquared(startPt, endPt) {
    return ((startPt.x - endPt.x) * (startPt.x - endPt.x)) + ((startPt.y - endPt.y) * (startPt.y - endPt.y));
}
Hope that is clear enough to help get you started.
Querying getCTM() won't help. All that gives you is a transformation matrix for the shape's coordinate system (which, as you discovered, is the same for every shape). To get a shape's vertex coordinates, you'll have to examine the contents of region.element.shape.pathSegList.
This can get messy. Although a lot of the shapes are drawn using simple "move-to" and "line-to" commands with absolute coordinates, some use relative coordinates and other types of line. I noticed at least one cubic curve. It might be worth looking for an SVG vertex manipulation library to make life easier.
But in general terms, what you need to do is fetch the list of coordinates for each shape (converting relative coordinates to absolute where necessary) and find the segment with the greatest length. Be aware that this may be the implicit closing segment between the path's last and first points. You can easily find the orientation of a segment from Math.atan2(y_end-y_start,x_end-x_start).
When rotating text, make life easy for yourself by using a <g> element with a transform=translate() attribute to move the coordinate origin to where the text needs to be. Then the text won't shoot off into the distance when you add a transform=rotate() attribute to it. Also, use text-anchor="middle" and dominant-baseline="middle" to centre the text where you want it.
Your code should end up looking something like this:
var svg = document.getElementsByTagName('g')[0]; //Get svg element
var shape_angle = get_orientation_of_longest_segment(region.element.shape.pathSegList); //Write this function
var newGroup = document.createElementNS("http://www.w3.org/2000/svg","g");
var newText = document.createElementNS("http://www.w3.org/2000/svg","text");
newGroup.setAttribute("transform", "translate("+coords[0]+","+coords[1]+")");
newText.setAttribute("font-size","4");
newText.setAttribute("text-anchor","middle");
newText.setAttribute("dominant-baseline","middle");
newText.setAttribute("transform","rotate("+shape_angle+")");
newText.setAttribute('font-family', 'MyriadPro-It');
var textNode = document.createTextNode("C1902");
newText.appendChild(textNode);
newGroup.appendChild(newText);
svg.appendChild(newGroup);
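For what it's worth, here is a rough sketch of the longest-segment part. It is an illustration only: it assumes you have already flattened the path into an array of absolute {x, y} vertices (that flattening - handling relative commands and curves - is the messy part mentioned above and is left out here):

// Sketch only: `points` is an array of absolute {x, y} vertices for one region.
function getOrientationOfLongestSegment(points) {
    var longest = { dx: 1, dy: 0, lengthSq: 0 }; // default: horizontal
    for (var i = 0; i < points.length; i++) {
        // wrap around so the implicit closing segment (last -> first) is included
        var a = points[i],
            b = points[(i + 1) % points.length],
            dx = b.x - a.x,
            dy = b.y - a.y,
            lengthSq = dx * dx + dy * dy;
        if (lengthSq > longest.lengthSq) {
            longest = { dx: dx, dy: dy, lengthSq: lengthSq };
        }
    }
    var angle = Math.atan2(longest.dy, longest.dx) * 180 / Math.PI;
    // keep the label readable (never upside-down)
    if (angle > 90) angle -= 180;
    if (angle < -90) angle += 180;
    return angle;
}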
I am trying to recreate this visualization using p5.js. I have some trouble understanding how to create the coordinates for the new points and plot them on my canvas.
The data is a series of negative and positive values that need to be plotted below and above an x-axis respectively (from left to right). This is a sample:
"character","roll_value"
"Daphne Blake",0
"Daphne Blake",-1
"Daphne Blake",-1
"Daphne Blake",-5
"Daphne Blake",-3
"Daphne Blake",2
So I know that I have to map the values between a certain negative and positive height, so I've demarcated those heights as follows:
let maxNegativeHeight = sketch.height - 120;
let maxPositiveHeight = sketch.height/4;
For mapping the input I thought of creating a new function called mapToGraph which takes in the roll_value, the old X position, max height and min height. This would map the old values to a new incremented X position and a vertical height:
const mapToGraph = (value, oldXPos, maxHeight, minHeight) => {
    const newXPos = oldXPos + 10;
    const newYPos = sketch.map(value, 0, maxHeight, minHeight, maxHeight);
    return [newXPos, newYPos];
};
In my draw function, I am drawing the points as follows:
sketch.draw = () => {
    for (let i = 0; i < data.getRowCount(); i++) {
        let character = data.getString(i, "character");
        if (character === 'Daphne Blake') {
            console.log(character);
            // Draw a horizontal line in the middle of the canvas
            sketch.stroke('#F18F01');
            sketch.line(0, sketch.height/2, sketch.width, sketch.height/2);
            // Plot the data points
            let value = data.getNum(i, "roll_value");
            let [newX, newY] = mapToGraph(value, 0, maxNegativeHeight, maxPositiveHeight);
            console.log(newX, newY);
            sketch.strokeWeight(0.5);
            sketch.point(newX, newY);
        }
    }
};
However, this does not plot any points. My console.log shows me that I am not processing the numbers correctly, since all of them look like this:
10 -3
cardThree.js:46 Daphne Blake
cardThree.js:55 10 -4
cardThree.js:46 Daphne Blake
cardThree.js:55 10 -4
cardThree.js:46 Daphne Blake
What am I doing wrong? How can I fix this and plot the points like the visualization I linked above?
Here is the full code of what I've tried (live link to editor sketch).
This is the full data
In your code, newX is always 10, since you always pass 0 as the second argument to mapToGraph. Additionally, the vertical displacement is always very small and often negative. Since you are using newY directly rather than relative to the middle of the screen, many of the points end up off-screen.
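A minimal sketch of one way to address both issues (the names and the assumed roll_value range are illustrative, not taken from your sketch): keep a running x position across the plotted rows, and map each roll_value onto the band between your two height limits, so negative values land below the midline and positive values above it:

sketch.draw = () => {
    // horizontal midline
    sketch.stroke('#F18F01');
    sketch.line(0, sketch.height / 2, sketch.width, sketch.height / 2);

    sketch.stroke(0);
    sketch.strokeWeight(3);

    let xPos = 0;
    const rollMax = 10; // assumed bound on |roll_value|; use the real max of your data
    for (let i = 0; i < data.getRowCount(); i++) {
        if (data.getString(i, "character") !== 'Daphne Blake') continue;

        const value = data.getNum(i, "roll_value");
        xPos += 10; // advance for every plotted row instead of always starting from 0
        const yPos = sketch.map(value, -rollMax, rollMax, maxNegativeHeight, maxPositiveHeight);
        sketch.point(xPos, yPos);
    }
    sketch.noLoop(); // the data is static, so one pass is enough
};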
There's no simple curved-line tool in turf.js, nor is there an easy way to do it in mapbox (so far as I can see), so I've created a workaround based on this answer in this thread.
However, the curve it creates isn't very smooth or satisfying, and it has an inconsistent hump depending on the angle/length.
Ideally, I'd like an arc that is always in a nice, rounded form.
I'm taking the start and end points and drawing a line between them. I then offset the midpoint by distance / 5 and apply a bearing, and then connect up the three points with a turf.bezierSpline.
const start = [parseFloat(originAirport.longitude), parseFloat(originAirport.latitude)];
const end = [
    parseFloat(destinationAirport.longitude),
    parseFloat(destinationAirport.latitude),
];
const distance = turf.distance(start, end, { units: 'miles' });
const midpoint = turf.midpoint(start, end);
const destination = turf.destination(midpoint, distance / 5, 20, { units: 'miles' });
// curvedLine gets rendered to the page
const curvedLine = turf.bezierSpline(
    turf.lineString([start, destination.geometry.coordinates, end]),
);
Desired curvature:
Well, that question was created a very long time ago, but I recently encountered this problem.
If anybody is still wondering - this code is good in general, but you've missed one detail. We can't use the hardcoded bearing value 20 in the turf.destination method, because it's incorrect for most cases. We need our moved midpoint to be right at the middle of our geometry, so we have to find the right angle.
const bearing = turf.bearing(start, end);
Then - if we want our arc to be on the left side of our line, we need to add 90 degrees to our calculated bearing. If on the right side - subtract 90 degrees, so:
const leftSideArc = bearing + 90 > 180 ? -180 + (bearing + 90 - 180) : bearing + 90;
NOTE: The bearing is a value between -180 and 180 degrees. Our value has to be wrapped properly in case it exceeds this range.
And then we can pass our bearing to the destination method:
const destination = turf.destination(midpoint, distance / 5, leftSideArc, { units: 'miles' });
Now we have a perfect arc.
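Putting the whole thing together (a sketch assembled from the snippets above; use a -90 degree offset instead if you want the arc on the right side of the line):

const start = [parseFloat(originAirport.longitude), parseFloat(originAirport.latitude)];
const end = [
    parseFloat(destinationAirport.longitude),
    parseFloat(destinationAirport.latitude),
];

const distance = turf.distance(start, end, { units: 'miles' });
const midpoint = turf.midpoint(start, end);
const bearing = turf.bearing(start, end);

// offset the midpoint perpendicular to the line, keeping the bearing within [-180, 180]
const leftSideArc = bearing + 90 > 180 ? -180 + (bearing + 90 - 180) : bearing + 90;

const destination = turf.destination(midpoint, distance / 5, leftSideArc, { units: 'miles' });

// curvedLine gets rendered to the page
const curvedLine = turf.bezierSpline(
    turf.lineString([start, destination.geometry.coordinates, end]),
);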
I need some advice:
When we click on the second tooth from right to left, the unexpected result is that the upper teeth are colored:
I will describe step by step what the code does.
1) We get the coordinates where the user clicked on the canvas:
coordinates relative to the canvas 212.90908813476562 247.5454559326172
The previous values make sense because we've clicked quite a bit down and to the right.
2) We normalize the coordinates (into the -1 to 1 range):
normalizedCoordinates x,y -0.03223141756924719 -0.12520661787553267
The previous numbers make sense because the point is slightly below and to the left of the center:
The code which gets and prints the coordinates relative to the canvas and finally normalizes them is:
getNormalizedCoordinatesBetween0And1(event, canvas) {
let coordinatesVector = new THREE.Vector2();
console.log('coordinates relative to the canvas',
event.clientX - canvas.getBoundingClientRect().left,
event.clientY - canvas.getBoundingClientRect().top);
coordinatesVector.x = ( (event.clientX - canvas.getBoundingClientRect().left) /
canvas.width ) * 2 - 1;
coordinatesVector.y = -( (event.clientY - canvas.getBoundingClientRect().top) /
canvas.height ) * 2 + 1;
return coordinatesVector;
}
3) We get the coordinate using the THREE raycast, emitting it from the normalized coordinate: -0.03223141756924719 -0.12520661787553267
The coordinates given by THREE, which have their origin at the center, are:
Coordinates obtained using THREE Raycast -3.1634989936945734 -12.288972670909427
If we observe again the canvas' dimensions and the image position:
It may make sense that the THREE coordinate is negative in x and negative in y, which tells us that the clicked tooth is slightly below and to the left of the center.
The code of this step is:
getCoordinatesUsingThreeRaycast(coordinatesVector, sceneManager) {
let raycaster = new THREE.Raycaster();
raycaster.setFromCamera(coordinatesVector, sceneManager.camera);
const three = raycaster.intersectObjects(sceneManager.scene.children);
if (three[0]) {
console.warn('Coordinates obtained using THREE Raycast',
three[0].point.x, three[0].point.y);
coordinatesVector.x = three[0].point.x;
coordinatesVector.y = three[0].point.y;
return coordinatesVector;
}
}
4) From the coordinates given by THREE, we move the origin to the top left so that they become an IJ coordinate system. The math is:
IJx = abs(coordinatesVector.x + (slice.canvas.width / 2)) = abs(-3 + (352 / 2)) = abs(-3 + 176) = 173
IJy = abs(coordinatesVector.y - (slice.canvas.height / 2)) = abs(-12 - (204 / 2)) = abs(-12 - 102) = 114
And our program gives us: 172.83 and 114.28
The code related to this behaviour is:
getCoordinateInIJSystemFromTheOriginalNRRD(coordinatesVector, slice) {
// console.error('Coordenada::IJ from NRRD');
let IJx = Math.abs(coordinatesVector.x + (slice.canvas.width / 2));
console.log('Coordinate::IJx', IJx);
console.log('Coordinate from THREE::', coordinatesVector.x);
console.log('slice.canvas.width ', slice.canvas.width);
let IJy = Math.abs(coordinatesVector.y - (slice.canvas.height / 2));
console.log('Coordinate::IJy', IJy);
console.log('Coordinate from THREE::', coordinatesVector.y);
console.log('slice.canvas.height', slice.canvas.height);
return {IJx, IJy}
}
5) Our fifth step is to scale the point we got from the visible NRRD, (173, 114), to fit the dimensions of the original big NRRD.
This is because the visible image is a small representation of the original image, and our program holds the data of the big image:
If we get the coordinate by hand:
i = round(IJx * slice.canvasBuffer.width / slice.canvas.width) = round(172.83 * 1000 / 352) = round(172.83 * 2.84) = 491
j = round(IJy * slice.canvasBuffer.height / slice.canvas.height) = round(114.28 * 580 / 204) = round(114.28 * 2.84) = 325
And our program gives us: 491, 325
Coordinates after converting IJ to OriginalNrrd reference system 491 325
The code to get the point in the original NRRD:
/**
 * @member {Function} getStructuresAtPosition Returns a list of structures from the labels map stacked at this position
 * @memberof THREE.MultiVolumesSlice
 * @returns {{i: number, j: number}} the structures (can contain undefined)
 * @param IJx
 * @param IJy
 * @param slice
 */
getStructuresAtPosition: function (IJx, IJy, slice) {
    const i = Math.round(IJx * slice.canvasBuffer.width / slice.canvas.width);
    const j = Math.round(IJy * slice.canvasBuffer.height / slice.canvas.height);
    console.log('slice.canvasBuffer.width', slice.canvasBuffer.width);
    console.log('slice.canvasBuffer.height', slice.canvasBuffer.height);
    console.log('slice.canvas.width', slice.canvas.width);
    console.log('slice.canvas.height', slice.canvas.height);
    console.warn("Scale coordinates to fit in the original NRRD coordinate system:::",
        'convert translated x, y:::', IJx, IJy, 'to new i, j', i, j);
    if (i >= slice.iLength || i < 0 || j >= slice.jLength || j < 0) {
        return undefined;
    }
    return {i, j};
},
6) Finally we use the calculated coordinates, (491, 325), to get the index of the clicked segment; in this case our program gives us 15, which means that the clicked area has a gray level of 15.
Therefore we can see that if we click on the second tooth (from left to right) of the lower jaw, for some reason the program thinks we are clicking on the teeth of the upper part:
Could you please help me find out why the clicked and coloured segment is offset from the point where you click? Thank you for your time.
EDIT: Additional information:
Thank you @manthrax for your information.
I think I have discovered the problem: the zoom, and the different dimensions of the visible image and the actual image.
For example, with the default distance between the camera and the NRRD, 300, we have (i,j) = (863,502).
With a distance of 249, the coordinate (i,j) is (906,515).
Finally, if we get as close as a distance of 163, the coordinate (i,j) is (932,519).
I clicked on the bottom-left corner of the visible image.
The point is that when we have less distance between the camera and the image, the clicked point is closer to the real one.
The real one is: (1000,580)
And we are clicking on:
Could you help me please?
This is a common problem. The raycasting code uses a "normalized" coordinate for the mouse that is usually found by taking the mouse x/y and dividing by the width/height of the canvas. But if your code is mistakenly using different dimensions than the actual canvas width/height to get those coordinates, then you get these kinds of problems - for instance, picking that works fine in the upper-left corner but gets progressively more "off" the further down and right you go.
Unfortunately, without a working repro for your problem, I can't show you how to fix it, but I bet dollars to donuts the problem is in using canvas.getBoundingClientRect() to compute your mouse coordinates instead of using the regular canvas.width and canvas.height.
canvas.getBoundingClientRect() is going to give you back a rectangle that is not equal to the canvas width and height, but the raycaster is expecting coordinates minus the canvas's clientLeft/clientTop, divided by canvas.width and canvas.height.
You have to make sure that the mouse calculation comes out with 0,0 at the upper-left corner of the canvas and 1,1 at the bottom right.
https://developer.mozilla.org/en-US/docs/Web/API/Element/getBoundingClientRect
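For reference, here is a minimal sketch of a mouse-to-NDC conversion in which the offset and the divisor come from the same rectangle, so they can never disagree (this is a generic pattern and an assumption about the setup, not the poster's exact code):

// Sketch only: rect.width/rect.height describe the canvas's on-screen box,
// the same box that rect.left/rect.top refer to.
function getNormalizedDeviceCoordinates(event, canvas) {
    const rect = canvas.getBoundingClientRect();
    const ndc = new THREE.Vector2();
    ndc.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
    ndc.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;
    return ndc; // (-1,-1) bottom-left ... (1,1) top-right, as the Raycaster expects
}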
Another problem I see in your screenshots that may eventually bite you...
Your canvases are a fixed 400x400 size, but part of the canvas is hidden by its container.
If you ever try to implement things like zooming, you'll find that the zoom will want to zoom around the canvas center, not the center of the container, so it will look wrong.
Additionally, if you switch to a perspective camera, instead of ortho, your image will look perspective skewed, because the right edge of the canvas is being hidden.
Generally I think it's good practice to always make the canvas position:absolute; with width:100%; height:100%; padding: 0px; because at the end of the day it is actually a virtual viewport into a 3D scene.
Just setting those params on your canvas might even fix your mouse offset problem, since it might cause the canvas to not be hidden off the screen edge, thereby making its dimensions and those reported by getBoundingClientRect() the same.
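As a rough sketch of that setup (renderer and camera are assumptions about the scene code; the aspect update only matters for a PerspectiveCamera):

// Sketch only: let the canvas fill its container...
const canvas = renderer.domElement;
Object.assign(canvas.style, { position: 'absolute', width: '100%', height: '100%', padding: '0px' });

// ...and keep the drawing buffer (and camera) in sync with the on-screen size.
window.addEventListener('resize', () => {
    renderer.setSize(canvas.clientWidth, canvas.clientHeight, false); // false: leave the CSS size alone
    camera.aspect = canvas.clientWidth / canvas.clientHeight;
    camera.updateProjectionMatrix();
});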
Intro
Hey!
Some weeks ago, I did a small demo for a JS challenge. This demo was displaying a landscape based on a procedurally-generated heightmap. To display it as a 3D surface, I was evaluating the interpolated height of random points (Monte-Carlo rendering) then projecting them.
At that time, I was already aware of some glitches in my method, but I was waiting for the challenge to be over to seek some help. I'm counting on you. :)
Problem
So the main error I get can be seen in the following screenshot:
Screenshot - Interpolation Error? http://code.aldream.net/img/interpolation-error.jpg
As you can see in the center, some points seem like floating above the peninsula, forming like a less-dense relief. It is especially obvious with the sea behind, because of the color difference, even though the problem seems global.
Current method
Surface interpolation
To evaluate the height of each point of the surface, I'm using triangulation + linear interpolation with barycentric coordinates, ie:
I find in which square ABCD my point (x, y) is, with A = (X,Y), B = (X+1, Y), C = (X, Y+1) and D = (X+1, Y+1), X and Y being the truncated value of x, y. (each point is mapped to my heightmap)
I estimate in which triangle - ABD or ACD - my point is, using the condition: isInABD = dx > dy with dx, dy the decimal part of x, y.
I evaluate the height of my point using linear interpolation (see the sketch right after this list):
if in ABD, height = h(B) + [h(A) - h(B)] * (1-dx) + [h(D) - h(B)] * dy
if in ACD, height = h(C) + [h(A) - h(C)] * (1-dy) + [h(D) - h(C)] * dx, with h(X) height from the map.
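A compact sketch of that lookup, in case it helps to see it in isolation (the helper name and the hmap[X * dim + Y] index layout are illustrative; the full rendering code below uses an equivalent flattened indexing):

// Sketch only: assumes 0 <= x, y < dim - 1.
function interpolateHeight(hmap, dim, x, y) {
    var X = Math.floor(x), Y = Math.floor(y),
        dx = x - X, dy = y - Y,
        hA = hmap[X * dim + Y],             // A = (X,   Y)
        hB = hmap[(X + 1) * dim + Y],       // B = (X+1, Y)
        hC = hmap[X * dim + Y + 1],         // C = (X,   Y+1)
        hD = hmap[(X + 1) * dim + Y + 1];   // D = (X+1, Y+1)
    if (dx > dy) { // point lies in triangle ABD
        return hB + (hA - hB) * (1 - dx) + (hD - hB) * dy;
    } else {       // point lies in triangle ACD
        return hC + (hA - hC) * (1 - dy) + (hD - hC) * dx;
    }
}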
Displaying
To display the point, I just convert (x, y, height) into world coordinates and project the vertex (using a simple perspective projection with yaw and pitch angles). I use a zBuffer, which I keep updated, to decide whether or not to draw the obtained pixel.
Attempts
My impression is that for some points, I get a wrong interpolated height. I thus tried to search for errors or non-covered boundary cases in my implementation of the triangulation + linear interpolation. But if there are any, I can't spot them.
I use the projection in other demos, so I don't think the problem comes from here. As for the zBuffering, I can't see how it could be related...
I'm running out of luck here... Any hints are most welcome!
Thank for your attention, and have a nice day!
Annexe
JsFiddle - Demo
Here is a jsFiddle http://jsfiddle.net/PWqDL/ of the whole slightly simplified demo, for those who want to tweak around...
JsFiddle - Small test for the interpolation
As I was writing down this question, I got an idea to have a better look at the results of my interpolation. I implemented a simple test in which I use a 2x2 matrix containing some hue values, and I interpolate the intermediate colors before displaying them in the canvas.
Here is the jsFiddle: http://jsfiddle.net/y2K7n/
Alas, the results seem to match the expected behavior for the kind of "triangular" interpolation I'm doing, so I'm definitely running out of ideas.
Code sample
And here is the simplified most-probably-faulty part of my JS code describing my rendering method (but the language doesn't matter much here I think), given a square heightmap "displayHeightMap" of size (dim x dim) for a landscape of size (SIZE x SIZE):
for (k = 0; k < nbMonteCarloPointsByFrame; k++) {
// Random float indices:
var i = Math.random() * (dim-1),
j = Math.random() * (dim-1),
// Integer part (truncated):
iTronc = i|0,
jTronc = j|0,
indTronc = iTronc*dim + jTronc,
// Decimal part:
iDec = i%1,
jDec = j%1,
// Now we want to interpolate the value of the float point from the surrounding points of our map. So we want to find in which triangle our point is, to evaluate the weighted average of the 3 corresponding points.
// We already know that our point is in the square defined by the map points (iTronc, jTronc), (iTronc+1, jTronc), (iTronc, jTronc+1), (iTronc+1, jTronc+1).
// If we split this square into two rectangle using the diagonale [(iTronc, jTronc), (iTronc+1, jTronc+1)], we can deduce in which triangle is our point with the following condition:
whichTriangle = iDec < jDec, // ie "are we above or under the line j = jTronc + distanceBetweenLandscapePoints - (i-iTronc)"
indThirdPointOfTriangle = indTronc +dim*whichTriangle +1-whichTriangle, // Top-right point of the square or bottom left, depending on which triangle we are in.
// Interpolating the point's height:
deltaHeight1 = (displayHeightMap[indTronc] - displayHeightMap[indThirdPointOfTriangle]),
deltaHeight2 = (displayHeightMap[indTronc+dim+1] - displayHeightMap[indThirdPointOfTriangle]),
height = displayHeightMap[indThirdPointOfTriangle] + deltaHeight1 * (1-(whichTriangle? jDec:iDec)) + deltaHeight2 * (!whichTriangle? jDec:iDec),
posX = i*distanceBetweenLandscapePoints - SIZE/2,
posY = j*distanceBetweenLandscapePoints - SIZE/2,
posZ = height - WATER_LVL;
// 3D Projection:
var temp1 = cosYaw*(posY - camPosY) - sinYaw*(posX - camPosX),
temp2 = posZ - camPosZ,
dX = (sinYaw*(posY - camPosY) + cosYaw*(posX - camPosX)),
dY = sinPitch*temp2 + cosPitch*temp1,
dZ = cosPitch*temp2 - sinPitch*temp1,
pixelY = dY / dZ * minDim + canvasHeight,
pixelX = dX / dZ * minDim + canvasWidth,
canvasInd = pixelY * canvasWidth*2 + pixelX;
if (!zBuffer[canvasInd] || (dZ < zBuffer[canvasInd])) { // We check if what we want to draw will be visible or behind another element. If it will be visible (for now), we draw it and update the zBuffer:
zBuffer[canvasInd] = dZ;
// Color:
a.fillStyle = a.strokeStyle = EvaluateColor(displayHeightMap, indTronc); // Personal tweaking.
a.fillRect(pixelX, pixelY, 1, 1);
}
}
Got it. And it was as stupid a mistake as expected: I was reinitializing my zBuffer each frame...
Usually that's what you should do, but in my case, each frame (i.e. each call of my Painting() function) adds details to the same picture (i.e. a static scene drawn from a constant, given point of view).
If I reset my zBuffer at each call of Painting(), I lose the depth information of the points drawn during the previous calls. The corresponding pixels are thus considered as blank, and will be re-painted for any projected points, without any regard for their depth.
Note: Without reinitialization, the zBuffer gets quite big. Another fix I should have made earlier was thus to convert the pixel positions of the projected points (and thus the indices of the zBuffer) into integer values:
pixelY = dY / dZ * minDim + canvasHeight +.5|0,
pixelX = dX / dZ * minDim + canvasWidth +.5|0,
canvasInd = pixelY * canvasWidth*2 + pixelX;
if (dZ > 0 && (!zBuffer[canvasInd] || (dZ < zBuffer[canvasInd]))) {
// We draw the point and update the zBuffer.
}
Fun fact
If the glitches appeared more obvious for relief with the sea behind it, it wasn't only because of the color difference, but because the hilly parts of the landscape need many more points to be rendered than flat areas (like the sea), given their stretched surface.
My simplistic Monte-Carlo sampling of points doesn't take this characteristic into account, which means that at each call of Painting(), the sea gains statistically more density than the lands.
Because of the reinitialization of the zBuffer each frame, the sea was thus "winning the fight" in the picture's areas where mountains should have covered it (explaining the "ghostly mountains" effect there).
Corrected JsFiddle
Corrected version for those interested: http://jsfiddle.net/W997s/1/
I am working on a project that requires end users to be able to draw in the browser, much like svg-edit, and send SVG data to the server for processing.
I've started playing with the Raphael framework and it seems promising.
Currently I am trying to implement a pencil or free-line type tool. Basically I am just drawing a new path based on a percentage of mouse movement in the drawing area. However, in the end this is going to create a massive amount of paths to deal with.
Is it possible to shorten an SVG path by converting mouse movement to use curve and line paths instead of line segments?
Below is draft code I whipped up to do the job ...
// Drawing area size const
var SVG_WIDTH = 620;
var SVG_HEIGHT = 420;
// Compute movement required for new line
var xMove = Math.round(SVG_WIDTH * .01);
var yMove = Math.round(SVG_HEIGHT * .01);
// Min must be 1
var X_MOVE = xMove ? xMove : 1;
var Y_MOVE = yMove ? yMove : 1;
// Coords
var start, end, coords = null;
var paperOffset = null;
var mouseDown = false;
// Get drawing area coords
function toDrawCoords(coords) {
    return {
        x: coords.clientX - paperOffset.left,
        y: coords.clientY - paperOffset.top
    };
}
$(document).ready(function() {
    // Get area offset
    paperOffset = $("#paper").offset();
    paperOffset.left = Math.round(paperOffset.left);
    paperOffset.top = Math.round(paperOffset.top);

    // Init area
    var paper = Raphael("paper", 620, 420);

    // Create draw area
    var drawArea = paper.rect(0, 0, 619, 419, 10);
    drawArea.attr({fill: "#666"});

    // EVENTS
    drawArea.mousedown(function (event) {
        mouseDown = true;
        start = toDrawCoords(event);
        $("#startCoords").text("Start coords: " + $.dump(start));
    });

    drawArea.mouseup(function (event) {
        mouseDown = false;
        end = toDrawCoords(event);
        $("#endCoords").text("End coords: " + $.dump(end));
        buildJSON(paper);
    });

    drawArea.mousemove(function (event) {
        coords = toDrawCoords(event);
        $("#paperCoords").text("Paper coords: " + $.dump(coords));
        // if down and we've at least moved min percentage requirements
        if (mouseDown) {
            var xMovement = Math.abs(start.x - coords.x);
            var yMovement = Math.abs(start.y - coords.y);
            if (xMovement > X_MOVE || yMovement > Y_MOVE) {
                paper.path("M{0} {1}L{2} {3}", start.x, start.y, coords.x, coords.y);
                start = coords;
            }
        }
    });
});
Have a look at the Douglas-Peucker algorithm to simplify your line.
I don't know of any JavaScript implementation (though googling directed me to forums for Google Maps developers), but here's a Tcl implementation that is easy enough to understand: http://wiki.tcl.tk/27610
And here's a Wikipedia article explaining the algorithm (along with pseudocode): http://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
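Since the algorithm is short, here is a rough JavaScript sketch of Ramer-Douglas-Peucker over an array of {x, y} points (the names and the epsilon parameter are illustrative; epsilon is the maximum allowed deviation, in pixels, from the simplified line):

// Sketch only: returns a simplified copy of `points`, keeping any point that
// deviates from the straight start-end line by more than `epsilon`.
function simplifyRDP(points, epsilon) {
    if (points.length < 3) return points.slice();
    var first = points[0], last = points[points.length - 1];
    var maxDist = 0, index = 0;
    for (var i = 1; i < points.length - 1; i++) {
        var d = perpendicularDistance(points[i], first, last);
        if (d > maxDist) { maxDist = d; index = i; }
    }
    if (maxDist <= epsilon) return [first, last];
    // keep the farthest point and recurse on both halves
    var left = simplifyRDP(points.slice(0, index + 1), epsilon);
    var right = simplifyRDP(points.slice(index), epsilon);
    return left.slice(0, -1).concat(right);
}

function perpendicularDistance(p, a, b) {
    var dx = b.x - a.x, dy = b.y - a.y;
    var len = Math.sqrt(dx * dx + dy * dy);
    if (len === 0) return Math.sqrt((p.x - a.x) * (p.x - a.x) + (p.y - a.y) * (p.y - a.y));
    // area of the parallelogram spanned by (a->b) and (a->p), divided by the base length |ab|
    return Math.abs(dx * (a.y - p.y) - (a.x - p.x) * dy) / len;
}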
Here is a drawing tool which works with the iPhone or the mouse
http://irunmywebsite.com/raphael/drawtool2.php
However, also look at Dave's "game utility" at
http://irunmywebsite.com/raphael/raphaelsource.php which generates path data as you draw.
I'm working on something similar. I found a way to incrementally add path commands by a little bypass of the Raphael API as outlined in my answer here. In the modern browsers I tested on, this performs reasonably well but the degree to which your lines appear smooth depends on how fast the mousemove handler can work.
You might try my method for drawing paths using line segments and then perform smoothing after the initial jagged path is drawn (or as you go somehow), by pruning the coordinates using Ramer–Douglas–Peucker as slebetman suggested, and converting the remaining Ls to SVG curve commands.
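One common way to turn the remaining points into curve commands (a generic sketch, not tied to Raphael's API) is to draw quadratic Beziers through the midpoints of consecutive segments, using each kept vertex as the control point:

// Sketch only: `points` is the simplified array of {x, y} vertices.
function toSmoothPath(points) {
    if (points.length < 3) {
        return "M" + points[0].x + " " + points[0].y +
               "L" + points[points.length - 1].x + " " + points[points.length - 1].y;
    }
    var d = "M" + points[0].x + " " + points[0].y;
    for (var i = 1; i < points.length - 1; i++) {
        // curve towards the midpoint of the next segment, controlled by the vertex itself
        var midX = (points[i].x + points[i + 1].x) / 2,
            midY = (points[i].y + points[i + 1].y) / 2;
        d += "Q" + points[i].x + " " + points[i].y + " " + midX + " " + midY;
    }
    var last = points[points.length - 1];
    d += "L" + last.x + " " + last.y;
    return d;
}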
I have a similar problem: I draw using mouse down and the M command, and I then save that path to a database on the server. The issue I am having has to do with resolution. I have a background image where the users draw lines and shapes over parts of the image, but if the image is displayed at one resolution and the paths are created at that resolution, then reopened at a different, perhaps lower, resolution, my paths get shifted and are not sized correctly. I guess what I am asking is: is there a way to draw a path over an image and make sure that, no matter the size of the underlying image, the path remains proportionally correct?