Convert lat/lon to pixel coordinate? - javascript

I'm trying to convert a lat/lon pair to a pixel coordinate. I have found this Mercator projection code, but I don't understand it. What are the factor, x_adj and y_adj variables?
When I run the code without those constants, my lat/lon pair is not on my map and the x and y pixel coordinates are not what I want.
function get_xy(lat, lng)
{
    var mapWidth = 2058;
    var mapHeight = 1746;
    var factor = .404;
    var x_adj = -391;
    var y_adj = 37;

    var x = (mapWidth * (180 + lng) / 360) % mapWidth + (mapWidth / 2);

    var latRad = lat * Math.PI / 180;
    var mercN = Math.log(Math.tan((Math.PI / 4) + (latRad / 2)));
    var y = (mapHeight / 2) - (mapWidth * mercN / (2 * Math.PI));

    return { x: x * factor + x_adj, y: y * factor + y_adj };
}
Source: http://webdesignerwall.com/tutorials/interactive-world-javascript-map/comment-page-1?replytocom=103225
[2] Convert latitude/longitude point to pixels (x, y) on a Mercator projection

Where did those variables come from?
These variables are chosen to match the computed coordinates to the background image of the map. If the projection parameters of the map were known, they could be computed. But I believe it is far more likely that they were obtained through trial and error.
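If you do want to compute them rather than guess, one option is to fit the raw projection to reference points whose pixel positions on the background image you already know. A rough sketch of that idea (the reference locations and pixel values below are made-up placeholders):
// Rough sketch: derive factor, x_adj and y_adj by fitting the raw projection
// to two reference points whose on-image pixel positions are known.
function rawProject(lat, lng, mapWidth, mapHeight) {
    var x = (mapWidth * (180 + lng) / 360) % mapWidth + (mapWidth / 2);
    var latRad = lat * Math.PI / 180;
    var mercN = Math.log(Math.tan((Math.PI / 4) + (latRad / 2)));
    var y = (mapHeight / 2) - (mapWidth * mercN / (2 * Math.PI));
    return { x: x, y: y };
}

// Two places you can locate on the image by hand (hypothetical values):
var refA = { lat: 51.5, lng: -0.1, px: 820, py: 310 };
var refB = { lat: -33.9, lng: 151.2, px: 1890, py: 1210 };

var a = rawProject(refA.lat, refA.lng, 2058, 1746);
var b = rawProject(refB.lat, refB.lng, 2058, 1746);

// One shared scale factor plus per-axis offsets, matching the original code:
var factor = (refB.px - refA.px) / (b.x - a.x);
var x_adj = refA.px - factor * a.x;
var y_adj = refA.py - factor * a.y;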
How to compute a Mercator projection
If you want a more general method to describe the section of the world a given (not transverse) Mercator map shows, you can use this code:
// This map would show Germany:
$south = deg2rad(47.2);
$north = deg2rad(55.2);
$west = deg2rad(5.8);
$east = deg2rad(15.2);

// This also controls the aspect ratio of the projection
$width = 1000;
$height = 1500;

// Formula for the Mercator projection y coordinate:
function mercY($lat) { return log(tan($lat/2 + M_PI/4)); }

// Some constants to relate the chosen area to screen coordinates
$ymin = mercY($south);
$ymax = mercY($north);
$xFactor = $width/($east - $west);
$yFactor = $height/($ymax - $ymin);

function mapProject($lat, $lon) { // both in radians, use deg2rad if necessary
    global $xFactor, $yFactor, $west, $ymax;
    $x = $lon;
    $y = mercY($lat);
    $x = ($x - $west)*$xFactor;
    $y = ($ymax - $y)*$yFactor; // y points south
    return array($x, $y);
}
A demo run of this code is available at http://ideone.com/05OhG6.
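Since the question is about JavaScript, here is a rough, untested port of the same sketch (same variable names, angles in radians):
// Rough JavaScript port of the PHP sketch above.
var south = 47.2 * Math.PI / 180, north = 55.2 * Math.PI / 180;
var west  =  5.8 * Math.PI / 180, east  = 15.2 * Math.PI / 180;
var width = 1000, height = 1500;

function mercY(latRad) { return Math.log(Math.tan(latRad / 2 + Math.PI / 4)); }

var ymin = mercY(south);
var ymax = mercY(north);
var xFactor = width / (east - west);
var yFactor = height / (ymax - ymin);

function mapProject(latRad, lonRad) { // both in radians
    var x = (lonRad - west) * xFactor;
    var y = (ymax - mercY(latRad)) * yFactor; // y grows southwards
    return { x: x, y: y };
}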
Regarding aspect ratio
A setup with $xFactor != $yFactor produces a kind of stretched Mercator projection, which is no longer conformal (angle-preserving). If you want a true Mercator projection, you can omit any one of the first six variable assignments, i.e. those defining the bounding box or those describing the size of the resulting map, and compute it so that $xFactor == $yFactor. But since the choice of which to omit is somewhat arbitrary, I feel that the code above is the most symmetric way to describe things.
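Continuing the JavaScript sketch above, one way to do that is to derive the height from the chosen width instead of picking it independently (sketch, reusing the same variable names):
// Derive height from width so that xFactor == yFactor (conformal Mercator).
var height = width * (ymax - ymin) / (east - west);
var yFactor = height / (ymax - ymin); // now equal to xFactor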

Here's how to get the returned variables X and Y from the function you've found...
var xy=get_xy(56,34);
var X=xy.x;
var Y=xy.y;
Now X and Y contain the coordinates.
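If you then want to overlay a marker on the map image, a minimal sketch (assuming the image sits inside a hypothetical position: relative container with id map-container, and the marker is styled via CSS):
// Minimal sketch: place an absolutely positioned marker over the map image.
var xy = get_xy(56, 34);
var marker = document.createElement('div');
marker.className = 'marker';           // style this as a dot or pin in CSS
marker.style.position = 'absolute';
marker.style.left = xy.x + 'px';
marker.style.top = xy.y + 'px';
document.getElementById('map-container').appendChild(marker);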

Related

Matter.js calculating force needed

I'm trying to apply a force to an object so that it moves at the angle my mouse position makes relative to the object.
I have the angle
targetAngle = Matter.Vector.angle(myBody.pos, mouse.position);
Now I need to apply a force, to get the body to move along that angle.
What do I put in the values below for the applyForce method?
// applyForce(body, position, force)
Body.applyForce(myBody, {
x : ??, y : ??
},{
x:??, y: ?? // how do I derive this force??
});
What do I put in the x and y values here to get the body to move along the angle between the mouse and the body?
To apply a force that moves your object in that direction, take the sine and cosine of the angle in radians. Pass the object's own position as the position vector so that no torque (rotation) is applied.
var targetAngle = Matter.Vector.angle(myBody.position, mouse.position);
var force = 10;

Body.applyForce(myBody, myBody.position, {
    x: Math.cos(targetAngle) * force,
    y: Math.sin(targetAngle) * force
});
Also if you need it, the docs on applyForce() are here.
(I understand this question is old, I'm more or less doing this for anyone who stumbles across it)
You can rely on the Matter.Vector module and use it to subtract, normalise and multiply position vectors:
var force = 10;
var deltaVector = Matter.Vector.sub(mouse.position, myBody.position);
var normalizedDelta = Matter.Vector.normalise(deltaVector);
var forceVector = Matter.Vector.mult(normalizedDelta, force);
Body.applyForce(myBody, myBody.position, forceVector);
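For completeness, a rough sketch of wiring this up so the force is applied toward the current mouse position on every tick (assumes an existing Matter.js setup with engine, render and myBody; the force magnitude is a placeholder and usually needs to be quite small):
// Rough sketch: push myBody toward the mouse on every update tick.
var mouse = Matter.Mouse.create(render.canvas);

Matter.Events.on(engine, 'beforeUpdate', function () {
    var delta = Matter.Vector.sub(mouse.position, myBody.position);
    var direction = Matter.Vector.normalise(delta);
    var force = Matter.Vector.mult(direction, 0.005); // small magnitudes work best
    Matter.Body.applyForce(myBody, myBody.position, force);
});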

How do I fix this image (pixel by pixel) distortion issue?

I am attempting to distort an image displayed within a canvas so it looks like a "planet". However, I am struggling to find a way to deal with a distortion issue. The only solution that comes to mind is to find a way to reduce the radiusDistance variable the bigger it gets. That said, I am unsure how to achieve this. Any suggestions?
Below is the math and objects I am currently using to achieve this:
var polarArray = [];

var polar = function(a, r, c) { // polar object, similar to the pixel object
    this.a = a;     // angle
    this.r = r;     // radius (distance)
    this.color = c; // color, stored as an object containing r, g, b, a variables
};

loopAllPixels(function(loop) { // loop through every pixel, stored in a pixel array
    var pixel = loop.pixel;
    /* each pixel object is structured like this:
    pixel {
        x,
        y,
        color {
            r,
            g,
            b,
            a
        }
    }
    */
    var angle = pixel.x / 360 * Math.PI;
    var radiusDistance = pixel.y / Math.PI;
    polarArray.push(new polar(angle, radiusDistance, pixel.color)); // store polar coordinate pixel + colour
    pixel.color = new colorRGBA(255, 255, 255, 255); // set background as white
});

for (var i = 0; i < polarArray.length; i++) { // loop through polarArray (polar coordinate pixels + colour)
    var p = polarArray[i]; // polar pixel
    var x = (p.r * Math.cos(p.a)) + (canvas.width / 2);  // x coordinate
    var y = (p.r * Math.sin(p.a)) + (canvas.height / 2); // y coordinate
    if (setpixel(x, y, p.color) == true) { // set pixel at location
        continue;
    }
    break;
}
updatePixelsToContext(); // draw to canvas
And here is the effect it currently produces (note that I flip the image horizontally before applying it to the canvas, and in this example I set the background to a magenta-ish colour for better clarity of the issue):
Note:
The warping effect is intended; the "ripping" of the pixels is not. It is caused by not obtaining all the necessary pixel data.
Also bear in mind that speed and efficiency aren't my priority here as of yet.
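One common remedy for this kind of ripping is to invert the mapping: loop over every destination pixel, convert it back to polar coordinates, and sample the source image there, so no output pixel is skipped. A rough sketch, assuming a hypothetical getpixel(x, y) helper that reads from the original image and mirrors the existing setpixel(x, y, color):
// Rough sketch of inverse mapping: walk the destination pixels and sample the source.
for (var destY = 0; destY < canvas.height; destY++) {
    for (var destX = 0; destX < canvas.width; destX++) {
        var dx = destX - canvas.width / 2;
        var dy = destY - canvas.height / 2;
        var radius = Math.sqrt(dx * dx + dy * dy);
        var angle = Math.atan2(dy, dx);
        // Invert the forward mapping used above:
        //   angle  = srcX / 360 * Math.PI  ->  srcX = angle * 360 / Math.PI
        //   radius = srcY / Math.PI        ->  srcY = radius * Math.PI
        var srcX = Math.round(angle * 360 / Math.PI);
        var srcY = Math.round(radius * Math.PI);
        var color = getpixel(srcX, srcY); // assume this returns nothing outside the source image
        if (color) {
            setpixel(destX, destY, color);
        }
    }
}
updatePixelsToContext();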

Google Maps API V3 - Showing progress along a route

I want to show a custom route along with the current progress. I've overlaid a Polyline to show the route, and I am able to find the LatLng of the current position and place a marker. What I want to do now is to highlight the travelled part of the Polyline using a different colour. I'm pretty sure it's not possible to have multiple colours for one Polyline, so I plan to overlay a second Polyline over the first to give this effect. Here's my question:
Knowing the LatLng of the current position, and armed with the array of LatLng points used to create the original Polyline, how best can I create the second 'Progress' route?
Thanks ahead.
With a few assumptions:
Your path is quite dense (if not, you could interpolate intermediate points)
Your route doesn't overlap with itself (wouldn't work with a repeating circular path, say)
...one crude way would be as follows:
Using Python'y pseudo-code, say you have a route like this:
points = [LatLng(1, 1), LatLng(2, 2), LatLng(3, 3), LatLng(4, 4)]
You draw this as a Polyline as usual.
Then, given your current position, you would find the nearest point on the route:
cur_pos = LatLng(3.1, 3.0123)
nearest_latlng = closest_point(points, to = cur_pos)
Then nearest_latlng would contain LatLng(3, 3)
Find nearest_latlng in the list, then simply draw a second polyline up to this point. In other words, you truncate the points list at the current LatLng:
progress_points = [LatLng(1, 1), LatLng(2, 2), LatLng(3, 3)]
..then draw that on the map
As mentioned, this will break if the path loops back on itself (closest_point would only ever find the first or last point)
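The closest_point helper above is pseudo-code; a minimal JavaScript sketch of it (assumes the Maps JavaScript API geometry library is loaded, e.g. via libraries=geometry):
// Minimal sketch: return the vertex of `points` nearest to `pos`.
function closestPoint(points, pos) {
    var best = null;
    var bestDistance = Infinity;
    points.forEach(function (point) {
        var d = google.maps.geometry.spherical.computeDistanceBetween(pos, point);
        if (d < bestDistance) {
            bestDistance = d;
            best = point;
        }
    });
    return best;
}

// Usage: truncate the route at the nearest vertex to get the progress path.
var nearest = closestPoint(points, curPos);
var progressPoints = points.slice(0, points.indexOf(nearest) + 1);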
If you know how far has been travelled, there is an epoly extension which gives a few methods that could be used, mainly:
.GetIndexAtDistance(metres)
Returns the vertex number at or after the specified distance along the path.
That index could be used instead of the closest_point calculated one above
Typescript example:
getRouteProgressPoints(directionResult: google.maps.DirectionsResult | undefined, targetDistance: number) {
if (!directionResult || targetDistance <= 0) return [];
const route = directionResult.routes[0];
const leg: google.maps.DirectionsLeg = route.legs[0];
const routePoints: google.maps.LatLng[] = [];
// collect all points
leg.steps.forEach((step) => {
step.path.forEach((stepPath) => {
routePoints.push(stepPath);
});
});
let dist = 0;
let olddist = 0;
const points = [];
// go throw all points until target
for (let i = 0; i < routePoints.length && dist < targetDistance; i++) {
const currentPoint = routePoints[i];
// add first point
if (!i) {
points.push(currentPoint);
continue;
}
const prevPoint = routePoints[i - 1];
olddist = dist;
// add distance between points
dist += google.maps.geometry.spherical.computeDistanceBetween(prevPoint, currentPoint);
if (dist > targetDistance) {
const m = (targetDistance - olddist) / (dist - olddist);
const targetPoint = new google.maps.LatLng(
prevPoint.lat() + (currentPoint.lat() - prevPoint.lat()) * m,
prevPoint.lng() + (currentPoint.lng() - prevPoint.lng()) * m,
);
points.push(targetPoint);
} else {
points.push(currentPoint);
}
}
return points;
}
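A possible way to use the returned points (sketch, calling the method above; directionResult, travelledMetres and map are assumed to exist in your code):
// Sketch: draw the travelled part of the route in a different colour on top.
const progressPath = getRouteProgressPoints(directionResult, travelledMetres);
const progressLine = new google.maps.Polyline({
    path: progressPath,
    strokeColor: '#ff0000',
    strokeWeight: 6, // slightly wider so it stays visible over the base route
    map: map,
});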
That is going to be really difficult unless you already have a way of determining that the vehicle has passed certain points. If you know it has passed certain points, you just create the second polyline with the passed points and the current marker as the end of the polyline. You will probably want to change the width of the polyline in order to keep it visible when it is overlaying the initial route.
All of that being said, if you don't know whether the points have been passed, I am not sure what you can do other than write some code that determines whether your current position is past a certain set of points, and I don't know how you would do that if a route zigzags all over the place. Now, if your points go from straight west to straight east etc., then coding something will be easy, but I doubt that is the case.
Hope this helps somewhat....

Raphael Position

How can I get the position of an object in Raphael? I can get the size using getBBox(), but there appears to be no way to get the position.
getBBox() should give you the position as well, via its x and y properties.
var bbox = el.getBBox();
alert([bbox.x, bbox.y]);
getBBox() returns an object with 5 properties. They are:
x
y
width
height
toString()
If you call getBBox(false) it will return coordinate data for the object's position AFTER any transformation; call getBBox(true) to return coordinates for the object prior to transformation.
Use it like this...
var paper = Raphael(10, 10, 300, 300);
var circle = paper.circle(30, 55, 15);
var circleBBox = circle.getBBox(false);
Edit: I just downloaded Raphael 2.1 and I believe it has added x2 and y2 to the properties returned by getBBox().
Depending on what kind of shape it is, the documentation seems to say it can be accessed using the .attr() function. So, if it's a circle...
var x = myCircle.attr('cx'); //cx is the center-x-coordinate of the circle
var y = myCircle.attr('cy'); //same, for y
var r = myCircle.attr('r'); //Radius of circle.
A square would have attrs of x, y, width, height. Check the documentation for more info.
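For example, for a rectangle it might look like this (sketch; the rect element here is hypothetical):
// Sketch: reading position and size attributes from a rect element.
var rect = paper.rect(10, 20, 100, 50);
var rectX = rect.attr('x');
var rectY = rect.attr('y');
var rectWidth = rect.attr('width');
var rectHeight = rect.attr('height');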
You may also access the x and y values this way:
var x = myCircle.attrs.x;
var y = myCircle.attrs.y;
The x and y attributes are those within the set. The issue here is that if the set gets translated somewhere else, the x and y given by .getBBox() do not account for the translation.
Raphael.transformPath(path, transform) can help by applying the same transforms that the set has...
To translate that point you can:
var tp = Raphael.transformPath("M" + x + "," + y, set.attr('transform'));
x = tp[0][1];
y = tp[0][2];

How do I find the exact lat/lng coordinates of a birdseye scene in Virtual Earth?

I'm trying to find the latitude and longitude of the corners of my map while in birdseye view. I want to be able to plot pins on the map, but I have hundreds of thousands of addresses that I want to be able to limit to the ones that need to show on the map.
In normal view, VEMap.GetMapView().TopLeftLatLong and .BottomRightLatLong return the coordinates I need; but in Birdseye view they return blank (or encrypted values). The SDK recommends using VEBirdseyeScene.GetBoundingRectangle(), but this returns bounds of up to two miles from the center of my scene which in major cities still returns way too many addresses.
In previous versions of the VE Control, there was an undocumented VEDecoder object I could use to decrypt the LatLong values for the birdseye scenes, but this object seems to have disappeared (probably been renamed). How can I decode these values in version 6.1?
It always seems to me that the example solutions for this issue only find the centre of the current map on the screen, as if that is always the place you're going to click! Anyway, I wrote this little function to get the actual pixel location that you clicked on the screen and return a VELatLong for that. So far it seems pretty accurate (even though I see this as one big, horrible hack - but it's not like we have a choice at the moment).
It takes a VEPixel as input, which is the x and y coordinates of where you clicked on the map. You can get that easily enough on the mouse event passed to the onclick handler for the map.
function getBirdseyeViewLatLong(vePixel)
{
    var be = map.GetBirdseyeScene();
    var centrePixel = be.LatLongToPixel(map.GetCenter(), map.GetZoomLevel());

    var currentPixelWidth = be.GetWidth();
    var currentPixelHeight = be.GetHeight();

    var mapDiv = document.getElementById("map");
    var mapDivPixelWidth = mapDiv.offsetWidth;
    var mapDivPixelHeight = mapDiv.offsetHeight;

    var xScreenPixel = centrePixel.x - (mapDivPixelWidth / 2) + vePixel.x;
    var yScreenPixel = centrePixel.y - (mapDivPixelHeight / 2) + vePixel.y;

    var position = be.PixelToLatLong(new VEPixel(xScreenPixel, yScreenPixel), map.GetZoomLevel());
    return (new _xy1).Decode(position);
}
Here's the code for getting the Center Lat/Long point of the map. This method works in both Road/Aerial and Birdseye/Oblique map styles.
function GetCenterLatLong()
{
    // Check if in Birdseye or Oblique Map Style
    if (map.GetMapStyle() == VEMapStyle.Birdseye || map.GetMapStyle() == VEMapStyle.BirdseyeHybrid)
    {
        // IN Birdseye or Oblique Map Style
        // Get the BirdseyeScene being displayed
        var birdseyeScene = map.GetBirdseyeScene();

        // Get approximate center coordinate of the map
        var x = birdseyeScene.GetWidth() / 2;
        var y = birdseyeScene.GetHeight() / 2;

        // Get the Lat/Long
        var center = birdseyeScene.PixelToLatLong(new VEPixel(x, y), map.GetZoomLevel());

        // Convert the BirdseyeScene LatLong to a normal LatLong we can use
        return (new _xy1).Decode(center);
    }
    else
    {
        // NOT in Birdseye or Oblique Map Style
        return map.GetCenter();
    }
}
This code was copied from here:
http://pietschsoft.com/post/2008/06/Virtual-Earth-Get-Center-LatLong-When-In-Birdseye-or-Oblique-Map-Style.aspx
From the VEMap.GetCenter Method documentation:
This method returns null when the map style is set to VEMapStyle.Birdseye or VEMapStyle.BirdseyeHybrid.
Here is what I've found, though:
var northWestLL = (new _xy1).Decode(map.GetMapView().TopLeftLatLong);
var southEastLL = (new _xy1).Decode(map.GetMapView().BottomRightLatLong);
The (new _xy1) seems to work the same as the old undocumented VEDecoder object.
An interesting point in the Bing Maps Terms of Use..
http://www.microsoft.com/maps/product/terms.html
Restriction on use of Bird’s eye aerial imagery:
You may not reveal latitude, longitude, altitude or other metadata;
According to http://dev.live.com/virtualearth/sdk/ this should do the trick:
function GetInfo()
{
    alert('The latitude,longitude at the center of the map is: ' + map.GetCenter());
}
