queryFeatures is not returning all intersected features - javascript

Intended effect
When user clicks on a polygon feature (a county, region, or neighborhood/municipality) or uses the "Draw" widget, a dashboard card displays the number of intersected point features returned by queryFeatures() (see below).
localitiesLayer.queryFeatures(query).then(function(results) {
  var queriedLocalities = results.features;
  if (queriedLocalities.length > 0) {
    var fossilsFound = queriedLocalities.length;
  }
});
Issue
The maximum number of returned intersected features is 2,000 even when more than 2,000 point features have been selected.
In the screenshot below, the card reads only "2000 fossil sites in the area!" when there should be over 3,000 features returned.
Troubleshooting
The issue goes away when a feature layer view is queried instead of the localitiesLayer feature layer. That, however, introduces an unsolvable issue of its own: the number of localities returned by queryFeatures then changes depending on the zoom level, because a layer view can only query the features currently available on the client (as detailed in the API Reference for FeatureLayerView.queryFeatures).
Since it seems I'm stuck using a server-side query, I need to understand why this is happening at such a seemingly arbitrary number.
At first I thought it was related to possible topology issues between features, but why would that affect the polygon generated by the Draw widget? Before writing this question I also ran the Integrate tool on all feature layers just to make sure there weren't any non-coincident polygons.
Question
Why is the upper limit of features returned by the queryFeatures() on the localitiesLayer 2,000 even when more than 2,000 point features intersect with a selected polygon?
Why does querying a feature layer view fix this issue (even though, as detailed above, it is not a valid solution to this problem)?
CodePen of app with bug

Feature services usually have a maximum number of features they will return in a single query, and that is what is happening here.
You can check the service endpoint of the layer (LAU_Localities_View - 0) to find this value, Max Record Count, here set to 2000.
So you will have to use some other technique to retrieve all the values. One simple way is to iterate, querying with an extra condition on a field that acts as the last index, for example OBJECTID, and ordering the results by that index field.
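Below is a minimal sketch of that paging pattern, assuming the 4.x API used in the app; the helper name queryAllLocalities, the outFields list, and the selectedGeometry argument are illustrative, not part of the original code.

// Query all intersecting features in pages, using OBJECTID as the paging index.
function queryAllLocalities(selectedGeometry) {
  var allFeatures = [];

  function queryPage(lastObjectId) {
    var query = localitiesLayer.createQuery();
    query.geometry = selectedGeometry;
    query.spatialRelationship = "intersects";
    query.where = "OBJECTID > " + lastObjectId;   // extra condition on the index field
    query.orderByFields = ["OBJECTID ASC"];       // order by the index field
    query.outFields = ["OBJECTID"];

    return localitiesLayer.queryFeatures(query).then(function(results) {
      allFeatures = allFeatures.concat(results.features);
      if (results.features.length === 0) {
        return allFeatures;                       // no more pages left
      }
      var lastFeature = results.features[results.features.length - 1];
      return queryPage(lastFeature.attributes.OBJECTID);
    });
  }

  return queryPage(0);
}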

Related

Checking if an object in OpenStreetMap is a building

I'm using the Leaflet library in my ReactJS app and I wonder if there is a simple way to recognize whether the object clicked by the user is a building.
The idea that came to my mind is to check the map colour under the clicked position.
Does it make sense?
I appreciate your help.
Commenters advised me to give a use case:
The app I'm working on is meant to mark antique buildings whose elevation is in bad shape, so the city's architecture management has an easier job finding them.
Every user of the app can mark such a building. To prevent vandals from corrupting the data with senseless points on the map, I wanted to validate, as a first step, whether the clicked point is a building.
I hope this clarifies the problem a little.
I wonder if there is a simple way to recognize whether the object clicked by the user is a building.
No.
You basically want to run arbitrary point-in-polygon queries against OSM's building dataset, and I will presume that you don't want to host that dataset yourself.
The simplest way to do this is to query an Overpass API server, passing an is_in query and filtering on the building tag key. The OSM website's "Query features" functionality uses such a technique.
With this technique you won't have to worry about hosting the data, just about creating the right Overpass API query. Please bear in mind that the Overpass API servers are run by volunteers and their resources are limited.
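A hedged sketch of what such a client-side lookup could look like; for simplicity it uses an around filter on the clicked coordinate rather than is_in (not every building is available as an Overpass area), and the 5 m radius, endpoint, and helper name are illustrative assumptions.

// Ask an Overpass API server whether any way tagged "building" lies within a few
// metres of the clicked point. Returns a Promise resolving to true/false.
function isNearBuilding(lat, lon) {
  var query =
    "[out:json][timeout:10];" +
    "way(around:5," + lat + "," + lon + ")[building];" +
    "out ids;";
  return fetch("https://overpass-api.de/api/interpreter", {
    method: "POST",
    body: query
  })
    .then(function (response) { return response.json(); })
    .then(function (data) { return data.elements.length > 0; });
}

// Usage from a Leaflet click handler:
// map.on("click", function (e) {
//   isNearBuilding(e.latlng.lat, e.latlng.lng).then(function (isBuilding) { /* ... */ });
// });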
The second simplest way would be to download an OSM extract of your area of interest and run the point-in-polygon queries yourself, by whatever means you like (PostGIS's ST_Intersects, turf.js, etc.).
If you will be using Leaflet, another approach would be to use vector tiles, and set it up in such a way that the buildings thematic layer is interactive. This will require you to be aware of the limitations of the vector tile servers.
The idea that came to my mind is to check the map colour under the clicked position.
That is unreliable. Think about labels on top of buildings, or the colour of the edge of the building area, or buildings that don't render with the standard colour (e.g. places of worship, monuments).

Using an ordered list in IndexedDB

I'm trying to learn the basics of IndexedDB by creating a trivial notepad application. I'm having difficulties using an ordered list in this environment.
The feature I'm not sure how to implement is having an ordered list of notes.
I first tried implementing the notepad application in WebSQL, and I found it quite easy to select the notes like this:
select * from notes order by position
And when inserting a note at a specified position, I first did ...
update notes set position = position + 1 where position >= insert_position
... to shift each note to make space for the new note at position insert_position.
But I saw that WebSQL is actually deprecated.
What are the possibilities to achieve such a feature in IndexedDB? I don't fully understand how to create an ordered list in an environment such as IndexedDB since a quick query like the above is not applicable.
As a side note, I know it's possible to store an array in IndexedDB, but then I would just have one record which I'm using each time. I'm rather looking for a way to somehow have an ordered list of all records (each record representing a note), and to be able to update the ordering (like the shifting query above).
Could someone shed some light on the IndexedDB way of an ordered list?
As with many things there are a few ways to crack this nut.
If you were creating an app that orders notes based on creation time, it would be as simple as using an auto-incrementing key (this flag is specified on objectStore creation). Note one would have the id (aka primaryKey) of 1, the second 2 and so forth. This would use the default keyPath, so you could open up a cursor without having to create an index.
To order notes by something that could change, such as modified-on time, you'd create an index on that field and be sure to specify it when adding or putting objects. You would open up a cursor with a lower bound of, say, 0 (in IndexedDB key ordering, numbers come before all strings) and leave the upper bound open. You'd then cursor across each row one by one, firing onsuccess handlers, until you exhaust the cursor and it returns null in event.target.result.
It sounds like you might be looking to have a field such as "position" and order on that. That's totally doable with a regular index and cursor, as above. One note of advice would be to make the position field a floating point number rather than an integer, since with the former you can update the order without having to alter any other rows (position_new = (position_prev + position_next) / 2).
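A minimal sketch of that approach; the database name "notepad", the store name "notes", and the index name "by_position" are made up for illustration.

// Open the database and create a "notes" store with an index on the position field.
var request = indexedDB.open("notepad", 1);

request.onupgradeneeded = function (event) {
  var db = event.target.result;
  var store = db.createObjectStore("notes", { keyPath: "id", autoIncrement: true });
  store.createIndex("by_position", "position"); // index used for ordered reads
};

request.onsuccess = function (event) {
  var db = event.target.result;
  var tx = db.transaction("notes", "readonly");
  var index = tx.objectStore("notes").index("by_position");

  // A cursor over the index walks the notes in position order.
  index.openCursor().onsuccess = function (e) {
    var cursor = e.target.result;
    if (cursor) {
      console.log(cursor.value.position, cursor.value.text);
      cursor.continue();
    }
  };
};

// To insert a note between two neighbours, give it the midpoint position,
// so no other record has to be rewritten:
// newNote.position = (prev.position + next.position) / 2;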

Hexagon map coordinates -> html rendering

I have a hexagonal map, but I realized that my coordinate system is bad for some pathfinding algorithms, so I want to change it.
I chose a system that fully satisfies me. You can find it here. But in the referenced example the whole map is rotated differently from what I need.
My old version of my map is here:
http://dark-project.cz/wesnoth/map-view/1
And my question is how to render my map in HTML to have the same map as I have now but with the new coordinate system?
(I render it using a PHP loop. For each field I have this information:
coordinates, field type (grass, village, ...), and the dimensions of the field image.)
Thank you for your answers!
PS: I think it could be done using HTML5 Canvas, but I want good browser support and I haven't got any experience with HTML5 (though I'm not against rendering on the client side if it's fast and has good browser support), so I'd prefer a server-side (PHP) solution!
Your coordinate system is not compatible with the one used in the algorithm demo.
I think your best bet is to alter the algorithm you have found to use your coordinate system.
AFAIK you essentially have to change:
the part that takes a given coordinate and determines the 6 neighbouring coordinates (a sketch of this and the next item follows after the list).
the function that determines if a given coordinate is inside the map boundaries.
(sort of) the function that calculates the cost/distance
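A hedged sketch of the first two items, assuming an axial-style coordinate system; the exact neighbour offsets and boundary test depend on the system you actually chose, so treat these as placeholders to adjust.

// The six neighbour offsets of a hex at (x, y) in an assumed axial layout.
var HEX_NEIGHBOUR_OFFSETS = [
  [ 1,  0], [-1,  0],
  [ 0,  1], [ 0, -1],
  [ 1, -1], [-1,  1]
];

function hex_neighbours(x, y) {
  return HEX_NEIGHBOUR_OFFSETS.map(function (d) {
    return [x + d[0], y + d[1]];
  });
}

// Placeholder boundary test for a rectangular map; replace with your map's real shape.
function in_bounds(x, y, mapWidth, mapHeight) {
  return x >= 0 && y >= 0 && x < mapWidth && y < mapHeight;
}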
I notice the demo code goes:
function hex_distance(x1,y1,x2,y2) {
  dx = Math.abs(x1-x2);
  dy = Math.abs(y2-y1);
  return Math.sqrt((dx*dx) + (dy*dy));
}
But that's an inaccurate estimate, as the axes aren't perpendicular, and it could produce non-optimal results: the requirement of a heuristic function in A* search is that it never exceeds the real cost, and this function may violate that rule.
Your coordinate system would actually make that function more accurate, but you could also get away with just the Manhattan distance:
function hex_distance(x1,y1,x2,y2) {
  return Math.abs(x2-x1) + Math.abs(y2-y1);
}
Which, if I am not mistaken, works out to the number of tile steps needed to get from (x1,y1) to (x2,y2).

How do I get more locations?

I am trying to get some locations in New York using FourSquare API using the following API call:
https://api.foursquare.com/v2/venues/search?ll=40.7,-74&limit=50
What I don't understand is that if the call imposes a limit of 50 search results (which is the maximum), how can I get more locations? When using Facebook API, the results being returned were random so I could issue multiple calls to get more results but FourSquare seems to be returning the same result set. Is there a good way to get more locations?
EDIT:
OK, so there was a comment saying that I could be breaking a contractual agreement, and I am not sure why this would be the case. I would gladly accept a reasoning for this. My doubt is this: what if, hypothetically, the location I am searching for is not in the 50 results returned? In that case, shouldn't there be a pagination mechanism somewhere?
The API docs here can help.
Foursquare searching is very closely linked to the location 'point' (the 'll' param on the query) that you provide. The simple answer is that to find more venues within a given area, you need to simply query again with a different location 'point' within that area.
Two queries, both at points close to one another:
https://api.foursquare.com/v2/venues/search?ll=40.700,-74.000&limit=50
https://api.foursquare.com/v2/venues/search?ll=40.705,-74.005&limit=50
will get you two different sets of venues (that may overlap, depending on how close the points are).
The default intent for the search method is 'checkin', which will return the 50 most popular locations closest to that point. If instead you want to look at all the venues within an area, you can use the 'browse' intent. This takes either a 'radius' parameter, in which case it returns venues inside a circle around the given point with the given radius, or it takes two coordinates representing the 'sw' and 'ne' corners of a rectangle. So, you could do:
https://api.foursquare.com/v2/venues/search?ll=40.705,-74.005&limit=50&intent=browse&radius=50
which will give you up to 50 venues within the 50 m circle around that point. A smaller radius will reduce the number of venues returned. So, by varying the radius and the point at which you search (or the size and position of the rectangle described by the 'sw' and 'ne' parameters), you can get more venues returned.
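A hedged sketch of how several browse searches could be tiled across an area and merged; the grid step, the 250 m radius, the v date, and the client_id/client_secret placeholders are all assumptions you would supply and tune yourself.

// Run one browse-intent search centred on (lat, lng).
function searchCell(lat, lng) {
  var url = "https://api.foursquare.com/v2/venues/search" +
    "?ll=" + lat + "," + lng +
    "&limit=50&intent=browse&radius=250" +
    "&client_id=YOUR_CLIENT_ID&client_secret=YOUR_CLIENT_SECRET&v=20120101";
  return fetch(url).then(function (r) { return r.json(); });
}

// Step a grid of search points across a bounding box and merge the venues.
function searchArea(south, west, north, east, step) {
  var requests = [];
  for (var lat = south; lat <= north; lat += step) {
    for (var lng = west; lng <= east; lng += step) {
      requests.push(searchCell(lat, lng));
    }
  }
  return Promise.all(requests).then(function (responses) {
    var byId = {};   // de-duplicate venues that fall into more than one cell
    responses.forEach(function (res) {
      ((res.response && res.response.venues) || []).forEach(function (v) {
        byId[v.id] = v;
      });
    });
    return Object.keys(byId).map(function (id) { return byId[id]; });
  });
}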
Hope that helps.
The current API limits results to 50. You should try altering your coordinates to be more precise so you don't miss your venue.
Pagination would be nice but 50 is a lot of venues for a search.

Data visualization: Bubble charts, Venn diagrams, and tag clouds (oh my!)

Suppose I have a large list of objects (thousands or tens of thousands), each of which is tagged with a handful of tags.
There are dozens or hundreds of possible tags and their usage follows a typical power law:
some tags are used extremely often but most are rare.
All but the most frequent couple dozen tags could typically be ignored, in fact.
Now the problem is how to visualize the relationship between these tags.
A tag cloud is a nice visualization of just their frequencies but it ignores which tags occur with which other tags.
Suppose tag :bar only occurs on objects also tagged :foo.
That should be visually apparent.
Similarly for three tags that tend to occur together.
You could make each tag a bubble and let them partially overlap with each other.
Technically that's a Venn diagram but treating it that way might be unwieldy.
For example, Google charts can create Venn diagrams, but only for 3 or fewer sets (tags):
http://code.google.com/apis/chart/docs/gallery/venn_charts.html
The reason they limit it to 3 sets is that any more and it looks horrendous.
See "extentions to higher numbers of sets" on the Wikipedia page: http://en.wikipedia.org/wiki/Venn_diagrams
But that's only if every possible intersection is non-empty.
If no more than 3 tags ever co-occur (maybe after throwing out the rare tags) then a collection of Venn diagrams could work (with the sizes of the bubbles representing tag frequency).
Or perhaps a graph (as in vertices and edges) with visually thicker or thinner edges to represent frequency of co-occurrence.
Do you have any ideas, or pointers to tools or libraries?
Ideally I'd do this with javascript but I'm open to things like R and Mathematica or really anything else.
I'm happy to share some actual data (you'll laugh if I tell you what it represents) if anyone is curious.
Addendum: The application I originally had in mind was TagTime but it occurs to me that this also maps well to the problem of visualizing one's delicious bookmarks.
If I understand your question correctly, an image matrix should work nicely here. The implementation I have in mind would be an n x m matrix in which the tagged items are rows and each tag is a separate column. Every cell in the matrix would consist entirely of "1"s and "0"s, i.e., a particular item either has a given tag or it doesn't.
In the matrix below (which I rotated 90 degrees so it would fit better in this window, so the columns actually represent tagged items and each row shows the presence or absence of a given tag across all items), I simulated a scenario with 8 tags and 200 tagged items; a "0" is blue and a "1" is light yellow.
All values in this matrix were randomly selected (each tagged item is eight draws from a box consisting of two tokens, one blue and one yellow, i.e., no tag and tag respectively). So not surprisingly there's no visual evidence of a pattern here, but if there is one in your data, this technique, which is dead simple to implement, can help you find it.
I used R to generate and plot the simulated data, using only base graphics (no external packages or libraries):
# create the matrix from an initial row of random 0/1 data
r1 = sample(0:1, 8, replace=TRUE)
A = matrix(data=r1, nrow=1, ncol=8)
# populate it with the remaining random rows (one row per tagged item)
for (i in seq(1, 199, 1)){r1 = sample(0:1, 8, replace=TRUE); A = rbind(A, r1)}
# now plot it
image(z=A, ann=F, axes=F, col=topo.colors(12))
I would create something like this if you are targeting the web. Edges connecting the nodes could be thicker or darker in colour, or could have a stronger force connecting them so that related tags end up closer together. I would also add the tag name inside each circle.
Some libraries that would be very good for this include:
Protovis (Javascript)
Flare (Adobe Flash)
Some other fun javascript libraries worth looking into are:
Processing for Javascript
Raphael
Although this is an old thread, I just came across it today.
You may also want to consider using a Self-Organizing Map.
Here is an example of a self-organizing map for world poverty. It used 39 of what you call your "tags" to arrange what you call your "objects".
http://www.cis.hut.fi/research/som-research/povertymap.gif
Not sure it would work as I did not test it, but here is how I would start:
You can create a matrix as doug suggests in his answer, but instead of having documents as rows and tags as columns, you take a square matrix where tags are both the rows and the columns. The value of cell [T1;T2] is the number of documents tagged with both T1 and T2 (note that by doing this you get a symmetric matrix, because [T1;T2] has the same value as [T2;T1]).
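A small sketch of building that tag-by-tag co-occurrence matrix, assuming each object is represented as { tags: [...] } (the function name and input shape are illustrative).

// Build counts[t1][t2] = number of objects tagged with both t1 and t2.
// The diagonal counts[t][t] is simply the frequency of tag t.
function cooccurrenceMatrix(objects) {
  var counts = {};
  objects.forEach(function (obj) {
    obj.tags.forEach(function (t1) {
      obj.tags.forEach(function (t2) {
        counts[t1] = counts[t1] || {};
        counts[t1][t2] = (counts[t1][t2] || 0) + 1;  // symmetric by construction
      });
    });
  });
  return counts;
}

// Example: cooccurrenceMatrix([{ tags: ["foo", "bar"] }, { tags: ["foo"] }])
// gives { foo: { foo: 2, bar: 1 }, bar: { foo: 1, bar: 1 } }.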
Once you have done that, each row (or column) is a vector locating the tag in a space with T dimensions. Tags near each other in this space often occur together. To visualize co-occurrence you can then use a dimensionality-reduction method or any clustering method. For example, you can use a Kohonen self-organizing map to project your T-dimensional space onto a 2D space; you'll then get a 2D matrix where each cell represents an abstract vector in the tag space (meaning the vector won't necessarily exist in your data set). This vector reflects a topological constraint of your source space and can be seen as a "model" vector representing a significant co-occurrence of some tags. Moreover, cells near each other on this map represent vectors close to each other in the source space, thus allowing you to map the tag space onto a 2D matrix.
Final visualization of the matrix can be done in many ways but I cannot give you advice on that without first seeing the results of the previous processing.
