I have been using the Lookback API to query for user stories from the Rally environment. While its querying functionality is stronger than the WsapiDataStore's, since it lets me query using the RPM hierarchy, it does not seem to be able to return the full values of fields such as Owner and Project; instead, the OIDs for these fields are returned.
To work around this, my idea was to first run a Lookback API query to get all the story OIDs within the RPM hierarchy I am concerned with, capture those OIDs in an array, and then use a WsapiDataStore query to get the detailed info for the stories matching the OIDs in the array. When using the Lookback API, I have the option to use the 'in' operator, so the query would look like this:
{
    property: 'ObjectID',
    operator: 'in',
    value: ['71352862', '44523976', '61138496']
}
I can't use this functionality in the WsapiDataStore, however. Also, when I try to 'OR' them all together in one long query string, I get an error about an invalid request. I assume the query string is too long, since in most cases I am searching for about 1000 user stories. I would prefer not to make a separate query for each OID, but right now that seems like the only solution. Is there a way to get full details from the Lookback API, or at least to filter using an array in the WsapiDataStore query?
ObjectID now supports the in operator.
From the WSAPI docs:
Here is an example usage: (ObjectID in 1,2,3)
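Given that, a minimal sketch of the follow-up WSAPI query, assuming the App SDK 2.0 WsapiDataStore and an OID array already collected from the Lookback API (variable names are mine):
var storyOids = ['71352862', '44523976', '61138496']; // collected via the Lookback API
var queryString = '(ObjectID in ' + storyOids.join(',') + ')';

var store = Ext.create('Rally.data.WsapiDataStore', {
    model: 'UserStory',
    fetch: ['Name', 'Owner', 'Project'],
    filters: Rally.data.QueryFilter.fromQueryString(queryString),
    autoLoad: true,
    listeners: {
        load: function(store, records) {
            // records now carry full Owner and Project values
        }
    }
});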
I am struggling to find good material on best practices for filtering data with Firebase Firestore. I want to filter my data based on the categories selected by the user. I have a collection of documents stored in my Firestore database, and each document has an array holding all the appropriate categories for that single document. For the sake of filtering, I keep a local array with the user's preferred categories as well. All I want to do is filter the data based on the user's preferred categories.
firestore categories field
Consider that I have the user's preferred categories stored as an array of strings (["Film", "Music"]). I was planning on using Firestore's 'array-contains' method like:
db.collection(collectionName)
    .where('categoriesArray', 'array-contains', ["Film", "Music"])
Later I found out that I can't use 'array-contains' against an array itself, and after investigating this issue I decided to change my data structure as mentioned here.
categories changed to Map
Once I changed the categories from an array to a map, I thought I could use multiple where conditions to filter the documents:
let query = db.collection(collectionName)
    .where(somefield, '==', true);

this.props.data.filterCategories.forEach((val) => {
    query = query.where(`categories.${val}`, '==', true);
});

query = query
    .orderBy(someOtherField, "desc")
    .limit(itemsPerPage);

const snapshot = await query.get();
Now, problem number 2: Firebase requires you to add indexes for compound queries. The categories saved within each document are dynamic, and there's no way I can add these indexes in advance. What would be the ideal solution in such cases? Any help would be deeply appreciated.
This is a new feature of the Firebase JavaScript SDK, launched on November 7, 2019:
Version 7.3.0 - November 7, 2019
array-contains-any
"array-contains-any operator to combine up to 10 array-contains clauses on the same field with a logical OR. An array-contains-any query returns documents where the given field is an array that contains one or more of the comparison values"
citiesRef.where('regions', 'array-contains-any',
['west_coast', 'east_coast']);
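Applied to the original array structure from the question, a minimal sketch (variable and field names are taken from the question) would be:
const snapshot = await db.collection(collectionName)
    // matches documents whose categoriesArray contains Film OR Music
    .where('categoriesArray', 'array-contains-any', this.props.data.filterCategories)
    .get();
Note that the comparison array is limited to 10 values.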
Instead of iterating through each category that you wish to query and appending clauses to a single query object, each iteration should be its own independent query. And you can keep the categories in an array.
<document>
- itemId: abc123
- categories: [film, music, television]
If you wish to perform an OR query, you would make n-loops where each loop would query for documents where array-contains that category. Then on your end, you would dedup (remove duplicates) from the results based on the item's identifier. So if you wanted to query film or music, you would make 2 loops where the first iteration queried documents where array-contains film and the second loop queried documents where array-contains music. The results would be placed into the same collection and then you would simply remove all duplicates with the same itemId.
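A minimal sketch of that OR-query-with-dedup approach, assuming the Firebase JS SDK and the collection/field names from the example document above:
async function queryByCategories(db, categories) {
    const seen = new Set();
    const results = [];
    for (const category of categories) {
        const snap = await db.collection('items')
            .where('categories', 'array-contains', category)
            .get();
        snap.forEach((doc) => {
            const item = doc.data();
            if (!seen.has(item.itemId)) { // dedup on the item's identifier
                seen.add(item.itemId);
                results.push(item);
            }
        });
    }
    return results;
}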
This also does not pose a problem with the composite-index limit because categories is a static field. The real problem comes with pagination, because you would need to keep a record of all fetched itemIds in case a future page of results returns an item that was already fetched, and this creates an O(N^2) scenario (more on big-O notation: https://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/). And because you're deduping locally, pagination blocks as the user sees them are not guaranteed to be even: if each pagination block is set to 25 documents, for example, some pages may end up displaying 24, some 21, others 14, depending on how many duplicates were removed from each block.
Are you planning on retrieving documents with the exact category array? Say, your user preference is listed as ["Film", "Music"]. Do you wish to retrieve only those documents with Film AND Music, or do you wish to retrieve documents having Film OR music?
If it's the latter, then maybe you can query for all documents with "Film" and then query for all documents with "Music", then merge the results. The drawback here is some redundant document reads when a document has both "Film" and "Music" in its categoryArray field.
You can also explore using Algolia to enable full-text search. In this case, you'd probably store the category list as a string maybe separated by commas, then update the whole string when the user changes their preferences.
For the former case, I have not come across a workable solution, other than maybe storing the categories as a concatenated string in alphabetical order. Others might have a more solid solution than mine.
Hope this helps!
Your query includes an orderBy clause. This, in combination with any equality filter, requires that you create an index to support that query. There is no way to avoid this.
If you remove the orderBy, you will be able to have flexible, dynamic filters for equality using the map properties in the document. This is the only way you will be able to have a dynamic filter without creating an index. This of course means that you will have to order and page the query results on the client.
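As a sketch, reusing the question's own code (names are from the question), with the orderBy dropped and ordering/paging moved to the client:
let query = db.collection(collectionName).where(somefield, '==', true);
this.props.data.filterCategories.forEach((val) => {
    query = query.where(`categories.${val}`, '==', true);
});

const snapshot = await query.get(); // equality-only, so no composite index is required
const page = snapshot.docs
    .map((doc) => doc.data())
    .sort((a, b) => (a[someOtherField] < b[someOtherField] ? 1 : -1)) // "desc"
    .slice(0, itemsPerPage);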
I want to query, through JavaScript, for objects in a Parse DB that have only one of some specific related object. How can this criterion be achieved?
So I tried something like this; the equalTo() acts as a "contains", which is not what I'm looking for. My code so far, which doesn't work:
var query = new Parse.Query("Item");
query.equalTo("relatedItems", someItem);
query.lessThan("relatedItems", 2);
It seems Parse does not provide an easy way to do this.
Without any other fields, if you know all the items then you could do the following:
var innerQuery = new Parse.Query('Item');
innerQuery.containedIn('relatedItems', [all items except someItem]);
var query = new Parse.Query('Item');
query.equalTo('relatedItems', someItem);
query.doesNotMatchKeyInQuery('objectId', 'objectId', innerQuery);
...
Otherwise, you might need to fetch all records and do the filtering yourself.
Update
Because of the Relation data type, there is no way to include the relation's content in the results; you need to do another query to get the relation content.
A workaround might be to add an itemCount column, keep it updated whenever the item relation is modified, and do:
query.equalTo('relatedItems', someItem);
query.equalTo('itemCount', 1);
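A minimal Cloud Code sketch of keeping that column in sync, assuming a recent Parse Server and a relation named relatedItems (class and field names are assumptions):
Parse.Cloud.afterSave('Item', async (request) => {
    const item = request.object;
    const count = await item.relation('relatedItems').query()
        .count({ useMasterKey: true });
    if (item.get('itemCount') !== count) {
        item.set('itemCount', count);
        await item.save(null, { useMasterKey: true }); // afterSave fires once more; the guard above stops it
    }
});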
There are a couple of ways you could do this.
I'm working on a project now where I have cells composed of users.
I currently have an afterSave trigger that does this:
const count = await cell.relation("members").query().count();
cell.set("memberCount", count); // the JS SDK uses set(), not put()
This works pretty well.
There are other ways that I've considered in theory, but I've not used them yet.
The right way would be to hack the ability to use select with dot notation to grab a virtual field called relatedItems.length in the query, but that would probably only work for me because I use Postgres... Mongo seems to be extremely limited in its ability to do this sort of thing, which is why I would never make a database out of blobs of JSON in the first place.
You could do a similar thing with an afterFind trigger. I'm experimenting with that now; I'm not sure if it will confuse Parse to get back an attribute which does not exist in its schema, but I'll find out by the end of today. I have found that if I jam an artificial attribute into the objects in the trigger, they are returned along with the other data. What I'm not sure about is whether Parse will decide that the object is dirty, or, worse, decide that I'm creating a new attribute and store it to the database... which could be filtered out with a beforeSave trigger, but not until after the data had all been sent to the cloud.
There is also a place where I had to do several queries from several tables and would have ended up with a lot of redundant data. So I wrote a cloud function which did the queries and then returned a couple of lists of objects, plus a few lists of objectId strings which served as indexes. This worked pretty well for me. And tracking the last load time and sending it back when I needed to update my data allowed me to limit myself to objects which had changed since my last query.
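A rough sketch of what such a cloud function might look like (the class names, parameter names, and response shape are all assumptions, not the author's actual code):
Parse.Cloud.define('loadChangedData', async (request) => {
    const since = request.params.lastLoadTime
        ? new Date(request.params.lastLoadTime)
        : new Date(0);

    // Run the queries in parallel and fetch only what changed since the last load.
    const itemQuery = new Parse.Query('Item').greaterThan('updatedAt', since);
    const tagQuery = new Parse.Query('Tag').greaterThan('updatedAt', since);
    const [items, tags] = await Promise.all([itemQuery.find(), tagQuery.find()]);

    return {
        items: items,
        tagIds: tags.map((t) => t.id), // objectId strings that serve as an index
        loadTime: new Date().toISOString()
    };
});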
I have data in a standalone Neo4j REST server, including an index of nodes. I want a pure JavaScript (Node.js) client to connect to Neo4j and serve the formatted data to d3.js, a JavaScript visualisation library.
JugglingDB is very popular, but the Neo4j implementation was done "wrong": https://github.com/1602/jugglingdb/issues/56
The next most popular option on github is: https://github.com/thingdom/node-neo4j
looking at the method definitions https://github.com/thingdom/node-neo4j/blob/develop/lib/GraphDatabase._coffee
I'm able to use "getNodeById: (id, _) ->"
> node1 = db.getNodeById(12, callback);
returns the output from the REST server, including node properties. Awesome.
I can't figure out how to use "getIndexedNodes: (index, property, value, _) ->"
> indexedNodes = db.getIndexedNodes:(index1, username, Homer, callback);
...
indexedNodes don't get defined. I've tried a few different combinations. No joy. How do I use this command?
Also, getIndexedNodes() requires a key-value pair. Is there any way to get all, or a subset of the items in the index without looping?
One of the authors/maintainers of node-neo4j here. =)
indexedNodes don't get defined. I've tried a few different combinations. No joy. How do I use this command?
Your example seems to have some syntax errors. Are index1, username and Homer variables defined elsewhere? Assuming not, i.e. assuming those are the actual index name, property name and value, they need to be quoted as string literals, e.g. 'index1', 'username' and 'Homer'. But you also have a colon right before the opening parenthesis that shouldn't be there. (That's what's causing the Node.js REPL to not understand your command.)
Then, note that indexedNodes should be undefined -- getIndexedNodes(), like most Node.js APIs, is asynchronous, so its return value is undefined. Hence the callback parameter.
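Putting those fixes together, a corrected call would look something like this (a minimal sketch; the index name, property name, and value are taken from your example):
db.getIndexedNodes('index1', 'username', 'Homer', function (err, nodes) {
    if (err) throw err;
    // nodes is an array of matching Node objects; node.data holds the properties
    console.log(nodes.map(function (node) { return node.data; }));
});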
You can see an example of how getIndexedNodes() is used in the sample node-neo4j-template app the README references:
https://github.com/aseemk/node-neo4j-template/blob/2012-03-01/models/user.js#L149-L160
Also, getIndexedNodes() requires a key-value pair. Is there any way to get all, or a subset of the items in the index without looping?
getIndexedNodes() does return all matching nodes, so there's no looping required. Getting a subset isn't supported by Neo4j's REST API directly, but you can achieve the result with Cypher.
E.g. to return the 6th-15th user (assuming they have a type property set to user) sorted alphabetically by username:
db.query([
'START node=node:index1(type="user")',
'RETURN node ORDER BY node.username',
'SKIP 5 LIMIT 10'
].join('\n'), callback);
Cypher is still rapidly evolving, though, so be sure to reference the documentation that matches the Neo4j version you're using.
As mentioned above, in general, take a look at the sample node-neo4j-template app. It covers a breadth of features that the library exposes and that a typical app would need.
Hope this helps. =)
Neo4j 2 lets you do indices via REST. Docs here:
REST Indices
I'm interested in using the visualsearch.js control for my website but, having read through the documentation, I am still unclear regarding how to effectively obtain the output search collection data. Based on the example, the output string is constructed through serialization of the search collection. However, I was wondering if there is a way to access the search collection in a more array-like fashion (so that for/in loops can be used) rather than having to parse a single serialized string. Ultimately, I need to construct SQL queries from the search collection data.
If there is an even more efficient or appropriate way of accessing the search collection data, please let me know!
Thanks!
As far as I know, there are two ways to fetch data from VisualSearch; both are directly explained in their documentation under usage #4.
Like you said, the stringified version of the search:
visualSearch.searchBox.value();
// returns: 'country: "United States" state: "New York" account: 5-samuel title: "Pentagon Papers"'
Or the faceted object to loop over:
visualSearch.searchQuery.facets();
// returns: [{"country":"United States"},{"state":"New York"},{"account":"5-samuel"},{"title":"Pentagon Papers"}]
As you can see, this option gives you an array with one entry per facet that was filtered on, and for each facet the value that was entered.
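Since the goal in the question is building SQL queries, here is a minimal sketch of walking that array (illustrative only; in real code, bind the values as parameters instead of interpolating them):
var facets = visualSearch.searchQuery.facets();
var conditions = [];
var params = [];
facets.forEach(function (facet) {
    var field = Object.keys(facet)[0]; // each facet is a single {field: value} pair
    conditions.push(field + ' = ?');   // placeholder for a bound parameter
    params.push(facet[field]);
});
var sql = 'SELECT * FROM items WHERE ' + conditions.join(' AND ');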
Hmm... OK, the answer is not so straightforward. I would suggest you get some practice with Backbone's structure by making some modifications to the todo-list app; it is a great starting point, and you will get familiar with some of the wonderful Backbone.js methods for collections.
The Basic idea is the following:
With VisualSearch you can obtain a list of "facets", that is to say an array of key/value objects.
var myFacets = visualSearch.searchQuery.facets();
//my facets is then something like [{"field1":"value1-a"},{"field2":"value2-c"}]
After this you can use the myFacets elements to iteratively filter your collection with the WONDERFUL filter method inherited from the Underscore lib.
How to do it? You can use the _.each method in the Underscore lib:
_.each(myFacets, function (facet) {
    // each facet is a plain {field: value} object
    var field = _.keys(facet)[0];
    myCollection = myCollection.filter(function (item) {
        return item.get(field) == facet[field];
    });
});
Here you use the filter method of Backbone.js, which returns only the models for which your clause is true. So you filter your collection once for each single facet. It is like telling JavaScript: "Return only the elements of the collection which match this facet (value)", and you do it iteratively for all the different facets you got.
Hope this helps.
Ah, one last thing, just to mess ideas up :-) VisualSearch is built on Backbone.js, and the searchQuery object is nothing but a Backbone Collection, so you can use the methods and properties of a basic Backbone collection. Read this line again if it is not clear, because it can be a key point for future implementations! :-)
I suggest you have a look at the search_jquery.js file in the lib/js/models folder. It's very interesting...
I have a pretty big array of JSON objects (it's a music library with properties like artist, album, etc., feeding a jqGrid with loadonce=true) and I want to implement Lucene-like (Google-like) queries across the whole set, but locally, i.e. in the browser, without communication with a web server. Are there any JavaScript frameworks that will help me?
Go through your records once to create a one-time index, combining all searchable fields into a single string field called index.
Store these indexed records in an Array.
Partition the Array on the index, e.g. all the a's in one array, and so on.
Use the JavaScript function indexOf() against the index to match the query entered by the user and find records from the partitioned Array.
That was the easy part, but it will support all simple queries very efficiently, because the index does not have to be re-created for every query and the indexOf operation is very fast. I have used this for searching up to 2000 records, with a pre-sorted Array. That's actually how Gmail and Yahoo Mail work: they store your contacts in the browser in a pre-sorted array with an index that lets you see the contact names as you type.
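A minimal sketch of the indexing and lookup steps (the property names artist/album/title are assumptions based on the question's music library; the partitioning step is omitted for brevity):
var library = [/* your JSON objects: { artist: ..., album: ..., title: ... } */];

// One-time index: concatenate the searchable fields into one lowercase string.
var indexed = library.map(function (record) {
    return {
        record: record,
        index: [record.artist, record.album, record.title].join(' ').toLowerCase()
    };
});

function search(query) {
    var q = query.toLowerCase();
    return indexed
        .filter(function (entry) { return entry.index.indexOf(q) !== -1; })
        .map(function (entry) { return entry.record; });
}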
This also gives you a base to build on. Now you can write advanced query-parsing logic on top of it; for example, supporting a few simple conditional keywords like AND, OR, and NOT takes about 20-30 lines of custom JavaScript code. Or you can find a JS library that will do the parsing for you the way Lucene does.
For a reference implementation of above logic, take a look at how ZmContactList.js sorts and searches the contacts for autocomplete.
You might want to check FullProof, it does exactly that:
https://github.com/reyesr/fullproof
Have you tried CouchDB?
Edit:
How about something along these lines (also see http://jsfiddle.net/7tV3A/1/):
var filtered_collection = [];
var query = 'foo';

// 'collection' is the array of JSON objects to search.
$.each(collection, function (i, e) {
    $.each(e, function (ii, el) {
        if (el == query) {
            filtered_collection.push(e);
            return false; // stop scanning this record so it is only pushed once
        }
    });
});
The (el == query) part of course could/should be modified to allow more flexible search patterns than exact match.
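For instance, a case-insensitive substring match could replace the equality test (an assumed tweak, not part of the original snippet):
if (String(el).toLowerCase().indexOf(query.toLowerCase()) !== -1) {
    filtered_collection.push(e);
    return false;
}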