How do we use the output of visualsearch.js? - javascript

I'm interested in using the visualsearch.js control for my website but, having read through the documentation, I am still unclear regarding how to effectively obtain the output search collection data. Based on the example, the output string is constructed through serialization of the search collection. However, I was wondering if there is a way to access the search collection in a more array-like fashion (so that for/in loops can be used) rather than having to parse a single serialized string. Ultimately, I need to construct SQL queries from the search collection data.
If there is an even more efficient or appropriate way of accessing the search collection data, please let me know!
Thanks!

As far as I know there are two ways to fetch data from VisualSearch, and both are explained directly in their documentation under usage #4.
The first is, like you said, the stringified version of the search:
visualSearch.searchBox.value();
// returns: 'country: "United States" state: "New York" account: 5-samuel title: "Pentagon Papers"'
or the faceted object to loop over:
visualSearch.searchQuery.facets();
// returns: [{"country":"United States"},{"state":"New York"},{"account":"5-samuel"},{"title":"Pentagon Papers"}]
As you can see, this option gives you an array with one object per facet that was filtered on, each holding the value that was entered for that facet.
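Since you ultimately want to build SQL queries from this, here is a minimal sketch of looping over that facets array to build a WHERE clause (the items table and the query API are made up for illustration; in real code, whitelist the column names and use parameterized queries rather than string concatenation):
var facets = visualSearch.searchQuery.facets();
// e.g. [{"country":"United States"},{"state":"New York"}]
var conditions = [];
var params = [];
facets.forEach(function (facet) {
    Object.keys(facet).forEach(function (column) {
        conditions.push(column + ' = ?'); // placeholder for a parameterized query
        params.push(facet[column]);
    });
});
var sql = 'SELECT * FROM items' +
    (conditions.length ? ' WHERE ' + conditions.join(' AND ') : '');
// run `sql` with `params` through your server's parameterized query API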

Hmm, OK, the answer is not so straightforward. I would suggest you get some practice with the Backbone structure by making some modifications to the todo-list app; it is a great starting point and gets you familiar with some of the wonderful Backbone.js methods for collections.
The basic idea is the following:
With VisualSearch you can obtain a list of "facets", that is to say an array of key/value objects.
var myFacets = visualSearch.searchQuery.facets();
// myFacets is then something like [{"field1":"value1-a"},{"field2":"value2-c"}]
After this you can use the myFacets elements to iteratively filter your collection with the WONDERFUL filter method inherited from the Underscore lib.
How to do it? You can use the _.each method from Underscore (each facet here is a plain key/value object, so we iterate over its key and value):
_.each(myFacets, function(facet){
    _.each(facet, function(value, category){
        myCollection = myCollection.filter(function(item){
            return item.get(category) == value;
        });
    });
});
Here you use the filter method of Backbone.js, which returns only the items for which your clause is true. So you filter your collection once for each single facet. It is like telling JavaScript: "return only the elements of the collection which match this facet's value", and you do it iteratively for all the different facets you got.
Hope this helps.
Ah, one last thing, just to mess ideas up :-) : VisualSearch is built on Backbone.js, and the searchQuery object is nothing but a Backbone Collection, so you can use the methods and the properties of a basic Backbone collection. Read this line again if it is not clear, because this can be a key point for future implementations! :-)
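For example, a small sketch of what that means in practice: you can iterate the searchQuery collection directly with the standard collection methods, assuming each model in it exposes category and value attributes (which is how VisualSearch's Facet model stores them, as far as I recall):
visualSearch.searchQuery.each(function (facet) {
    console.log(facet.get('category'), facet.get('value'));
});
// or map it into plain [category, value] pairs
var pairs = visualSearch.searchQuery.map(function (facet) {
    return [facet.get('category'), facet.get('value')];
});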
I suggest you have a look at the search_jquery.js file in the lib/js/models folder. It's very interesting...

Related

Use _.filter on object created by _.indexBy

I started off with an array of objects and used _.filter to filter down on some search criteria and _.findWhere to select out a single object based on ID.
Unfortunately the amount of data has increased so much that it's now far more efficient to use _.indexBy to index by ID, so I can just do data[ID] in place of the _.findWhere calls.
However, I am stumped on how to replace the _.filter method without looping through all the keys in data.
Is there a better way?!
Edit
The IDs are always unique.
I can't show any real data as it is sensitive but the structure is
data = {
    1: {id: 1, name: 'data1', date: 20/1/2016},
    2: {id: 2, name: 'data2', date: 21/1/2016},
    3: {...}
}
and I need to do something like:
var recentData = _.filter(data, function(d){ return d.date > 1/1/2016; });
To get an array of data or ids.
(n.b. the dates are all in epoch times)
This is really an optimization question, rather than simply which function to use.
One way to go about this would be to rely on the sort order of the whole collection. If it's already sorted, you can use something like binary search to find the border elements of your date range and then slice everything from that point onward (side note: an array would probably work better for this than an object).
If the array is not sorted, you could also consider sorting it first on your end - but that only makes sense if you need to retrieve such information several times from the same dataset.
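A rough sketch of that binary-search idea, assuming sortedData is an array sorted ascending by date (epoch times) and cutoffEpoch is your threshold:
// returns the index of the first element whose date >= cutoff (lower bound)
function lowerBound(sorted, cutoff) {
    var lo = 0, hi = sorted.length;
    while (lo < hi) {
        var mid = (lo + hi) >> 1;
        if (sorted[mid].date < cutoff) { lo = mid + 1; } else { hi = mid; }
    }
    return lo;
}

var recentData = sortedData.slice(lowerBound(sortedData, cutoffEpoch));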
But if all you've got is just the data, unsorted, and you need to pick all elements starting from a certain date, there is no other way than to iterate through it all with something like _.filter().
Alternatively, you could go back to the source of your data and check whether you can improve things there - if you're using some kind of API, maybe there are extra params for sorting or narrowing down the date selection (generally speaking, database engines are really efficient at what you're trying to do)? Or if you're using a huge static JSON as the source, maybe consider adding sort order to that source object?
Just some ideas. Hard to give you the best resolution without knowing all the story behind what you're trying to do.

Parse.com relations count

I want to query objects from the Parse DB through JavaScript that have exactly 1 of some specific related object. How can this criterion be achieved?
So I tried something like this; the equalTo() acts as a "contains", which is not what I'm looking for. My code so far, which doesn't work:
var query = new Parse.Query("Item");
query.equalTo("relatedItems", someItem);
query.lessThan("relatedItems", 2);
It seems Parse does not provide an easy way to do this.
Without any other fields, if you know all the items then you could do the following:
var innerQuery = new Parse.Query('Item');
innerQuery.containedIn('relatedItems', [all items except someItem]);
var query = new Parse.Query('Item');
query.equalTo('relatedItems', someItem);
query.doesNotMatchKeyInQuery('objectId', 'objectId', innerQuery);
...
Otherwise, you might need to get all records and do filtering.
Update
Because of the relation data type, there is no way to include the relation content in the results; you need to do another query to get the relation content.
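For instance, a small sketch of that second query, assuming item is an Item object you already fetched:
// fetch the contents of the 'relatedItems' relation in a separate query
item.relation('relatedItems').query().find().then(function (relatedItems) {
    // relatedItems is an array of Parse.Object instances
});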
A workaround might be to add an itemCount column, keep it updated whenever the item relation is modified, and then do:
query.equalTo('relatedItems', someItem);
query.equalTo('itemCount', 1);
There are a couple of ways you could do this.
I'm working on a project now where I have cells composed of users.
I currently have an afterSave trigger that does this:
const count = await cell.relation("members").query().count();
cell.set("memberCount", count);
This works pretty well.
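For context, a minimal sketch of how such a trigger might be wired up in Cloud Code (the Cell class name comes from my project; the guard keeps the afterSave from re-triggering itself endlessly, and your ACLs may require the master-key options):
Parse.Cloud.afterSave('Cell', async (request) => {
    const cell = request.object;
    const count = await cell.relation('members').query().count({ useMasterKey: true });
    // only save when the counter actually changed, so this trigger doesn't loop forever
    if (cell.get('memberCount') !== count) {
        cell.set('memberCount', count);
        await cell.save(null, { useMasterKey: true });
    }
});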
There are other ways that I've considered in theory, but I've not used them yet.
The right way would be to hack the ability to use select with dot notation to grab a virtual field called relatedItems.length in the query, but that would probably only work for me because I use Postgres ... Mongo seems to be extremely limited in its ability to do this sort of thing, which is why I would never make a database out of blobs of JSON in the first place.
You could do a similar thing with an afterFind trigger. I'm experimenting with that now. I'm not sure if it will confuse Parse to get an attribute back which does not exist in its schema, but I'll find out by the end of today. I have found that if I jam an artificial attribute into the objects in the trigger, they are returned along with the other data. What I'm not sure about is whether Parse will decide that the object is dirty, or, worse, decide that I'm creating a new attribute and store it to the database ... which could be filtered out with a beforeSave trigger, but not until after the data had all been sent to the cloud.
There is also a place where I had to do several queries from several tables, and would have ended up with a lot of redundant data. So I wrote a cloud function which did the queries and then returned a couple of lists of objects, plus a few lists of objectId strings which served as indexes. This worked pretty well for me. And tracking the last load time and sending it back when I needed to update my data allowed me to limit myself to objects which had changed since my last query.
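A rough sketch of that kind of cloud function, with hypothetical class and parameter names (syncLibrary, Item, Owner and lastLoad are all made up for illustration):
Parse.Cloud.define('syncLibrary', async (request) => {
    // only fetch objects changed since the client's last load, if it sent one
    const since = new Date(request.params.lastLoad || 0);
    const itemQuery = new Parse.Query('Item').greaterThan('updatedAt', since);
    const ownerQuery = new Parse.Query('Owner').greaterThan('updatedAt', since);
    const [items, owners] = await Promise.all([itemQuery.find(), ownerQuery.find()]);
    return {
        items: items,
        owners: owners,
        itemIds: items.map(function (o) { return o.id; }), // objectId strings used as an index
        loadedAt: new Date().toISOString()
    };
});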

Querying for User Story detail using array of ObjectIDs

I have been using the Lookback API to query for user stories from the Rally environment. While the querying functionality is stronger than the WsapiDataStore's, by allowing me to query using the RPM hierarchy, it does not seem to return full field values such as Owner and Project; instead, the OIDs for these fields are returned. To work around this, my idea was to first do a Lookback API query to get all the story OIDs within the RPM hierarchy I am concerned with, capture those OIDs in an array, and then use a WsapiDataStore query to get the detailed info for the stories matching the OIDs in the array. When using the Lookback API, I have the option to use the 'in' operator, so the query would look like this:
{
property: 'ObjectID',
operator: 'in',
value: [ '71352862', '44523976', '61138496' ]
}
I can't use this functionality in the WsapiDataStore, however. Also, when I try to 'OR' them all together in one long query string, I get an error about an invalid request. I assume the query string is too long, since in most cases I am searching for about 1000 User Stories. I would prefer not to make a separate query for each OID, but right now that seems like the only solution. Is there a way to get full details from the Lookback API, or at least filter using an array in the WsapiDataStore query?
ObjectID now supports the in operator.
From the WSAPI docs:
Here is an example usage: (ObjectID in 1,2,3)
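For example, joining the OIDs collected from the Lookback API into that syntax is just string building (a sketch; how you hand the resulting string to your WsapiDataStore filter config depends on your SDK version):
var oids = ['71352862', '44523976', '61138496'];
var queryString = '(ObjectID in ' + oids.join(',') + ')';
// use queryString as the query/filter when configuring the WSAPI store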

How to handle indices in Neo4j server via javascript REST client?

I have data in a standalone Neo4j REST server, including an index of nodes. I want a pure JavaScript client, running on Node.js, to connect to Neo4j and serve the formatted data to d3.js, a visualisation library.
JugglingDB is very popular, but the Neo4j implementation was done "wrong": https://github.com/1602/jugglingdb/issues/56
The next most popular option on github is: https://github.com/thingdom/node-neo4j
looking at the method definitions https://github.com/thingdom/node-neo4j/blob/develop/lib/GraphDatabase._coffee
I'm able to use "getNodeById: (id, _) ->"
> node1 = db.getNodeById(12, callback);
returns the output from the REST server, including node properties. Awesome.
I can't figure out how to use "getIndexedNodes: (index, property, value, _) ->"
> indexedNodes = db.getIndexedNodes:(index1, username, Homer, callback);
...
indexedNodes don't get defined. I've tried a few different combinations. No joy. How do I use this command?
Also, getIndexedNodes() requires a key-value pair. Is there any way to get all, or a subset of the items in the index without looping?
One of the authors/maintainers of node-neo4j here. =)
indexedNodes don't get defined. I've tried a few different combinations. No joy. How do I use this command?
Your example seems to have some syntax errors. Are index1, username and Homer variables defined elsewhere? Assuming not, i.e. assuming those are the actual index name, property name and value, they need to be quoted as string literals, e.g. 'index1', 'username' and 'Homer'. But you also have a colon right before the opening parenthesis that shouldn't be there. (That's what's causing the Node.js REPL to not understand your command.)
Then, note that indexedNodes should be undefined -- getIndexedNodes(), like most Node.js APIs, is asynchronous, so its return value is undefined. Hence the callback parameter.
You can see an example of how getIndexedNodes() is used in the sample node-neo4j-template app the README references:
https://github.com/aseemk/node-neo4j-template/blob/2012-03-01/models/user.js#L149-L160
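Putting that together, the call from your example would look something like this sketch (quoting the index name, property name and value as string literals, and relying on the callback rather than the return value):
db.getIndexedNodes('index1', 'username', 'Homer', function (err, nodes) {
    if (err) throw err;
    // nodes is an array of the matching node objects
});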
Also, getIndexedNodes() requires a key-value pair. Is there any way to get all, or a subset of the items in the index without looping?
getIndexedNodes() does return all matching nodes, so there's no looping required. Getting a subset isn't supported by Neo4j's REST API directly, but you can achieve the result with Cypher.
E.g. to return the 6th-15th user (assuming they have a type property set to user) sorted alphabetically by username:
db.query([
    'START node=node:index1(type="user")',
    'RETURN node ORDER BY node.username',
    'SKIP 5 LIMIT 10'
].join('\n'), callback);
Cypher is still rapidly evolving, though, so be sure to reference the documentation that matches the Neo4j version you're using.
As mentioned above, in general, take a look at the sample node-neo4j-template app. It covers a breadth of features that the library exposes and that a typical app would need.
Hope this helps. =)
Neo4j 2 lets you manage indices via REST. Docs here: REST Indices

Lucene-like searching through JSON objects in JavaScript

I have a pretty big array of JSON objects (it's a music library with properties like artist, album, etc., feeding a jqgrid with loadonce=true) and I want to implement Lucene-like (Google-like) querying through the whole set - but locally, i.e. in the browser, without communication with a web server. Are there any JavaScript frameworks that will help me?
Go through your records once to create an index, by combining all searchable fields into a single string field called index.
Store these indexed records in an Array.
Partition the Array on index, e.g. all the a's in one array and so on.
Use the JavaScript indexOf() function against the index field to match the query entered by the user and find records in the partitioned Array.
That was the easy part, but it will support all simple queries in a very efficient manner, because the index does not have to be re-created for every query and the indexOf operation is very efficient. I have used it for searching up to 2000 records, with a pre-sorted Array. Actually, that's how Gmail and Yahoo Mail work: they store your contacts in the browser in a pre-sorted array with an index that lets you see matching contact names as you type.
This also gives you a base to build on. Now you can write advanced query-parsing logic on top of it. For example, supporting a few simple conditional keywords like AND, OR and NOT takes about 20-30 lines of custom JavaScript code. Or you can find a JS library that will do the parsing for you the way Lucene does.
For a reference implementation of the above logic, take a look at how ZmContactList.js sorts and searches the contacts for autocomplete.
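As a rough illustration of the one-time index plus indexOf() approach described above (the library array and its artist/album/title fields are made up for this sketch, and the partitioning step is left out for brevity):
// build the index once, up front
var indexed = library.map(function (track) {
    return {
        record: track,
        index: (track.artist + ' ' + track.album + ' ' + track.title).toLowerCase()
    };
});

// then each query is just a substring scan over the index strings
function search(query) {
    var q = query.toLowerCase();
    return indexed
        .filter(function (entry) { return entry.index.indexOf(q) !== -1; })
        .map(function (entry) { return entry.record; });
}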
You might want to check FullProof, it does exactly that:
https://github.com/reyesr/fullproof
Have you tried CouchDB?
Edit:
How about something along these lines (also see http://jsfiddle.net/7tV3A/1/):
var filtered_collection = [];
var query = 'foo';

$.each(collection, function(i, e){
    $.each(e, function(ii, el){
        if (el == query) {
            filtered_collection.push(e);
        }
    });
});
The (el == query) part could, of course, be modified to allow more flexible search patterns than an exact match.
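For instance, a case-insensitive substring test could replace the strict equality check (a small sketch):
// match any property value that merely contains the query text
if (String(el).toLowerCase().indexOf(query.toLowerCase()) !== -1) {
    filtered_collection.push(e);
}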
