Possible bug when querying using the 'take' method in Breeze - javascript

I am using Breeze to query a table of customers. I have to implement a very complex query, so I've decided to pass a parameter to the method and let the server build the query. The problem is that when I use Breeze's take method, the list of customers delivered to the client is in a different order than the list the server returned.
I have run some tests, and the order only changes when I use Breeze's take method. Here is a bit of my code on the client and on the server:
//CLIENT
function (searchText, resultArrayObservable, inlineCountObservable) {
    var query = new breeze.EntityQuery("CustomersTextSearch")
        .skip(0)
        .take(30)
        .inlineCount(true)
        .withParameters({ '': searchText });
    return manager.executeQuery(query).then(function (data) {
        // The data results are not in the same order as the server returns.
        inlineCountObservable(data.inlineCount);
        resultArrayObservable(customerDto.mapToCustomerDtos(data.results));
    });
}
//SERVER ASP.NET WEB API
[HttpGet]
public IQueryable<Customer> CustomersTextSearch(string textSearch = "")
{
    // Here, customers has the right order.
    var customers = _breezeMmUow.Customers.GetBySearchText(textSearch, CentreId);
    return customers;
}
Maybe it is not a bug and I am doing something incorrect. Can somebody help me?
-------------EDIT-------------
1.3.2
Fix for Breeze/EF bug involving a single query with "expand", "orderBy", and "take" performing incorrect ordering.
I found on the Breeze release notes page that this problem was supposedly fixed, but I have the latest version and take still does not order correctly.

This was a bug, and has been fixed in Breeze v 1.3.6, available now.
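If upgrading is not an option right away, a possible workaround (only a sketch, reusing the query from the question) is to make the ordering explicit on the client, so the result order no longer depends on how take is translated on the server. "Name" below is an assumed Customer property, not something from the original code:
// Sketch of a workaround: order explicitly so paging is deterministic.
// "Name" is an assumed Customer property; use whatever field
// GetBySearchText actually orders by on the server.
var query = new breeze.EntityQuery("CustomersTextSearch")
    .withParameters({ '': searchText })
    .orderBy("Name")
    .skip(0)
    .take(30)
    .inlineCount(true);

manager.executeQuery(query).then(function (data) {
    // data.results should now come back in the requested order
});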

Related

MeteorJS - No user system, how to filter data at the client end?

The title might sound strange, but I have a website that will query some data in a Mongo collection. However, there is no user system (no logins, etc.); everyone is an anonymous user.
The issue is that I need to query the Mongo collection based on the text boxes the user fills in. Hence I cannot use this.userId to insert a row of specifications that the server end then reads and uses to send the right data to the client.
Hence:
// Code run on the server
if (Meteor.isServer) {
    Meteor.publish("comments", function () {
        return comments.find();
    });
}
// Code run on the client
if (Meteor.isClient) {
    Template.body.helpers({
        comments: function () {
            // Add code here to filter out the data we don't want
            return comments.find();
        }
    });
}
It seems possible to filter the data on the client based on the user's input. However, if I use return comments.find(), the server will send a lot of data to the client, and the client then has to do the job of cleaning it up.
There shouldn't be much data (10,000 rows), but let's assume there are a million rows; what should I do?
I'm very new to MeteorJS, just completed the tutorial, any advice is appreciated!
My advice is to read the docs, in particular the section on Publish and Subscribe.
By changing the signature of your publish function above to one that takes an argument, you can filter the collection on the server and limit the data transferred to what is required.
Meteor.publish("comments", function (postId) {
    return comments.find({post_id: postId});
});
Then on the client you will need a subscribe call that passes a value for the argument.
Meteor.subscribe("comments", postId)
Ensure you have removed the autopublish package, or it will ignore this filtering.
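Applied to the text-box scenario in the question, the wiring might look roughly like the sketch below. The "searchText" Session key and the "text" field name are assumptions for illustration (not from the original code), and Tracker.autorun is Deps.autorun in older Meteor releases:
// Client: re-subscribe whenever the search text changes.
// Wire the "searchText" Session key to your input's change/keyup event.
if (Meteor.isClient) {
    Tracker.autorun(function () {
        Meteor.subscribe("comments", Session.get("searchText"));
    });
}

// Server: publish only the documents that match the search text.
if (Meteor.isServer) {
    Meteor.publish("comments", function (searchText) {
        if (!searchText) {
            return this.ready();   // nothing to publish until the user types
        }
        return comments.find({ text: searchText });
    });
}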

WebAPI using BreezeJS throws an error as soon as I use $skip

I have a working WebAPI (v2) which utilizes the awesome BreezeJS product. I am attempting to add paging capabilities, but as soon as I include $skip in the URL as a parameter, the WebAPI generates this error:
{
    $id: "1",
    $type: "System.Web.Http.HttpError, System.Web.Http",
    Message: "An error has occurred."
}
Debugging the API does not give me any additional information, since it doesn't crash.
The parameters I'm passing are: http://www.example.com/api/Test/Designs?$skip=5&$top=5&$inlinecount=allpages&
If I call it without the $skip parameter, it works fine. The other "$" params seem to work just fine, as I can call:
http://www.example.com/api/Test/Designs?$top=3
and it works as expected.
I have verified that I'm not using any BreezeQueryable attributes or anything, so $skip should be allowed.
Additional setup info if it helps:
SQL Server Express v2012
Breeze on the server side is v1.5.0.0
Entity Framework v6
Microsoft.Data.OData is v5.6
Is there something else I need to have enabled in order to utilize paging? Or is there a way I can find the true cause of this error? I can provide a working URL if requested.
Thank you.
A sort is required in order to use skip.
From the breeze docs:
// Skip the first 10 Products and return the rest
// Note that the '.orderBy' clause is necessary to use '.skip'
// This is required by many server-side data service implementations
var query3 = EntityQuery.from('Products')
.orderBy('ProductName')
.skip(10);
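Applied to the URL in the question, the same fix might look like the sketch below; "Designs" comes from the question's URL, while "Name" is an assumed property to sort on (any stable key works):
// Sketch: add an explicit orderBy so $skip is accepted.
var query = breeze.EntityQuery.from('Designs')
    .orderBy('Name')       // assumed sort property
    .skip(5)
    .take(5)
    .inlineCount(true);
// Equivalent raw URL:
// /api/Test/Designs?$orderby=Name&$skip=5&$top=5&$inlinecount=allpages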

Ruby MongoDB Driver Client Side Query Matcher

I'm building a system in Ruby using WebSockets that will notify JS clients of changes to collections that are applicable to the models & collections the JS client is viewing. I would like to have the JS client periodically send registration messages to the WebSocket telling it what models it is currently viewing, and also the collections (or collection subset specified by query).
So in order to make this work, the API hosting the WebSocket server will need to test if a query matches a document that has been updated/created. I would like to do this without sending a query to Mongo, and I found a solution in the C driver that would work on the (mongo) client side: http://api.mongodb.org/c/current/mongoc_matcher_new.html
http://api.mongodb.org/c/current/matcher.html
Unfortunately I didn't see a way of calling this method through the Ruby drivers. Any clue how I might be able to use the mongoc_matcher_new function in Ruby? Or does anyone have a better suggestion to improve the architecture of this solution to only send applicable updates to JS clients?
I don't think that you're going to be able to do it without querying back to Mongo. The standard way of doing this is by tailing the oplog, but the oplog is only going to give you the database/collection and the _id. So I don't think you would be able to support an arbitrary query with just the oplog; you would need to fetch the document in order to determine a match.
I would suggest that you take a look at how Meteor does this. This blog post gives an overview of their approach. There is also a wiki page with more specifics on OplogObserveDriver.
I eventually used FFI to run the code I needed:
MongoC.test_query_match(document_string, json_query_string)
C code:
#include <bcon.h>
#include <mongoc.h>
#include <stdio.h>
#include <string.h>

int test_query_match(char *document, char *query) {
    mongoc_matcher_t *matcher;
    bson_t *d;
    bson_t *q;
    bson_error_t doc_parse_error;
    bson_error_t query_parse_error;
    bson_error_t matcher_error;

    // Parse the JSON document and the JSON query into BSON.
    d = bson_new_from_json((const uint8_t *) document, strlen(document), &doc_parse_error);
    q = bson_new_from_json((const uint8_t *) query, strlen(query), &query_parse_error);

    // Build a client-side matcher from the query.
    matcher = mongoc_matcher_new(q, &matcher_error);
    if (!matcher) {
        bson_destroy(q);
        bson_destroy(d);
        return 0;
    }

    // Evaluate the query against the document.
    int match = mongoc_matcher_match(matcher, d);

    bson_destroy(q);
    bson_destroy(d);
    mongoc_matcher_destroy(matcher);
    return match;
}
The Ruby FFI code:
require 'ffi'

module MongoC
  extend FFI::Library
  ffi_lib 'c'
  ffi_lib File.dirname(__FILE__) + '/mongoc/mongoc.so'
  attach_function :test_query_match, [:string, :string], :int
end

Best practices for dealing with ObjectId with mongo and Javascript

I am developing an app with Mongo, Node.js and Angular.
Every time an object is delivered to and handled in the front end, all ObjectIds are converted to strings (this happens automatically when I send it as JSON). But when I save objects back into Mongo, I need to convert _id, and any other manual references to other collections, back to ObjectID objects. If I want to cleanly separate the database layer from the rest of my backend, it becomes even messier. Let's assume my database layer has the following signature:
database.getItem(itemId, callback)
I want my backend business logic to treat itemId as an opaque type (i.e. no require'ing mongo or knowing anything about ObjectID outside of this database layer), yet at the same time I want to be able to take the result of this function and send it directly to the front end with Express:
exports.getItem = function (req, res) {
    database.getItem(req.params.id, function (err, item) {
        res.json(item);
    });
};
What I end up doing now is:
exports.getItem = function (itemId, callback) {
    if (typeof itemId == 'string') {
        itemId = new ObjectID(itemId);
    }
    var query = {_id: itemId};
    items.findOne(query, callback);
};
This way it can handle both calls that come from within the backend, where the itemId reference might come from another object and thus already be in the right binary format, and requests whose itemId is a string.
As I mentioned above, saving an object that came from the front end and contains many manual references to other collections is even more painful, since I need to walk the object and change all the id strings to ObjectIds.
This all feels very wrong; there must be a better way to do it. What is it?
Thanks!
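For the manual-reference pain point described above, one pattern (only a sketch; the helper and field names are hypothetical and not from the question) is to centralize the string-to-ObjectID conversion in the database layer, so the business code never touches the driver:
// Hypothetical helper kept inside the database layer, so ObjectID never
// leaks into the business code. Field names below are only examples.
var ObjectID = require('mongodb').ObjectID;

function toObjectId(value) {
    // Accept either a hex string (from the front end) or an existing ObjectID.
    return (typeof value === 'string') ? new ObjectID(value) : value;
}

function normalizeIds(doc, idFields) {
    idFields.forEach(function (field) {
        if (doc[field] != null) {
            doc[field] = toObjectId(doc[field]);
        }
    });
    return doc;
}

// Example use before a save:
// items.save(normalizeIds(item, ['_id', 'ownerId', 'categoryId']), callback);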

Struggling to build a JS/PHP validation function for my app

I have a web service that returns a JSON object when the web service is queried and a match is found. An example of a successful return is below:
{"terms":[{"term":{"termName":"Focus Puller","definition":"A focus puller or 1st assistant camera..."}}]}
If the query does not produce a match it returns:
Errant query: SELECT termName, definition FROM terms WHERE termID = xxx
Now, when I access this through my Win 8 Metro app, I parse the JSON object using the following code to get a JS object:
var searchTerm = JSON.parse(Result.responseText)
I then have code that processes searchTerm and binds the returned values to the app page controls. If I enter a query that finds a match in the DB, everything works great.
What I can't work out is a way of validating a bad query. I want to test the value returned by var searchTerm = JSON.parse(Result.responseText) and continue doing what I'm doing now if the result is successful, but handle the result differently on failure. What check should I make to test this? I am happy to implement additional validation either in my app or in the web service; any advice is appreciated.
Thanks!
There are a couple of different ways to approach this.
One approach would be to utilize the HTTP response headers to relay information about the query (i.e. HTTP 200 status for a found record, 404 for a record that is not found, 400 for a bad request, etc.). You could then inspect the response code to determine what you need to do. The pro of this approach is that this would not require any change to the response message format. The con might be that you then have to modify the headers being returned. This is more typical of the approach used with true RESTful services.
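On the client, the check for this approach might look like the sketch below; it assumes Result exposes a status property alongside the responseText the question already uses:
// Sketch: branch on the HTTP status before parsing.
if (Result.status === 200) {
    var searchTerm = JSON.parse(Result.responseText);
    // bind searchTerm to the page controls as before
} else if (Result.status === 404) {
    // no matching term was found
} else {
    // the request was malformed or the service failed
}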
Another approach might be to return success/error messaging as part of the structured JSON response, such that your JSON might look like:
{
    "result": "found",
    "message": {
        "terms": [{"term": {"termName": "Focus Puller", "definition": "A focus puller or 1st assistant camera..."}}]
    }
}
You could obviously change the value of result in the data to return an error and place the error message in message.
The pros here are that you don't have to worry about header modification and that your returned data will always be parseable via JSON.parse(). The con is that you now have extra verbosity in your response messaging.
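With the structured response, the client-side check might look like this sketch, using the result and message fields from the example above:
// Sketch: inspect the 'result' field before using the payload.
var response = JSON.parse(Result.responseText);
if (response.result === "found") {
    var searchTerm = response.message;
    // bind searchTerm.terms to the page controls as before
} else {
    // handle the error, e.g. surface response.message to the user
}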