I was writing a DB test (run by Mocha), calling findOne after saving a document, and found some weird behavior.
pseudocode here:

...
// test1
A.save(check('A', done));
// test2
B.save(check('B', done));
// test3
C.save(check('C', done));
...

check = function (name, done) {
  theModel.findOne({ name: name }, function (err, result) {
    assert.notEqual(result, null);
    result.remove(done);
  });
};
The A test passes, BUT the B test doesn't. I checked the logs and found something curious:
For the first test, an insert is performed, then a query, then a remove (OK, that's the expected behaviour).
After the first test, I was shocked to see that the query was performed BEFORE the insert (so the test failed and nothing got removed).
The third test and onwards behave the same way: the query is performed before the insert :(
So the only test that passes is the first one (if I swap A and B, then B passes and A doesn't). If I look at the MongoDB collection, I can see the other inserts, which were performed after the query (and since the assert failed, they weren't removed).
I'm using Mongoose 2.7.2 (I was on a previous version and just updated to see if it was a fixed bug). Help :(
I was wrong...
I didn't notice that I was actually calling check before calling save, since I was invoking the function inside the argument list instead of passing it as a callback (I'm new to JavaScript).
OK, so I should perform the check another way:
A.save(function (err, result) { check('A', done); });
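For completeness, a minimal sketch of one corrected test (the test name and error handling are illustrative):

it('test1', function (done) {
  A.save(function (err) {
    if (err) return done(err);
    check('A', done); // runs only after the insert has completed
  });
});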
My apologies!
I am having difficulty distinguishing between a PFUser being updated and one being saved for the first time. I implemented the solution found on parse.com here. However, it is not working as explained in the answer found at the link: I always get false, regardless of whether it is a creation or an update. Here is the code I have implemented:
Parse.Cloud.afterSave(Parse.User, function(request) {
  Parse.Cloud.useMasterKey();
  // This log prints false regardless of whether the user is signing up
  // or having a key updated.
  console.error(request.object.existed());
  if (!request.object.existed()) {
    // This is on sign-up; the user has not existed prior.
    // This doesn't work: this block of code is always run.
  } else {
    // This is supposedly when the user is having a key updated.
    // Unfortunately, this code is never run.
  }
});
This code does not behave as I expect because request.object.existed() always returns false. I think this may have to do with the fact that this is saving a PFUser and not a generic PFObject. Is there something I am missing about using request.object.existed() on a PFUser in afterSave? Thanks.
This seems to be an unresolved Parse Cloud bug.
A possible workaround is to check the difference between the create and update times to see whether your object is being updated or created.
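A minimal sketch of that workaround (the one-second tolerance is an arbitrary assumption; tune it to your needs):

Parse.Cloud.afterSave(Parse.User, function(request) {
  var user = request.object;
  // On the very first save, createdAt and updatedAt are (almost) identical.
  var isNew = user.updatedAt.getTime() - user.createdAt.getTime() < 1000;
  if (isNew) {
    // First save: the user just signed up.
  } else {
    // Subsequent save: an existing user was updated.
  }
});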
I am trying to create a game with Meteor. Since many people told me to use MongoDB (because it's vanilla, fast, and reactive), I realised that I would need to "listen" for MongoDB updates in order to respond to the received data and make changes to the DOM.
Can I use a Meteor Tracker like this:
var handle = Tracker.autorun(function () {
  handleEvent(
    collection.find({}, { sort: { $natural: 1 }, limit: 1 }) // find the last element in the collection
  );
});
What you are looking for is the observe and observeChanges functions of cursors. See here: http://docs.meteor.com/#/full/observe
You could use the Tracker, but I think observing the cursor is more scalable.
Since in your example you seem to be interested only in responding to the most recently added object, here is a skeleton of how you could do that with observeChanges:
var cursor = Collection.find();
cursor.observeChanges({
  added: function (id, object) {
    // This code runs whenever a new object "object" is added to the collection.
  }
});
cursor.observe() seems to be exactly what I was looking for.
My resolution looks like this:
collection.find({}).observe({
  addedAt: function (document, atIndex, before) {
    handleEvent(document);
  }
});
The only "problem" I noticed is that, while testing, the event seemed to fire twice (but that will probably go into another thread sometime soon).
(This is, I think, because of latency compensation: the object gets inserted into the client-side db, then the server executes its method and sends the updated collection to the client, where the "added" event is triggered again. Right?)
I think Tracker.autorun only works on the client, so your code can run on the client but not on the server. If you want to update your client-side collection, maybe you can autorun the subscription instead.
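Something like this minimal sketch (the publication name 'latestEvents' is hypothetical and would have to exist on the server):

// Client-side: re-runs whenever the subscription's reactive inputs change.
Tracker.autorun(function () {
  Meteor.subscribe('latestEvents'); // hypothetical publication name
});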
friendID = 1234;
$route.when(
  "/friends/:friendID/raw",
  {
    event: "friends.view"
  }
);
When I run the above, Chrome Dev Tools shows that the URL tried is http://domain.com/friends/raw?0=1&1=2&2=3&3=4.
Is there a way to actually get it to run as http://domain.com/friends/1234/raw?
It's difficult to tell what's going on here with only this snippet. Can you post the values of your JSON object? At first glance, it appears your object is passing multiple values. You might try setting its value to 1234 and seeing if it passes correctly.
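As a quick sanity check, you could build the concrete path yourself and compare it with what the router generates (a sketch; names are illustrative):

var friendID = 1234;
var path = "/friends/" + friendID + "/raw"; // "/friends/1234/raw"
console.log(path);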
I'm having an odd error where a Backbone where() call (under Titanium Alloy, which is largely homogeneous with Backbone) returns empty while the fetch method returns a list of models. I've checked over and over again, and I tried putting the where() call in the success callback of the fetch method, but it STILL gives this inexplicable result:
Alloy.Collections.favorites.fetch({
  success: function (collection) {
    console.log(JSON.stringify(collection));
    console.log(self.get('id'));
    var favorite = collection.where({
      jobId: self.get('id')
    });
    console.log(JSON.stringify(favorite));
  }
});
The above output is:
[{"jobId":5162179,"dateAdded":1414590144,"candidateId":99,"id":19},{"jobId":5161302,"dateAdded":1414588983,"candidateId":99,"id":17},{"jobId":5161437,"dateAdded":1414588785,"candidateId":99,"id":16}]
5161437
[]
How can this happen? How could somebody reproduce it? Is the collection locked by something, or is it a bug within Titanium Alloy? This process is part of a data binding on a view (view A), and this exact code works in a different place where the only difference is that view A is not directly influenced by changes in the collection.
Any help? Is this even possible with Backbone? I can't get my head around it.
APPARENTLY the where() function strictly compares the two values (=== operator), and the id I gave it was a string while the id within the collection was an integer. Too bad the Backbone documentation doesn't state this.
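So the fix is simply to make the types match before calling where(), for example:

var favorite = collection.where({
  // Coerce the string id to a number so the strict (===) comparison succeeds.
  jobId: parseInt(self.get('id'), 10)
});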
I am having trouble getting my head around how I can use Sinon to mock a call to Postgres that is required by the module I am testing, or whether it is even possible.
I am not trying to test the Postgres module itself, just my object, to ensure it is working as expected and calling what it should be calling in this instance.
I guess the issue is Node's require setup: my module requires the Postgres module to hit the database, but here I don't want to run an integration test; I just want to make sure my code works in isolation, without caring what the database is doing. I will leave that to my integration tests.
I have seen some people setting up their functions with an optional parameter for the mock/stub/fake, testing for its existence, and using it instead of the required module, but that seems like a smell to me (I am new to Node, so maybe it isn't).
I would prefer to mock this out rather than try to hijack require, if that is possible.
Some code (please note this is not the real code, as I am working TDD-style and the function doesn't really do anything yet; the function names are real):
TEST SETUP
describe('#execute', function () {
  it('should return data rows when executing a select', function () {
    // Not sure what to do here
  });
});
SAMPLE FUNCTION
PostgresqlProvider.prototype.execute = function (query, cb) {
  var self = this;
  // Return after each error so we don't fall through and call cb twice.
  if (self.connection === "")
    return cb(new Error('Connection can not be empty, set Connection using Init function'));
  if (query === null)
    return cb(new Error('Invalid Query Object - Query Object is Null'));
  if (!query.buildCommand)
    return cb(new Error("Invalid Query Object"));
  // Valid connection and query
};
It might look a bit odd to wrap the Postgres module like this, but there is a design reason: this app will have several "providers", and I want to expose the same API for all of them so I can use them interchangeably.
UPDATE
I decided that my test was too complicated: I was checking whether the connect call had been made AND that data was returned, which smelt wrong to me, so I stripped it back and split it into two tests.
The Mock Test
it('should call pg.connect when a valid Query object is passed', function () {
  var mockPg = sinon.mock(pg);
  mockPg.expects('connect').once();
  Provider.init('ConnectionString');
  Provider.execute(stubQueryWithBuildFunc, null, mockPg);
  mockPg.verify();
});
This works (I think): without the Postgres connector code it fails, and with it, it passes (boom :)).
The issue now is with the second method, for which I am going to use a stub (maybe a spy); it is passing 100% of the time when it should fail, so I will pick that up in the morning.
Update 2
I am not 100% happy with the test, mainly because I am not hijacking the client.query method, which is the one that hits the database; I am simply stubbing my own execute method and forcing it down a path. But it allows me to see the result and assert against it to test behaviour, so I would be open to any suggested improvements.
I am using a stub to catch the method and return null plus a faux object containing rows, like the real method would pass back. This test will change as I add more Query behaviour, but it gets me over my hurdle.
it('should return data rows when a valid Query object is passed', function () {
  var fauxRows = [
    { 'id': 1000, 'name': 'Some Company A' },
    { 'id': 1001, 'name': 'Some Company B' }
  ];
  var stubPg = sinon.stub(Provider, 'execute').callsArgWith(1, null, fauxRows);
  Provider.init('ConnectionString');
  Provider.execute(stubQueryWithBuildFunc, function (err, rows) {
    rows.should.have.length(2);
  }, stubPg);
  stubPg.called.should.equal(true);
  stubPg.restore();
});
Use pg-pool: https://www.npmjs.com/package/pg-pool
It's about to be added to pg anyway and purportedly makes unit testing (mocking) easier. From BrianC (https://github.com/brianc/node-postgres/issues/1056#issuecomment-227325045):
Checkout https://github.com/brianc/node-pg-pool - it's going to be the pool implementation in node-postgres very soon and doesn't rely on singletons which makes mocking much easier. Hopefully that helps!
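To illustrate why a non-singleton pool helps, here is a rough sketch (the constructor injection and the buildCommand call are assumptions based on the question's code, not a definitive implementation):

var Pool = require('pg-pool');

// The provider receives the pool instead of requiring pg directly,
// so a test can hand it a fake object with the same query signature.
function PostgresqlProvider(pool) {
  this.pool = pool;
}

PostgresqlProvider.prototype.execute = function (query, cb) {
  this.pool.query(query.buildCommand(), cb);
};

// In a unit test, no database is needed:
var fakePool = { query: sinon.stub().callsArgWith(1, null, { rows: [] }) };
var provider = new PostgresqlProvider(fakePool);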
I very explicitly replace my dependencies. It's probably not the best solution, but none of the other solutions I saw were that great either.
inject: function (_mock) {
  if (_mock) { real = _mock; }
}
You add this code to the module under test. In my tests I call the inject method and replace the real object. The reason I don't 100% like it is that you have to add extra code purely for testing.
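In context, the pattern looks roughly like this (a sketch; the module layout and names are illustrative):

// Module under test: keeps a private, swappable reference to the dependency.
var real = require('pg');

module.exports = {
  // Test hook: replace the real dependency with a mock.
  inject: function (_mock) {
    if (_mock) { real = _mock; }
  },
  execute: function (query, cb) {
    real.connect('ConnectionString', function (err, client, done) {
      // ... use the client here, then call cb
    });
  }
};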
The other solution is to read the module file as a string and use vm to load the file manually. When I investigated this, I found it a little too complex, so I went with just using the inject function. It's probably worth investigating that approach, though. You can find more information here.