When running my unit tests, sometimes everything is OK and sometimes some random test(s) fail, for one controller only, for a reason I can't see.
The failures report something like:
Expected spy exec to have been called with [ Object({}) ] but actual calls were [ Object({}) ]
Even using compare tools, I really can't find any difference between the expected and actual calls.
What separates this controller from others is that it contains recursion.
It contains a data-source that returns an array and for every item in that array asynchronous code is executed, something like:
var array = [{id: 1}, {id: 2}, {id: 3}];

// first check the entire array,
// then process the entire array,
// then do something after the entire array is processed.
checkArray(0, array, object).then(function () {
    processArray(0, array, object).then(function () {
        doSomething(object);
    });
});

function checkArray(index, array, object) {
    return $q(function (resolve) {
        var record = array[index];
        // object is altered in doSomeStuff
        doSomeStuff(record, object).then(function () {
            if (++index !== array.length) {
                return resolve(checkArray(index, array, object));
            } else {
                return resolve(true);
            }
        });
    });
}
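For reference, the same sequential walk can be expressed without recursion by reducing the array into a single promise chain. This is only a sketch: it uses native Promises in place of AngularJS's $q (in the controller you would seed the chain with $q.resolve() instead), and doSomeStuff is passed in here as a placeholder for the real per-record work.

```javascript
// Hypothetical sketch: walk the array sequentially without recursion by
// reducing it into one promise chain (native Promises stand in for $q).
function checkArraySequential(array, object, doSomeStuff) {
  return array.reduce(function (chain, record) {
    // Each record's work starts only after the previous one has resolved.
    return chain.then(function () {
      return doSomeStuff(record, object);
    });
  }, Promise.resolve());
}
```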
The actual code may stop halfway through checking or processing the array due to some function throwing an error, in which case an error popup is shown and the object's final state is compared in the unit tests.
There are, however, also tests failing (for no apparent reason) within this same controller that do not use these recursively executed functions.
These spies are in fact called with an object that, at the time of the call, is slightly different. Jasmine, however, does not capture the state of an object at call time; it reports the object's final state instead.
(That's fine; it just doesn't match the state as executed, but I don't care about that.)
The code as is functions as required; I would, however, also like the tests to run consistently, without phantom failures.
How could I prevent these errors from popping up?
(I see no other things that are really different from other controllers, therefore I assume that this recursion is the cause for my troubles)
When I eliminate the recursion by setting the array size to 1, however, I still get inconsistent results.
My objects contained a new Date() object.
On many occasions both dates fell within the same millisecond, so the expected and actual objects were identical and the tests passed. When the timestamps did differ, the difference was still under 0.5 seconds, so it did not show up as a visible difference in the failure output, yet it was enough to make the comparison fail.
After replacing the Date object with some random string, the tests consistently succeeded as expected.
Related
I have a recursive function which performs the following things (among others), in that order:
Prints the array A which is passed as a parameter
Concatenates some new values into it:
A=A.concat(localList);
Prints the array A again
Runs a for loop, each iteration of which calls the function again
While the print sandwich shows correct concatenation, I notice that different (parallel?) instances do not acknowledge the changes others make. Aren't arrays passed by reference?
I've included minimal info because I feel this is some basic fact I'm missing.
Array.concat returns a new instance. To keep the reference intact, you can concatenate in place like so:
yourarray.push.apply(yourarray, newitems)
Or more modern variant:
yourarray.push(...newitems)
To be clear, this will still not work if you're using multiple threads (Workers) since objects passed between Workers are cloned.
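The difference is easy to see in a minimal sketch: concat leaves the original array untouched, while push (spreading in the new items) mutates it in place, so every holder of the reference sees the change.

```javascript
var shared = [1, 2];
var alias = shared;             // second reference to the same array

var copy = shared.concat([3]);  // new array; `shared` and `alias` unchanged
shared.push(...[3, 4]);         // in-place; visible through `alias` too

console.log(copy);   // [ 1, 2, 3 ]
console.log(alias);  // [ 1, 2, 3, 4 ]
```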
I'm trying to run a forEach over a variable that can be either a single id or an array of multiple ids. First I try to verify whether it's an array and, if not, I declare it as an array of only one item.
router.post("/providerQuote", function (req, res) {
    console.log(req.body.idQuote);
    var idQuote = [];
    if (req.body.idQuote.isArray) {
        idQuote = Object.values(req.body.idQuote);
    } else {
        idQuote = [req.body.idQuote];
    }
    console.log(idQuote);
    idQuote.forEach(function (quote) {
        console.log(quote);
    });
});
This is the console log:
Server Started...
[ '5bfed54c9b0d061574d874c0', '5bfed54c9b0d061574d874bf' ]
[ [ '5bfed54c9b0d061574d874c0', '5bfed54c9b0d061574d874bf' ] ]
[ '5bfed54c9b0d061574d874c0', '5bfed54c9b0d061574d874bf' ]
The problem here is that somehow it is wrapping req.body.idQuote in another array.
From what I understand you’re saying, the idQuote value in the response may be a string or it may be an array of strings, and you’re trying to just always have a single array of strings. If I’m correct in that, I would just change your code to the following:
router.post("/providerQuote", function (req, res) {
    console.log(req.body.idQuote);
    var idQuote = req.body.idQuote;
    if (!Array.isArray(idQuote)) {
        idQuote = [idQuote];
    }
    console.log(idQuote);
    idQuote.forEach(function (quote) {
        console.log(quote);
    });
});
This will simply wrap your idQuote in an array if it is not currently an array.
By defaulting the idQuote variable to req.body.idQuote, I’m doing a few things:
I’m simplifying the code. If idQuote is actually an array, there’s nothing more to do! We’re good to go.
We aren’t retyping req.body.idQuote multiple times, when we need to use it either way. There’s an acronym in programming, DRY, which means “Don’t Repeat Yourself”. If you find yourself doing the same thing multiple times, there’s likely a way to simplify your code.
I’m avoiding doing excessive processing. Your original code created an empty array and then filled it with the values of a different array. No offense at all to you, but that’s quite a bit of unnecessary work (create a whole new array object in memory and then loop through one array copying its values into a new one) when you have a perfectly good array right there that looks exactly the way you want.
Excessive processing costs money: on a server it means you’ll need a bigger server sooner because you can’t handle quite as much as you could have if things were more efficient. On the client excessive processing means battery drain and at times a choppy and laggy user experience. It’s good to look for little things that could be improved because while one tiny thing doesn’t really matter, they can add up quickly if there’s a lot of them in your code, especially if they’re in a long loop. It’s little tricks that you learn along the way that actually help quite a bit.
Also note that isArray is a static method on the Array class itself, not on array instances. Since you had no parentheses, you were basically checking whether the current object had an isArray property, and none of them did, so the check was always falsy and you were always pushed down into the else branch, where the value was wrapped in an array.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/isArray
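A quick sketch of the difference between the missing-parentheses check and the real static method:

```javascript
var ids = ['a', 'b'];

// What the original code effectively tested: arrays have no `isArray`
// property of their own, so this is undefined and always falsy.
console.log(ids.isArray);          // undefined

// The correct check is the static method on the Array constructor:
console.log(Array.isArray(ids));   // true
console.log(Array.isArray('a'));   // false
```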
I think you could try this
if ( Array.isArray(req.body.idQuote) ) {
// Perform your operations here
} else {
idQuote.push(req.body.idQuote)
}
I have an array filled with objects. It contains 23 items in total, however when I perform a .length on it, it returns 20.
// Get the new routes
var array = PostStore.getPostList();
console.log(array.objects);
console.log(array.objects.length);
Image of what my console returns:
What's going wrong here?
The problem is probably that the array changed between the time you logged it and the time you opened it in the console.
To get the array at the logging time, clone it:
console.log(array.objects.slice());
console.log(array.objects.length);
Note that this won't protect against the array element properties changing. If you want to freeze them too, you need a deep cloning, which is most often possible with
console.log(JSON.parse(JSON.stringify(array.objects)));
This won't work if the objects aren't stringifiable, though (cyclic objects, very deep objects, properties throwing exceptions at reading, etc.). In that case you'll need a specific cloning, like my own JSON.prune.log.
An alternative to logging, in such a case, is to use the debugger: set a breakpoint and inspect the objects while the code is paused.
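The live-reference behaviour is easy to reproduce: take a snapshot, mutate the array afterwards, and compare. In a browser console, the un-cloned log would later show the mutated contents when expanded, while the snapshot stays fixed.

```javascript
var objects = [{id: 1}, {id: 2}];

var snapshot = objects.slice();                          // shallow copy: frozen length/order
var deepSnapshot = JSON.parse(JSON.stringify(objects));  // deep copy of plain data

objects.push({id: 3});             // mutation after "logging"

console.log(objects.length);       // 3 – the live array changed
console.log(snapshot.length);      // 2 – the clone kept the logged state
console.log(deepSnapshot.length);  // 2
```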
I have been exploring patterns in various MV* frameworks out there, and today I noticed a weird one that seems to cause some issues.
The Model prototype has a property collections: []
The Collection prototype has a property models: []
When a collection gets a new model, the model is pushed into collection.models, but the model itself is also decorated to be aware of the collection it is a member of, i.e. the collection instance is pushed into model.collections.
So model.collections[0] is a collection that contains a .models[0] being the model that has a collections property... and so on.
at its most basic:
var A = function () {
        this.collections = [];
    },
    B = function () {
        this.models = [];
        this.add = function (what) {
            what.collections.push(this);
            this.models.push(what);
        };
    };

var model = new A();
var collection = new B();
collection.add(model);
Here's the guilty party in action: https://github.com/lyonbros/composer.js/blob/master/composer.js#L310-313 and then further down it's pushing into models here: https://github.com/lyonbros/composer.js/blob/master/composer.js#L781-784
I suppose there is going to be a degree of lazy evaluation - things won't be used until they are needed. That code - on its own - works.
But I was also writing tests via buster.js, and I noticed that all the tests that relied on sinon.spy() were producing InternalError: too much recursion (FF) or RangeError: Maximum call stack size exceeded (Chrome). The captured FF instance even crashed and became unresponsive, which I have never encountered with the buster test driver before; it grew to 3.5 GB of RAM use over my lunch break.
After a fair amount of debugging, I undid the reference storage and suddenly it was all working fine again. Admittedly, removing the spy() assertions also worked, but that's not the point.
So the question is: having code like that, is it acceptable, how will the browsers interpret it, where is the bottleneck, and how would you decorate your models with a pointer to the collection they belong to (perhaps a collection controller and collection uids or something)?
full gist of the buster.js test that will fail: https://gist.github.com/2960549
The browsers don't care. The issue is that the tool you were using failed to check for cyclic reference chains through the object graph. Those are perfectly legitimate, at least they are if you want them and expect them.
If you think of an object and its properties, and the objects referenced directly or indirectly via those properties, then that assembly makes up a graph. If it's possible to follow references around and wind up back where you started, then that means the graph has a cycle. It's definitely a good thing that the language allows cycles. Whether it's appropriate in a given system is up to the relevant code.
Thus, for example, a recursive function that traverses an object graph without checking to see if it's already visited an object will definitely trigger a "too much recursion" error if the graph is cyclic.
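As a sketch, a traversal that records visited objects (here with a Set) terminates on a cyclic graph, while the naive version would recurse forever:

```javascript
// Count every distinct object reachable from `root`, tolerating cycles.
function countObjects(root, seen) {
  seen = seen || new Set();
  if (root === null || typeof root !== 'object' || seen.has(root)) return 0;
  seen.add(root);                          // mark before descending
  var count = 1;
  Object.keys(root).forEach(function (key) {
    count += countObjects(root[key], seen);
  });
  return count;
}

// A two-object cycle like the model/collection pair above:
var model = {collections: []};
var collection = {models: [model]};
model.collections.push(collection);

console.log(countObjects(model));  // 4: model, its array, collection, its array
```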
There will only be two objects referencing each other (called a "circular reference"):
var a = {},
    b = {a: a};
a.b = b;
// a.b: pointer to b
// b.a: pointer to a
There is no recursion here at all. If you are getting too much recursion or Maximum call stack size exceeded errors, there must be a function that is invoked too often. This can happen, for example, when you try to clone the objects and recurse over their properties without checking for circular references. You'll need to look further into your code; the error messages should also include a (very long) call stack.
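JSON.stringify is one such recursive property walk, and it refuses circular structures outright. This sketch shows the failure on the two-object cycle:

```javascript
var a = {};
var b = {a: a};
a.b = b;  // now a.b.a === a: a two-object cycle

try {
  JSON.stringify(a);
} catch (e) {
  // TypeError: Converting circular structure to JSON
  console.log(e.name);  // "TypeError"
}
```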
Overview
So I have pulled out a document from my database. Inside is a nested collection of objects. Each of the objects inside this nested collection has an '_id' attribute. I want to find one on these objects specifically by its '_id' using Javascript.
Example
http://jsfiddle.net/WilsonPage/tNngT/
Alternative Example
http://jsfiddle.net/WilsonPage/tNngT/3/
Questions
Is my example the best way of achieving this?
Will this block in Node.js?
Yes, if you only know a specific value that is contained in one of your objects (which are held in an array), you need to loop over the whole structure and compare those values.
You also did the right thing by breaking out of the iteration (the return in your example) once a match is found.
So my answer to your first question is yes; in terms of performance this is the right and best way.
What I don't get is the "async" example. You just moved the code around and changed the structure; it is still blocking, since you're using a normal for loop to search. If that array were huge, it would block your Node app for however long the loop takes to finish.
To really make it asynchronous, you need to get rid of any long-running loop, for instance by stepping through the structure with a timer.
var findById = function (collection, _id, cb) {
    var coll = collection.slice(0); // create a clone

    (function _loop(data) {
        if (data && data._id === _id) {
            cb.apply(null, [data]);
        } else if (coll.length) {
            setTimeout(_loop.bind(null, coll.shift()), 25);
        }
    }(coll.shift()));
};
And then use it like
findById( myCollection, 102, function( data ) {
console.log('MATCH -> ', data);
});
With this technique (shown here as a simplified example), we create a self-invoking anonymous function and pass in the first array item (using .shift()). We do our comparison and, if we have found the item we are looking for, execute a callback function the caller needs to provide. If there is no match but the array still contains elements (checked via .length), we create a 25 ms timeout using setTimeout and call our _loop function again, this time with the next array item, because .shift() gets and removes the first entry. We repeat this until either no items are left or we have found the element. Since setTimeout gives other tasks in the JS main thread (in a browser, the UI thread) the chance to run, we don't block and screw up the whole show.
As I said, this can be optimized. For instance, we could use a do-while loop within the _loop() method and use Date.now() to keep working until we pass the 50 ms mark; if we need longer than that, we create a timeout the same way and repeat the operation described above (so in effect: do as much work as possible within each 50 ms slice).
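That batching idea can be sketched like this: process items in a do-while loop until a time budget is spent (15 ms here, an arbitrary choice), then yield with setTimeout and continue. The names findByIdSliced and onDone are placeholders, not from the original code.

```javascript
// Linear search over `items`, done in time-boxed slices so the main
// thread is released between batches.
function findByIdSliced(items, _id, onDone, budgetMs) {
  var queue = items.slice(0);
  budgetMs = budgetMs || 15;

  (function slice() {
    var start = Date.now();
    do {
      var data = queue.shift();
      if (data && data._id === _id) return onDone(data);
    } while (queue.length && Date.now() - start < budgetMs);

    if (queue.length) setTimeout(slice, 0);  // yield, then continue
    else onDone(null);                       // exhausted without a match
  }());
}
```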
I'd pre-sort the array by each item's _id and at least implement a binary search, if not something a little more sophisticated if speed is really an issue.
You could try using binary search; in most cases it's faster than linear search. As jAndy said, you will still block with a standard for loop, so take a look at some Node asynchronous library. The first that comes to mind is async.js.
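For completeness, a binary search over the collection pre-sorted by _id might look like the sketch below (assuming numeric ids; the sort costs O(n log n) once, each lookup O(log n)):

```javascript
// Binary search over a collection sorted ascending by `_id`.
function findByIdBinary(sorted, _id) {
  var lo = 0, hi = sorted.length - 1;
  while (lo <= hi) {
    var mid = (lo + hi) >> 1;           // midpoint without overflow concerns here
    if (sorted[mid]._id === _id) return sorted[mid];
    if (sorted[mid]._id < _id) lo = mid + 1;
    else hi = mid - 1;
  }
  return null;                          // not found
}

var byId = [{_id: 2}, {_id: 9}, {_id: 4}].sort(function (a, b) { return a._id - b._id; });
console.log(findByIdBinary(byId, 4));   // finds the {_id: 4} object
console.log(findByIdBinary(byId, 5));   // null
```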
I messed around with async.js to produce this solution to my problem. I have tried to make it as reusable as possible, so it is not locked down to searching the '_id' attribute.
My Solution:
http://jsfiddle.net/WilsonPage/yJSjP/3/
Assuming you can generate unique strings from your _id, you could hash them out with JS's native object.
findById = (collection, _id, callback, timeout = 500, step = 10000)->
gen = ->
hash = {}
for value, i in collection
hash[value._id] = value
unless i % step then yield i
hash[_id]
it = gen()
do findIt = ->
{done, value} = it.next()
if done then callback value
else
console.log "hashed #{(value/collection.length*100).toFixed 0}%"
setTimeout findIt, timeout