I have been looking into this article on Jasmine unit testing. I found this example:
describe("Episode", function() {
beforeEach(function() {
this.episode = new Backbone.Model({
title: "Hollywood - Part 2"
});
});
it("should expose an attribute", function() {
expect(this.episode.get("title"))
.toEqual("Hollywood - Part 2");
});
});
This example uses this.episode in both beforeEach and it. As far as I know, JavaScript doesn't work like this: each callback gets its own this, so how does this.episode get shared across the describe block at all?
Jasmine introduced a way to share variables between beforeEach, it, and afterEach through the this keyword.
You should also know that within each spec, beforeEach, it, and afterEach all see the same this, an initially empty object that is reset to a new empty object for the next spec.
From GitHub:
For every test (and their beforeEach/afterEach hooks), jasmine sets the receiver of each function to an initially empty object. This object, which is called userContext within Jasmine's source code, can have properties assigned to it, and gets blown away at the end of each test. In an attempt to address the issues we were having, we recently switched over to assigning variables to this object, rather than declaring them within describe and then assigning them.
This new approach is considered better because of:
No more global leaks
Clear meaning
Improved code reuse via dynamic invocation
Reduced code duplication via lazy evaluation
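As a small illustration of how that per-spec object behaves (the property names below are made up for the example), a suite like this should pass:

describe("userContext sharing", function() {
  beforeEach(function() {
    // assigned on the shared per-spec object
    this.counter = 1;
  });

  it("sees values set in beforeEach", function() {
    expect(this.counter).toBe(1);
    this.scratch = "only visible within this spec";
  });

  it("gets a fresh object for each spec", function() {
    // the previous spec's object was blown away
    expect(this.scratch).toBeUndefined();
    expect(this.counter).toBe(1); // beforeEach ran again on the new object
  });
});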
I'm unit testing JavaScript with Jasmine and I am running into some problems.
I have a large file to test and it has a lot of dependencies, and those dependencies have their own dependencies. Because of said dependencies I want to mock all I can. Therein lies the problem: how can I mock a constructor so that it includes the methods that belong to it?
Let's say I'm testing a method createMap of class Map:
In that createMap method it calls the Layers class constructor using
var layers = new Layers()
I'm spying on it using
spyOn(window, 'Layers').and.callThrough()
That works fine, but later the createMap method calls layers.addLayer(), where addLayer is a method of the Layers class. The problem is that because I mocked the Layers call, it doesn't recognize the addLayer method.
Is there a way to mock it so that it includes all the methods of the called class, or is my only option to stub out the whole Layers class or not mock it at all?
Or what would be a good way to handle this? I've tried spyOn(Layers, 'addLayer'), but it complains that no method addLayer is found.
I'm sorry if this is a bit confusing; I had trouble working out how to ask it.
IMO, it's unnecessary to spy on window, since you can easily shadow the variable in local scope by creating a spy object with the same name:
describe('Map', function () {
  var Layers;

  beforeEach(function () {
    Layers = function () {
      // alternatively, you could move this to Layers.prototype
      this.addLayer = jasmine.createSpy('Layers#addLayer');
    };
  });

  /* ... */
});
If you want automatic mocking and you are using CommonJS modules, you may try the Jest framework, which is built on top of Jasmine.
Let's talk in terms of the example classes you have provided.
You're writing a test suite for Map. All its dependencies (in the example we have only Layers) MUST be mocked, because in a unit test you're supposed to test one layer, as small a piece of functionality as possible. That means you should provide a mocked Layers constructor that exposes the interface used in Map. For example:
function Layers() {
  this.addLayer = sinon.spy();
}
In this test suite only the Map class should remain "real", i.e. its code must not be altered. With mocks like Layers you make sure that you do not trigger any interaction with real-code dependencies (your own dependencies should be tested in a different test suite; also make sure you don't try to test framework functions, like $state.resolve, $inject, etc.). If the Map class is complicated and has multiple dependencies, investigate the sinon features that help automate this process, for example sinon.mock.
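For illustration, a rough sketch of how such a spec could look, assuming Map and Layers are globals and that createMap constructs one Layers instance and calls addLayer on it (those details of Map are assumptions, not taken from the question):

describe('Map#createMap', function () {
  var realLayers, createdLayers;

  beforeEach(function () {
    realLayers = window.Layers;
    createdLayers = [];
    // swap the real constructor for a mock that records every instance it creates
    window.Layers = function Layers() {
      this.addLayer = sinon.spy();
      createdLayers.push(this);
    };
  });

  afterEach(function () {
    // restore the real constructor so other suites are unaffected
    window.Layers = realLayers;
  });

  it('adds a layer while creating the map', function () {
    var map = new Map();
    map.createMap();

    sinon.assert.called(createdLayers[0].addLayer);
  });
});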
If you ever transpile class syntax to ES3 or another pre-2015 dialect, you will discover something interesting.
class a {
  constructor() {
    ...
  }

  index() {
    ...
  }
}
Becomes:
var a = /** @class */ (function () {
  function a() {
    ...
  }
  a.prototype.index = function () {
    ...
  };
  return a;
}());
The same prototype-based mechanism underlies later standards too; it is just masked by the 2015 class syntax. In other words, a.index doesn't exist; it's defined as a.prototype.index instead. Thus you need spyOn(a.prototype, 'index') to spy on it.
Change spyOn(Layers, 'addLayer') to spyOn(Layers.prototype, 'addLayer')
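For example, a minimal sketch (assuming Layers and Map are in scope and createMap calls layers.addLayer() as described in the question):

describe('Map#createMap', function () {
  beforeEach(function () {
    // spies on the method shared by all instances via the prototype
    spyOn(Layers.prototype, 'addLayer');
  });

  it('adds a layer while creating the map', function () {
    var map = new Map();
    map.createMap();
    expect(Layers.prototype.addLayer).toHaveBeenCalled();
  });
});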
So I have a module I created that does a kind of "state" routing for me. I made my own little version to get my exact intended effect, and it seemed to be working great until I plugged it into separate modules to test.
I inject it into the two separate modules, define the information I need in the .config of each module, then call it in a controller to get my change-state kind of effect.
It had been going pretty well until I plugged it into separate modules; now what seems to be happening is that the module I created to handle all of this is creating separate instances for each module. Let me show you what I mean:
Here is an example of one of the modules using it for testing -
angular
  .module('urlTesting2', ['urlTesting'])
  .config(function($moduleObjectProvider) {
    var callback = function(name, obj) {
      console.log(name, obj);
    };
    $moduleObjectProvider.$get().set("module2", callback)
      .addState("calender", ["day", "week", "month"]);
  })
  .controller("testControl2", function($scope, checkUrl) {
    $scope.addSecond = function() {
      checkUrl.goState("module2", "calender", ["yes", "no", "maybe"]);
    };
  });
So it's injected, and in the config I call the provider and set a new module with its states. In the controller I just call goState. This works great when it's just by itself. The issue is when I add a separate module doing the same. I have a fiddle here showing the problem -
https://jsfiddle.net/7hn3ovgz/1/
I prefer to test this in my own browser window, but a fiddle seems to be the easiest way to share it. It will not change the actual URL in the browser, but it will still log all the effects.
Basically what I think is happening is that when I click to change state in one module, it fires twice and looks for the state in the other module too (where it isn't defined). My desired effect was that ALL modules setting a config would register in one place: when you call .set, it just adds the object into a variable called currentModules in the provider. It seems like the configs are setting up separate instances of this (like a closure), instead of pushing all the config set() calls into one big object for reference.
Apologies if this is unclear, hopefully the fiddle will show clearly enough, and thank you for taking the time to read.
It seems like the issue is the injector for the provider: every time it is called, it creates a new instance of that function. So all you should have to do is switch
function $moduleObjectProvider() {
  var currentModules = {};

to

var currentModules = {};

function $moduleObjectProvider() {
or restructure the provider not to be an injected function, if possible.
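A rough sketch of the hoisted version; the bodies of set and addState below are guessed from how the question calls them, so treat everything except the placement of currentModules as an assumption:

// lives at module scope, so every config block that calls set()
// writes into the same object, no matter how often the provider
// function itself is instantiated
var currentModules = {};

function $moduleObjectProvider() {
  this.$get = function () {
    return {
      set: function (name, callback) {
        currentModules[name] = { callback: callback, states: {} };
        return {
          addState: function (stateName, params) {
            currentModules[name].states[stateName] = params;
            return this;
          }
        };
      }
    };
  };
}

angular.module('urlTesting').provider('$moduleObject', $moduleObjectProvider);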
I am in doubt whether the following design pattern could cause a memory leak.
I have been using it for some time with success, but I haven't seen this pattern used by others, so I'd like some confirmation if you see something wrong with it.
From next month I have to start working on a large project, and I want to know for sure that I can use this without problems, or whether I should use another strategy.
controller.js:
var Controller = function(options){
};

Controller.prototype.makeView = function(options){
  options.controller = this;
  options.otheroption = options.otheroption;
  var view = new View(options);
};

Controller.prototype.getModel = function(options){
  //--- Get model ---
  var model = new Model();
  var promise = model.fetch();
  return promise;
};
view.js:
var View = Backbone.View.extend({
  initialize: function(options){
    this.controller = options.controller;
    this.otheroption = options.otheroption;
  },
  getModel: function(){
    var promise = this.controller.getModel();
    promise.done(_.bind(function(model){
      // Do something with the returned model instance
    }, this));
  }
});
Instantiate the controller, e.g. from the router or another controller:
//--- Instantiate the controller and build the view ---//
var controller = new Controller();
controller.makeView(options)
To me, this doesn't look like a circular reference, because both the controller and view are declared as a local variable.
Yet the instantiated view can access the controller functions, which allows me to isolate the RESTful server interactions via models / collections that the view uses.
For me it would seem as if the only reference remaining would be the view that keeps a reference to the controller object.
What I do afterwards is clean up the view (I destroy the instance and its references when I don't need it anymore).
Your opinion on this pattern is highly appreciated.
My purpose is to isolate creation of views / server interactions in separate controller files: if you see holes in my method and have a better way of doing it, please share.
Thanks.
Short answer: There is no memory leak problem in the code you have posted. The view holds a reference to the controller, but not vice versa. So as long as the controller lives longer than the view, that reference does not keep your objects from being garbage-collected. I don't see a circular reference anywhere in your code.
Longer answer: The pitfalls would be in the code you haven't posted. In particular, any event handlers in your view must be cleaned up properly, otherwise your views never fade into oblivion. But you have said in your question that you clean up your view, so I guess you are aware of that sort of problem.
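For completeness, a sketch of the kind of cleanup the longer answer has in mind; close() is a hypothetical method name, the rest is standard Backbone:

var View = Backbone.View.extend({
  initialize: function(options) {
    this.controller = options.controller;
    // listenTo (rather than model.on) lets stopListening detach this later
    if (this.model) {
      this.listenTo(this.model, 'change', this.render);
    }
  },

  // hypothetical cleanup method, called when the view is no longer needed
  close: function() {
    this.undelegateEvents();  // detach DOM handlers bound via the events hash
    this.remove();            // remove the element; recent Backbone also calls stopListening here
    this.controller = null;   // drop the reference to the controller
  }
});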
What the controller is doing here looks like a utility to me. It could easily be managed by a global-level singleton. I see some issues at first glance:
Code repetition: assuming you would create a separate Controller for different types of Models and Views, the makeView and getModel code needs to be repeated for each controller. If you extend from a BaseController, then you need to pass the View and Model classes to getModel and makeView.
How do you handle a use-case where you have to use the same model in different Views?
makeView and getModel are designed assuming that for each makeView there is a getModel call, in an assumed order.
I would rather write a utility function which can create and deploy views for me.
var deployView = function(view, config) {
  // do the view rendering
  view.render();
  view.$el.appendTo(config.el);
};

var createView = function(config) {
  var view;
  var viewType = 'model';
  if (config.collection || config.Collection) {
    viewType = 'collection';
  }
  if (viewType === 'model') {
    if (config.Model) {
      config.model = new config.Model(config.modelAttributes);
      // fetch if needed
    }
  } else {
    if (config.Collection) {
      config.collection = new config.Collection(config.items);
      // fetch if needed
    }
  }
  var filteredConfig = _.omit(config, 'Collection', 'Model', 'View');
  view = new config.View(filteredConfig);
  deployView(view, filteredConfig);
};
JavaScript implementations haven't had a problem with circular references for a long time. (IE6 did leak memory on circular references between DOM nodes and JavaScript objects, if I recall correctly, a problem not shared by any other major browser of that period.)
Modern JavaScript implementations perform garbage collection through a "mark and sweep" algorithm. First they scan through your web app's entire memory structure starting from the global object, and mark everything they find. Then they sweep through every object stored in memory and garbage collect anything that wasn't marked. As long as there isn't a reference to your object from the global object or any stored function, it can be garbage collected.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Memory_Management#Mark-and-sweep_algorithm
You're probably thinking of a reference-counting implementation, which does have issues with memory leaks from circular references. Under reference counting, as long as one object holds a reference to another, that second object can't be garbage collected, so two objects that reference each other are never freed. That method was once used in web browsers, but not anymore.
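For example, a cycle like this is collected without any trouble by a mark-and-sweep collector, because nothing reachable from a root points at it:

function makeCycle() {
  var a = {};
  var b = { other: a };
  a.other = b;   // a and b now reference each other
}

makeCycle();
// Under reference counting, a and b would keep each other alive forever.
// Under mark-and-sweep, neither is reachable any more, so both are collected.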
Nowadays, most memory leaks come from globally accessible objects you forget to clean up and from accidentally retaining data in function closures (a function that creates another function and passes or saves it somewhere). Since the closure's local variables can be accessed by the function created inside it, they have to be retained as long as that function exists.
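A small made-up example of the closure case:

function attachHandler() {
  var cache = new Array(1000000).fill('payload');  // large temporary data

  document.getElementById('save').addEventListener('click', function onClick() {
    // because onClick references cache, cache stays in memory for as long as
    // this listener (and the element it is attached to) exists
    console.log('cached entries:', cache.length);
  });
}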
So go ahead and add all the circular references you want. Unless you need to target IE6, your code's fine.
Could someone explain the fundamental difference between:
define(['backbone'], function(Backbone) {
  MyModel = Backbone.Model.extend({
  });
});

define(['backbone', 'models/mymodel'], function(Backbone) {
  var app = Backbone.View.extend({
    initialize: function() {
      var model = new MyModel();
    }
  });
});
and:
define(['backbone'], function(Backbone) {
  var MyModel = Backbone.Model.extend({
  });
  return MyModel;
});

define(['backbone', 'models/mymodel'], function(Backbone, MyModel) {
  var app = Backbone.View.extend({
    initialize: function() {
      var model = new MyModel();
    }
  });
});
In the former, the first module simply defines MyModel. In the latter, it's created as a local variable and returned, and the second module receives it as a parameter when it imports the module.
RequireJS examples I see around seem to vary between the two, but I don't really understand the difference - does one return an instance and the other a constructor?
In my application I didn't even notice that I was actually using both ways in different places, and I think it was causing problems. I was using a lot of
self = this
self.model.doSomething
inside my views and models, and as my app got bigger, I started getting errors because there were conflicts with definitions of self.
Short Version: 1st version == wrong.
Medium Version: The first one bypasses Require entirely by using global variables, while the second one actually uses Require.
Long version:
The way Require modules work is that you call define, pass it a function (and usually an array of dependencies as well), and whatever gets returned from that function is defined as that module. So if I do:
// Inside foo.js
define([], function() {
  return 1;
});
I've defined the "foo" module to be 1, so if elsewhere I do:
define(['foo'], function(foo) {
  alert(foo); // alerts 1
});
Your first version doesn't return anything, so it's not actually creating a Require module at all.
How does it work then? Well, in that version you do:
MyModel = Backbone.Model.extend({
NOT:
var MyModel = Backbone.Model.extend({
So that's really the same as doing:
window.MyModel = Backbone.Model.extend({
Then when the second part of the code runs, it accesses window.MyModel, and works ... but it's completely bypassing Require.js in the process.
I think the most important thing to take away is: ALWAYS DECLARE (i.e. var) YOUR JAVASCRIPT VARIABLES. I don't agree with everything Crockford says, but he's dead right on this one. You will get lots of bugs (with Require and without) if you don't make this a habit.
Beyond that, the next most important thing is probably: ALWAYS RETURN SOMETHING FROM THE FUNCTION YOU PASS TO define. There are certain special cases where you don't want to return anything, but unless you are deliberately trying to solve one of those cases you should always return something to define the module.
Finally, if you're using Require, every variable in your code should either:
Come from the define function (i.e. it should be an argument of the function that you pass to define), or
Be declared (i.e. var-ed) inside that file.
If you use JSLint or 'use strict'; (as Valentin Nemcev suggested), or if you use an editor like Eclipse, your tools can help you ensure this (and in fact make it easy to ensure).
MyModel = Backbone.Model.extend({});
Here you are not returning a constructor; you are defining a global variable and accessing it later in a different module.
Actually it is wrong; it works by accident. You should return your modules from define and access them via parameters in other modules.
Like this:
return Backbone.Model.extend({});
You should use strict mode to avoid problems with global variables in JS.
Also, a constructor in JS is just a function that is meant to be run with new. Backbone's extend always returns a constructor function, and you create a model instance by calling that constructor with new, as you are doing in both examples.
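For instance, a quick sketch of how strict mode surfaces the mistake from the first version:

define(['backbone'], function(Backbone) {
  'use strict';

  // MyModel = Backbone.Model.extend({});  // ReferenceError in strict mode
                                           // instead of a silent window.MyModel

  var MyModel = Backbone.Model.extend({}); // declared locally
  return MyModel;                          // and exposed as the module's value
});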
So I'm writing a whole bunch of vendor-specific files in Node which all have a similar controller pattern, so it makes sense for me to cut that pattern out and put it into a common file.
You can see my common controller file here: https://gist.github.com/081a04073656bf28f46b
Now when I use them in my multiple modules, each consecutively loaded module overwrites the first. This is because the file is only required once and passed dynamically through to each module on load (this allows me to add extra modules, and these modules are able to add their own routes, for example). You can see an example module here: https://gist.github.com/2382bf93298e0fc58599
You can see here on line 53 that I've realised we need to create a separate instance every time, so I've tried to create a new instance by copying the standardControllers object into a new object, then initialising the new object. This has zero impact on the code, and the code behaves in exactly the same way.
Any ideas guys? I'm in a bit of a jam with this one!
First thing I'd do is try to make things simpler and reduce coupling by invoking the single responsibility principle, et al.
http://www.codinghorror.com/blog/2007/03/curlys-law-do-one-thing.html
Put those Schemas into their own files, e.g.
models/client.js
models/assistant.js
models/contact.js
I've also found that embedded docs + mongoose is generally a PITA. I'd probably promote all those to top level docs.
You don't need to enclose your object's keys in quotes.
routes = {
  list: function() {} // no quotes is aok
}
Also 'list' in typical REST apps is called 'index'. Anyway.
OK, I'd break this up differently. Since you're requiring stuff from the index.js file in the middleware, they become tightly coupled, which is bad. In fact, I think I'd rewrite this whole thing so it was tidier. Sorry.
I'd probably replace your 'middleware' file with an express-resource controller
https://github.com/visionmedia/express-resource (built by author of express). This is a good framework for restful controllers, such as what you're building. The auto-loader is really sweet.
You may also want to look at: http://mcavage.github.com/node-restify/ It's new, I haven't tried it out, but I've heard good things.
Since what you're building is basically an automated mongoose-crud system, with optional overriding, I'd create an express-resource controller as your base
/controllers/base_controller.js
and it might look like
var BaseController = function() {} // BaseController constructor

BaseController.prototype.index = function() {
  // copy from your middleware
}

BaseController.prototype.show = function() {
  // copy from your middleware
}

BaseController.prototype.create = function() {
  // copy from your middleware
}

// etc

module.exports = BaseController
Then I'd do something like:
/controllers/some_resource_controller.js
which might look something like:
var BaseController = require('./base_controller')

var NewResourceController = function() {
  // Apply BaseController constructor (i.e. call super())
  BaseController.apply(this, arguments)
}

NewResourceController.prototype = new BaseController()

NewResourceController.prototype.create = function() {
  // custom create method goes here
}

module.exports = NewResourceController
Then to use it, you can do:
var user = app.resource(myResourceName, new ResourceController());
…inside some loop which sets myResourceName to be whatever crud you're trying to set up.
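A hedged sketch of what that loop could look like; the resource names, file paths, and the controllers map are all hypothetical:

// hypothetical map of resource name -> controller constructor
var controllers = {
  clients:    require('./controllers/client_controller'),
  assistants: require('./controllers/assistant_controller'),
  contacts:   require('./controllers/contact_controller')
};

Object.keys(controllers).forEach(function(myResourceName) {
  var ResourceController = controllers[myResourceName];
  // express-resource wires up index/show/create/update/destroy routes for each one
  app.resource(myResourceName, new ResourceController());
});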
Here's some links for you to read:
http://tobyho.com/2011/11/11/js-object-inheritance/
http://yehudakatz.com/2011/08/12/understanding-prototypes-in-javascript/
Also, it sounds like you're not writing tests. Write tests.
http://www.codinghorror.com/blog/2006/07/i-pity-the-fool-who-doesnt-write-unit-tests.html