Tinytest is fairly undocumented at this point, but it looks like a nice, lightweight framework to use. I have a package that depends on some publish/subscribe data, and I'm a bit stymied on how to use this: I can't seem to subscribe to a user.
Do I need to guarantee the server block runs first, so that the publish blocks are available to be subscribed to?
I also made sure my package under test included the other required accounts/auth packages:
Package.on_test(function (api) {
  api.use([
    'accounts-ui',
    'accounts-facebook', ...
I was hoping to find some examples in the LiveData test suite or even the Iron-Router test suite, but haven't turned anything up yet. FWIW, I've reviewed other links on testing with Meteor, and Laika and RTD look a bit overkill for now.
Pointers appreciated on how to get a basic sample like the one below working.
if Meteor.isServer
  Tinytest.add "Chatbot data", (test) ->
    test.equal 1, 1, "server tests running"
    userdata = {
      email: "c#c.com"
      profile:
        nickname: "chaka"
        name: "chaka"
        icon: "/images/bots/chaka/icon/50.png"
    }
    Meteor.users.insert(userdata)
    check = Meteor.users.findOne()
    test.isNotNull check, "can't create user"
    console.log("userCheck", check) # ok

  Meteor.publish 'allUsers', ->
    return Meteor.users.find()

if Meteor.isClient
  Tinytest.add "Chatbot create", (test) ->
    Meteor.subscribe("allUsers")
    user = Meteor.users.findOne()
    if (user == undefined)
      test.fail("cant find user:", user)
edit: this guidance on async tests seems to be very related.
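Based on that, I'm guessing the client side needs something like this sketch using Tinytest.addAsync, waiting for the subscription to be ready before querying (untested; it assumes the 'allUsers' publication above):

if (Meteor.isClient) {
  Tinytest.addAsync("Chatbot create", function (test, onComplete) {
    Meteor.subscribe("allUsers", {
      onReady: function () {
        // Only query once the server's published documents have arrived.
        var user = Meteor.users.findOne();
        test.isTrue(user !== undefined, "can't find user");
        onComplete();
      }
    });
  });
}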
If you want to test whether your method subscribes with the proper find selector, I would spy on Meteor.users.find() and check that it is called with the right arguments.
var actualFind = Meteor.users.find, receivedArguments;

Meteor.users.find = function () {
  receivedArguments = arguments;
  return actualFind.apply(this, arguments); // delegate to the real find, preserving `this` and the return value
};
Then do tests on receivedArguments.
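For instance, a minimal sketch (the empty selector and myCodeThatSubscribes are placeholders for whatever your code under test actually does):

Tinytest.add("find is called with the expected selector", function (test) {
  myCodeThatSubscribes(); // hypothetical: something that ends up calling Meteor.users.find({})
  test.equal(receivedArguments.length, 1); // exactly one argument was passed
  test.equal(receivedArguments[0], {});    // and it was the expected selector
});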
Testing subscribe/publish itself is something the Meteor team should do.
You should only ever write unit tests for code you've written.
I've recently set up Protractor testing for our Angular apps at our company, and I was looking for a simple way to capture the pass/fail status of each scenario in the spec classes. Is there a simple way to do this? I've tried messing with jasmine-spec-reporter, but maybe I was missing something there to extract the data I need. Any help would be appreciated.
I've tried things like this:
let currentSpec = jasmine.getEnv().currentSpec, passed = currentSpec.results().passed();
but I always get issues like
currentSpec not specified
Ideally I would like to capture pass or fail without jasmine reporting, if possible.
What you are looking for is actually specDone, not afterEach. You either need to modify the specDone function of the reporter you are currently using or build a custom reporter that fits your needs.
https://jasmine.github.io/2.1/custom_reporter.html#section-specDone
Create your custom reporter:
// myReporter.js
module.exports = {
  specDone: (result) => {
    // result.status is 'passed', 'failed', 'pending', or 'disabled'
    // do stuff...
  }
};
Then in your protractor config you would have something like this:
const myReporter = require('./myReporter');

// other config properties
onPrepare: function() {
  jasmine.getEnv().addReporter(myReporter);
}
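For instance, a sketch of a reporter that just collects each spec's outcome instead of printing it (the results array and the summary format are my own choices):

// myReporter.js -- a minimal sketch; collects pass/fail per spec
const results = [];

module.exports = {
  specDone: (result) => {
    results.push({ name: result.fullName, status: result.status });
  },
  jasmineDone: () => {
    const failed = results.filter((r) => r.status === 'failed');
    console.log(`${results.length - failed.length} passed, ${failed.length} failed`);
  },
  results, // exposed so other code can read the outcomes afterwards
};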
Are you using this as well?
https://www.npmjs.com/package/protractor-html-reporter-2
To get a better description of the error and add it to the jasmine reporter:
https://www.npmjs.com/package/jasmine2-custom-message
So I'm working with an enterprise tool where we have JavaScript scripts embedded throughout. These scripts have access to certain built-in objects.
Unfortunately, the tool doesn't give any good way to unit test these scripts. So my thinking was to maintain the scripts in a repo, mock the built-in objects, and then set up unit tests that run on my system.
I'm pretty ignorant of how JavaScript works in terms of building, class loading, etc., but I've just been trying things and seeing what works. I started by trying out Mocha, making it a node project (even though it's just a directory full of scripts, not a real node project). The default test works, but when I try to test functions from my code, I get errors.
Here's what a sample script from my project looks like. I'm hoping to test the functions, not the entire script:
var thing = builtInObject.foo();
doStuff(thing);
doMoreStuff(thing);

function doStuff(thing) {
  // Code
}

function doMoreStuff(thing) {
  // More Code
}
Here's what a test file looks like:
var assert = require('assert');
var sampleScript = require('../scripts/sampleScript.js');

describe('SampleScript', function() {
  describe('#doStuff()', function() {
    it('should do stuff', function() {
      assert.equal(-1, sampleScript.doStuff("input"));
    });
  });
});
The problem happens when I import ("require") the script: I get errors because builtInObject isn't defined. Is there any way I can "inject" those built-in objects with mocks? That is, can I define the variables and functions those objects contain, so the interpreter knows what they are?
I'm open to alternative frameworks or ideas. Sorry for my ignorance; I'm not really a JavaScript guy. And I know this is a bit hacky, but it seems like the best option since I'm not getting out of the enterprise tool.
So if I get it right, you want to run unit tests for a frontend file in the Node.js environment.
There are some complications.
First, in terms of Node.js, each file has its own scope, so the variables defined inside the file won't be accessible even if you required the file. You need to export the vars to use them:
module.exports.doStuff = doStuff; // at the end of the sample script
Second, if you start using things like require/module.exports on the frontend, they'll be undefined, so you'll get an error.
The easiest way to make your code run in both environments would be the following, inside the sample script:
var isNode = typeof module !== 'undefined' && module.exports;

if (isNode) {
  // So we are exporting only when we are running in a Node env.
  // After this, doStuff and doMoreStuff will be available in the test.
  module.exports.doStuff = doStuff;
  module.exports.doMoreStuff = doMoreStuff;
}
As for the builtInObject: the easiest way to mock it would be to do the following inside the test, before the require:
global.builtInObject = {
  foo: function () { return 'thing'; }
};
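Putting both pieces together, a minimal sketch of the whole test file (the exact assertion is a placeholder, since doStuff's behaviour isn't shown):

var assert = require('assert');

// The mock must exist before the require below, because
// sampleScript.js calls builtInObject.foo() at load time.
global.builtInObject = {
  foo: function () { return 'thing'; }
};

var sampleScript = require('../scripts/sampleScript.js');

describe('SampleScript', function () {
  describe('#doStuff()', function () {
    it('should do stuff without throwing', function () {
      assert.doesNotThrow(function () {
        sampleScript.doStuff('thing'); // placeholder assertion
      });
    });
  });
});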
The test just passed for me. See the sources.
Global variables are not good anyway, but in this case it seems you cannot avoid using them.
Or you can avoid using Node.js by configuring something like Karma. It launches a real browser and runs the tests in it. :)
I read a lot about Express / Socket.IO, and it's crazy how rarely you get an example other than a "Hello" transmitted directly from app.js. The problem is that it doesn't work like that in the real world ... I'm actually desperate over a logic problem which seems far from what the web gives me; that's why I wanted to point this out. I'm sure asking will be the solution! :)
I'm refactoring my app (because there were many mistakes, like using the global scope to store libs, etc.). Let's say I've got a huge system based on Socket.IO and NodeJS. There's a loader in app.js which starts the socket system.
When someone joins the app, it require()s another module: it initializes many socket.on() listeners, which are loaded dynamically from some /*_socket.js files in a folder. Each function in those modules represents a socket listener, which makes it way easier to call from the front-end. It might look like this:
// Will call `user_socket.js` and method `try_to_signin(some params)`
Queries.emit_socket('user.try_to_signin', {some params});
The system itself works really well. But there's a catch: the module that loads all those files which understand what the front-end has sent also transmits libraries linked to req/res (sessions, cookies, others...), and it must do so, because the called methods are the core of the app and very often need those libraries.
In the previous example we obviously need to check if the user isn't already logged-in.
// The *_socket.js file looks like this:
var $h = require(__ROOT__ + '/api/helpers');

module.exports = function($s, $w) {
  var user_process = require(__ROOT__ + '/api/processes/user_process')($s, $w);
  return {
    my_method_called: function(reference, params, callback) {
      // Stuff using $s, $w, etc.
    }
  };
};
// And it's called this way:
// $s = services (a big object)
// $w = workers (a big object depending on $s)
// They are linked with the req/res from the page when they are instantiated
controller_instance = require('../sockets/' + controller_name + '_socket')($s, $w);

// After some processes ...
socket_io.on(socket_listener, function (datas, callback) {
  // Will call the correct function, etc.
  $w.queries.handle_socket($w, controller_name, method_name, datas);
});
The good news: basically, it works.
The bad news: every time I refresh the page, the listeners double themselves, because they are registered again in a loop on every page load (in my logs, what should have been one line appears doubled).
So I should put all the socket.on('connection'...) stuff outside the page loading, which means registering it when the server starts ... Yes, but I also need the req/res data to be able to load the libraries, and I only get that when the page is loaded!
It's a programming-logic problem; I know I did something wrong, but I don't know where to go from here. I've got this big system which "basically" works, but there's a paradox in the way I built it and I can't figure out how to resolve it ... I've been stuck for a couple of hours.
How can I refactor so that, within a socket.on() call, I can still get the current libraries that depend on req/res? Is there a trick? Should I think about completely changing the way I did it?
Also, is there another way to do what I want to do?
Thank you everyone!
NOTE: If I didn't explain well or if you want more code, just tell me :)
EDIT - SOLUTION: As seen below, we can use sockets.once() instead of sockets.on(), or there's also the sockets.removeAllListeners() solution, which is less clean.
Try as below:
io.sockets.once('connection', function(socket) {
  io.sockets.emit('new-data', {
    channel: 'stdout',
    value: data
  });
});
Use once instead of on.
This problem is similar to the one in the following link:
https://stackoverflow.com/questions/25601064/multiple-socket-io-connections-on-page-refresh/25601075#25601075
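For completeness, a rough sketch of the less clean removeAllListeners() variant mentioned in the edit (event names follow the question's examples):

// Runs in the page-load path; clear the old handler before re-binding,
// so a refresh doesn't stack duplicate 'connection' listeners.
io.sockets.removeAllListeners('connection');
io.sockets.on('connection', function (socket) {
  socket.on('user.try_to_signin', function (datas, callback) {
    // handle the event exactly once per emit
  });
});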
I would like to include a module/library in a Protractor test case; however, as soon as I add the line
var lib = require('./loginactions.js');
all the references to protractor and related objects are lost. In other words, if I don't have the require line, the protractor and browser variables are found and the test runs fine (using the functions within the file), but after adding that line the variables are not found any more.
Here is a minimal test case:
var lib = require('./loginactions.js'); // problematic line

describe('Login / Logout to Application', function() {
  var ptor;

  beforeEach(function() {
    ptor = protractor.getInstance(); // protractor reference lost
    browser.get('http://localhost:80'); // browser reference lost
  });

  it('should login and then logout successfully', function() {
    // Do things here
    lib.Login(user, pass, ptor);
  });
});
I export the functions this way:
module.exports.Login = Login;
module.exports.Logout = Logout;

// Params: ptor - protractor instance
function Login(user, pass, ptor) {
  // stuff
}

function Logout(ptor) {
  // stuff
}
I also wonder whether this is even the correct way of including one's own libraries in the project. So my question is: how do I properly include libraries in a Protractor test case?
To answer my own question: using Protractor as a library worked; this way the references to protractor were restored. Adding these two requires solved my issue:
var protractor = require('/path/to/protractor');
require('/path/to/protractor/jasminewd');
So my test looked similar to the updated code in the 'no method expect' when using Protractor as a library question.
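Concretely, the top of the test file ended up looking roughly like this (paths are placeholders, as above):

var protractor = require('/path/to/protractor');
require('/path/to/protractor/jasminewd');
var lib = require('./loginactions.js');

describe('Login / Logout to Application', function () {
  var ptor;

  beforeEach(function () {
    ptor = protractor.getInstance();  // works again: protractor is now in scope
    ptor.get('http://localhost:80');  // ptor instead of the global browser
  });
});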
However, I am not entirely sure about the global browser object. According to http://www.ng-newsletter.com/posts/practical-protractor.html it is a wrapper around the WebDriver object, but so is the protractor instance. So I decided to replace all browser variables with ptor, and so far no complaints. This may backfire, because I'm not entirely sure whether the global browser object is created along with the global protractor object when running Protractor as a library rather than normally.
I am having trouble getting my head around how I can use sinon to mock a call to postgres, which is required by the module I am testing, or whether that is even possible.
I am not trying to test the postgres module itself, just my object, to ensure it is working as expected and calling what it should be calling in this instance.
I guess the issue is Node's require setup: my module requires the postgres module to hit the database, but here I don't want to run an integration test; I just want to make sure my code works in isolation, without caring what the database is doing. I will leave that to my integration tests.
I have seen some people set up their functions to take an optional parameter for the mock/stub/fake, test for its existence, and use it over the required module if present, but that seems like a smell to me (I am new to Node, so maybe it isn't).
I would prefer to mock this out rather than try to hijack the require, if that is possible.
Some code (please note this is not the real code, as I am working TDD-style and the function doesn't really do anything yet; the function names are real):
TEST SETUP
describe('#execute', function () {
  it('should return data rows when executing a select', function () {
    // Not sure what to do here
  });
});
SAMPLE FUNCTION
PostgresqlProvider.prototype.execute = function (query, cb) {
  var self = this;

  if (self.connection === "")
    cb(new Error('Connection can not be empty, set Connection using Init function'));

  if (query === null)
    cb(new Error('Invalid Query Object - Query Object is Null'));

  if (!query.buildCommand)
    cb(new Error("Invalid Query Object"));

  // Valid connection and query
};
It might look a bit funny to wrap the postgres module like this, but there are design reasons: this app will have several "providers", and I want to expose the same API for them all so I can use them interchangeably.
UPDATE
I decided that my test was too complicated: it was checking whether the connect call had been made AND data was returned, which smelt to me, so I stripped it back and split it into two tests:
The Mock Test
it('should call pg.connect when a valid Query object is passed', function () {
  var mockPg = sinon.mock(pg);
  mockPg.expects('connect').once();

  Provider.init('ConnectionString');
  Provider.execute(stubQueryWithBuildFunc, null, mockPg);

  mockPg.verify();
});
This works (I think): without the postgres connector code it fails, and with it it passes (boom :)).
The issue now is with the second test, for which I am going to use a stub (maybe a spy); it is passing 100% of the time when it should fail, so I will pick that up in the morning.
Update 2
I am not 100% happy with the test, mainly because I am not hijacking client.query, which is the method that actually hits the database; I am simply hijacking my execute method and forcing it down a path. But it lets me see the result and assert against it to test behaviour. I would be open to any suggested improvements.
I am using a stub to catch the method and return null plus a faux object containing rows, like the real method would pass back. This test will change as I add more Query behaviour, but it gets me over my hurdle.
it('should return data rows when a valid Query object is passed', function () {
  var fauxRows = [
    { 'id': 1000, 'name': 'Some Company A' },
    { 'id': 1001, 'name': 'Some Company B' }
  ];

  var stubPg = sinon.stub(Provider, 'execute').callsArgWith(1, null, fauxRows);

  Provider.init('ConnectionString');
  Provider.execute(stubQueryWithBuildFunc, function (err, rows) {
    rows.should.have.length(2);
  }, stubPg);

  stubPg.called.should.equal(true);
  stubPg.restore();
});
Use pg-pool: https://www.npmjs.com/package/pg-pool
It's about to be added to pg anyway, and it purportedly makes unit testing (mocking) easier... from BrianC ( https://github.com/brianc/node-postgres/issues/1056#issuecomment-227325045 ):
Checkout https://github.com/brianc/node-pg-pool - it's going to be the pool implementation in node-postgres very soon and doesn't rely on singletons which makes mocking much easier. Hopefully that helps!
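For example, a rough sketch of stubbing a pool's query method with sinon so no connection is ever made (the table name and rows are invented):

var sinon = require('sinon');
var Pool = require('pg-pool');

it('should return data rows without hitting the database', function (done) {
  var pool = new Pool();

  // Stub query so the test never opens a real connection.
  sinon.stub(pool, 'query').callsArgWith(1, null, {
    rows: [{ id: 1000, name: 'Some Company A' }]
  });

  pool.query('SELECT * FROM companies', function (err, result) {
    result.rows.should.have.length(1); // matches the question's should-style assertions
    done();
  });
});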
I very explicitly replace my dependencies. It's probably not the best solution, but all the other solutions I saw weren't that great either.
// inside the module under test; `real` is the module-level reference to the dependency
inject: function (_mock) {
  if (_mock) { real = _mock; }
}
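In a test it then looks something like this sketch (moduleUnderTest and the fake's shape are hypothetical):

var moduleUnderTest = require('../lib/moduleUnderTest'); // hypothetical path

// The fake only needs the surface the test exercises.
var fakePg = {
  connect: function (conString, cb) {
    cb(null, { query: function () {} }, function () {}); // client stub + done() no-op
  }
};

moduleUnderTest.inject(fakePg); // from here on the module uses the fake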
You add this code to the module under test. In my tests I call the inject method and replace the real object. The reason I don't 100% like it is that you have to add extra code only for testing.
The other solution is to read the module file as a string and use vm to manually load the file. When I investigated this I found it a little too complex, so I went with just using the inject function. It's probably worth investigating that approach, though. You can find more information here.