I am working on a socket.io IRC-style chat and I don't want users to have long usernames. I wrote the following (mocha) test to verify that the server doesn't broadcast a response to every connected socket when an over-long username is provided:
it("should not accept usernames longer than 15 chars", function (done) {
    var username = "a".repeat(server.getMaxUsernameLength() + 1);
    client1.emit("username change", username);
    client2.on("chat message", function (data) {
        throw Error("Fail, server did send a response.");
    });
    setTimeout(function () {
        done();
    }, 50);
});
This currently does work, but it's far from optimal. What if my CI platform is slower and the server only responds after more than 50 ms? What's the best way to fail a test when a response arrives, or should I structure my tests differently?
Thanks!
P.S. This question is different from Testing asynchronous function with mocha: while the problem does have to do with asynchronous testing, I am aware of the done() callback (and I'm obviously using it).
What you're trying to do is verify that the callback to client2.on("chat message"... is never called. Testing for negative cases can be tough, and your case is exacerbated by the fact that you're trying to do a complete end-to-end (client-to-server-to-client) integration test. Personally, I would try to cover this in a unit test suite and avoid introducing the complexity of asynchronicity to the test.
However, if it must be done, here's a tip from Eradicating Non-Determinism in Tests:
This is the trickiest case since you can test for your expected response, but there's nothing to do to detect a failure other than timing-out. If the provider is something you're building you can handle this by ensuring the provider implements some way of indicating that it's done - essentially some form of callback. Even if only the testing code uses it, it's worth it - although often you'll find this kind of functionality is valuable for other purposes too.
Your server should send some sort of notice to client1 that it's going to ignore the name change, even when you aren't testing; and since you are, you can use that notification to verify that the server really didn't broadcast to the other client. Something like:
it("should not accept usernames longer than 15 chars", function (done) {
    var chatSpy = sinon.spy();
    client2.on("chat message", chatSpy);
    client1.on("error", function (err) {
        assert.equal(err.msg, "Username too long");
        assert(chatSpy.notCalled);
        done();
    });
    var username = "a".repeat(server.getMaxUsernameLength() + 1);
    client1.emit("username change", username);
});
would be suitable.
Also, if for whatever reason, server.getMaxUsernameLength() ever starts returning something other than 15, the best case scenario is that your test description becomes wrong. It can become worse if getMaxUsernameLength and the server code for handling the name change event don't get their values from the same place. A test probably should not rely on the system under test to provide test values.
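For instance, a test-owned constant keeps the boundary explicit (a sketch; 15 is the limit the test's description already promises):

```javascript
// Test-owned constant: if the server's real limit ever drifts from 15,
// this test fails loudly instead of silently validating the wrong boundary.
var MAX_USERNAME_LENGTH = 15;
var tooLongUsername = "a".repeat(MAX_USERNAME_LENGTH + 1);
```

The test then exercises exactly the boundary it advertises, independent of what server.getMaxUsernameLength() happens to return.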
Related
I'm writing E2E tests in Cypress (version 12.3.0). I have a page with a table in a multi-step creation process that requires some data from a back-end application. In some cases (rarely, but it occurs) the request gets stuck and the loader never disappears. The solution is simple: go back to the previous step and return to the "stuck" table. The request is sent anew and most likely receives a response, so the process and the tests can proceed. If the loader is not present (most of the time), going back and forth should be skipped.
I managed to work around this with the code below, but I'm wondering if it could be done with some built-in Cypress functions and without explicit waiting. Unfortunately, I didn't find anything in the Cypress docs or on StackOverflow. I thought I could use the then function to work on a "conditionally present" element, but it fails on get, which is why I've used find on the jQuery ancestor element.
waitForTableData() {
    return cy.get('.data-table')
        .should('exist')
        .then(table => {
            if (this.loaderNotPresent(table)) {
                return;
            }
            cy.wait(200)
                .then(() => {
                    if (this.loaderNotPresent(table)) {
                        return;
                    }
                    cy.get('button')
                        .contains('Back')
                        .click()
                        .get('button')
                        .contains('Next')
                        .click()
                        .then(() => this.waitForTableData());
                });
        });
}

loaderNotPresent(table: JQuery) {
    return !table.find('.loader')?.length;
}
Your code looks to me like about the best you can do at present.
The cy.wait(200) is about the right size; maybe a bit smaller (50-100 ms) would be better. The recursive call gives you behaviour similar to Cypress's built-in retry (which also waits internally, in order not to hammer the test runner).
Another approach would be to cy.intercept() and mock the backend, presuming it's the backend that gets stuck.
Also worth trying is a simple test retry, if the loading only fails a small percentage of the time.
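For the retry route, Cypress 12 supports configuring retries globally. A minimal cypress.config.js sketch (the counts are illustrative, not a recommendation):

```javascript
// cypress.config.js -- retry a failed test up to 2 times during
// `cypress run` (CI), but not during interactive `cypress open` sessions.
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  retries: {
    runMode: 2,
    openMode: 0,
  },
});
```

Retries can also be set per-test via the second argument to it(), which keeps the global config strict while tolerating the one flaky table.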
So I have been developing some code using AWS Lambda with Node.js 6.10. Because of my lack of knowledge of integration testing (don't worry, the unit tests are done), I didn't test my code, and of course I missed a bug that caused two sleepless nights. The function kept running even after I put in this:
return workerCallback(err);
I thought it would stop the function from running the code past the if clause, because I returned. In the end I was able to fix the issue by adding a return just after the call to the asynchronous function SQSService.deleteMessage: the rest of the code did not run, and the Lambda function ran and ended as expected.
Here is the code that works as expected:
function myFoo(currentRequest, event, workerCallback) {
    var current_ts = moment();
    var req_ts = moment(currentRequest.request_timestamp);
    if (current_ts.diff(req_ts, 'minutes') > 5) { // request is more than 5 minutes old
        SQSService.deleteMessage(event.ReceiptHandle, function (err, data) {
            if (err) {
                return workerCallback(err);
            } else {
                return workerCallback(null, "Request stale! Deleting from queue...");
            }
        }); // end of SQS Service
        return; // This line... this line!
    }
    /* Without the return above, the code below would execute, because
       SQSService.deleteMessage is asynchronous -- even though, from a
       business-logic point of view, the branch above terminates the function. */
    // more code that should only run when the request is not stale
}
Can someone please guide me on how to test this code, or how to write a test that would at least prevent me from making the same mistake again? Thanks!
(I'm moving this to an answer so the comment thread doesn't fill up, and so I can type more.)
The key is to get a proper grasp of the async-ness in your code. myFoo is asynchronous, so you need to decide whether all errors and failure modes should be passed to its callback handler, or whether some types of error should be returned synchronously to the caller of myFoo itself. My general approach is that if any errors go through the callback handler, all of them should go there, with the minor exception of certain bad-coding errors (e.g. passing in values of the wrong type, or null for arguments that should never be null), for which I might throw Error(). But if this kind of error (project_ref_no == null) is one you should handle gracefully, then I'd probably pass it through to the error handler too. The general idea is that when you call myFoo and it returns, all you know is that some work is going to get done at some point; you don't get a result in the return value, because the result comes back later, in the call to the callback handler.
But, more importantly, it's key to understand which code runs immediately and which code sits inside a callback handler. You got tripped up because you mentally imagined that the inline callback (the one passed to SQSService.deleteMessage) ran synchronously when you called myFoo.
As for testing strategy, I don't think there's a silver bullet for mistaking asynchronous calls (with callback handlers) for code that runs synchronously. You could sprinkle assertions or throw Error() in places the code should never reach, but that would make your code ridiculous.
TypeScript helps with this a bit, because you can declare a function's return type, and your IDE will warn you about code paths that don't return a value of that type (something most typed languages give you). That helps somewhat, but it won't catch every case (e.g. functions that return void).
If you're new to JavaScript and/or its asynchronous model, you might check out the following link:
https://medium.com/codebuddies/getting-to-know-asynchronous-javascript-callbacks-promises-and-async-await-17e0673281ee
I'm writing integration tests for my meteor project. I want to test the webhook POST handler in my app. This is what it looks like:
post() {
    Meteor.defer(() => {
        // some logic here, e.g. insert / update the database
    });
    return {
        statusCode: 200,
    };
}
Note: Meteor.defer is a must because I want to return code 200 (OK) as soon as possible.
To test this webhook, I create a fake POST request to it, then check whether the database was updated accordingly. The problem is that the test has no way of knowing when the code inside Meteor.defer has finished, so my assertions fail because the database hasn't been updated yet.
Any suggestions ?
I came up with a workaround: using Mocha's test timeouts to wait a fixed amount of time before making assertions. It's not the best solution, but it works for the moment.
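Instead of a fixed wait, one option is to poll the database until the expected change appears (or a deadline passes), and only then assert. A generic helper, sketched in plain JavaScript; adapt the check function to query your own collection:

```javascript
// Calls `check` every `intervalMs` until it returns true, then invokes
// `done` with no error; gives up with an error after `deadlineMs`.
function waitUntil(check, done, intervalMs, deadlineMs) {
  var waited = 0;
  (function poll() {
    if (check()) {
      return done();
    }
    waited += intervalMs;
    if (waited >= deadlineMs) {
      return done(new Error('condition not met within ' + deadlineMs + ' ms'));
    }
    setTimeout(poll, intervalMs);
  })();
}
```

In a Mocha test you would pass Mocha's done straight through, e.g. waitUntil(function () { return Docs.findOne(expectedQuery) != null; }, done, 50, 2000); (Docs and expectedQuery are placeholders for your collection and query). A fast server finishes the test in one poll, while a slow CI box still gets the full deadline.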
Consider the following:
function useCredits(userId, amount) {
    var userRef = firebase.database().ref().child('users').child(userId);
    userRef.transaction(function (user) {
        if (!user) {
            return user;
        }
        user.credits -= amount;
        return user;
    }, NOOP, false);
}

function notifyUser(userId, message) {
    var notificationId = Math.random();
    var userNotificationRef = firebase.database().ref().child('users').child(userId).child('notifications').child(notificationId);
    userNotificationRef.transaction(function (notification) {
        return message;
    }, NOOP, false);
}
These are called from the same node js process.
A user looks like this:
{
    "name": "Alex",
    "age": 22,
    "credits": 100,
    "notifications": {
        "1": "notification 1",
        "2": "notification 2"
    }
}
When I run my stress tests I notice that sometimes the user object passed to the userRef transaction update function is not the full user; it contains only the following:
{
    "notifications": {
        "1": "notification 1",
        "2": "notification 2"
    }
}
This obviously causes an Error because user.credits does not exist.
It is suspicious that the user object passed to update function of the userRef transaction is the same as the data returned by the userNotificationRef transaction's update function.
Why is this the case? The problem goes away if I run both transactions on the user parent location, but that is a less optimal solution: I am then effectively locking on and reading the whole user object, which is redundant when adding a write-once notification.
In my experience, you can't rely on the initial value passed into a transaction update function. Even if the data is populated in the datastore, the function might be called with null, a partial value, or a stale old value (in case of a local update in flight). This is not usually a problem as long as you take a defensive approach when writing the function (and you should!), since the bogus update will be refused and the transaction retried.
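A defensive update function might look like this (a sketch based on the useCredits example above; it's a pure function, so it can be tested without the SDK):

```javascript
// Defensive transaction update function: tolerates null or partial data,
// which Firebase may pass in on the first attempt.
function useCreditsUpdate(user, amount) {
  if (!user || typeof user.credits !== 'number') {
    // Return the bogus value unchanged rather than undefined: the commit
    // will then fail against the server's real data and the transaction
    // will retry, instead of silently aborting.
    return user;
  }
  user.credits -= amount;
  return user;
}
```

Because the function never throws on partial input, the stress-test failure mode above becomes a harmless extra retry rather than an error.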
But beware: if you abort the transaction (by returning undefined) because the data doesn't make sense, then it's not checked against the server and won't get retried. For this reason, I recommend never aborting transactions. I built a monkey patch to apply this fix (and others) transparently; it's browser-only but could be adapted to Node trivially.
Another thing you can do to help a bit is to insert an on('value') call on the same ref just before the transaction and keep it alive until the transaction completes. This will usually cause the transaction to run on the correct data on the first try, doesn't affect bandwidth too much (since the current value would need to be transmitted anyway), and increases local latency a little if you have applyLocally set or defaulting to true. I do this in my NodeFire library, among many other optimizations and tweaks.
On top of all the above, as of this writing there's still a bug in the SDK where, very rarely, the wrong base value gets "stuck" and the transaction retries continuously (failing with maxretry every so often) until you restart the process.
Good luck! I still use transactions in my server, where failures can be retried easily and I have multiple processes running, but have given up on using them on the client -- they're just too unreliable. In my opinion it's often better to redesign your data structures so that transactions aren't needed.
I have an Angular application. I have written some test cases for the login page, checking normal login scenarios.
describe('Login screen tests', function () {
    var ptor = protractor.getInstance();

    beforeEach(function () {
        ptor.get('http://url:3000/login/#/');
    });

    it('Blank Username & Password test', function () {
        ptor.findElement(protractor.By.id("submit")).click();
        var message = ptor.findElement(protractor.By.repeater('message in messages'));
        message.then(function (message) {
            expect(message.getText()).toContain('Username or Password can\'t be blank');
        });
    });

    it('Blank Password test', function () {
        ....
    });

    it('Invalid username test', function () {
        ....
    });

    ...... // Similarly, more test cases follow for the login screen.
});
The tests run properly as expected.
Problem: the tests are very slow; the suite takes about 1.5 minutes to complete. If I run the same tests with Selenium via Java, it takes only around 2-3 seconds, which is closer to what I'd expect.
I want to use Protractor because the application is built entirely on Angular.
My guess is that there is a default timeout of, say, 300 ms after each test, which makes the tests slow: even when the check is done, the test waits out the timeout.
Is there some polling mechanism so that a test can move on as soon as it completes, before the timeout? I tried using done() as in Jasmine, but done() gives an error; on inquiry I learned that done() is internally patched by Protractor.
There is a chance the slowness is related to protractor waiting to sync with each page each time.
You can disable that feature with
ptor.ignoreSynchronization = true;
Keep in mind that the option was intended for non-Angular pages, as a temporary measure. But if speed is more important to you, I guess you can go with it.
Note ptor is the old syntax, you should upgrade and start using browser instead, like this:
browser.ignoreSynchronization = true;
If you start experiencing flaky tests, I suggest you go back to the default value of false on the specific pages that have random issues.