I've written some E2E tests in Cypress for a React application.
My team plans to put these tests into the CI/CD pipeline. The problem is that the React app checks the login on every URL visit, logs in, and only then continues with the E2E test.
In every "it" test, I visit the URL and have a wait of 1000ms implemented to let the page load properly. The problem is, that there are a lot of tests that make this testing really slow. One complete test group takes around 4000-5000ms and there would be more than 10-20 test groups. This would become really slow during the CI/CD.
Another problem is that a lot of these tests enter text using the .type() function, which is really slow. Is there any workaround for this?
The last problem I notice is that even when the elements have been rendered, the tests sometimes fail saying that the element was not found or was detached from the DOM, yet when I look at the web page at that moment I can clearly see the element. On re-running, the tests pass. This makes them very unreliable, and they also fail sometimes in headless mode (which I assume will be used in CI/CD). Any comments on this?
Any suggestions/opinions on cypress + react in CI/CD?
To run the tests faster, use cypress-parallel (it can help solve problem #2):
https://www.npmjs.com/package/cypress-parallel
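For the other problems, here's a rough sketch of what usually helps (Cypress 12+ syntax; the selectors, routes, and login fields below are assumptions about your app, not taken from your code):

    describe('dashboard', () => {
      beforeEach(() => {
        // Login problem: log in once and cache the session cookies/storage
        // with cy.session(), instead of logging in on every cy.visit()
        cy.session('default-user', () => {
          cy.visit('/login');                       // hypothetical login route
          cy.get('#username').type('user@example.com');
          cy.get('#password').type('secret', { log: false });
          cy.get('button[type=submit]').click();
          cy.url().should('include', '/dashboard'); // wait for a real signal
        });

        cy.visit('/');
        // Slowness problem: replace cy.wait(1000) with an assertion; Cypress
        // retries it until the app is ready, and waits no longer than needed
        cy.get('[data-cy=app-ready]').should('be.visible');
      });

      it('fills the form quickly', () => {
        // .type() problem: it waits 10 ms per keystroke by default; drop it
        cy.get('textarea[name=comment]').type('some long text...', { delay: 0 });
      });
    });

The assertion-based waiting also tends to cure the "detached from the DOM" flake: each retry re-queries the element instead of holding a stale reference across a React re-render.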
Apologies if this sounds strange, but I'll try to describe what's happening as best I can. We were provided a small service that provides a single form UI, whose results we capture via an event. We've integrated this into two applications, and we're seeing different behavior when we load the UI in a modal window via an iframe. When integrated into one UI it loads pretty quickly. However, in the other UI it takes several seconds to load. Now, the only difference I could find is that a setTimeout fires several seconds after the modal is created. I discovered this using the Firefox developer tools in the Performance tab.
Now, I believe that the UI for this form is built in an older version of Angular (AngularJS?), based on some Google searches using strings that I could see in a minified polyfill.xxxx.js file. However, I can't understand the minified code, and I have no version information to help me get back to a version that I can try to read and understand.
I did some testing using the Performance API before the iframe is created, in case the issue was something in my code, but the tested code finishes in < 100 ms, so that didn't appear to be the issue. It's not a network issue either, as the requests complete pretty quickly. Also, both applications reference the same instance of the service, so the only difference is the app it's integrated into.
So, my primary question is: what could cause Angular (AngularJS) to set a timeout on page load? My secondary question is: what advice is there for trying to debug this? I don't use Angular at all, so I'm not even sure where to begin beyond what I've already tried. The only custom app code I see looks to be Angular configuration/properties, so there is no JavaScript of ours to debug.
The best advice with setTimeout() in such a situation is not to use setTimeout() at all.
I ran into the same situation; it's not only Angular, most frameworks treat setTimeout() a bit differently.
What I mean is that the same setTimeout() call in a plain JS app, an AngularJS app, and an Angular app can fire after different actual delays.
setTimeout() is scheduled to run after a certain time, but that delay is only a minimum; it is not guaranteed by the thread.
Sometimes Angular's change detection and watcher lifecycle get tangled up with setTimeout(), and you end up with strange behavior.
So it's better not to use it unless you are sure it won't interfere with the other things that are running. Please share the code snippet if possible.
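If a delay really is needed inside AngularJS (1.x) code, a minimal sketch would use the framework's own $timeout service, which participates in the digest cycle instead of racing with it. The module and controller names here are made up, not taken from the minified bundle in question:

    angular.module('formApp').controller('FormCtrl', function ($scope, $timeout) {
      $scope.ready = false;

      // $timeout triggers a digest when it fires, so the scope change shows
      // up in the view right away; a raw setTimeout() would leave the view
      // stale until something else happens to run a digest
      $timeout(function () {
        $scope.ready = true;
      }, 100);
    });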
I'm starting to introduce TDD into an existing JavaScript/jQuery project.
Currently, I'm testing with Mocha and Chai under Grunt in a CLI shell in Emacs.
This works nicely for the parts of the code that are side-effect-free, synchronous, don't use jQuery, etc.
I've found many online articles addressing individual issues in setting up a more inclusive test environment, but I've not managed to find a good getting-started guide, without diving into the weeds of competing libraries and setups.
I don't need a "best" answer, nor anything too fancy. I don't even need mock button presses or user-input; I'm happy just testing my handler code.
Just looking for a guide or set of recommended best practices to test client-side JavaScript code where:
The existing code uses jQuery and AJAX;
The test environment should be running continuously;
The test environment should be launched from my gruntfile. Or, I'd be ok moving to gulp or any other similar driver.
Ideally, I'd like the tests to be running in an Emacs buffer. But, if need be, I'd be ok having it running in another window that I can stick in the corner of my screen;
Tests should run reasonably fast. I want them to trigger automatically on every file save.
I think I'm describing a very vanilla set of test requirements, so I'd expect there to be common answers. But, my search-fu must be low today because I'm not finding what I want.
If you're using Mocha and Chai, then you already have the basics set up.
If your code under test modifies the document, you can substitute an artificial document for your tests (via jsdom).
If your code under test fires Ajax calls and you'd like to test them, you can use Sinon to install a fake XMLHttpRequest provider. Sinon also offers convenient fakes for setTimeout and family.
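For instance, a minimal sketch with Sinon's fake XHR; the loadUser helper is a made-up stand-in for your own Ajax code:

    const sinon = require('sinon');          // npm install --save-dev sinon
    const { expect } = require('chai');

    // Hypothetical code under test: a plain-XHR helper
    function loadUser(id, cb) {
      const xhr = new XMLHttpRequest();
      xhr.open('GET', '/users/' + id);
      xhr.onload = function () { cb(JSON.parse(xhr.responseText)); };
      xhr.send();
    }

    describe('loadUser', function () {
      let fakeXhr, requests;

      beforeEach(function () {
        fakeXhr = sinon.useFakeXMLHttpRequest(); // fake XHR constructor
        global.XMLHttpRequest = fakeXhr;         // make it visible to the helper
        requests = [];
        fakeXhr.onCreate = function (req) { requests.push(req); };
      });

      afterEach(function () {
        fakeXhr.restore();
        delete global.XMLHttpRequest;
      });

      it('parses the user payload', function (done) {
        loadUser(1, function (user) {
          expect(user.name).to.equal('Alice');
          done();
        });
        // Answer the captured request by hand; no server involved
        requests[0].respond(200, { 'Content-Type': 'application/json' },
          '{"name":"Alice"}');
      });
    });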
If the code under test uses jQuery, then you can either separate out the jQuery-dependent part, or just run jQuery on the server against the jsdom document. jQuery installs easily with npm.
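A minimal sketch of that route, assuming jsdom and jquery are installed from npm (the widget here is made up):

    const { JSDOM } = require('jsdom');
    const { expect } = require('chai');

    describe('greeting widget', function () {
      let window, $;

      beforeEach(function () {
        // Fresh artificial document per test so state can't leak between tests
        const dom = new JSDOM('<body><div id="greeting"></div></body>');
        window = dom.window;
        $ = require('jquery')(window);   // bind jQuery to the jsdom window
      });

      it('renders the user name', function () {
        // In real code this line would live in the module under test
        $('#greeting').text('Hello, Alice');
        expect($('#greeting').text()).to.equal('Hello, Alice');
      });
    });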
If all of this seems not realistic enough for your purposes and you'd like a truer environment, you can have a look at Karma. It's an automation tool that can open a browser in the background, run any tests inside it, and report the errors in the console. It's much slower than Mocha, but you get to run your code (and tests) in a real browser, perhaps even several browsers at the same time.
Both tools have their place: you could use Mocha for testing vanilla JS and simple DOM modification (also, e.g., React components if you're into that), and resort to Karma for writing slower, more realistic tests that depend more on real browser behaviour.
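A minimal karma.conf.js sketch for that route; it assumes the karma-mocha and karma-chai plugins plus a launcher such as karma-phantomjs-launcher are installed, and the file paths are illustrative:

    module.exports = function (config) {
      config.set({
        frameworks: ['mocha', 'chai'],
        files: [
          'node_modules/jquery/dist/jquery.js', // code under test needs jQuery
          'src/**/*.js',
          'test/**/*.spec.js'
        ],
        browsers: ['PhantomJS'],  // or Chrome, Firefox, ... several at once
        reporters: ['progress'],  // results go to the console, not a browser tab
        autoWatch: true,          // re-run automatically on every file save
        singleRun: false
      });
    };

autoWatch covers the "trigger on every file save" requirement, and the console reporter means the run can live in an Emacs compilation buffer or a corner terminal.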
I've written some functional tests for the Intern, which supposedly should work on SauceLabs, BrowserStack, TestingBot, or my own Selenium grid.
The same code doesn't seem to work on all services though. I initially got the functional tests working on SauceLabs, so I'm using that service as my "base", so to speak.
On BrowserStack, the tests appeared to fail because the commands were executing too quickly. For example, I am using .pressKeys('TEST\nIN\nPROGRESS\n'), where \n is supposed to execute javascript on the page to turn the previous text into a tag (like the SO tags for this question: [intern] [javascript] [testing]).
That command should result in the following:
[TEST] [IN] [PROGRESS]
but instead results in
[TESTIN] [PROGRESS]
causing my assertions to fail. Changing the pressKeys command to
.pressKeys('TEST\n')
.sleep(500)
.pressKeys('IN\n')
.sleep(500)
.pressKeys('PROGRESS\n')
did not solve the issue. The test would pass / fail inconsistently, with the tags sometimes coming out as [TEST] [IN] [PROGRESS], and sometimes as [TESTIN] [PROGRESS].
Another example is that it wouldn't always wait for the next page to load when I .click() on a link, even with a .sleep() command after.
With regards to TestingBot, the application flat-out failed to upload files, and I could not for the life of me figure out how to enable the file_watcher service required to do so. They have a file upload example here, but I don't know how to configure the Intern to do this for me.
Isn't the Intern supposed to take care of these differences in the cloud providers for the tests?
Is there some standardized way of writing my tests in the Intern so that I can change my cloud testing provider without changing the tests themselves?
It should be possible to run the same test suite against any cloud-hosted Selenium providers and have them execute successfully, but there are some things you must do:
You need to make sure you’ve correctly configured providers so they all run the same version of Selenium. There is no standard for this; each provider uses a different key to decide which Selenium version to run. Check each provider’s documentation for the correct key to use.
You need to write tests that don’t have race conditions. What you’re describing here sounds like a classic race condition where you are performing some action that completes asynchronously, and so only happens in environments that execute operations within a certain period of time. Modifying this specific test so it has a find timeout and then tries to find the element you expect to be generated when the return key is hit should be a good solution, since this will allow you to wait as long as necessary without making your test slow.
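For example, a race-free version of the tag check might look roughly like this (Intern 3 / Leadfoot command chain; the selectors are guesses at your page, and assert is the usual intern/chai! assert):

    return this.remote
      .findByCssSelector('input.tag-input')   // hypothetical tag input field
        .pressKeys('TEST\n')
        .end()
      .setFindTimeout(10000)                  // poll up to 10 s instead of sleeping
      .findByCssSelector('.tag')              // resolves as soon as a tag renders
        .getVisibleText()
        .then(function (text) {
          assert.strictEqual(text, 'TEST');
        });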
Unfortunately, even with this advice, all of the cloud-hosted providers for Web browser testing are garbage and mess with things in a way that randomly causes tests to break. BrowserStack is the best by far at avoiding this, but even they do things to break tests from time to time that work perfectly well in a locally hosted Selenium installation.
For file uploads, Intern will automatically upload files if it has detected that the remote provider supports it and you type a valid path to a file on the server where intern-runner is running. You can check whether the server supports uploads by looking at this.remote.session.capabilities.remoteFiles. Feature detection must be turned on for this to work, and you should run Intern 3.0.6 or newer if you are trying to upload files to a Selenium server on the same machine as intern-runner.
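In a functional test that might look roughly like this (Intern 3; the page URL, fixture path, and selector are placeholders):

    'file upload': function () {
      if (!this.remote.session.capabilities.remoteFiles) {
        this.skip('remote file upload not supported by this provider');
      }
      return this.remote
        .get('http://localhost:9000/upload.html')  // placeholder test page
        .findByCssSelector('input[type=file]')
          // A path valid on the machine running intern-runner; Intern detects
          // the capability and transfers the file to the remote browser
          .type('/tmp/fixture.txt')
          .end();
    }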
I want to say up front that I'm aware of Ember QUnit (recently covered at EmberConf) as well as of using PhantomJS, so please read my points closely if you're thinking of marking this as a duplicate.
My goal is to run unit tests from the command line, similar to how a Mocha test might run:
mocha simple_test.js
and see the results in the form of a command line reporter.
Testing Ember modules in isolation: I would like to be able to new up an Ember object, route, or controller without the context of a running Ember app (perhaps some kind of Ember test harness) and run assertions against that module.
Testing Ember modules on the command line (avoiding browser reporters like QUnit or headless browsers like PhantomJS).
I already have integration and acceptance tests using a combination of Karma and PhantomJS; I would like to see if I can complement them with more unit tests. Has anybody come across a unit-test setup similar to what I listed above, or is it not really possible and/or productive?
Update
The Ember guides list unit-testing strategies here:
http://emberjs.com/guides/testing/unit/
In my opinion, these seem more like integration tests.
Yeah, I do this with my application. You might like to look at the new testing guides in the Ember site's documentation if you haven't already seen them (they went live last week sometime). I helped edit them. They're pretty good! :-)
Good luck, and let me know if you need any more help; like I say, I run unit tests all the time on all parts of Ember. The hardest so far for me has been components, because they're neither integration nor unit, really... they're like a hybrid: isolated integration-unit tests that still require large parts of Ember and rendering in the view.
I run headless using Guard, Jasmine, and QUnit. Jasmine's my preference, and I've been slowly moving over from QUnit.
http://emberjs.com/guides/testing/
Also, I noticed that what you seem to want is to isolate the units outside of Ember itself. To do that, I'd put your code in separate JavaScript libraries... otherwise you'll have trouble: after all, how are you going to unit test a piece of code without Ember present if it uses Ember?
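As a sketch of what a framework-only unit test can look like (QUnit-style globals with a classic globals-build of Ember, no running app; the Person class is made up):

    // No application, no router, no rendering: just Ember's object model
    var Person = Ember.Object.extend({
      firstName: null,
      lastName: null,
      fullName: function () {
        return this.get('firstName') + ' ' + this.get('lastName');
      }.property('firstName', 'lastName')
    });

    test('fullName concatenates the parts', function () {
      var person = Person.create({ firstName: 'Alice', lastName: 'Smith' });
      equal(person.get('fullName'), 'Alice Smith');
    });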
I'm stuck on an issue with a Cucumber test suite, and I can't think of any way to debug it.
We have a pretty sizeable suite of Cucumber features, all of which pass on the development machines. The problem is that a couple of scenarios fail when we run the whole suite on our CI server, and running them individually makes them pass or fail (apparently) at random when the scenario tries to fill in a form (that apparently isn't on the page). Because of the random failures I thought it was a timing problem with an Ajax request, but that doesn't seem to be the case, because adding a really big sleep (I tried everything from 1 to 60 seconds) doesn't change anything. This scenario is even more fun because there are another 3 scenarios running the same steps, in the same order, as the one that fails, and these pass; except if I delete the first scenario, in which case the first one to run those steps is the one that fails.
Are there any tricks for debugging this sort of weirdness in Cucumber features? (Keep in mind these scenarios always pass on the dev machines; the problem is only on the CI server.)
Thanks!
I've also had the opportunity to debug intermittent test failures that only reproduce on CI. In my experience the problem always boiled down to a few basic causes:
Race conditions in the front-end. For example, enabling a form for input before an XHR callback adds default values (see the sketch after this list).
"Optimistic" writes from the front-end. Sometimes a front-end engineer makes an action involving a PUT/POST request more responsive by ignoring the result. In this case, there's no way to get Cucumber to wait until the request has completed, so a test against the state change in the database will have a race with the application.
Requests to resources that aren't available in the test fixture. For example, requests to 3rd party APIs might be blocked from CI. Sometimes URLs are not constructed correctly in the test environment, particularly when they are built "by hand", instead of using Rails helpers.
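As a sketch of the first cause and its fix (jQuery here; the selectors and endpoint are invented):

    // Racy version: the form is usable before the defaults arrive, so a fast
    // test driver can start typing into fields the callback will overwrite
    $('#signup-form :input').prop('disabled', false);
    $.getJSON('/defaults', function (defaults) {
      $('#email').val(defaults.email);
    });

    // Race-free version: enable the form only inside the callback, so the
    // driver's "wait for an enabled field" lines up with the app's real state
    $('#signup-form :input').prop('disabled', true);
    $.getJSON('/defaults', function (defaults) {
      $('#email').val(defaults.email);
      $('#signup-form :input').prop('disabled', false);
    });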
Intermittent Cucumber failures are always challenging to debug. Don't give up! It's worth the effort to figure out how to build a testable, race-free front-end. You can use capybara-webkit to debug the CI-only failures: get the JavaScript console output printed on CI, and then add prints to your JavaScript to trace its state up to the point of test failure. You can also hack capybara-webkit to print out information about requests made by the front-end. Here is an example: https://github.com/joshuanapoli/capybara-webkit/commit/96b645073f7b099196c5d3da4653606e98a453e4