I've written some functional tests for the Intern, which supposedly should work on SauceLabs, BrowserStack, TestingBot, or my own Selenium grid.
The same code doesn't seem to work on all services though. I initially got the functional tests working on SauceLabs, so I'm using that service as my "base", so to speak.
On BrowserStack, the tests appeared to fail because the commands were executing too quickly. For example, I am using .pressKeys('TEST\nIN\nPROGRESS\n'), where \n is supposed to execute javascript on the page to turn the previous text into a tag (like the SO tags for this question: [intern] [javascript] [testing]).
That command should result in the following:
[TEST] [IN] [PROGRESS]
but instead results in
[TESTIN] [PROGRESS]
causing my assertions to fail. Changing the pressKeys command to
.pressKeys('TEST\n')
.sleep(500)
.pressKeys('IN\n')
.sleep(500)
.pressKeys('PROGRESS\n')
did not solve the issue. The test would pass / fail inconsistently, with the tags sometimes coming out as [TEST] [IN] [PROGRESS], and sometimes as [TESTIN] [PROGRESS].
Another example is that it wouldn't always wait for the next page to load when I .click() on a link, even with a .sleep() command after.
With regards to TestingBot, the application flat-out failed to upload files, and I could not for the life of me figure out how to enable the file_watcher service required to do so. They have a file upload example here, but I don't know how to configure the Intern to do this for me.
Isn't the Intern supposed to take care of these differences in the cloud providers for the tests?
Is there some standardized way of writing my tests in the Intern so that I can change my cloud testing provider without changing the tests themselves?
It should be possible to run the same test suite against any of the cloud-hosted Selenium providers and have it execute successfully, but there are some things you must do:
You need to make sure you’ve correctly configured providers so they all run the same version of Selenium. There is no standard for this; each provider uses a different key to decide which Selenium version to run. Check each provider’s documentation for the correct key to use (a config sketch follows the next point).
You need to write tests that don’t have race conditions. What you’re describing here sounds like a classic race condition: you perform some action that completes asynchronously, so the test only passes in environments that happen to execute operations within a certain window of time. Modifying this specific test so that it sets a find timeout and then tries to find the element you expect to be generated when the return key is hit should be a good solution, since this lets you wait as long as necessary without making your test slow.
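For example, a minimal sketch of that approach using Leadfoot's Command methods (the selectors here are hypothetical placeholders for your tag widget):

return this.remote
    // wait up to 10 seconds for asynchronously created elements before a find fails
    .setFindTimeout(10000)
    .findByCssSelector('.tag-input')         // hypothetical selector for the input
        .pressKeys('TEST\n')
        .end()
    // this find only resolves once the page has turned the text into a tag,
    // so the remaining keys are not sent too early
    .findByCssSelector('.tag')               // hypothetical selector for a generated tag
        .end()
    .findByCssSelector('.tag-input')
        .pressKeys('IN\nPROGRESS\n')
        .end();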
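And for the first point, the Selenium version is pinned through provider-specific capabilities in the Intern environments config. The key names below are assumptions drawn from each provider's documentation, so verify them before relying on this sketch:

// intern config sketch; capability key names vary by provider
environments: [
    {
        browserName: 'chrome',
        seleniumVersion: '2.48.2',                  // Sauce Labs (assumed key)
        'browserstack.selenium_version': '2.48.2'   // BrowserStack (assumed key)
    }
]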
Unfortunately, even with this advice, all of the cloud-hosted providers for Web browser testing are garbage and mess with things in a way that randomly causes tests to break. BrowserStack is the best by far at avoiding this, but even they do things to break tests from time to time that work perfectly well in a locally hosted Selenium installation.
For file uploads, Intern will automatically upload files if it detects that the remote provider supports it and you type a valid path to a file on the server where intern-runner is running. You can check whether the server supports uploads by looking at this.remote.session.capabilities.remoteFiles. Feature detection must be turned on for this to work, and you should run Intern 3.0.6 or newer if you are trying to upload files to a Selenium server on the same machine as intern-runner.
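A minimal sketch of what that looks like in a functional test, assuming a hypothetical <input type="file"> id and a fixture file that exists on the machine running intern-runner:

return this.remote
    .findById('file-upload')   // hypothetical id of the <input type="file">
        // typing a valid local path makes Intern upload the file first
        // when this.remote.session.capabilities.remoteFiles is true
        .type('/absolute/path/on/the/intern-runner/machine/fixture.txt')
        .end();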
Related
I've written some E2E tests in cypress for a react application.
My team plans to put these tests in the CI/CD pipeline. The problem is that the React app checks the login on every URL visit, logs in, and then continues with the E2E test.
In every "it" test, I visit the URL and have a wait of 1000 ms to let the page load properly. The problem is that there are a lot of tests, which makes this really slow. One complete test group takes around 4000-5000 ms, and there would be more than 10-20 test groups. This would become really slow during CI/CD.
Another problem is that a lot of these tests do their typing using the .type() function, which is really slow. Is there any workaround for this?
The last problem I notice is that, even when the elements have been rendered, the tests sometimes fail saying that the element was not found or was detached from the DOM; but when I look at the web page at that moment, I can clearly see the element. On re-running, the tests pass. This makes them very unreliable, and they also sometimes fail in headless mode (which I assume will be used in CI/CD). Any comments on this?
Any suggestions/opinions on cypress + react in CI/CD?
To run the tests faster, use cypress-parallel (it can help to solve problem #2):
https://www.npmjs.com/package/cypress-parallel
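A minimal setup sketch, assuming the current cypress-parallel CLI (the flag names may differ between versions, so check the package README):

npm install cypress-parallel --save-dev
# add a plain runner script to package.json, e.g. "cy:run": "cypress run"
npx cypress-parallel -s cy:run -t 4 -d cypress/e2e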
I am currently load testing my company's new webpage and have used JMeter for this task. We have an assessment which pulls JavaScript down locally to the user's machine, allowing them to take a test; once completed, the tests are uploaded back to the database.
The issue I'm having is that, as is well documented, JMeter is not a browser and does not interact with JavaScript. We need a way to test the time it takes for requests to browse to the assessment page and how long the JavaScript takes to pull down. We also need to ramp up the number of requests over a specific period of time so we can determine at what point the server falls over.
I have also tried using Gatling, but I am running into the same issue. Has anyone else run into these problems, and how did they get around them?
Thanks in advance!
Very few tools run downloaded JS code in the client/VU threads when executing a load test, for performance reasons mainly. You can try Selenium Grid or some online service based on it, like https://www.loadbooster.com/ or Blazemeter, but if you want to run tests in your own environment, Selenium Grid may be your only choice.
The alternative is to emulate the client-side JS code when you script the load test scenario. Many tools can do that at some level, but to make translation of existing JS code easier, I would choose a tool that offers a real scripting language, such as Grinder, Locust, Wrk or k6 (or possibly Gatling). k6 may be the simplest, as you script it in JavaScript, so translating the client-side JS code should be somewhat less work there (a minimal sketch follows the links below).
https://k6.io
https://gatling.io/
https://locust.io
https://github.com/wg/wrk
http://grinder.sourceforge.net/
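For illustration, a minimal k6 sketch that downloads the assessment page and uploads results while ramping the number of virtual users up over time (the URLs and payload are placeholders):

// load-test.js -- run with: k6 run load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
    // ramp the load up in stages to find the point where the server falls over
    stages: [
        { duration: '2m', target: 50 },    // ramp up to 50 virtual users
        { duration: '5m', target: 200 },   // then up to 200
        { duration: '2m', target: 0 },     // ramp back down
    ],
};

export default function () {
    // download the assessment page (placeholder URL)
    const page = http.get('https://example.com/assessment');
    check(page, { 'page loaded': (r) => r.status === 200 });

    // emulate the client-side JS by posting the results directly (placeholder payload)
    const res = http.post(
        'https://example.com/assessment/results',
        JSON.stringify({ score: 42 }),
        { headers: { 'Content-Type': 'application/json' } }
    );
    check(res, { 'results uploaded': (r) => r.status === 200 });

    sleep(1);
}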
You need to split your test into 2 major areas:
JavaScript processing happens solely on the client side, so you need to test it separately. The majority of modern web browsers come with developer tools that allow testing the performance of JavaScript execution:
Firefox Developer Tools - Performance
Chrome DevTools - Analyze Runtime Performance
Profiling JavaScript performance
The server-side impact is just downloading the JavaScript and uploading the results; you should be able to mimic the corresponding calls using JMeter's HTTP Request sampler. Check out the Performance Testing: Upload and Download Scenarios with Apache JMeter article for more details.
See the built-in JavaScript profiler in every browser's developer tools. JavaScript performance is a question tied to a single browser/OS/machine/running apps/browser extensions, not one of multi-user performance.
I'm starting to introduce TDD into an existing JavaScript/jQuery project.
Currently, I'm testing with Mocha and Chai under Grunt in a CLI shell in Emacs.
This works nicely for the parts of the code that are side-effect-free, synchronous, don't use jQuery, etc.
I've found many online articles addressing individual issues in setting up a more inclusive test environment, but I've not managed to find a good getting-started guide, without diving into the weeds of competing libraries and setups.
I don't need a "best" answer, nor anything too fancy. I don't even need mock button presses or user-input; I'm happy just testing my handler code.
Just looking for a guide or set of recommended best practices to test client-side JavaScript code where:
The existing code uses jQuery and AJAX;
The test environment should be running continuously;
The test environment should be launched from my gruntfile. Or, I'd be ok moving to gulp or any other similar driver.
Ideally, I'd like the tests to be running in an Emacs buffer. But, if need be, I'd be ok having it running in another window that I can stick in the corner of my screen;
Tests should run reasonably fast. I want them to trigger automatically on every file save.
I think I'm describing a very vanilla set of test requirements, so I'd expect there to be common answers. But, my search-fu must be low today because I'm not finding what I want.
If you're using Mocha and Chai, then you already have the basics set up.
If your code under test modifies the document, you can substitute an artificial document for your tests (via jsdom).
If your code under test fires Ajax calls and you'd like to test them, you can use sinon to install a fake XMLHttpRequest provider; sinon also offers convenient fakes for setTimeout and friends.
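A minimal sketch of the fake XMLHttpRequest approach with sinon's fake server (the URL and response are hypothetical):

const sinon = require('sinon');

// replace the global XMLHttpRequest with sinon's fake server
const server = sinon.fakeServer.create();
server.respondWith('GET', '/api/items',
    [200, { 'Content-Type': 'application/json' }, '[{"id": 1, "name": "first"}]']);

// ... exercise the code under test that fires the Ajax call ...

server.respond();   // flush queued requests to the registered handlers
server.restore();   // put the real XMLHttpRequest back after the test

// sinon.useFakeTimers() works similarly for setTimeout and friends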
If the code under test uses jQuery, then you can either separate the jQuery-dependent part, or just run jQuery on the server using the jsdom document. jQuery installs with npm easily.
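A minimal sketch of driving jQuery against a jsdom document in Node (assuming the jsdom and jquery npm packages):

const { JSDOM } = require('jsdom');

// build an artificial document for the test
const dom = new JSDOM('<!doctype html><html><body><ul id="list"></ul></body></html>');

// requiring jquery without a global window returns a factory; hand it the jsdom window
const $ = require('jquery')(dom.window);

// code under test can now manipulate the fake DOM
$('#list').append('<li>first</li>');
console.log($('#list li').length);   // 1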
If all of this doesn't seem realistic enough for your purpose and you'd like a truer environment, you can have a look at karma - it's an automation tool that can open a browser in the background, run any tests inside it and report the errors in the console. It's much slower than mocha, but you get to run your code (and tests) in a real browser, perhaps even several browsers at the same time.
Both tools have their place: you could use mocha for testing vanilla JS and simple DOM modification (also e.g. React components if you're into that), and resort to karma for writing slower, more realistic tests that depend more on real browser behaviour.
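If you go the karma route, a minimal config sketch might look like this (assuming the karma-mocha, karma-chai and karma-chrome-launcher plugins are installed):

// karma.conf.js
module.exports = function (config) {
    config.set({
        frameworks: ['mocha', 'chai'],
        files: ['src/**/*.js', 'test/**/*.spec.js'],
        browsers: ['ChromeHeadless'],
        autoWatch: true,    // re-run the tests on every file save
        singleRun: false
    });
};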
I'm stuck with an issue with a cucumber test suite and I can't think of any way of debugging it.
We have a pretty sizeable suite of Cucumber features, all of which pass on the development machines. The problem is that a couple of scenarios fail when we run the whole suite on our CI server, and running them individually makes them pass or fail (apparently) randomly when the scenario tries to fill a form (that apparently isn't on the page). Because of the random failures I thought it was a timing problem with an Ajax request, but that doesn't seem to be the case, because adding a really big sleep (I tried everything from 1 to 60 seconds) doesn't change anything. This scenario is even more fun because there are another 3 scenarios running the same steps that fail in the first one, in the same order, and these pass, except if I delete the first scenario, in which case the first one to run those steps is the one that fails.
Are there any tricks for debugging this sort of weirdness in Cucumber features? (Keep in mind these scenarios always pass on the dev machines; the problem is on the CI server.)
Thanks!
I've also had the opportunity to debug intermittent test failures that only reproduce on CI. In my experience the problem always boiled down to a few basic causes:
Race conditions in the front-end. For example, enabling a form for input before an xhr callback adds default values.
"Optimistic" writes from the front-end. Sometimes a front-end engineer makes an action involving a PUT/POST request more responsive by ignoring the result. In this case, there's no way to get Cucumber to wait until the request has completed, so a test against the state change in the database will have a race with the application.
Requests to resources that aren't available in the test fixture. For example, requests to 3rd party APIs might be blocked from CI. Sometimes URLs are not constructed correctly in the test environment, particularly when they are built "by hand", instead of using Rails helpers.
Intermittent Cucumber failures are always challenging to debug. Don't give up! It's worth the effort to figure out how to build a testable, race-free front-end. You can use capybara-webkit to debug the CI-only failures. Get the JavaScript console output printed out on CI, and then you can add prints to your JavaScript to trace its state up to the point of test failure. You can also hack capybara-webkit to print out information about requests made by the front-end. Here is an example: https://github.com/joshuanapoli/capybara-webkit/commit/96b645073f7b099196c5d3da4653606e98a453e4
I want to do a smoke test in order to test the connection between my web app and the server itself. Does someone know how to do this? In addition, I want to do acceptance tests to test my whole application. Which tool do you recommend?
My technology stack is: Backbone, require.js and jQuery Mobile, with Jasmine for BDD tests.
Regards
When doing BDD you should always mock the collaborators. The tests should run quickly and not depend on any external resources such as servers, APIs, databases etc.
The way you would do this in, for example, Jasmine is to declare a spy that pretends to be the server. You then move on to defining what the spy's response would be in a particular scenario or example.
This is the best approach if you want your application to be environment-independent, which is very much needed when running Jenkins jobs; building a whole infrastructure around the job would be hard to reproduce.
Make spy/mock objects that represent the server and in your specs define how the external sources behave - this way you can focus on what behavior your application delivers under specified circumstances.
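A minimal Jasmine sketch, with a hypothetical api object and view standing in for the server and the code under test:

describe('item list', function () {
    it('renders items returned by the server', function () {
        // the spy pretends to be the server: no network, no real backend
        spyOn(api, 'fetchItems').and.callFake(function (onSuccess) {
            onSuccess([{ id: 1, name: 'first' }]);
        });

        view.load();   // hypothetical code under test that calls api.fetchItems

        expect(api.fetchItems).toHaveBeenCalled();
        expect(view.$('li').length).toBe(1);
    });
});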
This isn't a complete answer, but one tool we've been using for our very similar stack is mockJSON. It's a jQuery plugin that does a nice job both:
intercepting calls to a URL and instead sending back mock data and
making it easy to generate random mock data based on templates.
The best part is that it's entirely client side, so you don't need to set up anything external to get decent tests. It won't test the actual network connection to your server, but it can do a very good job validating the type of data your server would be kicking back. FWIW, we use Mocha as our test framework and haven't had any trouble getting this to integrate with our BDD work.
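A rough sketch of what that looks like, with a hypothetical endpoint (the template syntax and data keywords are as described in the mockJSON README, so double-check them there):

// intercept GET /api/users and return generated data instead
$.mockJSON(/\/api\/users/, {
    'users|3-5': [{                 // between 3 and 5 entries
        'id|1-100': 0,              // random integer between 1 and 100
        'name': '@MALE_FIRST_NAME @LAST_NAME',
        'email': '@EMAIL'
    }]
});

// any subsequent $.getJSON('/api/users', ...) now receives the generated mock data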
The original mockJSON repo is still pretty good, though it hasn't been updated in a little while. My colleagues and I have been trying to keep it going with patches and features in my own fork.
I found a blog post where the author explains how to use Capybara, Cucumber and Selenium outside a Rails application, so it can be used to test a JavaScript app. Here is the link: http://testerstories.com/?p=48