In the context of web development, users often provide screenshots of their "invalid state".
I use React, and I was wondering if we could debug the opposite way, starting from the current state:
The user gives us a screenshot of the invalid UI state.
Via devtools, we change the props of components to match and find that specific permutation (let's imagine all components are stateless).
Feed the permutation to "a program" that shows us the things in our code that caused the props to change that way.
From a technical point of view I wonder whether this is even possible: can we go backwards in time from a specific state of a program?
This should be possible, since the state is derived from a specific execution path of our code, so the opposite should also be possible, right? I'm interested in this for debugging purposes, but I also feel it would be an interesting exploratory exercise.
Are there any resources or tools that I can look into to find if this is possible?
EDIT: to make this a bit clearer, I'm looking for a way to give the system a "state" as input and get back a possible list of "path traces" in time that my code could execute to arrive at such a state:
showHowWeCanArriveTo({
  name: 'Bob',
  showPopup: true,
  colors: ['blue', 'green']
});
// Returns:
[
  [setName('Bob'), setShowPopup(true), addColor('blue'), addColor('green')]
]
I've used LogRocket before in React for that exact purpose. If you're using Redux, they provide a middleware that logs the user's state and its history, which is accessible from their web dashboard.
You might be looking for time travel debugging, reverse debugging, or replay debugging. Basically, they work by recording what the program is doing as it executes and keeping enough logs that you can then go backwards.
Alternatively, it is possible in principle to use techniques from program analysis (program slicing, reverse program execution, etc.) to try to execute the program in reverse and find all execution paths that might lead to that state. However, there might be many such paths (the further back you go, the more there might be), and implementing such a thing from scratch is technically non-trivial.
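As a toy illustration of that "search backwards" idea, here is a minimal sketch, assuming a Redux-style reducer and a known, finite set of candidate actions (the state shape and action names are taken from your example; the reducer and everything else are made up). It brute-forces action sequences up to a fixed depth and returns the ones that reach the target state:

// Minimal sketch: brute-force search for action sequences that reach a target state.
// The reducer and candidate actions are assumptions for illustration only.
type State = { name: string; showPopup: boolean; colors: string[] };
type Action =
  | { type: 'setName'; payload: string }
  | { type: 'setShowPopup'; payload: boolean }
  | { type: 'addColor'; payload: string };

const initialState: State = { name: '', showPopup: false, colors: [] };

function reducer(state: State, action: Action): State {
  switch (action.type) {
    case 'setName': return { ...state, name: action.payload };
    case 'setShowPopup': return { ...state, showPopup: action.payload };
    case 'addColor': return { ...state, colors: [...state.colors, action.payload] };
    default: return state;
  }
}

// Breadth-first search over action sequences up to maxDepth.
// The number of paths explodes combinatorially: exactly the "many such paths" problem above.
function showHowWeCanArriveTo(target: State, candidates: Action[], maxDepth = 4): Action[][] {
  const matches: Action[][] = [];
  const queue = [{ state: initialState, path: [] as Action[] }];
  while (queue.length > 0) {
    const { state, path } = queue.shift()!;
    if (JSON.stringify(state) === JSON.stringify(target)) {
      matches.push(path);
      continue;
    }
    if (path.length >= maxDepth) continue;
    for (const action of candidates) {
      queue.push({ state: reducer(state, action), path: [...path, action] });
    }
  }
  return matches;
}

Even this toy version shows why the practical tools record the actual execution instead: without a recorded trace, the search space grows exponentially with the number of candidate actions and the depth.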
I recently watched a number of talks from the AssertJS conference (which I highly recommend), among them #kentcdodds "Write Tests, Not Too Many, Mostly Integration." I've been working on an Angular project for over a year, have written some unit tests, and just started playing with Cypress, but I still feel this frustration around integration tests, and where to draw the lines. I'd really love to talk to some pro who does this day in and day out, but I don't know any where I work. Since I'm tired of not being able to figure this out, I thought I'd just ask the world here, cause you all are fantastic.
So in Angular (or React or Vue, etc), you have component code, and then you have the HTML template, and usually they interact in some way. The component code has functions in it that can be unit tested, and that part I'm ok with.
Where I haven't gotten things straight in my mind is, do you call it an integration test when you're testing how a component function changes the UI? If you're testing that kind of thing, should that be done just in E2E tests? Because Angular/Jasmine(or Jest) lets you do this kind of thing, referencing the UI:
const el = fixture.debugElement.queryAll(By.css('button'));
expect(el[0].nativeElement.textContent).toEqual('Submit')
But does that mean you should? And if you do, then do you not cover that in your E2E tests?
And regarding integration with things like services, how far do you go with integrating? If you mock the actual HTTP call, and just test that it would get called with the right functions, is that an integration test, or is it still a unit test?
To sum up, I intuitively know what I need to test to have confidence that things are working as they should, I'm just not sure how to discern when something requires all three kinds of tests or not.
I know this is getting long, but here's my example app:
There's a property called hasNoProducts that is set after a product is chosen and data is returned from the server (or not if there is none). If hasNoProducts is true, UI (through an *ngIf) shows that "Sorry" message. If false, then other selections become available. Depending on the product picked, those options change.
So I know I can write a unit test and mock the HTTP request so that I can test that hasNoProducts is set correctly. But then, I want to test that the message is displayed, or that the additional options are displayed. And if there is data, test that switching the product changes the data in the other lists that would subsequently show on screen. If I do that using Angular/Jasmine, is it an integration test since I'm "integrating" component and template? If not, then what would be an integration test?
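Concretely, the kind of test I mean would look roughly like this (component, service and selector names are simplified/made up, and the service is stubbed so there's no real HTTP):

// Rough sketch with made-up names: exercises the component class together with its template.
import { TestBed, ComponentFixture } from '@angular/core/testing';
import { By } from '@angular/platform-browser';
import { CommonModule } from '@angular/common';
import { of } from 'rxjs';
import { ProductComponent } from './product.component';
import { ProductService } from './product.service';

describe('ProductComponent with its template', () => {
  let fixture: ComponentFixture<ProductComponent>;

  beforeEach(() => {
    TestBed.configureTestingModule({
      imports: [CommonModule],
      declarations: [ProductComponent],
      // Stub the service: the server returns no products for this scenario.
      providers: [{ provide: ProductService, useValue: { getProducts: () => of([]) } }]
    });
    fixture = TestBed.createComponent(ProductComponent);
  });

  it('shows the "Sorry" message when hasNoProducts is true', () => {
    fixture.componentInstance.selectProduct('some-product'); // hypothetical method
    fixture.detectChanges(); // re-render the template after the state change
    const sorry = fixture.debugElement.query(By.css('.sorry-message'));
    expect(sorry).not.toBeNull();
  });
});

Is that a unit test or an integration test, and if I have it, do I still repeat the same check in an E2E test?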
I could keep asking questions, but I'll stop there in the hopes that someone has read this far and has some insight. Again, I've read tons of articles, watched tons of videos and done tutorials, but every time I sit down to apply to a real project, I get stuck on things like this, and I want to get past this! Thanks in advance.
What distinguishes unit-tests and integration-tests (and then subsystem-tests and system-tests) is the goal that you want to achieve with the tests.
The goal of unit-testing is to find those bugs in small pieces of code that can be found if these pieces of code are isolated. Note that this does not mean you truly must isolate the code, but it means your focus is the isolated code. In unit-testing, mocking is very common, since it allows you to stimulate scenarios that are otherwise hard to test, and it speeds up build and execution times etc., but mocking is not mandatory: for example, you would not mock calls to a mathematical sin() function from the standard library, because the sin() function does not keep you from reaching your testing goals. But leaving the sin() function in does not turn these tests into integration tests. Strictly speaking, you could even have unit-tests where some real network accesses take place (if you are too lazy to mock the network access), but due to the non-determinism, delays etc. these unit-tests would be slow and unreliable, which means they would simply not be well suited to specifically finding the bugs in the isolated code. That's why everybody says that "if there is some real network access, it is not a unit-test", which is not formally but practically correct.
Since in unit-testing you intentionally only focus on the isolated code, you will not find bugs that are due to misunderstandings about interactions with other components. If you mock some depended-on-component, then you implement these mocks based on your understanding of how the other component behaves. If your understanding is wrong, your mock implementations will reflect your wrong understanding, and your unit-tests will succeed, although in the integrated system things will break. That is not a flaw of unit-testing, but simply the reason why there are other test levels like integration testing. In other words, even if you do unit-testing perfectly, there will unavoidably remain some bugs that unit-testing is not even intending to find.
Now, what are integration tests then? They are defined by the goal of finding bugs in the interactions between (already tested) components. Such bugs can, for example, be due to mutual misconceptions of the developers of the components about how an interface is meant to work. For example, take a library component B that is used by A: Does A call functions from the right component B (rather than from C)? Do the calls happen while B is already in a proper state (B might not be initialized yet, or be in an error state)? Do the calls happen in the proper order? Are the arguments provided in the correct order, and do they have values in the expected form (e.g. zero-based vs. one-based index, null allowed or not)? Are the return values provided in the expected form (returned error code vs. exception), and do they have values in the expected form? And that is just one integration scenario - there are many others, like components exchanging data via files (binary or text? which end-of-line marker: unix, dos, ...?).
There are many possible interaction errors. To find them, in integration testing you integrate the real components (the real A and the real B, no mocks, but possibly mocks for other components) and stimulate them such that the different interactions actually take place - ideally in all interesting ways, for example by trying to force some boundary cases in the interaction (the exchanged file is empty, ...). Again, just the fact that a test operates on software in which some components are integrated does not make it an integration test: only if the test is specifically designed to initiate interactions such that bugs in these interactions become apparent is it an integration test.
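To make that concrete, a minimal sketch of such a test might look like this (OrderSummary and PriceCalculator are made-up components; the point is that both are real, nothing between them is mocked, and the test deliberately drives boundary cases of their interaction):

// Hypothetical integration test: real A (OrderSummary) wired to real B (PriceCalculator).
import { OrderSummary } from './order-summary';
import { PriceCalculator } from './price-calculator';

describe('OrderSummary <-> PriceCalculator integration', () => {
  it('handles the boundary case of an empty order', () => {
    const summary = new OrderSummary(new PriceCalculator()); // no mock in between
    // If A assumed B throws on empty input while B actually returns 0
    // (a typical interface misconception), this test would expose it.
    expect(summary.totalFor([])).toBe(0);
  });

  it('passes line items to B in the form B expects', () => {
    const summary = new OrderSummary(new PriceCalculator());
    // e.g. B expects net unit prices and quantities, not pre-computed gross totals.
    expect(summary.totalFor([{ net: 10, quantity: 2 }])).toBe(20);
  });
});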
Subsystem tests (which are the next level) then focus, again, on the remaining bugs, that is, those bugs which neither unit-testing nor integration testing intend to find. Examples are requirements on the component C that were not considered when C was decomposed into A and B, or cases where C is built using some outdated version of A in which some bug was still present. However, when climbing up from unit-testing via integration testing to subsystem testing and above, it is a challenge to stay focused: to only have tests for bugs that could not have been found before, and not to, say, repeat unit-tests at the subsystem level.
Is the Page Object Model still the best way to automate web applications?
To me, the Screenplay pattern seems like a nicer way to manage it.
Or what about JavaScript alternatives, like acceptance testing with Cucumber.js & WebdriverIO, for example?
Your question makes it sound like there is only one answer, but actually you can merge these all together if you code it correctly.
Page Object Model
This is good for separation of the elements on the page from the rest of your code. If implemented correctly, a change on the page can be corrected for all scenarios using that page by simply changing one line in the POM.
nextButton.click();
fullName.sendKeys("John Doe");
Screenplay Pattern
This is good for separation of actions that occur on the different pages, the different workflows.
james.attemptsTo(
  goToTheNextPage(),
  fillOutHisDetails()
);
If the journey's workflow changes slightly, the idea is that you can simply reorder the Screenplay pattern, or remove the actions that are no longer necessary.
In the example above, if the business were to decide that the registration form should be a single page instead of multiple, it would make more sense to delete this single line:
goToTheNextPage(),
instead of deleting the two lines that I would otherwise have put in:
driver.findElement({css:"#next"}).click();
driver.findElement({css:"#registrationDetails"});
CucumberJS + WebdriverIO
This is good to portray the information of a scenario in pure business language.
Scenario: I am filling out the registration form
  Given I am a new user
  And I want to register for an account
  When I fill out the registration form
  Then I should be able to log in
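Each of those lines is then backed by a step definition; with CucumberJS and WebdriverIO they might look roughly like this (the URL and selectors are made up):

// Hypothetical step definitions (CucumberJS + WebdriverIO); selectors are made up.
import { Given, When, Then } from '@cucumber/cucumber';

Given('I am a new user', async () => {
  await browser.url('/register');
});

Given('I want to register for an account', async () => {
  await $('#register').click();
});

When('I fill out the registration form', async () => {
  await $('#fullName').setValue('John Doe');
  await $('#submit').click();
});

Then('I should be able to log in', async () => {
  await expect($('#welcome')).toBeDisplayed();
});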
Merging them
If you want truly human-readable code, you can merge all 3 of these.
You have your feature file at business level language.
You have your step definitions written with the Screenplay pattern in mind.
You have the Screenplay pattern steps written with the Page Object Model.
This may seem like a lot of layers, but it means the business will be able to understand the scenarios, testers and developers looking back over the code will understand the workflow of a given journey, and when debugging deeper into the code, a tester can change element values across multiple journeys by changing only one line of code.
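A rough sketch of how those layers can sit on top of each other (all of these names are made up; nothing here comes from a particular Screenplay library except Given/When/Then from @cucumber/cucumber):

// registration.steps.ts: the step definition only knows about tasks, not selectors.
import { When } from '@cucumber/cucumber';
import { james } from './actors';                    // hypothetical actor setup
import { fillOutHisDetails } from './registration.tasks';

When('I fill out the registration form', async () => {
  await james.attemptsTo(fillOutHisDetails());
});

// registration.tasks.ts: the task describes the workflow in terms of the page object.
import { RegistrationPage } from './registration.page';

export const fillOutHisDetails = () => async (page: RegistrationPage) => {
  // The task never touches selectors directly; only the page object knows the markup.
  await page.fullName.sendKeys('John Doe');
  await page.nextButton.click();
};

A change in the markup then only touches the page object, a change in the workflow only touches the task, and a change in the business wording only touches the feature file.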
A question for the testers out there:
Consider that you're writing a test like this: you open a modal, flip a toggle to "on", and save its state (which closes the modal), then open it again to check that the state was saved successfully. You also have to check that the toggle, when flipped back to "off", saves successfully and maintains the toggle's "off" state.
Is it reasonable to write your tests so that they chain off the previous test? It feels painfully inefficient to not chain them, especially if your starting process involves logging in, navigating to some page, clicking some tab, then getting started on the stuff you actually want to test.
Although flipping to "on" and flipping to "off" are, strictly speaking, separately testable items, there's no need for you to test them in complete isolation - especially if it means that you have to reset data, re-do logins and navigation etc.
If you're confident that the two tests work together without one interfering with or polluting the other one, then just carry on with one single test.
Automated testers do need to be aware of the efficiency of their tests, and of the amount of resources and time it takes to run the complete suite, because if the tests are too slow or expensive there will be pressure to cut back.
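As a rough sketch of the single-test approach (WebdriverIO-style API; all selectors and credentials are made up): do the expensive login and navigation once, then verify both the on and the off transition in the same spec:

// Hypothetical end-to-end spec (WebdriverIO-style); selectors and helpers are made up.
describe('notification toggle', () => {
  before(async () => {
    // Do the expensive setup once for the whole flow.
    await browser.url('/login');
    await $('#user').setValue('tester');
    await $('#password').setValue('secret');
    await $('#signin').click();
    await $('#settingsTab').click();
  });

  it('persists the toggle when flipped on and then off', async () => {
    await $('#openModal').click();
    await $('#toggle').click();          // flip to "on"
    await $('#save').click();            // saving closes the modal

    await $('#openModal').click();
    await expect($('#toggle')).toBeChecked();      // the "on" state was saved

    await $('#toggle').click();          // flip back to "off"
    await $('#save').click();

    await $('#openModal').click();
    await expect($('#toggle')).not.toBeChecked();  // the "off" state was saved too
  });
});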
I was wondering if I could do all of these in JavaScript, as opposed to having Rails helpers.
In my application, there are several types of "if-else" UI cases. This is a fairly common scenario, and I wonder if anyone has a tidy way to handle them?
E.g. 1) There might be several types of links with the same behavior: "upvote", etc.
If it is the user's own article, I do not want the request to be sent to the server; instead, a dialog box should pop up.
E.g. 2) There might be several links such as "follow", "become a fan", etc.
If the user has already performed the given action before, it should be the text "followed" instead of a link.
Normally you'd use helpers for this. Based on your examples, you would write helpers called upvote_button (which might take a user as a parameter) or follow_button (which might take true/false for already-following/not-following).
If you can be more specific as to what you need, I can probably be more specific in my answer.
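If you do want to handle it in JavaScript instead, one option is to have the server render the relevant flags as data attributes and branch on them client-side. A minimal sketch (the class names and data attributes are made up):

// Hypothetical client-side handling; assumes the server renders data-own-article
// and data-following attributes onto the links.
document.querySelectorAll<HTMLAnchorElement>('a.upvote').forEach((link) => {
  link.addEventListener('click', (event) => {
    if (link.dataset.ownArticle === 'true') {
      event.preventDefault();                        // don't send the vote to the server
      alert("You can't upvote your own article.");   // or open a nicer dialog
    }
    // otherwise the click goes through to the server as usual
  });
});

document.querySelectorAll<HTMLAnchorElement>('a.follow').forEach((link) => {
  if (link.dataset.following === 'true') {
    // Replace the link with plain text when the action was already taken.
    link.replaceWith(document.createTextNode('followed'));
  }
});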