Test Automation: Don’t report the bugs it catches

Baby spider by Umesh Soni from unsplash

Reading time: 3 minutes

Don’t report the bugs your test automation catches. Report the reduction in uncertainty that the system works.

When you report the bugs, you send the signal that test automation is there to catch bugs. But that's not what it's for. Test automation is there to tell you whether your system is still behaving as you intended it to.

What are automated tests for?

Each automated test should verify some isolated aspect of the behaviour of the system. Collectively, these tests tell you that when you make a change, the system still behaves as you want it to. What automated tests do is reduce your uncertainty that the system still behaves as you expect.

Framing test automation as reducing uncertainty

Framing test automation as reducing uncertainty helps emphasize that there are always things we don't know. Whereas if you frame it as increasing certainty, it can give the impression that we know more than we do.

[Diagrams: framing testing as increasing certainty vs. framing testing as reducing uncertainty]

What happens when a test passes or fails

When an automated test passes, it sends a signal that this specific behaviour still exists, reducing some of your uncertainty that whatever changes you made have affected it.

When a test fails, it signals only that this expected behaviour didn't occur. What it doesn't tell you is whether that's a bug or an intended consequence of the change to the system. Someone still needs to investigate the failure to find out.

So what we should report is to what extent our uncertainty has been reduced by these tests. But how do we do that?

How to frame test automation as reducing uncertainty

Well, a good place to start is helping people understand what behaviour the tests cover. For instance, you could categorise the behaviour of your system into three buckets: primary, secondary, and tertiary.

Primary could be behaviours core to your product's existence. For a streaming service, this could be video playback, playback controls, and sign-up. Tests in this bucket must pass before a release can be made.

Secondary could be behaviour that supports the primary behaviours: if it didn't exist, it would be annoying at most, and the core features would still function. For example, searching for new content or advanced playback controls (think variable playback speeds). Tests in this bucket can fail, but failures should not render the application unusable. Issues discovered here can be fixed with a patch release.

Tertiary behaviours could be experiments, new features that haven't yet been proven out, or other less frequently used features that aren't considered core. Tests in this bucket can also fail and don't have to be fixed with patch releases.

But be careful of accessibility behaviours falling into the secondary and tertiary buckets. The people who depend on them might not be your largest user group, but those features are critical for them to be able to use your system at all.

Defining these categories is a team exercise involving all the main stakeholders: it's key that everyone shares an understanding of what the categories mean and which behaviours fall into them.
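As a sketch of what the buckets could look like in code, here is a minimal, framework-free way to tag each test with its category. The `bucket` decorator, the `TESTS` registry, and the streaming-service stubs are all hypothetical; in a real suite you would more likely use your test framework's own tagging mechanism (such as markers or labels).

```python
# Minimal sketch: registering automated tests under the three behaviour
# buckets described above. All names here are illustrative, not a real API.

# Stand-ins for the system under test (a hypothetical streaming service).
def play(clip):
    return "playing"

def search(query):
    return ["some result"]

def set_speed(speed):
    return speed

# A registry mapping each bucket to the tests that cover it.
TESTS = {"primary": [], "secondary": [], "tertiary": []}

def bucket(name):
    """Decorator that files a test function under a behaviour bucket."""
    def register(test_fn):
        TESTS[name].append(test_fn)
        return test_fn
    return register

@bucket("primary")
def test_video_playback():
    # Core behaviour: must pass before any release.
    assert play("intro.mp4") == "playing"

@bucket("secondary")
def test_search_for_new_content():
    # Supporting behaviour: a failure here can wait for a patch release.
    assert search("drama") != []

@bucket("tertiary")
def test_variable_playback_speed():
    # Experimental behaviour: failures are tolerated.
    assert set_speed(1.5) == 1.5
```

The point of the registry is that the bucket is part of each test's definition, so the whole team can see (and review) which behaviours sit in which category.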

Then, when you report that your primary and secondary tests are passing, you signal that the core and supporting features are behaving as expected. This reduces the team's uncertainty that the system behaves as intended, and you can decide what you want to do next.
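One way to turn the bucket results into that kind of report is a small release gate that treats the buckets differently: primary failures block the release, while secondary and tertiary failures are surfaced without blocking. Everything below is an illustrative sketch under that assumption, not a real tool; the stand-in tests simply pass or raise.

```python
# Sketch of a release report built on the primary/secondary/tertiary rules
# above: all primary tests must pass; other failures are reported only.

def run_bucket(tests):
    """Run each test, returning (passed, total)."""
    passed = 0
    for test_fn in tests:
        try:
            test_fn()
            passed += 1
        except AssertionError:
            pass
    return passed, len(tests)

def release_report(buckets):
    """Summarise each bucket and apply the release rule."""
    lines = []
    blocked = False
    for name, tests in buckets.items():
        passed, total = run_bucket(tests)
        lines.append(f"{name}: {passed}/{total} behaviours verified")
        if name == "primary" and passed < total:
            blocked = True  # only primary failures block a release
    lines.append("release blocked" if blocked else "release can proceed")
    return lines

# Illustrative stand-in tests: a passing test returns, a failing one raises.
def passing():
    pass

def failing():
    raise AssertionError("expected behaviour didn't occur")

buckets = {
    "primary":   [passing, passing],   # core behaviours
    "secondary": [passing, failing],   # supporting behaviours
    "tertiary":  [failing],            # experiments
}

for line in release_report(buckets):
    print(line)
```

Note that the report talks about behaviours verified rather than bugs found: it states how much uncertainty the run removed, and leaves the release decision to the team.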
