Ben Linders recently interviewed me about my AgileTD talk on how we failed at testability. That resulted in this InfoQ post about how building in testability requires developers and testers to collaborate. But to be able to do that, you need psychological safety.
Testability can enable teams to make changes to their code bases without requiring extensive regression testing. To build testability, team members must collaborate and leverage each other’s unique skills. Unfortunately, effective collaboration does not come naturally to people and therefore needs leadership to nurture people’s ability to speak up and share their knowledge.
Testability is all about building quality-in. It’s about identifying known issues while coding, before they become a problem. Pairing testers into this process can supercharge the testability feedback loop, allowing you to pick up both known and unknown issues.
But pairing devs and testers together needs courage. Courage so that both disciplines can take interpersonal risks and share hard things such as what they don’t know, don’t understand or mistakes they’ve made. This will need both groups to listen, understand and ask questions to help each other through the process. Both groups will need to show curiosity, humility and empathy for one another. You will not only feel uncomfortable during the process but it will take time too. The temptation to go back to inspecting for quality – dev and test handing work off to each other – will be hard to resist.
Pairing for testability is not just pair programming; it’s working together to understand what the code being written should and shouldn’t do.
Devs and testers should work together to leverage the skills each has, not get hung up on the skills they lack. If your pair is more exploratory-focused, identify ways to make the best use of those skills. If they are more technically inclined, then focus there.
Remember the key is to build quality-in not inspect for quality. So what can you do now that helps your team move in that direction?
Don’t report the bugs your test automation catches. Report the reduction in uncertainty that the system works.
When you report the bugs you send the signal that test automation is there to catch bugs. But that’s not what it’s for. Test automation is there to tell you if your system is still behaving as you intended it to.
What are automated tests for?
Each automated test should check some isolated aspect of the behaviour of the system. Collectively these tests tell you that when you make a change, the system still behaves as you want it to. What automated tests do is reduce your uncertainty that the system still behaves as you expect.
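To make the idea concrete, here’s a minimal sketch of a single automated test pinning down one isolated behaviour. The `apply_discount` function is purely hypothetical, invented for the example:

```python
# A minimal sketch: one automated test checks one isolated behaviour.
# `apply_discount` is a hypothetical function, not from any real code base.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_discount_is_applied():
    # A pass reduces uncertainty about exactly one behaviour:
    # a 10% discount on 50.00 yields 45.00. Nothing more.
    assert apply_discount(50.00, 10) == 45.00
```

A suite made of many such narrow tests is what collectively reduces your uncertainty after a change.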
Framing test automation as reducing uncertainty
Framing test automation as reducing uncertainty helps emphasize that there are always things we don’t know. Whereas if you frame it as increasing certainty, it can give the impression that we know more than we do.
Framing testing as increasing certainty
Framing testing as reducing uncertainty
What happens when a test passes or fails
When an automated test passes, it sends a signal that this specific behaviour still exists, thereby reducing some of your uncertainty about whether the changes you made have affected this specific behaviour.
When a test fails, it signals that this expected behaviour didn’t occur, but that’s it. What it doesn’t tell you is whether the failure is a bug or an expected consequence of the change to the system. Someone still needs to investigate the failure to tell you that.
So what we should report is to what extent our uncertainty has been reduced by these tests. But how do we do that?
How to frame test automation as reducing uncertainty
Well, a good place to start is to help people understand what behaviour is covered by the tests. For instance, you could categorise the behaviour of your system into three buckets: primary, secondary and tertiary.
Primary could be things that are core to your product’s existence. For example, for a streaming service this could be video playback, playback controls and sign-up. Tests in this bucket must pass before a release can be made.
Secondary could be behaviour that supports the primary behaviours; if it didn’t exist it would be annoying at most, but the core features would still function. For example, searching for new content or advanced playback controls (think variable playback speeds). Tests in this bucket can fail, but their failures should not render the application unusable. Issues discovered here can be fixed with a patch release.
Tertiary behaviours could be experiments, new features that haven’t yet been proven out or other less frequently used features that are not considered core. Tests in this bucket can also fail and don’t have to be fixed with patch releases.
But be careful of accessibility behaviours falling into the secondary and tertiary buckets. Their users might not be your biggest group, but those features are critical for them to be able to use your systems.
Defining these categories is a team exercise with all the main stakeholders as it is key that they have a joint understanding of what the categories mean and what behaviours can fall into them.
Then, when you report that your primary and secondary tests are passing, you signal that the core and supporting features are behaving as expected. This reduces the team’s uncertainty that the system behaves as intended. You can then decide what you want to do next.
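As an illustration, a release gate built on these buckets might look like the following sketch. The test names, buckets and outcomes are invented for the example:

```python
# Sketch: a release gate driven by the behaviour buckets described above.
# Test names and outcomes are illustrative, not from a real suite.

RESULTS = {
    "test_video_playback": ("primary", "pass"),
    "test_sign_up": ("primary", "pass"),
    "test_search_new_content": ("secondary", "fail"),   # fix in a patch
    "test_variable_playback_speed": ("tertiary", "fail"),
}

def can_release(results):
    """Only failing primary tests block a release."""
    return all(outcome == "pass"
               for bucket, outcome in results.values()
               if bucket == "primary")

def patch_needed(results):
    """Secondary failures don't block release but warrant a patch release."""
    return any(bucket == "secondary" and outcome == "fail"
               for bucket, outcome in results.values())
```

With the example results above, the release can go ahead, but a patch release is flagged for the failing secondary behaviour.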
Exploratory testing is about testing in an unpredictable context and therefore detecting unpredictable failures in our software. Automated testing is about testing in a predictable context and therefore detecting predictable failures. The mistake we make with automation is applying it to the wrong context. You can’t use testing methods developed for a predictable context in an unpredictable environment.
While there is nothing physically stopping you, neither practice is particularly efficient when used in the wrong context. Exploratory testing in a predictable environment would just confirm what you already knew, only slower and less consistently on repetition. Automated testing in an unpredictable environment would produce flaky, misleading failures.
It’s also not a one-size-fits-all solution, as we work in both contexts: predictable when initially developing the software, and unpredictable once it’s running in the live environment.
The only way you can replace exploratory testing with automation is to make the test environment predictable. But that would then mean you are trying to detect predictable issues, which negates the outcome you were looking for: detecting unpredictable or complex failures.
Testing in unpredictable contexts
The best way to detect unpredictable failures is to use methodologies that can operate in an unpredictable environment.
One of the best known methods is exploratory testing (sometimes called manual testing), but there are other techniques too. Monitoring of the live environment is good for issues we can predict in an unpredictable environment. Observability – using logs, graphs and other telemetry to see how the system is behaving live – is helpful for issues we can’t predict and need to debug in the live environment. Phased rollout of features, using techniques such as feature toggles, blue/green deployments and canary releasing, is useful for limiting the impact of unintended issues in an unpredictable environment. Basically, anything that allows you to slowly enable a feature for subsets of users.
Using monitoring and observability in conjunction with phased rollouts can greatly improve your ability to understand and limit how new code behaves in unpredictable environments.
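One common way to implement a phased rollout is percentage-based bucketing on a stable hash of the user id. This is an illustrative sketch of the technique only; real feature-flag tools provide this and much more:

```python
# Sketch: percentage-based phased rollout via a stable hash of the user id.
# Illustrative only -- not any particular feature-flag library's API.
import hashlib

def feature_enabled(user_id: str, rollout_percent: int) -> bool:
    """Deterministically enable a feature for a stable subset of users."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100   # map each user to a bucket 0-99
    return bucket < rollout_percent
```

Because the bucket is derived from the user id, raising the rollout from 5% to 50% only ever adds users; nobody who already has the feature loses it, which keeps the behaviour you are monitoring consistent per user.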
Testing in predictable contexts
This is not to say automated testing is without value, as it can help detect smaller predictable issues, which if left unchecked could develop into larger unknown failures that only occur with the right mix of other smaller issues. Some issues may be within our control (software we develop) and some outside it (other people’s software). For software in our control (a predictable environment) automated testing is almost a perfect match. For software outside our control (an unpredictable environment) contract testing, exploratory testing, monitoring and observability, and phased rollouts are preferable.
Control and isolation
Next time you’re looking at testing techniques, think about how much control (and therefore isolation) you have over your test environment. The greater the level of control, the more automation you should consider; the less control you have, the more you should consider exploratory testing coupled with monitoring, observability and phased rollouts.
Testing techniques
The following diagram will help you see how different testing techniques stack up against each other. This is by no means an exhaustive list, and it only compares them on a speed of feedback, value of feedback and testing environment basis. So the next time you get into a discussion about testing, you could use these characteristics as a good way to frame it.
Testing techniques plotted on a speed, value and environment axis
Are there testing techniques missing that should be plotted on the chart?
Do you agree with the axes? Is there another, more important characteristic of testing that should be captured?
I was lucky enough to speak at AgileTD this year and also attend some of the talks. These are my main takeaways from the conference based on the talks that I was able to make.
My confirmation bias sense is tingling with this but…
The future of testers is not in automation or testing – they will play a part, but not as big a one – it’s in helping teams build quality-in.
Most teams see testing as either bug hunting or just another cost centre that needs eliminating. Therefore testers (at all levels) need to get much better at communicating the value of testing.
As testers we need to start shifting our skill set from doing the testing to advocating for testing within teams.
The skills we need to develop will take time to build, as it’s not just a matter of attending training but of having hands-on experience using and applying the skills.
Otherwise testers risk becoming irrelevant, as teams begin to form without the need for testers if (or when) the next shift happens.
What is working in our favour is the slow shift to adopt new development ideas such as those expressed in Accelerate. But also teams figuring out how to really collaborate and not just cooperate. Think of the dance of passing tickets around that happens in a lot of development teams.
Which talks should you take the time to watch?
So which of these talks further led me to believe the above? Let me break it down:
The future of testers is not in automation or testing
That is not to say it will go away, but it will not be the main objective of our roles.
A lot of people’s addiction to automation appears to come from automation tool manufacturers’ marketing (promising the world) and the sunk cost fallacy (making it hard for people to stop once they’ve started). I’d also add people’s job specs asking for automation with no rationale as to why they want it.
It is good for some things, generally things whose expected behaviour we know, and especially when we can isolate them from the UI.
UIs can behave in unpredictable ways, so they are not always the best place to put automation that needs to be consistent and reliable.
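One way to keep automation away from unpredictable UI behaviour is to pull logic out into plain functions and test those directly. A minimal sketch, with entirely hypothetical names:

```python
# Sketch: the validation rule lives in a plain function, so automation can
# target it directly instead of driving a flaky UI. Names are hypothetical.

def password_is_valid(password: str) -> bool:
    """The rule the sign-up form enforces, free of any UI code."""
    return len(password) >= 8 and any(c.isdigit() for c in password)

def test_short_password_is_rejected():
    assert not password_is_valid("abc1")

def test_long_password_with_digit_is_accepted():
    assert password_is_valid("longenough1")
```

The UI still needs some testing, but the rules themselves now have fast, consistent, reliable automated coverage.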
So what do you do?
Focus on teams and start small:
(Focus on) exploratory testing,
(Start small with) a good test strategy that includes what is and is not to be tested
Automation should be focused and isolated
Is not testing: Let it go by Nicola Sedgwick (Day 3)
We as testers need to let go of testing and start focusing on how we help teams understand what quality is and how they build it in
Nicola does this by being a Quality Coach and using Quality Engineers embedded in teams to help them mitigate the risks.
This was a great talk and something lots of others have been advocating.
I think we still need to better define the Quality Coach and Quality Engineer roles but we have to start somewhere
A fairly simple and effective approach to getting the test team behind a testing vision that they can then use to describe what it is that they do.
Rather than the typical dry approach of traditional testing (test scripts, reports, bugs) which all reinforce the bug hunter viewpoint of testing
This gets testing focused more towards what the organisation is trying to do (think company mission statement) and focuses the testing vision towards that
This helps others understand that it’s more than just bug hunting; it’s about helping make decisions on the quality of the products and how they affect end users.
His approach was to create a testing mission based on the company vision statement, with a focus on the why of testing and not the what or the how (see Simon Sinek: Start with Why). From there they created a number of goals that would help them achieve that mission. Then they used the Goal, Question, Metric technique to make it measurable.
For some in the org this approach made testing much more accessible and greatly improved their view of it.
As senior members of the test team we need to help our testers understand what value they bring to teams. Then give them the tools (verbal and written communication skills) to make their value relatable to other roles. Otherwise they are very likely to be seen as bug hunters and a cost that can be eliminated.
A really fascinating talk where he showed how everyone outside of testing views our roles (bug hunters that cost money). He then showed how we need to cover three main arguments for others to see the value we bring: conceptual (does it make logical sense to them), practical (how can they/others use it) and monetary (what does it cost and what’s the ROI).
He then applied these three arguments to different testing scenarios from doing no testing at all to shifting testing as far left as possible and doing it earlier and earlier in the process. Through this he showed how the initial investment in testing increased but would dramatically decrease the costs later on in the process due to issues being found earlier and therefore easier and cheaper to fix.
This all reinforced the idea that testers do much more than add costs to projects and find bugs, such as:
Manually finding issues that would otherwise affect users
Testing earlier in the process, before code is written, by testing requirements and designs to prevent issues entering the system
Raising levels of team understanding of the product, the processes they use and potential issues that could be introduced
Identifying types of risk that could affect the team, and acting as a form of insurance against that risk
Making testing relatable to non testers
Providing sources of information for innovation and improvements within the teams ways of working
But all of this doesn’t just happen. You have to invest in your testers (and they in themselves!) for them to be able to do this: their technical skills, improving their awareness, understanding risk, alignment with the org etc. IMO: if you keep testers ‘dumb’ and just bug hunting, then that is all you will get.
He then linked these investments to potential measures, so you can see if your investments are paying off and testers can see improvements. Cycle and lead times were two areas that came up quite often.
These measures were then linked to business value, the two main ones being faster time to market and improved customer trust in the product.
The skills we need to develop
These skills are not limited to just these talks, but they are great examples of what those skills are.
A great talk about how he uses the four virtues of stoicism to be a better tester. I actually think this would help a lot of people within development teams, so if you’ve not heard of it before I recommend checking it out.
This looks like a good resource: https://iep.utm.edu/stoiceth/ but this talk focused on just the four virtues of wisdom, courage, justice and moderation.
A great story from Tom Young on how the BBC News mobile team have grown over the years and how focusing on their team culture has been one of the best ways to build quality into their product. All the way through the talk, Tom shouted out how the whole team helps deliver their product.
Hearing how other people have tried to address psychological safety in organisations was very interesting. There was a lot in the talk that I recognised from Amy Edmondson’s work (Teaming and The Fearless Organization). They didn’t use Amy’s definition of psychological safety, but from what I’ve seen all the definitions are almost the same. Simply put: are people willing to take interpersonal risks within group settings? If so, they have psychological safety; if not, they are considered to lack it.
The thing that stood out for me was that all these types of initiative take time and constant work. They are not things where you run a workshop, take a few questionnaires and you have the safety.
Also, psychological safety is a very personal thing, so what one person feels is not the same as what another in the same team feels.
There is also a lot of misconception around psychological safety, in that people believe psychologically safe environments will no longer have any conflict and are all about everyone being comfortable. This is not the case.
PS environments are about being able to share your thoughts and ideas without the worry that they could be used against you in some way.
The main reason for PS is to establish environments conducive to learning from each other – which is what is needed for the knowledge work that we do
But to learn effectively you need some level of discomfort
Too much discomfort and it can tip into fear, which causes the fight or flight response
and you’re not learning anything other than self-protection
The best way to protect yourself? Don’t say anything that could lead to a situation that causes conflict…
So PS environments are about people being able to work through conflict productively, which can lead to new insights and ideas
There were many, many more talks at the conference (perhaps too many) that I wasn’t able to make, and that’s not including the workshops, so it is worth looking through the programme and seeing what stands out for you.
Think I missed a talk that should be in the list above? Let me know in the comments.
What is it about that particular talk that makes you think it should be included?
I’ve been thinking a lot recently about what the future of the tester’s role could look like. Especially in teams that not only fully embrace CI/CD or DevOps but actually get some way towards implementing the ideas behind those approaches.
I’ve broken up my thinking into two posts, with the first about what could happen and this follow-up on what testers could do next…
Raising quality awareness in teams
Raising quality in teams isn’t about banging the drum of “We need to make this a quality product” or showing how the product failed some quality criteria, e.g. raising a defect. It’s about helping the team understand what quality is and what that means for their system. To be able to do this, testers need to be able to articulate what quality means and then apply this to their team’s context.
For each tester this context will be unique to their environment. But in large part, it will be based on how their team works, what their organisation expects that team to produce and who their end users are. Simply put, testers will sit at the intersection of teams, business and users.
Teams
This viewpoint is all about understanding how the team works through its combination of tools, processes, technology, the people involved and the resulting output. How does the team do what it does, but also why does it do it the way it does?
Business domain
This viewpoint is about understanding the organisation that the team works in. Why does the business exist? What is it trying to accomplish? Yes, you can argue that pretty much all businesses are trying to make a profit, but how exactly is it trying to do that? Is it by selling advertising, software as a service, access to some physical world good? What is your company’s unique selling point? How does it compete against other companies? What external factors affect its ability to achieve its mission? How does the organisation expect the team to contribute to its mission?
Users
This viewpoint is probably the one most familiar to testers, as it is a view they’ve almost always considered. Who are the system’s users and what do they expect from it? But they should take this view and expand it further. Why do the users take their time to use your product over others? What do they find valuable about it? Why do some users stay but others leave? Are these the people your organisation intended to use it? Are the users the actual people who pay for the product, or does someone else pay? Who is that someone else and why do they pay for it? Testers should take their time to build, broaden and deepen their understanding of the users, intended users and future users.
At the intersection of teams, business and users
Due to the subjective nature of quality, testers need to understand the reasons behind others’ views of what quality means to them. These three core areas give testers the foundational knowledge to do just that.
Now, armed with this knowledge and the why of their stakeholders’ quality measures, they can begin to translate this into something their teams can understand and incorporate into their daily work. It’s within this translation and incorporation that you can begin to create a quality culture that isn’t about demanding quality, but about creating an understanding of what it means within your team’s context.
If you ask testers what QA stands for, most are likely to say it’s Quality Assurance, typically described as providing confidence that some quality criteria will be fulfilled. Some people in the testing community believe that it’s actually Quality Awareness*. The thinking goes: how can testers assure the quality of something they never built in the first place? All they can do is make the team aware of the quality. I agree with both explanations and believe that they are different sides of the same quality coin.
Read on to see what this means for testers and how it’s useful to consider both in teams.
*I’ve also heard Quality Advocate and my favourite Question Askers, both fit this model of Quality Awareness.
…that someone could be their Product Owner (PO), the organisation they work for, the team they work with or their end users, and all these groups of people could have very different views on what value means to them – even contradictory ones in some cases.
https://www.jitgo.uk/lenses-of-quality/
Again from the same post on what value is:
Each of these groups of people views quality through a different lens and therefore sees the same system differently. We as testers should help our teams to see quality through these different lenses by helping them identify these groups and what their measures of quality are.
https://www.jitgo.uk/lenses-of-quality/
Simply put, value will depend on the viewpoint of the person. Identify the viewpoint that person sees the product/system through and you’re halfway to working out what’s valuable to them. If you take this a step further and see what incentives drive that person’s viewpoint, then you might identify what’s valuable to them too. But that’s not as easy as it sounds.
For any given team there are multiple members and stakeholders, so there are likely to be a number of unique and overlapping quality attributes too. Identifying the key ones for each cohort of people is a valuable exercise for any team. It might help explain why some people are never happy no matter what you deliver.
a statement that something will certainly be true or will certainly happen, particularly when there has been doubt about it
One way to think about assurance is that it’s a promise that an outcome will happen, so as to give others confidence. In this case the outcome is quality and, as mentioned earlier, quality means value to someone. Therefore QA, or Quality Assurance, means providing confidence in the quality of the product to stakeholders. It is about confidence that an outcome will happen, not a guarantee: you are saying that best efforts will be made, and this is your assurance of making it happen.
knowing something; knowing that something exists and is important
interest in and concern about a particular situation or area of interest
This would lead QA, or Quality Awareness, to be a person who is interested in quality (value), understands its importance and has knowledge about the quality of a product or system. To take this a step further, someone who works in Quality Awareness understands that quality is value to someone, knows who those people are, what viewpoints they hold and possibly what incentives drive those views. They are then able to apply this knowledge to the system and subsequently increase their team’s awareness of overall system quality. Essentially, Quality Awareness is about building awareness of what quality is in a team, how it is affected and who it matters to.
Quality Awareness sits at the intersection of the team that produces the system, the domain in which the system operates in and users of the system.
Two sides of the same coin
Both are focused on quality, but one is about improving the team’s understanding of what quality means for their stakeholders, while the other is focused on maintaining (and hopefully improving) the quality of the product.
In this scenario Quality Awareness can improve Quality Assurance by giving it the metrics against which quality is assessed. In this model Quality Assurance can actually fulfil its job of providing confidence that the quality of the product is being upheld. Why? Because it takes into account who the stakeholders are and what is valuable to them, then converts that value into a measurable metric which the engineering team can either:
assess themselves against to make sure they are doing what they said they would do or
provide the stakeholders the metrics to improve their confidence that not only is the engineering team doing their job but maintaining and perhaps even improving it.
The thing to keep in mind is that not all quality values can be converted into an easy-to-measure metric. You can use proxy measures to give you an idea, but some measures are inherently subjective. On top of this, the systems we build are interdependent with other systems which are out of our control and can affect the quality of our systems. Therefore techniques such as exploratory testing can be very beneficial, as they help build a fuller awareness of what quality means for your product.
Back to Quality Assurance?
Does this mean we should go back to using QA again and naming ourselves the QA Team? No. We’ve come a long way in some areas of our industry, and going back to QA teams might bring back all those old problems, such as test team silos, testers as gatekeepers and “why didn’t we catch that bug?”
How is this helpful?
Where this can be useful is in giving teams another lens through which to look at their testing approach and ask: is this heading towards building assurance of quality, or is this about raising awareness of quality? By separating activities into these two camps you can see the value each is actually going to bring and whether it’s worth the investment. It might also help clear up who should be doing what and when.
Using this model of awareness and assurance could be helpful for testers trying to figure out what they want to do with their careers. Do you want to learn more about building team confidence in quality through test automation (Quality Assurance) or about building a quality culture within teams (Quality Awareness)?
To build people’s confidence with automation you first need to understand why you’re doing it.
Why do we automate things?
If you look at automation in general, the reason to do it is that we have some repetitive manual task that we want to perform automatically. Doing so removes any inconsistency that could occur from doing it manually, and makes the output of the process reliable and repeatable as and when you need it. In short, automation can make processes consistent, reliable and repeatable.
This usually leads to other benefits too such as the ability to scale up the automated process in terms of frequency and speed all while reducing costs in some scenarios. Essentially you can take advantage of economies of scale.
The benefit of consistency, reliability, repeatability and scalability is that it helps the people associated with that process have confidence in the output of the automation. They can either see the process happening again and again or inspect the output to validate their confidence in the process. You could even take it a step further and automate the inspection too.
But what about test automation?
The above works well for, say, automating a physical process such as making a glass bottle. You can either see how the bottle is made or inspect the end product. But when it comes to test automation you can’t “see” the test occurring (or any software process for that matter), and the only output is likely to be a result: pass or fail.
The only way to gain confidence in the automation is either to inspect the code (the process), or to base your confidence on the person doing the automation: you trust that they wouldn’t fake it or maliciously do anything wrong.
If that confidence is lacking then the only way to feel confident that the system being tested works is to test the system again. Therefore any of the benefits gained from automation have been lost as you are now duplicating the effort. The biggest loss being the economies of scale.
This is by far one of the biggest reasons why testers in teams have very little confidence in the automation. They don’t know what it covers, how it works or even if it’s being done to a high standard. If your job is to understand and raise risks within the team, then this almost leaves you with no choice but to test the system again.
Building confidence
If you’re a developer or automation specialist, you have two options for improving people’s confidence in the output of the automation: help them “see” the process, or build their trust in you. Both will go some way to improving their confidence in the process.
Better yet, help them understand the principles behind the automation, which, if done with humility and compassion, will naturally lead to those people trusting you as well. By helping them understand the principles behind the automation you enable them to work out for themselves what is and isn’t being automated, and to what standard. This lets them see where the gaps are in the process, which they can raise as risks or work to plug.
I’ve written about building a team understanding of unit testing which details how you can document your principles in a way that is accessible. You can use this method to document any team principle not just unit testing.
Whenever you talk about unit testing with teams, they never tell you what it means to them. They go straight to “of course we do” and show you hundreds of passing tests. The interesting thing is that by calling it unit testing, everyone thinks they are talking about the same thing. But when you start digging into how they understand it, you begin to see that everyone talks about it and understands it differently.
What do the unit tests test?
A selection of responses to the question what do the unit tests test
A unit means different things to different people, but we never stop and ask: what do a unit and unit testing mean to you? Why? Well, that could be risky, as you’re potentially questioning someone’s ability. Which probably says more about psychological safety in your team, but that’s a topic for another day.
A Unit means different things to different people
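To make that concrete, here is a hypothetical sketch of two developers on the same team, both “unit testing”, with very different ideas of what the unit is. The `PriceList` and `order_total` names are invented for illustration, not taken from any real code base.

```python
# Hypothetical illustration: two interpretations of "a unit".

class PriceList:
    """Real collaborator: looks up prices in pence."""
    def price_of(self, name):
        return {"tea": 250, "milk": 120}[name]

def order_total(names, prices):
    """Code under test: totals an order using a price source."""
    return sum(prices.price_of(n) for n in names)

# Developer A: the unit is the single function, so every
# collaborator is replaced with a test double.
class StubPrices:
    def price_of(self, name):
        return 100

def test_total_with_stubbed_prices():
    assert order_total(["tea", "milk"], StubPrices()) == 200

# Developer B: the unit is the behaviour "totalling an order",
# so the real collaborator is used and nothing is stubbed.
def test_total_with_real_prices():
    assert order_total(["tea", "milk"], PriceList()) == 370

test_total_with_stubbed_prices()
test_total_with_real_prices()
print("both styles pass")
```

Both tests pass, both developers would say they unit test, and yet they are doing quite different things.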
So what should you call it then? Well, maybe as a stop gap just call it what it is, a test that checks code: a code test. Now I know what you’re thinking: “that’s way too generic!” Which is kind of the point, because when you do that the first thing people ask is “What’s a code test?” Now you can start the discussion without anyone feeling that you’re questioning their ability.
What are Code tests?
How do you build a team understanding of what it is?
One of the best and easiest ways is to get the team together and pose them three questions:
Three questions to ask teams about unit testing
What does a unit mean to you in unit testing?
What characteristics make a good unit test?
What characteristics make a bad unit test?
Hand out sticky notes or use whatever online tool your team prefers (Miro is a pretty good online collaborative whiteboard). Then ask each question one at a time. If you can do it in person then a big room with lots of wall space is best, as it allows people to talk to one another during the idea-generating stage. Allowing them to talk is advantageous as people will build on top of each other’s ideas, but this may not be practical for distributed teams.
Building a Team Understanding of Code Tests
Once everyone has had a chance to contribute, group and theme the responses. Then as a team look through them and see if there are any contradictions or if anyone strongly disagrees with the groupings. If there are, then this is a perfect time to build the team’s understanding of what code testing is.
If you’re looking for some inspiration then watching Ian Cooper: TDD, Where Did It All Go Wrong and J.B. Rainsberger: Integrated Tests Are A Scam as a group are good places to start. Both these talks are quite old now, so more up-to-date versions may be available.
You may find that you need to run the sticky note exercise again to build consensus, but essentially you want the group’s agreement on what a unit is and what the good and bad characteristics of a test are. This will give you a high-level understanding of what a code test is.
What do you do once you have group agreement?
So you’ve got a high-level understanding, but you need to turn that group understanding into something more solid. Something that gives them:
Alignment with each others understanding
Autonomy with how they actually implement code level tests
But also Accountability, so that it’s not only their responsibility to do it, but to do it well
autonomy, alignment and accountability
You could just say “look at the code for examples”, but as we’ve seen this isn’t always the best way, as the intent behind the code may not be clear to everyone who reads it.
Ideally it would be something that is lightweight, but not so light that it’s too open to interpretation (e.g. sticky notes), and not so heavy that no one ever reads it (e.g. a 10,000-word essay hidden in a wiki).
Lightweight documentation
We need to document it in a way that is quick and easy to read and therefore remember.
The best way to demonstrate this is through an example. Now, this example isn’t describing code testing (you need to have that discussion with the team first); however, it has all the elements we are looking for.
Example principle
The title is short and to the point, which makes it easier to remember but also acts as a super-short summary of the principle itself.
The first paragraph describes what it is about. The language used is really easy to understand too. It takes no effort to read and comprehend. This allows the reader to spend more time understanding the content rather than trying to decipher the words used.
The second and third paragraphs detail good and bad behaviours respectively. Finally they have a list of links that show where they have demonstrated this behaviour.
The great thing about this structure is that each part builds on top of the previous one. The title is built on by the description. The good and bad behaviours build on top of the description, and the links give concrete examples of those behaviours so readers can see them in action, or even give them the opportunity to add their own.
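To make the shape concrete, a lightweight principle following this title/description/good/bad/links structure might look something like the sketch below. All the content here is invented for illustration; your team’s own discussion should fill in the real wording and real example links.

```markdown
# Tests describe behaviour, not implementation

Our code tests check what the system should and shouldn't do,
not how it does it. A test should still pass after a refactor
that leaves the behaviour unchanged.

**Good:** tests call only the public interface; test names read
as statements about behaviour; a failure points at one behaviour.

**Bad:** tests assert on private state; tests break when code is
refactored without changing behaviour; one test checks many
unrelated behaviours.

**Examples:** (hypothetical) BasketTests.total_includes_every_item,
OrderTests.rejects_empty_orders
```

A page this size takes a minute to read, which is what makes it likely to actually be read.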
Back to the sticky notes
What To Do With The Sticky Notes
You might have worked this out already, but those sticky notes will map onto this simple title/description/good/bad framework quite easily. The answers to “what does a unit mean to you?” would be used to write the description of what unit testing is. The key points from the good and bad characteristics would make up the good and bad behaviour descriptions. Finally, all those unit tests you already have should be used to demonstrate where those good and bad behaviours have been shown in your code base. You’re on your own for coming up with a snappy title.
Autonomy, Alignment and Accountability
You’ve got your lightweight documentation but how does this relate to creating team autonomy, building alignment between developers and making them accountable for their actions?
Building Alignment through a common language
The description is all about what a unit is and gives a common language for the team to use when talking about code testing. This helps to build alignment between team members.
Creating Autonomy through why not how
The good behaviours say nothing about how to write good code tests, just what makes a good test within this team, hence the focus on characteristics during the sticky notes session. The good behaviours coupled with the bad act as guard rails: more of what we do want and less of what we don’t. This preserves developer autonomy, as they still have to work out how to actually do it. If they are unsure, they have links to where the team has actually implemented tests that demonstrate this behaviour, or they can always speak to the other developers.
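As a hypothetical sketch of how one good and one bad behaviour from such a principle might look in practice, consider the pair of tests below. The `Basket` class and both tests are invented for illustration, not taken from a real code base.

```python
# Hypothetical sketch: a documented bad behaviour ("tests assert on
# private state") next to the matching good one ("tests assert only
# through the public interface"). Prices are in pence.

class Basket:
    def __init__(self):
        self._items = []                      # internal detail

    def add(self, name, pence):
        self._items.append((name, pence))

    def total(self):
        return sum(pence for _, pence in self._items)

# Bad behaviour: the test reaches into internal state, so refactoring
# _items breaks it even when the observable behaviour is unchanged.
def test_total_bad():
    basket = Basket()
    basket.add("tea", 250)
    assert basket._items == [("tea", 250)]

# Good behaviour: the test asserts only through the public interface,
# leaving developers free to change how the total is calculated.
def test_total_good():
    basket = Basket()
    basket.add("tea", 250)
    basket.add("milk", 120)
    assert basket.total() == 370

test_total_bad()
test_total_good()
print("behaviour examples pass")
```

Linking to real examples like these from the principle document is what turns the abstract characteristics into guard rails developers can actually follow.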
Autonomy & Alignment enables Accountability
By documenting the principle using easy-to-comprehend language to build a common team vocabulary, and describing behaviours instead of instructions to create autonomy, you increase the development team’s sense that they are accountable for enabling the principle. Not only that, it makes it that much easier for people to find more information and lowers the barrier to approaching the subject in the first place.
One of the great things about documenting things is that you can point at the thing and say you don’t agree, which is a lot easier than pointing at a person and saying the same thing.
How Does This Map Onto Alignment Autonomy Accountability?
In Summary
By following this model you can begin to create a team understanding of what unit testing means to them and a unified language with which to talk about it. It also lowers the barrier to understanding the approach for others, which really helps to improve the overall team confidence in what unit testing does and doesn’t give them.
Documenting your team’s understanding of unit testing using this lightweight model means that when people eventually leave, that knowledge doesn’t leave with them or slowly erode from the team’s memory. Another benefit is that as new members join the team they can use it to build up their understanding of how the team approaches unit testing.
There is a risk that the information becomes outdated, but you could use new joiners as motivation for the team to revisit old principles and see if they are still valid or need updating. You never know: by including the new joiner in this process they may add something that you hadn’t considered before, and it gives them an opportunity to start positively contributing to the team. At a minimum it kick-starts the conversation again and allows the team to revisit old assumptions and behaviours.
You could also use this model to document other principles that the team would like to work by, all while maintaining their individual autonomy, alignment with one another and emphasising the accountability that it’s up to them to make it happen.
Now you can see if it still makes sense calling them code tests, unit tests or something else altogether.
Back in March 2018 I visited The Design Museum in London and came across the above installation.
What you can see is technology design classics all the way from the first transistor radios on one side to the very first digital clocks on the other, with everything else in between.
If you stand back far enough you begin to see that they are not just randomly placed on the wall but in a particular order. As each piece of technology progresses in its evolution you begin to notice that it starts acquiring functionality from the technology around it. Not only that, but they start to shrink in size at the same time. Eventually you realise that all of that technology has been absorbed into one device: the mobile phone, which is placed right in the centre of the wall.
With the older technology, its size and complexity were on show for all to see. The mobile phone, however, is different. It actually looks quite simple on the outside, with only a screen and a few buttons. But once you turn it on you begin to realise that this is something quite different to what has come before. It provides not only all of the functionality of the technology that came before it, but much more, through the use of the internet. This isn’t just limited to mobile phones but to pretty much all technology that comes after: from TVs to speakers to wristwatches, everything is slowly being interconnected via the internet.
The interesting thing about a lot of this new technology is that it is actually developed and controlled by only a handful of companies, who on average have more resources than a lot of more traditional companies combined. On top of that, they have oriented themselves around their users unlike any company before, always working to provide them with the best experience they can come up with. It’s almost as if they know every user is a click away from moving on to the next thing, yet something keeps those users coming back. It sometimes looks hopeless competing against them, so what do we do?
Software is eating the world
Marc Andreessen wrote back in 2011 that “software is eating the world”, which actually gives us some hope. Software allows us to compete again and perhaps tempt those users away. Remember, just like the competition, we are only a click away too. But what is going to get those users to click on something new?
We need to be able to try different ideas and get them in front of our users to start seeing what works and what doesn’t, based on real data and not just what people think is working.
Leadership to build Collaboration and Purpose
However, to be able to start doing that we need to start working better together as software teams. Simply having the best developers is not going to cut it. Research from Google’s Project Aristotle showed that this wasn’t what mattered; five other team dynamics were better predictors of well-functioning teams: psychological safety, dependability, structure & clarity, meaning and impact.
Side note: psychological safety is all about leadership and interpersonal risk taking, not just saying “this is a safe space”. Read The Fearless Organisation to learn more.
Once we can collaborate more effectively we can build psychological safety, dependability and structure into the team. From there we can start working on the team’s purpose: what is the team’s reason for being, what are they trying to accomplish, how will this help the organisation? Purpose is all about providing the team with clarity, meaning and impact. But simply asking people to collaborate and giving them a purpose isn’t going to build the team dynamics set out earlier. It’s going to need leadership to build the type of collaboration that has those characteristics. Leaders will need to be more hands-on: demonstrating interpersonal risk taking, building dependability between team members and setting up the team’s initial structure.
What is quality?
For argument’s sake, let’s say you’ve got some way towards doing that. Now what? Do the users of your system just magically start appearing? Team collaboration is only one part; now you need to start iterating on the system. You could just get the team to build whatever they think is a good idea and get them to do it as fast as they can. The risk is releasing half-baked systems that end up causing you, or worse, your users, more problems than before. The thing is, users tend to want a quality product, but quality is subjective and so means different things depending on your viewpoint. Through the lenses of quality:
For your Organisation quality could be whatever helps them reach their targets for that quarter or year.
For your Product owner their measure of a quality product could be a system or feature released on time.
For your Team it could be a system that they can build, deploy, maintain easily.
For your Users, well it could be something as simple as it just works. – Lenses of quality
Building Quality in via Testability
If quality means different things to different people, how can you build quality into a product? By building in testability instead. What testability does is start to make your system objective: instead of people saying the system feels easier to work with, or that they think it works correctly, you use tests to back up those feelings. Those tests have to be built into the system during development. It is not something that can be added easily after the fact, especially by people who haven’t built the system in the first place. Testability is not about testing the system end-to-end but piece-by-piece, each piece being a specific type of behaviour the system provides, tested in isolation from the other pieces. The scope and definition of the behaviours should be decided on collaboratively by the team. Unit testing can help with testing like this, but everyone has a different opinion on what a unit is and therefore a very different approach to testing one:
Everyone seems to have a different opinion on what makes a unit, but also on what makes good and bad unit tests
Which is why I have a problem with calling them unit tests and outlined how you could define them by calling them code tests first and then building a team understanding of what they are.
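Piece-by-piece testability usually comes down to separating a behaviour from its surroundings so it can be checked in isolation. Here is a hypothetical sketch: the same behaviour written twice, once tangled up with the system clock and hard to test, once with testability built in. The names and the happy-hour rule are invented for illustration.

```python
import datetime

# Hard to test: the business rule is entangled with the system clock,
# so checking it requires the real world to cooperate.
def is_happy_hour_untestable():
    now = datetime.datetime.now()
    return 17 <= now.hour < 19

# Testable: the behaviour is a pure piece that takes its input
# explicitly, so it can be checked in isolation from the clock.
def is_happy_hour(hour: int) -> bool:
    return 17 <= hour < 19

def test_happy_hour_starts_at_five():
    assert is_happy_hour(17) is True

def test_happy_hour_ends_at_seven():
    assert is_happy_hour(19) is False

test_happy_hour_starts_at_five()
test_happy_hour_ends_at_seven()
print("isolated behaviour tests pass")
```

The second version makes the behaviour objective: anyone on the team can run the tests instead of arguing about whether the rule feels right.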
This type of testing is, I think, what gets us towards what W. Edwards Deming (1900–1993), known in his time as the leading management thinker on quality, meant when he said we should:
“Cease dependence on mass inspection. Build quality into the product from the start” – W. Edwards Deming
So do we just need to work better together and build in testability to solve all our quality issues?
Software ate the world, so all the world’s problems get expressed in software
It’s been 9 years since Marc Andreessen wrote Software is eating the world. Ben Evans (an analyst who worked for Andreessen) recently said in his presentation Standing on the shoulders of giants:
“Software ate the world, so all the world’s problems get expressed in software” – Ben Evans
You can build in all the quality measures you want, but that doesn’t address any of the problems we’ve encoded into the system. You are going to need someone who understands how the team works (and how the problems are encoded into the system), knows how the system is deployed into the real world (and the domains in which it is used) and who those users are (and what they expect of it). That someone already exists within teams, but most teams have simply been using them as a safety net to check their work and, to channel my inner Deming, to “carry out mass inspections of our systems”. We’ve called them testers, but maybe it’s time we started to think of them as something else?
Software levels the playing field again and allows us to innovate in ways that no tool before it has ever allowed. However, to do so we need to work collaboratively as teams to build testability into our software systems, and we need testers to raise awareness of what quality is for our products. From this foundation we can begin to compete again and really start offering our users that temptation to click on something new.