Reducing Uncertainty in Software Delivery

I recently attended a half-day online event that InfoQ held on Reducing Uncertainty in Software Delivery. What made this event different was its underlying focus on testing without a single tester present in the talks or panel discussions. The majority of speakers were developers, and there were even a few engineering managers, product people and a CEO or two. It also appeared to me that none of them had come from a traditional testing background. Yet they all made the points that a good tester would, and then some. The advantage they appear to have over testers is that they were able to incorporate the knowledge of their own disciplines to give a much broader view than just focusing on the testing itself.

A key theme I took from these talks is that these organisations spend a lot of effort on learning from failure, either by analysing failures that have happened in production or by actively encouraging teams to cause them. Only the more advanced organisations were taking this approach, but the others were not far behind. Why? To make their systems even more resilient. Their approach appears to be using Site Reliability Engineers (SREs) to work alongside their engineering teams, helping them do the work but also enabling the teams to extract the learning from it too. This isn't simply chaos testing to cause failures, or postmortems for production failure analysis, but also helping teams with the people side of working with and handling failure productively.

The talks that caught my interest were Building in Reliability (SRE at Gremlin), User Simulation for Rapid Outage Mitigation (SRE at Uber), and a panel discussion on testing in production (with two CEOs, a product person and an engineering manager).

Now, this is a small sample: the speakers are very experienced, working or having worked at some of the best-known web-based organisations (Google, Uber etc.), and US-focused too. But I'm seeing a lot of things that testers could advocate for being pursued and implemented by Site Reliability Engineers (SREs). For example:

  • testing in production,
  • building in observability,
  • pushing testing earlier in the process,
  • encouraging developers to test their own work 

The advantage SREs have is that they already have the technical ability and are now starting to build out the socio-technical skills they were previously lacking. These organisations have another advantage in that they are heavily focused on learning from their failures, so when they do get things wrong they work hard to extract as much value from that failure as possible. On top of that, some of these organisations are actively causing failures within their systems to further limit the catastrophic failures that could occur. Some of these organisations have never had a tester and, from the looks of things, never will. If you're pursuing a true continuous improvement strategy, testers could look like a bottleneck in the process, slowing down information flow. How can testers enable the flow of information, and what can they add that makes this information even more valuable?

I’ve pulled out my summaries of the talks I found interesting below.

Talk: User Simulation for Rapid Outage Mitigation

Uber uses an alternative approach to end-to-end testing because their system is so big that no one person can ever fully understand it. Instead they use composable tests: each team creates tests for its own part of the system and mixes in pre and post steps built by the teams it depends on. These are then run in a simulation environment that shows how the system will perform when a change is deployed. To incentivise teams to build the tests they use a mixture of pain (being woken up at 3am by a production failure) and a mitigation support team (holding their hands at 3am). For example, if you had these tests you wouldn’t be awake at 3am trying to mitigate the issue. They also don’t try to solve issues at 3am but rather mitigate them, so that others can also learn about outages that affect their systems.

Talk: Building in reliability

An interesting talk focusing on the availability of systems within organisations. The speaker walked through how you could go from 99% availability to 99.99% and how it is a learning journey. He used a simple analogy of crawling, walking and running to get your availability towards what makes sense for your organisation; essentially, can you do it manually, can you script it, and can you automate it? I found his slide a great way to help others understand the outcomes at each stage from 99% to 99.99%.

Panel: Measuring Value Realisation Through Testing in Production

I usually only see these types of conversation on tester-focused panels, but none of this panel were testers. Tester-focused panels typically focus on testers testing in production, whereas this was very much focused on learning from real users in production. The interesting thing from my perspective was that they made all the points I would expect a reasonably experienced tester to bring. In some cases, because their roles sit outside testing, they focused on costs and benefits beyond simply testing in production, e.g. the downsides of A/B testing, or the product management mindset shifts that need to happen to embrace learning from users rather than whatever the decided roadmap says.

In some ways, testers testing in production almost act like middlemen for the learning that happens during testing. Could it be that in some cases testers are getting in the way of teams learning effectively from testing in production?

Exploratory and Automated testing: Using the right techniques in the wrong contexts

Reading time 2 minutes

Exploratory testing is about testing in an unpredictable context and therefore detecting unpredictable failures in our software. Automated testing is about testing in a predictable context and therefore detecting predictable failures. The mistake we make with automation is trying to apply it to the wrong context: you can’t use testing methods developed for a predictable context in an unpredictable environment.

While there is nothing physically stopping you, neither practice is particularly efficient if used in the wrong context. Exploratory testing in a predictable environment would just confirm what you already knew, only slower and less consistently when repeating the testing, while automated testing in an unpredictable environment would lead to false negatives.

It’s also not a one-size-fits-all solution either, as we work in both contexts: predictable when initially developing the software and unpredictable once it is running in the live environment.

The only way you can replace exploratory testing with automation is to make the test environment predictable. But that would then mean you are trying to detect predictable issues, which negates the outcome you were looking for: detecting unpredictable or complex failures.
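To make the "predictable context" point concrete, here is a minimal sketch of the kind of check automation suits. The `apply_discount` function is a made-up example: known inputs, known outputs, and every run identical, so an automated check can confirm the behaviour cheaply and repeatedly.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# A predictable context: the same inputs always give the same outputs,
# so an automated check can confirm the behaviour on every run.
assert apply_discount(100.0, 20) == 80.0
assert apply_discount(19.99, 0) == 19.99
```

Exploratory testing here would only re-confirm what the checks already tell us; the practice earns its keep where inputs, state and timing are not this controllable.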

Testing in unpredictable contexts

The best way to detect unpredictable failures is to use methodologies that can operate in an unpredictable environment. 

One of the best-known methods is exploratory testing (sometimes called manual testing), but there are other techniques too:

  • Monitoring of the live environment, which is good for issues we can predict in an unpredictable environment.
  • Observability, using logs, graphs and other telemetry to see how the system is behaving in the live environment. This is helpful for issues we can’t predict and need to debug in the live environment.
  • Phased rollout of features, using techniques such as feature toggles, blue/green deployments, canary releasing etc. This is useful for limiting the impact of unintended issues in an unpredictable environment. Basically, anything that allows you to slowly enable a feature for subsets of users.

Using monitoring and observability in conjunction with phased rollouts can greatly improve your ability to understand and limit how new code behaves in unpredictable environments. 
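As a rough sketch of how a percentage-based phased rollout can work (the function, feature and user names below are made up for illustration, not taken from any particular feature-toggle library), a user can be deterministically bucketed so that widening the rollout only ever adds users:

```python
import hashlib


def is_feature_enabled(feature: str, user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket a user into a rollout percentage.

    Hashing feature + user gives each user a stable bucket from 0-99,
    so the same user always gets the same answer for a given percentage,
    and raising rollout_percent only ever enables more users.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable value in 0..99
    return bucket < rollout_percent


# Start with a small percentage, watch your monitoring and telemetry,
# then widen the rollout as confidence grows.
assert is_feature_enabled("new-checkout", "user-42", 100) is True
assert is_feature_enabled("new-checkout", "user-42", 0) is False
```

The point of the sketch is the coupling: the rollout limits the blast radius, while monitoring and observability tell you whether it is safe to move from 5% to 50% to everyone.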

Testing in predictable contexts

This is not to say automated testing has no value, as it can help detect smaller predictable issues which, if left unchecked, could develop into larger unknown failures that only occur with the right mix of other smaller issues. Some issues may be within our control (software we develop) and some outside of our control (other people’s software). For software in our control (a predictable environment), automated testing is almost a perfect match. For software outside of our control (an unpredictable environment), contract testing, exploratory testing, monitoring and observability, and phased rollouts of software are preferable.

Control and isolation

Next time you’re looking at testing techniques, think about how much control (and therefore isolation) you have over your test environment. The greater the level of control, the more automation you should consider; the less control you have, the more you should consider exploratory testing coupled with monitoring, observability and phased rollouts.

Testing techniques

The following diagram will help you see how different testing techniques stack up against each other. This is by no means an exhaustive list, and it only compares them on a speed-of-feedback, value-of-feedback and testing-environment basis. So the next time you get into a discussion about testing, you could use these characteristics as a good way to frame that discussion.

Testing techniques plotted on a speed, value and environment axis

Are there other testing techniques that should be plotted on the chart?

Do you agree with the axes? Is there another, more important characteristic of testing that should be captured?

How would you plot the testing techniques?

My biggest takeaways from AgileTD 2020: The future of testers isn’t in automation or testing

I was lucky enough to speak at AgileTD this year and also attend some of the talks. These are my main takeaways from the conference based on the talks that I was able to make.

My confirmation bias sense is tingling with this but…  

The future of testers is not in automation or testing. Both will play a part, but not as big a part as helping teams build quality in.

Most teams see testing as either bug hunting or just another cost centre that needs eliminating. Therefore testers (at all levels) need to get much better at communicating the value of testing.

As testers we need to start shifting our skill set from doing the testing to advocating for testing within teams.

The skills we need to develop will take time to build, as it’s not a matter of just attending training but of having hands-on experience of using and applying the skills.

Otherwise testers risk becoming irrelevant if (or when) the next shift happens and teams begin to form without the need for testers.

What is working in our favour is the slow shift towards adopting new development ideas, such as those expressed in Accelerate, but also teams figuring out how to really collaborate and not just cooperate. Think of the dance of passing tickets around that happens in a lot of development teams.

Which talks should you take the time to watch?

So which of these talks further led me to believe the above? Let me break it down:

The future of testers is not in automation or testing

That is not to say it will go away, but it will not be the main objective of our roles.

Is not automation: Automation Addiction by Huib Schoots and Paul Holland (Day 1)

  • A lot of people’s addiction to automation appears to come from automation tool manufacturers’ marketing (promising the world) and the sunk cost fallacy (making it hard for people to stop once they’ve started). I’d also add people’s job specs asking for automation with no rationale as to why they want it
  • It is good for some things, generally things whose expected behaviour we already know, and especially when we can isolate them from the UI.
    • UIs can behave in unpredictable ways, so they are not always the best place to put automation that needs to be consistent and reliable
  • So what do you do?
  • Focus on teams and start small: 
    • (Focus on) exploratory testing,
    • (Start small with) a good test strategy that includes what is and is not to be tested
  • Automation should be focused and isolated

Is not testing: Let it go by Nicola Sedgwick (Day 3)

  • We as testers need to let go of testing and start focusing on how we help teams understand what quality is and how they build it in
  • Nicola does this by being a Quality Coach and using Quality Engineers embedded in teams to help them mitigate the risks
  • This was a great talk and something lots of others have been advocating.
  • I think we still need to better define the Quality Coach and Quality Engineer roles but we have to start somewhere
  • I’ve written a little about what testers could do next
  • You can also learn more about Quality Engineering from my TestBash Manchester talk (paywalled)

Also see

  • Testing is not the goal! By Rob Meaney (See below for more)
  • Beyond the bugs by Rick Tracy (See below for more)

Communicating the value of testing

How to pitch and value testing properly in the age of DevOps by Bjorn Boisschot (Day 1)

  • A fairly simple and effective approach to getting the test team behind a testing vision that they can then use to describe what it is that they do.
  • Rather than the typical dry approach of traditional testing (test scripts, reports, bugs), all of which reinforce the bug-hunter view of testing
  • This gets testing focused more towards what the organisation is trying to do (think company mission statement) and focuses the testing vision towards that
  • This helps others understand that it’s more than just bug hunting, but about helping make decisions on the quality of the products and how they affect end users
  • His approach was to create a testing mission based on the company vision statement, with a focus on the why of testing and not the what or the how (see Simon Sinek: Start with Why). From there they created a number of goals that would help them achieve that mission. Then they used the Goal, Question, Metric technique to make it measurable.
  • For some in the org this approach made testing much more accessible and greatly improved their view of it.
    • But for others, well, they still didn’t care 

Beyond the bugs by Rick Tracy (Day 3)

  • As senior members of the test team, we need to help our testers understand what value they bring to teams, then give them the tools (verbal and written communication skills) to make their value relatable to other roles. Otherwise they are very likely to be seen as bug hunters and a cost that can be eliminated.
  • A really fascinating talk where he showed how everyone outside of testing views our roles (bug hunters that cost money). He then showed how we need to cover three main arguments for others to see the value we bring: conceptual (does it make logical sense to them), practical (how can they/others use it) and monetary (what does it cost and what’s the ROI).
  • He then applied these three arguments to different testing scenarios from doing no testing at all to shifting testing as far left as possible and doing it earlier and earlier in the process. Through this he showed how the initial investment in testing increased but would dramatically decrease the costs later on in the process due to issues being found earlier and therefore easier and cheaper to fix.
  • This all reinforced the idea that testers do much more than add costs to projects and find bugs, such as:
    • Manually finding issues that would otherwise affect users 
    • Testing earlier in the process, before code is written, by testing requirements and designs to prevent issues entering the system
    • Raising the team’s understanding of the product, the processes they use, and potential issues that could be introduced
    • Highlighting types of risk that could affect the team and acting as a type of insurance against that risk
    • Making testing relatable to non testers 
    • Providing sources of information for innovation and improvements within the teams ways of working 
  • But all of this doesn’t just happen. You have to invest in your testers (and they in themselves!) for them to be able to do this: their technical skills, improving their awareness, understanding risk, alignment with the org etc. IMO: if you keep testers ‘dumb’ and just bug hunting, then that is all you will get
  • He then linked these investments to potential measures so you can see if your investment is paying off, and as a way for testers to see improvements. Cycle and lead times were two measures that came up quite often
  • These measures were then linked to business value, the two main ones being faster time to market and improved customer trust in the product.

The skills we need to develop

These skills are not limited to just these talks, but they are great examples of what the skills are.

How to keep your agility as a tester by Ard Kramer (Day 1)

  • A great talk about how he uses the four virtues of Stoicism to be a better tester. I actually think this would help a lot of people within development teams, so if you’ve not heard of it before I recommend checking it out.
  • This looks like a good resource: https://iep.utm.edu/stoiceth/ but the talk focused on just the four virtues of wisdom, courage, justice and moderation

Also see 

  • Extreme learning situations as testers (Day 3)
  • How to keep testers motivated by Federico Toledo (Day 3)
  • Beyond the bugs by Rick Tracy (See above)
  • Testing is not the goal! By Rob Meaney (See below)
  • Introducing psychological safety in a tribe (See below)
  • Growing Quality from Culture in Testing Times by Tom Young (See below)
  • Faster Delivery teams? Kill the Test column by Jit Gosai (See below)

Adopt new development ideas

Testing is not the goal! By Rob Meaney (Day 2)

  • From testability > operability > observability: his journey learning these techniques and how teams have been able to make use of them.
  • I think one of the really interesting points he made was understanding where your team is in their development life cycle.
    • Are they just starting out or are they an established team and product.
    • Depending on where you are in this cycle, the level to which you need testability, operability and observability will differ.
    • The three things are about managing complexity, and when you are starting out complexity isn’t the problem, product-market fit is.

Also see

  • Faster Delivery teams? Kill the Test column by Jit Gosai (See below)

How to really collaborate and not just cooperate

Growing Quality from Culture in Testing Times by Tom Young (Day 1)

  • A great story from Tom Young on how the BBC News mobile team have grown over the years and how focusing on their team culture has been one of the best ways to build quality into their product. All the way through the talk, Tom shouted out how the whole team helps deliver their product

Faster Delivery teams? Kill the Test column by Jit Gosai (Day 2)

Introducing psychological safety in a tribe by Gitte Klitgaard and Morgan Ahlström (Day 3)

  • Hearing how other people have tried to address psychological safety in an organisation was very interesting. There was a lot in the talk that I recognised from Amy Edmondson’s work (Teaming and The Fearless Organization). They didn’t use Amy’s definition of what psychological safety is, but from what I’ve seen all the definitions are almost the same. Simply put: are people willing to take interpersonal risks within group settings? If so, they have psychological safety; if not, they are considered to be lacking it.
  • The thing that stood out for me was that all these types of initiatives take time and constant work. They are not things where you run a workshop, fill in a few questionnaires and you have the safety.
  • Also, psychological safety is a very personal thing, so what one person feels is not the same as what another feels in the same team.
  • There is also a lot of misconception around psychological safety, in that people feel psychologically safe environments will no longer have any conflict and are all about everyone being comfortable. This is not the case.
    • PS environments are about being able to share your thoughts and ideas without the worry that it could be used against you in some way.
    • The main reason for PS is to establish environments conducive to learning from each other – which is what is needed for the knowledge work that we do 
    • But to learn effectively you need some level of discomfort
    • Too much discomfort and it can tip into fear, which causes the fight-or-flight response
      • and you’re not learning anything other than self-protection
      • The best way to protect yourself? Don’t say anything that could lead to a situation that causes conflict… 
    • So PS environments are about people being able to work through conflict productively in a way that can lead to new insights and ideas

There were many, many more talks at the conference (perhaps too many) that I wasn’t able to make, and that’s not including the workshops, so it is worth looking through the programme and seeing what stands out for you.

Think I missed a talk that should be in the list above? Let me know in the comments.

What is it about that particular talk that makes you think it should be included?

What above do you disagree with?

August – Toread


31st August

📻 How Can You Stop Comparing Yourself With Other People? If you manage people then this podcast is worth a listen. Having a better understanding of why we compare ourselves to others (social creatures living in hierarchical structures) and what issues it can cause (demotivation, decreased self-esteem and confidence) can help you stop doing it, and also help your reports avoid falling into the same trap. They also cover some biases that can lead to it, such as causal inference and the narrative fallacy.

💪 How Resilience Works The post calls out three traits for resilience: 1. A grasp of reality, 2. Life has a purpose (for you) and 3. An ability to improvise. I’ve not seen resilience called out like this before, but there are some good anecdotal stories in there and in broad strokes I agree with it. But like a lot of things with the mind, it’s easier said than done, especially when you’re in the thick of things going wrong. (Book to add to the reading list: Man’s Search for Meaning)

🍏 How Apple controls the App Store and therefore the end users How Ben explains the App Store integration in stages is really interesting and key to understanding how Apple has so much control over developers and users. This is a long read, but worth it to understand Apple’s almost unbelievable control of developers and users. If you want to access Apple’s users then you have almost no choice but to do as they say; otherwise they can revoke your certificates and cut you off in an instant. The thing is, this integration is so complicated that most people are either not going to understand it or take the time to figure it out. This is very different to how Microsoft controlled Windows.

17th August

🗺 Things Jobs said I’m no Steve Jobs fan (in the literal sense of the word) but no one can deny he helped create some incredible products. Every so often I read these quotes from him and, depending on what’s going on in my work life, they take on a different meaning. But one that always stays with me is this one: “You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something–your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.”


🏌️‍♀️ The Beginner’s Guide to Deliberate Practice What is it? Deliberate practice requires focused attention and is conducted with the specific goal of improving performance, while regular practice might include mindless repetition. But it’s not that simple: deliberate practice requires that you break the task down into small sub-sections and practise each one until you get better. This is easy if the domain you’re trying to learn is well known, but if it doesn’t have any existing training you can make use of, or you don’t have access to trainers who can help (e.g. mentors or coaches), then you might struggle. I believe this is why it is always a good idea to learn from multiple sources when skilling up in something new, so that you are pushed out of your comfort zone in different ways. If you’re learning something from just one source then keep in mind that it might be one-sided…

10th August

🎼 What software teams can learn from music masterclasses Via Twitter from ‪@katrinaclokie.‬ Feedback is by far one of the best ways you can learn, and Helen makes a great point that software teams can learn a lot from music masterclasses and studio classes too. Both are great ways to get feedback from more established artists, peers and teachers, but also from peers in different disciplines who can give a viewpoint that your own peer group might not be able to. Another key point Helen makes is that giving and receiving feedback is a skill and, as such, needs to be practised to really help people. I don’t think we do this enough in software teams, and when we do it’s not always done well. There is a lot we can learn from artistic masterclasses as an industry, which I guess reflects the maturity of their profession and the relative youth of ours.


🚽 Code Coverage Best Practices This post from Google’s Testing on the Toilet series makes some great points on how code (or test) coverage can be a useful metric for teams. The biggest one is how it highlights code that isn’t covered by tests, which is the perfect opportunity for teams to discuss whether it should or shouldn’t be. The advice on using per-commit coverage to inform conversation topics for code reviews is also a really good idea. But as the article points out, going straight in with “We should use code coverage!” is probably not going to get you very far. Most engineering teams have been burnt pretty badly by it in the past, with developers just trying to hit numbers or coverage being used to measure their effectiveness. Both of these create the wrong incentive of number gaming, rather than productive conversation starters on what good and bad tests look like for your context.


🐦🧵 Everything you needed to know about 2+2=5 Kareem makes a great point that it’s all about context. If you’re thinking just about raw numbers then 2+2=4, but if the context was, say, a male cat and a female cat, give it some time and it could quite easily be 1+1=8. Numbers are an abstraction of the underlying reality, therefore context matters when you’re looking at them. One to ponder the next time you’re looking at statistics 🤔


📹 What is white privilege? Via BBC Bitesize from psychologist John Amaechi. This short three-minute video does a really good job of explaining what privilege is and what white privilege in particular means. It’s not that white people have it easy or struggle any less than people of other races; it’s that their struggles are not going to be about their race, whereas race can be an additional limiting factor for people in the BAME community. In short, white privilege means your skin colour will not be used against you.

At the intersection of software, technology and people 

Things I’ve been reading this week that I’ve found interesting or intriguing.

July – Toread

At the intersection of software, technology and people 

What is this?

Things I’ve been reading this week that I’ve found interesting or intriguing. Sharing because I thought you might like them too. Most of the links will revolve at the intersection between software, technology and people – with the occasional testing slant. I aim to update them weekly, with some commentary on my thoughts and findings. Feedback always welcome 😁


📬 Latest post: what do testers do next if the risks mitigated by manual testing can be reduced through other means? Is it about moving more towards creating a quality culture, and if so, what do you need to know?

📝 My notes on kind and wicked learning environments and how they affect your ability to pick up skills.

Some more notes on a really interesting idea from Eugene Wei on the Invisible Asymptote. See July 10th below for more, or head over to my notes on the article, which pull out some of the bits I found interesting.


31st July

Four-Level Training Evaluation Model: some useful ideas on what to look for when trying to get feedback on your training or other presentations. Another question that comes to mind: is the training for the learners, or for you to accomplish/be recognised for something… 🤔


💭 10 signs you’re an over thinker While thinking is obviously a good thing, overthinking isn’t. But how do you know when you’re doing the good type of thinking? A simple rule: overthinking is focusing on the problems (by either ruminating about the past or worrying about the future). Good thinking is problem solving by focusing on the solutions, and self-reflective thinking is looking at situations from a different perspective and finding new insights.


👷‍♀️ 3 things that motivate us to work From Dan Pink’s RSA lecture based on his book Drive. The three things are autonomy, mastery and purpose. Autonomy is being self-directed over what and how you do something. Mastery is having the ability to get better at something that challenges us and to make a contribution. Purpose is the reason for being, or why we do the things we do. The interesting thing is that this is about individual motivation to work. Does it still apply when working in teams, as we do in software?


27th July

A model of what could happen if you dropped the ‘In Test’ column…

👷‍♀️ From ‘In Testing’ to ‘In Progress’ columns on team boards: This has a very narrow focus on just the dev and test relationship. The model helps illustrate how improving that relationship, and getting them to actively collaborate to improve confidence that code changes work as intended, starts to have an effect on work in progress (WIP). As @johncutlefish shows, high WIP can lead to a whole host of other problems. The grey lines show what it was previously, with the ‘In Testing’ column broken out into its own section.

👯‍♂️ Don’t Mock Types You Don’t Own This happens more often than you realise and leads to lots of other problems, the main one being that you now have to maintain a mock of a service you don’t own or fully understand. You are therefore testing against your assumptions about the service you’re mocking, which can lead to a false sense of confidence that everything will work when you go to production. Ideally you want to be using a stub with little to no logic, i.e. little to no assumptions, with any that are made obvious to other developers. Contract testing, and consumer-driven contract testing in particular, can help here. The other issue is that people use the word mock to mean a whole host of other types of test doubles (fakes, stubs, spies etc.), which leads to more confusion, so check what someone means when they say mock before assuming you’re talking about the same thing.
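As a sketch of one common remedy (all of the names below are hypothetical, not from any real payments library): wrap the third-party client in a thin adapter that you own, then stub that adapter in your tests instead of mocking the vendor’s types.

```python
class PaymentGateway:
    """Thin adapter around a hypothetical third-party payments client.

    Tests stub this class, which we own, instead of mocking the vendor's
    client, whose types and behaviour we neither own nor fully understand.
    """

    def __init__(self, client):
        self._client = client  # the real third-party client in production

    def charge(self, amount_pence: int) -> bool:
        response = self._client.create_charge(amount=amount_pence)
        return response.get("status") == "succeeded"


class StubPaymentGateway(PaymentGateway):
    """A stub with no logic: its assumptions are obvious to other developers."""

    def __init__(self, succeed: bool = True):
        self._succeed = succeed

    def charge(self, amount_pence: int) -> bool:
        return self._succeed


def checkout(gateway: PaymentGateway, amount_pence: int) -> str:
    """Example application code that depends only on the adapter we own."""
    return "paid" if gateway.charge(amount_pence) else "declined"


assert checkout(StubPaymentGateway(succeed=True), 1000) == "paid"
assert checkout(StubPaymentGateway(succeed=False), 1000) == "declined"
```

The stub encodes only one assumption (the charge succeeds or it doesn’t), and contract tests against the real service are what verify that the adapter’s single real call still matches the vendor’s behaviour.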

🎓 Accountability vs Responsibility This has been really useful when thinking about who within a team is accountable for tasks and who is responsible. I’ve found that others (and myself included!) mix these up. Accountability cannot be shared and means you are answerable for your actions, whereas responsibility can be shared and means you must respond when someone questions your actions. Having these distinctions can be really helpful in making sure people understand what they are accountable and responsible for. The comments are worth a read too…

20th July

😱 Programming is not a craft from Dan North in 2011, and I have to say I agree with his take; even a decade on it still stands. I think this quote really sums it up: “Non-programmers don’t care about the aesthetics of software in the same way non-plumbers don’t care about the aesthetics of plumbing – they just want their information in the right place or their hot water to work”. By putting programming at the centre (by treating it as a traditional craft) rather than the value you are delivering, you risk building what you want and not what users want, need or value. That’s not to say the code can be shoddy, far from it; just like the plumbing it needs to work, but does it need to be gold plated with silver fixings?

🐦 Learning How to Learn: a thread from Jez Humble calling out a book, Learning How to Learn: A Guide for Kids and Teens. The book aims to help you talk to younger people about how to learn. It covers a really interesting topic I hadn’t come across before: the focused and diffuse modes of learning. There is also a free Coursera course by the author on the mental tools of learning covered in the book.

📻 How to make your own luck (podcast): the frame through which you look at the world (people, events, things that happen etc.) has a big impact on the opportunities you find. So what frame are you using when making decisions? The world is a wicked learning environment (slow feedback, hard to tell which variable caused the outcome), while poker can be a kind learning environment (fast feedback, a low number of variables, easier to identify mistakes and learn from them). Poker can therefore help you understand your own decision making more easily, and possibly translate that over to the real world.

You can find more about wicked and kind learning environments from How Falling Behind Can Get You Ahead:
Kind Learning environments  

  • Kind environments give lots of feedback as you progress, which aids deliberate learning 
  • The rules of the system don’t change either, so what works today works tomorrow 
  • Golf, chess and poker are such environments 

Wicked learning environments  

  • Mixed levels of feedback as you progress
  • Rules of the system keep changing 
  • I think software engineering may be a wicked environment

13th July

🤩 Invisible asymptote (AKA the invisible glass ceiling of testing): an excellent (and long) read from Eugene Wei and a must read for anyone working in product, or in software development in general. It brilliantly articulates that all products have an invisible glass ceiling, and that recognising your total addressable market can help you understand when you’re going to hit it and actually do something about it.

Why should testers care?

This is a great way to understand how your product owners might be thinking (or should be, if they’re not). In terms of product quality, this could be one of the lenses through which you look at your products to understand what is valuable to product owners. It’s also a great way to start understanding what value your product is potentially bringing to your users, and which cohorts it is and isn’t addressing. My notes on the article pull out some of the bits I found interesting.

In terms of analysis this hits two of the three domains that testers should have a grasp of: business and users. From that angle we can help the third domain (teams) understand how this affects them.

Remember testing doesn’t always look like testing


🔈 How do you handle criticism? Getting feedback is by far the best way to get better, but not all feedback is equal. You need to filter out the valuable parts from the things that sting the ego. One way to get better at receiving feedback is to rate yourself on how you respond to it, with 5 being excellent and 1 being poor. Did you respond positively and thank them (4 out of 5), or did you try to talk them out of their opinion (2 out of 5)? This will help you get better at hearing feedback and make you more likely to do something about it.


👩‍💻 Develop your culture like it’s software: an interesting post from 2017 by an ex-engineering manager at The New York Times. They used a Google Doc to make their culture document collaborative and to start iterating on it. Culture either just happens and evolves in a direction out of your control, or you try to be deliberate about it. My preference is for deliberate, because if it starts heading in a direction you don’t want, you’re in a position to do something about it. Otherwise you find out when something hits the headlines, at which point it’s too late…


🏚 Extreme testing: a cool video of what IBM do to make sure their mainframes can handle earthquakes. Makes you wonder what type of testing AWS/Azure/GCP do for all their server farms.


6th July

👩‍🏫 Professionalism is not enough via Ten things I’ve learned by Milton Glaser 

when you are doing something in a recurring way to diminish risk or doing it in the same way as you have done it before, it is clear why professionalism is not enough. After all, what is required in our field, more than anything else, is continuous transgression. Professionalism does not allow for that because transgression has to encompass the possibility of failure and if you are professional your instinct is not to fail, it is to repeat success. So professionalism as a lifetime aspiration is a limited goal.

🥾 New employee bootcamp: a really interesting approach to getting people (product owners in this case) up to speed quickly and productive in their work. I really like the concept of “put your own gas mask on first before helping others” in terms of helping them figure out their own career paths. What would this look like for onboarding new testers into a team?

🧫 What is culture? I was doing some research on this and it turns out (unsurprisingly) that it’s not that easy a question to answer, but the Centre for Applied Linguistics at the University of Warwick (UK) has some really good resources. In particular, this doc tries to answer that very question in a way that is approachable and can actually help you understand what culture is. They break it down into 12 key characteristics, but I think this explanation from Spencer-Oatey (2008) does a pretty good job:

“Culture is a fuzzy set of basic assumptions and values, orientations to life, beliefs, policies, procedures and behavioural conventions that are shared by a group of people, and that influence (but do not determine) each member’s behaviour and his/her interpretations of the ‘meaning’ of other people’s behaviour.”

🤑 What is value? An interesting way of thinking about what value means. In this model there are two focus areas: revenue and costs. How does something sustain revenue, increase revenue, avoid cost and/or reduce cost? By applying a monetary number to these you can then discuss them in a way that everyone understands and can hopefully agree on. The other reason for relating this back to a number is to have a discussion about what assumptions people are making about those numbers.

Thanks to Duncan Nisbet for his intriguing blog series on cost of delay vs cost of poor quality, which linked me to the above post. In Duncan’s post he does a really good job of showing why trying to answer that question is really difficult, and he sets up a framework for trying to do just that. I’m looking forward to seeing how this works out!

Future of testers: somewhere between users, teams and businesses

3 minute read

I’ve been thinking a lot recently about what the future of the tester’s role could look like. Especially in teams that not only fully embrace CI/CD or DevOps but actually get some way towards implementing the ideas behind those approaches.

I’ve broken up my thinking into two posts, with the first about what could happen and this follow-up on what testers could do next…

Raising quality awareness in teams

Raising quality in teams isn’t about banging the drum of “We need to make this a quality product” or showing how the product failed some quality criteria, e.g. raising a defect. It’s about helping the team understand what quality is and what that means for their system. To be able to do this, testers need to be able to articulate what quality means and then apply this to their team’s context.

Not sure what quality awareness is? Then see my earlier post on Building a Quality Culture: Is it Quality Assurance or Quality Awareness

For each tester this context will be unique to their environment. But in large part it will be based on how their team works, what their organisation expects that team to produce and who their end users are. Simply put, testers sit at the intersection of teams, business and users.

Teams

This viewpoint is all about understanding how the team works through its combination of tools, processes, technology, the people involved and the resulting output. How does the team do what it does, and why does it do it the way it does?

Business domain

This viewpoint is about understanding the organisation that the team works in. Why does the business exist? What is it trying to accomplish? Yes, you can argue that pretty much all businesses are trying to make a profit, but how exactly is it trying to do that? Is it by selling advertising, software as a service, or access to some physical-world good? What is your company’s unique selling point? How does it compete against other companies? What external factors affect its ability to achieve its mission? How does the organisation expect the team to contribute to its mission?

Users

This viewpoint is probably the one most familiar to testers, as it’s a view they’ve almost always considered. Who are the system’s users and what do they expect from it? But testers should take this view and expand it further. Why do the users take their time to use your product over others? What do they find valuable about it? Why do some users stay but others leave? Are these the people that your organisation expected to use it? Do the users actually pay for the product, or does someone else? Who is that someone else and why do they pay for it? Testers should take their time to build, broaden and deepen their understanding of the users, intended users and future users.

At the intersection of teams, business and users

Due to the subjective nature of quality, testers need to understand the reasons behind others’ views of what quality means to them. These three core areas give testers the foundational knowledge to do just that.

Now, armed with this knowledge and the why behind their stakeholders’ quality measures, they can begin to translate this into something their teams can understand and incorporate into their daily work. It’s within this translation and incorporation that you can begin to create a quality culture that isn’t about demanding quality, but about creating an understanding of what it means within your team’s context.

Context contains teams, business & users

What do testers do next?

2 minute read

I’ve been thinking a lot recently about what the future of the testers role could look like. Especially in teams that not only fully embrace CI/CD or DevOps but actually get some way towards implementing the ideas behind the approaches.

I’ve broken up my thinking into two posts, with the first about what could happen and the follow-up on what testers could do next. So what could happen to testers?

For a lot of software teams the tester’s role is to assess the quality of the work being produced by the developers. This is usually done by manually testing the software, using techniques such as exploratory testing, to find any issues that may impact the end users. Any issues found will be raised with the developer who did the work (as defects or an informal chat) to fix if the team deems it necessary.

Some teams found this process to be one of their biggest bottlenecks in releasing software, so they attempted to automate more of this type of testing using UI tests, with varying levels of success. Others meanwhile started to look at the testability of what they were producing and started to build quality in.

In both of those scenarios the tester’s role looks obsolete: the first supposedly replaces the work they did through automation, and the second removes the role because issues are mitigated before they have end-user impact. In both cases the assumption is that a tester’s role is just that, to test.

If the perceived value of testers is just to test the changes made by the development team then the future of the tester’s role looks bleak. But could it be more than this?

Some testers are starting to repurpose the old QA acronym to mean Quality Awareness. What they are doing is shifting the perceived value of their work from a purely testing activity, assuring the quality of the system, to one of raising awareness of the quality of the system.

This may, at first, look as if they are still doing the same work under a different name, but on closer inspection it is a vastly different role.

Post continues here: FUTURE OF TESTERS: SOMEWHERE BETWEEN USERS, TEAMS AND BUSINESSES

Building a quality culture: Is it quality assurance or quality awareness?

5 minute read

If you ask testers what QA stands for, most are likely to say Quality Assurance, typically described as providing confidence that some quality criteria will be fulfilled. Some people in the testing community believe that it actually stands for Quality Awareness*. The thinking goes: how can testers assure the quality of something if they never built it in the first place? All they can do is make the team aware of its quality. I agree with both explanations and believe they are different sides of the same quality coin.

Read on to see what this means for testers and why it’s useful to consider both in teams.

*I’ve also heard Quality Advocate and my favourite Question Askers, both fit this model of Quality Awareness.

Firstly some definitions:

What is Quality?

Quality is value to someone – Gerald Weinberg

But who is that someone and what does value mean?

From the Lenses of quality on who that someone is:

…that someone could be their Product Owners (PO), the organisation they work for, the team they work with and their end users and all these groups of people could have very different views on what value means to them and even contradicting in some cases.

https://www.jitgo.uk/lenses-of-quality/

Again from the same post on what value is:

Each of these groups of people view quality with a different lens therefore see the same system differently to one another. We as testers should help our teams to see quality through these different lenses by helping them identify these groups and what their measures of quality are.

https://www.jitgo.uk/lenses-of-quality/

Simply put, value depends on the viewpoint of the person. Identify the viewpoint through which that person sees the product/system and you’re halfway to working out what’s valuable to them. Take this a step further and look at what incentives drive that person’s viewpoint, and you might identify why it’s valuable to them too. But that’s not as easy as it sounds.

For any given team there are multiple members and stakeholders, so there are likely to be a number of unique and overlapping quality attributes too. Identifying the key ones for each cohort of people is a valuable exercise for any team. It might help explain why some people are never happy no matter what you deliver.

What is Quality Assurance?

Firstly what does assurance mean from the Oxford dictionary:

a statement that something will certainly be true or will certainly happen, particularly when there has been doubt about it

One way to think about assurance is that it’s a promise that an outcome will happen, so as to give others confidence. In this case the outcome is quality and, as mentioned earlier, quality means value to someone. Therefore QA, or Quality Assurance, means providing stakeholders with confidence in the quality of the product. It is about confidence that an outcome will happen, not a guarantee. You are saying that best efforts will be made, and this is your assurance in making that happen.

What is Quality Awareness?

What does awareness mean from the Oxford dictionary:

knowing something; knowing that something exists and is important

interest in and concern about a particular situation or area of interest

This would make QA, or Quality Awareness, a person who is interested in quality (value), understands its importance and has knowledge about the quality of a product or system. To take this a step further, someone who works in Quality Awareness understands that quality is value to someone, knows who those people are, what viewpoints they hold and possibly what incentives drive those views. They are then able to apply this knowledge to the system and subsequently increase their team’s awareness of overall system quality. Essentially, Quality Awareness is about building awareness within a team of what quality is, how it is affected and who it matters to.

Quality Awareness sits at the intersection of the team that produces the system, the domain in which the system operates and the users of the system.

Two sides of the same coin

Both are focused on quality, but one is about improving the team’s understanding of what quality means for their stakeholders, while the other is focused on maintaining (and hopefully improving) the quality of the product.

In this scenario Quality Awareness can improve Quality Assurance by giving it the metrics against which quality is assessed. In this model Quality Assurance can actually fulfil its job of providing confidence that the quality of the product is being upheld. Why? Because it takes into account who the stakeholders are and what is valuable to them, then converts that value into a measurable metric which the engineering team can either:

  1. assess themselves against to make sure they are doing what they said they would do or
  2. provide the stakeholders the metrics to improve their confidence that not only is the engineering team doing their job but maintaining and perhaps even improving it.

The thing to keep in mind is that not all quality values can be converted into an easy-to-measure metric. You can use proxy measures to give you an idea, but some measures are inherently subjective. On top of this, the systems we build are interdependent with other systems that are out of our control and can affect the quality of our systems. Therefore techniques such as exploratory testing can be very beneficial, as they help build a fuller awareness of what quality means for your product.

Back to Quality Assurance?

Does this mean we should go back to using QA again and naming ourselves the QA Team? No, we’ve come a long way in some areas of our industry, and going back to QA teams might bring back all those old problems: test team silos, testers as gatekeepers and “why didn’t we catch that bug?”

How is this helpful?

Where this can be useful is in giving teams another lens through which to look at their testing approach and ask: is this heading towards building assurance of quality, or is this about raising awareness of quality? By separating activities into these two camps you can see the value each is actually going to bring and whether it’s worth the investment. It might also help clear up who should be doing what and when.

Using this model of awareness and assurance could be helpful for testers trying to figure out what they want to do with their careers. Do you want to learn more about building team confidence in quality through test automation (Quality Assurance), or about building a quality culture within teams (Quality Awareness)?

The unintended consequences of automated UI tests

Whenever I see people talking about automated testing I always wonder what type of testing they actually mean. Eventually someone will mention the framework they are using, and all too often it’s a UI-based automation tool that allows tests to be written end-to-end (A-E2E-UI).
They are usually very good at articulating what they think these tests will give them: fast automated tests that they no longer need to run manually, amongst other reasons.

But what they fail to look at is the types of behaviours these A-E2E-UI tests encourage and discourage within teams. 

They have a tendency to encourage  

  • Writing more integrated testing with the full stack rather than isolated tests 
    • Isolated behaviour tests (e.g. unit, integration, contract tests etc) run faster and help pinpoint where issues could be
    • A-E2E-UI tests will just indicate that a specific user journey is not working. While useful from an end-user perspective, someone still needs to investigate why. This can lead to just re-running the test to see if it’s an intermittent error, which is only made worse by tests giving false negatives, something full-stack tests are more prone to because they have more moving parts 
  • Testing becomes someone else’s responsibility 
    • This is more apparent when the A-E2E-UI tests are done by somebody else in the team and not the pair developing the code 
    • Notice ‘pair’ if you’re not a one-person development army then why are you working alone? 
      • Pairs tend to produce better code of higher quality with instant feedback from a real person 
      • It might be slower at first but it’s worth it to go faster later 
      • This is really important for established businesses with paying customers 
      • A research paper called The Costs and Benefits of Pair Programming backs this up but it’s nearly 20 years old now so if you know of anything more recent let me know in the comments.
  • Pushing testing towards the end of the development life cycle 
    • The only way A-E2E-UI tests work is through a fully integrated system therefore testing gets pushed later into the development cycle 
    • You could use Test doubles for parts but then that is not an end-to-end test.
  • Slower feedback loops for development teams 
    • Due to testing being pushed to the later stages of development developers go longer without feedback into how their work is progressing 
    • This problem is increased further when the A-E2E-UI tools are not familiar to the developers who subsequently wait for the development pipeline to run their tests instead of doing it locally
  • Duplication of testing 
    • As the A-E2E-UI test suites get bigger and bigger it becomes harder and harder to see what is and isn’t covered by automation 
    • This leads to teams starting to test things at other levels (code and most likely exploratory testing ) which all add to the development time 
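To make the contrast with the isolated behaviour tests mentioned above concrete, here is a minimal sketch (the business rule and all names are hypothetical): a fast, isolated test names the exact rule that broke, whereas an end-to-end journey test could only report that the checkout journey failed, leaving someone to investigate why.

```python
def apply_discount(total_pence: int, code: str) -> int:
    """Hypothetical business rule under test: SAVE10 takes 10% off."""
    if code == "SAVE10":
        return total_pence - total_pence // 10
    return total_pence


# Isolated behaviour tests run in milliseconds, need no running system,
# and a failure points straight at the discount rule rather than at a
# whole user journey.
assert apply_discount(1000, "SAVE10") == 900
assert apply_discount(1000, "UNKNOWN") == 1000
```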

These are just some of the behaviours I’ve observed A-E2E-UI tests encourage, but they also discourage other behaviours which may be desirable. 

They can discourage development teams from

  • Building testability into the design of the systems 
    • Why would you if you know you can “easily” test something end-to-end with an automation tool? 
  • Maintainability of the code base
    • By limiting the opportunities to build a more testable design you decrease the maintainability of the code through tests 
    • If you need to make a change it’s harder to see what the change in the code affects
    • By having more fine grained tests you can pinpoint where issues exist
    • A-E2E-UI tests just indicate that a journey has broken and how it could affect the end users
    • Not where the problem was actually introduced  
  • Building quality at the source 
    • You are deferring testing towards the end of the development pipeline when everything has been integrated.  Instead of when you are actively developing the code.
    • Are you really going to go back and add in the tests especially if you know an end-to-end test is going to cover it?
  • The responsibility to test your work 
    • With the “safety net” of the A-E2E-UI tests you send the message that it’s ok if something slips through development 
    • If it affects anything the A-E2E-UI tests will catch it
    • What we should be encouraging is that it’s the developers responsibility to build AND test their work
    • They should be confident that once they have finished that piece of code it can be shipped 
    • The A-E2E-UI tests should act as another layer that builds your team’s confidence that nothing catastrophic will impact the end users. Think of them as a canary in the coal mine: if it stops chirping then something is really wrong…   
  • More granular feedback loops
    • By having A-E2E-UI tests you’re less likely to write unit and integration tests which give you fast feedback on how that part of the code behaves 
    • Remember code level tests should be testing behaviour not implementation details 
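As a small sketch of what building testability into a design can look like (the function and scenario are hypothetical): pass dependencies such as the current time in explicitly rather than reaching for them globally, and the test can then assert on behaviour, what the user sees, without spinning up the full stack or caring how the code works internally.

```python
from datetime import datetime


def greeting(now: datetime) -> str:
    # The current time is injected, not fetched, so the behaviour can
    # be tested in isolation for any time of day.
    return "Good morning" if now.hour < 12 else "Good afternoon"


# These tests assert on behaviour (the greeting shown), not on
# implementation details (how the hour is compared internally).
assert greeting(datetime(2020, 7, 20, 9, 0)) == "Good morning"
assert greeting(datetime(2020, 7, 20, 15, 0)) == "Good afternoon"
```

The same injection idea scales up: a clock, a database, a payment provider all become parameters, which is exactly the testability that heavy reliance on A-E2E-UI tests tends to discourage.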

If A-E2E-UI tests cause undesirable behaviours in teams should we stop writing them? While they are valuable at demonstrating end users journeys we shouldn’t be putting so much of our confidence that our system works as intended into them. They should be another layer which helps build the teams confidence that the system hangs together. 

If we put the vast majority of our effort and confidence into these automated end-to-end tests then we risk losing one of a team’s greatest abilities: building testability into the design of our systems. But just like the automated UI tests, building in testability takes conscious effort. It will take time, patience and experience for the whole team to understand and benefit from.

What is quality? 

tl;dr: Lenses of quality is a way to think about what quality means to software development teams.
A while back someone on twitter asked what does quality mean to you. To which I responded:
https://twitter.com/JitGo/status/1010031582395224064

I didn’t realise it at the time, but this fitted in nicely with Gerald Weinberg’s description of quality:
   Quality is value to someone

For most teams that someone could be their Product Owners (PO), the organisation they work for, the team they work with and their end users. All these groups of people could have very different views on what value means to them, and even contradicting views in some cases.

For your organisation quality could be whatever helps them reach their targets for that quarter or year.

For your Product Managers (PM) or Owners their measure of a quality product could be a system or feature released on time.

For your team it could be a system that they can build, deploy, maintain and add to easily.

For your end users, well it could be something as simple sounding as it just works.

Also, it’s not that the testers or developers don’t care about shipping early (what the PM wants); it’s more that they might care about maintainability, or that it does what we said it would do, more than shipping early.

All of this could be just the tip of the iceberg and there could be many other people and views on what quality means to them.

As testers we need to help development teams understand that quality is measured by people in different ways.

Lenses of Quality

One of the ways I’ve started to help teams understand this is via the idea of lenses of quality.

Each of these groups of people view quality with a different lens therefore see the same system differently to one another. We as testers should help our teams to see quality through these different lenses by helping them identify these groups and what their measures of quality are.

This would help teams to start thinking about who their stakeholders are and how they are likely to perceive the systems that they build.

Would it be possible to line up all the different lenses and focus on one common quality metric?

If so would this be more like a microscope pulling into focus the hidden details or more like a telescope and allow you to see far into the distance?

Got an opinion then say so in the comments.