The courage to supercharge your testability

Testability is all about building quality in. It’s about identifying known issues while coding, before they become a problem. Pairing testers into this process can supercharge the testability feedback loop, allowing you to pick up both known and unknown issues.

But pairing devs and testers together needs courage. Courage so that both disciplines can take interpersonal risks and share hard things such as what they don’t know, what they don’t understand or mistakes they’ve made. This will need both groups to listen, understand and ask questions to help each other through the process. Both groups will need to show curiosity, humility and empathy for one another. Not only will you feel uncomfortable during the process, it will take time too. The temptation to go back to inspecting for quality – dev and test handing work off to each other – will be hard to resist.

Pairing for testability is not just pair programming but working together to understand what the behaviour of the code being written should and shouldn’t be.

Devs and testers should work together to leverage the skills that each have, not get hung up on the skills they lack. If your pair is more exploratory-focused, identify ways that allow you to make the best use of those skills. If they are more technically inclined, then focus there.

Remember the key is to build quality in, not inspect for quality. So what can you do now that helps your team move in that direction?

Why do we respond the way we do in social situations?

A colleague recently shared an article with me called Managing with the brain in mind. It argues that employees experience the workplace as a social structure first and a workplace second, and that keeping this in mind might lead to better employee engagement.

It uses (some) neuroscience research to help explain how people are likely to respond in certain social situations, and identifies that people’s brains tend to view situations in terms of threats and rewards*.

It then goes on to detail five social qualities, abbreviated to SCARF (status, certainty, autonomy, relatedness and fairness), that you could use to apply some of the research to yourself and your teams.

I found some of the ideas quite compelling and wondered if it would be easier to consume modelled as a mind map.

People perceive social interaction in different ways. The research carried out over the years suggests that they may view it as a threat or a reward. If the threat response is too severe then it is likely to limit their brain’s ability to function and therefore limit behaviours that can lead to rational outcomes. A mild threat response, on the other hand, might be enough to provoke curiosity, free up brain resources and motivate them towards rational behaviours. A reward response is the most likely to lead to rational behaviours, as people have more brain capacity to take on additional information.

The key point is that you will never fully know to what extent someone is experiencing a situation – they may not even be able to articulate how they are feeling themselves. Some responses are very clear: angry or fearful is most likely a threat response, happy or joyful most likely a reward response. But most work situations are likely to cause a neutral response with no obvious outward emotion.

Therefore approaching a situation in a way that is more likely to cause a reward response in an individual is most likely to produce behaviours that lead to better outcomes.

What do you think about the ideas behind the SCARF model?

How would you use the model?

Is there another way in which we can help people in social situations?

Let me know what you think in the comments below.

*I have to admit this point is a little tenuous, as they make this connection by viewing brain scans of how people respond to pain and how they respond to social situations. They found that similar pathways in the brain were invoked whether it was a negative social situation or physical discomfort. But a lot of the ideas expressed in the article fitted in with what I’ve seen.

Building Quality in via Testability


Back in March 2018 I visited The Design Museum in London and came across the above installation.

What you can see is technology design classics, all the way from the first transistor radios on one side to the very first digital clocks on the other, with everything else in between.

If you stand back far enough you begin to see that they are not just randomly placed on the wall but in a particular order. As each piece of technology evolves you begin to notice that it starts acquiring functionality from the technology around it. Not only that, but the devices start to shrink in size at the same time. Eventually you realise that all of that technology has been absorbed into one device: the mobile phone, which is placed right in the centre of the wall.

With the older technology, its size and complexity were on show for all to see. The mobile phone, however, is different. It actually looks quite simple on the outside, with only a screen and a few buttons. But once you turn it on you begin to realise that this is something quite different to what came before. It not only provides all of the functionality of the technology that preceded it, but much more through the use of the internet. This isn’t just limited to mobile phones but pretty much all technology that comes after. From TVs and speakers to wristwatches, everything is slowly being interconnected via the internet.

The interesting thing about a lot of this new technology is that it is actually being developed and controlled by only a handful of companies, who on average have more resources than a lot of more traditional companies combined. On top of that, they have oriented themselves around their users unlike any company before, always working to provide them with the best experience they can come up with. It’s almost like they know every user is a click away from moving on to the next thing, but something keeps those users coming back. It sometimes looks hopeless competing against them, so what do we do?

Software is eating the world

Marc Andreessen back in 2011 wrote that “Software is eating the world”, which actually gives us some hope. Software allows us to compete again and perhaps tempt those users away. Remember, just like the competition, we are only a click away too. But what is going to get those users to click something new?

We need to be able to try different ideas and get them in front of our users, to start seeing what works and what doesn’t based on real data and not just on what people think is working.

Leadership to build Collaboration and Purpose

However, to be able to start doing that we need to start working better together as software teams. Simply having the best developers is not going to cut it. Research from Google’s Project Aristotle showed that this wasn’t the case, but that five team dynamics were better predictors of well-functioning teams: psychological safety, dependability, structure & clarity, meaning and impact.

Side note: Psychological safety is all about leadership and interpersonal risk taking and not just saying this is a safe space. Read The Fearless Organisation to learn more.

Once we can collaborate more effectively we can build psychological safety, dependability and structure into the team. From there we can start working on the team’s purpose. What is the team’s reason for being, what are they trying to accomplish, how will this help the organisation? Purpose is all about providing the team with clarity, meaning and impact. But simply asking people to collaborate and giving them a purpose isn’t going to build the team dynamics set out earlier. It’s going to need leadership to build the type of collaboration we need that has those characteristics. Leaders will need to be more hands on: demonstrating interpersonal risk taking, building dependability between team members and setting up the team’s initial structure.

What is quality?

For argument’s sake let’s say you’ve been able to get some way towards doing that. Now what? Do the users of your systems just magically start appearing? Team collaboration is only one part; now you need to start iterating on the system. You could just get the team to build whatever they think is a good idea and get them to do it as fast as they can. The risk is releasing half-baked systems that end up causing you or, worse, your users more problems than before. The thing is, users tend to want a quality product, but quality is subjective and so means different things depending on your viewpoint. Through the lenses of quality:

For your Organisation quality could be whatever helps them reach their targets for that quarter or year.

For your Product owner their measure of a quality product could be a system or feature released on time.

For your Team it could be a system that they can build, deploy and maintain easily.

For your Users, well it could be something as simple as it just works. – Lenses of quality

Building Quality in via Testability

If quality means different things to different people, how can you build quality into a product? By building in testability instead. What testability does is start to make your system objective. Meaning that instead of people saying the system feels easier to work with, or that they think it works correctly, you use tests to back up those feelings. Those tests have to be built into the system during development. It is not something that can be added very easily after the fact, especially by people who haven’t built the system in the first place. Testability is not about testing the system end-to-end but piece-by-piece. Each piece is a specific type of behaviour the system provides, tested in isolation from the other pieces. The scope and definition of the behaviours should be decided on by the team collaboratively. Unit testing can help with testing like this, but everyone has a different opinion on what a unit is and therefore has a very different approach to testing a unit:

What do the unit tests test?
Everyone seems to have a different opinion not only on what makes a unit but also on what makes a good or bad unit test

Which is why I have a problem with calling them unit tests, and have outlined how you could define them by calling them code tests first and then building a team understanding of what they are.
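As an illustration of testing one piece of behaviour in isolation, here is a small sketch in Ruby using Minitest. The DiscountCalculator class and its rule are invented purely for this example, they are not from any real system:

```ruby
require "minitest/autorun"

# Hypothetical piece of behaviour the team has agreed on:
# orders of 100 or more get a 10% discount.
class DiscountCalculator
  def discount_for(order_total)
    order_total >= 100 ? 0.10 : 0.0
  end
end

# Each test pins down one agreed behaviour in isolation from the
# rest of the system, so the team can point at a passing test
# instead of saying the code "feels" correct.
class DiscountBehaviourTest < Minitest::Test
  def test_orders_of_100_or_more_get_ten_percent_off
    assert_equal 0.10, DiscountCalculator.new.discount_for(100)
  end

  def test_smaller_orders_get_no_discount
    assert_equal 0.0, DiscountCalculator.new.discount_for(99)
  end
end
```

Whether you call these unit tests or code tests, the important part is that the team has collaboratively decided what the behaviour and its boundaries are.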

This type of testing is what I think gets us towards the point W. Edwards Deming (1900–1993), known in his time as the leading management thinker on quality, was making when he said:

“Cease dependence on mass inspection. Build quality into the product from the start” – W. Edwards Deming

So do we just need to work better together and build in testability to solve all our quality issues?

Software ate the world, so all the world’s problems get expressed in software

It’s been 9 years since Marc Andreessen wrote Software is eating the world. Ben Evans (a business analyst who worked for Andreessen) recently said in his presentation Standing on the shoulders of giants

“Software ate the world, so all the world’s problems get expressed in software” – Ben Evans

You can build in all the quality measures you want, but that doesn’t address any of the problems we’ve unintentionally encoded into the system. You are going to need someone who understands how the team works (and how the problems are encoded into the system), knows how the system is deployed into the real world (and the domains in which it is used) and who those users are (and what they expect of it). That someone already exists within teams, but most teams have simply been using them as a safety net to check their work and, to channel my inner Deming, “carry out mass inspections of our systems”. We’ve called them testers, but maybe it’s time we start to think of them as something else?

Software levels the playing field again and allows us to innovate in ways that no other tool before it has ever allowed. However, to do so we need to work collaboratively as teams to build testability into our software systems, and testers to raise awareness of what quality is for our products. From this foundation we can begin to compete again and really start offering our users that temptation to click something new.

Junaid Valimulla: Are we nearly there yet?

Guest post from Junaid Valimulla, currently working as a Senior Test Engineer who is continually learning and honing his craft. I first met Junaid back in my Sony Ericsson days, when he was a Test Analyst, and was always impressed with his ability to pick things up quickly and start helping teams deliver on their goals, but most of all with his great sense of humour.

I hope you enjoy this post as much as I did as I think his metaphor so aptly describes what a lot of us as testers have felt.

Are we nearly there yet?

Every parent can attest to hearing these words at least once in their life. It is often said by a frustrated child who has been jailed in the back of the car, for what seems like a lifetime to them, on a journey to their grandparents’ house.

Now if we look at this from a tester’s perspective, it becomes obvious that it is not too dissimilar to the question uttered all too often by many a tester:

“Are we ready to test yet?”

Except in this case the tester is the frustrated child, the developer is the parent driving the car and the grandparents’ house is the ‘ready to test’ column!

This is the age-old problem, which must change if we are to work in a truly agile environment. Testing has to be seen less as the thing that happens at the end and more as the thing that happens throughout! Testers need to be collaborated with right from the kick-off. The old cumbersome approach encourages the ‘throw it over the wall’ mentality, which often leads to a build-up of tickets and testing tasks and reaffirms the notion that testing is a bottleneck.

This change can be achieved but requires a number of things to happen. First and foremost there needs to be a mindset change from all parties. This is not just the testers, but developers, designers etc. All those involved with the creation of the product need to buy in to the collective-responsibility mantra. This, however, is not something a tester can change alone, only influence.

Another thing that can be done, and something I believe the tester can drive, is to make changes to the scrum board. Every team I have worked in has had a ‘Ready for test’ or ‘Testing’ column. A column which ALWAYS comes after the ‘Development’ column. This has to change. This simple yet powerful column undoes all that we are trying to achieve. This column maintains the old tired way of thinking.

So, for testing to really be recognised on the board it must be removed from it? Really?? This may sound contradictory but it is the only way that testing will be seen as an ongoing task performed throughout rather than the bottleneck ‘bit at the end’! No longer will tickets be in ‘Development’ for developers and ‘Testing’ for testers. They will be ‘In Progress’ for all to work on together and collaborate. This will encourage conversation between testers and developers, encourage faster feedback and ultimately require less testing at the end! Which is what we all want, right?

To conclude what columns would I have then? Simple:

  • Ready for Sprint/Ready to pick up
  • In Progress
  • Done

 

More about Junaid:

Senior Test Engineer (BBC Sport app) with over 13 years’ experience working in the testing discipline. He has tested software for a number of companies, ranging from mobile OSs and apps, websites, TV set-top boxes and wireless routers to everything in between!

You can find him on Twitter

How to break the rules?

Dan North gave a really interesting talk at last year’s GOTO Conference called How to break the rules.

He takes Eliyahu Goldratt’s four steps for how technology is adopted, which Goldratt presented in a series of lectures on why people didn’t take up the Theory of Constraints (TOC) ideas from his book The Goal.

Eliyahu stated that organisations need to go through four steps before a new technology can be successfully adopted:

  1. What is the power of the technology?
    • What does it do for you?
  2. What limitation does the technology diminish?
    • What will it make better?
  3. What rules enable us to manage that limitation presently?
    • And how much are we wedded to those rules?
  4. What new rules will we need?
    • For the technology to succeed?

Dan then takes these four steps and applies them to real companies that either succeeded or failed to take advantage of new technologies. The interesting part, for the companies that failed, is what rules (step 3) they needed to break to make use of the new technology.

I’ve made my rough notes on the talk available below but I highly recommend watching it.

 


My notes from the talk:

Talks about The Goal (book) and Beyond the Goal – lectures by Goldratt

  • Series of lectures about 20 years after The Goal was released
  • He attempts to explain why people didn’t apply his Theory of Constraints successfully, if at all.

First two lectures are: How to adopt Technology?

What is technology: The application of knowledge

For it to be adopted then answer these questions:

  1. What is the power of the technology?
    • What does it do for you?
  2. What limitation does the tech diminish?
    • What will it make better?
  3. What rules enable us to manage that limitation?
    • How much are we wedded to those rules?
  4. What new rules will we need?
    • For the technology to succeed?
He then goes on to apply these questions to real companies.
  • MRP – the first application of computers in a business setting
    • Calculated the cost of materials for manufacture at DuPont
    • Which allowed them to ship faster than others
    • But when the competition tried it they couldn’t compete, as to take advantage of the system you need to change the rest of your business
  • Goes on to apply the same questions to ERP and
  • Cloud computing (orgs moving to it)
  • But also continuous delivery

Interesting: Dell became as big as it did because it could work in smaller batches than the competition. All the other companies offered you a fixed machine. Dell changed this by going online only, allowing you to customise your PC, and could ship faster than the competition.

Swedish saying: when talking to a farmer, use a farmer’s words
  • Commenting on how we collaborate across divisions
  • If you want to collaborate successfully you need to either be speaking the same language or use words the other understands

A lot of organisations are failing with Agile/Kanban/Scrum etc. because they still have all the existing rules in place (see step 3 earlier).

To be able to adopt the new technology you need to move to the new rules (point 4) otherwise you’re still doing the same thing.

Used the example of Amazon not doing Cost accounting and moving to throughput accounting and flow of value

  • Cost accounting looks at what each department costs
  • Throughput accounting looks at what is being produced by the department and what value that provides
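The contrast between the two views can be sketched with some illustrative numbers (the departments and figures below are invented, not from the talk):

```ruby
# Two departments with an operating cost and the value of work
# they ship. Numbers are purely illustrative.
departments = {
  "assembly"  => { cost: 40_000, value_shipped: 90_000 },
  "packaging" => { cost: 10_000, value_shipped: 15_000 },
}

# Cost accounting asks: what does each department cost us?
total_cost = departments.values.sum { |d| d[:cost] }

# Throughput accounting asks: what value does the work flowing
# through the departments actually produce, net of those costs?
total_value = departments.values.sum { |d| d[:value_shipped] }
throughput  = total_value - total_cost

puts "Total cost: #{total_cost}"  # 50000
puts "Throughput: #{throughput}"  # 55000
```

Optimising the first number in isolation (cut each department’s cost) can easily reduce the second (the flow of value), which is the trap the talk describes.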

The problem with step 3 is that it eventually becomes the culture of the organisation and trying to unpick it then is really hard.

So How to break the rules?

  • Understand the power of technology
  • Recognise the limitation the technology will diminish
  • Identify the existing rules we use to manage the limitation
  • Identify and implement the new rules

Summary

Another great talk from Dan, which goes to show that a lot of our problems in software development have been issues in other industries.

Not only that, they have been solved, but we tend to overlook them as they don’t look like our industry. What has manufacturing got to do with writing software? In all, don’t take my word for it: go watch the talk and start having a look at Eliyahu Goldratt’s body of work. He was onto something…

Automating testing for BBC iPlayer mobile part two: automation

Originally posted on the BBC website 30 June 2014

This is the second part of a three post series exploring how the BBC iPlayer Mobile testing team has integrated automated user interface (UI) testing into their development practice.

This post will deal with automation.

Having created collaborative feature files through the “3 Amigos” sessions and set up a robust system for creating and disseminating them, the natural next step is to begin automating them to increase productivity and quality.

To make the tests as easy as possible to write, we implemented the page object pattern so that the developers were clear about how to write more maintainable and less flaky tests. This also meant that tests were written more consistently and allowed for more code reuse.

In addition to the page object pattern, we created helper modules containing all the commands developers would need to drive the app, making it easier to quickly look up what commands are available, and demonstrated how to use the built-in debug tools to query the app and find the screen elements.
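To give a feel for the pattern, here is a minimal page-object sketch in Ruby. The HomePage class, the element labels and the driver interface are illustrative stand-ins, not the team’s actual code:

```ruby
# A page object wraps one screen of the app. Tests call
# intent-revealing methods rather than poking at UI elements
# directly, so a layout change only breaks this one class.
class HomePage
  def initialize(driver)
    @driver = driver
  end

  def play_first_episode
    @driver.tap("home_play_button")
  end

  def displayed?
    @driver.element_exists?("home_screen")
  end
end

# Stand-in for the real app driver, just to show the shape of
# the API; it records the taps it receives.
class RecordingDriver
  attr_reader :taps

  def initialize
    @taps = []
  end

  def tap(label)
    @taps << label
  end

  def element_exists?(label)
    label == "home_screen"
  end
end

driver = RecordingDriver.new
page = HomePage.new(driver)
page.play_first_episode
puts driver.taps.inspect # ["home_play_button"]
```

In the real setup the driver role is played by the automation framework, and the helper modules mentioned above supply the commands that page objects like this one call.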

Although we explored many different options, we decided to use Calabash and Ruby as the predominant tools to automate our tests, as they worked cleanly with Cucumber (our test runner) and because Calabash had support for both iOS and Android. To help everyone get to grips with the new systems, internal workshops were held to step developers through real-life examples, helping them organise the feature folders, create page objects and learn the types of Calabash commands available to drive the app. By providing step-by-step guidance, everyone was able to get a strong understanding of the process and where they came into it.

Initially, creating the automated UI tests is a slow process, as you are required to create a fair amount of support code (including page objects, working out how to access elements on screen and working around timing issues with the app), but once these foundational aspects are in place, automating tests gets faster and faster.

If a developer ever gets into difficulty, Developers in Test are available to pair up and help iron out any problems.

There are many advantages to developers writing the automation tests. Ownership creates a sense of responsibility and a smoother process for delivering and testing the products. It also drives the developers to look at the results and take advantage of the benefits of faster feedback.

With developers using the feature files to write their tests, it ensures that the product works as intended, rather than based on an assumption, which speeds up the development process. The benefit of this is that everyone takes mutual responsibility for automation, and it prevents testing being pushed to manual when a DiT is absent or unavailable, which keeps the process moving more smoothly and effectively.

Running Android tests

Another benefit of using Calabash is that it uses accessibility labels to access on-screen elements. If the developers build the tests they have to enable the labels, therefore helping to make the app more accessible. For more information on accessibility practices see Senior Accessibility Specialist Henny Swan’s blog posts.

You may be wondering: what are the DiTs doing if the developers are creating all the automation code?

DiTs remain embedded in the team and available for pairing to help automate tests that are not straightforward. They help build up tools to aid automation, e.g. worker methods to carry out complex interactions, or work out how a feature could be automated when it’s not immediately obvious. They help keep Continuous Integration (CI) jobs running and investigate brittle tests. DiTs also tend to be the experts with the automation frameworks, so they advise whether a feature is worth automating or whether it’s better to test it manually.

Once the feature file has been automated, the tests are pushed into the main build pipeline. They are run approximately four times a day, with a subset run on each check-in of code. We have our build job statuses displayed on large screens (one of the advantages of working near the TV platforms team is that they have a lot of reference TVs we can use when they are not being tested on), so if anything fails the whole team knows straight away.

Build monitors

In the final post of this series I’ll tell you how we handle legacy and new features, and what the future holds for our team.

Originally posted on the BBC website 06 August 2014

 

Automating testing for BBC iPlayer mobile part one: 3 Amigos

Originally posted on the BBC website 30 June 2014

In this three part series of blog posts I will be exploring how the BBC iPlayer Mobile team has integrated automated user interface (UI) integration testing into their development practice.

I’m a Senior Developer in Test (DiT) working in Mobile Platforms, BBC Future Media. I work with the BBC iPlayer Mobile team to help them automate their testing, investigate new tools, advise on how best to use them in their everyday work and share this with other teams across the BBC. In the 16 months that I have been with the BBC I have seen a great deal of change in development practice, which I will be sharing in this series of posts.

I was initially brought onto the team to identify how to automate a greater number of tests in order to increase the speed of release without risking the quality of the end product.

When I first joined the team it was apparent that the developers had all individually started to automate some of the tests; however, it became clear that there was no continuity to the test scripts, with each developer using their own style. Inevitably, when a script broke, if it wasn’t investigated by the developer who wrote the test, it would take a long time to identify and repair the problem. Because of this, it would usually result in adding a simple patch to keep it running, or in disabling it. Because of these issues around automation, the team began to lose confidence in the testing method and reverted to manual testing.

The lack of systems within the process was problematic in itself, with some features having a lot of automation testing carried out and others receiving little or none, and no one taking responsibility for ensuring that the testing was happening. This meant that each test was insular, with only the designated developer having access to the results.

From the outset, it was decided to take things slowly and begin with the area that would give the most value with the least amount of effort. The team understood that feature files are a great way to describe how the systems should work and that a collaborative approach was needed for successful implementation. It was here that we decided to use the idea of the ‘3 Amigos’ to write the features.

3 Amigos

To set up the ‘3 Amigos’ we needed to recruit a developer from each platform (iOS and Android), a tester, a product owner/business analyst and a DiT. This is obviously more than three “amigos”, however we needed a representative from each area of the process, and the DiT to lead the sessions until everyone felt comfortable with the process and able to run them independently.

The advantage of having a DiT, or anyone experienced in writing feature files, is that they act as chair and mentor. They are able to guide the team to write concise scenarios and ensure the conversation stays on track. They also help to make sure that everyone in the meeting contributes and is comfortable with what the features are specifying.

Ordinarily, the process would start with the user story, created earlier by the Business Analyst (BA) working with the Product Owner. This helps to identify each scenario needed to cover the feature, only going into the given/when/then steps if it wasn’t immediately clear how a scenario would play out, or if there was confusion amongst the team. Once the sessions are over, the DiT or BA fleshes out the remaining given/when/then steps, attaching them to the user story in Jira.
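A feature file coming out of such a session might look something like this (the feature, programme name and wording here are invented for illustration, not taken from the actual iPlayer backlog):

```gherkin
Feature: Resume playback
  As a viewer I want to resume a programme where I left off
  so that I don't have to find my place again

  Scenario: Resuming a partially watched programme
    Given I have watched the first 10 minutes of "Doctor Who"
    When I open the programme again
    Then playback starts from 10 minutes in
```

The scenario title alone captures the intent; the given/when/then steps are only spelled out when the behaviour isn’t obvious or the team needs to agree on the details.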

3 Amigos gather around a BBC iPlayer screen

Because BBC iPlayer is available on iOS and Android, we only ever had one feature file that both products would use. This made sure that we kept feature parity and helped us start delivering features on both platforms at the same time.

The ‘3 Amigos’ helped everyone involved develop a strong understanding of a feature and how it may need to be altered to work on each platform. It also helped to foster a more collaborative approach to creating feature files and to develop a better understanding of what the Product Owners wanted, without prescribing the solution to the team, letting them decide how it should work.

Anyone not involved with a 3 Amigos session could read the feature file or speak with any of the developers or testers present to get a heads-up. We try to make sure that different developers and testers attend the ‘3 Amigos’ so that everyone can run a session, without a particular person becoming a bottleneck.

Once a developer has picked up the ticket to develop, they submit the feature file into our source control system, removing any reference to the feature from the ticket apart from the user story and any acceptance criteria, and leaving only a link to the location of the feature file for future access. This ensured there was only ever one version of the truth, and if any changes were required there would be an audit trail to identify who made the alteration.

In my next post I will expand on how we use the feature files to automate our testing.
