Why do we respond the way we do in social situations?

A colleague recently shared an article with me called Managing with the brain in mind. It argues that employees experience the workplace as a social structure first and a workplace second, and that keeping this in mind might lead to better employee engagement.

It uses (some) neuroscience research to help explain how people are likely to respond in certain social situations and identifies that people’s brains tend to view situations in terms of threats and rewards*.

It then details five social qualities, abbreviated to SCARF (status, certainty, autonomy, relatedness and fairness), that you can use to understand how to apply some of the research to yourself and your teams.

I found some of the ideas quite compelling and wondered if they would be easier to consume modelled as a mind map.

People perceive social interaction in different ways. The research carried out over the years suggests that they may view it as a threat or a reward. If the threat response is too severe then it is likely to limit their brain’s ability to function and therefore limit behaviours that can lead to rational outcomes. If it is a mild threat response then it might be enough to provoke curiosity, free up brain resources and motivate them towards rational behaviours. A reward response is the most likely to lead towards rational behaviours, as people have more brain capacity to take on additional information.

The key point is that you will never fully know to what extent someone is experiencing a situation – they may not even be able to articulate how they are feeling themselves. Sometimes the response is very clear: angry or fearful most likely signals a threat response, happy or joyful most likely a reward response. But most work situations are likely to cause a neutral response with no obvious outward emotion.

Therefore approaching a situation in a way that is more likely to cause a reward response in an individual is most likely to produce behaviours that lead to better outcomes.

What do you think about the ideas behind the SCARF model?

How would you use the model?

Is there another way in which we can help people in social situations?

Let me know what you think in the comments below.

*I have to admit this point is a little tenuous, as they make the connection by viewing brain scans of how people respond to pain and how they respond to social situations. They found that similar pathways in the brain were invoked whether it was a negative social situation or physical discomfort. But a lot of the ideas expressed in the article fitted in with what I’ve seen.

How to learn from failure

Reading time 13 minutes

Below are my personal notes from Amy Edmondson’s excellent article Strategies for learning from failure. It’s a long read but I highly recommend it over my notes as it goes into a lot more detail than I have covered.

Summary

Not all failures are the same and categorisation of failures can make a big difference in enabling learning from them.

Why should testers care?

Considering we deal with software failure all the time, we have a tendency to forget the human cost of failures – especially in terms of how the failure occurred (the team), how it affects the users and the outcome for the business. This article is a great introduction to how we can learn from failure ourselves, and then how we could enable our teams and business to learn too by reframing errors as different types of failure.

[Organisations] that catch, correct, and learn from failure before others do will succeed

Amy Edmondson

Amy classifies failures into three categories:

  • Preventable
  • Complex
  • Intelligent

But we have a tendency to view all failures as one type. In software testing we group them into different levels of risk, but generally all failures are errors, which means something isn’t right and should be avoided. We’ve started to try and learn from them, but the need for interdisciplinary teams to do so is a cost that is often too high to pay, so it doesn’t happen very often. I think if we focused our efforts on investigating complex failures we could use the learnings to start minimising preventable issues and stop some of them happening altogether.

How should we respond to failure?

Some people believe that responding constructively to failures could give rise to an anything-goes attitude. They think: if people aren’t blamed for failures, how else will they try as hard as possible to do their best work? But this attitude tends to make people avoid failure and in some cases cover it up.

What we actually need is a culture that makes it safe to admit and report failures (so we can learn from them) coexisting with high standards for performance (to make use of that learning to get better).

The blame game

If people see failure as something to be avoided you end up in the blame game, which has a spectrum of reasons for failure from blameworthy to praiseworthy:

Blame game

🤔 Notice how the blameworthy reasons are about individuals but the praiseworthy ones are all about the work.

I wonder how many times people blame not others but themselves for a failure, and hence keep quiet or downplay issues when they occur?

To embrace failure we need to classify it better than the catch-all term “failure” encourages. Amy Edmondson suggests these three categories: preventable, complex and intelligent failures.

Preventable

  • These are usually found in routine tasks that are well defined and the outcomes are well understood
  • Preventable failures tend to occur when we deviate from this routine
  • In software engineering certain routine tasks can and should be automated, such as build processes and specific types of checks
  • If they do need to be performed manually then task lists and checklists are well suited to these types of tasks
    • Note: exploratory testing falls under intelligent failures
  • Failures which result from these types of tasks can usually be mitigated through a better understanding of the work we do, how we do it and, most importantly, why
  • When we spot these types of failures (deviation from the routine) we should immediately address them
  • This is in part about stopping errors from being passed down the process and building quality in
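To make the checklist idea concrete, here’s a minimal sketch of encoding a routine release checklist as automated checks rather than a manual list. The check names and rules are hypothetical examples, not a real release process:

```python
# Sketch: a routine release checklist encoded as automated checks.
# The checks and rules here are invented for illustration.

def check_version_bumped(old: str, new: str) -> bool:
    """The release version must differ from the last released one."""
    return new != old

def check_changelog_updated(changelog: str, version: str) -> bool:
    """The changelog must mention the new version."""
    return version in changelog

def run_checklist(old_version: str, new_version: str, changelog: str) -> list[str]:
    """Run every check and return the names of any that failed."""
    checks = {
        "version bumped": check_version_bumped(old_version, new_version),
        "changelog updated": check_changelog_updated(changelog, new_version),
    }
    return [name for name, passed in checks.items() if not passed]

failures = run_checklist("1.2.0", "1.3.0", "## 1.3.0\n- fixes")
print(failures)  # an empty list means the routine passed
```

The point isn’t the specific checks but that deviations from the routine are caught mechanically, every time, instead of relying on someone remembering the list.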

Complex failures

  • Many systems we work in are complex and too big for any one person and in most cases even groups of people to fully understand
  • This means complex systems can be unpredictable and ambiguous and fail in ways we could not have anticipated
  • The way in which complex failures occur can in some cases be traced to a number of small things all lining up in just the wrong way
  • But assuming failures will never occur is counterproductive, and we should build ways of handling things going wrong into the process
  • When complex failures do occur we should recognise them as such and investigate them in a praiseworthy way, to understand all the components that led to the failure and identify whether any of the smaller issues involved can be made preventable
    • For example, most accidents in hospitals result from a series of small failures that went unnoticed and unfortunately lined up in just the wrong way

Intelligent failures

  • Named by the Duke University professor of management Sim Sitkin as intelligent failures
  • These are the failures that occur during experimentation
  • They help you understand what works and what doesn’t
    • And importantly quickly
  • These are situations where the answers are not knowable in advance
  • The only way you can find out is to actually do it
  • Exploratory testing is all about raising awareness of intelligent failures
  • As Amy Edmondson calls them they are failures at the frontier
    • Situations that haven’t happened before
    • Or maybe won’t happen again
  • For software engineering this is a lot of the work that we are doing
    • Hence agile software development so we can adapt to the changing environment
    • To do things in a way that helps you learn from your work
    • We should be producing lots of intelligent failures that help us learn about the system we’re building, the people that use it and the domain in which it is used
    • Exploratory testing is all about exploring a system and seeing in what ways it can fail to better understand how it works

Small experiments over Big Bang experiments

At the frontier, the right kind of experimentation produces good failures quickly. Managers who practice it can avoid the unintelligent failure of conducting experiments at a larger scale than necessary.

Trial and error?

“Trial and error” is a common term for the kind of experimentation needed in these settings, but it is a misnomer, because “error” implies that there was a “right” outcome in the first place.

Tolerance of failure

We need to be able to accept complex and intelligent failures and understand that doing so does not mean mediocrity. Tolerance is actually something we need in order to learn from these types of failures. The problem with failure is that there is almost always an emotional element to it, so it needs leadership to enable the learning that has to happen.

How do you learn from failure?

Leaders should insist that their organizations develop a clear understanding of what happened—not of “who did it”—when things go wrong.

This requires consistently:

  • reporting failures, small and large;
  • systematically analysing them; and
  • proactively searching for opportunities to experiment.

Anyone working on experimental work needs to clearly understand that the faster we fail, the faster we will succeed – a subtle but important concept that most people miss.

  • The quicker things fail the quicker you can pivot or try another idea that can succeed
  • But the longer that failure takes the longer you are executing on an idea that will not help your objective
  • What is the opportunity cost of working on one thing and not the other?

Some people may approach experimental work as if it were well defined and understood, like production-line work where you need to produce the same thing over and over.

For example, statistical process control, which uses data analysis to assess unwarranted variances, is not good for catching and correcting random invisible glitches such as software bugs.


In a typical software team this would be predefined test cases or automated checks.
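As a hypothetical illustration of why predefined checks miss random glitches, here’s a sketch in which a scripted check passes while varied, exploratory-style inputs surface a bug the script never exercised. The `average` function and its bug are invented for the example:

```python
import random

# Hypothetical example: a predefined check exercises only the inputs we
# thought of, so it can pass while the function still hides a glitch.

def average(values):
    # Buggy on purpose: an empty list raises ZeroDivisionError.
    return sum(values) / len(values)

# The predefined (scripted) check: passes, so the build goes green.
assert average([2, 4, 6]) == 4

# Exploratory-style probing: throw varied inputs at it, watch for surprises.
random.seed(0)
surprises = []
for _ in range(100):
    data = [random.randint(-5, 5) for _ in range(random.randint(0, 3))]
    try:
        average(data)
    except ZeroDivisionError:
        surprises.append(data)

print(f"{len(surprises)} inputs the scripted check never tried")
```

The scripted check is still worth having; it just answers a different question (“does the behaviour we anticipated still hold?”) from the exploratory one (“what happens that we didn’t anticipate?”).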

There are three main ways to learn from failure: detection, analysis, and experimentation.

Detection

We need to detect issues and make them visible early in our processes, before they become bigger issues later on.

Don’t shoot the messenger

Unfortunately a lot of people are reluctant to raise issues early on in the process for all manner of reasons, the biggest culprit being people unwilling to take the interpersonal risk of raising issues.

One of the best ways to combat this is for management to lead by example: not only encouraging the raising of issues early in the process, no matter how small, but also applauding the people that do and having a system in place to make something happen about them.

Another issue is the human tendency not to admit failure, due to the stigma attached to it: “it failed, therefore I’ve failed”. So people keep going, hoping that things will get better, when they should have admitted failure – or worse, they haven’t realised they’ve failed due to inadequate measures or goals when starting out.

Changing the stigma around failure is one way to improve the situation – for example, failure parties to encourage the reporting of failures and to help people look at the situation in another way.

Example of how other organisations detect errors

Through speaking up, supported by management. From Amy Edmondson:

In researching errors and other failures in hospitals, I discovered substantial differences across patient-care units in nurses’ willingness to speak up about them. It turned out that the behavior of midlevel managers—how they responded to failures and whether they encouraged open discussion of them, welcomed questions, and displayed humility and curiosity—was the cause. I have seen the same pattern in a wide range of organizations.

Building quality in

The idea of the andon cord from the Toyota Production System does just this: noticing small deviations in the process and correcting them there and then to constantly improve the system.

For software engineering this is all about building quality into the process instead of inspecting for it at the end. Inspecting at the end is almost too late to make a difference, due to the increased cost in time and cognitive load to make the change. This usually ends in discussions such as “users are never going to notice X”, “no one is ever going to do Y” or “let’s see if it’s going to become a problem first”.
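One way to picture an andon-style pipeline in software is a line of stages where any stage can pull the cord and halt immediately, instead of letting a deviation flow downstream to a final inspection. A minimal sketch, with invented stage names and deviation rules:

```python
# Sketch of an andon-style gate (hypothetical stage names): every stage may
# "pull the cord", halting the line immediately instead of letting a
# deviation flow downstream to a final inspection.

class AndonPulled(Exception):
    pass

def lint(artifact):
    if "TODO" in artifact:
        raise AndonPulled("lint: unfinished work in artifact")
    return artifact

def unit_checks(artifact):
    if "bug" in artifact:
        raise AndonPulled("unit checks: known failure pattern")
    return artifact

def package(artifact):
    return f"packaged({artifact})"

def run_line(artifact):
    for stage in (lint, unit_checks, package):
        artifact = stage(artifact)   # stops at the first deviation
    return artifact

print(run_line("feature-42"))        # prints "packaged(feature-42)"
try:
    run_line("feature-43 TODO")
except AndonPulled as stopped:
    print(stopped)                   # the line stopped at the lint stage
```

The design choice mirrors the cord itself: the cost of stopping is paid immediately and locally, rather than compounding into a late, expensive inspection debate.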

Analysis

Once failures have been detected it is important to not just look at the symptoms of the problem and move on but to dig into the root cause of the issues.

Unfortunately we tend not to want to do this, as it can be painful to admit that something went wrong – especially if we are the cause of it – and it can negatively affect our self-esteem and confidence. There is also an element of interpersonal risk associated with admitting failure that contributes to people not wanting to look at issues too deeply. “What if people think I’m incompetent?”

Culture is another aspect that needs to be in place for inquiry into failure to occur. Digging into failures needs:

inquiry and openness, patience, and a tolerance for causal ambiguity

But a lot of organisational cultures are geared towards action and results, not the reflection needed for learning from failure.

We are also highly susceptible to the fundamental attribution error: we downplay our responsibility and blame external factors when we fail, and do the opposite when others do.

Amy Edmondson’s research back in 2010 showed that failure analysis is often limited and ineffective – sadly I think this is still the case for a lot of organisations.

Analysing complex failures is difficult as they tend to occur across teams and departments, and for the reasons listed above most people only focus on the symptoms rather than getting at the underlying causes. Therefore it’s best to use multidisciplinary teams to carry out the investigation, with the support of management making clear that you are looking at what happened, not what someone did or didn’t do.

From the NASA Columbia disaster

  • A team of leading physicists, engineers, aviation experts, naval leaders, and even astronauts devoted months to an analysis of the Columbia disaster.
  • They conclusively established not only the first-order cause: (symptom)
    • a piece of foam had hit the shuttle’s leading edge during launch—but also
  • second-order causes: (underlying reason)
    • A rigid hierarchy and schedule-obsessed culture at NASA made it especially difficult for engineers to speak up about anything but the most rock-solid concerns.

Experimentation

  • A critical activity for effective learning is strategically producing failures—in the right places, at the right times—through systematic experimentation.

For scientists
  • 70% of experiments will fail
  • They recognise that failure is not optional but a part of the process
  • And that failure holds valuable information that they need to extract and learn from before the competition 🤔

In contrast, when product companies design new products they plan for success. They set the product up for optimal conditions that work, instead of representative ones that they can actually learn from. Therefore the pilot only produces information about what does work, not what doesn’t.

From Amy Edmundson:

  • A small and extremely successful suburban pilot had lulled Telco executives into a misguided confidence.
  • The problem was that the pilot did not resemble real service conditions: It was staffed with unusually personable, expert service reps and took place in a community of educated, tech-savvy customers.
  • But DSL was a brand-new technology and, unlike traditional telephony, had to interface with customers’ highly variable home computers and technical skills.
  • This added complexity and unpredictability to the service-delivery challenge in ways that Telco had not fully appreciated before the launch.
  • A more useful pilot at Telco would have tested the technology with limited support, unsophisticated customers, and old computers.
  • It would have been designed to discover everything that could go wrong—instead of proving that under the best of conditions everything would go right.
  • Of course, the managers in charge would have to have understood that they were going to be rewarded not for success but, rather, for producing intelligent failures as quickly as possible.
  • What incentives are you setting up for your employees? The things you reward are the things you will get.

What makes exceptional organisations?

exceptional organisations are those that go beyond detecting and analysing failures and try to generate intelligent ones for the express purpose of learning and innovating.

Can you think of any organisation that purposely injects failures into its systems to see how they behave? Hint: they named the tool after monkeys 🐒 and in the process created a whole new discipline: chaos engineering. These experiments don’t have to be that big either:

[you] don’t have to do dramatic experiments with large budgets. Often a small pilot, a dry run of a new technique, or a simulation will suffice.

recognise the inevitability of failure in today’s complex work organizations. Those that catch, correct, and learn from failure before others do will succeed

Amy Edmondson
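A chaos experiment can be sketched in miniature: wrap a dependency in a failure injector and check that the caller degrades gracefully. Everything here (the greeting service, the fallback, the failure rate) is a made-up example, not Netflix’s actual tooling:

```python
import random

# A miniature, hypothetical chaos experiment: wrap a dependency with a
# failure injector and check the caller's fallback actually works.

def flaky(call, failure_rate, rng):
    """Return a version of `call` that fails some of the time."""
    def wrapped(*args):
        if rng.random() < failure_rate:
            raise ConnectionError("injected failure")
        return call(*args)
    return wrapped

def fetch_greeting(name):
    return f"Hello, {name}!"

def fetch_with_fallback(call, name):
    """The behaviour under test: degrade gracefully when the call fails."""
    try:
        return call(name)
    except ConnectionError:
        return "Hello, guest!"

rng = random.Random(42)
chaotic_fetch = flaky(fetch_greeting, failure_rate=0.5, rng=rng)
results = [fetch_with_fallback(chaotic_fetch, "Ada") for _ in range(20)]

# Every call should still produce a greeting, injected failures included.
assert all(r.startswith("Hello") for r in results)
print(sorted(set(results)))
```

The experiment is intelligent in Edmondson’s sense: the failures are produced deliberately, at small scale, to answer a question we couldn’t answer in advance.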

Building Quality in via Testability

7 minute read

Back in March 2018 I visited The Design Museum in London and came across the above installation.

What you can see is technology design classics, all the way from the first transistor radios on one side to the very first digital clocks on the other, with everything else in between.

If you stand back far enough you begin to see that they are not just randomly placed on the wall but in a particular order. As each piece of technology progresses in its evolution you begin to notice that it starts acquiring functionality from the technology around it. Not only that, but they start to shrink in size at the same time. Eventually you realise that all of that technology has been absorbed into one device: the mobile phone, which is placed right in the centre of the wall.

With the older technology, its size and complexity were on show for all to see. The mobile phone, however, is different. It actually looks quite simple on the outside, with only a screen and a few buttons. But once you turn it on you begin to realise that this is something quite different to what has come before. It can not only provide all of the functionality of the technology that came before it but much more, through the use of the internet. This isn’t just limited to mobile phones but applies to pretty much all technology that comes after. From TVs and speakers to wristwatches, everything is slowly being interconnected via the internet.

The interesting thing about a lot of this new technology is that it is being developed and controlled by only a handful of companies, who on average have more resources than a lot of more traditional companies combined. On top of that, they have oriented themselves around their users unlike any company before, always working to provide them with the best experience they can come up with. It’s almost like they know every user is a click away from moving on to the next thing, yet something keeps those users coming back. It sometimes looks hopeless competing against them, so what do we do?

Software is eating the world

Marc Andreessen wrote back in 2011 that “Software is eating the world”, which actually gives us some hope. Software allows us to compete again and perhaps tempt those users away. Remember, just like the competition, we are only a click away too. But what is going to get those users to click something new?

We need to be able to try different ideas and get them in front of our users to start seeing what works and what doesn’t, based on real data and not just what people think is working.

Leadership to build Collaboration and Purpose

However, to be able to start doing that we need to start working better together as software teams. Simply having the best developers is not going to cut it. Research from Google’s Project Aristotle showed that this wasn’t the case: five team dynamics were better predictors of well-functioning teams. These being psychological safety, dependability, structure & clarity, meaning and impact.

Side note: Psychological safety is all about leadership and interpersonal risk taking and not just saying this is a safe space. Read The Fearless Organisation to learn more.

Once we can collaborate more effectively we can build psychological safety, dependability and structure into the team. From there we can start working on the team’s purpose. What is the team’s reason for being, what are they trying to accomplish, how will this help the organisation? Purpose is all about providing the team clarity, meaning and impact. But simply asking people to collaborate and giving them a purpose isn’t going to build the team dynamics set out earlier. It’s going to need leadership to build the type of collaboration we need, with those characteristics. Leaders will need to be more hands-on: demonstrating interpersonal risk taking, building dependability between team members and setting up the team’s initial structure.

What is quality?

For argument’s sake let’s say you’ve been able to get some way towards doing that. Now what? Do the users of your system just magically start appearing? Team collaboration is only one part; now you need to start iterating on the system. You could just get the team to build whatever they think is a good idea and get them to do it as fast as they can. The risk is releasing half-baked systems that end up causing you – or worse, your users – more problems than before. The thing is, users tend to want a quality product, but quality is subjective and so means different things depending on your viewpoint. From the lenses of quality:

For your Organisation quality could be whatever helps them reach their targets for that quarter or year.

For your Product owner their measure of a quality product could be a system or feature released on time.

For your Team it could be a system that they can build, deploy and maintain easily.

For your Users, well it could be something as simple as it just works. – Lenses of quality

Building Quality in via Testability

If quality means different things to different people, how can you build quality into a product? By building in testability instead. What testability does is start to make your system objective: instead of people saying the system feels easier to work with or they think it works correctly, you use tests to back up those feelings. Those tests have to be built into the system during development. It is not something that can be added on easily after the fact, especially by people who haven’t built the system in the first place. Testability is not about testing the system end-to-end but piece-by-piece, each piece being a specific type of behaviour the system provides, tested in isolation from the other pieces. The scope and definition of the behaviours should be decided on by the team collaboratively. Unit testing can help with testing like this, but everyone has a different opinion on what a unit is and therefore very different approaches to testing one:

What do the unit tests test? Everyone seems to have a different opinion not only on what makes a unit but also on what makes good and bad unit tests.

This is why I have a problem with calling them unit tests, and why I outlined how you could define them by calling them code tests first and then building a team understanding of what they are.
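As a sketch of what a behaviour-scoped “code test” might look like, each test below pins down one piece of behaviour the team has agreed on, named in the team’s own words. The discount rule and test names are invented for illustration:

```python
# Hypothetical sketch: "code tests" scoped to one behaviour each, rather
# than to a class or function boundary.

def apply_discount(price: float, loyalty_years: int) -> float:
    """Behaviour: loyal customers (3+ years) get 10% off, capped at 50."""
    if loyalty_years < 3:
        return price
    return price - min(price * 0.10, 50.0)

# Each test names the behaviour it pins down, in the team's own words.
def test_new_customers_pay_full_price():
    assert apply_discount(100.0, loyalty_years=1) == 100.0

def test_loyal_customers_get_ten_percent_off():
    assert apply_discount(100.0, loyalty_years=3) == 90.0

def test_discount_is_capped_at_fifty():
    assert apply_discount(1000.0, loyalty_years=5) == 950.0

for test in (test_new_customers_pay_full_price,
             test_loyal_customers_get_ten_percent_off,
             test_discount_is_capped_at_fifty):
    test()
print("all behaviour checks passed")
```

Whether the function under test counts as one “unit” or three is exactly the argument these names sidestep: the tests are scoped to agreed behaviours, so the team’s shared vocabulary, not the code structure, defines their boundaries.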

This type of testing is what I think gets us towards what W. Edwards Deming (1900–1993), known in his time as the leading management thinker on quality, meant when he said we should

“Cease dependence on mass inspection. Build quality into the product from the start” – W. Edwards Deming

So do we just need to work better together and build in testability to solve all our quality issues?

Software ate the world, so all the world’s problems get expressed in software

It’s been nine years since Marc Andreessen wrote Software is eating the world. Ben Evans (an analyst who worked for Andreessen) recently said in his presentation Standing on the shoulders of giants:

“Software ate the world, so all the world’s problems get expressed in software” – Ben Evans

You can build in all the quality measures you want, but that doesn’t address any of the problems we’ve intentionally encoded into the system. You are going to need someone who understands how the team works (and how the problems are encoded into the system), knows how the system is deployed into the real world (and the domains in which it is used) and who those users are (and what they expect of it). That someone already exists within teams, but most teams have simply been using them as a safety net to check their work and – to channel my inner Deming – “carry out mass inspections of our systems”. We’ve called them Testers, but maybe it’s time we start to think of them as something else?

Software levels the playing field again and allows us to innovate in ways that no tool before it has ever allowed. However, to do so we need to work collaboratively as teams to build testability into our software systems, and testers to raise awareness of what quality is for our products. From this foundation we can begin to compete again and really start offering our users that temptation to click something new.