This is a transcript of episode 140 of the Troubleshooting Agile podcast with Jeffrey Fredrick and Douglas Squirrel.

Following last week’s praise of Taylorism, a suitable approach for making complicated problems simple, this week we look at a similar revolution in democratising innovation and creating a learning organisation—by uncovering hidden complexity and removing status as a barrier to discovery of error.

Show links:

Listen to the episode on SoundCloud or Apple Podcasts.

Introduction

Listen to this section at 00:14

Squirrel: Welcome back to Troubleshooting Agile. Hi there, Jeffrey.

Jeffrey: Hi Squirrel.

Squirrel: Last week we praised somebody we don’t often praise–Frederick Taylor–who invented the idea that you could take something complicated and train people to do it in a simple way, which led to software factories. What happens after you’ve run Taylor for a while? What goes wrong?

Jeffrey: You might remember I was inspired to praise Taylor from reading Team of Teams. Looking at Taylorism through the lens of the Cynefin framework, he had a real innovation to take things that were complicated and make them simple. The next natural step in Cynefin is to say, ‘but what about the complex domain?’ Those things that are not the domain of experts. I think people who have come across Cynefin before and have heard about the complex domain go, ‘of course, there’s things that are so complex and have so many interactions, they are not something that anyone in the world knows. And therefore, you need to probe-sense-respond. You need to go out and try things and experiment.’

Squirrel: Things like a self-driving car or the space shuttle with literally billions of moving parts.

Jeffrey: Or from Normal Accidents, things like chemical factories with fluid dynamics and complex chemistries. Inherently complex systems where there’s a lot of feedback loops and you really can’t easily break it apart, or maybe the area of innovation where you can’t go and ask someone ‘how do we create this new thing?’ Because people don’t know. There’s no expert to go ask. That’s the thing, Taylorism as a worldview relies on the expert to break things down to make it simple. If you have something where there is no expertise, where those interactions are not something you can predict in advance, Taylorism doesn’t apply. But that’s not the negative side effect of Taylorism: the negative side effect is social status. Taylorism said it’s the job of management to think. In a vertical hierarchy of people, there are thinkers and doers, the people who are designing this lovely machine of a factory, with human cogs in there, and the cogs aren’t expected to think.

Squirrel: We replicate this in software often with offshore teams given maintenance tasks and precise directions, extremely detailed requirements so they will crank out the mechanical work. It usually does not work out.

Just-in-Time Manufacturing and Higher Standards

Listen to this section at 03:54

Jeffrey: Let’s admit, there are times where people get this to work, where they will through great effort make very detailed requirements and send them to an offshore team and bring it back and have someone test it and eventually deliver something. It’s not like it doesn’t work at all, but it doesn’t work very well. There are better options. This is going from Taylorism to Toyota, because the lean manufacturing revolution really exemplified this change in mindset. Because of circumstances in the country post-World War Two, Toyota wasn’t able to implement the sort of Fordist, Taylorist factory model that was being done in the U.S., so they had to find a new way. What they found was a way to make better use of all these people: rather than have them in place as unthinking cogs, there was huge gain by bringing them into the process. To get their eyes and hearts and minds engaged, to see problems and help solve them. This idea of distributed learning, pushing the learning out to the periphery where the work is being done, is another innovation as significant as Taylorism.

Squirrel: The thing you have to have for that to be valuable is the need for observation, thinking, and creativity at the edges. There does have to be something for the offshore team to find. In the Toyota world it seemed like there wasn’t anything. ‘People have been making cars for a long time, what could possibly be better?’ It turns out that if you send people into the field and factory with the mission to find things that can be improved, they find them. There are hidden complexities in the factory. The classic method was that your suppliers sent you loads and loads of new wing mirrors and tyres and other supplies, and you kept a big warehouse full of them. Toyota’s innovation was to tell the suppliers to move their factory two doors down the road, and send very small, frequent batches. They saved massively on their supplier costs and their warehouse management and so on. You wouldn’t have guessed that was a problem, it didn’t look like a problem. Ford and others who made cars didn’t think of it as a problem, but Toyota was able to outperform them and out-innovate and improve in quality by finding those kinds of issues, those kinds of hidden complexities.

Jeffrey: That example gets to the heart of one of the weaknesses of Taylorism, which is a focus on efficiency above all, discounting the ability to learn. In a Taylorist view it’s more efficient to have these big batches all at once because we get economies of scale. But if there’s some defect in that batch, then you have a big batch of stuff that’s broken. What just-in-time manufacturing does is optimise not for efficiency, but for learning. If you’re doing small batches and you discover a problem, then you get much faster feedback. You can bring that learning into the system much faster.

Squirrel: You can run two doors down the road and tell them to stop making the tyres with the holes in them.

Jeffrey: We’ve talked before about The Art of Action by Stephen Bungay, and he mentioned von Moltke and the idea of friction preventing our plans from going the way we expect, which Bungay describes as three gaps. The third of these is the effects gap: the gap between what we expect our actions to achieve and what they actually achieve. The effects gap can only be seen by the people who are doing that work. In The Art of Action, the overall theme is a mission command structure, where you have clear alignment of purpose that allows people at the edges to adapt based on what they see. They’re not frozen, they don’t need to send back to headquarters, they have enough knowledge to go ahead and work around problems.

Creating Zombies

Listen to this section at 10:07

Jeffrey: This is something slightly different. When we come across these items that are unexpected, our goal is not merely to work around them, it’s also to learn from them. There’s a lesson there, a gap in our knowledge, in our understanding, and in our ability to predict. This is really well articulated by Dr. Steven J. Spear of MIT. He recently had an article and a YouTube video saying ‘don’t be a zombie organisation.’ Typically the classic zombie is a slow-moving, shambling thing. It’s not that any given zombie is all that dangerous, it’s the mass of them, so many of them that you’re eventually overwhelmed. In his analogy that’s what happens with these small problems in most organisations: unexpected things turn up in these effects gaps, we work around them in the moment, and we miss the opportunity to learn from them. It’s the high-velocity organisations that don’t tolerate the unexpected. They see that when there’s a deviation from expectation, there’s something they should do differently. How many people have heard ‘this is a flickering test’?

Squirrel: ‘I run it and every fourth time it fails. But we know it’s just something funny about that test. It’s OK.’

Jeffrey: ‘Let’s just run it again. Oh, look it passed. It’s fine. Let’s deploy.’
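(Editor’s note: as a concrete illustration–not something from the episode–here is a minimal sketch of how a ‘flickering test’ often gets papered over, assuming pytest with the pytest-rerunfailures plugin; the code under test is hypothetical.)

```python
# A hypothetical flaky test: the code under test has a hidden timing or
# ordering dependency, so roughly one run in four takes the unexpected path.
import random

import pytest


def charge_customer():
    """Stand-in for real code with a race condition we don't yet understand."""
    return "error" if random.random() < 0.25 else "ok"


# The "just run it again" workaround: pytest-rerunfailures quietly retries the
# test up to three times, so the build usually stays green.
@pytest.mark.flaky(reruns=3)
def test_charge_customer():
    assert charge_customer() == "ok"
```

The retry keeps the pipeline green, but the gap in understanding that makes the test fail every fourth run stays unexamined.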

Squirrel: Normalisation of deviance, to use the phrase that Diane Vaughan used when analysing the Challenger disaster.

Jeffrey: That’s what happens in most organisations. It’s ironic, because the way humans are better than robots is that they can see these problems as they happen and work around them. Because they want things to succeed, they become very proficient at working around the problems that arise. However, they very often have a focus on delivery, thinking their job is just to get this thing done. This is when the danger comes, as we have so many of these little workarounds out there, and each workaround represents a gap in our knowledge, a gap in our understanding. Eventually all of those gaps align in just the wrong way, and that’s when we have disaster.

Squirrel: We have these interfaces that we’ve created, continuous integration tools that show the tests in beautiful colours, green and red, pass and fail. Doubtless somebody has created an extension to mark ‘flickering.’ This simplification fabricates confidence. ‘All I do is I push this button and then the release goes out. And I can push this other button that says ignore the tests. This is simple. I don’t have to worry and look for errors. I can just push the buttons.’ Except, wait a minute, it didn’t work. Now what do we do? I’m reminded of the movie WALL-E where the humans had gone out in space and created all these systems around themselves that made life completely simple. Then the system breaks down and everyone says, ‘well, I don’t know, where’s the button that you push to make the system work again?’ They suddenly have to operate in the hidden complexity of the system they’ve created. That happens to us a lot as well, especially in these complex systems for deployments and creating software that we think we can hand off as being very simple and operate in a Taylorist way, when we can’t actually.

Jeffrey: Disaster can happen. At the same time, disaster doesn’t usually happen.

Squirrel: That’s the danger of it. Most of the time when you release with a flickering test, it is fine.

Annoyance as a Signal

Listen to this section at 15:42

Jeffrey: In the automotive world there are disasters: the exploding Ford Pinto, the GM ignition switch that would shut off the engine and disable the airbag while driving. People die in crashes as a result of defects that go back to this focus on efficiency. There is that disaster aspect when these problems happen. However, what is of more interest to me is not just the ability to avoid disaster. It’s so much better! It’s so much more fun if we remove these frictions from our day-to-day life. As the worker, we have to live with these frictions all the time. They’re annoying, they get in the way of our flow and our enjoyment of our work, every time we have to work around them.

Squirrel: The annoyance is a useful signal telling us that we should pay attention to it.

Jeffrey: The Taylorist view would say as long as we’re making the delivery, that’s what matters. If the workers have been indoctrinated in that, they learn to live with those frictions. ‘Our job is to get stuff out. That’s good enough.’ And I think we can aim higher. If we want to be a really high-performing organisation, it isn’t good enough. The status of the workers needs to be higher so they can say, ‘this is annoying. This gets in my way.’ We can’t allow anyone in our organisation to suffer routine friction, routine pain. We should look at each other and say ‘there’s something here we don’t understand. If we understood it, this would all be smooth, closer to effortless.’ That should be our goal. If we want to be high-performing, if we want to be like Toyota, a world-recognised leader in excellence, then it’s about having higher standards. To achieve those higher standards means elevating the status of the people doing the work so that their experiences matter. What they see matters. We’re willing to learn from them and they’re empowered to speak up. When we can tap into that, then we have a transformation in productivity that’s on the scale of what happened with Taylorism. This is what we mean when we talk about going from Taylorism to Toyota, when we get beyond Taylorism: getting everyone’s brain involved. Famously, people in a Taylorist system hate it. Wherever Taylorism was introduced, there was labour unrest. It’s the classic alienation of the worker from their work. We can do better, and that’s what I hope you will take as your aim: to look at the human element here and say, ‘let’s use our annoyance and our ability to spot friction and solve problems, to give ourselves much better results and a much better workplace to live in.’

Squirrel: Thanks, Jeffrey.

Jeffrey: Thanks, Squirrel.