The Three Types of Innovation Failure That Are Intolerable - Scott Anthony - Harvard Business Review
I read with interest David Simms' recent post about the power of positive failure. I, of course, agree with the general perspective: given the probabilistic nature of innovation, failure isn't always a bad thing, and, all things being equal, you'd support someone who has tried, failed, and learned over someone who has never tried.
The interesting thing to me is that this isn't a particularly new perspective. Failure has long been a badge of honor in Silicon Valley, and thought leaders like Henry Mintzberg, Rita McGrath, and Tim Brown have noted how failure is an essential part of successful innovation. Yet in most organizations, a fear of failure persists.
I've argued that part of this is an incentives problem. Too frequently people reward (or punish) outcomes when they should reward (or punish) behaviors. I suspect another part of the problem is that we just don't have a good way to categorize "failure."
In reality, there are three types of failures that bother me:
- When someone knowingly does the wrong thing. For example, intentionally hiding negative market research data to get senior management approval for a pet project. This is more common than you might think.
- When someone could easily have discovered that they were doing the wrong thing. I see this happen in corporations all too frequently, particularly those that are trying to create new revenue streams or use unique go-to-market approaches. The right phone call or the right research could have quickly highlighted a flaw in a plan, but an internal bias led to action without investigation. I remember, a couple of years ago, counseling a team that had built a seemingly solid plan to sell to universities. My guidance was pretty simple: pick up the phone and call some people who had sold to those universities. That simple activity highlighted a flawed assumption the team had made about the speed of the sales cycle. Had the team executed without that simple research, it would have been a punishable mistake.
- When someone spends a lot of time and money researching something that can only be learned experientially. Some things are simply unknown and unknowable before the fact. How would people use a service like eBay? Would people change their cleaning habits if they had a simple, easy-to-use product like Swiffer? What would people pay for an "app"? These questions could only have been answered through action. When people spend forever researching them, it is a critical mistake because they've wasted resources and, more importantly, time.
On the other hand, a tolerable mistake is learning, in a resource-efficient manner, that what my colleagues term a "deal-killing" assumption is false. In fact, I wonder how different things would be if leaders asked innovators to frame negative hypotheses. In other words, imagine setting a metric such as: "We will shut this project down in 90 days because we will have sold to fewer than 100 customers."
If it turns out you were "right," then you celebrate effective learning. If you are positively surprised, you celebrate as well.
I'm not a scientist by training, but it seems that this approach of trying to disprove a "null hypothesis" might be a way to change the tenor of the debate by turning what we have historically viewed as failure into a good result. What do you think?