When it comes to innovation and growth, there are two fundamental questions:
- "What should we create?"
- "How do we know it will succeed?"
In large companies, the first question is increasingly answered by design-driven innovation, most often referred to as 'design thinking'. The process boils down to a fairly common set of methods executed in a roughly consistent order:
> Conduct qualitative user research to build empathy
> Synthesize observations into insights
> (Re)frame opportunity areas
> Brainstorm with cross-functional teams
> Prototype ideas and refine them through user feedback
> Define vision for a new offering
> Pitch it to company leadership
Using this now widely-adopted process, large organizations have radically improved their ability to come up with compelling answers to Innovation Question #1: 'What should we create?'.
Although this is a fantastic stride forward, far too many promising ideas fail to make it into the market. Of those that do, the vast majority fall short of expectations (some estimates place the new product failure rate at 90 percent or more). Why is successful innovation so rare?
First, we have to acknowledge that innovation is inherently uncertain and that design-driven methods are not a panacea. But there's another problem lurking in plain view: a fatal but mostly overlooked bug in the operating system of corporate innovation. We believe that companies have no idea how to answer Innovation Question #2. When project sponsors inevitably ask, 'How do we know this idea will succeed?' today's most widely accepted validation tools simply can't provide good answers.
The problem is that these tools were not designed for uncertain and ambiguous situations. By definition, this is what you're facing if you're trying to do something innovative. Bold new ideas without clear precedent are inherently unpredictable. If the success of a new idea can be predicted based on what we already know (i.e., extrapolated from the past), then traditional validation techniques are great. If not, these tools are totally inappropriate.
'You can't put into a spreadsheet how people are going to behave around a new product.' - Jeff Bezos
In order to validate new ideas and mitigate risk, large companies employ a number of methods, including sales forecasts, internal rate of return (IRR), net present value (NPV), and consumer surveys. Often these methods rely on market research and data modeling (e.g., BASES) to predict a new product's odds of success. These predictions are fairly reliable, given two specific conditions:
- The new product is clearly related to something consumers already know.
- It delivers tangible benefits that are easy to understand without direct experience.
Most new product and service concepts fit these criteria nicely: a new flavor of potato chip, a softer paper towel, a faster way to withdraw cash at the ATM (Figure A). For these types of ideas, consumer surveys are great tools for answering the 'How do we know?' question. But when a new concept is substantially different from offerings already on the market, and/or a big part of its value is tied up in the experience of using it, traditional validation tools are woefully inadequate.
The Swiffers of the world can't be accurately simulated with quantitative models and other traditional validation tools, because those tools measure what people say, not what they do. To many in the corporate innovation space, this fact is obvious but irrelevant: 'Of course we use imperfect tools; the product doesn't exist yet!' And therein lies the root of the problem: the new product does not yet exist. It's just an idea, and asking people what they think about an idea is a lot different from seeing how they react to a new item on the shelf or a new service experience.
It's a catch-22. We can't know how people will react to a new innovation until it's real, and we don't want to invest millions of dollars to make it real until we know how people will react. This is why the 'How do we know?' question has remained such a thorny issue for innovators.
One approach, running small real-world product experiments, has gained momentum in the startup world, spreading through the vernacular of Lean Startup, minimum viable products, and customer development, but it is still relatively unknown in most large companies. At gravitytank, we refer to these real-world product experiments as Micro Pilots.
A Micro Pilot is a quick, inexpensive experiment that allows us to validate ideas and business models with real consumer behavior. Every Micro Pilot is custom-built to test a very specific hypothesis. It is a fast and efficient way to pressure-test crucial elements of a new offering (typically the riskiest and most uncertain elements) before committing to a bigger investment. At gravitytank we still use traditional concept validation techniques when appropriate, but we have found that Micro Pilots are an extremely valuable addition to our toolset, and a better way to de-risk innovation in many situations.
A handful of large companies have started to transition towards a culture of experimentation. But the vast majority of corporate innovation teams are shackled to techniques that snuff out the most ambitious and potentially groundbreaking ideas. In the typical corporate innovation process, the final pitch marks the end of prototyping and iteration: a handoff from the 'idea people' to the 'development people'. According to this philosophy, idea validation is all or nothing (pass = invest; fail = kill it). But the notion that you can simply run one test to see if something is a good idea is deeply flawed when it comes to innovation. Just because an idea fails its first test doesn't mean it's not a good idea. Many of the biggest innovations of our time went through major iterations before finding just the right mix of features, benefits, positioning, and pricing.
'We're a collection of dozens of internal startups. This is now the standard practice: How many weeks after having the idea can you get a version into users' hands that tests key hypotheses? We call it leadership by experiment.' - Intuit Founder Scott Cook
'Make a little. Sell a little. Learn a lot, and fail cheap.' - P&G Chairman-CEO Durk Jager
The first incarnation of Starbucks had no chairs, baristas in bowties, menus written mostly in Italian, and nonstop opera music. Twitter started out as a platform for creating and sharing podcasts. YouTube was originally intended to be a video-dating site!
Successful innovations are most often the result of continued evolution: multiple iterations that gradually nudge a promising idea toward success.
Micro Piloting allows innovation teams to systematically refine and validate their hypotheses until they are confident that they have a sustainable, scalable business model. With Micro Pilots there is no final pitch, no politicking, no committee that decides the fate of an idea on a whim. For teams empowered to use Micro Pilots to test their ideas 'in the wild', the only objective is to learn how consumers actually behave, iterating and shaping their way to success over a series of small experiments. And when sponsors ask Innovation Question #2, 'How do we know it will succeed?' there's no need to speculate. We know because we've seen it work, and we have the data to prove it.
Several forms of experiments are starting to coalesce into a preliminary toolkit for startups and progressive corporate innovation teams. The following Micro Pilot examples are not about efficiency, scale, or profitability. They are targeted experiments that test the underlying business model hypotheses about a new product or service concept.
Crowdfunding
Platforms like Kickstarter have revolutionized the way inventors and entrepreneurs secure funding to pursue their vision. Crowdfunding has also become a fantastic way for startup teams to test the waters to see if there really is demand for a new product. With a few images, a description of your idea, and maybe a video, you can ask consumers to back your project with real money. These campaigns are a great proxy for consumer demand because you're asking people to vote with their wallets.
Chicago-based startup Scout Alarm is disrupting the home security market with a simple, customizable, and design-forward security system. Founders Dan Roberts and Dave Shapiro ran a crowdfunding campaign to see if they could get customers to express interest in buying their system. With a rough prototype and an amateur product video, the team secured more than 1,500 pre-orders. This was enough to give the team (and their would-be investors) the confidence they needed to move into final development and production.
False door
You've probably been part of a 'false door' test without knowing it. A false door is typically a simple web page that describes a new product or service and involves some kind of call to action, often a button that asks us to 'buy now' or 'sign up for the free beta'. When we click that button, we get a message that politely thanks us for our interest but informs us that the product is not yet available. Behind the scenes, the people running the test are tracking how many people are finding their way to the page and how many of them are clicking that button. Many false doors also collect email addresses to build a prospect list for when the product goes live.
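The bookkeeping behind a false door is simple enough to sketch in a few lines. The class and numbers below are purely illustrative (not a real library or real campaign data): count page views, count clicks on the not-yet-real 'buy' button, and collect emails for a prospect list.

```python
# Illustrative sketch of false-door bookkeeping. The class name, methods,
# and traffic numbers are all hypothetical examples.

class FalseDoor:
    def __init__(self):
        self.views = 0        # visitors who reached the landing page
        self.clicks = 0       # visitors who clicked 'buy now'
        self.prospects = []   # emails collected for the launch list

    def page_view(self):
        self.views += 1

    def click_buy(self, email=None):
        """Visitor hit the button; log interest, show the polite message."""
        self.clicks += 1
        if email:
            self.prospects.append(email)
        return "Thanks for your interest! We're not quite ready yet."

    def click_through_rate(self):
        return self.clicks / self.views if self.views else 0.0

# Simulated traffic: 200 page views, 3 button clicks, 2 emails captured.
door = FalseDoor()
for _ in range(200):
    door.page_view()
for email in ["ann@example.com", None, "bo@example.com"]:
    door.click_buy(email)
print(f"{door.click_through_rate():.1%} clicked")  # 1.5% clicked
```

The click-through rate is the proxy for demand; the prospect list is a bonus that turns the test into the start of a launch audience.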
One early example of a false door was Redfin, the innovative online real estate brokerage. When the company was in its infancy, most (possibly all) people who clicked the 'I'm interested' button on Redfin's landing page were told that the service was not yet available in their area. Whether it was available in any area was irrelevant. The team was able to measure interest across the entire country before investing millions to build and scale the business.
Concierge MVP
The term 'concierge MVP' (minimum viable product) comes straight from the Lean Startup movement. The basic idea is that you deliver, by hand, whatever the ultimate product or service would do. Simulating the experience allows you to test the central value proposition with customers without having to build something automated and scalable.
The founders of dress-sharing service Rent the Runway developed three different concierge MVPs to test the central hypothesis of their business model: that women would rent a dress, even if trying it on was not an option. First, they purchased several dresses at retail and offered them in person to Harvard undergrads. Women could try them on and, if they wanted, rent a dress for the night. This test helped the team gauge acceptance of the rental model as well as get a sense for preferences around color, cut, brand, and price point. The founders ran a second test as a trunk show, but eliminated the try-on option. For their third test, they took orders from a PDF email showing dress options.
Wizard of Oz
Like a concierge MVP, a Wizard of Oz experiment simulates the experience of a new offering without automating it. But in this case, the customer has no idea that the service is not automated. In fact, there are people behind the curtain pulling levers and pushing buttons to make the service work.
Facts over opinions
Don't get hung up debating the merits of an idea. Replace opinions with facts by moving quickly from concept to experiment.
Target the riskiest assumptions
Focus on the most uncertain and riskiest parts of the business model. For instance, if your model assumes a specific user acquisition cost and conversion rate, conduct a small marketing campaign to see if you can hit the required numbers.
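The acquisition-cost check described above is simple arithmetic once the test campaign has run. The function and figures below are a minimal sketch with invented numbers; your own targets would come from the business model you are testing.

```python
# Minimal sketch: compare a small test campaign's results against the
# acquisition assumptions baked into a business model. All numbers are
# hypothetical examples.

def evaluate_campaign(ad_spend, clicks, signups, target_cac, target_conversion):
    """Compute observed conversion rate and customer acquisition cost,
    and check both against the model's targets."""
    conversion_rate = signups / clicks   # share of clickers who signed up
    cac = ad_spend / signups             # cost to acquire one customer
    return {
        "conversion_rate": conversion_rate,
        "cac": cac,
        "hits_targets": cac <= target_cac and conversion_rate >= target_conversion,
    }

# Example: $500 of ads produced 2,000 clicks and 40 sign-ups, against a
# model that assumes CAC <= $15 and conversion >= 1%.
result = evaluate_campaign(ad_spend=500, clicks=2000, signups=40,
                           target_cac=15.0, target_conversion=0.01)
print(result)  # cac $12.50, conversion 2% -> hits_targets: True
```

If the observed numbers miss the targets, that is not necessarily a dead idea; it is a signal to revise the assumption or the offer and run the next experiment.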
Make it tangible
Potential users need to react to something tangible. Build lightweight versions of the offering to test with real consumers 'in the wild' either by making it (build something that you intend to use later) or faking it (use a smoke-and-mirrors approach to simulate part of the experience).
Build as little as possible
Don't build new assets just because you can. Prototypes are wonderful, but only if they help you learn.
Skip the accounting
Focus solely on the amount of learning your test will produce for the investment of time and money. There's no reason to optimize for cost, scalability, or profit margin before you know if people want what you are selling.
Put it out in the world
Behavioral economics research has demonstrated that people are really bad at predicting their own behavior, so don't ask this of them. Put your offering out into the world and see what happens, gathering empirical evidence along the way. Someone choosing to buy your product is the ultimate feedback.
Define your metrics
One of the most important parts of your experiment design is defining what metrics will validate your hypotheses (e.g. trial rate, viral coefficient, user acquisition cost), and also how to capture those metrics (e.g. web clicks, in-store interactions, behavior over a period of time).
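Once interactions are captured, turning them into the metrics named above is a small data-reduction step. The event log and spend figure below are invented for illustration; a real pilot would pull these records from whatever capture mechanism the experiment used.

```python
# Toy sketch: derive trial rate and user acquisition cost from a captured
# event log. The users, actions, and ad spend are hypothetical examples.

events = [  # one record per tracked interaction
    {"user": "a1", "action": "visit"},
    {"user": "a1", "action": "trial"},
    {"user": "b2", "action": "visit"},
    {"user": "c3", "action": "visit"},
    {"user": "c3", "action": "trial"},
]
ad_spend = 120.0  # assumed cost of driving these visitors

visitors = {e["user"] for e in events if e["action"] == "visit"}
trials = {e["user"] for e in events if e["action"] == "trial"}

trial_rate = len(trials) / len(visitors)  # 2 of 3 visitors tried it
cac = ad_spend / len(trials)              # spend per trial user
print(f"trial rate {trial_rate:.0%}, CAC ${cac:.2f}")  # trial rate 67%, CAC $60.00
```

Deciding up front which of these numbers would validate or kill the hypothesis keeps the experiment honest; computing them afterward is the easy part.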
Rinse and repeat
Don't expect to answer all of your questions with a single Micro Pilot. Use what you learn from each Micro Pilot to refine your hypotheses and run new experiments.