Before you can design a good experiment, you need to decide what to test first by extracting the riskiest assumptions your Plan A is based on. These assumptions can relate to any part of your business model.
During workshops we’ve helped many entrepreneurs set priorities and discover their riskiest assumptions using Dan Toma’s HiLo diagram and exercise. It’s a quick and effective way to get a visual overview of which assumptions pose the most risk to your plan, and thus what to test first. This is what it looks like in our tool, Dispatch:
Here’s a quick step-by-step process for using the HiLo grid to discover your riskiest assumptions and decide what to test first.
One of the core elements in the Lean Startup is the Build-Measure-Learn loop (BML). This is what the original loop looks like:
Eric Ries’ Build-Measure-Learn loop
But in order to run a BML loop effectively, you need to plan for it in **reverse** order. Assumptions about your Plan A are what you need to get going:
As such, the first real step in preparing your startup experiments is to formulate assumptions and hypotheses. The best way to do so is to pull them from the canvas you’re using for business modelling:
Note: if you haven’t filled in any canvases, this exercise becomes substantially harder. We’d recommend producing a Business Model Canvas, or even multiple ones if you have several options. Additionally, it really helps to design a value proposition for your early adopters. You can use the Value Proposition Canvas to do so.
The next step is to formulate the hypotheses you want to test. These are more specific assumptions that you can test with an (online) experiment and a measurable outcome.
We recommend formulating these hypotheses in a falsifiable manner, so it’s possible to measure whether an experiment passes or fails. Here’s a template:
We believe that [target group] will [show behaviour / display interest in x], because [reason].
Edited example from Eric Ries’ The Leader’s Guide community:
“One hundred percent of American parents between the ages of 25 and 30 who have an annual income over $100K will want to share photos with their parents and siblings, because they lack the time to visit them often.”
An example from Firmhouse itself:
“Entrepreneurs in The Netherlands who have no skills to build their own website will sign up for Airstrip, because they look for website builder tools online when they want to quickly build something to test a new idea and turn it into a business.”
For this experiment we built a landing page and advertised our service on Google AdWords. We measured how many sign-ups we’d get for 500 euro of advertising spend in a week, so we could calculate the customer acquisition cost and compare it to the price customers were willing to pay for the service. The hypothesis would fail if the number of sign-ups was too low to justify the marketing spend.
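To make that pass/fail decision concrete, here’s a minimal sketch of the calculation. Apart from the 500 euro weekly ad spend mentioned above, all numbers (sign-ups, price, payback period) are hypothetical and only illustrate the kind of threshold you’d agree on before running the experiment.

```python
# Hypothetical numbers to illustrate the pass/fail calculation for a
# landing-page experiment. Only the 500 euro weekly ad spend comes from
# the example above; everything else is made up.
ad_spend_eur = 500            # AdWords budget for one week
signups = 20                  # sign-ups measured on the landing page (hypothetical)
willingness_to_pay_eur = 29   # monthly price early adopters say they'd pay (hypothetical)
payback_months = 3            # how fast we want the acquisition cost earned back (hypothetical)

cac = ad_spend_eur / signups                          # cost to acquire one sign-up
max_acceptable_cac = willingness_to_pay_eur * payback_months

print(f"Customer acquisition cost: {cac:.2f} euro per sign-up")
if cac <= max_acceptable_cac:
    print("Pass: acquisition cost is justified by what customers are willing to pay.")
else:
    print("Fail: sign-ups are too expensive to justify the marketing spend.")
```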
By the way, a falsifiable hypothesis can test multiple assumptions at once. Check out how Buffer tested their riskiest assumptions: they tested their value proposition, channel, and willingness to pay in a single experiment.
Disclaimer: don’t force yourself into this process. If you can’t do this in a logical and natural way, it probably means you need to do more generative research. Consider interviewing customers or early adopters as part of a discovery process. Such experiments, with a less quantifiable learning goal, can also be logged in Dispatch.
The next step is to put your list of hypotheses in order of priority. There are a couple of ways to do this, but the bottom line is that you want to focus on your riskiest assumptions first: the assumptions you know little about, but that would have a big impact on your Plan A if they turn out to be false.
This is one of the hardest things to tackle. More often than not, these assumptions are almost impossible for founders to spot themselves, because they sit in their blind spots.
Also, assessing risk is genuinely hard. However, a simple but effective exercise is to make a table and score your assumptions on two dimensions: how big their impact on your plan is, and how certain you are that they are true.
It will look a bit like this:
The outcome of this exercise might very well be a long list of equally important-looking assumptions. You might also overlook one that is obvious to an outsider or industry expert. Still, by stacking them as pictured above, you’ll have some guidance on what to validate first: the assumptions in the top-right, i.e. high-impact assumptions you’re unsure about.
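As an illustration of the exercise (not a prescribed method), here’s a minimal sketch that scores and ranks assumptions the way the grid does. The example assumptions, the 1–5 scales, and the scores are all made up for the sake of the example.

```python
# Minimal sketch of the scoring exercise. The assumptions and their scores are
# made-up examples; the 1-5 scales are just one way to fill in the grid.
assumptions = [
    # (assumption, impact on Plan A if false: 1-5, certainty it's true: 1-5)
    ("Early adopters will pay a monthly fee for this",  5, 2),
    ("Online ads are a viable acquisition channel",     4, 3),
    ("Customers want to build the website themselves",  3, 4),
]

def risk_score(impact, certainty):
    """Higher = riskier: big impact on the plan combined with little evidence."""
    return impact * (6 - certainty)

# Riskiest first: these correspond to the top-right of the grid.
for name, impact, certainty in sorted(
    assumptions, key=lambda a: risk_score(a[1], a[2]), reverse=True
):
    print(f"risk={risk_score(impact, certainty):2d}  impact={impact}  certainty={certainty}  {name}")
```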
The next step is to take your riskiest assumptions and hypotheses and design experiments to start testing them.