Experiment your way to success
You should run experiments to assess the value of an idea. An idea often comes as a solution to a user’s pain/need.
Why experiment? 🔮
In every company I have worked in (from startups to enterprises), there have been plenty of ideas. The problem is figuring out which idea is the best one, and spending your time on that.
Why can’t we just ask people or interview them? Well, people often do not behave the way they say they do. Instead, validate your idea and learn in the fastest, cheapest way possible.
The traditional way of assessing ideas
Idea > Business Case > Senior Decision > Build > Measure success
In this model, a committee has to decide (with their knowledge and assumptions) if an idea is good after an innovation team presents their desktop research.
We don’t know if the committee members have sufficient knowledge. And neither do they. It is all based on their assumptions and experiences. As markets move fast, their knowledge and assumptions might be obsolete.
Working on a business case early in an idea-assessment process is based on pure guesses. While it is beneficial to keep costs and benefits in the back of your mind, I have written many long and impressive business cases in my time just to satisfy a need for numbers in a scenario about which we knew almost nothing.
The result is that the people working on an idea often get attached to it, making it harder to kill.
The experimental approach
In the experimental approach, we run simple experiments as fast as possible to learn whether something is a good idea. The focus is on testing and failing fast.
Idea > Measure (experiment) > Senior decision > Build
When you have run experiments and gained knowledge about an idea, this is the time to include senior management. Now they have real market data on which to base their decisions.
Focus on outcomes, not outputs
The way we measure performance should reflect the experimental approach: the focus of a product development team should be on outcomes, not deliveries. If the team delivers something on time, on budget, and at quality, but nobody wants to use it, we have wasted time and money.
A development team’s responsibility is not only to deliver code, but to run experiments to understand if something is a good idea, thereby identifying new opportunities and, ultimately, shipping a product that satisfies their customers’ needs.
How to experiment
The outside world often has a much larger effect on metrics than product changes do. Users can behave very differently depending on the day of week, the time of year, the weather (especially in the case of a travel company like Airbnb), or whether they learned about the website through an online ad or found the site organically.
Controlled experiments isolate the impact of the product change while controlling for the aforementioned external factors.
Pro tip: The Sample Size Calculator gives you a way to determine what size audience you need for your tests.
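As a rough illustration of what such a calculator computes, here is a minimal Python sketch using the standard two-proportion power approximation. The baseline rate, effect size, and defaults below are hypothetical examples, not values from any specific calculator:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Approximate users needed per variant for a two-proportion test.

    baseline: current conversion rate (e.g. 0.10 for 10%)
    mde: minimum detectable effect, absolute (e.g. 0.02 for +2 points)
    """
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)           # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# Detecting a lift from 10% to 12% conversion at the defaults
# needs roughly 3,800-3,900 users per variant.
print(sample_size_per_variant(0.10, 0.02))
```

Note how quickly the required audience grows as the effect you want to detect shrinks: halving the detectable effect roughly quadruples the sample size.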
Different types of experiments
Running online experiments is very cheap, so running them early in a discovery process is an advantage. Run a kill experiment as fast as possible to validate your idea without getting too attached to it.
Will users buy it?
This experiment focuses on one thing only: Will users buy it?
To know if customers will buy it (or use it), the experiment must follow certain principles:
- Users have to believe the product/feature exists (users cannot know they are part of an experiment).
- Users have to commit to buying the product/feature (e.g., by giving their email or clicking a payment method).
Example: a simple website with minimal functionality that mimics the full experience.
You can start with a simple proof-of-concept (PoC) experiment. These experiments are allowed to be low fidelity: you don’t need a polished design, and it’s okay to have some friction in the user journey. The point is not to test a flashy feature or idea; the point is to see whether users do what we expect them to (measuring behavior).
Build the door for the feature without actually building the feature, and measure the interest. If enough users click (engage), then it’s worth building the feature.
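A fake-door verdict can be as simple as comparing the click-through rate against a bar you set up front. This is a minimal sketch; the 5% threshold and the counts are hypothetical, and in practice you would pick the bar before running the test:

```python
def fake_door_verdict(impressions, clicks, threshold=0.05):
    """Decide whether interest in a fake-door test clears a minimum bar.

    threshold: hypothetical click-through rate (here 5%) above which
    we consider the feature worth building.
    """
    ctr = clicks / impressions
    return ctr, ctr >= threshold

# 140 clicks on 2,000 impressions -> 7% CTR, above the 5% bar.
ctr, build_it = fake_door_verdict(impressions=2000, clicks=140)
print(f"CTR = {ctr:.1%}, build the feature: {build_it}")
```

Deciding the threshold in advance keeps the team honest: the experiment kills the idea, not a debate after the numbers are in.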
A/B or split testing
The regular version of a product (called the Control) is compared against a modified version (the B Variant, or Challenger) and checked for effects on a company’s guiding metrics (or Overall Evaluation Criteria, OEC). Test with a small percentage of users, usually 1–5%.
The experiment brief
A concise document that paints a clear picture of the problem space and what we’re trying to accomplish.
1. The Business Objective
Describe the business goal / problem / need / opportunity you’re looking to solve.
- Who are we solving for? Who will benefit from this and how?
- KPIs. How will we know if this experiment is successful? Define “Success”: What are the key metrics that we expect this to improve? (e.g. songs streamed, number of downloads, etc.)
- Why is this business objective important? This is where you tie your project to the larger context of the company.
2. The Hypothesis
List your hypothesis and make it measurable, e.g. adding a buy button next to the product will increase conversion by 20%.
Pro tip: Test multiple different versions to find the best one, focusing on breadth rather than depth.
3. Experiment design
The description of the solution/implementation. How will you run the experiment?
- What’s in / out of scope?
- Who needs to be involved?
- What problems do we need to solve? Identify unknowns, areas of risk, and known challenges that need to be resolved before development can begin.
- What is already built? A short description of existing and related features that give context to the new project.
- What future considerations need to be accounted for? Are there future features or business goals that will build on top of this feature? Goal is to not design ourselves into a corner now if we know about something in the future.
Pro tip: The experiment should be fast and inexpensive to run, e.g. “I’ll pick up my phone and call three of them right now to see what they think about it,” or “I’ll add a fake door to see if people are interested in the new feature.”
4. Key findings
List the key findings.
Experimentation is typically used when teams have enough traffic to test empirically and scientifically whether a change will produce the intended effects with a cohort of real users. Teams using experimentation should ensure that they have adequate levels of traffic, that they understand the problem they are trying to solve, and that the impact of their changes can and should be measured through an experiment.