The Minimum Viable Testing Process for Evaluating Startup Ideas

The traditional approach is to do some customer research, throw an MVP out there as fast as possible, and hope it hits. After being early at three startups that achieved over $1M in run-rate in their first six months of going live, Gagan Biyani has landed on an approach that’s quite different.

This article is written by Gagan Biyani, co-founder and CEO at Maven, a company that empowers the world’s experts to offer cohort-based courses directly to their audience. Previously, he was a founder of Udemy and Sprig. For more of his advice on running a startup, subscribe to his newsletter.

The traditional dogma in the startup ecosystem is that you can’t predict whether people will want your product. Instead, you do some customer research, throw an MVP out there as fast as possible, and hope it hits. That’s not my approach.

So far in my career, I’ve been early at four startups: Udemy, Lyft, Sprig and Maven. Three of them achieved over $1M in run-rate in their first six months of going live. I don’t think this is an accident. I generally think this early success could’ve been predicted before a single line of code was written.

That’s because we didn’t start by trying to build the product and test it in the market.

Instead, we started by testing specific hypotheses that we had about a market. We evaluated the veracity of those hypotheses individually using Minimum Viable Tests. Collectively, these tests allowed us to predict whether a market was going to appreciate our product before we even launched an MVP.

There are a lot of definitions of MVPs out there, but I’ll suggest one: An MVP is a basic early version of a product that looks and feels like a simplified version of the eventual vision. An MVT, on the other hand, does not attempt to look like the eventual product. Rather, it’s a specific test of an assumption that must be true for the business to succeed.

In an MVP, you try to simulate the entire car. In an MVT, you are just testing whether the drivetrain is more powerful with an electric motor or a gas engine.

The MVT process has a major impact on how you build a company. MVP methodology says you build an MVP, see how it goes and slowly iterate upon it until it hits product/market fit. Instead, I believe you can more efficiently run a number of MVTs, create a vision for a product that fits a market and then go into a “build phase.”

In the MVP strategy, you have no strategy: You throw things at a wall until something sticks. In the MVT world, you take your time to discover a strategy, but once you have one, you move forward with conviction. By moving forward with conviction, you more appropriately match the realities of startups: It takes two to four years to know if you’re right.

Below, I’ll discuss why traditional MVPs can lead founders astray, outline my 3-step framework for developing a Minimum Viable Test, and share examples from my career. I hope you’ll find this useful, whether you have a startup idea right now or a dream of starting a company someday.

Gagan Biyani, co-founder & CEO of Maven

THE CASE AGAINST THE MINIMUM VIABLE PRODUCT

I like the idea of a Minimum Viable Test because something about the MVP concept leads people to over-build. As an investor or advisor to over 30 companies and someone who has taught a course on how to generate and evaluate startup ideas, I’ve seen founders make dozens of mistakes during the pre-product/market fit phase. Here are a few examples:

First, their vision is bigger than their insight. Product creators like to think of what’s possible: They dream about how introducing their product into the market might change the world. This is an appropriate framing — as long as it comes with a bit of humble pie. New products don’t succeed because of the wide breadth of features they provide. Facebook isn’t successful because it allows people to build groups, host events or post photos of their dogs. Instead, Facebook is successful because of one core insight: People want to connect with their friends and family online. You can’t have 20 insights and be successful — you must have just one.

If you build an MVP, you start to think about the 20 features you might build to make people happy in a market, which takes your eye off the one specific insight that the customer actually cares about. Purity of focus breeds success.

Second, founders over-focus on what the customer says. Customers don’t know what the product should be. It doesn’t matter who they are; this is reality. People (including me!) don’t see themselves clearly — and therefore are blind to what they actually want and how they actually make decisions. The entire field of behavioral economics has been created because of how predictably irrational consumers are. Furthermore, they don’t concern themselves with the future of your industry. They’ll always say they want a “faster horse,” when in reality they may actually want a car. So if you rely on your customers to tell you what to build, you will invariably build incremental improvements instead of delivering a novel breakthrough.

Third, founders get caught up in company-building before nailing product/market fit. Building is secondary to delivering value. It is amazing to me how many people print company swag, come up with a name, hire a team, raise capital or design a logo before they know how they are going to deliver value and to whom. Except when practically required, you should avoid attaching yourself and your identity to titles such as CXO, “founder” or anything else. (Sometimes for the sake of fundraising or hiring, I’ll use the CEO or co-founder title, but only for that purpose. I won’t introduce myself that way in social settings or allow it to infiltrate my personal identity until after the company starts to accomplish something.)

I have a rule: no company swag until the business has at least $250K of revenue or 250K users. Until then, you don’t get to “feel” the benefits of having started a company. You are nothing until you have customers who want your product.

Fourth, the word “product” in MVP implies an experience that has a distinct form. You’ve created the user journey you want your customer to go through, and you’ve narrowed that down to the smallest possible thing you can launch with. In many cases, this smallest possible thing isn’t small. It could entail a login system, a tech stack, a database and sometimes even an admin dashboard. For the user, it involves an onboarding flow and a “customer experience.” This leads to overbuilt MVPs and isn’t really where you should start. You only want to build the login systems and onboarding flows after you’ve proven that you have something you can sell, aka after you have succeeded with a minimum viable test.

Finally, MVPs often make for horrible core products. When you start building a product, you start from a blank screen. Once you start writing code, you start to add technical and product debt. So many startups I know end up spending half of their engineering cycles paying back this debt in years 2-4. Instead, I suggest you run MVTs and then delete the code (better yet, don’t use code at all!). This allows you to start from a clean slate when you are actually building the longer-term vision.

The point is to focus on finding product/market fit during the pre-product/market fit phase, and then to move with conviction when you enter the building phase.

ADD MINIMUM VIABLE TESTS TO THE PROCESS OF CREATING A STARTUP

If I look back on my previous companies, I’ve always started with the same few steps:

  1. Immerse yourself in a new industry.
  2. Use customer development to determine your user’s jobs-to-be-done and how they currently accomplish those jobs.
  3. Identify the promise you think you can make to help a user with their jobs-to-be-done.

After this, many people start building an initial version of the product (MVP) to try it out with some users. This is where I think the mistake lies. Instead of building a full-on MVP, I propose going through the MVT framework:

  4. List the riskiest assumptions that might lead your business to succeed or fail.
  5. Test your assumptions through Minimum Viable Tests.

Repeat these steps until you have learned enough to de-risk your biggest hypotheses. If you do this well and are intellectually honest, you will likely come up with more than one risk. This will require you to run multiple MVTs before you feel confident enough to move to an MVP.

Once you’ve finally tested enough hypotheses to have more confidence about your product viability, then go to the next steps:

  6. Build an initial product to bring all of your insights together and test them with your target customer.
  7. Iterate on that product until you have nailed your product offering, aka “get to product/market fit.”
  8. Scale, baby, scale!

The rest of this article will dive into the MVT strategy, explaining how and when to use it.

WHAT’S A MINIMUM VIABLE TEST?

An MVT is a test of an essential hypothesis — something you must be right about, or else the company won’t stand a chance. For example, with my current company Maven, it’s essential that people find 10X more value in a cohort-based course than in a self-directed asynchronous course. (A bit further down, I’ll explain how we articulated and tested this hypothesis — for now, stay high level with me.)

Minimum Viable Testing involves identifying hypotheses you have about a market and creating tests that focus only on those hypotheses, not on the long-term vision, the customer’s opinions, or building the company or the product. This method forces you to be even more minimal in your initial tests, so that you can save time and have higher accuracy on your eventual initial product.

This philosophy works for technical founders, non-technical founders, and even non-startup companies. Perhaps most attractive: It means you can build a successful company without being technical. In fact, I’ve almost always tested out my ideas before bringing on my technical co-founder. This is valuable because technical team members are extremely hard to attract, and it is far easier when you can say: “I have already run tests and proven that there is demand for my product” instead of “I have a vision for something big.” Engineers like data and proof, not pie-in-the-sky vaporware.

Can you predict success before you launch?

Below are the nuts and bolts of how I design and execute these tests.

THE 3-STEP MINIMUM VIABLE TEST PROCESS

So, you’ve gone through steps 1-3 above: You’ve immersed yourself in a new industry and identified an opportunity. You’ve become best friends with your target customers. You dream like them; you think like them. You know their problems inside and out. Great. Now you’re ideating on a specific solution you think will work to help solve their problem. How do you know if this opportunity is “the one”?

Simple: test it. That’s what I’m talking about in steps 4 and 5 above, which is where my approach differs from what most people expect. The following three steps are a more detailed explanation of how I think about this part of the process:

1. Find your value proposition.

Determine the promise of your idea. Why would users want it? What are you promising them?

  • Focus on actions. This is often driven by customer development, but remember that customers do not always know their desires and needs, nor are they always forthcoming about them. Their actions, however, speak volumes. Find a value proposition that speaks to their actions: What are they already trying to do? How can you help them achieve their goals better than they know they can?
  • Stay away from ideas that are too complicated. Think about Stripe, Airbnb, Dropbox, Uber. They each had a ridiculously simple value proposition. The solution might have been complex or controversial, but the value to the consumer was not. Who wouldn’t want a taxi that arrives on demand in <5 minutes? Who wouldn’t want one line of code to replace days of implementing complex payment processing systems? Find a value proposition that’s a no-brainer.

2. List your risky assumptions.

List the primary risks: why might this not work? What breaks your system?

  • The #1 riskiest assumption is building something people don’t want. Everyone knows this; it is the Y Combinator motto. Yet somehow, about half of the founders I meet do not list “people want this” as a top-3 assumption they’re making about their business.
  • Execution risk is real. Lots of great ideas die because they simply don’t work in reality. I remember hearing a pitch about a cloud storage solution that was a tenth the cost of Dropbox. If it worked, the company would’ve been a massive success. Unfortunately, it was vaporware. It’s important to identify risks related to the feasibility of execution.
  • Marketing. So many founders have a great idea but can’t figure out how to sell it. Second-time founders know that they shouldn’t even bother with an idea if it is not sell-able. Marketing risks force you to face the truth: Do you know enough about your market to know how to sell it and who will buy it? Is there even a go-to-market strategy that can work, or is that the most difficult part of this business (and therefore the part you need to de-risk in your MVTs)?
  • Market size. This is almost impossible to guess at, so many people hand-wave their potential market size and plug in fuzzy numbers. Low confidence is still better than no confidence. I strongly believe you should have a clear understanding of what you would want to see in order to believe there’s a big enough market for what you’re doing. If your product is narrow and you believe it is extensible, then put that on your list of risks.
  • Profit. Almost all startups start with upside-down profit margins. That’s okay, but some companies will never get to positive margins. Selling something for a third of what it costs you to deliver is a fast way to burn a lot of cash and eventually shut down your company. Selling at 80-90% of your delivery cost is more palatable. Force yourself to figure out what price consumers are willing to pay relative to what it costs you to deliver your solution (see the sketch just below this list).
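
To make that margin arithmetic concrete, here is a minimal sketch in Python. The numbers are hypothetical (they are not figures from any of the companies mentioned); the point is simply to compare how much cash you burn per unit at different price-to-cost ratios.

```python
# Hypothetical unit-economics sketch: per-unit profit and gross margin
# at different price-to-cost ratios. All numbers are illustrative only.

def unit_margin(price: float, cost_to_deliver: float) -> dict:
    """Return per-unit profit and gross margin for a given price and delivery cost."""
    profit = price - cost_to_deliver
    margin = profit / price
    return {"price": price, "profit": profit, "margin": margin}

cost = 30.0  # assume it costs $30 to deliver one unit (one meal, one course seat, etc.)

scenarios = {
    "price = 1/3 of cost": unit_margin(cost / 3, cost),     # burning $20 per unit
    "price = 85% of cost": unit_margin(0.85 * cost, cost),  # burning $4.50 per unit
    "price = 120% of cost": unit_margin(1.20 * cost, cost), # finally profitable
}

for label, s in scenarios.items():
    print(f"{label:22s} price=${s['price']:6.2f}  profit/unit=${s['profit']:6.2f}  margin={s['margin']:.0%}")
```

Selling at a third of your delivery cost loses several times more cash per unit than selling at 80-90% of cost, which is why the former is called out above as a fast way to shut down your company.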

3. Test the atomic unit.

Determine whether your idea actually works. Focus only on the “atomic unit” of what you plan to sell. For Google, the atomic unit is a search query. For Amazon, it’s ordering a book online. For Coinbase, it’s an easier way to buy and sell crypto.

  • Pick your risky assumption and test just one at a time. You will almost always get 2-3 risky assumptions tested in one go, but there should always be a primary. If there isn’t, you won’t get conclusive results.
  • Devise a test for that specific assumption. If your riskiest hypothesis is execution risk, test execution by actually trying to deliver the goods or services in as hacky a way as possible. Remember in those cases to evaluate the profit ratio. You’ll learn what is going to be really tough and what is easier than you expected. From there, you can often devise second and third tests to dive even deeper into specific areas of concern. If your riskiest hypothesis is whether people will want your product, do not ask them. Force them to pay for it with their time or their money. If they don’t, then be honest with yourself about why and iterate until you find something people are absolutely in love with.
  • When devising a test, do not build out everything. Focus only on the hypothesis. In the case of Amazon, you don’t need to build a web ordering system, a warehouse and a delivery system to evaluate whether people want eCommerce. Instead, identify your risky assumption: is it whether people actually want to buy books online? Then test just that by building a web page for book buyers. Your solution will help you learn whether your instincts are right. If you build a massive list of books, and the customers hate it — then you know that isn’t the right solution. If instead you build a search form where they can search for a book and customers don’t know what to put in, you know this is a discovery-based business rather than a search-based business. There are so many insights to be had that will provide nuance to any future product you end up building.
  • Pick a clear and specific atomic unit. The more niche the better in this case. You are looking for the smallest possible item that you could distill your product down to. This unit is important because consumers rarely ever buy the value proposition of a company; they buy a specific item that you are selling. Take Amazon. In 1994, nobody said, “Oh, I wish there was a massive store on the Internet where I could go and buy anything I want.” Business people might’ve had that idea, but no consumer did. Instead, consumers said, “I am interested in buying X book that I can’t find in any bookstore. Where can I find it?” In this case, the consumer doesn’t even care whether it’s on the Internet! So the atomic unit test for Amazon could even just be a phone service where you call and they help you find any book you may want.

EXAMPLES IN ACTION: HOW I’VE USED MINIMUM VIABLE TESTS AS A FOUNDER

Let’s dive into some examples of how I’ve used the MVT process.

Maven:

  • Value proposition: The promise of Maven is that a platform for cohort-based courses would dramatically improve the quality of education on the internet.
  • Risky assumptions: People may not be willing to pay 10x more for a cohort-based course than for an asynchronous course.
  • Atomic unit test: The atomic unit is a cohort-based course. How can we test whether cohort-based courses work as quickly as possible?

Wow, that’s a big value proposition. How could you possibly test a marketplace model, a technology product, and a new format for learning all at once?

Instead of trying to test everything with an MVP, I picked just one risk to start. I could have picked so many others: Will people adopt a rev-share business model for cohort-based courses? Will consumers find it valuable to have a centralized library of courses? As I wrote above, your test should always have a primary risk. In this case, the primary risk is a profit question: on a per-seat basis, cohort-based courses are more expensive to produce than video-based ones. So my first MVT was to figure out the revenue-profit ratio of a course: Will consumers be satisfied buying a cohort-based course at a significantly higher price point than video-based courses?

I’m not just looking for a binary answer here. Instead, I’m expecting a nuanced result that would help influence future go-to-market decisions. For example, I may learn that a specific type of customer loves these courses more than others. Or I might find out that the value is in one major part of the course (say, the community) instead of others (say, the quality of the content). This is an art, not a science!

Remember that the goal of Maven is to build software for cohort-based course creators. However, in this test, we chose not to focus on software at all. The risk was about cohort-based courses, so we ran a test that evaluated the course itself, not the software to run the course. This was critical since it dramatically reduced the scope of our initial test.

Solution: I decided to run just one course. I picked a specific area that I knew well and then tried to run a course on that subject. I found a partner who already had a big adjacent business (Sam Parr at The Hustle) and asked him if he would co-teach a course with me. This allowed me to test a course without having to build a marketing machine from scratch. It was a hyper-narrow test that achieved the exact result I was looking for: The course had a 9/10 rating from its students and made over $150,000 in revenue in its first cohort.

I learned a ton, which shaped the future product. Building a community is the hardest and highest-leverage part (we failed at it in this cohort). The price point was a no-brainer, and access to the instructor and the energy in the room were a huge value-add. Student behavior was widely variable across the student body.

Perhaps most importantly, I realized that there was a certain art to community-building and course design that I personally did not have an aptitude for. That’s one of the major reasons I wanted to work with Wes Kao, who is an expert here. I had many other potential candidates for co-founder at Maven, but Wes had the unique capabilities that I lacked. I would never have known I lacked these capabilities if it weren’t for the tests I ran.

Sprig:

  • Value proposition: The promise of Sprig was that a fast, healthy food delivery company would be lightyears better than existing food delivery.
  • Risky assumptions: The risk was that the operations of delivering food would quickly become a nightmare.
  • Atomic unit test: The atomic unit is a delivered meal. How can we test whether we can deliver meals to customers quickly without building a restaurant?

Solution: Use a private chef. We found one on Craigslist, then emailed our friends saying we were going to open up a special dinner service for one night. We asked them to order via Eventbrite and used a map on my living room table to do the dispatch. We recruited drivers from our significant others, friends and a few TaskRabbits. I tracked the drivers via Settlers of Catan pieces that I placed on the map and moved around. Then, I used text messages to communicate the directions to the driver and send customer confirmations. Voilà — we started a restaurant in about 2 weeks.

The goal of this test was to evaluate the operations. It was a success — we understood very quickly that running a delivery service like this was doable but also extremely complicated. We knew that the unit economics were tight but could likely work. Notice that this test did not evaluate many other potential hypotheses. We had no idea if consumers liked it (we were mostly focused on our friends, after all). We didn’t know how to market it. We also didn’t worry about the ordering system, the potential delivery algorithms, and more. So many things were left on the table. The goal was just to prove one thing: that the operations of food delivery could be done via a distributed fleet of cars. We viewed it as a success: in one night, we delivered 40+ meals with just two weeks of preparation. But it doesn’t end there.

For each MVT you run, you should ask yourself again: Now that I’ve proven or disproven that risk, what are other risks I should be considering and testing against?

For Maven, we ran five different MVTs over nine months before we finally shipped v1 of our MVP. The MVTs tested things like: What would it be like to help someone else teach a course (instead of teaching one ourselves)? What is the value proposition for instructors to teach courses? How do we build a community in a more concerted fashion? This enabled us to be incredibly confident that we were onto something. Within four months, we did $1 million in sales.

At Sprig, we ran three different MVTs over six months before we finally launched our MVP. We knew what our offering was and had made dramatic changes before we launched as a result of our MVTs. This helped us feel confident that we could invest in things like a kitchen and a full-time chef. Within six months, we did $1 million in sales.

WHAT COMES NEXT?

The MVTs provide knowledge about your market that helps influence what you do next. In some cases, the next step is to build an MVP and launch. In others, it is to focus on building out only one part of the product and nailing it. I’ll use our two examples to show what I mean.

Path #1: Nail what your customers care about most

In the Maven example, the old MVP dogma would’ve said to build an instructor-facing product: a landing page builder, payment processing, syllabus designer, etc. However, after running our MVTs we no longer needed to assess whether instructors would use such a product. In fact, we knew that they would as long as we could show them that cohort-based courses would earn them money! Surprisingly, most instructors we pitched did not care about the product. They were mostly satisfied with their current setup.

Instead, we learned that instructors cared about three things: 1) students loving their course, 2) attracting more students, 3) being in good company (social proof). Since creators aren’t product builders or engineers, they don’t think about or care about what software we might build; they care about how that software solves problems in their lives.

Now that we were confident in the business, we didn’t need to go and build an instructor-facing MVP. Instead, we realized that if we just got the right instructors on the platform and showed that they could be successful, we could attract other instructors.

Our next step was to launch successful courses and unify them under one platform so people could see what we were doing and want to be a part of it. We still didn’t have a name, website or instructor-side onboarding. To attract our first instructor (Anthony Pompliano), we did launch the basics of a course platform. Students could sign up, pay for the course, and access a student portal with links out to the other products we used (Slack for community, Zoom for live video calls, Google Calendar for the invites).

Instead of shipping a full-fledged product, we focused on adding more instructors onto the platform — and six months later, we’re now working with 50+ instructors.

So far, we haven’t shipped an instructor onboarding system or a landing page builder. We’re creating those things now and are gearing up for a private and public beta of our product. In my view, this is far more full-featured than any MVP. We skipped the MVP entirely and went from our MVTs straight into company-building.

Path #2: Ship the first version of your product

In the Sprig example, we took a more traditional approach. After running our MVTs, we realized that it was necessary to give customers a basic product for them to touch and feel. We needed to test their behavior and see if our hunch that ordering food from Sprig could become a daily habit was right. Also, building a restaurant is hard to do on a one-off basis. Spinning up and shutting down production for our MVTs was a serious cost and we needed to see what a fully-featured product would look like.

So we geared up for a public launch, duct-taping together a first version of the iOS app, building a very simple routing system and spinning up a kitchen that was ready to do a hundred meals per day. The rest is history — the company took off and grew to a $6M run-rate in its first year. Its early success was very much a result of our MVT process; we knew what customers wanted and delivered it.

USING A MINIMUM VIABLE TEST TO SAY “NO”

Some of you might be thinking: Have you ever tried an MVT and then not moved forward? Loads of times. A specific example is a travel startup idea I had. I was considering building a travel advice service where you could connect with locals around the world and have them plan your trip for you.

  • Value proposition: The promise was that consumers would want to connect with a travel advisor to help them plan trips based on local knowledge.
  • Risky assumption: The risk was operational. What would it look like to match travel advisors with customers, and how would we scale a business like this? I felt this was the hardest part: with so many different countries and places, I wanted to see if there was a viable path to building liquidity in the marketplace.
  • Atomic unit test: The atomic unit was a planned trip. I decided I would test the operations by trying to find an advisor and plan a trip myself.

I was going to Southeast Asia and found a travel advisor based in Thailand who seemed like an incredible fit. He planned the perfect trip for me and my girlfriend. We honestly had a magical time. Everything seemed great, except one problem: He did not enjoy it. I saw the job he had to deal with: the logistics and the pain of dealing with us. It felt like there was a lot less leverage here than I thought there might be. Every person is so different and unique, and travel advisors are always going to have a bias toward certain types of activities and people. I thought about this idea for months, and talked to many other companies in the space.

Ultimately, I felt like this could be a strong business, but that A) I wouldn’t enjoy it, B) it would be much harder to match advisors than I expected and C) the advisor pool was extremely small and relatively hard to find. Simply put, I couldn’t find a path to success after the initial MVT and gave up on the idea.

Obviously, anyone could run the same test and see promise. There is no perfect way to run this process: the goal is for you to see if you can find a path. If you find a path, you keep going. If you feel like it isn’t your cup of tea, or you can’t see the vision, that’s okay. Someone else might come around and invent a billion-dollar company, but it wasn’t meant to be you. Without my co-founders Wes and Shreyans, I wouldn’t be the right person to start Maven.

It’s common for people to start companies they aren’t a good fit for, and it’s painful to realize this after investing years of your life building it. In cases like those, an unsuccessful MVT can be a great thing.

CLOSING THOUGHTS

This process is art, not science. People often want to know exactly the right answer to questions like: When do I know I’m done with the MVT process? How do I know if my test is successful? This is where judgment comes in, and it’s what separates the successful from the unsuccessful. It requires intellectual honesty, rigorous thinking and some dumb luck.

What’s absolutely crazy about startups is that you can run all the MVTs you want, build a great MVP, and still fail. I learned that with Sprig, which took off like a rocket ship only to peter out in year 4. One of the flaws of the MVT system is that you can’t predict how a market will evolve as other competitors and companies enter the mix. In Sprig’s case, we simply did not foresee how directly we were going to compete with delivery applications like DoorDash, Postmates and, most importantly, Uber Eats. In the beginning, we were out-delivering (pun intended) the competition, and customers preferred us to paying $10 for delivery and waiting an hour for their food. In the long run, however, as the network effects kicked in, these services got faster and cheaper. Eventually, they ate our lunch (sorry, I really couldn’t help myself).

In fact, it is because startups are so risky that the MVT process makes sense. Once you have an MVT that’s successful, it is important to avoid “The Traction Treadmill.” Zero out the revenue you made in the MVT. Use the success to fundraise, but don’t use it for month-over-month growth. Instead, start again from zero and focus on launching a product that has product/market fit. You can easily get trapped in a vicious cycle where all you’re doing is taking an initial MVT or MVP and trying to grow it month over month.

The whole point of the MVT strategy is to give you more confidence so that you can forgo short-term growth for long-term growth. Run a number of MVTs, create a product vision, and then execute on that vision while getting feedback from your customers.

The goal of this framework is not to prevent failure. That’s impossible. The goal is to increase your chances of success. You’ll still face long odds, but if you run the above process well, you’ll be able to tip the scales more in your favor. Best of luck and happy testing!

Thank you to my writing coach Ellen Fishbein, First Round editor Jessi Craige Shikman and Maven investors Neeraj Berry and Todd Goldberg for their help in making this piece a reality.

Cover image by Getty Images / the_burtons.