If your MVP feels like hard work, you’re probably doing it wrong

Bryce York
5 min read · Oct 24, 2018


There are many situations where building an MVP is hard, but when you're a product manager trying to validate a new feature, it shouldn't be so tough.

As product managers, it’s essentially our job to build MVPs all the time.

Across my own projects, my job, and the companies and PMs I advise, I'm noticing a pattern. The term MVP seems to be leading us down an overly challenging path. I actually think we're getting way ahead of ourselves half the time we build an MVP.

If we go back to first principles, the idea of an MVP is to build the smallest product that can validate our business model. But we also have to remind ourselves that the MVP isn't the first step. Before we get to the stage of building any features or products, we're supposed to validate our riskiest hypotheses.

I propose we focus less on building MVPs and more on building MVEs — Minimum Viable Experiments.

I like to think of the MVE as the simplest possible way to check my assumptions. I think people will do X, so how can I check that without building out a huge feature nobody wants?

In practice, I've found you need a whole lot of MVEs before you get started on your MVP, and even though it sounds scientific, there's a real art to finding the balance. You could test and validate every single piece of your proposed user experience, but that would take longer than just building it and testing it in the market!

So instead, we need to think through which parts of the user experience, product, or business model are riskiest. Which piece of the puzzle are we most unsure of?

From what I've found, the riskiest part is often whether anybody cares enough to do the work. In my experience, there's a big difference between what someone should do and what they actually want to do. In many cases, if the two don't match, people will choose to do what they want.

So I tend to start with how to design an MVE (minimum viable experiment) that determines if people will actually bother. I’ve got two favorite strategies:

1. The red door test is an experiment where you build the call to action and simply see if anyone clicks it. So instead of building out a whole feature (or integrating an external tool) for adding a product to a private wish list, we can try adding a button labeled "add to wish list" and track whether people even click it. Then we just have to let our users know what happened after they click it (such as saying it's coming soon, or that we're gauging whether our users would like this feature). See the sketch just after this list for how little code that takes.

2. HamsterWare (aka 'Flintstoning'). The idea of 'Flintstoning' refers to the foot-powered car from The Flintstones, and HamsterWare implies building software that's powered by a hamster wheel behind the scenes (i.e., manual work by real people). The idea here is to make most of the process manual in the beginning without necessarily letting the user know.
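To show just how small a red door test can be, here's a minimal sketch in TypeScript. The button selector, the analytics endpoint, and the event name are all invented for the example; in practice you'd wire the click into whatever analytics tool you already use.

```typescript
// Red door test: the button exists, the feature doesn't.
// We log clicks so we can measure demand before building anything.
const wishListButton =
  document.querySelector<HTMLButtonElement>("#add-to-wish-list");

wishListButton?.addEventListener("click", () => {
  // Record the click (hypothetical endpoint and event name).
  fetch("/api/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ event: "wish_list_click", ts: Date.now() }),
  });

  // Close the loop instead of leaving the user at a dead end.
  alert("Wish lists are coming soon! We're checking how many people want this.");
});
```

The whole experiment is one button, one tracking call, and an honest message. If nobody clicks, you've invalidated the demand without ever writing the feature.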

I’m not sure if you’ve noticed, but MVPs kind of have a bad name, and I think building more MVEs is the answer.

I've even heard marketing execs specifically say they hate MVPs! BUT, it's easy to hate on them when they're so often over-scoped and/or built prematurely (i.e., without proper validation).

I propose we shake things up by advocating for more MVEs!

One of the challenges when building an MVP is that you’re laser-focused on stripping away as much as possible to minimize the scope without compromising the value it brings your users. The beauty of an MVE is that you can keep things simple in ways that don’t involve minimizing user value in the process. HamsterWare is a particularly great way to give users real value, even if you have to keep things contained by only showing it to some of them.

MVP vs. MVE in the wild (case study)

I recently reviewed a proposal for a product feature MVP that I think serves as a great example of how an MVP and MVE can both be valid approaches.

The proposed feature focused on the Referral stage of the Pirate Metrics (AARRR) funnel. The original proposal was to build an MVP that concentrated on a single user flow rather than building the whole feature out for the full use case, which is a solid example of MVP scoping.

This MVP’s scope included purchasing a SaaS product (rather than building everything in-house), designing user flows for users to engage with the SaaS tool, building out the integration, building out the user flow, and then QA’ing and releasing the MVP.

Without question, this is a very reduced scope vs. what it could be.

BUT, what if we applied MVE thinking to this problem?

After going over the proposed MVP, I suggested we look for an easier way to validate whether the product they wanted people to recommend was ready for a Referral-stage focus.

The MVE I ended up suggesting was an email campaign that proposed the same Referral mechanism to the user, except that the heavy lifting would be done manually in the background (classic HamsterWare).

By using an email campaign to present the functionality to the user, they could massively reduce the development and design time.

By using UTM tracking, they could keep track of everything without any complicated analytics/workflow builds.
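For context, UTM parameters are just query-string tags that tools like Google Analytics read automatically, so attribution comes for free. Here's a minimal sketch of how the referral links in that email might be tagged; the campaign and source names are made up for the example:

```typescript
// Build a UTM-tagged referral link so every click from the email
// campaign is attributed without any custom analytics work.
function buildReferralLink(baseUrl: string, referrerId: string): string {
  const url = new URL(baseUrl);
  url.searchParams.set("utm_source", "email");          // traffic origin
  url.searchParams.set("utm_medium", "referral");       // channel type
  url.searchParams.set("utm_campaign", "referral-mve"); // hypothetical campaign name
  url.searchParams.set("utm_content", referrerId);      // which user referred
  return url.toString();
}

// e.g. https://example.com/invite?utm_source=email&utm_medium=referral&...
console.log(buildReferralLink("https://example.com/invite", "user-123"));
```

Because each link carries the referrer's ID, the team could see who referred whom straight from their existing analytics dashboard, which is exactly the kind of shortcut an MVE is after.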

And the only notable compromises they made were: 1) only running the MVE with their email list, and 2) having to manually do the heavy lifting themselves based on what showed up in the analytics.

To me, this was a huge win: they were saving a massive amount of resources up front and could quickly validate whether this particular product was ready for a Referral mechanism to be built and rolled out.

I can’t specifically share the results, but I can say the proposed MVP was what inspired me to write this article and formalize my thinking about the value of the MVE!

Written by Bryce York

Startup Product Leader • Product (management) geek sharing hard-learned lessons from 10+ yrs in early-stage & growth-stage startups
