MNB Research Team · February 12, 2026
<article>
<h1>Minimum Viable SaaS: What to Build First (And What to Skip Entirely)</h1>
<p>Most first-time SaaS founders ship a product that is 60% wrong. Not because they lack technical skill, but because they spend months building features nobody asked for while neglecting the three or four things that actually determine whether a customer pays.</p>
<p>This is not just another screed against over-engineering. It is a practical, data-backed framework for deciding — before you write a single line of code — what belongs in v1, what belongs in v3, and what belongs in the trash.</p>
<p>If you are a solo founder or a two-person team staring at a backlog of 47 feature ideas, this guide is for you.</p>
<hr/>
<h2>The MVS Problem: Why "Minimum Viable" Fails Most Founders</h2>
<p>The term "minimum viable product" was coined by Frank Robinson in 2001 and popularized by Eric Ries in <em>The Lean Startup</em>. The concept is sound: ship the smallest thing that delivers real value, learn from real users, iterate. In theory, every founder knows this.</p>
<p>In practice, two failure modes dominate.</p>
<p><strong>Failure Mode 1: The MVP That Is Not Minimum.</strong> The founder convinces themselves that feature X is "core," that the onboarding flow needs to be polished, that the integrations are essential. What they ship is not an MVP — it is a half-finished v2. It takes seven months instead of seven weeks. By the time it launches, they have burned through savings, their energy is depleted, and they have zero validation data.</p>
<p><strong>Failure Mode 2: The MVP That Is Not Viable.</strong> The founder goes too lean. They ship a Google Form and a spreadsheet. Users sign up once, encounter the friction, and never return. The founder interprets low retention as "product-market fit failure" and pivots — when the actual problem was that the product was too rough to give a fair test.</p>
<p>The goal is the zone between these two failure modes: minimal enough to ship in weeks, viable enough to generate honest signal.</p>
<p>Here is how to find that zone.</p>
<hr/>
<h2>Step 1: Define the One Job Your Product Gets Hired To Do</h2>
<p>Clayton Christensen's "jobs to be done" framework is overused as a buzzword and underused as an actual scoping tool. For the purpose of building an MVS, it is the most useful question you can ask.</p>
<p><em>What specific, concrete job does your customer hire your product to do?</em></p>
<p>Not a vague job like "save time" or "grow revenue." A specific job: "reconcile Stripe payouts against my bank account without opening three tabs," or "remind my freelance clients to pay their invoices without me having to chase them manually," or "tell me when a competitor changes their pricing page."</p>
<p>The job should be:</p>
<ul>
<li><strong>Concrete.</strong> You can describe it in one sentence without using the word "help."</li>
<li><strong>Recurring.</strong> The customer faces this situation at least once a week. Daily is better.</li>
<li><strong>Painful.</strong> The customer is currently solving it with a workaround they hate — a spreadsheet, a manual process, a combination of three tools that do not talk to each other.</li>
<li><strong>Bounded.</strong> You can tell when the job is done. There is a clear output.</li>
</ul>
<p>If the job does not pass all four criteria, you do not have a job — you have a vague aspiration. Keep narrowing until you do.</p>
<p>Once you have the job, every feature decision becomes a test: <em>Does this feature help the customer complete this specific job, or does it do something else?</em> If it does something else, it does not belong in v1.</p>
<hr/>
<h2>Step 2: The Three-Layer Stack — Core, Shell, and Polish</h2>
<p>Think of your product in three concentric layers.</p>
<h3>Layer 1: Core (Must Ship in V1)</h3>
<p>The core is the mechanism that actually performs the job. It is the algorithm, the integration, the data transformation, the automation — whatever makes your product work at all. Without the core, the product is a landing page, not a product.</p>
<p>If your product reconciles Stripe payouts, the core is the reconciliation engine. Everything else is scaffolding around it.</p>
<p>Rule: The core must work reliably for at least 80% of your target use cases before you ship anything else. A core that fails half the time destroys trust faster than anything you can do in your onboarding flow.</p>
<h3>Layer 2: Shell (Must Ship in V1, But Simple Is Fine)</h3>
<p>The shell is the minimum interface and infrastructure required for a user to actually use the core. It includes:</p>
<ul>
<li>Authentication (sign up, log in, password reset)</li>
<li>Basic UI that surfaces the core's output</li>
<li>The minimal data input mechanism (connect an account, paste a URL, upload a file)</li>
<li>Email delivery for anything time-sensitive</li>
</ul>
<p>The shell does not need to be beautiful. It needs to be functional and trustworthy. Functional means it works without errors. Trustworthy means it does not look like it was built in an afternoon — a small amount of polish on the UI goes a long way toward making users believe their data is safe.</p>
<h3>Layer 3: Polish (V2 and Beyond)</h3>
<p>Everything else. This includes:</p>
<ul>
<li>Team accounts and permissions</li>
<li>Advanced filtering, sorting, search</li>
<li>Integrations beyond the one required to do the core job</li>
<li>Dashboard analytics and reporting</li>
<li>Custom branding or white-labeling</li>
<li>API access</li>
<li>Mobile apps</li>
<li>Bulk operations</li>
<li>Notification preferences beyond basic email</li>
</ul>
<p>These are real features that real customers will eventually want. They are not fake. But they do not belong in v1 because they do not change whether the core job gets done. They belong in the roadmap, clearly prioritized, scheduled for later.</p>
<hr/>
<h2>Step 3: The Feature Scoring Matrix</h2>
<p>When founders argue about what belongs in v1, the argument is usually emotional rather than analytical. The Feature Scoring Matrix makes it analytical.</p>
<p>Score each proposed feature on four dimensions, each on a scale of 1-3:</p>
<table>
<thead>
<tr>
<th>Dimension</th>
<th>1 (Low)</th>
<th>2 (Medium)</th>
<th>3 (High)</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Job Relevance</strong></td>
<td>Tangentially related to the core job</td>
<td>Supports the core job indirectly</td>
<td>Directly required to complete the core job</td>
</tr>
<tr>
<td><strong>Blocking Frequency</strong></td>
<td>Users can work around it easily</td>
<td>Workaround exists but is annoying</td>
<td>No workaround — product unusable without it</td>
</tr>
<tr>
<td><strong>Build Cost</strong></td>
<td>More than 2 weeks of dev time</td>
<td>1-2 weeks of dev time</td>
<td>Less than 1 week of dev time</td>
</tr>
<tr>
<td><strong>Signal Value</strong></td>
<td>Tells you nothing new about whether the product works</td>
<td>Provides some learning</td>
<td>Directly tests a core assumption</td>
</tr>
</tbody>
</table>
<p>Add the scores. Features with a total of 10-12 go in v1. Features scoring 7-9 go in v2. Features scoring below 7 go in the backlog indefinitely.</p>
<p>The genius of this matrix is that it forces you to be honest about build cost. Founders routinely underestimate development time, which inflates the apparent value of every feature. When you score build cost honestly — and "honestly" means multiplying your initial estimate by 1.5 — many "obviously necessary" v1 features drop out of the scoring.</p>
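<p>The matrix is mechanical enough to express as a few lines of code. The sketch below is illustrative TypeScript: the dimension names and the 10–12 / 7–9 / below-7 thresholds come straight from the matrix above, while the <code>Feature</code> shape and the <code>buildCostScore</code> helper (which applies the 1.5× honesty multiplier) are assumptions about how you might record the scores.</p>

```typescript
type Score = 1 | 2 | 3;

interface Feature {
  name: string;
  jobRelevance: Score;
  blockingFrequency: Score;
  buildCost: Score; // 3 = under a week, 1 = more than two weeks
  signalValue: Score;
}

type Bucket = "v1" | "v2" | "backlog";

// Total of 10-12 goes in v1, 7-9 in v2, below 7 in the backlog.
function classify(f: Feature): Bucket {
  const total =
    f.jobRelevance + f.blockingFrequency + f.buildCost + f.signalValue;
  if (total >= 10) return "v1";
  if (total >= 7) return "v2";
  return "backlog";
}

// Honest build-cost scoring: pad your raw estimate by 1.5x first,
// then map the padded estimate onto the 1-3 scale from the table.
function buildCostScore(estimatedWeeks: number): Score {
  const honest = estimatedWeeks * 1.5;
  if (honest < 1) return 3;
  if (honest <= 2) return 2;
  return 1;
}
```

<p>For example, a feature scoring 3 on relevance, 3 on blocking, 2 on cost, and 3 on signal totals 11 and lands in v1; drop any dimension by two points and it falls to v2.</p>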
<hr/>
<h2>Step 4: The Anti-Features — What to Explicitly Not Build</h2>
<p>The most important scoping decisions are often the things you choose <em>not</em> to build. Here is a list of features that appear in almost every early-stage SaaS roadmap and almost never belong in v1.</p>
<h3>Team Collaboration Features</h3>
<p>Multi-user accounts, permissions, sharing, comment threads, activity feeds. These are essential for horizontal SaaS tools targeting teams. They are almost always premature for a v1 that you are selling to individual users or solo operators first. Build the single-user product, validate that it works, then add multi-user as a growth lever.</p>
<p>The exception: if your core job literally cannot be done by one person — if it requires two or more collaborators by definition — then yes, multi-user is core. Most of the time it is not.</p>
<h3>Advanced Reporting and Dashboards</h3>
<p>The temptation here is understandable. Everyone wants to see graphs. Graphs look impressive in demos. The problem is that a dashboard is only valuable when there is enough data to be interesting, which means it is useless for your first ten customers. Build a table that shows the data. Graphs come later.</p>
<h3>Integrations Beyond One</h3>
<p>You might need one integration for your product to work at all — the Stripe connection, the Google Analytics hook, the Slack webhook. You do not need six integrations. Every integration you add is a maintenance burden, a potential source of API breakage, and a feature that dilutes your positioning. Ship one. Add more when customers ask and when you understand which ones they actually use.</p>
<h3>White-Labeling and Custom Branding</h3>
<p>This is almost always a request from enterprise prospects who are not going to buy v1 anyway. It is a distraction. Build it when you have a contract that requires it.</p>
<h3>Mobile Apps</h3>
<p>Unless the core job is inherently mobile (a field service app, a location-based tool), the mobile app does not belong in v1. Your early users are likely technical people or professionals at desks. Ship a web app that works on mobile browsers. A native app can come later.</p>
<h3>CSV Import and Export</h3>
<p>This is a productivity feature, not a core feature. In the early days, you can handle data migration manually for your first customers. Build the import wizard when you have enough customers that doing it manually is no longer feasible.</p>
<hr/>
<h2>Step 5: The V1 Build Checklist</h2>
<p>Once you have scored your features and stripped out the anti-features, run through this checklist before you start building. Every item should have a clear answer.</p>
<h3>The Core Job</h3>
<ul>
<li>Can you state the core job in one sentence without using the word "help"?</li>
<li>Have you talked to at least five people who currently do this job manually and confirmed they find it painful?</li>
<li>Is the core job something a user does at least weekly?</li>
<li>Is there a clear, observable output that tells the user the job is done?</li>
</ul>
<h3>The Core Mechanism</h3>
<ul>
<li>Do you have a working prototype of the core mechanism? Even a script counts.</li>
<li>Have you tested it against real data from at least one potential customer?</li>
<li>Does it produce the right output at least 80% of the time?</li>
<li>Do you understand the failure modes — when it breaks and why?</li>
</ul>
<h3>The Shell</h3>
<ul>
<li>Is authentication in place (sign up, log in, password reset)?</li>
<li>Is there a basic UI that shows the core output?</li>
<li>Is there an email path for anything time-sensitive?</li>
<li>Can a user complete the core job end-to-end without asking you for help?</li>
</ul>
<h3>The Business</h3>
<ul>
<li>Is there a way to take payment? Even a Stripe link is fine.</li>
<li>Is there at least one person committed to paying before you launch?</li>
<li>Do you have a way to capture contact information from interested users?</li>
</ul>
<p>If you have checked every item on this list, you are ready to ship. If you have not, figure out what is blocking you and solve it — do not add features instead.</p>
<hr/>
<h2>Step 6: The One-Week Sprint Framework</h2>
<p>The best MVS teams time-box their builds in one-week sprints with explicit, immovable deliverables. Here is how a disciplined eight-week MVS build looks for a solo technical founder:</p>
<p><strong>Week 1:</strong> Build the core mechanism as a standalone script or API. No UI. Input: a sample data file or API credentials. Output: the result in a terminal or a JSON file. Test it against five real examples.</p>
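<p>For the Stripe-reconciliation example used throughout this guide, Week 1's "core as a standalone script" might look like the sketch below. Everything here is a hedged assumption, not a real reconciliation algorithm: the <code>Payout</code> and <code>BankTxn</code> shapes and the matching rule (same amount, posted within two days of arrival) are stand-ins for whatever your actual core mechanism is.</p>

```typescript
interface Payout {
  id: string;
  amountCents: number;
  arrivalDate: string; // "YYYY-MM-DD"
}

interface BankTxn {
  id: string;
  amountCents: number;
  postedDate: string; // "YYYY-MM-DD"
}

interface MatchResult {
  matched: Array<{ payoutId: string; bankTxnId: string }>;
  unmatchedPayouts: string[];
}

const DAY_MS = 24 * 60 * 60 * 1000;

// The core mechanism, no UI: pair each payout with the first unclaimed
// bank transaction of the same amount posted within two days of arrival.
// Unmatched payouts are the output that matters -- they are the job.
function reconcile(payouts: Payout[], txns: BankTxn[]): MatchResult {
  const claimed = new Set<string>();
  const matched: MatchResult["matched"] = [];
  const unmatchedPayouts: string[] = [];

  for (const p of payouts) {
    const hit = txns.find(
      (t) =>
        !claimed.has(t.id) &&
        t.amountCents === p.amountCents &&
        Math.abs(Date.parse(t.postedDate) - Date.parse(p.arrivalDate)) <=
          2 * DAY_MS
    );
    if (hit) {
      claimed.add(hit.id);
      matched.push({ payoutId: p.id, bankTxnId: hit.id });
    } else {
      unmatchedPayouts.push(p.id);
    }
  }
  return { matched, unmatchedPayouts };
}
```

<p>Run it from the terminal against a JSON file of sample data and print the result. That is the entire deliverable for the week: proof that the mechanism produces the right output on real examples, with zero product around it.</p>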
<p><strong>Week 2:</strong> Add persistence. Wrap the core in a database so results are stored and retrievable. Add the ability to run the core on a schedule (cron job, background worker). Still no UI.</p>
<p><strong>Week 3:</strong> Build authentication. Use a library (Auth.js, Supabase Auth, Clerk). Do not build auth from scratch. Get sign up, log in, and password reset working.</p>
<p><strong>Week 4:</strong> Build the minimal UI. A single dashboard page that shows the output of the core mechanism for the logged-in user. No styling system required — a functional layout is enough.</p>
<p><strong>Week 5:</strong> Add the input mechanism. The form, the OAuth connection, the file upload — whatever is required for the user to configure the core mechanism. Test the full end-to-end flow: sign up, configure, see output.</p>
<p><strong>Week 6:</strong> Add billing. Stripe Checkout is the fastest path. One plan, one price, monthly billing. Do not build a tiered pricing system yet. Test that payment works and that access is gated correctly.</p>
<p><strong>Week 7:</strong> Polish and harden. Fix the three or four most obvious bugs from your internal testing. Add basic error handling so the app does not show stack traces to users. Add transactional emails for sign-up confirmation and payment receipt. Write a basic landing page.</p>
<p><strong>Week 8:</strong> Soft launch. Share with the five to ten people you have been talking to during discovery. Charge them. Watch them use it. Take notes. Do not build anything new until you have feedback from real users.</p>
<p>Eight weeks from zero to paying customers. This is achievable. Founders who take longer almost always do so because they expanded scope, not because the core was genuinely more complex.</p>
<hr/>
<h2>Step 7: Common Scoping Mistakes and How to Catch Them Early</h2>
<h3>Mistake 1: Building for the Demo, Not the Daily User</h3>
<p>There is a category of features that look great in a demo but are never used after day one. Animated transitions, interactive product tours, a "smart" onboarding wizard that asks ten questions and then recommends a configuration. These features consume enormous build time relative to their actual user value.</p>
<p>The test: imagine a user who has been using your product for 90 days. Does this feature still matter to them? If the honest answer is "probably not," it does not belong in v1.</p>
<h3>Mistake 2: Solving the Exception Instead of the Rule</h3>
<p>Every time you talk to a potential customer, they will mention edge cases. "What if I have two Stripe accounts?" "What if I need to import data from QuickBooks?" "What if my team is in three time zones?"</p>
<p>These are real problems — for some users. The question is whether they are real problems for the majority of your target customers. Build for the rule, not the exception. Document the exceptions. Revisit them in v2 if they keep coming up.</p>
<h3>Mistake 3: Copying Competitors' Feature Sets</h3>
<p>When you look at a well-funded SaaS competitor and see their feature list, you are looking at years of accumulated product decisions, many of which were wrong at the time. You are not competing with their feature set in v1. You are competing with their ability to get a specific job done for a specific kind of customer.</p>
<p>The question is not "does my product have everything Competitor X has?" The question is "does my product do the core job better than Competitor X for my target customer segment?" You can win on one dimension with a fraction of their feature set.</p>
<h3>Mistake 4: Treating "Founder Intuition" as a Reason to Build</h3>
<p>Intuition is valuable. It is also frequently wrong about specific features. When you find yourself saying "I just know users will want this," ask yourself whether you have evidence. Evidence means: at least one person you have talked to explicitly said "I wish it did X." If you do not have that, intuition is not a reason to build — it is a reason to validate first.</p>
<hr/>
<h2>The Rule of Three Paying Customers</h2>
<p>Here is the most practical heuristic for MVS scope that we have seen work consistently: build only what you need to charge three people.</p>
<p>Not sign up. Not "interested." Not waitlisted. Paying, with a credit card on file.</p>
<p>This constraint is useful because it forces you to think about the minimum set of features that would make someone open their wallet. Most of the features you are arguing about in your backlog are not in that set. Most users will pay for a product that reliably solves one painful problem, even if everything else about it is rough.</p>
<p>When you hit three paying customers, you have proof that the core job is real, that your solution addresses it well enough, and that your pricing is not completely wrong. Everything built after that point should be informed by what those three customers tell you they actually need next.</p>
<hr/>
<h2>What "Viable" Actually Means</h2>
<p>The word "viable" in MVP is often interpreted as "technically functional." That is not quite right. A product is viable if it passes a higher bar: a user can complete the core job without asking you for help, and they are willing to pay for the ability to do so.</p>
<p>The "without asking for help" part matters. If your early users need a 30-minute onboarding call before they can use the product, the product is not yet viable — it is a service with software attached. You should offer that call anyway, because it is how you learn what the product is missing. But the goal is to get to a state where the product itself guides the user through the job.</p>
<p>The "willing to pay" part matters because free users and paying users behave differently. Paying users have made a commitment. They will push through friction that free users abandon. They will tell you when something is broken because they have skin in the game. The signal from paying users is worth ten times the signal from free users.</p>
<hr/>
<h2>Tools That Speed Up MVS Development Without Compromising Quality</h2>
<p>In 2026, there is no excuse for building authentication, billing, or email infrastructure from scratch. These are solved problems with production-grade libraries. Using them is not "cutting corners" — it is smart resource allocation.</p>
<ul>
<li><strong>Authentication:</strong> Clerk, Supabase Auth, or Auth.js. Pick one and ship it. Do not build your own.</li>
<li><strong>Billing:</strong> Stripe. Use Stripe Checkout for v1. Set up webhooks. Wrap it in a subscription guard middleware. Done.</li>
<li><strong>Email:</strong> Resend or Postmark. Transactional emails should take hours, not weeks.</li>
<li><strong>Database:</strong> Supabase (Postgres) or PlanetScale (MySQL). Managed, scalable, cheap at low volume.</li>
<li><strong>Background jobs:</strong> Inngest, Trigger.dev, or a simple cron on Vercel/Railway. Do not run background jobs on a server you have to manage in v1.</li>
<li><strong>Deployment:</strong> Vercel for Next.js, Railway for everything else. Do not spend time on DevOps in v1.</li>
</ul>
<p>Using these tools, a solo technical founder can have the shell of a production-grade SaaS application in under a week. The remaining weeks go into the core mechanism — which is the only place your unique value actually lives.</p>
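<p>The "subscription guard" mentioned above can be a few lines of pure logic that your web framework's middleware calls on every protected request. The sketch below assumes you store a Stripe-style subscription status locally (kept current by webhook events) and that a short grace period for failed payments is the v1 policy; the function name and the seven-day window are illustrative choices, not a Stripe API.</p>

```typescript
// Stripe-style subscription statuses, stored locally and
// updated whenever a billing webhook arrives.
type SubscriptionStatus =
  | "active"
  | "trialing"
  | "past_due"
  | "canceled"
  | "incomplete"
  | "incomplete_expired"
  | "unpaid"
  | "paused";

// v1 access policy: full access while active or trialing, plus a short
// grace window for past_due so a failed card does not lock a paying
// user out mid-job. Everything else is gated to the billing page.
function hasAccess(status: SubscriptionStatus, daysPastDue = 0): boolean {
  if (status === "active" || status === "trialing") return true;
  if (status === "past_due") return daysPastDue <= 7;
  return false;
}
```

<p>In a Next.js app, for example, middleware would look up the logged-in user's stored status, call <code>hasAccess</code>, and redirect to the billing page when it returns false. Keeping the policy in one pure function means you can change the grace window without touching routing code.</p>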
<hr/>
<h2>The Decision Framework in Summary</h2>
<p>Before you build any feature, ask these four questions in order:</p>
<ol>
<li><strong>Is this required to complete the core job?</strong> If no, it does not belong in v1.</li>
<li><strong>Is this blocking users from completing the core job?</strong> If users have a workable alternative, it can wait.</li>
<li><strong>Will building this generate learning that changes how you build the core?</strong> If no, defer it.</li>
<li><strong>Can you charge three people without this feature?</strong> If yes, ship without it and add it later.</li>
</ol>
<p>If a feature passes all four questions, build it. If it fails any of them, put it in the v2 backlog and move on.</p>
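<p>The four questions form an ordered filter: the first failure ejects the feature. A minimal sketch, where each question is an honest boolean you answer yourself (the field names are illustrative):</p>

```typescript
interface FeatureAnswers {
  requiredForCoreJob: boolean;      // Q1: required to complete the core job?
  blocksCoreJob: boolean;           // Q2: no workable alternative exists?
  generatesCoreLearning: boolean;   // Q3: builds learning that changes the core?
  canChargeThreeWithoutIt: boolean; // Q4: could you charge 3 people without it?
}

// "build" only if the feature survives all four questions, in order.
// Note the inversion on Q4: if you CAN charge without it, it waits.
function decide(a: FeatureAnswers): "build" | "defer" {
  if (!a.requiredForCoreJob) return "defer";
  if (!a.blocksCoreJob) return "defer";
  if (!a.generatesCoreLearning) return "defer";
  if (a.canChargeThreeWithoutIt) return "defer";
  return "build";
}
```

<p>The inversion on the last question is the point of the whole framework: a feature is deferred not because it is bad, but because three people will pay before it exists.</p>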
<p>The founders who build great products are not the ones with the most features. They are the ones who said no to the most good ideas. Every feature you do not build in v1 is a week of your life you get back, a bug you do not have to fix, and a conversation with a user that teaches you something instead of confirming what you already assumed.</p>
<hr/>
<h2>Conclusion: The Discipline to Ship Small</h2>
<p>Building an MVS is not a technical challenge. It is a discipline challenge. The code is often the easy part. The hard part is resisting the pull toward completeness — the feeling that shipping an incomplete product reflects poorly on you, that real founders ship polished products, that one more feature will make the difference.</p>
<p>It will not. The difference is made by shipping something real, watching real users struggle with it, and iterating based on what you learn. Every week spent building features that do not get tested is a week of learning delayed.</p>
<p>Define the job. Build the core. Wrap it in the minimum viable shell. Charge three people. Then — and only then — start building the features that your actual customers have told you they need.</p>
<p>That is what building first means.</p>
</article>