The Vibe Coding Rescue Playbook: How to Salvage Your AI-Built MVP

Your AI-built app looked great in the demo. Now it is breaking in production. Here is the step-by-step rescue playbook — from triage to working product.

You built something with AI tools. It looked great in the demo. Maybe you showed it to early users, maybe you even took some payments. And then things started breaking.

Features that worked last week don't work now. New bugs appear every time you fix old ones. Performance is getting worse, not better. You've been "nearly finished" for months.

You're not alone. This is the most common pattern I see from founders who built with Cursor, Replit, Bolt, or Lovable. The prototype was brilliant. The production journey has been painful.

Here's the playbook for rescuing what you've built — from triage through to a working product.

Step 1: Triage — understand what you actually have

Before deciding what to do, you need an honest assessment of what exists. Not what you think exists. Not what the AI tool told you it built. What actually works.

The 30-minute triage checklist:

Can a new user sign up, complete the core action, and get value without your help? If not, the product isn't production-ready regardless of how complete it looks.

Does the same action produce the same result every time? Inconsistency in AI-generated code often hides under the surface — it works most of the time but fails unpredictably.

What happens when you deliberately break things? Enter invalid data. Navigate backwards unexpectedly. Open the same page in two tabs. Disconnect from the internet mid-action. If the app doesn't handle these gracefully, it's not ready for real users.

How does it perform with more than a handful of concurrent users? Load the app in ten browser tabs simultaneously and watch what happens.

Check your error logs (if you have them — many vibe-coded apps don't). The volume and type of errors tell you more about the health of the codebase than any code review.
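The consistency check above can be automated crudely. A sketch, where `coreAction` is a hypothetical stand-in for whatever your app's core operation is (in practice you would point this at a real call against staging):

```typescript
// Determinism probe: run the same action repeatedly and compare results.
// `coreAction` is a hypothetical placeholder for your app's core operation.
function coreAction(input: string): string {
  // Deterministic example implementation, for illustration only.
  return input.trim().toLowerCase();
}

function isDeterministic(
  action: (input: string) => string,
  input: string,
  runs = 10
): boolean {
  const first = action(input);
  for (let i = 1; i < runs; i++) {
    if (action(input) !== first) return false; // any divergence fails triage
  }
  return true;
}
```

If this probe ever returns false for an action that should be deterministic, you have found the kind of under-the-surface inconsistency described above.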

Step 2: Decide — fix, rebuild, or hybrid

Based on triage, you're looking at one of three paths.

Fix (the codebase is fundamentally sound):
The core architecture works. Authentication is solid. The data model makes sense. The problems are in specific features, edge cases, or performance optimisation. This is the cheapest path — typically 2-4 weeks of targeted work.

Signs you can fix: Problems are localised to specific features. The app works correctly for the main user journey. Performance is acceptable for current user numbers. Security basics (authentication, HTTPS, input validation) are in place.

Rebuild (the codebase is structurally compromised):
The architecture has fundamental problems. Different parts of the app use inconsistent patterns. The database schema doesn't support the actual data model. Authentication is bolted on rather than designed in. This isn't a failure of effort — it's a consequence of AI tools generating local solutions without global coherence.

Signs you need to rebuild: The same bugs keep reappearing after fixes. Changes in one area consistently break others. Multiple conflicting approaches to the same problem (three different state management patterns, two authentication systems). Performance degrades with each new feature.

A rebuild doesn't mean starting from zero. The existing prototype is valuable as a specification — it shows exactly what the product should do. The rebuild uses that specification with proper architecture underneath. With production-grade tooling and methodology, a structured rebuild typically takes 3-4 weeks.

Hybrid (sound core, rotten edges):
The most common scenario. The core user journey works but surrounding features are unreliable. The approach: stabilise the core, rebuild the problematic features, then systematically work outward.

Step 3: Stabilise what works

Before fixing or rebuilding anything, stabilise the working parts. This means:

Add error monitoring. If your app doesn't have error tracking (Sentry, LogRocket, or similar), add it immediately. You need to know what's failing before you can fix it. Most vibe-coded apps have zero visibility into production errors — users encounter problems and you never know.

Add basic logging. Track the critical user actions: signup, core feature usage, payment events. This tells you which paths real users actually take, which may be different from the paths you built for.

Lock the database. If your database schema is changing with each deployment, stop. Define the schema explicitly and migrate deliberately rather than letting AI tools modify it implicitly.

Pin your dependencies. AI tools often install packages without version pinning. One upstream update can break your entire app. Lock every dependency to a specific version.
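The basic logging step can be as small as this. A minimal sketch, with the event names and in-memory store chosen for illustration; in production the entries would go to your logging backend rather than an array:

```typescript
// Minimal structured event log for the critical actions named above:
// signup, core feature usage, payment. The in-memory array is a
// stand-in for a real logging backend.
type EventName = "signup" | "core_action" | "payment";

interface LogEntry {
  event: EventName;
  userId: string;
  at: string; // ISO timestamp
}

const log: LogEntry[] = [];

function track(event: EventName, userId: string): LogEntry {
  const entry = { event, userId, at: new Date().toISOString() };
  log.push(entry);
  return entry;
}
```

Even this much gives you what most vibe-coded apps lack: a record of which paths real users actually take.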

Step 4: Fix the critical path first

The critical path is the minimum journey a user takes to get value. For most products, it's: sign up → complete core action → see result. Everything else is secondary.

Map this path explicitly. Then test every step under every condition you can think of. Fix issues in order of severity along this path before touching any secondary features.

This is where the production-ready checklist becomes essential. Work through it for the critical path only. Don't try to production-harden the entire app simultaneously — that's how rescue projects stall.

Step 5: Handle security before scaling

The most dangerous aspect of vibe-coded apps is often security. Research shows that 45% of AI-generated code contains vulnerabilities, and these aren't minor issues — they're authentication bypasses, injection vulnerabilities, and exposed sensitive data.

Priority security fixes:

Authentication. Is it actually checking credentials on every protected route? AI tools frequently create "authentication" that checks on the login page but not on subsequent requests. The authentication deep-dive covers the specific patterns AI tools get wrong.
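The fix for "login page only" authentication is a check that runs on every protected request. A framework-agnostic sketch, where `sessions` stands in for your real session store:

```typescript
// Per-request authentication check. `sessions` is a stand-in for a real
// session store; the key point is that this runs on EVERY protected
// request, not just at login.
const sessions = new Map<string, string>(); // token -> userId

interface Request {
  headers: Record<string, string | undefined>;
}

function authenticate(req: Request): string | null {
  const token = req.headers["authorization"]?.replace("Bearer ", "");
  if (!token) return null;
  return sessions.get(token) ?? null; // null means reject with 401
}
```

In an Express-style app this would be middleware applied to every protected route, so no handler can be reached without passing it.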

Input validation. Is every form input sanitised before reaching the database? SQL injection and XSS vulnerabilities are among the most common AI-generated security holes.
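In practice the defence is two layers: allow-list validation before a value goes anywhere, and parameterized queries so values are never concatenated into SQL. A sketch, with the username rule chosen as an example:

```typescript
// Allow-list validation: accept only what you expect, reject everything else.
// The rule here (letters, digits, underscore, 3-30 chars) is an example.
function isValidUsername(input: string): boolean {
  return /^[A-Za-z0-9_]{3,30}$/.test(input);
}

// The second layer is a parameterized query. With a real driver
// (e.g. node-postgres) the call passes values separately from the SQL:
//   db.query("SELECT * FROM users WHERE name = $1", [username]);
// The driver handles escaping, so "'; DROP TABLE users; --" stays inert data.
```

String-concatenated SQL is exactly the pattern AI tools reach for, which is why injection holes are so common in generated code.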

API protection. Are your API endpoints actually checking that the requesting user has permission to access the data they're requesting? AI tools often build CRUD endpoints that return data to anyone who asks.
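The missing check is usually ownership: the endpoint verifies that a record exists, but not that the requester may see it. A sketch with hypothetical data shapes:

```typescript
// Ownership check for a CRUD read. The endpoint must verify that the
// requesting user owns (or may view) the record, not just that a record
// with that id exists. The Invoice shape is hypothetical.
interface Invoice {
  id: string;
  ownerId: string;
  amount: number;
}

function getInvoice(
  db: Invoice[],
  invoiceId: string,
  requesterId: string
): Invoice | null {
  const invoice = db.find((i) => i.id === invoiceId);
  if (!invoice) return null;
  if (invoice.ownerId !== requesterId) return null; // 403 in a real API
  return invoice;
}
```

Without the ownership line, any authenticated user could read any invoice by guessing ids, which is precisely the anyone-who-asks behaviour described above.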

Secrets management. Are API keys, database credentials, and payment processing keys properly secured? Check your frontend code and Git history — AI tools frequently hardcode secrets in places users can see them.
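The pattern to move toward is environment variables (or a secrets manager) with a fail-fast check at startup. A minimal sketch; the variable name is an example:

```typescript
// Secrets belong in environment variables or a secrets manager, never
// hardcoded in frontend code or committed to Git. This helper fails
// fast at startup when a required secret is missing.
function requireSecret(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required secret: ${name}`);
  }
  return value;
}
```

Calling `requireSecret` for every key at boot means a misconfigured deployment fails immediately and loudly, instead of failing mid-payment for a real user.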

Step 6: Decide what to cut

Most vibe-coded MVPs have too many features. The AI made it easy to add them, so the founder kept adding. The result is a sprawling product where nothing works reliably rather than a focused product where the core works perfectly.

Look at your feature list and ask: which features have real users actually used? Analytics will tell you. Usually, 20% of features account for 80% of usage. Consider cutting or deprioritising the other 80% of features until the core 20% is bulletproof.

This feels counterintuitive — it seems like removing features is going backwards. But a product that does three things perfectly will retain users. A product that does fifteen things unreliably won't.

Step 7: Set up proper development workflow

One reason vibe-coded apps accumulate problems is the absence of basic development practices. These aren't bureaucratic overhead — they're the minimum structure needed to maintain code quality over time.

Version control with branches. Stop deploying directly from the AI tool to production. Use Git branches, review changes before merging, and maintain a stable production branch.

Staging environment. Test changes somewhere that isn't production. Real users shouldn't be your QA team.

Automated tests for the critical path. You don't need 100% test coverage. You need tests that verify your core user journey works after every change. If the signup flow, the main feature, and the payment flow all pass automated tests, you can deploy with confidence.

Database migrations. Every schema change should be an explicit, reversible migration — not an ad-hoc modification.
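The shape of an explicit, reversible migration looks like this. A sketch only: the "schema" is an in-memory stand-in, and real projects would use their migration tool of choice, but the structure (named steps, matching `up`/`down`, applied once and recorded) is the point:

```typescript
// Explicit, reversible migrations: each schema change is a named step
// with an `up` and a matching `down`, applied in order and recorded.
// The Set-based "schema" is an in-memory stand-in for illustration.
interface Migration {
  name: string;
  up: (schema: Set<string>) => void;
  down: (schema: Set<string>) => void;
}

const migrations: Migration[] = [
  {
    name: "001_add_users_email",
    up: (s) => s.add("users.email"),
    down: (s) => s.delete("users.email"),
  },
];

function migrate(schema: Set<string>, applied: string[]): void {
  for (const m of migrations) {
    if (!applied.includes(m.name)) {
      m.up(schema);         // apply the change
      applied.push(m.name); // record it so it never runs twice
    }
  }
}
```

Because every step has a `down`, a bad deployment can be rolled back deliberately instead of hand-edited in production.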

When to call for help

The DIY rescue works when: problems are clearly identifiable, the core architecture is sound, and you have (or can hire) someone who can evaluate AI-generated code critically.

It doesn't work when: the codebase has grown beyond your ability to understand it, security issues run deep, or you've already spent months trying to fix things without progress.

If you're in the second category, a structured assessment from someone who's seen dozens of these situations can save you months. I typically spend a day evaluating a vibe-coded codebase and produce a clear recommendation: here's what's salvageable, here's what needs rebuilding, here's the realistic timeline and cost.

The Discovery Sprint works for rescue projects too — instead of discovering what to build, we're discovering what to save.

For the full picture of what separates prototype-quality from production-quality software, The Final 10%: What AI Can't Build covers the judgment calls that determine whether a product generates revenue or gathers dust.

---

Tom Crossman builds and rescues production-ready software at Hello Crossman. 18 years in product development. 100+ products shipped. If your vibe-coded MVP needs help, start here →