From Spreadsheet to Platform: The Anatomy of a Service Business Software Build
Turning service business expertise into scalable software products.
How does a 30-day service-business-to-software build actually work? A week-by-week breakdown of what the client sees versus what's happening under the hood.
People ask me how it's possible to build production-ready software in 30 days. It's a fair question — traditional agencies quote 6-12 months for similar scope.
The answer isn't magic, and it isn't cutting corners. It's a different methodology — one designed specifically for turning service business processes into software, using AI-accelerated development tools in a deliberate, structured way.
This post walks through the anatomy of a real service-business-to-software build. I've composited details from several projects (including elements of RiskPod, PulseIQ, and SetWise) to show what happens week by week — both what the client sees and what's happening under the hood.
Before Week 1: Discovery Has Already Happened
This build assumes the Discovery Sprint is complete. That one-week process has already produced a methodology map, a clickable prototype, a data model, and a build recommendation. The client has seen what the product could look like and agreed to proceed.
Without Discovery, you're building blind. With it, Week 1 starts at full speed because the fundamental questions — what to build, for whom, and why — are already answered.
Week 1: Frontend First
What the client sees: A working prototype that looks and feels like a real product. They can click through the core user journeys, see their methodology reflected in the interface, and provide feedback on specific screens.
What's actually happening: I build the entire frontend first. Every screen, every user flow, every state. This isn't a wireframe or a mockup — it's a working React application with realistic data. Navigation works. Forms respond. The experience feels complete even though there's no real backend yet.
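To make "a working React application with realistic data" concrete, here is a minimal sketch of the kind of typed mock-data layer that can stand in for the backend during Week 1. The `Assessment` type, field names, and sample records are invented for illustration, not taken from any real client build.

```typescript
// Hypothetical mock-data layer: the frontend consumes this during Week 1,
// so every screen renders realistic content before a backend exists.

interface Assessment {
  id: string;
  clientName: string;
  score: number; // 0-100, later produced by real business logic
  status: "draft" | "submitted" | "reviewed";
}

// Realistic-looking fixtures so screens, tables, and charts feel complete.
const mockAssessments: Assessment[] = [
  { id: "a1", clientName: "Acme Logistics", score: 72, status: "reviewed" },
  { id: "a2", clientName: "Northgate Legal", score: 45, status: "submitted" },
  { id: "a3", clientName: "Briar Consulting", score: 0, status: "draft" },
];

// Components call this instead of a real API. Swapping in a fetch() call
// in Week 2 changes this one function, not the components that use it.
async function listAssessments(): Promise<Assessment[]> {
  return Promise.resolve(mockAssessments);
}
```

Because the components only ever see the function's return type, replacing the mock with a real API in Week 2 is a one-function change.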
Why frontend first? Three reasons.
First, it forces every product decision to be made upfront. You can't build a screen without deciding what data it shows, what actions users can take, and what happens next. This is where 18 years of product experience matters most — knowing which decisions to make now and which to defer.
Second, it gives the client something tangible to react to within days, not months. Service business owners aren't used to abstract technical specifications. They need to see their methodology reflected in an interface to know if it's right.
Third, it creates the specification for the backend. Instead of writing a 50-page requirements document that nobody reads, the frontend is the specification. Every screen tells the developer (in this case, me and my AI tools) exactly what data is needed and how it should behave.
The technology: React, TypeScript, Tailwind. Production-grade tools, not prototype frameworks. What's built in Week 1 is the actual frontend that ships to users — not something that gets thrown away and rebuilt.
Week 2: The Engine Room
What the client sees: The prototype from Week 1 now has real data. They can create actual entries, run real assessments through their methodology, and see results persist. It's starting to feel like software they could use tomorrow.
What's actually happening: This is where the methodology becomes code.
The database gets built — PostgreSQL via Supabase, with a schema designed around the client's data model from Discovery. User authentication goes in. Role-based access (because service businesses always have different user types — admin, consultant, client). The API layer connects frontend to backend.
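As a sketch of what role-based access can look like in code, here is a deliberately simple allow-list check. The role names match the ones mentioned above; the permission strings are assumptions for illustration, and a real build would enforce this server-side (e.g. via database row-level security), not only in application code.

```typescript
// Illustrative role-based access check: each role gets an explicit
// allow-list of actions, and anything not listed is denied by default.

type Role = "admin" | "consultant" | "client";

const permissions: Record<Role, ReadonlySet<string>> = {
  admin: new Set(["manage_users", "run_assessment", "view_results"]),
  consultant: new Set(["run_assessment", "view_results"]),
  client: new Set(["view_results"]),
};

// Deny-by-default: true only if the role's allow-list contains the action.
function canPerform(role: Role, action: string): boolean {
  return permissions[role].has(action);
}
```

The deny-by-default shape matters: adding a new action requires an explicit decision about which roles get it, rather than silently inheriting access.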
Most importantly, the business logic gets encoded. This is the heart of any service-business-to-software build. The assessment criteria. The scoring algorithms. The workflow rules. The conditional logic that senior consultants carry in their heads. All of it gets translated into code that executes consistently every time.
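To show what "methodology becomes code" means in practice, here is a minimal weighted-scoring sketch. The criteria, weights, and band thresholds are invented for illustration; in a real build they come from the client's methodology, mapped during Discovery.

```typescript
// A consultant's judgement encoded as a deterministic function:
// weighted criteria, normalised to 0-100, then bucketed into a band.

interface Criterion {
  id: string;
  weight: number; // relative importance; normalised by the total below
  score: number;  // 0-5, as a senior consultant would rate it
}

type Band = "low" | "medium" | "high";

function assess(criteria: Criterion[]): { score: number; band: Band } {
  const totalWeight = criteria.reduce((sum, c) => sum + c.weight, 0);
  const weighted = criteria.reduce((sum, c) => sum + c.weight * (c.score / 5), 0);
  const score = Math.round((weighted / totalWeight) * 100);
  // Illustrative thresholds; the real cut-offs are a client decision.
  const band: Band = score >= 70 ? "high" : score >= 40 ? "medium" : "low";
  return { score, band };
}
```

Once the rule lives in a function like this, every assessment is scored the same way every time, which is exactly the consistency the manual process can't guarantee.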
This is also where AI-accelerated development earns its keep. I use BuildKits to generate structured specifications that AI coding tools can execute precisely. Not "vibe coding" — not loose prompts hoping for the best. Deliberate engineering where I've already made every product decision and the AI handles execution at speed.
The difference between AI-accelerated and traditional development isn't quality — it's velocity. The same decisions get made. The same edge cases get handled. The same production standards apply. But the implementation happens in days rather than weeks because the specification is structured for AI execution rather than human interpretation.
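A structured specification in this sense can be as simple as a typed object an AI tool fills in against. The shape below is a hypothetical example of the idea, not the actual BuildKits format: every field, action, and purpose is decided before generation starts, leaving the tool nothing to guess.

```typescript
// Hypothetical screen specification: structured enough that a coding
// tool executes decisions rather than making them.

interface FieldSpec {
  name: string;
  type: "text" | "number" | "date" | "select";
  required: boolean;
}

interface ScreenSpec {
  route: string;
  purpose: string;   // one sentence: why this screen exists
  fields: FieldSpec[];
  actions: string[]; // every button and transition, decided upfront
}

const assessmentForm: ScreenSpec = {
  route: "/assessments/new",
  purpose: "Capture a new assessment using the client's criteria.",
  fields: [
    { name: "clientName", type: "text", required: true },
    { name: "assessmentDate", type: "date", required: true },
  ],
  actions: ["saveDraft", "submit"],
};
```

The contrast with a loose prompt is the point: "build an assessment form" leaves dozens of decisions to the tool; a spec like this leaves none.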
Week 3: The 2am Test
What the client sees: The product is starting to look polished. Onboarding flows work. Emails send. The dashboard shows real metrics. It feels like something their clients would pay for.
What's actually happening: This is what I call the "delight engineering" week. The features that separate a prototype from a product.
The "2am test" is my benchmark: can a new user — someone who's never spoken to the client's team — sign up, understand what the product does, complete a core action, and get value? If yes, the product works independently. If no, it's just a digitised version of the client's manual process, and that's not good enough.
This week includes onboarding flows (because first impressions determine retention), email notifications (because users need to know when things happen), data visualisation (because service business methodologies produce insights that need to be communicated clearly), and the small touches that make software feel professional rather than amateur.
It also includes integrations. Service businesses don't operate in isolation — their software needs to connect to the tools they already use. Stripe for payments. Email services for notifications. Sometimes CRM integrations or API connections to third-party data sources.
Week 4: Production Hardening
What the client sees: The product is live. Real users can access it. Performance is fast. Everything works as expected. Launch communications are ready.
What's actually happening: This is the week that separates production-ready products from prototypes, and it's the week that most "build it fast" approaches skip entirely.
Security hardening. Input validation. Error handling for every edge case. Rate limiting. Backup systems. Monitoring. Performance optimisation. SSL certificates and DNS configuration. Admin panels so the client can manage users and content without calling me.
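As one concrete example from that list, here is a minimal in-memory token-bucket rate limiter. It's a sketch of the technique, not a production implementation: a real build would typically use the hosting platform's limiter or a shared store like Redis, and the capacity and refill numbers here are illustrative.

```typescript
// Token bucket: allows short bursts up to `capacity`, then throttles
// to a sustained rate of `refillPerSec` requests per second.

class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,     // maximum burst size
    private refillPerSec: number, // sustained requests per second
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, false if rate-limited.
  allow(now: number = Date.now()): boolean {
    const elapsedSec = (now - this.lastRefill) / 1000;
    // Top up tokens for the time elapsed, capped at capacity.
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Taking `now` as a parameter keeps the limiter testable without real clocks, which is itself part of the hardening habit: untestable infrastructure code is where production surprises hide.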
I wrote about why this week matters in The Final 10%: What AI Can't Build. AI gets you 90% of the code quickly, but the final 10% — user psychology, conversion optimisation, production hardening — is what separates revenue-generating products from unused prototypes. That final 10% requires product judgment that comes from shipping 100+ products.
The deployment itself: production hosting, CDN configuration, domain setup, analytics. The client gets a deployed, working product with a real URL that they can share with users. Not a staging environment. Not a demo. A live product.
What Makes This Possible in 30 Days
Three things.
Frontend-first methodology. Building the entire frontend before touching the backend means every product decision is made in Week 1, not discovered incrementally over months. There's no "we'll figure it out in sprint 12" — everything is decided before a single API endpoint is written.
AI-accelerated engineering (not vibe coding). I use AI tools to execute implementations at speed, but every decision — data model, user flow, business logic, security approach — is made by someone with 18 years of product experience. The AI is the builder. I'm the architect. The difference matters, and I've written about it in How to Write Specifications That AI Coding Tools Actually Follow.
Pattern recognition from 100+ builds. After building this many products, the common patterns are deeply familiar. Authentication, admin panels, onboarding flows, payment integration, multi-tenancy — I've built all of these dozens of times. The novel part of each build is the client's specific methodology. Everything else is proven architecture applied with speed.
What the Client Gets at the End
On day 30, the client has:
A production-deployed web application with authentication, role-based access, and a complete admin panel. Their methodology encoded as working business logic that produces consistent results. Client-facing onboarding and user experience that passes the 2am test. Payment integration if the revenue model requires it. Analytics and monitoring so they can see how the product is being used. Full ownership of the codebase — no lock-in, no dependency on any single developer.
This isn't a prototype or an MVP in the "barely works" sense. It's a production-ready product that real users can pay for. Every build I do is designed to generate revenue from day one, not to prove a concept.
What Happens After Day 30
The build phase creates the foundation. The Grow phase (£1K/month) covers ongoing development, iterations based on real user behaviour, feature additions, and technical support. Most clients stay 6-12 months as the product evolves based on actual usage data rather than assumptions.
The 30-day build isn't the end of the product journey — it's the beginning. But it's a beginning with a live, revenue-capable product rather than a specification document or a deck of slides.
If you want to understand how this fits into the bigger picture of turning a service business into scalable systems and software, the pillar post covers valuation maths, revenue models, and market dynamics. And if you want to see what Discovery looks like before a build starts, read about how we find the product inside your service business.
---
Tom Crossman builds scalable systems and software for service businesses at Hello Crossman. 18 years in product development. Head of Product Engineering at Habito (£3B in mortgages processed). 100+ products shipped. See the case studies →