I had a problem.
We build software all day at The Gnar. High-quality software products that handle real traffic, process real payments, serve real users. But every single one belongs to a client. When someone asks, "Can you show me what your process actually produces?" I'm stuck, because that work is behind an NDA.
So I needed a demo. Something I could point to and say: this is what happens when you bring us an idea.
I called the most trusted person I could think of: my mom.
The Idea Nobody Asked For (Except My Mom)
My mom loves puzzles, specifically wooden puzzles. They’re intricate and unique. The problem is they can be expensive, and, like any puzzle, once you’re done with one you get a little bored with it and it just ends up taking up space.
She'd been complaining about this for a while. "I've got 30 puzzles in my closet. I've done them all. They're too expensive to just give away, but I don't want to do them again."
That's a real business problem. Expensive products that lose their value after a single use. A community of collectors who'd happily trade or rent if someone made it easy.
She didn't have a business plan. She didn't have wireframes or a pitch deck or a Jira board full of user stories. She had an idea and a lot of opinions about puzzles.
Perfect.
That's exactly the situation many founders and executives are in when they first reach out to us. They've got something in their head. Maybe they've been thinking about it for months. Maybe it came up in a meeting last Tuesday. But it's a conversation, not a specification.
We recorded a one-hour phone call. And then we turned it into a working application.
Here's what that actually looked like, step by step.
From One Conversation to a Full Set of Requirements
Most people assume the hard part of building software is the building. Writing code. Picking the right tech stack. Getting the database schema right.
It's not.
The hard part is figuring out what to build. And figuring it out in enough detail that the people doing the building (whether humans, AI agents, or both) aren't guessing.
Some people will read this and think, “can’t you just do this with Lovable/Replit/Base44/v0/Bolt/TheNewOneThatCameOutToday?” In theory, maybe. But the core value of this process is the experienced human who steers it: engaging in the stakeholder conversation, challenging direction, understanding what is impossible versus hard versus easy, seeing the vision, landing the plane at the first stop, and re-aligning for takeoff to the next. And that’s just in the source-document creation.
This isn't just my opinion. InfoWorld's analysis of AI breakthroughs for 2026 put it plainly: the constraint on building new products is no longer the ability to write code, but the ability to creatively shape the product itself. And Anthropic's 2026 Agentic Coding Trends Report found that while developers now use AI in roughly 60% of their work, they report being able to fully delegate only 0-20% of tasks. The rest still requires human judgment, and that judgment is most valuable during the planning phase, not the coding phase.
This is the core idea behind what we call Context-Driven Development, or CDD: detailed requirements are the source code of the future. If you get the requirements right, everything downstream moves faster. If you get them wrong, it doesn't matter how fast your team codes. You're building the wrong thing at high speed.
Kief Morris at Thoughtworks, writing on Martin Fowler's site, frames this as the difference between working "in the loop" and working "on the loop." Working in the loop means inspecting every line of code an AI agent generates. Working on the loop means defining the system that guides how agents produce code in the first place. His point: agents can generate code faster than humans can manually inspect it. So you shift your energy upstream to the requirements and architecture that shape the output.
That's exactly what we did with my mom's puzzle library.
After that phone call, the first thing we did was treat the transcript like a source document. The same way we'd treat interview transcripts from a real client engagement.
Here's what happened to that one-hour conversation:
Competitor Analysis
Our agentic workflow ingested the transcript and produced a competitor landscape document. This isn't a list of Google results. It's a structured analysis: who else operates in this space, what they do well, where the gaps are, and how the proposed product fits into the market.
If you're a non-technical founder, think of this as the research phase you'd normally spend weeks on, compressed into something you can review in 20 minutes. It tells you whether your idea has a real opening or whether you're walking into a crowded room.
MoSCoW Feature Prioritization
MoSCoW is a framework for sorting features into four buckets: Must-have, Should-have, Could-have, and Won't-have (for now). It forces you to make decisions early about what actually matters for a first release versus what can wait.
From a single conversation, the workflow extracted every feature my mom mentioned (and several she implied without realizing it) and categorized them. Users need accounts? Must-have. A social feed where people show off their collections? Could-have. Integration with puzzle manufacturers? Won't-have for V1.
This is the document that prevents scope creep before a single line of code is written. Every feature has a home. Every "wouldn't it be cool if..." has a category.
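To make the framework concrete, here's a minimal sketch in Python. The feature names and bucket assignments are illustrative, drawn from the examples above, not the actual prioritization document:

```python
from enum import Enum

class Priority(Enum):
    MUST = "Must-have"
    SHOULD = "Should-have"
    COULD = "Could-have"
    WONT = "Won't-have (for now)"

# Hypothetical features from the conversation, each given a home.
features = {
    "User accounts & authentication": Priority.MUST,
    "Browse available puzzles": Priority.MUST,
    "Rental queue management": Priority.SHOULD,
    "Social feed for collections": Priority.COULD,
    "Manufacturer integrations": Priority.WONT,
}

def v1_scope(feats):
    """The first release: Must-haves and Should-haves only."""
    return [name for name, p in feats.items()
            if p in (Priority.MUST, Priority.SHOULD)]
```

The payoff is that "wouldn't it be cool if..." ideas get recorded as Could- or Won't-haves instead of silently expanding V1.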
The Full PRD
A Product Requirements Document is exactly what it sounds like: a document that defines what the product needs to do, who it's for, and why it matters. But here's the distinction most people miss. A good PRD defines the what and why, not the how. It's a strategic document, not a technical spec.
Our PRD covered the executive summary, the problem statement, target users, key features with detailed acceptance criteria, and explicit out-of-scope items. All generated from one phone call.
The reason this matters, and the reason I'm walking you through it in this much detail, is that this is the step most teams skip. They hear an idea, start building, and figure out the details as they go. That works fine until week six when someone says "wait, I thought it would do this" and the team says "nobody told us that."
The bottleneck in modern software development isn't writing code. AI has genuinely accelerated that part. The bottleneck is defining what to build clearly enough that AI agents or humans can execute without ambiguity.
We invest heavily in this phase because it's where the real speed gains come from. Not from typing faster. From thinking more clearly upfront.
From Requirements to a Real Application
Once requirements are locked, the build phase changes character. You're not exploring anymore. You're executing.
This is the Ignite phase of our process, and it's where AI-assisted development really shows what it can do.
Forrester recently defined agentic software development as the next phase of AI-driven engineering. A model where meaningful development work gets delegated to AI agents while humans stay accountable for intent, review, and outcomes. That's a pretty good description of what our next 48 hours looked like.
I served as the architect. My job was making structural decisions: how the data model should work, how the rental queue logic should flow, where to draw the line between features. These are judgment calls that require understanding the business, not just the code. AI agents are bad at this. They're pattern matchers, not product strategists.
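For illustration of the kind of structural call this involves (this is an assumed sketch, not the production schema), one such decision is modeling a rental as its own record linking a member to a puzzle, rather than as state hanging off either one:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical entities: names and fields are assumptions for illustration.
@dataclass
class Puzzle:
    id: int
    title: str
    piece_count: int
    available: bool = True

@dataclass
class Member:
    id: int
    email: str
    queue: list[int] = field(default_factory=list)  # puzzle ids, in priority order

@dataclass
class Rental:
    # A rental links a member to a puzzle for a window of time,
    # so rental history survives even after the puzzle moves on.
    puzzle_id: int
    member_id: int
    checked_out: date
    returned: Optional[date] = None
```

Whether to model it this way is exactly the kind of judgment call that requires knowing the business (do we need rental history for billing disputes?), not just the code.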
The AI agents handled implementation. Given clear specifications from the Ideate phase, they generated the code, wrote tests against the requirements, and built out feature after feature. The context from those detailed requirements is what made this possible. Without it, the agents would have been guessing. With it, they were executing a plan.
MIT Technology Review's reporting on AI coding captures both sides of this honestly. Their interviews with 30+ developers found broad agreement on where AI tools excel: producing reusable boilerplate code, writing tests, fixing bugs, and explaining unfamiliar code. But for complex problems, the kind where engineers really earn their bread, the tools face significant hurdles. Context window limitations mean agents struggle to parse large codebases and can forget what they're doing on longer tasks.
This is exactly why the human-in-the-loop role matters. Atlassian's engineering team built an entire framework around this called HULA (Human-in-the-Loop Agents), and their findings from 950,000+ pull requests validated the same pattern: AI agents handle repetitive, well-specified tasks while engineers focus on complex judgment calls. Their framework was accepted by the IEEE/ACM International Conference on Software Engineering. It's not just us saying this.
And I want to be clear about something: what we built isn't a prototype.
When people hear "built in 48 hours," they picture a clickable mockup. A Figma file with some fake data. Something you show at a pitch meeting and then rebuild from scratch.
That's not what we built.
The wooden puzzle library application has:
- Real user accounts with authentication and profile management
- Stripe billing for rental subscriptions and payments
- A rental queue system where users can browse available puzzles, request rentals, and manage their queue
- An admin dashboard for managing inventory, tracking rentals, and handling operations
- A real database with proper schema design, not a spreadsheet pretending to be a backend
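As a toy sketch of the rental-queue behavior described above (assumed logic, not the app's actual code): each puzzle has a first-come, first-served waitlist, and when a puzzle is returned it goes straight to the next member in line:

```python
from collections import defaultdict, deque

class RentalQueue:
    """Toy model: members wait per-puzzle, served FIFO."""
    def __init__(self):
        self.waitlists = defaultdict(deque)  # puzzle_id -> deque of member_ids
        self.checked_out_to = {}             # puzzle_id -> member_id

    def request(self, member_id, puzzle_id):
        if puzzle_id not in self.checked_out_to:
            # Puzzle is on the shelf: rent it immediately.
            self.checked_out_to[puzzle_id] = member_id
        else:
            # Otherwise join the line for it.
            self.waitlists[puzzle_id].append(member_id)

    def return_puzzle(self, puzzle_id):
        self.checked_out_to.pop(puzzle_id, None)
        if self.waitlists[puzzle_id]:
            # Hand the puzzle straight to the next member waiting.
            self.checked_out_to[puzzle_id] = self.waitlists[puzzle_id].popleft()
```

The real system layers inventory, billing, and an admin view on top of this core flow, but the requirements from the Ideate phase pinned down exactly this kind of behavior before any agent wrote a line.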
This is production-grade software. The kind of thing you could hand to actual users tomorrow.
I wasn't reviewing every line of code (that's what the automated test suite is for). I was making sure the architecture wouldn't collapse under real usage, that the business logic matched what my mom actually described, and that the technical decisions would scale if someone wanted to take this further.
That's the model. Experienced engineers guiding AI agents through a well-specified plan. The AI handles volume. The humans handle judgment.
What This Means for You
OK. Here's where I need to be honest.
This demo was streamlined. On purpose. I had one stakeholder (my mom). One source document (one phone call). No design phase. No brand guidelines. No existing systems to integrate with. No compliance requirements. No committee of six people who all need to approve the color of the buttons.
Real client projects are more complex. They involve multiple stakeholder interviews, sometimes weeks of them. There's a design phase where we build user flows, information architecture, and clickable prototypes before writing a line of production code. Timelines are measured in weeks and months, not hours. Because the stakes are higher.
But the process is identical.
Every Gnar project starts with deep requirements gathering. Detailed, specific, leave-nothing-to-assumption requirements. We call that phase Ideate, and it typically takes 2-4 weeks. The output is a validated prototype, a complete PRD, and a set of implementation milestones with clear definitions of done. Guaranteed price. Guaranteed outcomes. You know what you're getting before we write a line of code.
Then we build. That's Ignite. Our team of senior engineers, armed with those detailed requirements, uses AI-enhanced workflows to move at a pace that would have seemed impossible even a year ago. We're seeing 35-40% velocity improvements on implementation when the context is well-defined.
The speed gains are real. But they come from the planning, not from the coding.
CIO magazine's coverage of agentic AI in engineering describes the shift as engineers moving from creators to curators, and the core skill becoming systems thinking rather than syntax. That tracks with everything we're seeing. The teams that thrive with AI-assisted development aren't the ones with the fastest coders. They're the ones with the clearest thinkers.
Teams that try to skip requirements and go straight to "let AI build it" end up with technically impressive software that solves the wrong problem. Or worse, technically fragile software that breaks the first time a real user does something unexpected.
The shift I keep coming back to: the bottleneck in software development has moved. It used to be "how fast can we write code?" Now it's "how clearly can we define what needs to be built?" Teams that invest in that definition phase unlock the real power of AI-assisted development. Teams that skip it are just building the wrong thing faster.
One Conversation Is All It Takes to Start
My mom didn't have a technical background. She didn't have a business plan or a spec document. She had a problem she cared about and 60 minutes to talk about it.
That was enough.
If you've got an idea (or a backlog of ideas you've been sitting on) and you're wondering what it would take to turn one of them into something real, the starting point is a conversation. Not a pitch. Not a requirements document you need to prepare in advance. Just a conversation.
We call it a Project Blueprint Session. It's a working session where we dig into your idea, ask the questions that surface hidden assumptions, and map out what a path from concept to working product would actually look like. You walk away with a concrete deliverable, not a sales pitch.
Start with a Project Blueprint Session →
Not sure you're ready for that yet? Our AI Readiness Assessment takes about 5 minutes and helps you figure out where AI-assisted development fits into what you're trying to build. And where it doesn't.
Take the AI Readiness Assessment →
Either way, the conversation is free. And if my mom's puzzle library is any indication, you might be surprised how far one good conversation can go.

Mike is Co-Founder of The Gnar Company, a Boston-based software development agency where he leads project delivery for clients like Whoop, Kolide (acquired by 1Password), LevelUp (acquired by GrubHub), Qeepsake (featured on Shark Tank), and AARP. With over a decade of experience building impactful software solutions for startups, SMBs, and enterprise clients, Mike brings an unconventional perspective, having transitioned from professional lacrosse to software engineering and applying an athlete's mindset of obsessive preparation and relentless iteration to every project. As AI reshapes software development, Mike has become a leading practitioner of agentic development, leveraging the latest AI-assisted practices to deliver high-quality, production-ready code in a fraction of the time traditionally required.
