A few months ago I got on a call with a CTO at a PE-backed company. His team had inherited a complicated monolithic codebase, several hundred thousand lines of code, almost entirely undocumented. Three separate homegrown security frameworks. Data siloed across modules. Seven terabytes of images stored in Postgres. The kind of codebase where knowledge lives in people's heads, not in the code.
He was wearing about four hats. Technical design, AWS infrastructure, analysis work, plus trying to keep a side development project going just to stay in the stack. His team was small, mostly developers and one UX person, no dedicated analysts. They were juggling tech debt, bug fixes, and a roadmap the board wanted delivered yesterday.
He'd been reading about AI, encouraging his team to experiment, genuinely trying to figure out how these tools could help them move faster. They had Copilot connected. He'd personally used Claude to help prototype some internal tooling. But the usage had stalled out at tab completion and the occasional prompt for a SQL query. No process around it. No strategy for which tasks AI should handle versus which it shouldn't.
This was a sharp technical leader who wanted to move faster and went to AI first because it seemed like the obvious lever. The board expected gains. He expected gains. And after months of trying, he couldn't point to a meaningful change in how his team was actually shipping software.
I've had some version of this conversation a dozen times in the last quarter.
The question everyone's asking (and why it's the wrong one)
Every engineering leader I talk to is getting the same pressure from the same directions. The board wants an AI strategy. The CEO read an article about 10x productivity gains. A competitor just announced they're "AI-native." And somewhere in the middle of all this, a CTO or VP of Engineering is supposed to figure out which tools to buy and how to make their developers actually use them.
So they ask: "Where should we start with AI?"
Reasonable question. Wrong question.
The teams I've watched succeed with AI over the past two years didn't start by picking tools. We've had a front-row seat across 250+ product engagements, and the pattern is consistent. The teams that got real value started by figuring out where they actually stood. Not where they thought they stood. Not where their last board deck said they stood. Where they actually stood.
The difference matters more than you'd think.
AI tools are an amplifier. That's the problem.
There's a mental model I keep coming back to: AI tools are amplifiers. They make your strengths stronger and your weaknesses louder.
A team with clean documentation, solid test coverage, and well-defined processes? Hand them Claude Code or Cursor and they'll fly. The AI has context. It knows what "good" looks like in that codebase. The developers trust the output because they have tests that catch the 15-20% of AI-generated code that's subtly wrong.
A team with scattered docs, no tests, and a deployment process that involves someone named Dave running a script on his laptop? That same AI tool will generate plausible-looking code that quietly introduces bugs nobody catches for weeks. I've seen it happen. More than once.
(And yes, I realize "invest in fundamentals" is the most boring possible advice about AI. Bear with me, it gets more interesting.)
The pattern we keep seeing is this: companies treat AI adoption as a technology decision when it's actually a readiness decision. You wouldn't hand a Formula 1 car to someone who just got their learner's permit. The car isn't the constraint. The driver is. Or more accurately, the track conditions, the pit crew, the telemetry systems, the whole infrastructure that makes the car useful instead of dangerous.
The five things that actually predict AI success
After watching dozens of teams adopt AI tools, some successfully, many not, we started noticing the same five factors separating the wins from the expensive experiments.
How the team is actually using AI today. Not whether they have licenses. Whether they're using the tools daily, and whether someone has thought about which tools fit which workflows. That CTO I mentioned had done the hard part. He got the tools installed, encouraged the team to experiment, even used them himself. But without a process for how AI should fit into their development cycle, the team defaulted to tab completion and stopped there. Motivation without structure doesn't move the needle.
The state of their documentation. This one surprises people. AI tools are only as good as the context they have. A codebase with clear READMEs, documented architecture decisions, and well-written requirements gives AI something to work with. A codebase where the knowledge lives in three people's heads? The AI is guessing. Expensively. That CTO's team had inherited a codebase that was, in his words, "effectively undocumented." No amount of AI tooling fixes that. You have to give the AI something to work with before it can give you anything back.
Testing and quality practices. This is where it gets uncomfortable. If you don't have automated tests, you can't safely use AI-generated code. Period. Every line an AI writes is a hypothesis. Tests are how you verify the hypothesis. Without them, you're flying blind and hoping for the best. We tried this on an internal project early on, shipping AI-generated code with minimal test coverage. It took about three weeks before a subtle bug cascaded through two services. Lesson learned.
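To make "every line is a hypothesis" concrete, here's a minimal sketch. The function, names, and the specific bug are invented for illustration, but the pattern is the one we see constantly: an AI draft that looks right, differs from correct by one character, and only gets caught because a boundary test pins down the edge case.

```python
# Hypothetical helper (names invented for illustration): apply a volume
# discount. An AI draft of this function used `quantity > 100` instead of
# `quantity >= 100`, silently dropping the discount at exactly 100 units.
# The boundary tests below are what catch that class of bug.

def discounted_total(unit_price: float, quantity: int) -> float:
    """Total price with a 10% discount at or above 100 units."""
    total = unit_price * quantity
    if quantity >= 100:
        total *= 0.90
    return round(total, 2)

# Boundary tests: each assert pins down behavior an off-by-one
# AI draft would get wrong.
assert discounted_total(10.0, 99) == 990.0   # just below the threshold
assert discounted_total(10.0, 100) == 900.0  # exactly at the threshold
assert discounted_total(10.0, 101) == 909.0  # just above the threshold
```

Without that middle assertion, the buggy `>` version passes every "obvious" test and ships. That's the whole argument for test coverage as a precondition for AI-generated code.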
Process and workflow maturity. AI doesn't just speed up coding. It shifts the bottleneck from writing code to defining what to build and reviewing what gets generated. Teams with clean workflows absorb that shift. Teams without them just produce more code faster, which, if the code is wrong, means they produce more bugs faster.
Team culture around AI. This is the one nobody talks about, and it might be the most important. Some teams see AI as a threat. Others see it as a power tool. The difference usually comes down to whether leadership has framed AI as "this replaces what you do" versus "this handles the tedious parts so you can focus on the hard problems." That framing isn't a memo. It's a hundred small conversations and decisions over months.
The assessment we built (and why)
We kept having the same conversation. A CTO would ask us about AI strategy, and we'd spend the first hour just trying to understand their starting point. Do you have tests? How's your documentation? Is your team bought in or freaked out?
So we built a tool to shortcut that first hour.
The AI Readiness Assessment is a 10-question diagnostic that scores your team across all five of those dimensions. Takes about two minutes. You get a score out of 100, a breakdown by category, and prioritized recommendations based on where your specific gaps are. That last part is what people actually find useful.
Not generic "you should adopt AI" advice. Specific next steps like "your testing foundation needs work before AI-generated code will be safe to ship" or "your team culture score suggests you need to address concerns before rolling out new tools."
A score of 80+ means your foundations are solid and you're ready to push into advanced AI workflows. Between 60 and 79 means you're close. A few targeted improvements will unlock real acceleration. Below 60, and you've got gaps that are actively limiting the return on any AI tool you buy.
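If it helps to see the bands at a glance, here's the logic from the paragraph above as a toy function. The function and its labels are mine, not the assessment's actual implementation, and the real tool also returns per-category breakdowns and recommendations that this sketch omits.

```python
# A sketch of the scoring bands described above (labels are mine;
# the actual assessment returns a category breakdown and prioritized
# recommendations on top of the overall score).

def readiness_band(score: int) -> str:
    """Map a 0-100 readiness score to the band described in the post."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 80:
        return "ready: solid foundations, push into advanced AI workflows"
    if score >= 60:
        return "close: a few targeted improvements unlock real acceleration"
    return "gaps: foundations are limiting the return on any AI tool"
```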
No matter where you land, you walk away knowing exactly what to work on first instead of guessing.
Why this matters more right now than it did a year ago
I'll be honest. I'm not sure exactly how the next two years play out with AI in software development. Nobody is, and anyone who tells you they know is selling something.
But I am sure about this: the gap between teams that are ready for AI and teams that aren't is compounding. A year ago, you could afford to wait and see. The tools were impressive but rough. The workflows were experimental. Today, the teams that invested in readiness (the documentation, the testing, the processes, the culture) are pulling away. They're shipping faster, with fewer bugs, and their developers are spending less time on boilerplate and more time on interesting problems.
The teams that skipped the readiness work? They're on their second or third tool, still not seeing results, and starting to wonder if the whole thing is overhyped.
It's not overhyped. It's under-prepared-for.
My old lacrosse coach had a saying that I think about more than I probably should: "Under pressure, you don't rise to the level of your expectations. You fall to the level of your preparation." He was talking about championship games, but he might as well have been talking about technology adoption.
So what would I do?
If I were an engineering leader right now, feeling the pressure to "do something with AI" but not sure where to start, I'd skip the tool evaluation. Forget the AI consultant. Put the prompt engineering workshop on hold.
I'd spend two minutes getting an honest read on where my team actually stands.
Take the AI Readiness Assessment →
And if the score reveals gaps you want help closing, or you'd rather talk through your AI strategy with someone who's helped dozens of teams navigate this, we're around.

Mike is Co-Founder of The Gnar Company, a Boston-based software development agency where he leads project delivery for clients like Whoop, Kolide (acquired by 1Password), LevelUp (acquired by GrubHub), Qeepsake (featured on Shark Tank), and AARP. With over a decade of experience building impactful software solutions for startups, SMBs, and enterprise clients, Mike brings an unconventional perspective, having transitioned from professional lacrosse to software engineering and applying an athlete's mindset of obsessive preparation and relentless iteration to every project. As AI reshapes software development, Mike has become a leading practitioner of agentic development, leveraging the latest AI-assisted practices to deliver high-quality, production-ready code in a fraction of the time traditionally required.