Someone in a CTO group asked two questions that keep coming up: What’s one rule that genuinely helped your team adopt AI coding faster? And: What’s the biggest risk you see right now?

The answer to the first question is easy. The second one is more nuanced.

Make It the Expectation, Not an Experiment

The single thing that moved the needle most was removing the ambiguity. I told my team: the expectation is that you code with an AI agent next to you. Not occasionally. Not when you feel like it. All the time.

Budgets are available. Pick your tools. Experiment with different agents. But come back at least once a week and share what you found. What worked. What didn’t. What surprised you. What frustrated you.

This isn’t “play with it when you have a quiet afternoon.” It’s “I expect you to start using this, and I want to hear about it.”

The difference matters. When AI coding is optional, people try it once, hit a rough edge, and go back to what they know. When it’s the expectation, they push through the rough edges and actually discover where it changes the game. The weekly sharing compounds fast. One person’s breakthrough becomes everyone’s shortcut.

Be explicit about the highs and the lows. That’s how the team learns what works.

The Biggest Risk Is Doing the Same Thing Faster

I don’t think there’s one single biggest risk. But there’s a pattern I keep seeing that worries me: people think AI coding means doing the same thing but more efficiently.

It doesn’t. AI changes what matters.

Testing Goes from Checkbox to Survival

Everybody talks about “writing more unit tests now.” That misses the point entirely. What you actually need is a test suite that operates on many different levels. Unit tests, integration tests, smoke tests, end-to-end tests. The whole stack.

Why? Because when you’re generating code at speed, the only thing keeping you honest is a tight feedback loop. If your test suite only covers one layer, you’ll ship confidently and break things spectacularly. You need fast signals at every level telling you whether the thing actually works.

This isn’t about test coverage as a metric. It’s about cranking up testing to twelve because your rate of change just went through the roof.
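
To make “many levels” concrete, here’s a minimal pytest sketch that separates the layers with markers. Everything in it is a hypothetical stand-in: the discount function, the file-backed “database,” and the staging URL are placeholders for whatever your system actually does.

```python
import pytest


def apply_discount(price: float, percent: int) -> float:
    # Toy stand-in for real business logic.
    return round(price * (100 - percent) / 100, 2)


def test_apply_discount_rounds_correctly():
    # Unit level: pure logic, milliseconds, run on every change.
    assert apply_discount(100.00, percent=15) == 85.00


@pytest.mark.integration
def test_orders_survive_a_round_trip(tmp_path):
    # Integration level: touches real I/O (a throwaway file standing in for
    # a database), slower, so it runs in CI rather than on every keystroke.
    db = tmp_path / "orders.csv"
    db.write_text("ABC-123,confirmed\n")
    assert "confirmed" in db.read_text()


@pytest.mark.e2e
def test_purchase_flow_end_to_end():
    # End-to-end level: drives a deployed environment through its public API.
    # The URL is hypothetical; point it at a real staging system.
    httpx = pytest.importorskip("httpx")
    response = httpx.post(
        "https://staging.example.com/orders",
        json={"sku": "ABC-123", "quantity": 1},
    )
    assert response.status_code == 201
```

With that split, `pytest -m "not integration and not e2e"` gives you the fast inner loop, and the slower layers run in CI. The marker names are arbitrary; register them in pytest.ini so pytest doesn’t warn about them.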

Your Codebase Gets Amplified, Not Improved

Here’s the thing nobody tells you: AI is a magnifier of your codebase.

If your codebase is tight and controlled—clear conventions, well-structured modules, minimal dead code—the AI will produce more of that tight, controlled software. It picks up the patterns. It follows the conventions. It gives you more of what you already have.

If your codebase is a mess—dead code everywhere, loose conventions, three different ways to do the same thing—the AI will accelerate you into a bigger mess. It’ll happily use all three patterns. It’ll build on top of the dead code. It’ll match the chaos perfectly.

It’s never been easier to clean up. But you have to actually do it. Agentic coding isn’t just for building new features. It’s remarkably good at cleaning house. Lots of people don’t use it that way. They should.

Prune, Don’t Just Refactor

There’s an important distinction here. I’m not talking about refactoring—reorganizing code into better abstractions. I’m talking about pruning. Removing things. File by file, folder by folder. Deleting what’s not needed. Simplifying what remains.

This is essential now. Without it, you get high-speed rot. When a team generates code faster than ever, the accumulation of unnecessary stuff accelerates at the same rate. What used to take months to become unwieldy now takes weeks.

Keep it clean. Keep it understandable. And keep it like that. If you let things drift, it gets messy faster than ever before.
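
If you want a starting point, a small script can surface pruning candidates. The sketch below assumes a simple Python layout where modules live under src/ and are imported by file name; packages, dynamic imports, and entry points will show up as false positives, so treat the output as leads to hand to the agent, not files to delete blindly.

```python
"""Rough sketch: list modules under src/ that no other module imports."""
import ast
import sys
from pathlib import Path

root = Path(sys.argv[1] if len(sys.argv) > 1 else "src")
modules = {p.stem: p for p in root.rglob("*.py") if p.stem != "__init__"}

imported: set[str] = set()
for path in modules.values():
    tree = ast.parse(path.read_text(), filename=str(path))
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            # Keep only the top-level name of each import.
            imported.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            imported.add(node.module.split(".")[0])

for name, path in sorted(modules.items()):
    if name not in imported:
        print(f"never imported: {path}")
```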

Dependencies Are the Hidden Multiplier

The same magnification applies to your dependencies.

In the past, you might have accumulated a few half-used libraries. An HTTP client you mostly use. A utility library that overlaps with another one. A date library from 2019 sitting next to a newer one because nobody got around to migrating.

That was always a smell. Now it’s a real problem. AI agents see all your dependencies. When there’s ambiguity—two libraries that do similar things—the confusion bleeds into generated code. You’ll get inconsistent implementations. Mixed patterns. The kind of subtle inconsistency that’s hard to spot in review and compounds over time.

The flip side: it’s never been easier to standardize. Pick the one dependency. Let the AI help you migrate everything to it. Then remove the old one. What used to be a tedious multi-sprint cleanup is now an afternoon’s work.
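
A crude way to surface that ambiguity is to scan your dependency manifest for pairs that cover the same ground. This sketch assumes a requirements.txt and a hand-maintained table of overlapping pairs; both the file name and the table are placeholders for your own stack.

```python
"""Quick sketch: flag overlapping dependencies declared in requirements.txt."""
import re
from pathlib import Path

# Illustrative pairs of libraries that largely cover the same ground.
OVERLAPS = [
    ("requests", "httpx"),      # HTTP clients
    ("arrow", "pendulum"),      # date/time helpers
    ("simplejson", "orjson"),   # JSON serializers
]

declared = set()
for line in Path("requirements.txt").read_text().splitlines():
    line = line.split("#")[0].strip()
    if line:
        # Keep only the package name, dropping version pins and extras.
        declared.add(re.split(r"[\[<>=!~;]", line, maxsplit=1)[0].strip().lower())

for a, b in OVERLAPS:
    if a in declared and b in declared:
        print(f"both {a} and {b} are declared: pick one, migrate the other")
```

Once you’ve picked the winner, the migration itself—swapping imports and call sites across the codebase—is exactly the kind of wide-but-shallow mechanical change an agent handles well.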

The Pattern Underneath

All of these risks share the same root cause. People are treating AI as a speed boost for their existing workflow instead of rethinking what their workflow should be.

The teams that are getting the most out of AI coding aren’t the ones writing features faster. They’re the ones who realized that the game changed: testing at every layer matters more. A clean codebase matters more. Consistent dependencies matter more. Pruning matters more.

Speed is the easy part. Staying in control at that speed is the hard part.


What in your codebase would you be afraid to magnify?