Some Thoughts on the Impact of AI on Software Engineering

Index

Key Takeaways

  • AI Coding Is a Magnifier
  • When You Go Faster, Look Further Ahead

Organisational

  • Pace Layering — Stewart Brand
  • The Simplicity Cycle — Dan Ward
  • Cognitive Debt
  • Simplifying = Crucial = Work
  • Handovers Were Hidden Quality Gates
  • Feedback Becomes Soup
  • Unwind the Fidelity

Engineering Management

  • Where Does the Rigor Go?
  • True Agile Practices Are Back!
  • Code Proven to Work
  • Testing Goes from Checkbox to Survival
  • Security Is No Longer Optional
  • LLM as Judge
  • Don't Ship Slop

Workshop Time

  • 15 questions to discuss

Key Takeaways

AI Coding Is a Magnifier

Not a magic wand. A magnifier.

If your codebase is tight, the AI produces more tight software.

If your codebase is a mess, the AI accelerates you into a bigger mess.

If your feedback culture is painful, the AI delivers that pain faster.

If your company culture is tense, the AI amplifies the tension.

Read the post

When You Go Faster, Look Further Ahead

People treat AI as a speed boost for their existing workflow instead of rethinking what their workflow should be.

Speed is the easy part. Staying in control at that speed is the hard part.

Organisational

Pace Layering — Stewart Brand

Not everything changes at the same speed.

Fashion moves fast. Infrastructure moves slow. Culture barely moves at all.

The fast layers innovate. The slow layers stabilise.

Healthy systems need both — and clear boundaries between them.

Pace Layering

The Simplicity Cycle — Dan Ward

Adding complexity improves goodness — until it doesn't.

Past the peak, every addition makes things worse, not better.

The hard part: knowing when to stop adding and start removing.

Maturity is the move from complex-and-good to simple-and-good.

The Simplicity Cycle

Cognitive Debt

Complexity is becoming cognitive debt.

The system grows more complex than anyone can hold in their head.

And it's not just you — your users and customers carry that complexity too.

Prediction: Feeling left behind and not in control will lead to stress, fights and burn-out.

The answer is simplicity, spring cleaning & proper documentation, not "rest".

“I have made this longer than usual because I have not had time to make it shorter.”
— Blaise Pascal

Simplicity is not where you start. It's where you arrive — after the effort of removing what doesn't belong.

Simplifying = Crucial = Work

The Simplicity Cycle — complexity helps, then hurts.

Pascal's insight — making things less complex takes significant effort.

AI is a magnifier — it amplifies whatever state your system is in.

Take the time to simplify everything: processes, communication, architecture, decisions. Otherwise complexity will spin out of control.

Handovers Were Hidden Quality Gates

Every handover in the old production chain forced a conversation between different perspectives.

wireframe → design → front-end → back-end → QA

AI compresses the entire chain into one person, one session, hours instead of weeks.

The maker gains speed and coherence — but 5–6 quality conversations now happen zero times.

Nobody called them quality gates. They were invisible until they disappeared.

Feedback Becomes Soup

When you show a full prototype, everyone reacts on a different layer simultaneously.

Flow, UI, copy, UX, tech feasibility, edge cases — all at once.

The old process forced focus — you can't give colour feedback on a wireframe.

Now there's one artifact containing all layers, and meetings devolve into noise.

The maker gets defensive. Reviewers feel unheard. Employee happiness drops.

Unwind the Fidelity

The prototype is done. But you reveal it in rising fidelity:

  1. Wireframe → talk flow and structure only
  2. Add real copy → talk language and meaning
  3. Greyscale layout → talk UX and hierarchy
  4. Full design → talk visual decisions
  5. Interactive prototype → talk edge cases and tech

You use AI to undo what AI did, so humans can process it the way humans need to.

Not building in layers. Reviewing in layers.

Engineering Management

Where Does the Rigor Go?

Engineering quality control doesn't disappear when AI writes code.

It migrates — to specs, tests, constraints, and risk management.

The work isn't gone. It moved upstream.

ThoughtWorks Retreat Report (PDF)

True Agile Practices Are Back!

Pair programming, ensemble development, continuous integration.

These create the tight feedback loops that agent-assisted development requires.

Compress your sprint cadence to one week.

Feedback diverges between meetings; meetings re-establish shared perspective.

Code Proven to Work

“Your job is to deliver code you have proven to work.”
— Simon Willison

Untested AI-generated PRs are a dereliction of duty.
The job shifts from writing code to proving it works.

Read the post | Simon Willison's Blog — follow this man.

Testing Goes from Checkbox to Survival

You need a test suite that operates on many different levels.

Unit tests, integration tests, smoke tests, end-to-end tests. The whole stack.

Your rate of change just went through the roof.

Crank testing up to twelve.
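
A rough sketch of what that stack can look like, assuming pytest; price_basket, CheckoutClient and the test-tenant URL are illustrative stand-ins, not anything from this talk.

    # Layered test suite sketch, assuming pytest. price_basket stands in for a unit
    # of your own logic; CheckoutClient and the test-tenant URL are hypothetical.
    import pytest

    def price_basket(items):
        # Stand-in for real domain logic under test.
        return sum(i["qty"] * i["unit_price"] for i in items)

    def test_price_basket():
        # Unit level: pure logic, milliseconds, runs on every commit.
        assert price_basket([{"sku": "A", "qty": 2, "unit_price": 5}]) == 10

    @pytest.mark.integration
    def test_order_against_test_tenant():
        # Integration level: real services in a dedicated test tenant, runs before merge.
        from checkout import CheckoutClient  # hypothetical client for your own API
        client = CheckoutClient(base_url="https://test-tenant.example.com")
        assert client.create_order(sku="A", qty=1).status == "CONFIRMED"

    # Register the "integration" marker in pytest.ini, then let CI choose the layer:
    #   pytest -m "not integration"   -> fast feedback on every commit
    #   pytest -m integration         -> slower, deeper checks before merge

Smoke and end-to-end layers follow the same pattern, just further from the code and closer to the user.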

Security Is No Longer Optional

The number of critical flaws being found is skyrocketing.

This makes us stronger long-term, but very vulnerable right now.

You have to layer your security properly. This is not optional.

Not even for business people who want to go fast.

Ignore this at your own peril.

LLM as Judge

Use one LLM to evaluate the output of another.

"Thinking" models drastically outperform standard models as judges.

Use it to evaluate: code quality, test coverage, PR descriptions, documentation completeness.

Your test suite already acts as a judge of LLM output. Think about what else could be.
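
A minimal sketch of the pattern, not a prescription for any provider: a second, preferably "thinking", model scores a diff against a fixed rubric. call_llm, the model name and the rubric are assumptions for illustration.

    import json

    def call_llm(model: str, prompt: str) -> str:
        # Hypothetical wrapper around whichever LLM provider you use.
        raise NotImplementedError

    JUDGE_PROMPT = """You are reviewing a pull request written by another model.
    Score each criterion from 1 (poor) to 5 (excellent) and answer with JSON only:
    {{"code_quality": n, "test_coverage": n, "pr_description": n, "docs": n, "verdict": "approve" or "revise"}}

    Diff:
    {diff}
    """

    def judge_pr(diff: str, model: str = "your-reasoning-model") -> dict:
        # Ask the judge model to score the diff against the rubric above.
        # Real code would strip markdown fences and handle malformed JSON before parsing.
        scores = json.loads(call_llm(model, JUDGE_PROMPT.format(diff=diff)))
        # Gate on the weakest dimension, not the average: one bad axis should block the merge.
        scores["ship"] = scores["verdict"] == "approve" and min(
            scores[k] for k in ("code_quality", "test_coverage", "pr_description", "docs")
        ) >= 3
        return scores

The test suite stays the first gate; the judge covers what the tests cannot see, like naming, structure and the quality of the PR description.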

Don't Ship Slop

"AI Slop" — low-quality, mass-produced, algorithmically generated code that looks polished but lacks substance.

  • Bloated functions
  • Silent logic errors
  • Hallucinated imports
  • No tests
  • Three patterns where one should exist

The Slop Jar Rule

Get caught committing untested, unverified AI-generated code three times and you're buying the team lunch.

Name it. Shame it. Don't ship it.

Workshop Time

How do you position your technology in (pace) Layers?

How are you going to organise Feedback & Who will lead those sessions?

Best of Breed or Single Ecosystem of Agents?

Tool Selection(s)

How will AI prototypes gain access to your APIs?

How will AI prototypes allow people to login?

How will AI prototypes gain access to your data?

1 or 2 test tenants? Artificial Datasets?

To PR approve or not to approve?

Protect your Master Branch

Assume people with broad permissions WILL try --dangerously-skip-permissions

AI Slop Bucket — which repositories?

Responsibility to share lessons learned? CLAUDE.md?

OpenClaw, anybody?

Fears, anybody?

Thank You!