<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.3.4">Jekyll</generator><link href="https://sparkboxx.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://sparkboxx.com/" rel="alternate" type="text/html" /><updated>2026-04-13T15:23:02+00:00</updated><id>https://sparkboxx.com/feed.xml</id><title type="html">Sparkboxx</title><subtitle>Thoughts on product &amp; software engineering leadership.
</subtitle><author><name>Wilco van Duinkerken</name></author><entry><title type="html">Some Thoughts on the Impact of AI on Software Engineering</title><link href="https://sparkboxx.com/notes/2026/04/08/impact-of-ai-on-software-engineering.html" rel="alternate" type="text/html" title="Some Thoughts on the Impact of AI on Software Engineering" /><published>2026-04-08T08:00:00+00:00</published><updated>2026-04-08T08:00:00+00:00</updated><id>https://sparkboxx.com/notes/2026/04/08/impact-of-ai-on-software-engineering</id><content type="html" xml:base="https://sparkboxx.com/notes/2026/04/08/impact-of-ai-on-software-engineering.html"><![CDATA[<p>This post accompanies the presentation: <a href="/holiday_coding/impact-of-ai-on-software-engineering.html">Some Thoughts on the Impact of AI on Software Engineering</a>.</p>

<h2 id="whats-in-the-presentation">What’s in the presentation?</h2>

<p>The aim of the presentation is to inspire and encourage engineers — and engineering leadership — to think about some of the long-term, big-picture effects of the AI coding craze. It ends with a set of very practical questions that teams need to start answering if they decide to go “all in” with one or two fast-moving AI teams.</p>

<p>The slides are organised into four sections:</p>

<p><strong>Key Takeaways</strong> — AI coding is a magnifier, not a magic wand. If your codebase, feedback culture, or company culture has problems, AI will amplify them. And when you go faster, you need to look further ahead.</p>

<p><strong>Organisational</strong> — Pace layering, the simplicity cycle, cognitive debt, and why the handovers we lost were actually hidden quality gates. Includes a technique for unwinding fidelity when reviewing AI-generated prototypes.</p>

<p><strong>Engineering Management</strong> — Where does the rigor go when AI writes code? Testing as survival, security as non-optional, LLM-as-judge, and why you shouldn’t ship slop.</p>

<p><strong>Workshop Time</strong> — Fifteen questions to discuss with your team, from pace layering your technology to protecting your master branch.</p>

<h2 id="image-credits">Image credits</h2>

<ul>
  <li>Pace Layering diagram by Stewart Brand, via <a href="https://blog.longnow.org/02015/01/27/stewart-brand-pace-layers-thinking-at-the-interval/">The Long Now Foundation</a></li>
  <li>The Simplicity Cycle diagram by Dan Ward, via <a href="http://thefourthrevolution.org/wordpress/archives/3702">The Fourth Revolution</a></li>
  <li>Meeting feedback diagrams from Keith, J. Elise. <em>Where the Action Is: The Meetings That Make or Break Your Organization.</em> Second Rise. Kindle Edition.</li>
</ul>]]></content><author><name>Wilco van Duinkerken</name></author><category term="notes" /><summary type="html"><![CDATA[A presentation covering key takeaways, organisational impact, and engineering management challenges when AI becomes part of how we build software.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://sparkboxx.com/assets/images/default-card.webp" /><media:content medium="image" url="https://sparkboxx.com/assets/images/default-card.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">The Missing Handovers</title><link href="https://sparkboxx.com/notes/2026/04/08/the-missing-handovers.html" rel="alternate" type="text/html" title="The Missing Handovers" /><published>2026-04-08T08:00:00+00:00</published><updated>2026-04-08T08:00:00+00:00</updated><id>https://sparkboxx.com/notes/2026/04/08/the-missing-handovers</id><content type="html" xml:base="https://sparkboxx.com/notes/2026/04/08/the-missing-handovers.html"><![CDATA[<p>AI didn’t just speed up software development. It collapsed the chain of human conversations that kept quality in check. Here’s what broke, why it hurts, and a technique that might fix it.</p>

<h2 id="the-chain-that-nobody-noticed">The Chain That Nobody Noticed</h2>

<p>Think about how a feature used to move through a team. A stakeholder had an idea. A PM sketched a wireframe. A designer turned it into a low-fi mockup, then a high-fi design. A slicer prepared the assets. A front-end engineer built it. A back-end engineer connected it. A data engineer wired the pipeline. QA tested the whole thing.</p>

<p>Every arrow between those roles was a handover. And every handover was a conversation.</p>

<p>The designer receiving a wireframe <em>had</em> to ask questions — “what happens when the user clicks here?” The front-end engineer looking at a design <em>had</em> to push back — “this animation will kill performance on mobile.” The back-end engineer reviewing the API contract <em>had</em> to negotiate — “we can’t expose that data without pagination.”</p>

<p>Nobody called these conversations “quality gates.” They didn’t feel like process. They felt like work. But they were doing something crucial: <code class="language-plaintext highlighter-rouge">each handover forced a translation between perspectives</code>. And in that translation, problems surfaced early, cheaply, and naturally.</p>

<h2 id="ai-folded-the-chain">AI Folded the Chain</h2>

<p>Now someone with an idea and access to AI goes from concept to clickable prototype in hours. Not days. Not weeks. Hours.</p>

<p>This is genuinely powerful. The person building has a tight feedback loop with the material itself. They make, they see, they feel something’s off, they adjust. Twenty micro-decisions happen on intuition. The result often feels more coherent and more “on taste” than anything a committee would have produced. One person holding the full context can do things a fragmented chain never could.</p>

<p>But here’s what got lost: <code class="language-plaintext highlighter-rouge">five or six conversations that used to happen automatically now happen zero times</code>.</p>

<p>The designer never questioned the flow. The engineer never flagged the impossible interaction. The data engineer never pointed out the missing edge case. Nobody challenged the assumptions, because there was nobody to hand over to.</p>

<p>Those conversations weren’t overhead. They were the immune system.</p>

<h2 id="feedback-becomes-soup">Feedback Becomes Soup</h2>

<p>So the prototype gets built. It looks great. It works. Time to get feedback.</p>

<p>And then you put it in front of the team.</p>

<p>What happens next is predictable and painful. Everyone reacts at once, but on completely different layers. Someone says “that button should be blue.” Someone else says “why does this step exist at all?” A third person asks “will this scale?” A fourth notices the error state is missing. The copywriter flags three labels that don’t match the brand voice.</p>

<p>All of this feedback is valid. None of it is on the same level. And nobody can stay on one level because <code class="language-plaintext highlighter-rouge">the prototype makes everything visible at once</code>.</p>

<p>In the old model, you couldn’t give color feedback on a wireframe — there were no colors. You couldn’t question the API design during a visual review — the API didn’t exist yet. The constraints of each phase forced focus. That focus wasn’t a limitation. It was a feature.</p>

<p>Now there are no phases. There’s one artifact that contains all layers simultaneously. The meeting devolves into a soup of comments jumping between flow, UI, design, UX, copy, technical feasibility, happy paths, unhappy paths, and fundamental assumptions. Often in the same sentence.</p>

<p>That’s not a review. That’s noise.</p>

<h2 id="the-hidden-damage">The Hidden Damage</h2>

<p>There’s something more insidious going on beneath the surface.</p>

<p>The person who built the prototype has already made dozens of decisions. They’ve already iterated. In their head, there’s a coherent narrative about <em>why</em> it looks and works the way it does. But that narrative is invisible to everyone else in the room.</p>

<p>So when people start giving feedback on six layers at once, the maker gets defensive. Not because the feedback is wrong, but because it feels like people are tearing apart something whose internal logic they can’t see. In the old model, you never had to defend your entire chain of reasoning — each step was already validated before the next one started.</p>

<p>Do this a few times and <code class="language-plaintext highlighter-rouge">employee happiness starts to drop</code>. People don’t enjoy these meetings. The maker feels attacked. The reviewers feel unheard. Everyone leaves frustrated. The speed gains from AI start feeling like they come at a human cost.</p>

<h2 id="pace-layers-not-everything-moves-at-the-same-speed">Pace Layers: Not Everything Moves at the Same Speed</h2>

<p>One thing that helps is thinking in <a href="https://jods.mitpress.mit.edu/pub/issue3-brand/release/2">pace layers</a> — Stewart Brand’s model where different layers of a system move at different speeds.</p>

<p>A UI screen can be thrown away and rebuilt tomorrow. A data model lives for years. An API contract sits somewhere in between. These layers deserve different speeds of decision-making and different levels of rigor.</p>

<p>We’ve started applying this deliberately. Go fast on the outer layers — visual design, copy, interaction details. These are cheap to change. Go slow and deliberate on the foundational layers — data architecture, API contracts, core business logic. Have those deeper conversations early, with the right people, before anyone opens a code editor.</p>

<p>This helps. But it doesn’t solve the feedback soup problem.</p>

<h2 id="unwind-the-fidelity">Unwind the Fidelity</h2>

<p>Here’s a technique we’re experimenting with that feels promising.</p>

<p>You have a high-fidelity prototype. It’s done. It works. But instead of showing it as-is and inviting chaos, <code class="language-plaintext highlighter-rouge">you ask AI to strip it back down</code>.</p>

<p>Take, for example, a browser-based prototype: have the AI rebuild it as a wireframe. Grey boxes. No colors. No copy. Just structure and flow.</p>

<p>Then in one meeting, you walk through the prototype in rising fidelity. Five steps, ten minutes each:</p>

<ol>
  <li><strong>Wireframe.</strong> The room can only talk about flow and structure. There’s nothing else to react to.</li>
  <li><strong>Wireframe with real copy.</strong> Now the conversation shifts to language, labels, and whether the steps make sense to a user.</li>
  <li><strong>Greyscale with layout.</strong> UX and information hierarchy. Does the eye go where it should?</li>
  <li><strong>Full visual design.</strong> Colors, typography, spacing. The design conversation.</li>
  <li><strong>Interactive prototype.</strong> Edge cases, error states, technical feasibility. The engineering conversation.</li>
</ol>

<p>You’re not building in layers. The thing is already built. You’re <em>revealing</em> it in layers. And each layer constrains what the room can talk about.</p>

<p>This is the trick: <code class="language-plaintext highlighter-rouge">you use AI to undo what AI did, so that humans can process it the way humans need to</code>.</p>

<p>The prototype exists in full fidelity for speed. But the review happens in progressive fidelity for focus. You get the benefit of AI’s compression without losing the benefit of structured human feedback.</p>

<h2 id="why-this-matters-beyond-process">Why This Matters Beyond Process</h2>

<p>There’s a deeper point here.</p>

<p>AI has accelerated the <em>making</em> process dramatically. But it hasn’t accelerated the human capacity to evaluate, discuss, and align. Our brains still need to separate concerns. We still need to focus on one layer at a time. We still need the psychological safety of critiquing a wireframe instead of something that looks “finished.”</p>

<p>The old workflow wasn’t just a production process. It was <code class="language-plaintext highlighter-rouge">an externalized thinking process</code>. Each phase gave people permission to think about one thing at a time. AI removed those phases because they’re no longer technically necessary. But they were always cognitively necessary.</p>

<p>The teams that will thrive aren’t the ones that go fastest. They’re the ones that figure out where to reintroduce structure — not because the tools need it, but because people do.</p>

<p>We’re all beginners at this. Hopefully fast learners.</p>

<hr />

<p><strong>What gates has your team lost to speed? And what would it look like to bring them back deliberately?</strong></p>]]></content><author><name>Wilco van Duinkerken</name></author><category term="notes" /><summary type="html"><![CDATA[AI didn't just speed up software development. It collapsed the chain of human conversations that kept quality in check. Here's what broke, why it hurts, and a technique that might fix it.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://sparkboxx.com/assets/images/default-card.webp" /><media:content medium="image" url="https://sparkboxx.com/assets/images/default-card.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">When AI Writes the Code — Holiday Coding, March 2026</title><link href="https://sparkboxx.com/notes/2026/03/24/when-ai-writes-the-code.html" rel="alternate" type="text/html" title="When AI Writes the Code — Holiday Coding, March 2026" /><published>2026-03-24T08:00:00+00:00</published><updated>2026-03-24T08:00:00+00:00</updated><id>https://sparkboxx.com/notes/2026/03/24/when-ai-writes-the-code</id><content type="html" xml:base="https://sparkboxx.com/notes/2026/03/24/when-ai-writes-the-code.html"><![CDATA[<p>This post accompanies the <a href="/holiday_coding/when-ai-writes-the-code.html">Holiday Coding presentation: When AI Writes the Code</a>. It covers the ideas, references, and practical exercises from the session — meant as both a standalone read and a companion to the slides.</p>

<h2 id="what-is-holiday-coding">What Is Holiday Coding?</h2>

<p>Play isn’t the opposite of serious work. Play is how you learn at the edge of your competence.</p>

<p>Holiday Coding is structured play on production systems. You pick a direction, not a destination. You work on real code, but the path is yours.</p>

<p>Why production code? Because it drags ideas out of the laboratory and into the real world. When you apply a new concept to a codebase that matters, you find out in minutes whether it actually improves things — not in theory, but in practice.</p>

<p>It’s not a course — there’s no curriculum, no certification. It’s not a sprint — there’s no backlog, no velocity. It’s not a hack day — the goal isn’t to ship. The goal is to understand. The only deliverable is what you learned.</p>

<h2 id="what-are-holiday-coding-sessions">What Are Holiday Coding Sessions?</h2>

<p>A Holiday Coding session is a 2+ hour block built around a single topic. Someone prepares a set of inspiration and hooks, then presents them in a fast-paced 20–40 minute slot. It’s not a polished keynote — it’s a presentation put together by someone with passion for the subject. The energy comes from enthusiasm, not production value.</p>

<p>After the presentation, participants use the remaining time for Holiday Coding: hands-on, practical exploration of the topic on real code.</p>

<p>The presentation is designed to work as a reference. It deliberately contains far more information than you’d normally put in a slide deck. A typical tech talk or keynote is optimized for single-time delivery — you polish every slide because the audience sees it once. Holiday Coding presentations are the opposite. They’re dense with links, ideas, and starting points so participants can come back to them later when they want to dig deeper into something that caught their attention.</p>

<p>This makes them fast to create, too. You don’t spend days on transitions and visual polish. You spend your time collecting the best ideas, the sharpest provocations, and the most useful references. It’s up to the presenter to pick which inspirations to focus on in their 30-minute slot — the deck is always bigger than the talk.</p>

<h2 id="inspiration-and-hooks">Inspiration and Hooks</h2>

<p>These are lessons learned, not laws. Ideas to spark your interest.</p>

<h3 id="ai-coding-is-a-magnifier">AI Coding Is a Magnifier</h3>

<p>Not a magic wand. A magnifier. If your codebase is tight, the AI produces more tight software. If your codebase is a mess, the AI accelerates you into a bigger mess.</p>

<p><a href="/notes/2026/02/14/ai-coding-is-a-magnifier.html">Read the post</a></p>

<h3 id="its-never-been-easier-to-clean-up">It’s Never Been Easier to Clean Up</h3>

<p>Agentic coding isn’t just for building new features. It’s remarkably good at cleaning house. Lots of people don’t use it that way. They should.</p>

<h3 id="prune-dont-just-refactor">Prune, Don’t Just Refactor</h3>

<p>Removing things. File by file, folder by folder. Without it, you get high-speed rot. What used to take months to become unwieldy now takes weeks.</p>

<h3 id="testing-goes-from-checkbox-to-survival">Testing Goes from Checkbox to Survival</h3>

<p>You need a test suite that operates on many different levels. Unit tests, integration tests, smoke tests, end-to-end tests. The whole stack. Your rate of change just went through the roof. Crank testing up to twelve.</p>

<h3 id="dependencies-are-a-multiplier-of-noise">Dependencies Are a Multiplier of Noise</h3>

<p>Two libraries that do similar things? The confusion bleeds into generated code. Pick the one dependency. Let the AI migrate everything. What used to be a tedious multi-sprint cleanup is now an afternoon’s work.</p>

<h3 id="where-does-the-rigor-go">Where Does the Rigor Go?</h3>

<p>Engineering quality doesn’t disappear when AI writes code. It migrates — to specs, tests, constraints, and risk management. The work isn’t gone. It moved upstream.</p>

<p><a href="https://www.thoughtworks.com/content/dam/thoughtworks/documents/report/tw_future%20_of_software_development_retreat_%20key_takeaways.pdf">ThoughtWorks Retreat Report (PDF)</a></p>

<h3 id="unbundling-code-review">Unbundling Code Review</h3>

<p>Code review is being unbundled. It was always four functions in a trench coat: mentorship, consistency, correctness, and trust. Each one now needs a new home. Agents handle correctness. Hooks enforce consistency. Mentorship and trust stay human.</p>

<p><a href="https://www.thoughtworks.com/content/dam/thoughtworks/documents/report/tw_future%20_of_software_development_retreat_%20key_takeaways.pdf">ThoughtWorks Retreat Report (PDF)</a></p>

<h3 id="cognitive-debt">Cognitive Debt</h3>

<p>Technical debt is becoming cognitive debt. The system grows more complex than anyone can hold in their head. And it’s not just you — your users and customers carry that complexity too.</p>

<h3 id="specs-vs-constraints">Specs vs. Constraints</h3>

<p>Specifications describe what should change. Constraints define what must not be touched. Constraints limit blast radius and let agents work safely across domain boundaries.</p>

<h3 id="when-you-go-faster-look-further-ahead">When You Go Faster, Look Further Ahead</h3>

<p>People treat AI as a speed boost for their existing workflow instead of rethinking what their workflow should be. Speed is the easy part. Staying in control at that speed is the hard part.</p>

<h2 id="the-industry-is-noticing">The Industry Is Noticing</h2>

<h3 id="thoughtworks-retreat">ThoughtWorks Retreat</h3>

<p>Deer Valley, Utah — February 2026. ThoughtWorks hosted the Future of Software Engineering Retreat on the 25th anniversary of the Agile Manifesto. The central question: what happens when AI takes over code production?</p>

<table>
  <tbody>
    <tr>
      <td><a href="https://www.thoughtworks.com/content/dam/thoughtworks/documents/report/tw_future%20_of_software_development_retreat_%20key_takeaways.pdf">Read the report (PDF)</a></td>
      <td><a href="https://martinfowler.com/fragments/2026-02-18.html">Martin Fowler’s Reflections</a></td>
    </tr>
  </tbody>
</table>

<h3 id="supervisory-engineering">Supervisory Engineering</h3>

<p>A new middle loop emerges. Not writing code. Not release management. Something in between. Directing agents. Evaluating output. Calibrating trust. Encoding standards and defining constraints within which agents can safely operate.</p>

<h3 id="know-your-customers--intimately">Know Your Customers — Intimately</h3>

<p>If every cousin in this world can produce “average software,” it’s your job to deliver great software. You can only do that if you deeply understand what problems you are solving for your customers.</p>

<h3 id="true-agile-practices-are-back">True Agile Practices Are Back</h3>

<p>Pair programming, ensemble development, continuous integration. These create the tight feedback loops that agent-assisted development requires. Some teams compressed sprint cadences to one week.</p>

<h2 id="tips-for-code-quality-in-the-age-of-agents">Tips for Code Quality in the Age of Agents</h2>

<h3 id="code-proven-to-work">Code Proven to Work</h3>

<p>Simon Willison: “Your job is to deliver code you have proven to work.” Untested AI-generated PRs are a dereliction of duty. The job shifts from <em>writing</em> code to <em>proving</em> it works.</p>

<table>
  <tbody>
    <tr>
      <td><a href="https://simonwillison.net/2025/Dec/18/code-proven-to-work/">Read the post</a></td>
      <td><a href="https://simonwillison.net/">Simon Willison’s Blog</a> — follow this man.</td>
    </tr>
  </tbody>
</table>

<h3 id="dont-ship-slop">Don’t Ship Slop</h3>

<p>“AI Slop” — low-quality, mass-produced, algorithmically generated code that <em>looks</em> polished but lacks substance. Macquarie Dictionary’s 2025 Word of the Year. Bloated functions. Silent logic errors. Hallucinated imports. No tests. Three patterns where one should exist.</p>

<p><strong>The Slop Jar Rule:</strong> get caught committing untested, unverified AI-generated code three times and you’re buying the team lunch. Name it. Shame it. Don’t ship it.</p>

<h2 id="claude-code">Claude Code</h2>

<p>Let’s get practical.</p>

<h3 id="mcp--the-universal-tool-belt">MCP — The Universal Tool Belt</h3>

<p>Model Context Protocol: an open standard for connecting AI agents to your tools. 10,000+ active servers. 97 million SDK downloads/month. GitHub, Slack, Postgres, Google Drive, Jira, your internal APIs — all accessible through one protocol.</p>

<p>Claude Code + MCP = an agent that can read your tickets, query your database, check your monitoring, and commit the fix. Write your own MCP server in dozens of lines of TypeScript or Python.</p>

<table>
  <tbody>
    <tr>
      <td><a href="https://code.claude.com/docs/en/mcp">Official docs</a></td>
      <td><a href="https://modelcontextprotocol.io/">MCP Protocol</a></td>
    </tr>
  </tbody>
</table>

<h3 id="claudemd--your-projects-personality-file">CLAUDE.md — Your Project’s Personality File</h3>

<p>A persistent configuration file that Claude reads before every conversation. Think <code class="language-plaintext highlighter-rouge">.editorconfig</code> for AI — it tells Claude who you are, how you work, and what matters.</p>

<p>Three-layer hierarchy: <code class="language-plaintext highlighter-rouge">~/.claude/CLAUDE.md</code> for personal preferences (your style, your tools), <code class="language-plaintext highlighter-rouge">./CLAUDE.md</code> for project conventions (check this into git!), and <code class="language-plaintext highlighter-rouge">./src/auth/CLAUDE.md</code> for subdirectory overrides (domain-specific rules). Most-specific wins. Loaded automatically. Every line costs context window budget.</p>

<p>Start with ~50 lines. For each line ask: “Would removing this cause Claude to make a mistake?” If not, cut it.</p>
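
<p>As a hypothetical illustration — the stack, rules, and vocabulary below are invented for this example, not taken from any real project — a minimal project-level <code class="language-plaintext highlighter-rouge">CLAUDE.md</code> might look like this:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Project conventions

## Stack
- Python 3.12, FastAPI, Postgres. Tests run with `pytest`.

## Rules
- Run `pytest` before declaring any task done.
- Never edit files under `migrations/` by hand.
- New endpoints need an integration test and a CHANGELOG.md entry.

## Vocabulary
- "tenant" means a paying organisation, never an individual user.
</code></pre></div></div>

<p>Every line earns its place: each one would plausibly prevent a concrete mistake, which is exactly the test proposed above.</p>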

<table>
  <tbody>
    <tr>
      <td><a href="https://claude.com/blog/using-claude-md-files">Anthropic Blog: Using CLAUDE.md Files</a></td>
      <td><a href="https://code.claude.com/docs/en/best-practices">Official docs</a></td>
      <td><a href="https://www.humanlayer.dev/blog/writing-a-good-claude-md">Writing a Good CLAUDE.md</a></td>
    </tr>
  </tbody>
</table>

<h3 id="claudemd-forces-clarity">CLAUDE.md Forces Clarity</h3>

<p>It forces teams to explicitly codify what everyone “just knows.” This might be the most valuable artifact a team creates. More useful than most documentation. You can give your agent a persona. A tone. Constraints. Domain vocabulary. The context IS the prompt.</p>

<h3 id="skills--executable-standards">Skills — Executable Standards</h3>

<p>A SKILL.md file with instructions Claude follows when invoked. Not documentation nobody reads. Executable standards.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>---
name: "tdd-writer"
description: "Write failing tests first, no implementation"
---
## Instructions here
</code></pre></div></div>

<p>Skills can spawn subagents, inject live data, restrict tools. From static instructions to programmable agents.</p>

<table>
  <tbody>
    <tr>
      <td><a href="https://code.claude.com/docs/en/skills">Official docs</a></td>
      <td><a href="https://github.com/anthropics/skills">Anthropic Skills Repository</a></td>
    </tr>
  </tbody>
</table>

<h3 id="hooks--deterministic-guardrails">Hooks — Deterministic Guardrails</h3>

<p>Shell commands that execute at specific points in Claude’s workflow. Not suggestions Claude might forget. Rules that always execute: <code class="language-plaintext highlighter-rouge">PreToolUse</code> to approve or block before execution, <code class="language-plaintext highlighter-rouge">PostToolUse</code> to validate after completion, <code class="language-plaintext highlighter-rouge">SessionStart</code> to inject context before Claude sees anything.</p>

<p>Auto-format after every edit. Run tests before commits. Block dangerous patterns.</p>
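
<p>As a rough sketch (consult the docs for the exact schema; the matcher and formatter command here are assumptions for illustration), a <code class="language-plaintext highlighter-rouge">PostToolUse</code> hook that auto-formats after every file edit can be configured in <code class="language-plaintext highlighter-rouge">.claude/settings.json</code> along these lines:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ]
  }
}
</code></pre></div></div>

<p>Because this runs as a shell command rather than a prompt, the formatting step cannot be skipped or forgotten; that is what makes it deterministic.</p>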

<p><a href="https://code.claude.com/docs/en/hooks-guide">Official docs</a></p>

<h3 id="modes--calibrated-autonomy">Modes — Calibrated Autonomy</h3>

<p>Cycle with <code class="language-plaintext highlighter-rouge">Shift+Tab</code>: Normal (asks permission for everything), Auto-Accept Edits (file edits proceed, risky ops still escalate), and Plan Mode (read-only analysis, no modifications). Additional modes include Bypass (no permission checks, isolated environments only) and Auto Mode (research preview — Claude decides what needs approval based on risk).</p>

<p><a href="https://code.claude.com/docs/en/permissions">Official docs</a></p>

<h3 id="claude-code-for-testing">Claude Code for Testing</h3>

<p>Point Claude at any module. It reads the code, understands the intent, and generates tests. Unit tests, edge cases, error paths — the stuff nobody ever gets around to writing. There is no excuse left for low test coverage. The tedious part is gone.</p>

<p>Got legacy code with zero tests? Claude can bootstrap a full test suite in minutes, not sprints.</p>

<p><a href="https://thenewstack.io/claude-code-and-the-art-of-test-driven-development/">Claude Code and the Art of TDD</a></p>

<h3 id="claude-code-for-documentation">Claude Code for Documentation</h3>

<p>Automated documentation pipelines that keep docs in sync with code. README generation. Changelog automation. API docs that reflect actual endpoints. Mermaid diagrams generated from your actual code structure — architecture docs that stay current.</p>

<p>If documentation requires AI to stay updated, maybe the real problem is the documentation is too coupled to implementation details.</p>

<p><a href="https://medium.com/@dan.avila7/automated-documentation-with-claude-code-building-self-updating-docs-using-docusaurus-agent-2c85d3ec0e19">Automated Docs with Claude Code</a></p>

<h3 id="claude-code-for-security-review">Claude Code for Security Review</h3>

<p>Claude Code Security found 500+ vulnerabilities in production open-source code. Bugs missed for decades by traditional tooling and expert code review. It reasons about data flow and attack chains. Not pattern matching. Multi-stage verification: re-examines its own results, attempts to disprove itself.</p>

<p><a href="https://www.anthropic.com/news/claude-code-security">Claude Code Security</a></p>

<h2 id="context-engineering">Context Engineering</h2>

<p>The discipline has shifted from prompt engineering to context engineering.</p>

<h3 id="context--prompts">Context &gt; Prompts</h3>

<p>It’s not about crafting the perfect prompt. It’s about giving the agent the right context: CLAUDE.md, skills, hooks, file structure, test suites, constraints. The context IS the prompt.</p>

<h3 id="steal-like-an-artist-system-prompts">Steal Like an Artist: System Prompts</h3>

<p>Want to learn how the best AI products work? Read their system prompts. Leaked system prompts for Claude, ChatGPT, Gemini, Cursor, Devin, Copilot — they’re all online. They’re masterclasses in context engineering: how to give an AI a persona, how to set constraints, how to structure instructions. Great inspiration for your own CLAUDE.md and skills.</p>

<table>
  <tbody>
    <tr>
      <td><a href="https://github.com/elder-plinius/CL4R1T4S">System Prompts Collection</a></td>
      <td><a href="https://github.com/EliFuzz/awesome-system-prompts">Awesome System Prompts</a></td>
      <td><a href="https://medium.com/@alexefimenko/i-read-the-system-prompts-leaks-for-claude-gemini-and-chatgpt-here-is-what-i-found-340131ab7bb0">Analysis of the Leaks</a></td>
    </tr>
  </tbody>
</table>

<h3 id="llm-as-judge">LLM as Judge</h3>

<p>Use one LLM to evaluate the output of another. “Thinking” models drastically outperform standard models as judges. Use it to evaluate: code quality, test coverage, PR descriptions, documentation completeness. Your test suite is already an LLM judge. Think about what else could be.</p>
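
<p>A minimal sketch of the plumbing — the rubric, the verdict format, and the <code class="language-plaintext highlighter-rouge">call_model</code> stand-in are all invented for illustration: wrap the artifact in a rubric prompt, send it to a “thinking” model, and parse a machine-checkable verdict out of the response.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Sketch: one model judges another model's output against an explicit
# rubric. `call_model` is a stand-in for whichever client you use
# (Anthropic, OpenAI, a local model); everything else is plain Python.

JUDGE_TEMPLATE = """You are a strict code reviewer.
Score the change below on correctness, test coverage, and clarity,
then end with exactly one line: "VERDICT: PASS" or "VERDICT: FAIL".

Change description:
{description}

Diff:
{diff}
"""

def build_judge_prompt(description, diff):
    """Fill the rubric template with the artifact under review."""
    return JUDGE_TEMPLATE.format(description=description, diff=diff)

def parse_verdict(response):
    """Return True only if the judge's final verdict line says PASS."""
    for line in reversed(response.splitlines()):
        stripped = line.strip()
        if stripped.startswith("VERDICT:"):
            return stripped.endswith("PASS")
    raise ValueError("judge response contained no verdict line")

# Usage with a real client would look like:
#   verdict = parse_verdict(call_model(build_judge_prompt(desc, diff)))
</code></pre></div></div>

<p>The point of the structured verdict line is that it turns a fuzzy judgement into something a CI pipeline can gate on.</p>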

<h2 id="the-big-shift">The Big Shift</h2>

<table>
  <thead>
    <tr>
      <th>Before</th>
      <th>After</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Write code</td>
      <td>Prove code works</td>
    </tr>
    <tr>
      <td>Prompt engineering</td>
      <td>Context engineering</td>
    </tr>
    <tr>
      <td>Code review</td>
      <td>Supervisory engineering</td>
    </tr>
    <tr>
      <td>Style guides</td>
      <td>Executable skills</td>
    </tr>
    <tr>
      <td>Best practices docs</td>
      <td>CLAUDE.md + Hooks</td>
    </tr>
    <tr>
      <td>Optional testing</td>
      <td>Testing as survival</td>
    </tr>
    <tr>
      <td>Refactoring</td>
      <td>Pruning</td>
    </tr>
  </tbody>
</table>

<h2 id="holiday-wanderings">Holiday Wanderings</h2>

<p>Pick one. Go deep. Apply it on production code.</p>

<p><strong>For Beginners:</strong> Install Claude Code and run it on your project. Write a CLAUDE.md for your repository — what does the team “just know”? Ask Claude to explain your own code back to you. Generate tests for a module that has none.</p>

<p><strong>For the Curious:</strong> Try Plan Mode on a refactoring you’ve been putting off. Set up a hook that auto-formats or runs tests after edits. Write a skill that enforces your team’s coding conventions. Use Claude Code to do a security review of a service. Use Plan Mode to explore what different testing strategies could look like for your full stack.</p>

<p><strong>For the Adventurous:</strong> Do a full TDD loop — write failing tests, let Claude implement, iterate. Prune: pick a folder, delete what’s dead, let Claude help you find it. Don’t code! Just tell it what to do. File by file. Folder by folder. Standardize a dependency: pick the one library, migrate everything in an afternoon. Agent vs. Agent: have one Claude instance write code, then a second instance review it — LLM-as-judge in practice. Connect an MCP server to a tool you use daily. Make Claude talk to your infrastructure.</p>

<p><strong>The Recipe:</strong> Wander and be inspired. Pick a concept. Apply it in full to production code. Power through reality!</p>

<h2 id="references">References</h2>

<h3 id="inspiration-posts">Inspiration Posts</h3>

<ul>
  <li><a href="https://sparkboxx.com/notes/2026/02/14/ai-coding-is-a-magnifier.html">AI Coding Is a Magnifier — Sparkboxx</a></li>
  <li><a href="https://www.thoughtworks.com/content/dam/thoughtworks/documents/report/tw_future%20_of_software_development_retreat_%20key_takeaways.pdf">ThoughtWorks Future of Software Engineering Retreat (PDF)</a></li>
  <li><a href="https://martinfowler.com/fragments/2026-02-18.html">Martin Fowler — Fragments, Feb 18</a></li>
  <li><a href="https://simonwillison.net/2025/Dec/18/code-proven-to-work/">Simon Willison — Code Proven to Work</a></li>
  <li><a href="https://simonwillison.net/">Simon Willison’s Blog</a> — essential reading on AI-assisted programming</li>
</ul>

<h3 id="claude-code-1">Claude Code</h3>

<ul>
  <li><a href="https://code.claude.com/docs/en/best-practices">Claude Code Official Docs</a></li>
  <li><a href="https://claude.com/blog/using-claude-md-files">Using CLAUDE.md Files — Anthropic Blog</a></li>
  <li><a href="https://www.humanlayer.dev/blog/writing-a-good-claude-md">Writing a Good CLAUDE.md — HumanLayer</a></li>
  <li><a href="https://code.claude.com/docs/en/skills">Claude Code Skills</a></li>
  <li><a href="https://code.claude.com/docs/en/hooks-guide">Claude Code Hooks</a></li>
  <li><a href="https://code.claude.com/docs/en/permissions">Claude Code Permissions &amp; Modes</a></li>
  <li><a href="https://code.claude.com/docs/en/mcp">Claude Code MCP Integration</a></li>
  <li><a href="https://www.anthropic.com/news/claude-code-security">Claude Code Security</a></li>
  <li><a href="https://thenewstack.io/claude-code-and-the-art-of-test-driven-development/">Claude Code and the Art of TDD</a></li>
</ul>

<h3 id="system-prompts--context-engineering">System Prompts &amp; Context Engineering</h3>

<ul>
  <li><a href="https://github.com/elder-plinius/CL4R1T4S">Leaked System Prompts Collection — CL4R1T4S</a></li>
  <li><a href="https://github.com/EliFuzz/awesome-system-prompts">Awesome System Prompts — AI Coding Agents</a></li>
  <li><a href="https://medium.com/@alexefimenko/i-read-the-system-prompts-leaks-for-claude-gemini-and-chatgpt-here-is-what-i-found-340131ab7bb0">Analysis: What the Leaks Reveal</a></li>
</ul>

<h3 id="mcp">MCP</h3>

<ul>
  <li><a href="https://modelcontextprotocol.io/">Model Context Protocol</a></li>
  <li><a href="https://modelcontextprotocol.io/docs/develop/connect-local-servers">MCP: Connect to Local Servers</a></li>
  <li><a href="https://medium.com/@dan.avila7/automated-documentation-with-claude-code-building-self-updating-docs-using-docusaurus-agent-2c85d3ec0e19">Automated Docs with Claude Code</a></li>
</ul>]]></content><author><name>Wilco van Duinkerken</name></author><category term="notes" /><summary type="html"><![CDATA[Companion post for the Holiday Coding session on what changes when AI writes the code. Covers inspiration, practical Claude Code features, context engineering, and exercises on production code.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://sparkboxx.com/assets/images/default-card.webp" /><media:content medium="image" url="https://sparkboxx.com/assets/images/default-card.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">AI Coding Is a Magnifier</title><link href="https://sparkboxx.com/notes/2026/02/14/ai-coding-is-a-magnifier.html" rel="alternate" type="text/html" title="AI Coding Is a Magnifier" /><published>2026-02-14T08:00:00+00:00</published><updated>2026-02-14T08:00:00+00:00</updated><id>https://sparkboxx.com/notes/2026/02/14/ai-coding-is-a-magnifier</id><content type="html" xml:base="https://sparkboxx.com/notes/2026/02/14/ai-coding-is-a-magnifier.html"><![CDATA[<p>Someone in a CTO group asked two questions that keep coming up: <em>What’s one rule that genuinely helped your team adopt AI coding faster?</em> And: <em>What’s the biggest risk you see right now?</em></p>

<p>The answer to the first question is easy. The second one is more nuanced.</p>

<h2 id="make-it-the-expectation-not-an-experiment">Make It the Expectation, Not an Experiment</h2>

<p>The single thing that moved the needle most was removing the ambiguity. I told my team: the expectation is that you code with an AI agent next to you. Not occasionally. Not when you feel like it. All the time.</p>

<p>Budgets are available. Pick your tools. Experiment with different agents. But come back at least once a week and share what you found. What worked. What didn’t. What surprised you. What frustrated you.</p>

<p>This isn’t “play with it when you have a quiet afternoon.” It’s “I expect you to start using this, and I want to hear about it.”</p>

<p>The difference matters. When AI coding is optional, people try it once, hit a rough edge, and go back to what they know. When it’s the expectation, they push through the rough edges and actually discover where it changes the game. The weekly sharing compounds fast. One person’s breakthrough becomes everyone’s shortcut.</p>

<p>Be very clear on your highs and lows. That’s how the team learns what works.</p>

<h2 id="the-biggest-risk-is-doing-the-same-thing-faster">The Biggest Risk Is Doing the Same Thing Faster</h2>

<p>I don’t think there’s one single biggest risk. But there’s a pattern I keep seeing that worries me: people think AI coding means doing the same thing but more efficiently.</p>

<p>It doesn’t. AI changes what matters.</p>

<h3 id="testing-goes-from-checkbox-to-survival">Testing Goes from Checkbox to Survival</h3>

<p>Everybody talks about “writing more unit tests now.” That misses the point entirely. What you actually need is a test suite that operates on many different levels. Unit tests, integration tests, smoke tests, end-to-end tests. The whole stack.</p>

<p>Why? Because when you’re generating code at speed, the only thing keeping you honest is a tight feedback loop. If your test suite only covers one layer, you’ll ship confidently and break things spectacularly. You need fast signals at every level telling you whether the thing actually works.</p>

<p>This isn’t about test coverage as a metric. It’s about cranking up testing to twelve because your rate of change just went through the roof.</p>
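<p>One way to make “testing at every level” concrete is to tag each test with the layer it belongs to, so the fast tier runs after every AI edit and the slower tiers run before commits and in CI. The sketch below is framework-free Python; all names are illustrative, and in a real repo the same idea is usually expressed with pytest markers and <code>pytest -m unit</code>:</p>

```python
# Sketch of layered testing: tag each test with its layer so the fast tier
# can run on every agent edit and slower tiers run less often.
# All function names here are illustrative.

def parse_amount(raw: str) -> int:
    """Toy function under test: parse a string like '12.50' into cents."""
    euros, _, cents = raw.partition(".")
    return int(euros) * 100 + int((cents or "0").ljust(2, "0")[:2])

def layer(name):
    """Tag a test function with its layer: 'unit', 'integration', 'e2e', ..."""
    def wrap(fn):
        fn.layer = name
        return fn
    return wrap

@layer("unit")           # milliseconds: run on every agent edit
def test_parse_amount():
    assert parse_amount("12.50") == 1250
    assert parse_amount("7") == 700

@layer("integration")    # seconds: run before each commit
def test_roundtrip_via_file():
    import tempfile
    with tempfile.NamedTemporaryFile("w+", suffix=".txt") as f:
        f.write("12.50")
        f.seek(0)
        assert parse_amount(f.read()) == 1250

def run_layer(wanted: str) -> int:
    """Run all tests tagged with `wanted`; return how many ran."""
    tests = [f for f in list(globals().values())
             if callable(f) and getattr(f, "layer", None) == wanted]
    for test in tests:
        test()
    return len(tests)
```

<p>The layering is the point, not the tooling: the unit tier is the tight loop that keeps generated code honest, while the slower tiers catch what it misses.</p>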

<h3 id="your-codebase-gets-amplified-not-improved">Your Codebase Gets Amplified, Not Improved</h3>

<p>Here’s the thing nobody tells you: AI is a magnifier of your codebase.</p>

<p>If your codebase is tight and controlled—clear conventions, well-structured modules, minimal dead code—the AI will produce more tight and controlled software. It picks up the patterns. It follows the conventions. It gives you more of what you already have.</p>

<p>If your codebase is a mess—dead code everywhere, loose conventions, three different ways to do the same thing—the AI will accelerate you into a bigger mess. It’ll happily use all three patterns. It’ll build on top of the dead code. It’ll match the chaos perfectly.</p>

<p>It’s never been easier to clean up. But you have to actually do it. Agentic coding isn’t just for building new features. It’s remarkably good at cleaning house. Lots of people don’t use it that way. They should.</p>

<h3 id="prune-dont-just-refactor">Prune, Don’t Just Refactor</h3>

<p>There’s an important distinction here. I’m not talking about refactoring—reorganizing code into better abstractions. I’m talking about pruning. Removing things. File by file, folder by folder. Deleting what’s not needed. Simplifying what remains.</p>

<p>This is essential now. Without it, you get high-speed rot. When a team generates code faster than ever, the accumulation of unnecessary stuff accelerates at the same rate. What used to take months to become unwieldy now takes weeks.</p>

<p>Keep it clean. Keep it understandable. And keep it like that. If you let things drift, it gets messy faster than ever before.</p>

<h3 id="dependencies-are-the-hidden-multiplier">Dependencies Are the Hidden Multiplier</h3>

<p>The same magnification applies to your dependencies.</p>

<p>In the past, you might have accumulated a few half-used libraries. An HTTP client you mostly use. A utility library that overlaps with another one. A date library from 2019 sitting next to a newer one because nobody got around to migrating.</p>

<p>That was always a smell. Now it’s a real problem. AI agents see all your dependencies. When there’s ambiguity—two libraries that do similar things—the confusion bleeds into generated code. You’ll get inconsistent implementations. Mixed patterns. The kind of subtle inconsistency that’s hard to spot in review and compounds over time.</p>

<p>The flip side: it’s never been easier to standardize. Pick the one dependency. Let the AI help you migrate everything to it. Then remove the old one. What used to be a tedious multi-sprint cleanup is now an afternoon’s work.</p>
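<p>A first step in that cleanup is simply measuring the ambiguity. The sketch below counts how often each competing library is imported across a codebase; “requests” and “httpx” are placeholder names, so substitute whatever overlapping pair you actually carry:</p>

```python
# Sketch: before standardizing on one dependency, inventory how often each
# competitor is actually imported. Library names are placeholders.
import re
from pathlib import Path

IMPORT_RE = re.compile(r"^\s*(?:from|import)\s+(\w+)", re.MULTILINE)

def count_imports(root: str, libs: list[str]) -> dict[str, int]:
    """Count top-level imports of each candidate library under `root`."""
    counts = {lib: 0 for lib in libs}
    for path in Path(root).rglob("*.py"):
        for name in IMPORT_RE.findall(path.read_text(errors="ignore")):
            if name in counts:
                counts[name] += 1
    return counts

# Example: count_imports("src", ["requests", "httpx"])
# Migrate the rarer one toward the winner, then delete it from the manifest.
```

<p>Once you know which library is the minority, that inventory doubles as the worklist for the afternoon migration.</p>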

<h2 id="the-pattern-underneath">The Pattern Underneath</h2>

<p>All of these risks share the same root cause. People are treating AI as a speed boost for their existing workflow instead of rethinking what their workflow should be.</p>

<p>The teams that are getting the most out of AI coding aren’t the ones writing features faster. They’re the ones who realized that the game changed: testing at every layer matters more. A clean codebase matters more. Consistent dependencies matter more. Pruning matters more.</p>

<p>Speed is the easy part. Staying in control at that speed is the hard part.</p>

<hr />

<p><strong>What in your codebase would you be afraid to magnify?</strong></p>]]></content><author><name>Wilco van Duinkerken</name></author><category term="notes" /><summary type="html"><![CDATA[Someone asked me what helped my team adopt AI coding faster and what the biggest risk is. The answer to the first question is easy. The second one is more nuanced.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://sparkboxx.com/assets/images/default-card.webp" /><media:content medium="image" url="https://sparkboxx.com/assets/images/default-card.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Implementing Rodeo Service - A Field Guide for Technical Leaders</title><link href="https://sparkboxx.com/leadership/2026/01/21/rodeo-field-guide.html" rel="alternate" type="text/html" title="Implementing Rodeo Service - A Field Guide for Technical Leaders" /><published>2026-01-21T08:06:42+00:00</published><updated>2026-01-21T08:06:42+00:00</updated><id>https://sparkboxx.com/leadership/2026/01/21/rodeo-field-guide</id><content type="html" xml:base="https://sparkboxx.com/leadership/2026/01/21/rodeo-field-guide.html"><![CDATA[<p>If you <a href="/leadership/2026/01/21/when-process-fails.html">read the previous piece</a> and thought “this might actually work for us,” this guide is your implementation playbook. Not theory—what actually worked when my team tried this ten years ago, and what I’ve seen work since.</p>

<p>Let’s talk about how to actually do this.</p>

<h2 id="before-you-start-is-rodeo-service-right-for-your-situation">Before You Start: Is Rodeo Service Right for Your Situation?</h2>

<p>First, the reality check. Rodeo Service is a specific tool for a specific problem.</p>

<p><strong>Rodeo Service works when:</strong></p>

<ul>
  <li>Your team is underwater with interruptions despite having “proper” ticketing in place</li>
  <li>Frustration between engineering and other teams is actively eroding trust</li>
  <li>You’re in a high-pressure period (pre/post major release, growth spike, etc.)</li>
  <li>Quick wins and visible responsiveness would buy time for deeper fixes</li>
  <li>You have at least 4-5 engineers (smaller teams need different approaches)</li>
</ul>

<p><strong>Rodeo Service doesn’t work when:</strong></p>

<ul>
  <li>You don’t have leadership buy-in to protect Rodeo engineers from other work</li>
  <li>Your culture is so toxic that human contact would just create more conflict</li>
  <li>You have product owners or team leads who are so protective that they’re afraid to let go and let people actually talk to each other.</li>
  <li>You have no way of “quickly” releasing or making changes to your software, for example because CI/CD is missing entirely or your code review process is so stiff and slow that minor changes can’t get through.</li>
</ul>

<p>Be honest about which situation you’re in.</p>

<h2 id="step-1-set-up-the-foundation">Step 1: Set Up the Foundation</h2>

<p>Before anyone puts on a cowboy hat, you need to establish the ground rules.</p>

<h3 id="define-the-scope">Define the Scope</h3>

<p>When we first did this, we kept it simple. Rodeo Service handles:</p>

<ul>
  <li>All incoming bug reports and service requests</li>
  <li>Triage and initial diagnosis</li>
  <li>Quick fixes where possible</li>
  <li>Communication about what’s happening and why</li>
  <li>Routing complex issues to the right people</li>
  <li>Escalating to the team lead / PO / senior engineers when the Rodeo engineers want help with triage</li>
</ul>

<p>What it doesn’t handle:</p>

<ul>
  <li>Feature development</li>
  <li>Scheduled project work</li>
  <li>Deep architectural decisions</li>
  <li>Anything requiring days of heads-down work</li>
</ul>

<p>The boundaries matter. Rodeo Service isn’t “do whatever anyone asks.” It’s structured availability.</p>

<h3 id="establish-the-core-rule-talk-to-humans">Establish the Core Rule: Talk to Humans</h3>

<p>This is the most important part of setup, and you need to be crystal clear about it from day one:</p>

<p><strong>If you want Rodeo Service’s help, you have a conversation. Five to ten minutes. Face-to-face or on camera. No exceptions.</strong></p>

<p>This is not the same as your normal ticketing system. This is not “log a ticket and we’ll call you.” This is: you pick up the phone, you jump on a call, you walk over to the person wearing the (proverbial) cowboy hat, and you talk to them.</p>

<p>If you’re not willing to invest ten minutes in a conversation about your issue, it’s not urgent enough to interrupt focused work. Route it through your normal product management processes instead.</p>

<p>Why this rule matters:</p>

<p><strong>It filters for real urgency.</strong> The person who fires off fifteen “P0” tickets in a day won’t schedule fifteen ten-minute calls. Email courage disappears. Real problems surface.</p>

<p><strong>It prevents weaponization.</strong> The ticket that says “This is COMPLETELY BROKEN and UNACCEPTABLE” becomes a much calmer “Hey, customers are reporting issues with the export feature” when you’re looking at someone.</p>

<p><strong>It forces context.</strong> “The thing is broken” becomes a real conversation where engineers can ask “Which thing? What’s happening? Who’s affected? What’s the business impact?”</p>

<p><strong>It protects Rodeo engineers.</strong> Without this rule, they drown in noise. With it, they have focused conversations about real problems.</p>

<h3 id="how-to-communicate-this-rule">How to Communicate This Rule</h3>

<p>Send a clear message to the entire organization:</p>

<p>“We’re launching Rodeo Service as a way to handle urgent issues. Here’s how it works: [Engineer Name] and [Engineer Name] are on duty this week. If you have something urgent, reach out to them directly via [slack channel/phone/desk]. Be ready for a quick 5-10 minute conversation about the issue—this helps us understand context and get you help faster. If something doesn’t need immediate attention, please continue using our normal product management channels.”</p>

<p>Don’t apologize for the conversation requirement. Don’t frame it as a barrier. Frame it as what it is: the fastest way to get help when something actually matters.</p>

<h3 id="establish-the-schedule">Establish the Schedule</h3>

<p>We ran one-week rotations. Some teams do two weeks. I don’t recommend shorter—there’s a learning curve each rotation.</p>

<p>For team size, we had about 15 engineers, so we did two people on Rodeo at a time. One person works for smaller teams. Two is better if you can afford it—prevents single points of failure and allows for pairing on complex stuff.</p>

<h3 id="create-the-communication-channel">Create the Communication Channel</h3>

<p>Set up one clear place where everyone knows to reach Rodeo Service. We used a dedicated Slack channel, but that channel’s purpose was to say “I need to talk” not to have the conversation. The conversation happens on a call or in person.</p>

<p>The pattern we used:</p>

<ul>
  <li>Someone posts in #rodeo-service: “Need help with customer export issue, can someone call me?”</li>
  <li>Rodeo engineer responds: “Jumping on a call with you now” or “Can you meet me at my desk?”</li>
  <li>The actual work happens in conversation</li>
</ul>

<p>Not “DM whoever’s on Rodeo” or “check the schedule.” Make it dead simple.</p>

<h3 id="get-leadership-alignment">Get Leadership Alignment</h3>

<p>This is non-negotiable. You need explicit buy-in from leadership that:</p>

<ul>
  <li>Rodeo engineers do not have any sprint commitments during their rotation</li>
  <li>Their primary responsibility that week is Rodeo, not shipping features</li>
  <li>The rest of the organization understands the “talk to us” requirement</li>
  <li>This is the official urgent channel, not a workaround</li>
</ul>

<p>Have this conversation with your CEO, CPO, Head of Sales—anyone whose team will interact with Rodeo Service. Make sure they understand and endorse the conversation requirement.</p>

<h2 id="step-2-choose-your-first-rodeo-engineers">Step 2: Choose Your First Rodeo Engineers</h2>

<p>Don’t assign this based on who’s “available.” Ask who wants to try it.</p>

<p><strong>What makes a good Rodeo engineer:</strong></p>

<ul>
  <li>Comfortable with human interaction (doesn’t have to love it, can’t dread it)</li>
  <li>Good at triaging and prioritizing quickly (almost everybody can when put on the spot!)</li>
  <li>Enough system knowledge to navigate different areas</li>
  <li>Can translate tech → business language (most can when in conversation!)</li>
  <li>Temperament to handle pressure without getting defensive</li>
  <li>Willing to have multiple 10-minute conversations throughout the day</li>
</ul>

<p><strong>How to identify them:</strong></p>

<p>Send out a message: “We’re trying something new called Rodeo Service. Here’s what it entails [including the conversation requirement]. Who’s interested in being part of the first rotation?”</p>

<p>Some of your quietest engineers will volunteer. Some of your most social will say “absolutely not.” Both responses are valuable data.</p>

<p>For the first rotation, I strongly recommend two people and at least one volunteer. Forced participation poisons the well.</p>

<h2 id="step-3-launch-week---set-expectations">Step 3: Launch Week - Set Expectations</h2>

<p>The first week sets the tone.</p>

<h3 id="to-the-organization">To the Organization</h3>

<p>Send a company-wide announcement explaining what Rodeo Service is, who’s on duty, how to reach them. Make it clear this is an experiment.</p>

<p>Be explicit about the conversation requirement. Not as a barrier, but as the mechanism:</p>

<p>“When you reach out to Rodeo Service, one of the engineers will get on a quick call with you (5-10 minutes) to understand what’s happening. This helps us triage effectively and get you the right help fast. If you’re not able to jump on a call, your issue can go through our normal product management channels.”</p>

<p>Make Rodeo engineers visible. Actual cowboy hats work remarkably well if you’re in-office. We’ve also had big pink flamingos marking the Rodeo desks. Make the people visible and easy to find. If you’re remote, Slack status + profile photos. The point is: no ambiguity about who to talk to.</p>

<h3 id="to-rodeo-engineers">To Rodeo Engineers</h3>

<p>Give them permission to organize however they want. Keep notes in whatever system works for them. Track issues however makes sense. The goal is human contact and context, not process compliance.</p>

<p>In high-compliance environments, yes, things eventually need to get logged properly. But that can happen after the conversation. The front end should be human. The organization should be organic.</p>

<p>Tell them explicitly:</p>

<ul>
  <li>You have permission to say “no” to non-Rodeo work</li>
  <li>You have permission to enforce the conversation rule—if someone won’t get on a call, route them to normal channels</li>
  <li>Quick fixes are good; rabbit holes are not</li>
  <li>Communicate liberally—over-communication is the point</li>
  <li>Talk to each other daily about what you’re seeing</li>
</ul>

<p><strong>On handling resistance to the conversation requirement:</strong></p>

<p>Some people will push back: “Can’t I just send you the details?” The answer is: “For urgent issues that need Rodeo Service, we need to talk. For things that can wait, please use the normal product process. Which one is this?”</p>

<p>This isn’t about being difficult. It’s about the person self-selecting the right channel based on actual urgency.</p>

<h3 id="to-the-rest-of-engineering">To the Rest of Engineering</h3>

<p>Make sure everyone else knows that Rodeo engineers are off-limits for sprint work. Protect them fiercely. If you are in firefighting mode all day, you can’t expect an engineer to also join a two-hour architecture meeting that sets the future of the platform. The brain simply won’t be in as good a shape to make long-term technical decisions when on Rodeo duty.</p>

<p>Also make sure the team understands why you’re requiring conversation: it’s not bureaucracy, it’s anti-bureaucracy. It’s removing all the machinery and getting back to humans helping humans.</p>

<h2 id="step-4-the-first-conversations---what-they-look-like">Step 4: The First Conversations - What They Look Like</h2>

<p>Here’s what actual Rodeo conversations look like in practice:</p>

<p><strong>Example 1: The Quick Fix</strong></p>

<p><em>[Slack rings]</em></p>

<p>Rodeo: “Hey, what’s going on?”</p>

<p>Stakeholder: “The customer dashboard is showing last week’s data instead of today’s.”</p>

<p>Rodeo: “Okay, is this affecting all customers or specific ones?”</p>

<p>Stakeholder: “All customers as far as I can tell. I’ve had three complaints this morning.”</p>

<p>Rodeo: “Got it. Let me check the data refresh job… ah, yeah, it failed overnight. Give me twenty minutes, I’ll fix it and confirm with you.”</p>

<p><em>[20 minutes later]</em></p>

<p>Rodeo: “Fixed. It was an API timeout issue. I’ve increased the timeout and the data is refreshing now. Let me know if you get any more reports. I do think we might have a fundamental issue that needs longer to solve; I’ll let the team know to look at it more deeply. For now, though, it’s fixed. If it happens again tomorrow, please contact me again. I did create a formal incident ticket, so you might get a follow-up on that later next week.”</p>

<p>Total time: 5 minutes of conversation, 20 minutes of work. No ticket. No back-and-forth. Just: problem explained, problem fixed, problem confirmed resolved.</p>

<p><strong>Example 2: The Misunderstood Urgency</strong></p>

<p><em>[Video call]</em></p>

<p>Rodeo: “What can I help with?”</p>

<p>Stakeholder: “The reporting feature doesn’t have the filters we talked about last month. We need them this week.”</p>

<p>Rodeo: “Okay, so this is a missing feature rather than something broken?”</p>

<p>Stakeholder: “Yeah, but it’s important.”</p>

<p>Rodeo: “I believe you. But this sounds like something that should go through product planning rather than Rodeo Service. Rodeo is for things that are broken or blocking people right now. Can you work with [Product Manager] to get this prioritized? I expect it to be several weeks’ worth of work, so don’t expect miracles immediately. Have you already considered using the [x,y,z] dashboard to see if that’s good enough for now?”</p>

<p>Stakeholder: “Oh, I thought this was the fast channel for everything.”</p>

<p>Rodeo: “It’s the fast channel for urgent problems. For new features or improvements, product planning is the right place. They can help you get this scheduled properly.”</p>

<p>Total time: 3 minutes. Appropriately routed. No hurt feelings because it’s a human conversation, not a form rejection.</p>

<p><strong>Example 3: The Context That Changes Everything</strong></p>

<p><em>[In-person conversation]</em></p>

<p>Rodeo: “What’s going on?”</p>

<p>Stakeholder: “The mobile app crashes when you try to upload photos.”</p>

<p>Rodeo: “Okay, how many users are affected?”</p>

<p>Stakeholder: “I don’t know exactly, but we’ve had reports from our biggest client. They’re doing a product launch tomorrow and they need this working.”</p>

<p>Rodeo: “Okay, that context helps. If they’re launching tomorrow, this is genuinely urgent. Let me look at it right now. Who can I reach out to at the customer to screen share with them? It’d be great if we can ‘see it failing.’ Can you set up a customer call for me this morning?”</p>

<p>Stakeholder: “Yeah, I’ll call the customer now. Thanks for jumping on this.”</p>

<p>Rodeo: “No problem. While waiting for a meeting link I’ll start debugging.”</p>

<p>Total time: unknown. But given that the stakeholder is willing to immediately set up a meeting with the customer, the urgency is clearly real.</p>

<h2 id="step-5-let-it-run---resist-the-urge-to-measure">Step 5: Let It Run - Resist the Urge to Measure</h2>

<p>Here’s where most implementations go wrong: someone immediately wants dashboards showing response times, resolution rates, number of conversations.</p>

<p>Don’t.</p>

<p>The whole point of Rodeo Service is to remove the machinery and let humans back in. The moment you start measuring conversation duration or tracking ticket velocity, you’re back to optimizing for the wrong thing.</p>

<p>What you should pay attention to:</p>

<ul>
  <li>Does the atmosphere in the office change?</li>
  <li>Are people less frustrated?</li>
  <li>Are conversations between teams less tense?</li>
  <li>Do Rodeo engineers report patterns they’re seeing?</li>
  <li>Is trust rebuilding?</li>
  <li>Are people respecting the conversation requirement or fighting it?</li>
</ul>

<p>These aren’t things you measure in Jira. They’re things you feel in the room and hear in hallway conversations.</p>

<p>When my team first did this, I didn’t track a single metric for the first month. I just watched. Listened. Checked in with Rodeo engineers about what they were experiencing. Asked people around the company how it felt.</p>

<p>The data was qualitative and obvious: the temperature dropped. People were calmer. Engineers had context they didn’t have before. Problems were getting fixed that we didn’t even know existed. Roadmaps were re-aligned, and the process became more human.</p>

<p>That’s the measurement that matters.</p>

<h2 id="step-6-the-daily-rhythm">Step 6: The Daily Rhythm</h2>

<p>Once Rodeo Service is running, a simple daily cadence helps:</p>

<p><strong>Brief morning sync (15 min):</strong>
Rodeo engineers talk to each other about what came in yesterday, what’s still open, what needs escalation. They also check: are people respecting the conversation requirement? Is anyone trying to circumvent it? Are there bigger issues at hand that might need engineering or product leadership to step in?</p>

<p><strong>End-of-day wrap (15 min):</strong>
Quick debrief about what got resolved, what’s rolling forward, any surprises. Then a short update to the team channel—not detailed metrics, just enough that people see activity.</p>

<p><strong>Weekly retrospective (1 hour):</strong>
At the end of the rotation, sit down with Rodeo engineers and ask:</p>

<ul>
  <li>What did we learn?</li>
  <li>What worked, and what didn’t?</li>
  <li>What patterns did you see?</li>
  <li>How did the conversation requirement work in practice?</li>
  <li>Did anyone push back hard on it?</li>
  <li>What should we fix?</li>
  <li>Who should go next?</li>
</ul>

<p>This is where the self-healing aspect kicks in. The engineers who lived in Rodeo Service for a week see things others don’t. Listen to what they tell you.</p>

<p>The retrospective will provide extremely valuable insights to leadership to deal with process, cultural and overall context issues.</p>

<h2 id="step-7-triage-through-conversation">Step 7: Triage Through Conversation</h2>

<p>Here’s how triage actually works when it’s conversation-based:</p>

<p>You don’t need P0/P1/P2 definitions. You need Rodeo engineers who can have a conversation and use judgment.</p>

<p>The key questions in every conversation:</p>

<ul>
  <li>“What’s actually happening?”</li>
  <li>“Who’s affected?”</li>
  <li>“What’s the business impact?”</li>
  <li>“Is there a workaround?”</li>
  <li>“What happens if we don’t fix this today?”</li>
</ul>

<p>These questions reveal real priority. Not formal severity levels, but actual understanding of impact.</p>

<p>Mind the goal of Rodeo Service: it’s about restoring trust. It’s a cultural tool, not an efficiency instrument! Even if, in hindsight, an engineer completely misjudges the priority, that’s perfectly fine as long as trust increases and the human touch is involved.</p>

<p>When someone insists something is urgent and Rodeo engineers assess it differently, they continue the conversation:</p>

<p>“I’m thinking this can wait until tomorrow because X. Am I missing context about why it needs to be today? We’re currently also working for [Person X] solving [Issue Y] and we’d like to finish that first.”</p>

<p>Usually that reveals either that the engineer is right, or that there’s business context they didn’t have. Either way, everyone’s smarter afterward.</p>

<p>And here’s the key: you can have this conversation in real time. You can’t do this in a ticket thread.</p>

<h2 id="step-8-handling-resistance-to-the-conversation-requirement">Step 8: Handling Resistance to the Conversation Requirement</h2>

<p>Some people will resist. Here’s how to handle it:</p>

<p><strong>“I don’t have time for a call, just read my message”</strong></p>

<p>Response: “For urgent issues, we need real-time conversation to understand context. For things that can be handled asynchronously, please use our normal product channels. Which is this?”</p>

<p><strong>“This is such a small thing, why do I need to talk?”</strong></p>

<p>Response: “If it’s small, a quick call will be faster than a message thread. If it’s not small, we need to understand it properly. Either way, let’s jump on a call.”</p>

<p><strong>“I already explained everything in my message”</strong></p>

<p>Response: “I appreciate the detail. A quick conversation will help me ask follow-up questions and make sure I understand the full context. Can we talk now?”</p>

<p><strong>“This feels like you’re making it harder to report issues”</strong></p>

<p>Response: “We’re actually making it easier for urgent issues—you get direct access to an engineer immediately. For non-urgent things, the normal process is still there. This just makes sure the urgent channel stays urgent.”</p>

<p>The key is consistency. If you let people circumvent the conversation requirement, it collapses. Everyone will go back to firing off messages and expecting immediate responses.</p>

<h2 id="step-9-what-to-track-if-you-must">Step 9: What to Track (If You Must)</h2>

<p>Look, some organizations demand something. Fine. But make it human-centered:</p>

<p><strong>Don’t track:</strong></p>

<ul>
  <li>Number of conversations</li>
  <li>Average conversation length</li>
  <li>Time-to-resolution</li>
  <li>Response SLAs</li>
</ul>

<p><strong>Do notice:</strong></p>

<ul>
  <li>Feedback from people who used Rodeo Service (just ask them how it felt)</li>
  <li>Feedback from Rodeo engineers in debriefs</li>
  <li>Observed change in team/company tension</li>
  <li>Patterns discovered that led to fixes</li>
  <li>Context gaps identified and closed</li>
  <li>Whether the conversation requirement is being respected or constantly challenged</li>
</ul>

<p>The real win isn’t “had 47 conversations this week.” It’s “the gap between sales and engineering is no longer an active fire” or “we discovered that one workflow was generating 40% of our support load and fixed it.”</p>

<p>You’ll know if it’s working because you’ll feel it. The hallway conversations change. The Slack tone changes. People stop cc’ing leadership on every escalation. People stop trying to bypass the conversation requirement because they see it works.</p>

<p>Trust your ability to sense this. You don’t need a dashboard.</p>

<h2 id="step-10-common-pitfalls">Step 10: Common Pitfalls</h2>

<p><strong>Pitfall 1: Letting the conversation requirement slide</strong></p>

<p>This is the death of Rodeo Service. Someone sends a long message. The Rodeo engineer reads it, understands it, fixes it. Seems efficient. But now everyone else sees that you don’t really need a conversation. The floodgates open.</p>

<p>Fix: Be religious about it. “Thanks for the detail. Let’s jump on a quick call so I can make sure I understand everything. Can you talk now?”</p>

<p><strong>Pitfall 2: Rodeo becomes “do whatever anyone asks”</strong></p>

<p>Someone books a conversation, then uses it to request a feature they’ve wanted for months.</p>

<p>Fix: “This sounds like a feature request rather than an urgent issue. Let’s route this to product planning where it can be properly prioritized.”</p>

<p><strong>Pitfall 3: Measuring the wrong things</strong></p>

<p>Someone starts tracking “conversations per day” or “average resolution time.”</p>

<p>Fix: Stop. Have a conversation with whoever’s asking for metrics about what Rodeo Service actually solves.</p>

<p><strong>Pitfall 4: Rodeo becomes permanent</strong></p>

<p>You’ve been running it for six months. It’s just “how things work now.”</p>

<p>Fix: This means you have foundational problems. Rodeo is buying you time. Use that time to fix the actual issues, then wind down Rodeo.</p>

<p><strong>Pitfall 5: Not protecting Rodeo engineers from other work</strong></p>

<p>Someone pulls a Rodeo engineer into a sprint planning meeting or asks them to review a PR.</p>

<p>Fix: Leadership enforcement. “They’re on Rodeo this week. Talk to them next week.”</p>

<p><strong>Pitfall 6: The Product Owner or Product Manager steps in for the conversation</strong></p>

<p>If you put a proxy back in place, you’ll most likely slip right back into the dynamics that created the problem in the first place.</p>

<p>Fix: Have the PO/PM roam around and join calls, but let the engineers do the talking.</p>

<h2 id="step-11-common-objections">Step 11: Common Objections</h2>

<p><strong>Objection 1: But we need the process for compliance</strong></p>

<p>Humans first, process later. If two people talk before they start documenting, you can backfill any ticketing system or process after the conversation. If your process is so rigid that it doesn’t allow that, you might want to look at that first. Consider asking your PO/PM to take care of keeping compliance in check.</p>

<p><strong>Objection 2: This will lead to terrible code!</strong></p>

<p>Maybe it does, maybe it doesn’t. Most engineers don’t piss in their own pond if they don’t have to. If someone needs to hot-wire something together to make it work, that’s a clear indication that one of the upcoming sprints might need some time reserved to polish things. If you’ve got a working system and a happy customer, you can prioritise more calmly than when working under the high pressure of a non-working situation.</p>

<p>This does of course assume that you either trust the engineer and/or you trust your CI/CD, test suite, code review process, or anything “internal” to the engineering team that catches gross mistakes. Also, depending on the area of the fix, you might put some guardrails on security. Which changes you need in the internal tech team processes is something you will learn once you start Rodeo Service.</p>

<p><strong>Objection 3: All my engineers hate talking to business people</strong></p>

<p>I call bull crap. Engineers are just people, often people who like helping other people. Also, many engineers aspire to be “more than just engineers” in their career. They might actually love the opportunity to be involved directly. If communication in your organisation is so toxic that people don’t want to talk to people, you have a culture issue that goes deeper than your process problems.</p>

<p><strong>Objection 4: Isn’t this the job of the PO/PM?</strong></p>

<p>Yep, in the normal process this is often put on the PO/PM. But the normal process (temporarily) doesn’t work as it should, otherwise you would not be considering Rodeo Service. So take a step back, remove the process, then bring the PO/PM back in with extra clarity on what stakeholder management entails. Also, during Rodeo Service, the PO/PM keeps handling the normal process flow!</p>

<p><strong>Objection 5: This is just the same as an engineer “being on call”</strong></p>

<p>Then you’re not operating lean and mean enough. Being on call is great for stakeholder communication, and it can (and maybe should) be part of your standard operating process. However, an engineer on call normally still contributes to the sprint; on-call duty is an “extra task” on top of normal work, not the main focus. If you always need an engineer on call in a pressure-cooker environment, you are effectively running Rodeo Service as part of your standard process. In that case, take a good look at your retrospectives to see what you can improve to reduce stress in the team.</p>

<p><strong>Objection 6: This will only raise expectations of non-tech people, to which we can’t deliver!</strong></p>

<p>That’s the counterintuitive part. It does not. Unreasonable expectations are often fuelled by a lack of context. Rodeo brings the human back into the loop, and with that it brings context and normal emotion back in. This helps solve both the context problem and the priority problem.</p>

<h2 id="step-12-when-to-stop">Step 12: When to Stop</h2>

<p>Rodeo Service should have an expiration date.</p>

<p>Stop when:</p>

<ul>
  <li>The volume of critical interruptions has dropped</li>
  <li>You’ve fixed the major patterns Rodeo identified</li>
  <li>The organization has rebuilt trust enough to get back to normal processes</li>
  <li>Normal processes are working again</li>
  <li>The pressure situation has passed</li>
</ul>

<p>We ran our first implementation for about three months. Then we kept it in our back pocket as something we could spin up when needed.</p>

<p>Have a proper retrospective when you stop. What did you learn? What would you do differently? How did the conversation requirement work? Because there probably will be a next time.</p>

<h2 id="the-implementation-truth">The Implementation Truth</h2>

<p>Here’s what implementing Rodeo Service actually requires:</p>

<p><strong>Leadership courage:</strong> To remove process when everyone expects you to add it</p>

<p><strong>Internal team trust:</strong> To let people organize their work as they see fit</p>

<p><strong>Cultural commitment:</strong> To prioritize human connection over measurement</p>

<p><strong>Boundary enforcement:</strong> To protect the conversation requirement even when it’s inconvenient</p>

<p><strong>Patience:</strong> To let the benefits emerge without forcing metrics on them</p>

<p>The conversation requirement is the keystone. It’s the thing that makes everything else work. It filters. It humanizes. It prevents weaponization. It forces context.</p>

<p>Without it, Rodeo Service is just another support queue with a fun name.</p>

<p>With it, it’s a way to remember that behind every issue is a person trying to do their job, and the fastest path to helping them is to actually talk to them.</p>

<hr />

<p><strong>Ready to try this? Start with one rotation. Enforce the conversation requirement from day one. See what you learn. Trust your people to figure out the details.</strong></p>]]></content><author><name>Wilco van Duinkerken</name></author><category term="leadership" /><summary type="html"><![CDATA[A practical field guide for Rodeo Service. A way of restoring trust between tech and the rest of the company.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://sparkboxx.com/assets/images/default-card.webp" /><media:content medium="image" url="https://sparkboxx.com/assets/images/default-card.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">When Process Becomes the Problem - Why Opening Doors Beats Building Walls</title><link href="https://sparkboxx.com/leadership/2026/01/21/when-process-fails.html" rel="alternate" type="text/html" title="When Process Becomes the Problem - Why Opening Doors Beats Building Walls" /><published>2026-01-21T07:12:42+00:00</published><updated>2026-01-21T07:12:42+00:00</updated><id>https://sparkboxx.com/leadership/2026/01/21/when-process-fails</id><content type="html" xml:base="https://sparkboxx.com/leadership/2026/01/21/when-process-fails.html"><![CDATA[<p>There’s a moment in every tech organization when things start to crack. Not the code—though that’s probably cracking too—but the space between teams. Between engineering and the rest of the company. Between what was promised and what can actually be delivered.</p>

<p>You know the moment I’m talking about. Bugs are piling up faster than they’re getting fixed. Customer support is escalating everything. Sales is making commitments that engineering finds out about on Slack. Leadership is asking “when will it be ready” with increasing frequency and decreasing patience.</p>

<p>The instinct, for most technical leaders, is to build walls.</p>

<p>Create a ticketing system. Implement approval workflows. Route all requests through product. Send people links to Jira. Establish “focus time” where the team can’t be interrupted. Build barriers between engineering and everyone else who seems determined to distract them from shipping.</p>

<p>I understand this instinct. I’ve done it myself.</p>

<p>But here’s what I learned ten years ago when my own team hit this breaking point: when things are truly on fire, process doesn’t put out the flames. It just makes people angrier about how long it takes to report the smoke.</p>

<p><a href="/leadership/2026/01/21/rodeo-field-guide.html">A practical field guide is available here</a></p>

<h2 id="the-frustration-spiral">The Frustration Spiral</h2>

<p>Let’s talk about what’s actually happening when organizations reach this point.</p>

<p>It’s not just that there are bugs. Every production system has bugs. It’s not just that people are stressed.</p>

<p>What’s happening is that trust is eroding. The rest of the organization doesn’t understand why fixes take so long. Engineering doesn’t understand why people can’t just log a ticket and wait their turn. Everyone is frustrated. Everyone feels unheard. Everyone is right, from their perspective.</p>

<p>This is when culture starts to crack.</p>

<p>And this is exactly when most teams respond by adding more process. More gates. More ticket fields. More structured communication. The reasoning makes sense: if we can just get people to follow the process, we can manage the chaos.</p>

<p>But process is designed for normal operations. It handles the everyday, expected flow of issues. When you’re in a frustration spiral, process becomes one more thing people are angry about.</p>

<h2 id="the-counterintuitive-play">The Counterintuitive Play</h2>

<p>Ten years ago, I tried something completely different.</p>

<p>Instead of building walls to give the team focus time, we opened the doors as wide as we could. We designated rotating “Rodeo Service” duty—one or two engineers each week whose primary job was to be as reachable as possible. Take the calls. Walk over to desks. Jump on a call. Triage everything that came in. Fix what could be fixed quickly. Communicate clearly about what couldn’t. Get a “real time” sense of the issues.</p>

<p>We brought cowboy hats for the engineers on duty.</p>

<p>And we had one non-negotiable rule: if you want Rodeo’s attention, you talk to a human. Five to ten minutes. On camera or in person. Actual conversation.</p>

<p>No tickets as the entry point. No Slack messages. No lists of requirements or prioritized spreadsheets. You want help? You talk to the person wearing the cowboy hat.</p>

<p>This sounds extreme until you understand what it actually does.</p>

<p>First, it’s a filter for real urgency. If someone won’t invest ten minutes in a conversation about their “critical” issue, it’s not actually critical. When everything routes through tickets, everything can claim to be urgent—there’s no cost to escalating. But asking someone to get on a call? That requires them to actually believe it matters.</p>

<p>Second, it’s an anti-weaponization mechanism. I’ve seen plenty of tickets that read like attacks. “This STILL doesn’t work. WHEN will this be fixed? This is COMPLETELY unacceptable.” The same person on a call sounds different. “Hey, I’m getting customer complaints about this export feature. Can we look at it?” It’s much harder to be mean to someone’s face than to weaponize a Jira ticket.</p>

<p>Third, it forces context to flow. When you’re on a call, you can ask follow-up questions in real time. “Who’s affected? What’s the actual impact? Is there a workaround? What happens if we fix it tomorrow instead of today?” Tickets don’t have this. They’re static snapshots, often missing crucial context.</p>

<p>I knew what I hoped for: that human contact could solve what process couldn’t. That the temperature would drop. That context would flow both directions. That trust could rebuild.</p>

<p>It worked.</p>

<h2 id="why-this-works-when-process-doesnt">Why This Works When Process Doesn’t</h2>

<p>Process is designed to scale. You build a system that works at steady state, then you follow it. This is good and necessary for sustainable operations.</p>

<p>But when you’re in crisis mode, you don’t have a scale problem. You have a trust problem and a context problem.</p>

<p>Trust erodes when people feel like they’re shouting into a void. When their urgent request gets auto-responded with a ticket number and a reminder about SLAs. When they’re told their critical issue is “in the backlog” with no clarity about when or if it will be addressed.</p>

<p>Context degrades when engineers only see tickets, not people. When they’re optimizing for closing Jira issues rather than solving actual business problems. When they make technical decisions without understanding the commercial impact.</p>

<p>Human contact solves both of these problems in ways that process cannot.</p>

<p>When someone can walk up to the engineer and say “this customer is about to churn if we don’t fix this,” that engineer gets the context. When that engineer can explain “here’s why that’s hard and here’s what we can do today,” and discuss potential short-term workarounds, you build trust.</p>

<p>And when both of those people are looking at each other—on camera or in person—the conversation stays human. The engineer remembers they’re helping a real person with a real problem. The stakeholder remembers they’re talking to someone who’s genuinely trying to help, not “the system that’s ignoring them.”</p>

<p>You can’t get this from tickets. You can’t get this from Slack threads. You can only get this from actual conversation.</p>

<h2 id="but-wont-this-just-burn-people-out">But Won’t This Just Burn People Out?</h2>

<p>This is always one of the first objections, and it’s a reasonable one.</p>

<p>The answer is: it depends entirely on how you do it.</p>

<p>If Rodeo Service becomes “everyone dumps on these engineers all week,” then yes, it’s brutal and unsustainable. If it’s a recognition that things are broken and this is the temporary measure while we fix the underlying issues, it can actually be energizing.</p>

<p>I’ve seen plenty of engineers volunteer for Rodeo duty. Multiple rotations in a row. Not because they’re masochists, but because they find it satisfying. There’s something deeply rewarding about short-term problem solving. About being the person who helps. About getting thank-yous directly instead of through layers of tickets and product managers.</p>

<p>The ten-minute conversation requirement actually protects against burnout. It filters out the noise. The person who fires off fifteen “urgent” tickets in a day won’t schedule fifteen ten-minute calls. The real issues surface. The email-courage complaints disappear.</p>

<p>The key is that it can’t be permanent. If your team needs dedicated Rodeo Service for months on end, you don’t have a communication problem. You have a foundational problem that needs different solutions.</p>

<p>But as a circuit breaker? As a way to stop the frustration spiral and buy time to actually fix things? It’s remarkably effective.</p>

<h2 id="what-this-reveals-about-your-organization">What This Reveals About Your Organization</h2>

<p>Here’s what implementing Rodeo Service will tell you about your team:</p>

<p><strong>You’ll discover who actually likes human contact.</strong> There’s this myth that all engineers hate talking to people. It’s nonsense. Some engineers thrive on it. Some absolutely don’t. Both preferences are fine. But you need to know who is who.</p>

<p><strong>You’ll see where the real communication gaps are.</strong> The issues that flood Rodeo Service—the ones people are willing to get on calls about—show you exactly where your organization lacks shared understanding. These are the areas that need fixing, not more documentation or process.</p>

<p><strong>You’ll learn whether your team culture is healthy.</strong> If nobody wants to do Rodeo Service ever, or if people fight about who has to do it, that’s a signal. If people naturally rotate through it and it becomes a badge of honor, that’s a different signal.</p>

<p><strong>You’ll understand what “urgent” actually means.</strong> When everything is filed as P0 in Jira, nothing is actually urgent. The ten-minute conversation requirement forces people to think: is this worth ten minutes of my time to explain? If not, it goes through normal product management channels. This natural sorting happens without anyone having to play process cop.</p>

<p><strong>You’ll see what gets lost in tickets.</strong> The nuance. The context. The “by the way, this is also affecting X.” The human understanding of trade-offs. All of this comes out in conversation and gets buried in ticket fields.</p>

<h2 id="the-cultural-statement">The Cultural Statement</h2>

<p>Requiring conversation isn’t just a tactical choice. It’s a statement about culture.</p>

<p>It says: we’re not hiding behind process. We’re not making you navigate a bureaucracy to talk to an engineer. We’re not treating you like a ticket number. We’re humans solving problems together, and that requires actually talking to each other.</p>

<p>When we implemented this, some people initially resisted. “I just want to send a quick message, why do I have to get on a call?”</p>

<p>Because if it’s actually quick, a ten-minute call resolves it faster than a ticket thread that takes days. And if it’s not actually quick, you shouldn’t be framing it as something that needs to interrupt focused work.</p>

<p>Within a few weeks, everyone adapted. The people who genuinely had urgent issues appreciated the direct access. The people who were just venting learned to route things through normal product processes instead. And the Rodeo engineers weren’t drowning in a flood of tickets claiming equal priority.</p>

<p>There’s something powerful about removing all the machinery between “I need help” and “let me help you.” No forms. No routing. No approval chains. Just: talk to me about what’s broken.</p>

<h2 id="when-you-should-try-this">When You Should Try This</h2>

<p>Rodeo Service isn’t for every situation. It’s not a replacement for good incident management or sustainable support practices.</p>

<p>But if you’re seeing these signs, it might be exactly what you need:</p>

<p>The gap between what the business expects and what tech can deliver is widening. Communication between engineering and other teams has become tense or adversarial. Your team is drowning in interruptions despite having all the “right” processes in place. People are escalating around your ticket system rather than through it. Quality is dropping and frustration is rising in a compounding cycle.</p>

<p>If this sounds familiar, consider that maybe the answer isn’t better process. Maybe it’s more human contact, not less.</p>

<h2 id="the-question-process-cant-answer">The Question Process Can’t Answer</h2>

<p>Process is built on the assumption that everything can be categorized, prioritized, and queued. That if we just structure communication correctly, the chaos will resolve.</p>

<p>But organizations aren’t machines. They’re groups of people trying to build something together under pressure. When that pressure builds to a breaking point, spreadsheets and ticket workflows don’t ease it. Conversation does. Context does. The feeling that someone is listening and actually cares about your problem.</p>

<p>That’s what Rodeo Service provides. Not a perfect system. Not a permanent solution. Just a way to lower the temperature, rebuild trust, and create the space for actual fixes to happen.</p>

<p>And the ten-minute conversation requirement? That’s not a barrier. That’s the mechanism that makes it work. It filters for real urgency. It prevents weaponization. It forces humanity back into a system that’s lost it.</p>

<p>Sometimes the most sophisticated technical solution is to put on a cowboy hat and answer the phone. And to require that the person calling actually wants to talk to you like a human being.</p>

<hr />

<p><strong>What does pressure look like in your organization right now? And what happens when people can’t get through?</strong></p>]]></content><author><name>Wilco van Duinkerken</name></author><category term="leadership" /><summary type="html"><![CDATA[There's a moment in every tech organization when things start to crack. Not the code—though that's probably cracking too—but the space between engineering and the rest of the company. Between what was promised and what can actually be delivered. What I learned is that when things are truly on fire, process doesn't put out the flames. It just makes people angrier about how long it takes to report the smoke.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://sparkboxx.com/assets/images/default-card.webp" /><media:content medium="image" url="https://sparkboxx.com/assets/images/default-card.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Embracing Curiosity - Holiday Coding Sessions</title><link href="https://sparkboxx.com/holiday_coding/2025/03/28/holiday-coding-sessions.html" rel="alternate" type="text/html" title="Embracing Curiosity - Holiday Coding Sessions" /><published>2025-03-28T08:20:42+00:00</published><updated>2025-03-28T08:20:42+00:00</updated><id>https://sparkboxx.com/holiday_coding/2025/03/28/holiday-coding-sessions</id><content type="html" xml:base="https://sparkboxx.com/holiday_coding/2025/03/28/holiday-coding-sessions.html"><![CDATA[<p>Holiday Coding started as a simple idea: dedicating relaxed time each week to explore, learn, and reflect together. What began as a small experiment has quickly become a cherished weekly ritual, bringing together individuals from different companies to share their unique perspectives and experiences.</p>

<p>The Sessions are based on <a href="https://sparkboxx.com/holiday_coding/2025/02/19/holiday-coding.html">my belief in Holiday Coding</a>: a playful and informal way of learning by improving real production systems through experimentation during relaxed, stress-free “off time.”</p>

<h2 id="how-holiday-coding-sessions-work">How Holiday Coding Sessions Work</h2>

<p>Each Holiday Coding Session is a compact but meaningful two-hour experience, blending inspiration and practical exploration:</p>

<ul>
  <li><strong>30-minute “Brain Dump”:</strong> Rather than a polished academic presentation, I kick things off with an informal, energised “brain dump” on a particular topic, intended to spark curiosity and ideas within the group. I share my brain dump in a loose collection of slides for people to refer back to later.</li>
  <li><strong>60-minute Individual Exploration:</strong> The heart of the session involves everyone diving into individual explorations using their own production code bases and real-world systems. This isn’t theory; it’s hands-on exploring amid the messy complexity of actual production environments.</li>
  <li><strong>30-minute Group Reflection:</strong> We close by reconvening to share what each person discovered, discuss insights, and reflect on the practical challenges encountered during exploration.</li>
</ul>

<h2 id="examples-from-past-sessions">Examples from Past Sessions</h2>

<p>We’ve tackled fascinating topics and experienced powerful moments of learning:</p>

<ul>
  <li><strong><a href="/holiday_coding/files-and-folders.html">Files and Folder Structures</a>:</strong> Our first session looked deeply into the impact of organising real production code.</li>
  <li><strong><a href="/holiday_coding/claude-code.html">Coding with AI</a></strong>: Over several sessions, we experimented with integrating AI into everyday coding tasks in actual production environments. These explorations uncovered practical, actionable ways to leverage AI effectively.</li>
  <li><strong><a href="/holiday_coding/dwmuan-part1-pace-layering.html">Pace Layering &amp; “Don’t Wake Me Up at Night”</a>:</strong> Inspired by Pace Layering, we discussed structuring production systems according to change frequency, balancing stability and flexibility. The practical “Don’t Wake Me Up at Night” approach helped define critical operational boundaries.</li>
</ul>

<h2 id="why-it-matters">Why It Matters</h2>

<p>Holiday Coding Sessions create space in a fast-paced team setting, allowing people to learn and grow through real-world exploration and practical problem-solving. They encourage authentic curiosity, revealing innovation often hidden within the complexity of actual production challenges.</p>

<p>If you’re intrigued by the idea, I’d love to hear what sparks your curiosity—or better yet, consider joining or starting your own Holiday Coding Sessions!</p>

<p>We might open up with 1 or 2 extra spots in our Thursday afternoon Holiday Coding group.</p>]]></content><author><name>Wilco van Duinkerken</name></author><category term="holiday_coding" /><summary type="html"><![CDATA[Holiday Coding started as a simple idea: dedicating relaxed time each week to explore, learn, and reflect together. What began as a small experiment has quickly become a cherished weekly ritual.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://sparkboxx.com/assets/images/default-card.webp" /><media:content medium="image" url="https://sparkboxx.com/assets/images/default-card.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Don’t Wake Me Up At Night (DWMUAN)</title><link href="https://sparkboxx.com/holiday_coding/2025/03/28/DWMUAN-pacing.html" rel="alternate" type="text/html" title="Don’t Wake Me Up At Night (DWMUAN)" /><published>2025-03-28T07:24:42+00:00</published><updated>2025-03-28T07:24:42+00:00</updated><id>https://sparkboxx.com/holiday_coding/2025/03/28/DWMUAN-pacing</id><content type="html" xml:base="https://sparkboxx.com/holiday_coding/2025/03/28/DWMUAN-pacing.html"><![CDATA[<p>After Vibing for several weeks, it’s time to move on to another topic.</p>

<p>The upcoming weeks we’ll be exploring the “Don’t Wake Me Up At Night” - DWMUAN - lens on engineering</p>

<p>The presentation can be found here: <a href="/holiday_coding/dwmuan-part1-pace-layering.html">DWMUAN - Pacing // Holiday Coding</a></p>]]></content><author><name>Wilco van Duinkerken</name></author><category term="holiday_coding" /><summary type="html"><![CDATA[Don't Wake Me Up At Night (DWMUAN) is a lens through which you can look at your architecture, code and processes. It helps you sleep better.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://sparkboxx.com/assets/images/default-card.webp" /><media:content medium="image" url="https://sparkboxx.com/assets/images/default-card.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Coding with LLMs</title><link href="https://sparkboxx.com/holiday_coding/2025/03/28/claude-code.html" rel="alternate" type="text/html" title="Coding with LLMs" /><published>2025-03-28T06:24:42+00:00</published><updated>2025-03-28T06:24:42+00:00</updated><id>https://sparkboxx.com/holiday_coding/2025/03/28/claude-code</id><content type="html" xml:base="https://sparkboxx.com/holiday_coding/2025/03/28/claude-code.html"><![CDATA[<p>For the second (and third, and fourth) Holiday Coding session we explored coding with the aid of LLMs.</p>

<p>The presentation can be found here: <a href="/holiday_coding/claude-code.html">Claude Code - Holiday Coding</a></p>]]></content><author><name>Wilco van Duinkerken</name></author><category term="holiday_coding" /><summary type="html"><![CDATA[In the 2nd holiday coding session we explore some advantages and disadvantages of coding with LLMs and AI.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://sparkboxx.com/assets/images/default-card.webp" /><media:content medium="image" url="https://sparkboxx.com/assets/images/default-card.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Files and Folders</title><link href="https://sparkboxx.com/holiday_coding/2025/02/27/files-and-folders.html" rel="alternate" type="text/html" title="Files and Folders" /><published>2025-02-27T11:34:42+00:00</published><updated>2025-02-27T11:34:42+00:00</updated><id>https://sparkboxx.com/holiday_coding/2025/02/27/files-and-folders</id><content type="html" xml:base="https://sparkboxx.com/holiday_coding/2025/02/27/files-and-folders.html"><![CDATA[<p>For the first Holiday Coding session I picked the topic of <strong>files and folders</strong>.</p>

<p>The presentation can be found here: <a href="/holiday_coding/files-and-folders.html">Files and Folders - Holiday Coding</a></p>]]></content><author><name>Wilco van Duinkerken</name></author><category term="holiday_coding" /><summary type="html"><![CDATA[We are about to host the first Holiday Coding session. This is about Files and Folders]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://sparkboxx.com/assets/images/default-card.webp" /><media:content medium="image" url="https://sparkboxx.com/assets/images/default-card.webp" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>