2 April 2026

I Gave Claude a Team Handbook. Here's What Changed.

Most people use AI coding tools the same way.

Open a session. Explain the project. Build something. Close the session.

Next session. Explain the project again. Build something. Wonder why things are drifting.

I did this for a while. It works well enough — until it doesn’t. Until the AI suggests a colour that’s not in your design system. Until it generates TypeScript when your project is JSX. Until you’re three sessions deep and something important got lost in the gap between them.

The problem isn’t the tool. It’s the onboarding.

In any team, a new person doesn’t start from zero. They get a handbook. A tech stack guide. A set of conventions. An understanding of what’s already been decided and why. They get context, on purpose, before they touch anything.

Your AI coding tool deserves the same.

And once I saw it that way, the fix was obvious.

The .claude/ folder

Claude Code supports a project configuration folder — .claude/ — that travels with your repository. Every session, Claude reads it. It’s the team handbook for an audience of one.

Here’s what I’ve built into mine for Soiléire, my B2B SaaS product.
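At a glance, the layout looks something like this (the individual file names are illustrative — the folder roles are described below):

```text
.claude/
├── CLAUDE.md              # project brain: phase, context, stack, decisions
├── agents/                # specialist role briefs
│   ├── architect.md
│   ├── reviewer.md
│   ├── compliance.md
│   └── qa.md
├── commands/              # one-word automations
│   ├── new-feature.md
│   ├── phase-check.md
│   ├── ship-check.md
│   └── dpia-stub.md
├── rules/                 # enforced constraints
│   └── design-system.md
└── skills/                # knowledge that travels across projects
    ├── methodology.md
    └── regulatory-context.md
```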

CLAUDE.md — the project brain. Current phase, active context, tech stack, module status, open questions. This is the first thing Claude reads. It tells it where we are, what’s been decided, and what to do if something’s unclear. I update the active context section at the start of every session. Two minutes of discipline that saves twenty minutes of re-explanation.
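As a sketch, the top of such a file might look like this — the contents here are invented for illustration; only the section names follow the description above:

```markdown
# Soiléire — Project Brain

## Current phase
Phase 4 of 9: core module build.

## Active context
<!-- Updated at the start of every session -->
- Working on: [current feature]
- Last decided: [most recent decision, with a one-line why]

## Tech stack
React with JSX (not TypeScript). [Framework, database, hosting...]

## If something is unclear
Ask before building. Do not invent conventions.
```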

agents/ — four specialist roles, each with a defined brief. An architect agent that reviews structural decisions the way a peer architect would. A reviewer agent that runs a checklist before any commit. A compliance agent that knows GDPR, the EU AI Act, and DORA — and applies them before I build AI features, not after. A QA agent that stress-tests flows before I mark something done. These aren’t magic — they’re prompts. But prompts that have been thought through carefully, once, and are now available in every session.
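In Claude Code, a subagent brief is a markdown file with YAML frontmatter. A minimal sketch of what the reviewer agent might contain — the checklist items are invented to match the description above:

```markdown
---
name: reviewer
description: Runs the pre-commit checklist on any change before it lands.
---

You are the code reviewer for this project. Before approving any change:

1. Check naming against the conventions in rules/.
2. Check design-system rules: no hardcoded hex values,
   lime is brand only, purple is interactive only.
3. Flag anything that contradicts a decision recorded in CLAUDE.md.

Report findings as a checklist, not prose.
```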

commands/ — one-word automations for recurring work. /new-feature scaffolds a feature brief in my exact format. /phase-check produces a status report across my nine-phase methodology. /ship-check runs the pre-ship checklist. /dpia-stub generates a DPIA document for any AI feature. Each of these codifies something I was doing manually, inconsistently, every time.
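Slash commands are also just markdown files — the file body is the prompt, and `$ARGUMENTS` stands in for whatever you type after the command. A hypothetical /new-feature, with an invented brief format:

```markdown
<!-- .claude/commands/new-feature.md -->
Scaffold a feature brief for: $ARGUMENTS

Use the standard brief format:
- Problem statement
- Proposed approach
- Affected modules
- Compliance check (GDPR / AI Act / DORA relevance)
- Definition of done
```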

rules/ — enforced constraints, not guidelines. My design system as rules: lime is brand only, never interactive. Purple is interactive, never decorative. No font weights above 500. No hardcoded hex values. These are things I had documented in a separate file somewhere that Claude had no way of knowing about. Now it knows — and the pre-commit hook catches violations before they land in the repo.
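The hook itself can be simple. A minimal sketch in shell — the file extensions and the idea of grepping staged files for hex literals are assumptions for illustration, not my actual hook:

```shell
#!/bin/sh
# Hypothetical pre-commit check: reject hardcoded hex colour literals,
# on the assumption that all colours come from design tokens instead.

# check_hex FILE... -> prints offending lines; exits 0 if any are found
check_hex() {
  grep -nE '#[0-9a-fA-F]{6}' "$@"
}

# In an actual .git/hooks/pre-commit, you would run it over staged files:
#   staged=$(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(css|jsx?)$')
#   if [ -n "$staged" ] && check_hex $staged; then
#     echo "Hardcoded hex values found; use design tokens." >&2
#     exit 1
#   fi
```

The point is less the grep than the placement: the rule lives in the repo, so the machine enforces it without anyone having to remember it.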

skills/ — knowledge that travels across every project. My nine-phase methodology as a teachable construct. The regulatory context — EU AI Act risk classification, GDPR obligations, DORA requirements for ICT providers — loaded once and available everywhere. The feature brief format, so I never explain it again.

The architectural instinct behind it

This is where my background matters, and not in the way I expected when I started building.

Working with AI coding tools, I kept running into the same problem that enterprise architects run into at scale: the gap between what was decided and what actually gets built. In large organisations, that gap opens because teams lose context, conventions drift, and decisions made in one room don’t make it to the next. You close it with documentation, governance structures, and shared reference points.

The .claude/ folder is that governance structure — miniaturised for a single developer working with an AI collaborator.

The agents replace peer review. The rules replace the conventions a team would absorb over time. The commands replace the rituals that more mature teams develop — the shared language, the standard approaches, the “the way we do things here.” The skills replace the institutional knowledge that normally has to be taught, one person at a time.

None of this is new thinking. It’s just enterprise architecture applied at the micro scale — one person, one product, the same rigour.

What actually changed

Three things, concretely.

Sessions start from context, not from explanation. The first ten minutes of every session used to be reorientation — where were we, what had we decided, what were the constraints. That work is now done once and read automatically. The session starts from the real problem, not from rebuilding the map.

Drift stopped happening. Before, small inconsistencies crept in across sessions. A component named one way here, another way there. A pattern used in one place, forgotten in the next. With the rules and skills loaded, those decisions are remembered for me. The design system holds. The data model holds. The methodology holds.

I stopped re-deciding things. This is the subtle one. A lot of cognitive load in solo building isn’t doing the work — it’s remembering the decisions you already made so you don’t undo them. The .claude/ folder externalises those decisions. I make them once, write them down, and stop carrying them in my head.

What it didn’t change

Setting this up takes time. Real time — a few hours to do it properly, and regular maintenance after that. If your project is a weekend prototype, you don’t need this. If your project is something you intend to live with, you absolutely do.

It also doesn’t remove the need for judgement. The agents don’t replace thinking — they replace the mechanical parts of thinking. The decisions that matter still need you. Nothing about a compliance agent removes the need to actually understand what you’re building and why.

And it doesn’t replace operational scar tissue — the instincts that come from carrying the pager, from being woken up at three in the morning, from watching something fail in the way you least expected. A handbook can encode what you know. It can’t replace what you’ve lived through.

The wider point

The developers I know who are getting the most out of AI coding tools aren’t the ones with the best prompts. They’re the ones who’ve put the most structure around the AI before the prompting starts.

Context is the work. Prompts are the easy bit.

If you’re finding that your AI sessions feel chaotic, that things keep drifting, that you’re explaining the same project over and over — the problem isn’t the tool. It’s that you’re asking a new team member to do good work without ever handing them the handbook.

Give them one. It changes everything.


This is part of my ongoing series on what changes when an architect learns to build. Previous posts cover the nine-phase methodology, the scar tissue that AI-assisted development can’t yet replace, and the agency shift underneath the productivity story.