How I built a simple setup to stop losing context in brownfield projects


I came back from the holidays and spent almost a full day trying to remember what I was doing.

I’m not talking about forgetting how to code. I’m talking about the hundreds of small discoveries (quirks, edge cases, “oh that’s why it does that” or “this is how it is supposed to work, it’s a feature not a bug”) that led me toward certain solutions and away from others. I use Claude Code as my pair programming partner. It does the heavy lifting of searching and figuring things out, while I challenge its proposals and find problems, and we work through trade-offs together. But all that accumulated knowledge? Gone.

I’m a software engineer working on unifying multiple projects that were never designed to talk to each other. Identity brokering, data synchronization, event-driven communication between systems that evolved independently for years.

Here’s the fun part: I’m touching five different GitHub repositories. One is deep legacy, the ancient beast. One is middle-aged legacy, newer but full of quirks. One I don’t even build myself; I guide external developers and hope my documentation is clear enough.

🥲

And I’d been using Claude Code to help me navigate this mess. It was going great, until it wasn’t.

The Documentation Death Spiral

AI tools make documentation trivially easy. I can generate a README, a design doc, an architecture overview in minutes. And with so much unfamiliar code to cover, I needed that detail: both to understand how these projects work and to capture why we made certain decisions. You know the pattern when that detail doesn’t exist: two months from now you’re wondering why you did X and not Y, you try the “obvious” approach, hit a wall halfway through, and suddenly remember: “oh right, that’s why we did it the other way.”

But keeping documentation accurate is hard. With such a large surface area (five repos, identity flows, event systems), the details change constantly during a conversation. You start with an approach, hit a limitation, tweak it, hit another, tweak again. By the end, you have a detailed document that’s accurate about the outcome but wrong about half the specifics.

Okay, so document at the end of the process instead? But by then, several compactions have happened, and I’ve already lost information. I end up repeating myself, re-explaining architecture, re-discovering edge cases. After a while, this became my main bottleneck.

I tried adding instructions to CLAUDE.md telling Claude to update documentation as it worked. I tried using hooks to automatically save state before clearing or compacting. I tried splitting knowledge across multiple markdown files, one per area. I tried consolidating into a single large file. None of it worked.

Even when I got the documentation right in the moment, it doesn’t stay right. Two weeks later, someone mentions a constraint I’d completely missed, and now that document needs updating. Good luck remembering (you or Claude) that this thing was even documented in the first place.

Long story short, I’ve restarted the documentation process from scratch three times now, each with a different approach.


Understanding the Actual Problem

I started reading documentation (the irony) to understand what was happening.

Context compaction is lossy by design. When conversations get long, Claude Code automatically summarizes older parts to make room. The summary captures the gist, but not the nuance: why you rejected an approach, which edge cases you already handled, what specific constraints led to a decision. After a few compactions, Claude might suggest the exact same tweak you rejected an hour ago. The details that led you to reject it are gone.

Auto-compact can’t be customized. It triggers automatically at ~95% context. You can disable it via /config in the CLI, but then you just hit a hard failure when context fills up. Pick your poison.

Hooks don’t help here, at least not yet. PreCompact hooks only support shell commands, not prompts. You can run a script, but you can’t ask Claude to do anything. There’s no PreClear hook at all. And built-in slash commands like /clear and /compact bypass UserPromptSubmit, so you can’t intercept those either.

CLAUDE.md instructions aren’t always followed. I had instructions telling Claude to update documentation as it worked. Sometimes it did. Often it didn’t. Claude tends to treat instructions as suggestions rather than requirements. This is a known issue — one user put it bluntly: “Does that mean that with each prompt I need to tell you to follow the instructions? …Yes, unfortunately.”

Multiple documentation files don’t scale. New information often needs to update more than one file, and that doesn’t always happen. Then Claude references outdated information from the wrong file, and you’re back to square one.

A single large documentation file bloats context. The more detail you add, the more context it eats. And Claude’s accuracy degrades as context fills up.

The workflows assume greenfield. Every tutorial I found follows almost the same pattern: “describe your idea and let Claude build it.” When you’re starting fresh, you can plan everything upfront. No legacy constraints, no mid-stream discoveries that force you to pivot. Nobody’s writing about maintaining five existing repos where you’re constantly discovering how things actually work, then trying to remember those discoveries next week.

So I stopped looking for the clever solution and started looking for the dumbest thing that might actually work.

What Actually Worked

Instead of relying on CLAUDE.md to automatically update docs (which it often ignores), I took explicit control.

One command: /save

That’s it. When I want to preserve state, I run /save. It tells Claude to update the status file with current progress, decisions, blockers, and anything we discovered about how the code works. Then I run /clear for a fresh start.

No automation that sometimes works. No hoping Claude remembers. Just a manual habit, and for now, a reliable habit beats unreliable automation. Before any transition (break, end of day, context getting full), run /save, then /clear.

The Actual Setup

Here’s what the structure looks like. I’ll walk through the important parts.

workspace/
├── PROJECT_STATUS.md           # What's done, in progress, blocked, decisions
├── PRD_REQUIREMENTS.md         # Current requirements (synced from Confluence)
├── CLAUDE.md                   # Instructions for Claude
├── docs/understanding/         # What I've learned about each codebase
│   ├── project-alpha.md
│   ├── project-beta.md
│   └── ...
├── project-alpha/              # Cloned repo
├── project-beta/               # Cloned repo
└── .claude/
    ├── settings.local.json     # Hooks configuration
    ├── commands/
    │   ├── save.md             # /save command
    │   └── refresh-prds.md     # /refresh-prds command
    └── hooks/
        └── session-start.sh    # Loads context on startup

The Files

PROJECT_STATUS.md — Not a plan, a snapshot of reality that gets updated as reality changes. Current focus, decisions made (with rejected alternatives), blockers. I also keep a sequence of small, testable tasks: the kind of instructions I’d give a junior engineer, or how I’d approach the work if I were writing the code myself. Not “build the whole feature,” but “create the route, add a controller that returns hello world, hit the endpoint to verify it works, then add the actual logic.” One problem at a time. Small checkpoints where I can test and review each piece before moving on, rather than ending up with ten new files and feeling overwhelmed. Gets loaded into context on every session start via the SessionStart hook.
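
To make this concrete, a skeleton for a status file like this could look something like the following (the section names and content are just my illustration, not a required format):

# Project Status

## Current Focus
Identity brokering between project-alpha and project-beta.

## Next Tasks
1. Create the /auth/callback route; return a hard-coded 200 for now
2. Hit the endpoint to verify the identity provider redirect lands there
3. Add the actual token-exchange logic

## Decision Log
(rows in the format shown in the Decision Log section below)

## Blockers
- Waiting on the external team for the webhook payload spec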

PRD_REQUIREMENTS.md — Current requirements from Confluence, cached locally. Also loaded on session start, so Claude always knows what we’re building toward.

docs/understanding/ — How each codebase actually works. The authentication quirks, the unusual database choices, the non-obvious patterns you only discover by reading the code.

CLAUDE.md — Instructions for Claude. I still have them; they help, just not reliably. Things like “update PROJECT_STATUS.md as you work” and “suggest /save when context is getting long.” Nice to have, not critical path.
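
For what it’s worth, the relevant part of a CLAUDE.md like this doesn’t need to be long; something in this spirit (phrasing illustrative):

## Workflow
- Update PROJECT_STATUS.md as you work, especially after a decision or discovery.
- When context is getting long, suggest running /save.
- Record rejected approaches in the Decision Log, not just the chosen one.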

The Hooks

SessionStart — When Claude Code starts (including after /clear), this hook loads PROJECT_STATUS.md and PRD_REQUIREMENTS.md into context automatically.

{
  "hooks": {
    "SessionStart": [{
      "matcher": "startup|resume|clear|compact",
      "hooks": [{
        "type": "command",
        "command": "\"$CLAUDE_PROJECT_DIR/.claude/hooks/session-start.sh\"",
        "timeout": 10
      }]
    }]
  }
}

The script just outputs the files to stdout, which goes into Claude’s context:

#!/bin/bash
# SessionStart hook: whatever this script prints to stdout
# gets injected into Claude's context at the start of the session.
echo "=== PROJECT STATUS ==="
cat "$CLAUDE_PROJECT_DIR/PROJECT_STATUS.md"
echo ""
echo "=== REQUIREMENTS ==="
cat "$CLAUDE_PROJECT_DIR/PRD_REQUIREMENTS.md"

Every session starts with context. No re-explaining.

The Commands

/save — The core of the whole setup. When I run /save, Claude updates PROJECT_STATUS.md with current progress, decisions, blockers, and next steps. If we discovered how something works internally, it updates the relevant docs/understanding/*.md file too. Then I run /clear for a fresh start.
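
Since a command file is just a markdown prompt, a minimal save.md could look something like this (my sketch; the exact wording is illustrative):

Update PROJECT_STATUS.md with the current state of this session:

1. Current focus and progress since the last save
2. Decisions made, including rejected alternatives and why (add rows to the Decision Log)
3. Open blockers and concrete next steps
4. If we learned how part of a codebase works, update the matching file under docs/understanding/

Keep it a concise snapshot of current state, not a running history.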

/refresh-prds — We use Confluence for PRDs. Querying it every session is slow, so this command fetches relevant PRD pages via the Atlassian MCP server and saves them to PRD_REQUIREMENTS.md locally.
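
refresh-prds.md follows the same pattern; assuming the Atlassian MCP server is already configured, the prompt could be as simple as (page identifiers omitted here):

Using the Atlassian MCP tools, fetch the latest version of our PRD pages from Confluence and overwrite PRD_REQUIREMENTS.md with the result:

- One section per PRD, requirements as a bullet list
- Note the fetch date at the top of the file
- Flag anything that changed since the last refresh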

The Decision Log

One of the most valuable parts of PROJECT_STATUS.md is the Decision Log with rejected alternatives:

| Date       | Topic       | Decision   | Rejected          | Rationale                                          |
|------------|-------------|------------|-------------------|----------------------------------------------------|
| 2026-01-08 | User lookup | By UUID    | By email          | Email can change, UUID is immutable across systems |
| 2026-01-05 | API auth    | Signed JWT | API key in header | Need to pass user claims without extra lookup      |

This prevents Claude from suggesting approaches we already tried and rejected. The “Rejected” column is critical. Without it, you’re one compaction away from re-discovering the same dead ends.

What Still Sucks

I’m not going to pretend this is perfect:

It’s manual. I sometimes forget. The habit helps, but it’s still on me to remember.

Auto-compact can still surprise you. If context fills up mid-task, it compacts before you can save. I’ve learned to run /save proactively rather than waiting until I’m about to leave.

It uses context. Loading PROJECT_STATUS.md and PRD_REQUIREMENTS.md on every session takes up space. The tradeoff is worth it; the alternative is spending even more context and time re-explaining the same things. But it means keeping these files concise: a snapshot of current state, not a detailed history.

SessionStart has bugs. Output sometimes (rarely, but it happens) fails to inject after a compact.

The Results

After a few weeks, this has worked unexpectedly well:

  • New sessions start with context. No more re-explaining the architecture or the end goal.
  • Decisions and rationale are preserved. Including what we rejected and why.
  • Failed approaches are documented. No re-trying things that didn’t work.
  • Implementation knowledge accumulates. The understanding docs grow over time.
  • Coming back after time off works. The status file is a reliable snapshot.
  • PRDs stay current. One command to sync from Confluence.

The Bigger Lesson

With something really simple, I solved 80% of my problem.

A SessionStart hook that loads a couple of markdown files. Two commands that are just instructions for what to update. That’s it. No clever automation, no complex tooling, just files and habits.

But this simple foundation gives me something to build on. I’m already working on a tool to share with my team: something configurable where each developer can define their own files, their own instructions, their own context to keep up to date. What started as a personal workaround is turning into team tooling. And building it has taught me a ton about how AI tools actually work under the hood.

Interestingly, Claude Code itself started this way — as a simple internal tool that Anthropic engineers built for themselves. It spread through the company because it solved a real problem, and they debated whether to release it publicly or keep it as their “secret sauce.”

Most AI workflows assume you’re building something new. Most of us aren’t. We’re maintaining, integrating, extending, debugging (often messy) codebases that evolved over years.

If the tools don’t exist for your situation, build the simplest thing that works. You might be surprised where it leads.


If you’re dealing with similar problems (legacy codebases, context loss, documentation that can’t keep up), I’d love to hear how you’re handling it.