LLM-generated docs are usually vague and generic because they're written from imagination, not from code. This prompt forces Claude to read the implementation first, then document what actually exists — including trade-offs, integration points, and decisions that aren't obvious from the code alone.
## The prompt
I need documentation for [describe what needs documenting]. Before writing anything, read the actual implementation:
1. Read every file involved in this feature/module
2. Trace the data flow from input to output
3. Identify the key decisions (why was it built this way, not another way?)
4. Note the integration points (what depends on this, what does this depend on?)
Then write documentation that covers:
## For a README section:
- What it does (one paragraph, no fluff)
- How to use it (concrete examples with real values from the codebase)
- How it works (data flow, key files, architecture decisions)
- Trade-offs (what was chosen and what was rejected, and why)
## For architecture docs:
- Component inventory (what exists, what each piece does)
- Data flow (how information moves through the system)
- Key decisions (why this approach, not alternatives)
- Integration points (how this connects to the rest of the system)
## For change summaries:
- What changed (commit-by-commit or feature-by-feature)
- Why it changed (the problem each change solves)
- What to verify (how to confirm it works)
Rules:
- Use concrete details from the code. No placeholder text.
- Include file paths so readers can navigate to source.
- Document trade-offs explicitly: "We chose X over Y because Z."
- If you don't know why a decision was made, say so — don't invent a rationale.
## When to use it
- After completing a feature branch — document what was built before opening the PR
- When onboarding someone to a codebase section they haven't seen
- When writing CLAUDE.md sections (Claude documents the codebase for its own future use)
- When creating change summaries for stakeholders
## Why "read the code first" matters
Without the explicit instruction to read the implementation, Claude generates plausible-sounding documentation that may not match reality. With it, you get docs that reference real file paths, real function names, and real data flows. The difference between:
- "The system uses a caching layer for performance" (generic, possibly wrong)
- "
lib/api.jsreads Markdown files from_posts/using gray-matter, sorts by date descending, and returns selected fields. There is no caching — content is read from disk at build time viagetStaticProps." (specific, verifiable)
## Tips
- The "trade-offs" section is the most valuable part. Code shows what was built; documentation should explain what wasn't built and why. This context is what future developers (and future Claude sessions) need most.
- For change summaries, point Claude at `git log --oneline` for the branch. It will produce a commit-by-commit narrative that's more accurate than writing from memory.
- If the generated docs feel generic, the problem is usually that Claude didn't read enough code. Point it at specific files: "Read `lib/toolkit-api.js`, `pages/ai-toolkit/index.js`, and `components/toolkit-filter.js`, then document how filtering works."