Test Generation Prompt

Prompt

The problem with LLM-generated tests isn't quality — it's consistency. Claude writes perfectly valid tests that follow different patterns, assertion styles, and file structures from the rest of your suite. Then you spend 20 minutes reformatting tests that were supposed to save you time.

This prompt fixes that by making Claude read your existing tests first. Convention matching, not convention inventing.

The prompt

Write tests for [file or function]. Follow this process:

## Step 1: Learn the conventions

Before writing anything, read:
1. The project's existing test files (at least 2-3) to understand:
   - Test framework and assertion library (Jest, Vitest, Mocha, etc.)
   - File naming convention (*.test.js, *.spec.ts, __tests__/, etc.)
   - Test structure (describe/it nesting, flat test() blocks, etc.)
   - Setup/teardown patterns (beforeEach, fixtures, factories, etc.)
   - Mocking approach (jest.mock, vi.mock, manual mocks, dependency injection, etc.)
   - Assertion style (expect().toBe, assert.equal, etc.)
2. Any test utilities, helpers, or fixtures the project uses
3. The test configuration file (jest.config, vitest.config, etc.)

## Step 2: Understand the code under test

Read the file to test. Identify:
- Public interface (exports, public methods, props)
- Edge cases (null inputs, empty arrays, error paths, boundary values)
- Dependencies that need mocking (external APIs, file system, database)
- Integration points (does it call other modules that have their own tests?)

## Step 3: Write the tests

For each behavior of the public interface:
1. Write a test with a descriptive name that reads like a specification
2. Follow the Arrange-Act-Assert pattern
3. Test behavior, not implementation — if you're testing private methods or internal state, step back

## Rules
- Match the project's existing patterns EXACTLY. Don't introduce new libraries, helpers, or styles.
- Test the public interface. If a function isn't exported, test it through the function that calls it.
- Use descriptive test names: "returns empty array when no posts match tag" not "test case 3"
- Don't test framework behavior (e.g., don't test that React renders a div)
- Don't test trivial code (simple getters, pass-through functions)
- Include edge cases: empty inputs, missing fields, malformed data, error conditions
- If the project uses fixtures, use the existing fixtures. Create new ones only if needed, following the existing pattern.

Example

Given a project that uses Vitest with this test style:

// Existing test in the project
describe('getAllPosts', () => {
  it('returns posts sorted by date descending', () => {
    const posts = getAllPosts(['title', 'date', 'slug'])
    const dates = posts.map((p) => p.date)
    expect(dates).toEqual([...dates].sort().reverse())
  })
})

Claude should produce tests in the same style — not switch to test() blocks, not add beforeAll if the existing tests don't use it, not import @testing-library if the project doesn't.
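
For instance, given the style above, a matching test for a hypothetical `getPostsByTag` function might look like this — same `describe`/`it` nesting, same `expect().toEqual`-family assertions, no new imports or helpers:

```javascript
// Sketch of a generated test that matches the existing conventions
// (getPostsByTag is hypothetical; no imports beyond what the project already uses)
describe('getPostsByTag', () => {
  it('returns only posts containing the tag', () => {
    const posts = getPostsByTag('vitest', ['title', 'tags'])
    expect(posts.every((p) => p.tags.includes('vitest'))).toBe(true)
  })

  it('returns empty array when no posts match tag', () => {
    expect(getPostsByTag('nonexistent-tag', ['title'])).toEqual([])
  })
})
```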

When to use it

  • Adding tests to an existing module that has partial or no coverage
  • After writing a new module and needing tests that fit the project
  • When you want tests generated faster than you could write them yourself, without having to clean up style inconsistencies afterward

When NOT to use it

  • For TDD — use the TDD with Claude Code technique instead, where tests drive the implementation
  • For integration or E2E tests — those require understanding the runtime environment, which this prompt doesn't cover
  • When the code under test is tightly coupled with side effects everywhere — generating tests for bad code produces bad tests. Refactor first.

Tips

  • Point Claude at your best-written test file as the style reference. The conventions it learns from that file will propagate to the generated tests.
  • If Claude generates tests that mock too aggressively, push back. Over-mocking is the most common problem in generated tests — every mock is an assumption about how the dependency works, and assumptions go stale.
  • Review the generated tests for actual value. Tests that only verify the happy path of a simple function aren't worth keeping. The tests worth keeping are the ones that exercise edge cases and error paths.