Documentation Index
Fetch the complete documentation index at: https://docs.testsprite.com/llms.txt
Use this file to discover all available pages before exploring further.

What Plan Generation Does
Plan Generation is the bridge between “TestSprite knows your endpoints” and “TestSprite has runnable tests”. For every endpoint that survived API Discovery, this phase produces a structured plan: a list of test cases grouped by category, each with a title, description, expected behavior, and priority.

Nothing executes during plan generation. No HTTP calls are made to your API at this step. Plan generation reads only from your discovered endpoints, your PRD, and your natural-language hints.
What Goes Into the Plan
| Input | Where it comes from | What plan generation does with it |
|---|---|---|
| Discovered endpoints | API Discovery | One sub-plan per endpoint, with shape-appropriate tests |
| PRD / docs upload | Configuration step | Anchors the description and asserted behavior in what the endpoint is for |
| Response shapes from discovery | Discovery phase | Used as ground truth for response shape, so tests reference real field names |
| Extra testing instructions | Configuration step | “Skip the admin endpoints”, “Always assert response time < 200ms” — these get woven in |
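To make the last row concrete: an instruction like “Always assert response time < 200ms” would surface in generated tests as a timing check around each call. A minimal sketch of that shape, using a stubbed call in place of a real HTTP request (the function and payload here are illustrative, not TestSprite output):

```python
import time

MAX_LATENCY_S = 0.2  # from the instruction "response time < 200ms"

def call_endpoint():
    """Stand-in for a real HTTP call; sleeps briefly to simulate latency."""
    time.sleep(0.01)
    return {"status": 200}

start = time.monotonic()
resp = call_endpoint()
elapsed = time.monotonic() - start

assert resp["status"] == 200
assert elapsed < MAX_LATENCY_S, f"took {elapsed:.3f}s, budget {MAX_LATENCY_S}s"
```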
The Plan Structure
Each endpoint gets a sub-plan organized by category. Categories are picked based on what the endpoint does — common ones include:
- Functional / happy path — valid inputs, expected response shape, basic error paths (404, 400 on bad input)
- Authorization & authentication — unauthenticated → 401, wrong-user → 403, ownership checks, token handling, signature tampering
- Error handling & edge cases — missing required fields, malformed input, type coercion traps, unicode / null / encoding edges
- Boundary & load — oversized payloads, max array lengths, rate-limit edges, concurrent-write contention
- Security probes — privilege escalation, common attack patterns relevant to the endpoint
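Putting the pieces together, a single plan row can be pictured as a small record carrying its category, title, description, priority, and checkbox state. The sketch below is a hypothetical representation for illustration; the field names are not TestSprite’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class PlanRow:
    category: str         # e.g. "Authorization & authentication"
    title: str            # one-line summary shown in the plan view
    description: str      # expanded detail shown when the row is clicked
    priority: str         # e.g. "high" / "medium" / "low"
    enabled: bool = True  # the checkbox: untick to skip generation

row = PlanRow(
    category="Authorization & authentication",
    title="Unauthenticated request returns 401",
    description="Call the endpoint with no Authorization header; "
                "expect HTTP 401 and no resource data in the body.",
    priority="high",
)
assert row.enabled  # rows are generated unless unticked
```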
Reviewing the Plan
The plan view is grouped by category; each row is collapsible.

Expand a category to see the tests in it
Each row inside shows the test title (a one-liner) and a short description. Click the row body to expand the description into full detail.

Untick anything you don't want
The checkbox at the start of each row controls whether the test will be generated. Untick irrelevant or duplicative cases. An entire category can be unticked at once via the checkbox in its header.

Edit titles and descriptions in place
Click the title to rename. Click the description to edit. Your edits shape the test that gets generated.

Adding Tests in Natural Language
The bottom of the plan list has a chat input — “Add a test in natural language”. TestSprite parses the request, figures out which endpoint(s) it applies to, and writes a new plan row for it.
The added test is the same shape as auto-generated rows. It gets a category, title, description, and priority; you can edit, prune, or reorder it just like the rest.
Refining the Plan via Natural Language
Beyond adding single tests, the chat panel on the right side also supports broader refinements that apply across the whole plan.
What “Generate Tests” Actually Triggers
Clicking Generate Tests advances the wizard to test generation. The plan is locked in at click time — subsequent edits would require regenerating.

Endpoint Tests Generation
The next phase — each plan row becomes a runnable test
Plan Generation and Free-Plan Limits
Plan Generation itself is free on every plan tier: you can generate plans for any number of endpoints. The limit applies downstream — at test generation time, each generated test consumes one credit.

Subscription Plans
See plan tiers, credit allocations, and upgrade options
This is intentional. Plan generation is where you decide what’s worth testing; you should have unlimited room to explore options before spending credits on the actual tests.
When Plan Generation Looks Wrong
A test asserts on a field that doesn't exist
Discovery didn’t fully observe this endpoint’s response, so plan generation inferred the shape from your docs. Either:
- Re-run API Discovery once the endpoint is reachable, then regenerate the plan
- Or correct the test description in place to match the real shape — generation will follow your edit
An obvious test wasn't generated
Two common causes:
- The endpoint wasn’t in the discovered set (revisit Discovery)
- The test was judged redundant or out-of-scope. Add a “Make sure to cover X” instruction in the chat.
Tests are too generic — they don't reflect our domain
You probably haven’t given enough domain context. Use the “Extra testing instructions” textarea on the configuration step or upload a richer PRD — describe your domain rules (“orders have a status enum: pending, confirmed, shipped, delivered, cancelled”) so the plan reflects them in titles and assertions.
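For example, the order-status rule quoted above is exactly the kind of domain context that turns a generic “status is a string” assertion into a real one. A sketch of how that rule could back a domain-aware check (the enum and payload are illustrative, taken from the example instruction, not from your API):

```python
from enum import Enum

class OrderStatus(Enum):
    PENDING = "pending"
    CONFIRMED = "confirmed"
    SHIPPED = "shipped"
    DELIVERED = "delivered"
    CANCELLED = "cancelled"

# A domain-aware assertion a generated test could make on a response:
response_body = {"id": 42, "status": "shipped"}  # hypothetical payload
assert response_body["status"] in {s.value for s in OrderStatus}
```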
The Security category is overkill for an internal-only API
Untick the Security category at its header. The default sizing assumes external exposure; for internal services 1–2 auth checks are usually enough.
Plan generation returned 0 tests for an endpoint
Discovery captured the endpoint, but the plan judged it unsafe to test (missing auth pattern, unclear response shape). Look at the Discovery row for that endpoint — if it shows “Unconfirmed” status, fix the discovery issue and re-run.
Where to Go Next
Endpoint Tests Generation
The next phase — turn the plan into runnable tests
Integration Tests Generation
Auto-assembled multi-step chains
Refining Tests
Natural-language refinement after tests are generated
