
Progressive Disclosure in Skills

Progressive disclosure is the technique that separates basic skills from powerful ones. The core idea: never load more context than the agent needs right now. This keeps token usage low, prevents context pollution, and makes skills survive long conversations.

Skills can use any combination of these layers. Most production skills use Layers 1-3. Layer 4 is reserved for strict sequential processes.

| Layer | What It Does | Token Cost |
| --- | --- | --- |
| 1. Frontmatter vs Body | Frontmatter is always in context; body loads only when triggered | ~100 tokens always, body on demand |
| 2. On-Demand Resources | SKILL.md points to resources and scripts loaded only when relevant | Zero until needed |
| 3. Dynamic Routing | SKILL.md acts as a router, dispatching to entirely different prompt flows | Only the chosen path loads |
| 4. Step Files | Agent reads one step at a time, never sees ahead | One step's worth at a time |

Layer 1 is the foundation. Frontmatter (name + description) is always in context; it is how the LLM decides whether to load the skill. The body loads only when the skill triggers.

This means the frontmatter must be precise and include trigger phrases. Keep the body under 500 lines and push detail into Layers 2-3.

```markdown
---
name: bmad-my-skill
description: Validates API contracts against OpenAPI specs. Use when user says 'validate API' or 'check contract'.
---
# Body loads only when triggered
...
```

In Layer 2, SKILL.md points to resources that load only when relevant. This includes both reference files (context for the LLM) and scripts (work offloaded from the LLM entirely).

```markdown
## Which Guide to Read
- Python project → Read `resources/python.md`
- TypeScript project → Read `resources/typescript.md`
- Need validation → Run `scripts/validate.py` (don't read the script, just run it)
```

Scripts are particularly powerful here: the LLM never processes the logic; it simply calls the script and receives structured output. This offloads deterministic work and saves tokens.
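As a sketch, a Layer 2 script like the `scripts/validate.py` referenced above could emit a small JSON verdict instead of prose. The specific checks and field names below are illustrative assumptions, not a real validator:

```python
import json

# Minimal top-level fields an OpenAPI document must declare
# (illustrative subset, not the full spec).
REQUIRED_KEYS = {"openapi", "info", "paths"}

def validate_spec(spec: dict) -> dict:
    """Return a structured verdict so the agent only reads a tiny
    JSON result, never the validation logic itself."""
    missing = sorted(REQUIRED_KEYS - spec.keys())
    return {"ok": not missing, "missing": missing}

if __name__ == "__main__":
    # The agent runs this script and receives the JSON below;
    # the checking logic never enters its context window.
    spec = {"openapi": "3.1.0", "info": {"title": "demo"}}
    print(json.dumps(validate_spec(spec)))
```

The point is the output contract: a fixed, small JSON shape the agent can act on regardless of how complex the checks inside become.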

In Layer 3, the skill body acts as a router that dispatches to entirely different prompt flows, scripts, or external skills based on what the user is asking for.

```markdown
## What Are You Trying To Do?
### "Build a new workflow"
→ Read `prompts/create-flow.md` and follow its instructions
### "Review an existing workflow"
→ Read `prompts/review-flow.md` and follow its instructions
### "Run analysis"
→ Run `scripts/analyze.py --target <path>` and present results
```

The key difference from Layer 2: Layer 2 loads supplementary resources alongside the skill body. Layer 3 branches the entire execution path — different prompts, different scripts, different skills. The skill body becomes a dispatcher, not an instruction set.
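Structurally, the dispatcher amounts to a lookup table where each intent maps to a different execution path. A hypothetical sketch, reusing the paths from the example above purely for illustration:

```python
# Hypothetical routing table mirroring the SKILL.md dispatcher above.
# Each intent selects an entirely different flow; only the chosen
# branch's prompt or script is ever loaded into context.
ROUTES = {
    "build": {"kind": "prompt", "target": "prompts/create-flow.md"},
    "review": {"kind": "prompt", "target": "prompts/review-flow.md"},
    "analyze": {"kind": "script", "target": "scripts/analyze.py"},
}

def dispatch(intent: str) -> dict:
    """Resolve an intent to exactly one route; everything else stays cold."""
    route = ROUTES.get(intent)
    if route is None:
        raise KeyError(f"no route for intent {intent!r}")
    return route
```

In a real skill the "lookup" is performed by the LLM reading the headings, not by code; the table just makes the branching structure explicit.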

Layer 4 is the most restrictive pattern: the agent reads one step file at a time, does not know what comes next, and waits for user confirmation before proceeding.

```text
prompts/
├── step-01.md  ← agent reads ONLY current step
├── step-02.md  ← loaded after user confirms step 1
├── step-03a.md ← branching path A
└── step-03b.md ← branching path B
```

When to use: only when you need exact sequential progression with no skipping, compaction resistance (each step is self-contained), or an agent deliberately constrained from looking ahead.

Trade-off: very rigid. It limits the agent's ability to adapt, combine steps, or be creative. Do not use it for exploratory or creative tasks, or when Layer 3 routing would suffice. Reach for Layers 1-3 first; the lowest layer that works is best.

Long-running workflows risk losing context when the conversation is compacted. The document-as-cache pattern solves this: the output document itself stores the workflow's state.

| Component | Purpose |
| --- | --- |
| YAML front matter | Paths to input files, current stage status, timestamps |
| Draft sections | Progressive content built across stages |
| Status marker | Which stage is complete, for resumption |

Each stage reads the output document to restore context, does its work, and writes results back to the same document. If context compacts mid-workflow, the next stage recovers by reading the document and reloading the input files listed in front matter.

```markdown
---
title: "Analysis: Research Topic"
status: "analysis"
inputs:
- "{project_root}/docs/brief.md"
- "{project_root}/data/sources.json"
---
```
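Resumption then amounts to: read the output document, recover state from its front matter, and reload the listed inputs. A minimal hand-rolled parser for the exact front-matter shape shown above (a real skill would more likely use a YAML library):

```python
def parse_front_matter(doc: str) -> dict:
    """Recover workflow state from the output document itself.

    Handles only the 'key: value' and '- item' lines used in the
    front matter above; this is a sketch, not a YAML parser.
    """
    lines = doc.splitlines()
    assert lines[0] == "---", "document must start with front matter"
    state: dict = {}
    key = None
    for line in lines[1:]:
        if line == "---":  # end of front matter
            break
        if line.startswith("- ") and key:
            # List item belonging to the previous key (e.g. inputs)
            state[key].append(line[2:].strip().strip('"'))
        else:
            key, _, value = line.partition(":")
            key = key.strip()
            value = value.strip().strip('"')
            state[key] = value if value else []
    return state
```

A resuming stage would check `state["status"]` to find the last completed stage, then re-read every path in `state["inputs"]`.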

This avoids separate cache files, file collisions when running multiple workflows, and state synchronization complexity.

| Situation | Recommended Layer |
| --- | --- |
| Single-purpose utility with one path | Layer 1-2 |
| Skill with conditional reference data | Layer 2 |
| Skill that does multiple distinct things | Layer 3 |
| Skill with stages that depend on each other | Layer 3 + compaction survival |
| Strict sequential process, no skipping allowed | Layer 4 |
| Long-running workflow producing a document | Layer 3 + document-as-cache |