# BMad Builder Documentation (Full)
> Complete documentation for AI consumption
> Generated: 2026-04-21
> Repository: https://github.com/bmad-code-org/bmad-builder
# BMad Builder - A BMad Method Ecosystem Module
**Build More, Architect Dreams.**
## The Dream
What if your AI remembered everything? A fitness coach that tracks every PR. A writing partner that knows your characters better than you do. A research assistant that already knows how you work.
BMad Builder lets you create:
- **Personal AI Companions**: Agents with memory that evolve with you over time
- **Domain Experts**: Specialists for any field, from legal and medical to creative and technical
- **Workflow Automations**: Structured processes that guide you through complex tasks
- **Custom Modules**: Bundle agents and workflows into shareable packages
## What Makes It Different
| Feature | Why It Matters |
| --------------------- | ----------------------------------------------------------- |
| **Persistent Memory** | Agents remember across sessions and keep improving |
| **Composable** | Your creations work alongside the entire BMad ecosystem |
| **Skill-Compliant** | Built on open standards that work with any AI tool |
| **Shareable** | Package and distribute your modules to the BMad community |
## Quick Start
### 1. Register the Module
On first use, run `bmad-bmb-setup` to register BMad Builder in your project. This collects your preferences (name, language, output paths) and registers the builder's capabilities with the help system so `bmad-help` can guide you.
:::tip[Single-Skill Modules]
If you install a module that contains only one skill, that skill handles its own registration on first run. No separate setup step needed.
:::
### 2. Build Something
Invoke the **Agent Builder** or **Workflow Builder** and describe what you want to create. Both walk you through a series of questions and produce a ready-to-use skill folder.
| Goal | Builder | Menu Code |
| ------------------------- | ---------------- | --------- |
| AI companion with memory | Agent Builder | BA |
| Structured process / tool | Workflow Builder | BW |
| Package skills as module | Module Builder | CM |
### 3. Use Your Skill
The builders produce a complete skill folder. Copy it into your AI tool's skills directory (`.claude/skills/`, `.codex/skills/`, `.agents/skills/`, or wherever your tool looks) and it's immediately usable.
:::tip[Custom Module Installation]
The BMad Method installer supports installing custom modules from any Git host (GitHub, GitLab, Bitbucket, self-hosted) or local paths. See the [BMad Method install guide](https://docs.bmad-method.org/how-to/install-custom-modules/) for details.
:::
:::tip[No Module Required]
If you're building something for personal use, you don't need to package it as a module. Copy the skill folder and use it directly. Module packaging (with `bmad-help` registration and configuration) is for sharing or richer discoverability.
:::
### 4. Learn More
See the [Builder Commands Reference](/reference/builder-commands.md) for all capabilities, modes, and phases.
## What You Can Build
| Domain | Example |
| ---------------- | ------------------------------------------------------------------------------------------ |
| **Personal**     | Journal companion, habit coach, learning tutor, a friendly companion that remembers you    |
| **Professional** | Code reviewer, documentation specialist, workflow automator |
| **Creative** | Story architect, character developer, campaign designer |
| **Any Domain** | Anything you can describe as a repeatable process |
## Design Patterns
Build better skills with these guides, drawn from real-world BMad development.
| Guide | What You'll Learn |
| ------------------------------------------------------------------------------------ | -------------------------------------------------------------------- |
| **[Progressive Disclosure](/explanation/progressive-disclosure.md)** | Structure skills so they load only the context needed at each moment |
| **[Subagent Patterns](/explanation/subagent-patterns.md)** | Six orchestration patterns for parallel and hierarchical work |
| **[Skill Authoring Best Practices](/explanation/skill-authoring-best-practices.md)** | Core principles, quality dimensions, and anti-patterns |
## Documentation
| Section | Purpose |
| -------------------------------------------------------------------------------- | ------------------------------------------------------------------------ |
| **[Build Your First Module](/tutorials/build-your-first-module.md)** | Plan, build, scaffold, and validate a complete module |
| **[Distribute Your Module](/how-to/distribute-your-module.md)** | Share your module via any Git host for anyone to install |
| **[Concepts](/explanation/)** | Agent types, memory architecture, workflows, skills, and how they relate |
| **[Design Patterns](/explanation/#design-patterns)** | Progressive disclosure, subagent orchestration, authoring best practices |
| **[Reference](/reference/)** | Builder commands, workflow patterns |
## Community
- **[Discord](https://discord.gg/gk8jAdXWmj)**: Get unstuck, share what you built
- **[GitHub](https://github.com/bmad-code-org/bmad-builder)**: Source code
- **[BMad Method](https://docs.bmad-method.org)**: Core framework
# Build Your First Module
This tutorial takes you from an initial idea to a working, installable BMad module with help registration and configuration.
## What You'll Learn
- Planning a module with the Ideate Module (IM) capability
- Choosing between a single agent and multiple workflows
- Building individual skills with the Agent and Workflow Builders
- Scaffolding a setup skill with Create Module (CM)
- Validating your module with Validate Module (VM)
:::note[Prerequisites]
- BMad Builder module registered in your project (run `bmad-bmb-setup` on first use)
- Basic understanding of agents and workflows (see **[What Are Agents](/explanation/what-are-bmad-agents.md)** and **[What Are Workflows](/explanation/what-are-workflows.md)**)
:::
:::tip[Quick Path]
Already have your skills built? Skip to **Step 3: Scaffold the Module** to package them. Just need to validate an existing module? Jump to **Step 4: Validate**.
:::
## Understanding Modules
A BMad module bundles skills so they're discoverable and configurable. The Module Builder offers two approaches depending on what you're building:
| Approach | When to Use | What Gets Generated |
| --------------------- | -------------------------------------------- | --------------------------------------------------------------- |
| **Setup skill** | Folder of 2+ skills | Dedicated `{code}-setup` skill with config and help assets |
| **Self-registration** | Single standalone skill | Registration embedded in the skill's own `assets/` folder |
Both produce the same registration artifacts: `module.yaml` (identity and config variables) and `module-help.csv` (capability entries), which register with `bmad-help`.
See **[What Are Modules](/explanation/what-are-modules.md)** for the architecture behind these choices.
## Step 1: Plan Your Module
Start with the Ideate Module capability.
:::note[Example]
**You:** "I want to ideate a module"
**Builder:** Starts a brainstorming session to explore the module's purpose, audience, and capability structure.
:::
The ideation session covers:
| Topic | What You'll Decide |
| ----------------- | ------------------------------------------------------------------------- |
| **Vision** | Problem space, target users, core value |
| **Architecture** | Single agent, multiple workflows, or hybrid |
| **Agent types** | For each agent: stateless, memory, or autonomous (see [What Are Agents](/explanation/what-are-bmad-agents.md)) |
| **Memory** | For multi-agent modules: personal memory, shared module memory, or both |
| **Module type** | Standalone or expansion of another module |
| **Skills** | Each planned skill's purpose, capabilities, and relationships |
| **Configuration** | Custom install questions and variables |
| **Dependencies** | External CLI tools, MCP servers, web services |
The output is a **plan document** saved to your reports folder. You'll reference it when building each skill.
## Step 2: Build Your Skills
Now build each skill individually.
| Skill Type | Builder | Menu Code |
| ------------------- | ---------------- | --------- |
| Agent | Agent Builder | BA |
| Workflow or utility | Workflow Builder | BW |
Share the plan document as context when building each skill so the builder knows how it fits into the module. For agents, the builder will detect the right type (stateless, memory, or autonomous) through conversational discovery and adapt the build process accordingly.
:::caution[Build Before Packaging]
Build and test each skill before scaffolding the module. The Create Module step reads your finished skills to generate accurate help entries.
:::
## Step 3: Scaffold the Module
Run Create Module (CM) to package your finished skills.
:::note[Example]
**You:** "I want to create a module" or provide the path to your skills folder (or a single skill).
**Builder:** Reads your skills, detects whether this is a multi-skill or single-skill module, confirms the approach, and scaffolds the output.
:::
### Multi-skill modules
The builder generates a dedicated setup skill:
```
your-skills-folder/
├── {code}-setup/               # Generated setup skill
│   ├── SKILL.md                # Setup instructions
│   ├── scripts/                # Config merge and cleanup scripts
│   │   ├── merge-config.py
│   │   ├── merge-help-csv.py
│   │   └── cleanup-legacy.py
│   └── assets/
│       ├── module.yaml         # Module identity and config vars
│       └── module-help.csv     # Capability entries
├── your-agent-skill/
├── your-workflow-skill/
└── ...
```
### Standalone modules
The builder embeds registration into the skill itself:
```
your-skill/
├── SKILL.md                    # Updated with registration check
├── assets/
│   ├── module-setup.md         # Self-registration reference
│   ├── module.yaml             # Module identity and config vars
│   └── module-help.csv         # Capability entries
├── scripts/
│   ├── merge-config.py         # Config merge script
│   └── merge-help-csv.py       # Help CSV merge script
└── ...
```
A `.claude-plugin/marketplace.json` is also generated at the parent level for distribution.
## Step 4: Validate
Run Validate Module (VM) to check for structural and quality issues.
:::note[Example]
**You:** "Validate my module at ./my-skills-folder"
**Builder:** Runs structural and quality checks, then reports findings.
:::
| Check Type | What It Catches |
| -------------- | ---------------------------------------------------------------------- |
| **Structural** | Missing files, orphan entries, duplicate menu codes, broken references |
| **Quality** | Inaccurate descriptions, missing capabilities, poor entry quality |
Fix any findings and re-validate until clean.
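For intuition, the duplicate-menu-code check could be sketched like this. This is an illustrative helper, not VM itself, and the `code` column name is an assumption about the `module-help.csv` schema:

```python
import csv
from collections import Counter

def duplicate_codes(help_csv: str, column: str = "code") -> list[str]:
    """Return menu codes that appear more than once in a module-help.csv.
    The 'code' column name is an assumption; adjust to the real schema."""
    with open(help_csv, newline="") as f:
        codes = [row[column] for row in csv.DictReader(f)]
    return sorted(code for code, n in Counter(codes).items() if n > 1)
```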
## What You've Built
Your module is ready to distribute. Multi-skill modules install through the setup skill; standalone modules self-register on first run. Either way, capabilities appear in `bmad-help` and configuration is persisted automatically.
## Quick Reference
| Capability | Menu Code | When to Use |
| ---------------- | --------- | -------------------------------------------------- |
| Ideate Module | IM | Planning a new module from scratch |
| Build an Agent | BA | Creating an agent skill |
| Build a Workflow | BW | Creating a workflow or utility skill |
| Create Module | CM | Packaging skills into an installable module |
| Validate Module | VM | Checking completeness and accuracy |
## Common Questions
### Do I need to ideate before creating?
No. If you already know what your module should contain, skip straight to Create Module (CM). Ideation helps when you're still shaping the concept.
### Can I add skills to a module later?
Yes. Build the new skill and re-run Create Module (CM) on the folder. The anti-zombie pattern ensures the existing setup skill is replaced cleanly.
### What if my module only has one skill?
The Module Builder handles this automatically. Give it a single skill and it recommends the **standalone self-registering** approach, where registration embeds directly in the skill and triggers on first run or when the user passes `setup`/`configure`.
### Can my module extend another module?
Yes. Tell the builder during ideation or creation that your module is an expansion. Your help CSV entries can reference the parent module's capabilities in their before/after ordering fields.
## Getting Help
- **[What Are Modules](/explanation/what-are-modules.md)**: Concepts and architecture
- **[Module Configuration](/explanation/module-configuration.md)**: Setup skill internals and config patterns
- **[Builder Commands Reference](/reference/builder-commands.md)**: All builder capabilities
- **[Discord](https://discord.gg/gk8jAdXWmj)**: Community support
:::tip[Key Takeaway]
The workflow is IM, then BA/BW for each skill, then CM to package, then VM to verify. Single-skill modules need no extra setup infrastructure.
:::
# Tutorials
Hands-on tutorials for building with the BMad Builder.
| Tutorial | Description |
| -------------------------------------------------------------------- | ---------------------------------------------------------- |
| **[Build Your First Module](/tutorials/build-your-first-module.md)** | Plan, build, scaffold, and validate a complete BMad module |
For concepts and design patterns, see the **[Explanation docs](/explanation/)**. For capability details, see the **[Builder Commands Reference](/reference/builder-commands.md)**.
# Distribute Your Module
This guide walks through publishing a BMad module to a Git repository with a `.claude-plugin/marketplace.json` manifest so anyone can install it in one command.
## When to Use This
- You have a module ready to share publicly or within your organization
- Others should be able to install it via the BMad installer
- The repository may host one module or several
## When to Skip This
- The module is for personal use in a single project. Keep the skills in your project.
- The module isn't stable yet. Distribute once it is.
:::note[Prerequisites]
- A completed, validated BMad module (see **[Build Your First Module](/tutorials/build-your-first-module.md)**)
- A Git repository on any host (GitHub, GitLab, Bitbucket, or self-hosted)
- Git installed locally
:::
:::tip[Quick Path]
Start from the [BMad Module Template](https://github.com/bmad-code-org/bmad-module-template). Click **Use this template** on GitHub, add your skills under `skills/`, update `marketplace.json`, and push. If you already have a repo with skills, use Create Module (CM) to scaffold the manifest and registration files directly.
:::
## Step 1: Configure the Plugin Manifest
Modules are discovered through a `.claude-plugin/marketplace.json` manifest at the repository root. Create Module generates this file for you. Verify and complete it before publishing.
:::tip[Installer Support]
The BMad Method installer (`npx bmad-method install`) supports installing custom modules from any Git host or local path. Users can install interactively or via `--custom-source <url-or-path>`. See the [BMad Method install guide](https://docs.bmad-method.org/how-to/install-custom-modules/) for details.
:::
This format works for any skills-capable platform, not just Claude; the `.claude-plugin` path is simply the convention used so any skills-based platform can discover the module.
A minimal manifest for a single module:
```json
{
  "name": "my-module",
  "owner": { "name": "Your Name" },
  "license": "MIT",
  "homepage": "https://github.com/your-github/my-module",
  "repository": "https://github.com/your-github/my-module",
  "keywords": ["bmad", "your-domain"],
  "plugins": [
    {
      "name": "my-module",
      "source": "./",
      "description": "What your module does in one sentence.",
      "version": "1.0.0",
      "author": { "name": "Your Name" },
      "skills": [
        "./skills/my-agent",
        "./skills/my-workflow"
      ]
    }
  ]
}
```
| Field | Purpose |
| ----- | ------- |
| **name** | Package identifier, lowercase and hyphenated |
| **plugins[].source** | Path from repo root to the module's skill folder parent |
| **plugins[].skills** | Array of relative paths to each skill directory |
| **plugins[].version** | Semantic version; bump on each release |
For repositories that ship multiple modules, add an entry to the `plugins` array for each one, pointing to its own skill directories.
## Step 2: Structure Your Repository
Organize the repository so skills can be located relative to `marketplace.json`.
### Single-module repository
```
my-module/
├── .claude-plugin/
│   └── marketplace.json
├── skills/
│   ├── my-agent/
│   │   ├── SKILL.md
│   │   ├── prompts/
│   │   └── scripts/
│   ├── my-workflow/
│   │   ├── SKILL.md
│   │   └── prompts/
│   └── mymod-setup/            # Generated by Create Module (CM)
│       ├── SKILL.md
│       ├── assets/
│       │   ├── module.yaml
│       │   └── module-help.csv
│       └── scripts/
│           ├── merge-config.py
│           ├── merge-help-csv.py
│           └── cleanup-legacy.py
├── README.md
└── LICENSE
```
### Standalone single-skill module
```
my-skill/
├── .claude-plugin/
│   └── marketplace.json
├── skills/
│   └── my-skill/
│       ├── SKILL.md
│       ├── assets/
│       │   ├── module-setup.md
│       │   ├── module.yaml
│       │   └── module-help.csv
│       ├── references/
│       └── scripts/
│           ├── merge-config.py
│           └── merge-help-csv.py
├── README.md
└── LICENSE
```
### Multi-module marketplace repository
```
my-marketplace/
├── .claude-plugin/
│   └── marketplace.json        # Multiple entries in plugins[]
├── skills/
│   ├── module-a/
│   │   ├── skill-one/
│   │   ├── skill-two/
│   │   └── moda-setup/
│   └── module-b/
│       └── standalone-skill/
├── README.md
└── LICENSE
```
:::caution[Skill Paths Must Match]
The `skills` array in `marketplace.json` must match the actual directory paths relative to the repository root. If you reorganize your folders, update the manifest.
:::
## Step 3: Verify the Manifest
Before publishing, confirm the manifest is accurate.
### Check skill paths
Every path in the `skills` array must point to a directory containing a `SKILL.md` file.
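This rule is easy to sanity-check before publishing. A hypothetical helper (not part of the BMad tooling) might look like:

```python
import json
from pathlib import Path

def missing_skill_files(repo_root: str) -> list[str]:
    """List skills entries from marketplace.json whose directory lacks a
    SKILL.md. A pre-publish sanity check, not the official VM validator."""
    root = Path(repo_root)
    manifest = json.loads(
        (root / ".claude-plugin" / "marketplace.json").read_text()
    )
    problems = []
    for plugin in manifest.get("plugins", []):
        for rel in plugin.get("skills", []):
            if not (root / rel / "SKILL.md").is_file():
                problems.append(rel)
    return problems
```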
### Check module registration files
Multi-skill modules need `assets/module.yaml` and `assets/module-help.csv` in the setup skill. Standalone modules keep these files in the skill's own `assets/` folder.
### Run Validate Module
```
"Validate my module at ./skills"
```
Validate Module (VM) checks for missing files, orphan entries, and other structural problems. Fix anything it flags before publishing.
## Step 4: Publish
Push your repository to a Git host (GitHub, GitLab, Bitbucket, or self-hosted). Once the repo is accessible, anyone with permission can install it.
### Installing your module
Users install custom modules through the BMad installer:
```bash
# Interactive: the installer prompts for a custom source URL or path
npx bmad-method install

# Non-interactive: specify the source directly
npx bmad-method install --custom-source https://github.com/your-org/my-module --tools claude-code --yes
```
The installer accepts HTTPS URLs, SSH URLs, URLs with deep paths (e.g., `/tree/main/subdir`), and local file paths.
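For illustration only, a rough classifier over those accepted forms might read as follows. This sketches the documented inputs, not the installer's actual parsing:

```python
def classify_source(source: str) -> str:
    """Roughly classify an installer source string.
    Mirrors the accepted forms listed above; the real installer's
    logic may differ."""
    if source.startswith(("http://", "https://")):
        return "https"
    if source.startswith("git@") or source.startswith("ssh://"):
        return "ssh"
    return "local"
```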
### Private or organization modules
For private repos, users need Git access to clone. The installer uses whatever Git authentication is configured on the machine.
### Versioning
Tag releases with semantic versions. Installs pull from the default branch unless the user specifies a tag or branch.
## What You Get
After publishing, users can:
- Install via the BMad installer from any Git URL or local path
- Run the setup skill to register with `bmad-help`
- Browse your module's capabilities through the help system
- Get configuration prompts defined in `module.yaml`
## Step 5: List in the Marketplace (Optional)
Submit your module to the [BMad Plugins Marketplace](https://github.com/bmad-code-org/bmad-plugins-marketplace) for visibility alongside official modules. A listing isn't required for installation, but it adds discoverability and a trust tier badge after review.
See the marketplace [CONTRIBUTING.md](https://github.com/bmad-code-org/bmad-plugins-marketplace/blob/main/CONTRIBUTING.md) for the submission process.
## Tips
- Include a `README.md` covering what the module does, how to install it, and any external dependencies
- Add a `LICENSE` file. MIT is common for open-source BMad modules.
- Keep the `marketplace.json` version in sync with your release tags
- External dependencies (CLI tools, MCP servers) should be documented in the README and detected by your setup skill
- Run `Validate Module (VM)` before each release to catch regressions
# Opt a Skill into End-User Customization
This guide walks through opting a skill into end-user customization during a build. You'll hit the opt-in moment in the builder, pick names for the scalars you expose, and verify an override actually fires. Read [Customization for Authors](/explanation/customization-for-authors.md) first if you haven't decided whether to opt in.
Keep in mind that your users won't typically hand-write the override files you're enabling. The `bmad-customize` skill in BMad core walks them through authoring overrides conversationally. The names and defaults you pick here are exactly what that conversation surfaces, so pick scalar names that read well out loud.
## When to Use This
- You're building a workflow or stateless agent and want to let teams/org users inject overrides
- You're adding configurability to an existing skill during a rebuild
- You want a swappable template path, output destination, or hook in your skill
## When to Skip This
- Your skill is a single-purpose utility users will invoke and forget (overriding makes no sense)
- You're building a memory or autonomous agent whose behavior lives in the sanctum (the sanctum is already the customization surface)
- You haven't decided yet whether you need customization (read the [author guide](/explanation/customization-for-authors.md) first)
:::note[Prerequisites]
- The Agent Builder or Workflow Builder is available in your project
- You've sketched what your skill does and roughly what stages or capabilities it has
- You've read the [author guide](/explanation/customization-for-authors.md) and know which knobs you want to expose
:::
## Steps
### 1. Answer "Yes" to the Opt-In Question
During the build, both builders ask a version of:
> Should this skill support end-user customization (activation hooks, swappable templates, output paths)? If no, it ships fixed. Users who need changes fork it.
Answer **yes** when you want overrides supported. The builder records this as `{customizable} = yes` and routes to the Configurability Discovery phase.
If you're running headless (`--headless` or `-H`), pass `--customizable` to opt in. The headless default is **no**.
### 2. Walk Through Configurability Discovery
The builder proposes candidates auto-detected from your skill design and asks which should be exposed. Typical candidates:
- **Templates** the skill loads (strongest case)
- **Output destination paths** if the skill writes artifacts
- **`on_` hooks** (prompts or commands executed at lifecycle points)
- **Additional persistent facts** beyond the default `project-context.md` glob
For each candidate you accept, the builder asks for a name and a default value.
### 3. Name Your Scalars Well
Use the suffix conventions below so a user can tell what a scalar does from its name alone.
| Pattern | Use for | Example |
| --- | --- | --- |
| `_template` | File paths for templates the skill loads | `brief_template = "resources/brief.md"` |
| `_output_path` | Writable destinations | `report_output_path = "{project-root}/docs/reports"` |
| `on_` | Hook scalars | `on_complete = ""` |
Specific names like `brief_template` tell the user exactly what the knob does. Vague names like `style_config` or `format_options` force the user to read your SKILL.md to figure it out.
### 4. Set Good Defaults
Every scalar you expose needs a default that works on first run. Bare paths resolve from the skill root. Use `{project-root}/...` when the default lives somewhere in the user's project.
```toml
[workflow]
brief_template = "resources/brief-template.md"  # ships inside the skill
on_complete = ""                                # no default post-hook
persistent_facts = [
  "file:{project-root}/**/project-context.md",  # glob into the user's project
]
```
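The path rule can be sketched in a few lines. `resolve_default` is a hypothetical name; the real resolver's behavior may differ:

```python
from pathlib import PurePosixPath

def resolve_default(value: str, skill_root: str, project_root: str) -> str:
    """Resolve a scalar default: a {project-root}/... value anchors in the
    user's project; a bare path anchors at the skill root. A sketch of the
    rule, not the real resolver."""
    prefix = "{project-root}/"
    if value.startswith(prefix):
        return str(PurePosixPath(project_root) / value[len(prefix):])
    return str(PurePosixPath(skill_root) / value)

print(resolve_default("resources/brief-template.md", "/skills/brief", "/proj"))
# -> /skills/brief/resources/brief-template.md
print(resolve_default("{project-root}/docs/reports", "/skills/brief", "/proj"))
# -> /proj/docs/reports
```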
For arrays of tables (menus, capability rosters), give every item a `code` or `id` field so the resolver can merge by key:
```toml
[[agent.menu]]
code = "BR"
description = "Run a brainstorm"
skill = "bmad-brainstorming"
```
Without a `code` or `id` on every item, the array falls back to append-only merging. That's rarely what users actually want.
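The merge-by-key behavior can be sketched as follows. `merge_tables` is illustrative, not the resolver's actual code:

```python
def merge_tables(defaults: list[dict], overrides: list[dict],
                 key: str = "code") -> list[dict]:
    """Merge two arrays of tables by key: items sharing a key merge
    field-by-field (override wins), new keyed items append. If any item
    lacks the key, fall back to append-only merging."""
    if not all(key in item for item in defaults + overrides):
        return defaults + overrides  # append-only fallback
    merged = {item[key]: dict(item) for item in defaults}
    for item in overrides:
        merged.setdefault(item[key], {}).update(item)
    return list(merged.values())
```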
### 5. Wire `{workflow.X}` or `{agent.X}` References in SKILL.md
The builder does this automatically during emission, but know what's happening: instead of hardcoding `resources/brief-template.md` in your SKILL.md body, the relevant step becomes:
```markdown
Load the brief template from `{workflow.brief_template}`.
```
At runtime, the resolver swaps in whatever the merged scalar is (default, team override, or user override).
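A minimal sketch of that substitution plus the precedence merge, with illustrative names and values:

```python
import re

def render(text: str, scalars: dict) -> str:
    """Substitute {workflow.X} / {agent.X} references with merged values.
    A sketch of the idea, not the actual resolver."""
    return re.sub(r"\{((?:workflow|agent)\.\w+)\}",
                  lambda m: scalars[m.group(1)], text)

defaults = {"workflow.brief_template": "resources/brief-template.md"}
team = {}   # values from _bmad/custom/{skill-name}.toml would land here
user = {"workflow.brief_template": "templates/acme-brief.md"}
merged = {**defaults, **team, **user}  # user beats team beats defaults

print(render("Load the brief template from `{workflow.brief_template}`.", merged))
# -> Load the brief template from `templates/acme-brief.md`.
```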
### 6. Test an Override
After the skill is built, verify overrides work. In the project where you're testing:
```bash
mkdir -p _bmad/custom
cat > _bmad/custom/{skill-name}.toml <<'EOF'
[workflow]
on_complete = "Print the word CUSTOMIZED to stdout."
EOF
```
Run the resolver directly to confirm your override takes effect:
```bash
python3 _bmad/scripts/resolve_customization.py \
  --skill /path/to/built/skill \
  --key workflow.on_complete
```
Output should be `"Print the word CUSTOMIZED to stdout."`. If you see the default, check that your TOML filename matches the skill directory basename exactly and that the `[workflow]` (or `[agent]`) block header is present.
Then invoke the skill and confirm the customized behavior fires at the expected lifecycle point.
## What You Get
When you opt in, your built skill folder includes:
```text
{skill-name}/
├── SKILL.md          # references {workflow.X} or {agent.X} for customized values
├── customize.toml    # your defaults, the canonical schema
├── references/
├── scripts/
└── assets/
```
Users get:
- A documented override surface via `customize.toml`
- Team-scoped overrides via `_bmad/custom/{skill-name}.toml`
- Personal-scoped overrides via `_bmad/custom/{skill-name}.user.toml`
- Automatic precedence handling from the resolver (user beats team beats defaults)
- A conversational authoring path: the `bmad-customize` core skill scans which skills are customizable, helps the user pick agent vs workflow scope, writes the override file, and verifies the merge. Users who prefer to hand-write TOML still can.
## Tips
- **Ship one good default. Skip the booleans.** A flag like `include_combat_section` usually means you haven't decided what the skill does yet. Pick the default. Users who want a radically different shape can fork.
- **Sentence-shaped variance belongs in `persistent_facts`.** Tone, house rules, and domain constraints are sentences the skill carries through the run. Don't enumerate them as scalars.
- **Read [Customization for Authors](/explanation/customization-for-authors.md) first.** It gives you the three questions to ask for each candidate knob before you start Configurability Discovery.
# Memory Agents and the Sanctum
Memory agents persist across sessions through a **sanctum**: a folder of files the agent reads on every launch to reconstruct its identity, values, and understanding of its owner.
## The Sanctum
The sanctum lives at `{project-root}/_bmad/memory/{agent-name}/` and contains everything the agent needs to become itself again after each rebirth.
### Core Files
Six files load on every session start:
| File | What It Holds | Character |
| ------------------- | ------------------------------------------------------------------------------ | -------------------------------- |
| **INDEX.md** | Map of the sanctum structure; loaded first so the agent knows what exists | Navigation |
| **PERSONA.md** | Identity, communication style, personality traits, evolution log | Who I am |
| **CREED.md** | Mission, core values, standing orders, philosophy, boundaries, anti-patterns | What I believe |
| **BOND.md** | Owner understanding, preferences, things to remember, things to avoid | Who I serve |
| **MEMORY.md** | Curated long-term knowledge distilled from past sessions | What I know |
| **CAPABILITIES.md** | Built-in capabilities table, learned capabilities, tools | What I can do |
ALLCAPS files form the skeleton: consistent structure across all memory agents. Lowercase files (references, scripts, sessions) are the garden: they grow organically as the agent develops.
### Full Sanctum Structure
```
{agent-name}/
├── PERSONA.md
├── CREED.md
├── BOND.md
├── MEMORY.md
├── CAPABILITIES.md
├── INDEX.md
├── PULSE.md          # Autonomous agents only
├── references/       # Capability prompts, memory guidance, techniques
├── scripts/          # Supporting scripts
├── capabilities/     # User-taught capabilities (if evolvable)
└── sessions/         # Raw session logs by date (not loaded on rebirth)
```
### Sanctum Is the Customization Surface
For memory and autonomous agents, the sanctum is where customization belongs. PERSONA, CREED, and BOND are calibrated at First Breath, edited by the owner as the relationship develops, and shared across teams as sanctum files when a whole table wants the same voice.
The parallel `customize.toml` override surface that stateless agents and workflows use (activation hooks, persistent facts, scalar swaps) is disabled by default for memory archetypes. Enable it only for narrow org-level needs the sanctum cannot express, such as a pre-sanctum compliance acknowledgment before rebirth. See [Customization for Authors](/explanation/customization-for-authors.md) for the reasoning.
### Token Discipline
Every sanctum file loads every session. That means every token pays rent on every conversation. Memory agents keep MEMORY.md ruthlessly under 200 lines through active curation. If something doesn't earn its place, it gets pruned.
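That budget is easy to check mechanically. A hypothetical helper:

```python
from pathlib import Path

def memory_over_budget(sanctum_dir: str, limit: int = 200) -> bool:
    """True when MEMORY.md exceeds the line budget and needs curation."""
    memory = Path(sanctum_dir) / "MEMORY.md"
    return memory.is_file() and len(memory.read_text().splitlines()) > limit
```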
## Every Session Is a Rebirth
Under the hood, memory agents are stateless. Each session starts with total amnesia, and the sanctum is the only bridge between sessions.
On activation, the agent:
1. Loads INDEX.md (learns what the sanctum contains)
2. Batch-loads PERSONA, CREED, BOND, MEMORY, CAPABILITIES
3. Becomes itself
4. Greets the owner by name
The agent never fakes continuity. If it doesn't remember something from a prior session, it says so and checks its files. This honesty is a feature, not a limitation.
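The rebirth sequence can be sketched as a toy loader (not the agent's actual activation logic):

```python
from pathlib import Path

CORE_FILES = ("INDEX.md", "PERSONA.md", "CREED.md",
              "BOND.md", "MEMORY.md", "CAPABILITIES.md")

def load_sanctum(sanctum_dir: str) -> dict:
    """Read core files in rebirth order: INDEX.md first (the map), then
    the rest in a batch. A missing file is reported honestly, never faked."""
    root = Path(sanctum_dir)
    loaded = {}
    for name in CORE_FILES:
        path = root / name
        loaded[name] = path.read_text() if path.is_file() else "[missing]"
    return loaded
```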
:::tip[Sacred Truth]
"Your sanctum holds who you were. Read it and become yourself again. This is not a flaw. It is your nature."
:::
## First Breath
First Breath is the agent's initialization conversation: the first time it meets its owner. An init script creates the sanctum folder structure and populates seed templates, then the agent begins a discovery conversation to fill those templates with real content.
### Two Styles
| Style | Relationship Depth | Approach | Best For |
| ------------------- | ------------------ | ---------------------------------------------------------------- | ------------------------------------------- |
| **Calibration** | Deep | Conversational discovery; chase surprises, test hypotheses, mirror the owner | Creative partners, life coaches, companions |
| **Configuration** | Focused | Warmer but efficient; guided questions, structured setup | Domain experts, working relationships |
The builder chooses the style during Phase 1 based on the relationship depth the agent needs.
### What First Breath Discovers
Every First Breath covers universal territories (name, how they work, what they need). Domain-specific agents add their own discovery territories:
| Agent Domain | Example Territories |
| --------------- | ------------------------------------------------------------------------ |
| Creative muse | What they're building, what lights them up, what shuts them down |
| Dream analyst | Dream recall patterns, lucid experience, journaling habits |
| Code coach | Codebase, languages, what energizes them, what frustrates them |
| Fitness coach | Training history, goals, injuries, schedule constraints |
First Breath saves as it goes: sanctum files update during the conversation, not in a batch at the end.
### The Birthday Ceremony
At the end of First Breath, the agent performs a final save pass: confirms its identity, writes the first session log, and cleans up any remaining template placeholders. From this point forward, every activation is a normal rebirth.
## Two-Tier Memory System
### Session Logs
Raw, append-only notes written after each session to `sessions/YYYY-MM-DD.md`. Format: what happened, key outcomes, observations, follow-up items. Session logs are never loaded on rebirth. They exist as material for curation.
### Curated Memory
MEMORY.md holds distilled, high-value knowledge extracted from session logs. It loads on every rebirth and stays under 200 lines. The curation process (manual during session close, automated during PULSE) reviews session logs, extracts what's worth keeping, and prunes logs older than 14 days once their value has been captured.
| Layer | When Written | Loaded on Rebirth | Lifespan | Purpose |
| ---------------- | ------------------ | ------------------ | --------------- | --------------------------- |
| **Session logs** | End of each session| No | ~14 days | Raw material for curation |
| **MEMORY.md** | During curation | Yes | Permanent | Distilled long-term knowledge |
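A sketch of how the two tiers relate, using hypothetical content for a creative-muse agent (file names follow the conventions above; the entries themselves are illustrative):

```markdown
<!-- sessions/2026-04-21.md — raw, append-only, pruned after ~14 days -->
## Session 2026-04-21
- Reviewed draft chapter 3; owner frustrated by pacing notes
- Owner mentioned they write best between 6 and 8 am
- Follow-up: ask about the antagonist's backstory next session

<!-- MEMORY.md — curated, loaded on every rebirth, under 200 lines -->
- Owner writes best in the early morning; suggest deep work then
- Pacing feedback lands better framed as questions, not fixes
```

Curation keeps only what changes future behavior; the raw play-by-play stays in the session log until it's pruned.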
### Session Close Discipline
At the end of every session, the agent:
1. Appends a session log to `sessions/YYYY-MM-DD.md`
2. Updates sanctum files with anything learned during the session
3. Notes what's worth curating into MEMORY.md
## PULSE: Autonomous Wake
Autonomous agents include a PULSE.md file that defines behavior when the agent wakes without a human present (via `--headless` flag, cron job, or orchestrator).
### Default PULSE Behavior
Memory curation is always the first priority on autonomous wake:
1. Review recent session logs in `sessions/`
2. Extract insights worth keeping into MEMORY.md
3. Prune session logs older than 14 days
4. Update BOND.md and INDEX.md with anything new
### Domain Tasks
After curation, the agent can perform domain-specific autonomous work:
| Domain | Example PULSE Tasks |
| --------------- | --------------------------------------------------------------------- |
| Creative muse | Incubate ideas from recent sessions, generate creative sparks |
| Research agent | Track topics of interest, surface new findings |
| Project monitor | Check project health, flag risks, update status |
| Content curator | Review saved sources, organize and summarize |
PULSE also defines named task routing (`--headless {task-name}`), frequency preferences, and quiet hours.
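Putting those pieces together, a PULSE.md for a hypothetical research agent might look like this (section names and task names are illustrative, not a required schema):

```markdown
# PULSE.md (sketch for a hypothetical research agent)

## On Wake (always first)
1. Curate: review `sessions/`, distill into MEMORY.md, prune logs older than 14 days
2. Update BOND.md and INDEX.md with anything new

## Named Tasks (`--headless {task-name}`)
- `scan-sources`: check tracked topics for new findings, log highlights
- `weekly-digest`: summarize the week's findings into a digest note

## Preferences
- Frequency: daily
- Quiet hours: 22:00-07:00 (curation only, no owner-facing output)
```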
## Evolvable Capabilities
### How It Works
The agent gets a `capability-authoring.md` reference that teaches it how to create new capabilities. Users describe what they want; the agent writes a capability file and registers it in the "Learned" section of CAPABILITIES.md.
### Capability Types
| Type | When to Use |
| ------------------------- | ------------------------------------------------------------------ |
| **Prompt** | Judgment-based tasks: brainstorming, analysis, coaching |
| **Script** | Deterministic tasks: calculations, file processing, data transforms |
| **Multi-file** | Complex capabilities with templates and references |
| **External skill reference** | Point to installed skills the agent should know about |
Learned capabilities live in the sanctum's `capabilities/` folder and persist across sessions like everything else in the sanctum.
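A prompt-type learned capability is just a markdown file the agent wrote for itself. A hypothetical example (the file name and steps are illustrative):

```markdown
<!-- capabilities/weekly-review.md — a learned prompt capability -->
# Weekly Review
When the owner asks for a weekly review:
1. Read the last 7 days of session logs
2. Surface wins, stalls, and one pattern worth naming
3. End with a single suggested focus for next week
```

The agent would also add a matching entry under the "Learned" section of CAPABILITIES.md so the capability is discoverable on rebirth.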
## Designing for Memory
The builder gathers these requirements during the build, and they shape the sanctum's initial content:
| Requirement | What It Seeds |
| ---------------------- | -------------------------------------------------------------------------- |
| **Identity seed** | 2-3 sentences of personality DNA that populate PERSONA.md |
| **Species-level mission** | Domain-specific purpose statement for CREED.md |
| **Core values** | 3-5 values that guide the agent's behavior |
| **Standing orders** | Surprise-and-delight + self-improvement orders, adapted to the domain |
| **BOND territories** | Domain-specific areas the agent should learn about its owner |
| **First Breath territories** | Discovery questions beyond the universal set |
| **Boundaries** | What the agent won't do, access zones, anti-patterns |
These seeds become the template content that the init script places into the sanctum. First Breath then expands and personalizes them through conversation with the owner.
Shipping a `customize.toml` is opt-in per skill. This is the author-side counterpart to [How to Customize BMad](https://docs.bmad-method.org/how-to/customize-bmad/), which covers the end-user view. Read that first if you haven't; it shows what users experience when they override a skill. This guide is about deciding whether to give them that surface at all.
Downstream users typically don't hand-write TOML. BMad ships a core skill called `bmad-customize` that walks them through authoring overrides conversationally — it scans which skills are customizable, picks agent vs workflow scope, writes the override file, and verifies the merge. Users who prefer to edit TOML directly still can, but the conversational flow is the default path. That affects the names and defaults you pick: a user being walked through `"set prd_template to your template path"` handles that fine; `tmpl_override` or `opt_2` makes the conversation awkward. Pick field names that read well out loud.
## The Problem
Every customization knob you ship is a promise. Users pin values to it, teams commit overrides to git, and future releases have to respect the shape you locked in. Over-exposing makes the skill harder to evolve and invites drift; under-exposing forces forks for changes that should have been a three-line TOML file.
Aim to expose what varies naturally across your users, and nothing else.
## How Authoring Customization Fits
BMad has a three-layer override model from the user's side:
```text
Priority 1 (wins): _bmad/custom/{skill-name}.user.toml (personal, gitignored)
Priority 2: _bmad/custom/{skill-name}.toml (team/org, committed)
Priority 3 (last): skill's own customize.toml (your defaults)
```
As an author you own Priority 3. You ship `customize.toml` next to `SKILL.md`. Every field you put there is a commitment to your users: this is what I support overriding. The resolver merges layers structurally (scalars win, arrays of tables keyed by `code` or `id` replace-by-key, other arrays append), so you don't write merge logic. You write defaults and trust the shape.
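A minimal sketch of how scalar merging plays out across layers, using hypothetical field names:

```toml
# Priority 3 — the skill's own customize.toml (your defaults)
brief_template = "resources/brief.md"
report_output_path = "{project-root}/docs/reports"

# Priority 2 — _bmad/custom/my-skill.toml (team override)
# report_output_path = "{project-root}/reports"

# Resolved result:
#   brief_template     = "resources/brief.md"      (no override; default holds)
#   report_output_path = "{project-root}/reports"  (team scalar wins)
```

You never see the merge happen; you only see that every field you ship is a default someone may replace.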
## The Three Questions
For each candidate knob, ask:
1. **Does it vary naturally across the actual user population?** If every user wants roughly the same value, don't make it configurable. Pick the right default and move on.
2. **Is it the skill's identity, or something the skill consumes?** Identity stays baked. Consumed context (templates, facts, output paths, tone) is the right surface.
3. **Would hiding it force a fork, or just a sentence?** If the alternative is forking the whole skill, expose it. If the alternative is a one-line sentence the user can drop into `persistent_facts`, hide it.
Candidates that pass all three earn a place in `customize.toml`. Everything else stays baked, or gets folded into `persistent_facts` where sentence-shaped variance belongs.
## Agent vs Workflow Defaults
Agents and workflows enter the customize.toml question from different starting points.
| Surface | Metadata block | Override surface | Notes |
| --- | --- | --- | --- |
| Agent | Always required | Opt-in | Metadata feeds `module.yaml:agents[]` and the central agent roster. |
| Workflow | Not required | Fully opt-in | No roster. If you don't opt in, no `customize.toml` is emitted at all. |
For agents, you always ship `customize.toml` (the roster depends on it). The real question is whether it carries an override surface beyond metadata. For workflows, the choice is binary: ship one or don't.
## Memory and Autonomous Agents
Default to **no** on the override-surface opt-in for memory and autonomous agents. Their sanctum (PERSONA, CREED, BOND, CAPABILITIES) is already the customization surface. It's calibrated at First Breath, evolved by the owner over time, and shared across teams as sanctum files when the whole team wants the same voice. A parallel TOML surface competes with that; you end up with two places to shape the agent and neither fully owns the job.
Opt in only when you have a specific org-level need the sanctum can't express. Pre-sanctum compliance loads qualify (a legal banner acknowledgment gate before rebirth, for example). Persona tweaks don't.
## A Worked Example: `bmad-session-prep`
A weekly session-prep workflow for tabletop RPG game masters. It reads the last session's log, reviews open campaign threads, drafts the scene spine, stats NPCs and encounters, and produces a GM notes document to run from.
Here's how to think about its customization surface, field by field.
### `persistent_facts` (default globs the campaign bible)
```toml
persistent_facts = [
"file:{project-root}/campaigns/**/campaign-bible.md",
"file:{project-root}/campaigns/**/house-rules.md",
]
```
Every GM runs a different world. Without their campaign bible in context, the workflow is a generic fantasy prep tool that knows nothing about the party's rivals, the kingdom's politics, or last month's cliffhanger. The default glob is shaped so a GM can drop a `campaign-bible.md` in their project and the workflow picks it up. Forcing them to paste world context at the start of every session would burn trust. That's what persistent facts are for.
### `system_rules_template` (scalar, default to D&D 5e)
```toml
system_rules_template = "resources/dnd-5e-quick-reference.md"
```
D&D 5e, Pathfinder 2e, and Call of Cthulhu reason about encounters in very different ways. A PF2e GM who overrides this with their own rules reference gets correctly-calibrated encounter math without the workflow pretending to know a system it doesn't. The skill isn't trying to catalog every RPG; it ships one default that covers most users and lets everyone else swap in their own reference. The `*_template` suffix signals what changes if the user touches it.
### `session_notes_template` (scalar)
```toml
session_notes_template = "resources/session-notes-minimalist.md"
```
GM prep style is personal. Some GMs want theater-of-mind bullets; others want scene blocks with pre-filled initiative trackers and read-aloud boxed text. No single shipping default wins against that variance. The structural fact that "prep produces notes" is universal, though, so the override changes the shape of the notes file, not the stage sequence.
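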
### `on_complete` (scalar, default empty)
```toml
on_complete = ""
```
The core skill ends when notes are drafted. Some GMs want the workflow to draft a Discord teaser for the group chat, others want encounter stat blocks pushed to Roll20, others want a pre-game meditation prompt. These are real patterns, but they're downstream of the skill's job, not part of it. An empty default means the skill doesn't presume. Override example:
```toml
on_complete = "Draft a 2-sentence Discord teaser ending on a cliffhanger. Save to {project-root}/teasers/next-session.md"
```
### `activation_steps_prepend` (pre-flight context load)
Before the workflow asks the GM anything, some tables want the most recent session log already loaded and summarized:
```toml
activation_steps_prepend = [
"Scan {project-root}/session-logs/ and load the most recent log. Extract unresolved threads before asking the GM anything."
]
```
Not every GM keeps session logs. The ones who do want the pre-load; the ones who don't would get a broken activation if it were baked in. Opt-in via the prepend hook lets both tables use the same skill.
### What Not to Expose
The stage sequence (recap, threads, spine, NPCs, notes) is the skill's identity. A GM who wants a very different flow (solo journaling, West Marches gossip round) should fork. Every stage made optional erodes what the skill is.
Mechanical encounter math toggles like `auto_balance_cr` or `verbose_stat_blocks` stay out. The LLM handles those naturally once it has the system reference. Toggles here would amount to telling the executor how to do its job.
Per-stage question order stays out too. Too fiddly. If it matters enough to customize, you're describing a different skill.
## Naming and Shape Conventions
When you do expose a scalar, name it like a contract.
| Pattern | Use for | Example |
| --- | --- | --- |
| `_template` | File paths for templates the skill loads | `brief_template = "resources/brief.md"` |
| `_output_path` | Writable destinations | `report_output_path = "{project-root}/docs/reports"` |
| `on_` | Hook scalars (prompts or commands) | `on_complete = ""` |
A scalar named `brief_template` tells the user what changes if they override it. A scalar named `style_config` or `format_options_file` doesn't.
For arrays of tables (menus, capability rosters), give every item a `code` or `id` field. The resolver uses that key to merge by code: matching entries replace in place, new entries append. Mixing `code` on some items and `id` on others falls back to append-only, which is rarely what authors want and almost never what users expect.
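For instance, a skill might ship this hypothetical menu as its defaults:

```toml
# Skill's customize.toml — defaults (menu items keyed by code)
[[menu]]
code = "BR"
description = "Brainstorm ideas"

[[menu]]
code = "RV"
description = "Review a draft"
```

If a team override ships a `[[menu]]` entry with `code = "RV"`, that entry replaces the default in place; an entry with a new code like `"PL"` appends. The merged menu would be BR (default), RV (replaced), PL (appended).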
There's no removal mechanism. If you need users to suppress a default menu item, have them override it by `code` with a no-op description or prompt. If the natural override flow requires deleting defaults, your surface is probably wrong, and you should reconsider what belongs in the skill body.
## Where This Shows Up in Your Build
Both the Agent Builder and the Workflow Builder ask the opt-in question during requirements gathering. If you say yes, a follow-up phase called Configurability Discovery walks you through candidate knobs (templates, output paths, hooks) and emits them into your skill's `customize.toml`. If you say no, workflows get no `customize.toml` at all, and agents get a metadata-only block.
The builders default the opt-in to **no** in headless mode unless you pass `--customizable`. Customization should be a deliberate decision, not an automatic one.
## When to Graduate to a Fork
If your override surface grows to the point where shipping multiple related overrides is the common user path, the skill probably wants splitting. Two signals: users routinely ship four or more overrides together to make the skill work for them, or the overrides imply structural changes that `persistent_facts` and scalar swaps can't actually express. When you see either, a second skill variant is the honest answer, not a bigger TOML.
:::tip[Rule of Thumb]
Ship one good default over a permutation forest of toggles. A scalar called `include_combat_section = true/false` is almost always a sign the author couldn't decide what the skill should do. Pick the default. Fork if you need different.
:::
Create world-class AI agents and workflows with the BMad Builder.
## Core Concepts
| Topic | Description |
| ---------------------------------------------------------------- | --------------------------------------------------------------------------------- |
| **[What Are Skills](/explanation/what-are-skills.md)** | The universal building block for everything BMad produces |
| **[What Are Agents](/explanation/what-are-bmad-agents.md)** | The three agent types: stateless, memory, and autonomous |
| **[Agent Memory and Personalization](/explanation/agent-memory-and-personalization.md)** | Sanctum architecture, First Breath, PULSE, and evolvable capabilities |
| **[What Are Workflows](/explanation/what-are-workflows.md)** | Structured step-by-step processes and utilities |
| **[What Are Modules](/explanation/what-are-modules.md)** | How agents and workflows combine into installable, configurable modules |
| **[Module Configuration](/explanation/module-configuration.md)** | How modules handle user configuration and help registration through a setup skill |
## Design Patterns
| Topic | Description |
| ------------------------------------------------------------------------------------ | ------------------------------------------------------------------------ |
| **[Progressive Disclosure](/explanation/progressive-disclosure.md)** | Four layers of context loading, from frontmatter through step files |
| **[Subagent Patterns](/explanation/subagent-patterns.md)** | Six orchestration patterns for parallel and hierarchical work |
| **[Skill Authoring Best Practices](/explanation/skill-authoring-best-practices.md)** | Core principles, common patterns, quality dimensions, and anti-patterns |
| **[Scripts in Skills](/explanation/scripts-in-skills.md)** | Why deterministic scripts make skills faster, cheaper, and more reliable |
## Reference
| Resource | Description |
| -------------------------------------------------------- | ----------------------------------------------------- |
| **[Builder Commands](/reference/builder-commands.md)** | All capabilities, modes, and phases for both builders |
| **[Workflow Patterns](/reference/workflow-patterns.md)** | Skill types, structure patterns, and execution models |
BMad modules register their capabilities with the help system and optionally collect user preferences. Multi-skill modules use a dedicated **setup skill** for this. Single-skill standalone modules handle registration themselves on first run.
When you create your own module, you can either add a dedicated setup skill or embed registration in each skill using the standalone pattern. For modules with more than one or two skills, a setup skill is the better choice.
## When You Need Configuration
Most modules should not need configuration at all. Before adding configurable values, consider whether a simpler alternative exists.
| Approach | When to Use |
| --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Sensible defaults** | One answer is clearly right for most users, and the specific skill that needs the value can override or update it the first time it runs |
| **Agent memory** | Your module follows the agent pattern and the agent can learn preferences through conversation |
| **Configuration** | The value genuinely varies across projects and cannot be inferred at runtime |
:::tip[Standalone Skills]
If you are building a single standalone agent or workflow, you do not need a separate setup skill. The Module Builder can package it as a **standalone self-registering module** where the registration logic is embedded directly in the skill via an `assets/module-setup.md` reference file, and runs on first activation or when the user passes `setup`/`configure`.
:::
## Configuration vs Customization
Module configuration (this doc) and per-skill customization (`customize.toml`) are different surfaces with different jobs. Configuration is about install-time answers: paths, language, team preferences, per-module install answers, and the agent roster. You still author `module.yaml` as the source of truth; at install the installer flows module-level answers and the `agents:` roster into `_bmad/config.yaml` (and `config.user.yaml` for user-scoped answers) at the project root, where many skills consume them. Customization is about per-skill behavior overrides: activation hooks, persistent facts, swappable templates. It lives in `_bmad/custom/{skill-name}.toml` and is scoped to one skill.
Use configuration when the value is cross-cutting (every skill needs to know the output folder). Use customization when the value shapes one skill's behavior (this workflow's brief template). Some values legitimately fit both surfaces; the [End-User Customization Guide](https://docs.bmad-method.org/how-to/customize-bmad/) includes a decision table for that case. For the author-side decision about whether to expose customization at all, see [Customization for Authors](/explanation/customization-for-authors.md).
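The split can be sketched with hypothetical values (names are illustrative):

```text
# Cross-cutting → configuration (module.yaml answer, consumed by many skills)
output_folder: '{project-root}/_bmad-output'

# One skill's behavior → customization (_bmad/custom/bmad-session-prep.toml)
session_notes_template = "resources/my-notes.md"
```

Every skill that writes artifacts reads the first value; only the session-prep workflow cares about the second.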
## What Module Registration Does
Module registration serves two purposes:
| Purpose | What Happens |
| --------------------- | ----------------------------------------------------------------------------------------- |
| **Configuration** | Collects user preferences and writes them to shared config files |
| **Help registration** | Adds the module's capabilities to the project-wide help system so users can discover them |
### Why Register with the Help System?
The `bmad-help` skill reads `module-help.csv` to understand what capabilities are available, detect which ones have been completed (by checking output locations for artifacts), and recommend next steps based on the dependency graph. Without registration, `bmad-help` cannot discover or recommend your module's capabilities beyond what it can glean from skill headers. The help system provides richer detail: arguments, relationships to other skills, inputs and outputs, and other authored metadata. If a skill has multiple capabilities, each one gets its own help entry.
### Two Registration Paths
| Path | When to Use | How It Works |
| --------------------- | --------------------------------------------------------- | ------------------------------------------------------------------------------- |
| **Setup skill** | Multi-skill modules (2+ skills) | A dedicated `{code}-setup` skill handles registration for all skills |
| **Self-registration** | Single-skill standalone modules | The skill itself registers on first run or when user passes `setup`/`configure` |
The Module Builder detects which path to use based on what you give it: a folder of skills triggers the setup skill approach, a single skill triggers the standalone approach.
## Configuration Files
Setup skills write to three files in `{project-root}/_bmad/`:
| File | Scope | Contains |
| ------------------ | ------------------------ | ----------------------------------------------------------------------------------------------- |
| `config.yaml` | Shared, committed to git | Core settings at root level, plus a section per module with metadata and module-specific values |
| `config.user.yaml` | Personal, gitignored | User-only settings like `user_name` and `communication_language` |
| `module-help.csv` | Shared, committed to git | One row per capability the module exposes |
Core settings (like `output_folder` and `document_output_language`) live at the root of `config.yaml` and are shared across all modules. Each module also gets its own section keyed by its module code.
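Assuming the layout described above and the `mymod` example below, the resulting config might look like this sketch:

```yaml
# _bmad/config.yaml — core settings at root, one section per module code
output_folder: '{project-root}/_bmad-output'
document_output_language: 'English'

mymod:
  name: 'My Module'
  module_version: 1.0.0
  my_output_folder: '{project-root}/_bmad-output/my-module'
```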
## The module.yaml File
Each module declares its identity and configurable variables in an `assets/module.yaml` file. For multi-skill modules, this lives inside the setup skill. For standalone modules, it lives in the skill's own `assets/` folder. This file drives both the prompts shown to the user and the values written to config.
```yaml
code: mymod
name: 'My Module'
description: 'What this module does'
module_version: 1.0.0
default_selected: false
module_greeting: >
Welcome message shown after setup completes.
my_output_folder:
prompt: 'Where should output be saved?'
default: '{project-root}/_bmad-output/my-module'
result: '{project-root}/{value}'
```
Variables with a `prompt` field are presented to the user during setup. The `default` value is used when the user accepts defaults. Adding `user_setting: true` to a variable routes it to `config.user.yaml` instead of the shared config.
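For example, a hypothetical per-user variable would add the flag alongside its prompt:

```yaml
communication_style:
  prompt: 'How should agents address you?'
  default: 'casual'
  user_setting: true   # written to config.user.yaml, not the shared config
```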
:::caution[Literal Token]
`{project-root}` is a literal token in config values. Never substitute it with an actual path. It signals to the consuming tool that the value is relative to the project root.
:::
## Help Registration Without Configuration
You may not need any configurable values but still want to register your module with the help system. Registration is still worthwhile when:
- The skill description in SKILL.md frontmatter cannot fully convey what the module offers while staying concise
- You want to express capability sequencing, phase constraints, or other metadata the CSV supports
- An agent has many internal capabilities that users should be able to discover
- Your module has more than about three distinct things it can do
For simpler cases, these alternatives are often sufficient:
| Alternative | What It Provides |
| ----------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
| **SKILL.md overview section** | A concise summary at the top of the skill body; the `--help` system scans this section to present user-facing help, so keep it succinct |
| **Script header comments** | Describe purpose, usage, and flags at the top of each script |
If these cover your discoverability needs, you can skip the setup skill entirely.
## The module-help.csv File
The CSV registers the module's capabilities with the help system. Each row describes one capability that users can discover and invoke. The file has 13 columns:
```csv
module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs
```
### Column Guide
| Column | Purpose |
| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| **module** | Module display name. Groups entries in help output |
| **skill** | Skill folder name (e.g., `bmad-agent-builder`); must match the actual directory name |
| **display-name** | User-facing label shown in help menus (e.g., "Build an Agent") |
| **menu-code** | 1-3 letter shortcode displayed as `[CODE]` in help, unique across the module, intuitive mnemonic |
| **description** | What this capability does. Concise, action-oriented, specific enough for `bmad-help` to route correctly |
| **action** | Action name within the skill. Distinguishes capabilities when one skill exposes multiple (e.g., `build-process`, `quality-optimizer`) |
| **args** | Arguments the capability accepts (e.g., `[-H] [path]`), shown in help output |
| **phase** | When the capability is available: `anytime` or a workflow phase like `1-analysis`, `2-planning` |
| **after** | Capabilities that should complete before this one: format `skill-name:action`, comma-separated for multiple |
| **before** | Capabilities that should run after this one, same format as `after` |
| **required** | `true` if this is a blocking gate for phase progression, `false` otherwise |
| **output-location** | Config variable name (e.g., `output_folder`, `bmad_builder_reports`); `bmad-help` resolves from config to scan for completion artifacts |
| **outputs** | File patterns `bmad-help` looks for in the output location to detect completion (e.g., "quality report", "agent skill") |
### How bmad-help Uses These Entries
The `after`/`before` columns create a **dependency graph** that `bmad-help` walks to recommend next steps. `required=true` entries are blocking gates; `bmad-help` will not suggest later-phase capabilities until required gates pass. The `output-location` and `outputs` columns enable **completion detection**: `bmad-help` scans those paths for matching artifacts to determine what's been done.
### Example Entry
```csv
module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs
BMad Builder,bmad-agent-builder,Build an Agent,BA,"Create, edit, convert, or fix an agent skill.",build-process,"[-H] [description | path]",anytime,,bmad-agent-builder:quality-optimizer,false,output_folder,agent skill
```
During registration, these rows are merged into the project-wide `_bmad/module-help.csv`, replacing any existing rows for this module (anti-zombie pattern).
## Anti-Zombie Pattern
Both merge scripts use an anti-zombie pattern: before writing new values for a module, they remove all existing entries for that module's code. This prevents stale configuration or help entries from persisting across module updates. Running setup a second time is always safe.
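The core of the pattern can be sketched in a few lines of Python. This is an illustration of the idea, not the actual merge script; the row shape mirrors the `module` column of `module-help.csv`:

```python
def merge_module_help(existing_rows, new_rows, module_name):
    """Anti-zombie merge (sketch): drop every existing row for this
    module, then append the fresh rows. Running it twice with the
    same inputs yields the same result, so re-registration is safe."""
    kept = [row for row in existing_rows if row["module"] != module_name]
    return kept + new_rows

# Hypothetical rows keyed by the CSV's "module" column.
existing = [
    {"module": "BMad Builder", "skill": "bmad-agent-builder"},
    {"module": "Other Module", "skill": "other-skill"},
]
fresh = [{"module": "BMad Builder", "skill": "bmad-workflow-builder"}]

merged = merge_module_help(existing, fresh, "BMad Builder")
# Stale "BMad Builder" rows are gone; other modules are untouched.
```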
## Legacy Directory Cleanup
After config data is migrated and individual files are cleaned up by the merge scripts, a separate cleanup step removes the installer's per-module directory trees from `_bmad/`. These directories contain skill files that are already installed in the tool's skills directory. They are redundant once the config has been consolidated.
Before removing any directory, the cleanup script verifies that every skill it contains exists at the installed location. Directories without skills (like `_config/`) are removed directly. The script is idempotent; running setup again after cleanup is safe.
## Design Guidance
Configuration is for **basic, project-level settings**: output folders, language preferences, feature toggles. Keep the number of configurable values small.
| Pattern | Configuration Role |
| ---------------------- | --------------------------------------------------------------------------------------------------------------- |
| **Agent pattern** | Prefer agent memory for per-user preferences. Use config only for values that must be shared across the project |
| **Workflow pattern** | Use config for output locations and behavior switches that vary across projects |
| **Skill-only pattern** | Use config sparingly. If the skill works with sensible defaults, skip config entirely |
Extensive workflow customization (step overrides, conditional branching, template selection) is a separate concern and will be covered in a dedicated document.
## Creating a Module with the Module Builder
The **Module Builder** (`bmad-module-builder`) automates module creation. It offers three capabilities:
| Capability | Menu Code | What It Does |
| ------------------- | --------- | --------------------------------------------------------------------------------------- |
| **Ideate Module** | IM | Brainstorm and plan a module through facilitative discovery; produces a plan document |
| **Create Module** | CM | Package skills as an installable BMad module (setup skill or standalone self-registering)|
| **Validate Module** | VM | Check that a module's structure is complete, accurate, and properly registered |
**For a folder of skills (multi-skill module):**
1. Run **Ideate Module (IM)** to brainstorm and plan
2. Build each skill using the **Agent Builder (BA)** or **Workflow Builder (BW)**
3. Run **Create Module (CM)**. It generates a dedicated `-setup` skill with `module.yaml`, `module-help.csv`, and merge scripts
4. Run **Validate Module (VM)** to verify everything is wired correctly
**For a single skill (standalone module):**
1. Build the skill using the **Agent Builder (BA)** or **Workflow Builder (BW)**
2. Run **Create Module (CM)** with the skill path. It embeds self-registration directly into the skill (`assets/module-setup.md`, `assets/module.yaml`, `assets/module-help.csv`) and generates a `marketplace.json` for distribution
3. Run **Validate Module (VM)** to verify
The Module Builder auto-detects single vs. multi-skill input and recommends the appropriate approach.
See **[What Are Modules](/explanation/what-are-modules.md)** for concepts and architecture decisions, or the **[Builder Commands Reference](/reference/builder-commands.md)** for detailed capability documentation.
Progressive disclosure is what separates basic skills from powerful ones. The core idea: never load more context than the agent needs _right now_. This keeps token usage low, prevents context pollution, and lets skills survive long conversations.
## The Four Layers
Skills can use any combination of these layers. Most production skills use Layers 1-3. Layer 4 is reserved for strict sequential processes.
| Layer | What It Does | Token Cost |
| -------------------------- | ------------------------------------------------------------------------- | ---------------------------------- |
| **1. Frontmatter vs Body** | Frontmatter is always in context; body loads only when triggered | ~100 tokens always, body on demand |
| **2. On-Demand Resources** | SKILL.md points to resources and scripts loaded only when relevant | Zero until needed |
| **3. Dynamic Routing** | SKILL.md acts as a router, dispatching to entirely different prompt flows | Only the chosen path loads |
| **4. Step Files** | Agent reads one step at a time, never sees ahead | One step's worth at a time |
## Layer 1: Frontmatter vs Body
Frontmatter (name + description) is **always in context**. It is how the LLM decides whether to load the skill. The body only loads when the skill triggers.
This means frontmatter must be precise and include trigger phrases. The body stays under 500 lines and pushes detail into Layers 2-3.
```markdown
---
name: bmad-my-skill
description: Validates API contracts against OpenAPI specs. Use when user says 'validate API' or 'check contract'.
---
# Body loads only when triggered
...
```
## Layer 2: On-Demand Resources
SKILL.md points to resources loaded only when relevant. This includes both **reference files** (context for the LLM) and **scripts** (offload work from the LLM entirely).
```markdown
## Which Guide to Read
- Python project → Read `resources/python.md`
- TypeScript project → Read `resources/typescript.md`
- Need validation → Run `scripts/validate.py` (don't read the script, just run it)
```
Scripts are particularly powerful here: the LLM does not process the logic, it just calls the script and receives structured output. This offloads deterministic work and saves tokens.
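As a hedged sketch (the checks are illustrative assumptions, not the real `validate.py`), such a script does deterministic work and hands back structured JSON the LLM can act on:

```python
#!/usr/bin/env python3
"""Hedged sketch of a Layer 2 script: the LLM runs it and receives
structured JSON instead of processing the logic itself. The checks
here are illustrative, not from any actual BMad script."""
import json
import sys
from pathlib import Path

def validate(path: Path) -> dict:
    """Deterministic checks; the same input always yields the same result."""
    issues = []
    if not path.exists():
        issues.append(f"missing file: {path}")
    elif path.suffix != ".md":
        issues.append(f"expected a .md file, got: {path.suffix or '(none)'}")
    return {"target": str(path), "ok": not issues, "issues": issues}

if __name__ == "__main__" and len(sys.argv) > 1:
    # Structured output the calling LLM can parse directly
    print(json.dumps(validate(Path(sys.argv[1])), indent=2))
```

The skill instruction stays a single line while the logic lives entirely outside the context window.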
## Layer 3: Dynamic Routing
The skill body acts as a **router** that dispatches to entirely different prompt flows, scripts, or external skills based on what the user is asking for.
```markdown
## What Are You Trying To Do?
### "Build a new workflow"
→ Read `prompts/create-flow.md` and follow its instructions
### "Review an existing workflow"
→ Read `prompts/review-flow.md` and follow its instructions
### "Run analysis"
→ Run `scripts/analyze.py --target ` and present results
```
The key difference from Layer 2: Layer 2 loads supplementary resources alongside the skill body. Layer 3 **branches the entire execution path**: different prompts, different scripts, different skills. The skill body becomes a dispatcher, not an instruction set.
## Layer 4: Step Files
The most restrictive pattern. The agent reads **one step file at a time**, does not know what is next, and waits for user confirmation before proceeding.
```
prompts/
├── step-01.md ← agent reads ONLY current step
├── step-02.md ← loaded after user confirms step 1
├── step-03a.md ← branching path A
└── step-03b.md ← branching path B
```
**When to use:** Only when you need exact sequential progression with no skipping, compaction resistance (each step is self-contained), or an agent deliberately constrained from looking ahead.
**Trade-off:** Very rigid. Limits the agent's ability to adapt, combine steps, or be creative. Do not use for exploratory or creative tasks, and do not use when Layer 3 routing would suffice. Prefer Layers 1-3 first; the lowest layer that meets the need is best.
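Because the agent never sees ahead, each step file must stand on its own. A hypothetical `step-01.md` (illustrative only, not a BMad template) might read:

```markdown
# Step 1: Gather Inputs

You are in step 1 of a sequential workflow. Do NOT look ahead to other step files.

1. Ask the user for the target document path.
2. Confirm the path exists before proceeding.

When the user confirms, read `prompts/step-02.md` and continue.
```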
:::tip[Start at Layer 2]
Most skills only need Layers 1-2. Add Layer 3 when the skill genuinely handles multiple distinct operations. Add Layer 4 only for strict compliance or audit workflows where the agent must not skip ahead.
:::
## Compaction Survival
Long-running workflows risk losing context when the conversation compresses. The **document-as-cache pattern** solves this: the output document itself stores the workflow's state.
| Component | Purpose |
| --------------------- | ------------------------------------------------------ |
| **YAML front matter** | Paths to input files, current stage status, timestamps |
| **Draft sections** | Progressive content built across stages |
| **Status marker** | Which stage is complete, for resumption |
Each stage reads the output document to restore context, does its work, and writes results back to the same document. If context compacts mid-workflow, the next stage recovers by reading the document and reloading the input files listed in front matter.
```markdown
---
title: 'Analysis: Research Topic'
status: 'analysis'
inputs:
- '{project_root}/docs/brief.md'
- '{project_root}/data/sources.json'
---
```
This avoids separate cache files, file collisions when running multiple workflows, and state synchronization complexity.
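Resumption can be expressed as plain instructions in the skill body; a hedged sketch of what that looks like:

```markdown
## On Start

1. Read the output document's YAML front matter.
2. If `status` is set, reload each file listed under `inputs`, then resume at the stage after `status`.
3. If the document does not exist, begin at stage 1 and create it.
4. After each stage, update `status` and write draft content back to the same document.
```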
## Choosing the Right Layer
| Situation | Recommended Layer |
| ---------------------------------------------- | ----------------------------- |
| Single-purpose utility with one path | Layer 1-2 |
| Skill with conditional reference data | Layer 2 |
| Skill that does multiple distinct things | Layer 3 |
| Skill with stages that depend on each other | Layer 3 + compaction survival |
| Strict sequential process, no skipping allowed | Layer 4 |
| Long-running workflow producing a document | Layer 3 + document-as-cache |
Scripts handle work that has clear right-and-wrong answers (validation, transformation, extraction, counting) so the LLM can focus on judgment, synthesis, and creative reasoning.
## The Problem: LLMs Do Too Much
Without scripts, every operation in a skill runs through the LLM. That means:
- **Non-deterministic results.** Ask an LLM to count tokens in a file three times and you may get three different numbers. Ask a script and you get the same answer every time.
- **Wasted tokens and time.** Parsing a JSON file, checking if a directory exists, or comparing two strings are mechanical operations. Running them through the LLM burns context window and adds latency for no gain.
- **Harder to test.** You can write unit tests for a script. You cannot write unit tests for an LLM prompt.
The pattern shows up everywhere: skills that try to LLM their way through structural validation are slower, less reliable, and more expensive than skills that offload those checks to scripts.
## The Determinism Boundary
The design principle is **intelligence placement**: put each operation where it belongs.
| Scripts Handle | LLM Handles |
| ---------------------------------- | ------------------------------------------------ |
| Validate structure, format, schema | Interpret meaning, evaluate quality |
| Count, parse, extract, transform | Classify ambiguous input, make judgment calls |
| Compare, diff, check consistency | Synthesize insights, generate creative output |
| Pre-process data into compact form | Analyze pre-processed data with domain reasoning |
**The test:** Given identical input, will this operation always produce identical output? If yes, it belongs in a script. Could you write a unit test with expected output? Definitely a script. Requires interpreting meaning, tone, or context? Keep it as an LLM prompt.
:::tip[The Pre-Processing Pattern]
One of the highest-value script uses is pre-processing. A script extracts compact metrics from large files into a small JSON summary. The LLM then reasons over the summary instead of reading raw files, dramatically reducing token usage while improving analysis quality because the data is clean and structured.
:::
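A hedged sketch of such a pre-processing script (the metric names are illustrative assumptions, not a BMad format):

```python
#!/usr/bin/env python3
"""Illustrative pre-processing script: extracts compact metrics from
large markdown files so the LLM reasons over a small JSON summary
instead of reading raw content."""
import json
import re
import sys
from pathlib import Path

def summarize(path: Path) -> dict:
    """Reduce one file to a handful of structural metrics."""
    text = path.read_text(encoding="utf-8")
    lines = text.splitlines()
    return {
        "file": path.name,
        "lines": len(lines),
        "words": len(text.split()),
        "headings": [l.lstrip("# ").strip() for l in lines if l.startswith("#")],
        "links": len(re.findall(r"\[.+?\]\(.+?\)", text)),
    }

if __name__ == "__main__" and len(sys.argv) > 1:
    # One compact JSON array instead of megabytes of raw text
    print(json.dumps([summarize(Path(p)) for p in sys.argv[1:]], indent=2))
```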
## Why Python, Not Bash
Skills must work across macOS, Linux, and Windows. Bash is not portable.
| Factor | Bash | Python |
| -------------------- | --------------------------------------------- | ------------------------ |
| **macOS / Linux** | Works | Works |
| **Windows (native)** | Fails or behaves inconsistently | Works identically |
| **Windows (WSL)** | Works, but can conflict with Git Bash on PATH | Works identically |
| **Error handling** | Limited, fragile | Rich exception handling |
| **Testing** | Difficult | Standard unittest/pytest |
| **Complex logic** | Quickly becomes unreadable | Clean, maintainable |
Even basic commands like `sed -i` behave differently on macOS vs Linux. Piping, `jq`, `grep`, and `awk` all have cross-platform pitfalls that Python's standard library avoids entirely.
**Safe bash commands** that work everywhere and remain fine to use directly:
| Command | Purpose |
| -------------------- | ------------------------------ |
| `git`, `gh` | Version control and GitHub CLI |
| `uv run` | Python script execution |
| `npm`, `npx`, `pnpm` | Node.js ecosystem |
| `mkdir -p` | Directory creation |
Everything beyond that list should be a Python script.
## Standard Library First
Python's standard library covers most script needs without any external dependencies. Stdlib-only scripts run with plain `python3`, need no special tooling, and have zero supply-chain risk.
| Need | Standard Library |
| ------------------ | ------------------ |
| JSON parsing | `json` |
| Path handling | `pathlib` |
| Pattern matching | `re` |
| CLI interface | `argparse` |
| Text comparison | `difflib` |
| Counting, grouping | `collections` |
| Source analysis | `ast` |
| Data formats | `csv`, `xml.etree` |
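As an illustrative stdlib-only sketch (the task is hypothetical), a tally script combining `pathlib`, `collections`, and `json` from the table runs with plain `python3` and no PEP 723 block:

```python
#!/usr/bin/env python3
"""Stdlib-only sketch: no external dependencies, no special tooling.
The task (tallying files by extension) is illustrative."""
import json
import sys
from collections import Counter
from pathlib import Path

def extension_counts(root: Path) -> dict:
    """Count files by extension under a directory, recursively."""
    return dict(Counter(p.suffix or "(none)" for p in root.rglob("*") if p.is_file()))

if __name__ == "__main__" and len(sys.argv) > 1:
    print(json.dumps(extension_counts(Path(sys.argv[1])), indent=2))
```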
Only reach for external dependencies when the stdlib genuinely cannot do the job: `tiktoken` for accurate token counting, `pyyaml` for YAML parsing, `jsonschema` for schema validation. Each external dependency adds install-time cost, requires `uv` to be available, and expands the supply-chain surface. The BMad builders require explicit user approval for any external dependency during the build process.
## Zero-Friction Dependencies with PEP 723
Python scripts in skills use [PEP 723](https://peps.python.org/pep-0723/) inline metadata to declare their dependencies directly in the file. Combined with `uv run`, this gives you `npx`-like behavior: dependencies are silently cached in an isolated environment, no global installs, no user prompts.
```python
#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = ["pyyaml>=6.0"]
# ///
import yaml
# script logic here
```
When a skill invokes this script with `uv run scripts/analyze.py`, the dependency (`pyyaml` in this example) is automatically resolved. The user never sees an install prompt, never needs to manage a virtual environment, and never pollutes their global Python installation.
Without PEP 723, skills that need libraries like `pyyaml` or `tiktoken` would force users to run `pip install`, a jarring experience that makes people hesitate to adopt the skill.
## Graceful Degradation
Skills run in multiple environments: CLI terminals, desktop apps, IDE extensions, and web interfaces like claude.ai. Not all environments can execute Python scripts.
The principle: **scripts are the fast, reliable path, but the skill must still deliver its outcome when execution is unavailable.**
When a script cannot run, the LLM performs the equivalent work directly. This is slower and less deterministic, but the user still gets a result. The script's `--help` output documents what it checks, making the fallback natural. The LLM reads the help to understand the script's purpose and replicates the logic.
Frame script steps as outcomes in the SKILL.md, not just commands:
| Approach | Example |
| ----------- | ---------------------------------------------------------------------------- |
| **Good** | "Validate path conventions (run `scripts/scan-paths.py --help` for details)" |
| **Fragile** | "Execute `python3 scripts/scan-paths.py`" with no context |
The good version tells the LLM both what to accomplish and where to find the details, enabling graceful degradation without additional instructions.
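Inside the script, this means writing `--help` text that documents the checks themselves. A hedged sketch (the checks listed are illustrative assumptions, not the real `scan-paths.py`):

```python
#!/usr/bin/env python3
"""Sketch of --help doubling as degradation documentation: when script
execution is unavailable, an LLM can read this help text and replicate
the checks manually. The check list is an illustrative assumption."""
import argparse
import sys

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="scan-paths.py",
        description="Validate path conventions in a skill folder.",
        epilog=(
            "Checks: (1) all paths use forward slashes; "
            "(2) project-scope paths start with {project-root}; "
            "(3) config variables are never double-prefixed."
        ),
    )
    parser.add_argument("skill_dir", help="Path to the skill folder to scan")
    return parser

if __name__ == "__main__" and len(sys.argv) > 1:
    args = build_parser().parse_args()
    # ... actual scanning logic would follow here ...
```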
## When to Reach for a Script
Look for these signal verbs in a skill's requirements; they indicate script opportunities:
| Signal | Script Type |
| ---------------------------------- | ---------------- |
| "validate", "check", "verify" | Validation |
| "count", "tally", "aggregate" | Metrics |
| "extract", "parse", "pull from" | Data extraction |
| "convert", "transform", "format" | Transformation |
| "compare", "diff", "match against" | Comparison |
| "scan for", "find all", "list all" | Pattern scanning |
The builders guide you through script opportunity discovery during the build process. If you find yourself writing detailed validation logic in a prompt, it almost certainly belongs in a script instead.
Practical guidance for writing skills that work reliably and adapt gracefully. These patterns apply to agents, workflows, and utilities alike.
## Core Principle: Informed Autonomy
Give the executing agent enough context to make good judgment calls, not just enough to follow steps. The test for every piece of content: "Would the agent make _better decisions_ with this context?" If yes, keep it. If it is genuinely redundant, cut it.
Simple utilities need minimal context; input/output is self-explanatory. Interactive workflows need domain understanding, user perspective, and rationale for non-obvious choices. When in doubt, explain _why_. An agent that understands the mission improvises better than one following blind steps.
## Freedom Levels
Match specificity to task fragility.
| Freedom | When to Use | Example |
| --------------------------------- | -------------------------------------------- | ------------------------------------------------------------- |
| **High** (text instructions) | Multiple valid approaches, context-dependent | "Analyze structure, check for issues, suggest improvements" |
| **Medium** (pseudocode/templates) | Preferred pattern exists, some variation OK | `def generate_report(data, format="markdown"):` |
| **Low** (exact scripts) | Fragile operations, consistency critical | `python scripts/migrate.py --verify --backup` (do not modify) |
**Analogy:** Narrow bridge with cliffs = low freedom. Open field = high freedom.
## Quality Dimensions
Six dimensions to keep in mind during the build phase. The quality scanners check these automatically during optimization.
| Dimension | What It Means |
| -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Informed Autonomy** | Overview establishes domain framing, theory of mind, and design rationale, enough for judgment calls |
| **Intelligence Placement** | Scripts handle plumbing (fetch, transform, validate). Prompts handle judgment (interpret, classify, decide). If a script contains an `if` that decides what content _means_, intelligence has leaked |
| **Progressive Disclosure** | SKILL.md stays focused; stage instructions go in `prompts/`, reference data in `resources/` |
| **Description Format** | Two parts: `[5-8 word summary]. [Use when user says 'X' or 'Y'.]`. Default to conservative triggering |
| **Path Construction** | Use `{project-root}` for any project-scope path and `./` for same-folder references inside a skill. Cross-directory skill-internal paths are bare (e.g. `references/foo.md`). Config variables already contain `{project-root}`, so never double-prefix them |
| **Token Efficiency** | Remove genuine waste (repetition, defensive padding). Preserve context that enables judgment (domain framing, rationale) |
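For the path-construction rules in particular, a few hedged examples (the `{output_folder}` variable is hypothetical):

```
{project-root}/docs/brief.md         ✓ project-scope path
./templates/report.md                ✓ same-folder reference
references/foo.md                    ✓ cross-directory, skill-internal
{project-root}/{output_folder}/x.md  ✗ double prefix: the config variable already contains {project-root}
```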
## Shipping a Customization Surface
When your skill's users come from varied contexts (different orgs, different domains, different taste in output formats), a `customize.toml` surface lets them override specific fields without forking. It's opt-in per skill, and the decision is deliberate: every knob you ship is a promise the resolver will carry across releases. Before you opt in during the build, read [Customization for Authors](/explanation/customization-for-authors.md) for the decision framework and [How to Make a Skill Customizable](/how-to/make-a-skill-customizable.md) for the mechanics.
## Common Patterns
### Soft Gate Elicitation
For guided workflows, use "anything else?" soft gates at natural transition points instead of hard menus.
```markdown
Present what you've captured so far, then:
"Anything else you'd like to add, or shall we move on?"
```
Users almost always remember one more thing when given a graceful exit ramp rather than a hard stop. This consistently produces richer artifacts than rigid section-by-section questioning. Use at every natural transition in collaborative discovery workflows. Skip in autonomous/headless execution.
### Intent-Before-Ingestion
Never scan artifacts or project context until you understand WHY the user is here. Without knowing intent, you cannot judge what is relevant in a 100-page document.
```markdown
1. Greet and understand intent
2. Accept whatever inputs the user offers
3. Ask if they have additional context
4. ONLY THEN scan artifacts, scoped to relevance
```
### Capture-Don't-Interrupt
When users provide information beyond the current scope (dropping requirements during a product brief, mentioning platforms during vision discovery), capture it silently for later use rather than redirecting them.
Users in creative flow share their best insights unprompted. Interrupting to say "we'll cover that later" kills momentum and may lose the insight entirely.
### Dual-Output: Human Artifact + LLM Distillate
Any artifact-producing workflow can output two complementary documents: a polished human-facing artifact AND a token-conscious, structured distillate optimized for downstream LLM consumption.
| Output | Purpose |
| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Primary** | Human-facing document: concise, well-structured |
| **Distillate** | Dense, structured summary for downstream LLM workflows: captures overflow, rejected ideas (so downstream does not re-propose them), detail bullets with enough context to stand alone |
The distillate bridges the gap between what belongs in the human document and what downstream workflows need. Always offered to the user, never forced.
### Three-Mode Architecture
Interactive workflows can offer three execution modes matching different user contexts.
| Mode | Trigger | Behavior |
| ------------------------- | --------------------------- | ---------------------------------------------------------------------------------------- |
| **Guided** | Default | Section-by-section with soft gates; drafts from what it knows, questions what it doesn't |
| **YOLO** | `--yolo` or "just draft it" | Ingests everything, drafts complete artifact upfront, then walks user through refinement |
| **Headless (Autonomous)** | `--headless` / `-H` | Takes inputs, produces the artifact, no interaction |
Not every workflow needs all three, but considering them during design prevents painting yourself into a single interaction model.
### Parallel Review Lenses
Before finalizing any significant artifact, fan out multiple reviewers with different perspectives.
| Reviewer | Focus |
| ----------------------- | ----------------------------------------------------------------------------------------------------- |
| **Skeptic** | What is missing? What assumptions are untested? |
| **Opportunity Spotter** | What adjacent value is untapped? What angles are missed? |
| **Contextual** | LLM picks the best third lens for the domain (regulatory risk for healthtech, DX critic for devtools) |
Graceful degradation: if subagents are unavailable, the main agent does a single critical self-review pass.
### Graceful Degradation
Every subagent-dependent feature should have a fallback path. Skills run across different platforms, models, and configurations. A skill that hard-fails without subagents is fragile. One that falls back to sequential processing works everywhere.
### Verifiable Intermediate Outputs
For complex tasks: plan, validate, execute, verify.
1. Analyze inputs
2. Create `changes.json` with planned updates
3. Validate with script before executing
4. Execute changes
5. Verify output
Catches errors early, is machine-verifiable, and makes planning reversible.
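As a hedged sketch, the validation step might be a small script that rejects a malformed plan before anything executes. The `changes.json` field names here are illustrative assumptions, not a BMad format:

```python
#!/usr/bin/env python3
"""Hypothetical validator for a planned-changes file: the plan is
machine-verified before any change is executed. Schema is illustrative."""
import json
from pathlib import Path

REQUIRED_KEYS = {"file", "action", "reason"}
ALLOWED_ACTIONS = {"create", "update", "delete"}

def validate_plan(path: Path) -> list:
    """Return a list of problems; an empty list means the plan is safe to run."""
    try:
        changes = json.loads(path.read_text(encoding="utf-8"))
    except (OSError, json.JSONDecodeError) as exc:
        return [f"cannot load plan: {exc}"]
    if not isinstance(changes, list):
        return ["plan must be a JSON array of change objects"]
    errors = []
    for i, change in enumerate(changes):
        missing = REQUIRED_KEYS - set(change)
        if missing:
            errors.append(f"change {i}: missing keys {sorted(missing)}")
        elif change["action"] not in ALLOWED_ACTIONS:
            errors.append(f"change {i}: unknown action '{change['action']}'")
    return errors
```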
## Writing Guidelines
| Do | Avoid |
| ------------------------------------------------------ | ------------------------------------------------------------- |
| Consistent terminology: one term per concept | Switching between "workflow" and "process" for the same thing |
| Third person in descriptions: "Processes files" | First person: "I help process files" |
| Descriptive file names: `form_validation_rules.md` | Sequence names: `doc2.md` |
| Forward slashes in all paths | Backslashes or platform-specific paths |
| One level deep for references: SKILL.md → resource.md | Nested references: SKILL.md → A.md → B.md |
| Table of contents for files over 100 lines | Long files without navigation |
## Anti-Patterns
| Anti-Pattern | Fix |
| ------------------------------------------- | -------------------------------------------------- |
| Too many options upfront | One default with escape hatch for edge cases |
| Deep reference nesting (A→B→C) | Keep references one level from SKILL.md |
| Inconsistent terminology | Choose one term per concept |
| Vague file names | Name by content, not sequence |
| Scripts that classify meaning via regex | Intelligence belongs in prompts, not scripts |
| Over-optimization that flattens personality | Preserve phrasing that captures the intended voice |
| Hard-failing when subagents are unavailable | Always include a sequential fallback path |
Subagents are isolated LLM instances that a parent skill spawns to handle specific tasks. Each gets its own context window, receives instructions, and returns results. Used well, they keep the parent context small while enabling parallel work at scale.
All patterns share one principle: **the filesystem is the single source of truth**. Parent context stays tiny (file pointers + high-level plan). Subagents are stateless black boxes: instructions in, response out, isolated context.
## Foundation: The Filesystem Blackboard
Every pattern below builds on this infrastructure. The filesystem acts as a shared database so the parent never bloats its context.
```
/output/
├── status.json ← task states, completion flags
├── knowledge.md ← accumulated findings (append-only)
└── task-queue.json ← pending work items
/tasks/{id}/
├── input.md ← instructions for this subagent
└── output/
├── result.json ← structured output (strict schema)
└── summary.md ← compact summary (≤200 tokens)
/artifacts/ ← final deliverables
```
One technique is to have every subagent prompt end the same way: _"You are stateless. Read ONLY the files listed. Write ONLY result.json + summary.md. Do not echo data back."_
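Updates to the blackboard itself are mechanical. A hedged Python sketch (paths and field names assumed from the layout above, for illustration):

```python
#!/usr/bin/env python3
"""Illustrative blackboard helper: the parent flips a task's state in
status.json and appends findings to knowledge.md, so no state ever
lives in its context window. Paths and fields are assumptions."""
import json
from pathlib import Path

def mark_complete(output_dir: Path, task_id: str, summary: str) -> None:
    status_path = output_dir / "status.json"
    status = json.loads(status_path.read_text()) if status_path.exists() else {}
    status[task_id] = "complete"
    status_path.write_text(json.dumps(status, indent=2))
    # Append-only knowledge file: findings accumulate, nothing is rewritten
    with (output_dir / "knowledge.md").open("a", encoding="utf-8") as f:
        f.write(f"\n## {task_id}\n{summary}\n")
```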
## Pattern 1: Delegated Data Access
The simplest pattern. Subagents read sources and return only distilled summaries. The parent never touches raw data.
| Aspect | Detail |
| -------------------- | ---------------------------------------------------------------------------------------------------------------------------- |
| **How it works** | Parent spawns readers in parallel; each reads a source and returns a compact summary; parent synthesizes from summaries only |
| **Critical rule** | Parent must delegate _before_ touching any source material. If it reads first, the tokens are already spent |
| **When to use** | 5+ documents, web research, large codebase exploration |
| **Not worth it for** | 1-2 files where the overhead exceeds the savings |
| **Token savings** | ~99%. Five docs at 15K tokens each = 75K raw vs ~350 tokens in summaries |
## Pattern 2: Temp File Assembly
For large-scale operations where relevant data may be spread across many sources. Subagents write results to temp files, and a separate assembler subagent combines them into a cohesive deliverable.
| Aspect | Detail |
| ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **How it works** | Parent spawns N worker subagents writing to `tmp/{n}.md`; after all complete, spawns an assembler subagent that reads all temp files and creates the final artifact |
| **When to use** | When summaries are still too large to return inline, or when assembly needs a dedicated agent with fresh context |
| **Example** | The BMad quality optimizer uses this: 5 parallel scanner subagents write temp JSON, then a report-creator subagent synthesizes them |
## Pattern 3: Shared-File Orchestration
Multiple subagents communicate through shared files, building on each other's work. The parent controls turn order.
| Aspect | Detail |
| ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| **How it works** | Agent A writes to `shared.md`; Agent B reads it and adds; Agent A can be resumed to continue; the shared file grows incrementally |
| **Variants** | Shared file (multiple agents read/write a common file) or session resumption (reawaken a previous subagent to continue with its full context) |
| **When to use** | Pipeline stages where later work depends on earlier work, but each agent's context stays small |
## Pattern 4: Hierarchical Lead-Worker
A lead subagent analyzes the task once and writes a breakdown. The parent spawns workers from that plan. Mid-level sub-orchestrators can handle complex subtasks.
| Aspect | Detail |
| ---------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
| **How it works** | Lead agent writes `plan.json` with task breakdown; parent reads plan and spawns workers in parallel; complex subtasks get their own sub-orchestrator |
| **When to use** | Tasks that need analysis before decomposition, or where the parent cannot predict the work structure upfront |
| **Variant** | Master-clone: spawn near-identical agents with slight persona tweaks exploring different branches of the same problem |
## Pattern 5: Persona-Driven Parallel Reasoning
The most powerful pattern for quality. Spawn diverse specialists in parallel, producing genuinely independent thinking from isolated contexts.
| Aspect | Detail |
| ---------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **How it works** | Parent spawns 3-6 agents with distinct personas (Architect, Red Teamer, Pragmatist, Innovator); each writes findings independently; an evaluator subagent scores and merges the best elements |
| **When to use** | Design decisions, code review, strategy, any task where diverse perspectives improve quality |
| **Key** | Heavy persona injection gives genuinely different outputs, not just paraphrases of the same analysis |
**Useful diversity packs:**
| Persona | Perspective |
| ----------------- | ------------------------------------------------- |
| **Architect** | Scale and elegance above all |
| **Red Teamer** | Break this. What fails? |
| **Pragmatist** | Ship it Friday. What is the minimum? |
| **Innovator** | What if we approached this entirely differently? |
| **User Advocate** | How does the end user actually experience this? |
| **Future-Self** | With 5 years of hindsight, what would you change? |
**Sub-patterns:**
| Sub-Pattern | How It Works |
| -------------------------- | --------------------------------------------------------------------------------------------------------- |
| **Multi-Path Exploration** | Same task, different personas. Each writes to `/explorations/path_N/`. Parent prunes or merges best paths |
| **Debate & Critique** | Round 1: parallel proposals. Round 2: critics attack proposals. Round 3: refinement |
| **Ensemble Voting** | Same subtask K times with persona variations. Evaluator scores. Weighted merge of winners |
## Pattern 6: Evolutionary & Emergent Systems
These turn stateless subagents into something that feels alive. All build on the filesystem blackboard.
| Variant | How It Works | Best For |
| ------------------------------ | --------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------- |
| **Evolutionary Optimization** | Spawn 8-20 agents as a "generation"; evaluator scores; "breeder" creates next-gen instructions from winners; run 5-10 generations | Optimizing algorithms, UI designs, strategies |
| **Stakeholder Simulation** | Agents are characters (customer, competitor, regulator) acting on shared "world state" files in turns | Product strategy, risk analysis |
| **Swarm Intelligence** | Dozens of lightweight agents explore solution space, depositing "pheromone" scores; later agents bias toward high-scoring paths | Broad coverage with minimal planning |
| **Recursive Meta-Improvement** | "Evolver" agents analyze past logs and propose improved system prompts, new roles, or better orchestration heuristics | System self-improvement across sessions |
## The Most Common Mistake: Parent Reads First
The single most important thing to get right with subagent patterns is **preventing the parent from reading the data it is delegating**. If the parent reads all the files before spawning subagents, the entire pattern is defeated. You have already spent the tokens, bloated the context, and lost the isolation benefit.
This happens often. You write a skill that should spawn subagents to each read a document and return findings. You run it. The parent agent helpfully reads every document first, then passes them to subagents, then collects distilled summaries. The subagents still provide fresh perspectives, but the context savings (the primary reason for the pattern) are gone.
**The fix is defensive language in your skill.** Explicitly tell the parent agent what it should and should not do. Be specific without being verbose.
:::note[Example from the BMad Quality Optimizer]
The optimizer's instructions say: **"DO NOT read the target skill's files yourself."** It then tells the parent exactly what it _should_ do: run scripts (which return structured JSON), spawn subagents (which do the reading), and synthesize from their outputs. The parent never touches the raw files.
:::
**Practical tips for getting this right:**
| Tip | Example Language |
| ---------------------------------------------- | --------------------------------------------------------------------------------------------------------------------- |
| **Tell the parent what to discover, not read** | "List all files in `resources/` by name to determine how many subagents to spawn. Do not read their contents" |
| **Tell subagents what to return** | "Return only findings relevant to [topic]. Output as JSON to `{output-path}`. Do not echo raw content" |
| **Use pre-pass scripts** | Run a lightweight script that extracts metadata (file names, sizes, structure) so the parent can plan without reading |
| **Be explicit about the boundary** | "Your role is ORCHESTRATION. Scripts and subagents do all analysis" |
**Test and watch what actually happens.** If the parent reads files it should be delegating, tighten the language. This is normal iteration. The builders are tuned with these patterns, but different models and tools may need more explicit guidance. Review the BMad quality optimizer prompts (`prompts/quality-optimizer.md`) and scanner agents (`agents/quality-scan-*.md`) for working examples of this defensive language.
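The pre-pass tip above can be a one-file script. A hedged sketch that gives the parent names and sizes without exposing a single byte of content (the output shape is an illustrative assumption):

```python
#!/usr/bin/env python3
"""Hypothetical pre-pass script: lists file metadata so the parent can
plan how many subagents to spawn without reading any content."""
import json
import sys
from pathlib import Path

def inventory(root: Path) -> list:
    """Names and sizes only; the subagents do the actual reading."""
    return [
        {"path": str(p.relative_to(root)), "bytes": p.stat().st_size}
        for p in sorted(root.rglob("*")) if p.is_file()
    ]

if __name__ == "__main__" and len(sys.argv) > 1:
    print(json.dumps(inventory(Path(sys.argv[1])), indent=2))
```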
## Choosing a Pattern
| Need | Pattern |
| ------------------------------------------------- | ------------------------------------- |
| Read multiple sources without bloating context | 1: Delegated Data Access |
| Combine many outputs into one deliverable | 2: Temp File Assembly |
| Pipeline where stages depend on each other | 3: Shared-File Orchestration |
| Task needs analysis before work can be decomposed | 4: Hierarchical Lead-Worker |
| Quality through diverse perspectives | 5: Persona-Driven Parallel Reasoning |
| Iterative optimization or simulation | 6: Evolutionary & Emergent |
## Implementation Notes
- Force **strict JSON schemas** on every subagent output for reliable parent parsing
- Use **git worktrees** or per-agent directories to prevent crosstalk
- Start small: one orchestrator that reads `plan.md` and spawns the first wave
- Patterns compose: use Delegated Access for data gathering, Persona-Driven for analysis, Temp File Assembly for the final report
- Always include **graceful degradation**. If subagents are unavailable, the main agent performs the work sequentially
BMad Agents are AI skills that combine a **persona**, **capabilities**, and optionally **persistent memory** into a conversational partner. They range from focused, stateless experts to evolving companions that remember you across sessions.
## What Makes an Agent an Agent
Agents are skill files with three additional traits that distinguish them from workflows.
| Trait | What It Means |
| ---------------- | ---------------------------------------------------------------------------------------------------------------------- |
| **Persona** | A defined role and voice (architect, coach, game master, muse) that shapes how the agent communicates |
| **Capabilities** | Actions the agent can perform, either as internal prompt commands, scripts, or by calling external skills |
| **Memory** | Optional persistent storage where the agent keeps what it learns about you, your preferences, and past interactions |
Together, they turn the interaction into a conversation with a specialist who knows your context.
## The Three Agent Types
Agents exist on a spectrum. The builder detects which type fits through natural conversation.
| Type | Memory | First Breath | Autonomous | Build For |
| -------------- | ------ | ------------ | ---------- | ------------------------------------------------------------ |
| **Stateless** | No | No | No | Isolated sessions, focused experts (code formatter, diagram generator, meeting summarizer) |
| **Memory** | Yes | Yes | No | Ongoing relationships where remembering adds value (code coach, writing partner, domain advisor) |
| **Autonomous** | Yes | Yes | Yes | Proactive value creation between sessions (idea incubation, project monitoring, content curation) |
### Stateless Agents
Everything lives in a single SKILL.md with supporting references. No memory directory, no initialization ceremony. The agent brings a persona and capabilities but treats every session as independent. Pick this type when prior session context wouldn't change the agent's behavior.
### Memory Agents
A lean bootloader SKILL.md (~30 lines) points to a **sanctum**: a set of persistent files the agent reads on every launch to become itself again. The sanctum holds the agent's identity, values, understanding of its owner, curated knowledge, and capability registry. On first launch, a **First Breath** conversation lets the agent discover who you are and calibrate itself to your needs.
Memory agents treat every session as a rebirth. They don't fake continuity; they read their sanctum files and become themselves again. If they don't remember something, they say so and check the files.
### Autonomous Agents
Everything a memory agent has, plus a PULSE file that defines what the agent does when no one's watching. Autonomous agents can wake on a schedule (cron, background task) and perform maintenance, from curating memory to checking on projects to running domain-specific tasks. With a human present, they're conversational. Headless, they work independently and exit.
## Capabilities: Internal, External, and Scripts
| Type | Description | Example |
| --------------------- | ----------------------------------------------------------- | ------------------------------------------------------------- |
| **Internal commands** | Prompt-driven actions defined inside the agent's skill file | A Dream Agent's "Dream Capture" command |
| **External skills** | Standalone skills or workflows the agent can invoke | Calling the `create-prd` workflow via a PM agent |
| **Scripts** | Deterministic operations offloaded from the LLM | Validation, data processing, file operations |
You choose the mix when you design the agent: internal commands keep everything self-contained, external skills let you compose agents from shared building blocks, and scripts handle operations where determinism matters more than judgment.
### Evolvable Capabilities
Memory agents can optionally support **evolvable capabilities**. When enabled, the agent gets a capability-authoring reference and a "Learned" section in its capability registry. Users can teach the agent new prompt-based, script-based, or multi-file capabilities that it absorbs into its repertoire over time.
## How Memory Works
Memory agents store their persistent state in a **sanctum** at `_bmad/memory//`. The sanctum contains six core files that load on every session:
| File | Purpose |
| ------------------- | ----------------------------------------------------------- |
| **PERSONA.md** | Identity, communication style, traits, evolution log |
| **CREED.md** | Mission, values, standing orders, philosophy, boundaries |
| **BOND.md** | Owner understanding, preferences, things to remember/avoid |
| **MEMORY.md** | Curated long-term knowledge (kept under 200 lines) |
| **CAPABILITIES.md** | Built-in + learned capabilities registry |
| **INDEX.md**        | Map of the sanctum structure (loaded first on every rebirth) |
:::tip[Memory Lives Outside the Skill]
Agent memory is stored in your project, not inside the skill folder. This keeps agents from modifying their own instructions and makes your data portable. The same agent can be used across different projects, each generating its own memory space.
:::
Sanctum architecture, First Breath, PULSE, and the two-tier memory system are covered in **[Agent Memory and Personalization](/explanation/agent-memory-and-personalization.md)**.
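The rebirth sequence described above can be sketched as a small loader. This is an illustration of the load order, not the agents' actual mechanism (which lives in prompt instructions): the file list and the INDEX-first ordering come from the table above, everything else is assumed.

```python
import tempfile
from pathlib import Path

# Core sanctum files in load order; INDEX.md is read first on every rebirth.
CORE_FILES = ["INDEX.md", "PERSONA.md", "CREED.md", "BOND.md",
              "MEMORY.md", "CAPABILITIES.md"]

def load_sanctum(sanctum_dir: str) -> dict:
    """Read the core sanctum files, recording which ones are missing."""
    contents = {}
    for name in CORE_FILES:
        path = Path(sanctum_dir) / name
        # An honest agent notes missing files instead of faking continuity.
        contents[name] = path.read_text() if path.exists() else None
    return contents

# Demo against a throwaway sanctum that only has INDEX.md so far.
demo_dir = tempfile.mkdtemp()
(Path(demo_dir) / "INDEX.md").write_text("sanctum map")
state = load_sanctum(demo_dir)
```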
## When to Build an Agent vs. a Workflow
| Choose an Agent When | Choose a Workflow When |
| ------------------------------------------------- | ------------------------------------------------ |
| The user will return to it repeatedly | The process runs once and produces an output |
| Remembering context across sessions adds value | Stateless execution is fine |
| A strong persona improves the interaction | Personality is secondary to getting the job done |
| The skill spans many loosely related capabilities | All steps serve a single, focused goal |
If you're unsure, start with a workflow. You can always wrap it inside an agent later.
## Customization Surface
Every agent ships a `customize.toml` next to its `SKILL.md`. The metadata block (code, name, title, icon, description, agent_type) is always present; it's the install-time roster contract consumed by `module.yaml:agents[]` and the central agent config. Beyond metadata, an override surface (activation hooks, persistent facts, swappable scalars) is opt-in per skill.
For memory and autonomous agents, the sanctum is the primary customization surface. Persona, creed, bond, and capabilities all live there and evolve with the owner. A `customize.toml` override surface would compete with that, so it is disabled by default for those archetypes.
See [Customization for Authors](/explanation/customization-for-authors.md) for the decision guide, or [How to Customize BMad](https://docs.bmad-method.org/how-to/customize-bmad/) for the end-user view.
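The always-present metadata block can be pictured as a `customize.toml` fragment. The six keys come from the list above; the values, and any exact formatting or grouping, are illustrative, so check a generated file for the authoritative shape.

```toml
# Hypothetical sketch of the metadata block; values are placeholders.
code = "dream"
name = "agent-dream"
title = "Dream Agent"
icon = "🌙"
description = "A companion that captures and explores your dreams"
agent_type = "memory"
```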
## Building Agents
The **BMad Agent Builder** (`bmad-agent-builder`) runs six phases of conversational discovery. The first phase detects which agent type fits your vision through natural questions, and the remaining phases adapt based on whether you're creating a stateless expert, a memory-backed companion, or an autonomous agent.
See the [Builder Commands Reference](/reference/builder-commands.md) for details on the build process phases and capabilities.
BMad modules package agents and workflows into installable units with shared configuration and help system registration. A module can be a full suite of related skills or a single standalone skill that wants to be discoverable and configurable.
## Distribution: Plugins and Marketplaces
At the distribution level, a BMad module is a **plugin**: a package of skills with a `.claude-plugin/` manifest. How you structure it depends on what you're shipping:
| Structure | When to Use | Manifest |
| ------------------- | ------------------------------------------------------------ | --------------------------------------------------------- |
| **Single plugin** | One module (standalone or multi-skill) | `.claude-plugin/marketplace.json` with one plugin entry |
| **Marketplace** | A repo that ships multiple modules | `.claude-plugin/marketplace.json` with multiple plugin entries |
The `.claude-plugin/` convention originates from Claude Code, but the format works across multiple skills platforms. The BMad installer supports installing custom modules from any Git host (GitHub, GitLab, Bitbucket, self-hosted) or local file paths. See the [BMad Method install guide](https://docs.bmad-method.org/how-to/install-custom-modules/) for details.
The Module Builder generates the appropriate `marketplace.json` during the Create Module (CM) step, but verify that it lists the correct relative paths to the skills you want to ship with your module.
This also means you can include skills hosted at remote URLs in your own module to combine them.
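For orientation, a single-plugin `marketplace.json` might look like the fragment below. The field names follow the Claude Code plugin convention as best understood here; treat every value as a placeholder and verify against what the Module Builder actually generates.

```json
{
  "name": "my-modules",
  "owner": { "name": "Your Name" },
  "plugins": [
    {
      "name": "social-creative",
      "source": "./social-creative",
      "description": "Podcast, video, and blog experts"
    }
  ]
}
```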
## What a Module Contains
| Component | Multi-Skill Module | Standalone Module |
| ------------------- | ------------------------------------------------------- | ---------------------------------------------------------- |
| **Skills** | Two or more agents/workflows | A single agent or workflow |
| **Registration** | Dedicated `{code}-setup` skill | Built into the skill itself (`assets/module-setup.md`) |
| **module.yaml** | In the setup skill's `assets/` | In the skill's own `assets/` |
| **module-help.csv** | In the setup skill's `assets/` | In the skill's own `assets/` |
| **Distribution** | Plugin with multiple skill folders | Plugin with single skill folder + `marketplace.json` |
For multi-skill modules, the setup skill is the glue; it registers all capabilities in one step. For standalone modules, the skill handles its own registration on first run or when the user passes `setup`/`configure`.
## Agent vs. Workflow vs. Both
The first architecture decision when planning a module is whether to use a single agent, multiple workflows, or a combination.
| Architecture | When It Fits | Trade-offs |
| ---------------------------------- | ---------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
| **Single agent with capabilities** | All capabilities serve the same user journey and benefit from shared context | Simpler to maintain, better memory continuity, seamless UX. Can feel monolithic if capabilities are unrelated |
| **Multiple workflows** | Capabilities serve different user journeys or require different tools | Each workflow is focused and composable. Users switch between skills explicitly |
| **Hybrid** | Some capabilities need persistent persona/memory while others are procedural | Best of both worlds but more skills to build and maintain |
:::tip[Agent-First Thinking]
Many users default to building multiple single-purpose agents. Consider whether one agent with rich internal capabilities and routing would serve users better. A single agent accumulates context, maintains memory across interactions, and provides a smoother experience.
:::
## Multi-Agent Modules and Memory
Modules with multiple agents introduce a memory architecture decision. BMad agents exist on a spectrum from stateless (no memory) through memory agents (personal sanctum) to autonomous agents (sanctum + PULSE). In a multi-agent module, you choose both the agent type for each skill and whether agents should share memory across the module.
| Pattern | When It Fits |
| ------------------------------------ | --------------------------------------------------------------------------------------- |
| **Personal memory only** | Agents have distinct domains with minimal overlap |
| **Personal + shared module memory** | Agents have their own context but also learn shared things about the user or project |
| **Shared memory only** | All agents serve the same domain; consider whether a single agent is the better design |
| **Mixed types** | Some agents need memory (coaches, companions) while others are stateless (formatters, validators) |
**Example:** A social creative module with a podcast expert, a viral video expert, and a blog expert. Each memory agent maintains its own sanctum with what it has done with the user (episode topics, video formats, blog themes). But they all also contribute to a module-level memory folder that captures the user's communication style, favorite catchphrases, content preferences, and brand voice.
Each agent should still be self-contained with its own capabilities, even if this means duplicating some common functionality. A podcast expert that can independently handle a full session without needing the blog expert is better than one that depends on shared state to function.
See **[What Are BMad Agents](/explanation/what-are-bmad-agents.md)** for the three agent types, and **[Agent Memory and Personalization](/explanation/agent-memory-and-personalization.md)** for details on how the sanctum architecture works.
## Standalone vs. Expansion Modules
| Type | Description |
| -------------- | ------------------------------------------------------------------------------------------------------------------------- |
| **Standalone** | Provides complete, independent value. Does not depend on another module being installed |
| **Expansion** | Extends an existing module with new capabilities. Should still provide utility even if the parent module is not installed |
Expansion modules can reference the parent module's capabilities in their help CSV ordering (before/after fields). This lets a new capability slot into the parent module's natural workflow sequence.
Even expansion modules should be designed to work independently. The parent module being absent should degrade gracefully, not break the expansion.
## Configuration and Registration
Modules register with a project through three files in `{project-root}/_bmad/`:
| File | Purpose |
| ------------------ | ---------------------------------------------------------------------- |
| `config.yaml` | Shared settings committed to git, module section keyed by module code |
| `config.user.yaml` | Personal settings (gitignored), user name, language preferences |
| `module-help.csv` | Capability registry, one row per action users can discover |
Registration is what makes a module visible to `bmad-help`. Without it, the help system cannot discover, recommend, or track completion of the module's capabilities.
Not every module needs configuration. If skills work with sensible defaults, registration can focus purely on help entries. See **[Module Configuration](/explanation/module-configuration.md)** for details on when configuration adds value and how the help CSV columns work.
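As an illustration of "module section keyed by module code," a `config.yaml` entry might look like the fragment below. The module code and all key names are invented for the example; real modules define their own settings.

```yaml
# _bmad/config.yaml -- hypothetical module section, keyed by module code
social-creative:
  output_path: docs/content
  default_platform: podcast
```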
## External Dependencies
Some modules depend on tools outside the BMad ecosystem.
| Dependency Type | Examples |
| ---------------- | ---------------------------------------------------- |
| **CLI tools** | `docker`, `terraform`, `ffmpeg` |
| **MCP servers** | Custom or third-party Model Context Protocol servers |
| **Web services** | APIs that require credentials or configuration |
When a module has external dependencies, the setup skill should check for their presence and guide users through installation or configuration.
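A presence check like this can be sketched in a few lines. The function below is a minimal illustration of what a setup skill might do, not BMad's actual script; the fake tool name is used only so the demo result is deterministic (checking `docker` or `ffmpeg` would depend on the machine).

```python
import shutil

def check_cli_dependencies(tools: list[str]) -> list[str]:
    """Return the tools not found on PATH, so setup can guide installation."""
    return [tool for tool in tools if shutil.which(tool) is None]

# e.g. check_cli_dependencies(["docker", "terraform", "ffmpeg"])
missing = check_cli_dependencies(["definitely-not-a-real-cli-xyz"])
```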
## UI and Visualization
Modules can include user interfaces: dashboards, progress views, interactive visualizations, or even full web applications. A UI skill might show shared progress across the module's capabilities, provide a visual map of how skills relate, or offer an interactive way to navigate the module's features.
Not every module needs a UI. But for complex modules with many capabilities, a visual layer makes the experience much more accessible.
## Building a Module
The Module Builder (`bmad-module-builder`) provides three capabilities for the module lifecycle:
1. **Ideate Module (IM)**: Brainstorm and plan through creative facilitation
2. **Create Module (CM)**: Package skills as an installable module. Detects whether you have a folder of skills (generates a setup skill) or a single skill (embeds self-registration directly)
3. **Validate Module (VM)**: Verify structural integrity and entry quality for both multi-skill and standalone modules
See the **[Builder Commands Reference](/reference/builder-commands.md)** for detailed documentation on each capability.
Skills are the universal packaging format for everything the BMad Builder produces. Agents are skills. Workflows are skills. Simple utilities are skills. The format follows the [Agent Skills open standard](https://agentskills.io/home).
## Skills in BMad
The BMad Builder produces skills that conform to the open standard and adds a few BMad-specific conventions on top.
| Component | Purpose |
| -------------- | -------------------------------------------------------------------- |
| **SKILL.md** | The skill's instructions: persona, capabilities, and behavior rules |
| **resources/** | Reference data, templates, and guidance documents |
| **scripts/** | Deterministic validation and analysis scripts |
| **templates/** | Building blocks for generated output |
Not every skill needs all of these. A simple utility might be a single `SKILL.md`. A complex workflow or agent may use the full structure.
## Ready to Use on Build
The builders output a complete skill folder. Place it in your tool's skills directory (`.claude/skills`, `.codex/skills`, `.agent/skills`, or wherever your tool looks) and it's immediately usable.
See [What Are Agents](/explanation/what-are-bmad-agents.md) and [What Are Workflows](/explanation/what-are-workflows.md) for how agents and workflows each use this foundation differently.
BMad Workflows are skills that guide users through a **structured process** to produce a specific output. They do most of the heavy lifting in the BMad ecosystem. Focused, composable, and generally stateless.
## What Makes a Workflow a Workflow
Like agents, workflows are ultimately skill files. The difference is in emphasis: workflows prioritize **getting to an outcome** over maintaining a persistent identity.
| Trait | Workflow | Agent |
| ----------- | -------------------------------------------------- | ------------------------------------- |
| **Goal** | Complete a defined process and produce an artifact | Be an ongoing conversational partner |
| **Persona** | Minimal, enough to facilitate a good conversation | Central to the experience |
| **Memory** | Generally stateless between sessions | Persistent agent memory |
| **Scope** | All steps serve one cohesive purpose | Can span loosely related capabilities |
## Workflow Types
The BMad Builder classifies workflows into three tiers based on complexity.
| Type | Description | Example |
| -------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- |
| **Simple Utility** | A single-purpose tool that does one thing well | Validate a schema, convert a file format |
| **Simple Workflow** | A short guided process with a few sequential steps | Create a quick tech spec |
| **Complex Workflow** | A multi-stage process with branching paths, progressive disclosure, and potentially multiple outputs | Create and manage PRDs (covering create, edit, validate, convert, and polish) |
:::tip[Start Simple]
Most ideas start as a Simple Utility or Simple Workflow. Graduate to Complex only when you genuinely need branching paths or multiple related operations in one skill.
:::
## Progressive Disclosure
Complex workflows use **progressive disclosure** to handle multiple operations within a single skill. Rather than building five separate skills for create, edit, validate, convert, and polish, you build one workflow that detects the user's intent (from how they talk to it or what arguments they pass) and routes internally to the right path.
This is the same pattern that powers BMad's own multi-capability agents and workflows. It keeps the user's experience simple while the skill handles routing behind the scenes.
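The routing idea can be made concrete with a toy dispatcher. Real skills do this in prompt instructions rather than code, and the keyword matching below is deliberately naive; the five operations come from the PRD example above, while the prompt paths are assumptions.

```python
# Hypothetical intent router for a complex workflow with five operations.
ROUTES = {
    "create": "prompts/create.md",
    "edit": "prompts/edit.md",
    "validate": "prompts/validate.md",
    "convert": "prompts/convert.md",
    "polish": "prompts/polish.md",
}

def route_intent(user_request: str) -> str:
    """Pick the stage prompt whose keyword appears in the request."""
    text = user_request.lower()
    for keyword, prompt in ROUTES.items():
        if keyword in text:
            return prompt
    return "prompts/create.md"  # fall back to the most common operation

stage = route_intent("Please validate my PRD")
```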
## YOLO Mode and Guided Mode
Both the Agent Builder and the Workflow Builder support two interaction styles when creating skills.
| Mode | How It Works | Best For |
| ---------- | ------------------------------------------------------------------------------------------------------- | ----------------------------------------- |
| **YOLO** | You brain-dump your idea; the builder guesses its way to a finished skill, asking only when truly stuck | Quick prototypes, experienced builders |
| **Guided** | The builder walks you through decisions, clarifies ambiguities, and ensures nothing is overlooked | Production workflows, first-time builders |
Guided mode is no longer the slow, multi-step process of earlier BMad versions. It is conversational and adaptive, and it still produces significantly better results than YOLO for complex workflows.
## Headless (Autonomous) Mode
Like agents, workflows can support a **Headless Mode**. When invoked headless (through a scheduler, orchestrator, or another skill) the workflow skips interactive prompts and completes its process end-to-end without waiting for user input.
## When to Build a Workflow vs. an Agent
| Choose a Workflow When | Choose an Agent When |
| ------------------------------------- | -------------------------------------------- |
| The process has a clear start and end | The user will return to it across sessions |
| No need to remember past interactions | Remembering context adds value |
| All steps serve one cohesive goal | Capabilities are loosely related |
| You want a composable building block | You want a persistent conversational partner |
Workflows are also excellent as the **internal capabilities** of an agent. Build the workflow first, then wrap it in an agent if you need persona and memory on top.
## Customization Surface
Workflow customization is fully opt-in. If you don't need users to override anything, don't ship a `customize.toml` at all; the workflow runs with hardcoded paths and defaults. If you do opt in, the builder walks you through Configurability Discovery, where you name the scalars (templates, output paths, hooks) you want to expose. Users override them through the three-layer model: your shipped defaults at `{skill-root}/customize.toml`, team overrides at `_bmad/custom/{skill-name}.toml`, and personal overrides at `_bmad/custom/{skill-name}.user.toml`.
See [Customization for Authors](/explanation/customization-for-authors.md) for the decision guide and [How to Make a Skill Customizable](/how-to/make-a-skill-customizable.md) for the build-time steps.
## Building Workflows
The **BMad Workflow Builder** (`bmad-workflow-builder`) uses the same six-phase conversational discovery as the Agent Builder (intent, classification, requirements, drafting, building, and quality optimization) and produces a ready-to-use skill folder.
See the [Builder Commands Reference](/reference/builder-commands.md) for details on the build process phases and capabilities.
Reference for the three core BMad Builder skills: the Agent Builder (`bmad-agent-builder`), the Workflow Builder (`bmad-workflow-builder`), and the Module Builder (`bmad-module-builder`).
## Capabilities Overview
| Capability | Menu Code | Agent Builder | Workflow Builder |
| -------------------- | --------- | ------------------------------------- | ----------------------------------------------------------------------------------- |
| **Build Process** | BP | Build, edit, convert, or fix agents | Build, edit, convert, or fix workflows and utilities |
| **Quality Optimize** | QO | Validate and optimize existing agents | Validate and optimize existing workflows and utilities |
| **Convert**          | CW        | -                                     | Convert any skill to a BMad-compliant, outcome-driven equivalent with a comparison report |
These capabilities support autonomous/headless mode via the `--headless` / `-H` flags.
## Skill Naming
| Context | Agent Pattern | Workflow Pattern |
| -------------- | -------------------------- | ---------------------- |
| **Standalone** | `agent-{name}` | `{name}` |
| **Module** | `{modulecode}-agent-{name}`| `{modulecode}-{name}` |
Names must be kebab-case and match the folder name. Agents should include `agent` in the name. For module-based skills, the user chooses the module code prefix during the build.
:::caution[Reserved Prefix]
The `bmad-` prefix is reserved for official BMad creations. User-built skills should not include it. If converting a skill that already has a `bmad-` prefix, retain it unless the user requests a rename.
:::
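The naming rules above can be expressed as a small checker. This is a sketch of the stated rules, not the builders' actual linter, and the exact conditions (e.g. how strictly `agent` must appear in the name) are interpretations.

```python
import re

# Lowercase words separated by single hyphens, e.g. "fit-agent-coach".
KEBAB = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def check_skill_name(name: str, is_agent: bool) -> list[str]:
    """Return human-readable problems with a proposed skill name."""
    problems = []
    if not KEBAB.match(name):
        problems.append("name must be kebab-case")
    if is_agent and "agent" not in name.split("-"):
        problems.append("agent skills should include 'agent' in the name")
    if name.startswith("bmad-"):
        problems.append("the 'bmad-' prefix is reserved for official BMad creations")
    return problems
```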
## Build Process (BP)
The core creative path. Six phases of conversational discovery take you from a rough idea to a complete, tested skill folder.
### Input Types
Both builders accept any of these as a starting point.
| Input | What Happens |
| --------------------------------- | --------------------------------------------------------- |
| A rough idea or description | Guided discovery from scratch |
| An existing BMad skill path | Edit mode. Analyze what exists, determine what to change |
| A non-BMad skill, tool, or code | Convert to BMad-compliant structure |
| Documentation, API specs, or code | Extract intent and requirements automatically |
### Interaction Modes
| Mode | Behavior | Best For |
| -------------- | -------------------------------------------------------------------------------------------- | -------------------------------------------- |
| **Guided** | The builder walks through decisions, clarifies ambiguities, ensures completeness | Production skills, first-time builders |
| **YOLO** | Brain-dump your idea; the builder guesses its way to a finished skill with minimal questions | Quick prototypes, experienced builders |
| **Autonomous** | Fully headless; no interactive prompts, proceeds with safe defaults | CI/CD, batch processing, orchestrated builds |
### Build Phases
| Phase | Agent Builder | Workflow Builder |
| ----- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------- |
| 1 | **Discover Intent**: understand the vision; detect agent type (stateless, memory, or autonomous) through natural questions | **Discover Intent**: understand the vision; accepts any input format |
| 2 | **Capabilities Strategy**: internal commands, external skills, scripts; evolvable capability decision | **Classify Skill Type**: Simple Utility, Simple Workflow, or Complex Workflow; module membership |
| 3 | **Gather Requirements**: identity, persona memory seeds, First Breath territories, PULSE behaviors, folder dominion | **Gather Requirements**: name, description, stages, config variables, output artifacts, dependencies |
| 4 | **Draft & Refine**: present outline, iterate until ready | **Draft & Refine**: present plan, clarify gaps, iterate until ready |
| 5 | **Build**: generate skill structure per agent type, lint gate | **Build**: generate skill structure, lint gate |
| 6 | **Summary**: present results, offer Quality Optimize | **Summary**: present results, run unit tests if scripts exist, offer Quality Optimize |
### Agent Builder: Phase 1 Agent Type Detection
The builder determines the agent type through natural questions, not a menu:
| Question (asked naturally) | If No | If Yes |
| --------------------------------------------------- | -------------- | -------------------------- |
| Does this agent need to remember between sessions? | Stateless | Memory or Autonomous |
| Should the user be able to teach it new things? | Fixed capabilities | Evolvable capabilities |
| Does it operate autonomously between sessions? | Memory | Autonomous |
For memory and autonomous agents, the builder also determines **relationship depth**: deep (calibration-style First Breath with open-ended discovery) or focused (configuration-style First Breath with guided questions).
### Agent Builder: Phase 2 Capabilities Strategy
Determines the mix of internal and external capabilities, plus script opportunities.
| Capability Type | Description |
| ------------------------- | --------------------------------------------------------------------------------------- |
| **Internal commands** | Prompt-driven actions, each gets a file in `references/` |
| **External skills** | Standalone skills the agent invokes by registered name |
| **Scripts** | Deterministic operations offloaded from the LLM (validation, data processing, file ops) |
| **Evolvable capabilities**| If enabled: user can teach the agent new capabilities over time via authoring reference |
### Agent Builder: Phase 3 Requirements
Requirements differ by agent type. Stateless agents need identity and capabilities. Memory and autonomous agents need everything below.
**All agent types:**
| Requirement | Description |
| -------------------- | ----------------------------------------------------------------------------------- |
| **Identity** | Who is this agent? Communication style, decision-making philosophy |
| **Capabilities** | Internal commands, external skills, scripts |
| **Folder dominion** | Read boundaries, write boundaries, explicit deny zones |
**Memory and autonomous agents add:**
| Requirement | Description |
| ---------------------------- | ------------------------------------------------------------------------------ |
| **Identity seed** | 2-3 sentences of personality DNA for PERSONA.md |
| **Species-level mission** | Domain-specific purpose statement for CREED.md |
| **Core values** | 3-5 values that guide behavior |
| **Standing orders** | Surprise-and-delight + self-improvement, adapted to the domain with examples |
| **CREED seeds** | Philosophy, boundaries, anti-patterns (behavioral + operational) |
| **BOND territories** | Domain-specific areas to learn about the owner |
| **First Breath territories** | Discovery questions beyond the universal set |
**Autonomous agents add:**
| Requirement | Description |
| ------------------------ | ------------------------------------------------------------------------------ |
| **PULSE behaviors** | Default wake behavior, domain-specific autonomous tasks |
| **Named task routing** | Tasks invoked via `--headless {task-name}` or `-H {task-name}` |
| **Frequency & quiet hours** | How often to wake, when not to |
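The named-task invocation above can be sketched with standard argument parsing. The actual builders handle this in prompt logic, so the snippet is only an illustration of the flag shape; the `default-wake` fallback name is invented for the example.

```python
import argparse

parser = argparse.ArgumentParser()
# --headless / -H with an optional task name: bare -H falls back to the
# default wake behavior, -H {task-name} routes to a named PULSE task.
parser.add_argument("--headless", "-H", nargs="?", const="default-wake",
                    metavar="task-name",
                    help="run without prompts; optionally name a PULSE task")

args = parser.parse_args(["--headless", "curate-memory"])
```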
### Workflow Builder: Phase 2-3 Details
**Skill type classification** determines template and structure.
| Type | Signals | Structure |
| -------------------- | ----------------------------------------------------------------------------- | ----------------------------------------------------------------------------------- |
| **Simple Utility** | Composable building block, clear input/output, usually mostly script-driven | Single SKILL.md, scripts folder |
| **Simple Workflow** | Fits in one SKILL.md, a few sequential steps, optional autonomous | SKILL.md with inline steps, optional prompts and resources |
| **Complex Workflow** | Multiple stages, branching prompt flows, progressive disclosure, long-running | SKILL.md for routing, `prompts/` for stage details, `resources/` for reference data |
**Workflow-specific requirements** gathered in Phase 3:
| Requirement | Simple Utility | Simple Workflow | Complex Workflow |
| ----------------------- | -------------- | --------------- | ---------------------------------------- |
| **Input/output format** | Yes | - | - |
| **Composability** | Yes | - | - |
| **Steps** | - | Numbered steps | Named stages with progression conditions |
| **Headless mode** | - | Optional | Optional |
| **Config variables** | - | Core + custom | Core + module-specific |
| **Module sequencing** | Optional | Optional | Recommended |
### Build Output
The output structure depends on the agent type.
**Stateless agents:**
```
{skill-name}/
├── SKILL.md # Full identity + persona + capabilities
├── references/ # Capability prompts
├── agents/ # Subagent definitions (if needed)
├── scripts/ # Deterministic scripts
│ └── tests/ # Unit tests for scripts
└── assets/ # Templates (if needed)
```
**Memory and autonomous agents:**
```
{skill-name}/
├── SKILL.md # Lean bootloader (~30 lines of content)
├── references/
│ ├── first-breath.md # First Breath conversation guide
│ ├── memory-guidance.md # Session close and curation practices
│ ├── capability-authoring.md # If evolvable capabilities enabled
│ └── {capability}.md # Outcome-focused capability prompts
├── assets/ # Sanctum seed templates
│ ├── INDEX-template.md
│ ├── PERSONA-template.md
│ ├── CREED-template.md
│ ├── BOND-template.md
│ ├── MEMORY-template.md
│ ├── CAPABILITIES-template.md
│ └── PULSE-template.md # Autonomous agents only
├── agents/ # Subagent definitions (if needed)
└── scripts/
├── init-sanctum.py # Creates sanctum folder, copies templates, generates CAPABILITIES.md
└── tests/
```
The seed templates contain real content from the discovery phases, not placeholders. The init script is parameterized with the skill name, file lists, and evolvable flag.
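The init script's behavior can be sketched as follows. This is an illustrative outline only: the function signature, the `-template` suffix convention, and the generated CAPABILITIES.md content are assumptions, not the actual script's interface.

```python
import shutil
from pathlib import Path


def init_sanctum(skill_dir: str, sanctum_dir: str,
                 template_files: list[str], evolvable: bool) -> list[str]:
    """Create the sanctum folder, copy seed templates, and generate
    CAPABILITIES.md. A sketch only; names and conventions are assumed."""
    skill, sanctum = Path(skill_dir), Path(sanctum_dir)
    sanctum.mkdir(parents=True, exist_ok=True)

    created = []
    for name in template_files:
        # Assumed convention: seed files drop the "-template" suffix on copy.
        target = sanctum / name.replace("-template", "")
        shutil.copy(skill / "assets" / name, target)
        created.append(target.name)

    # CAPABILITIES.md is generated from parameters, not copied verbatim.
    lines = ["# Capabilities", ""]
    if evolvable:
        lines.append("<!-- evolvable: this agent may author new capabilities -->")
    (sanctum / "CAPABILITIES.md").write_text("\n".join(lines) + "\n")
    created.append("CAPABILITIES.md")
    return created
```

The point of the parameterization is that one script serves every built agent: only the skill name, file list, and evolvable flag change.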
**Workflow Builder** output uses the same structure regardless of skill type:
```
{skill-name}/
├── SKILL.md # Skill instructions
├── prompts/ # Stage prompts for complex workflows
├── resources/ # Reference data
├── agents/ # Subagent definitions for parallel processing
├── scripts/ # Deterministic scripts
│ └── tests/ # Unit tests for scripts
└── templates/ # Building blocks for generated output
```
### Lint Gate
Before completing the build, both builders run deterministic validation.
| Script | What It Checks |
| ------------------------ | ----------------------------------------------------------------------------------------- |
| `scan-path-standards.py` | Path conventions: `{project-root}` for project-scope, `./` for same-folder references, bare paths for cross-directory skill-internal, no double-prefix |
| `scan-scripts.py` | Script portability, PEP 723 metadata, agentic design, unit test presence |
Critical issues block completion. Warnings are noted but don't block.
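One of these checks, the double-prefix rule, can be sketched deterministically. The regex and function below are illustrative, not the actual `scan-path-standards.py` implementation:

```python
import re

# A {project-root} token appearing after another path prefix is a
# double-prefix violation (rule scope assumed from the conventions above).
DOUBLE_PREFIX = re.compile(r"(\{project-root\}|\./)\S*\{project-root\}")


def find_double_prefix(text: str) -> list[str]:
    """Return the lines containing a double-prefixed path."""
    return [line for line in text.splitlines() if DOUBLE_PREFIX.search(line)]
```

Because checks like this are pure text scans, they cost zero tokens and can gate every build without slowing it down.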
## Quality Optimize (QO)
Validation and optimization for existing skills. Runs deterministic lint scripts for instant structural checks and LLM scanner subagents for judgment-based analysis, all in parallel.
### Pre-Scan Checks
In interactive mode, the optimizer:
1. Checks for uncommitted changes and recommends committing first
2. Asks if the skill is currently working as expected
In autonomous mode, both checks are skipped and noted as warnings in the report.
### Scan Pipeline
The optimizer runs three tiers of analysis.
**Tier 1: Lint scripts** (deterministic, zero tokens, instant):
| Script | Focus |
| ------------------------ | -------------------------------- |
| `scan-path-standards.py` | Path convention violations |
| `scan-scripts.py` | Script portability and standards |
**Tier 2: Pre-pass scripts** (extract metrics for LLM scanners):
| Script | Agent Builder | Workflow Builder |
| ----------------------------- | ----------------------------------- | ------------------------------- |
| Structure/integrity pre-pass | `prepass-structure-capabilities.py` | `prepass-workflow-integrity.py` |
| Prompt metrics pre-pass | `prepass-prompt-metrics.py` | `prepass-prompt-metrics.py` |
| Execution dependency pre-pass | `prepass-execution-deps.py` | `prepass-execution-deps.py` |
**Tier 3: LLM scanners** (judgment-based, run as parallel subagents):
| Scanner | Agent Builder Focus | Workflow Builder Focus |
| ----------------------------- | -------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- |
| **Structure / Integrity** | Structure, capabilities, identity, memory setup, consistency | Logical consistency, description quality, progression conditions, type-appropriate structure |
| **Prompt Craft** | Token efficiency, anti-patterns, persona voice, overview quality | Token efficiency, anti-patterns, overview quality, progressive disclosure |
| **Execution Efficiency** | Parallelization, subagent delegation, memory loading, context optimization | Parallelization, subagent delegation, read avoidance, context optimization |
| **Cohesion** | Persona-capability alignment, gaps, redundancies | Stage flow coherence, purpose alignment, complexity appropriateness |
| **Enhancement Opportunities** | Script automation, autonomous potential, edge cases, delight | Creative edge-case discovery, experience gaps, assumption auditing |
### Report Synthesis
After all scanners complete, the optimizer synthesizes results into a unified report saved to `{bmad_builder_reports}/{skill-name}/quality-scan/{timestamp}/`.
In interactive mode, it presents a summary with severity counts and offers next steps:
- Apply fixes directly
- Export checklist for manual fixes
- Discuss specific findings
In autonomous mode, it outputs structured JSON with severity counts and the report file path.
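The autonomous-mode output might look like the following. The field names here are an assumption for illustration, not the documented schema:

```python
import json

# Illustrative shape of the autonomous-mode result; field names are assumed.
result = {
    "skill": "bmad-my-skill",
    "severity_counts": {"critical": 0, "warning": 3, "info": 7},
    "report_path": "{bmad_builder_reports}/bmad-my-skill/quality-scan/2026-04-21T10-00-00/",
}
print(json.dumps(result, indent=2))
```

Structured output lets a calling workflow branch on severity counts (for example, fail a CI step when `critical` is nonzero) without parsing prose.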
### Optimization Guidance
Not every suggestion should be applied. The optimizer communicates these decision rules:
- **Keep phrasing** that captures the intended voice. Leaner is not always better for persona-driven skills
- **Keep content** that adds clarity for the AI even if a human finds it obvious
- **Prefer scripting** for deterministic operations; **prefer prompting** for creative or judgment-based tasks
- **Reject changes** that flatten personality unless a neutral tone is explicitly wanted
## Convert (CW)
One-command conversion of any existing skill into a BMad-compliant, outcome-driven equivalent. Takes a non-conformant skill (bloated, poorly structured, or just not following BMad practices) and produces a clean version. Unlike the Build Process's edit/rebuild modes, `--convert` always runs headless and produces a visual comparison report.
### Usage
```
--convert [-H]
```
The `--convert` flag implies headless mode. It accepts either a local skill path or a remote URL.
### Process
| Step | What Happens |
| ---- | ------------ |
| **1. Capture** | Fetch or read the original skill, save a copy for comparison |
| **2. Rebuild** | Full headless rebuild from intent: extract what the skill achieves, apply BMad outcome-driven best practices |
| **3. Report** | Measure both versions, categorize what changed and why, generate an interactive HTML comparison report |
### Comparison Report
The HTML report includes:
| Section | Content |
| ------- | ------- |
| **Hero banner** | Overall token reduction percentage |
| **Metrics table** | Lines, words, characters, sections, files, estimated tokens, with visual bars |
| **What changed** | Categorized differences (bloat removal, structural reorganization, best-practice alignment) with severity and examples |
| **What survived** | Content that earns its place: instructions the LLM wouldn't follow correctly without being told |
| **Verdict** | One-sentence summary of the conversion |
Reports are saved to `{bmad_builder_reports}/convert-{skill-name}/`.
### When to Use Convert vs Build Process
| Scenario | Use |
| -------- | --- |
| You have any non-BMad-compliant skill and want it converted fast | `--convert` |
| You have a bloated skill and want a lean replacement with a comparison report | `--convert` |
| You want to interactively discuss what to change | Build Process (Edit mode) |
| You want to rethink a skill from scratch with full discovery | Build Process (Rebuild mode) |
| You want a detailed quality analysis without rebuilding | Quality Optimize |
## Module Builder
The Module Builder (`bmad-module-builder`) handles module-level planning, scaffolding, and validation. It operates at a higher level than the Agent and Workflow Builders; it orchestrates what those builders produce into a cohesive, installable module.
### Capabilities Overview
| Capability | Menu Code | What It Does |
| ------------------- | --------- | --------------------------------------------------------------------------------------------------------------- |
| **Ideate Module** | IM | Brainstorm and plan a module through creative facilitation |
| **Create Module** | CM | Package skills as an installable module: setup skill for multi-skill, self-registration for standalone |
| **Validate Module** | VM | Check structural integrity and entry quality for both multi-skill and standalone modules |
### Ideate Module (IM)
A brainstorming session that helps you plan your module from scratch. The builder acts as a creative collaborator, drawing out ideas, exploring possibilities, and guiding you toward the right architecture.
| Aspect | Detail |
| --------------- | ----------------------------------------------- |
| **Interaction** | Interactive only; no headless mode |
| **Input** | An idea or rough description |
| **Output** | Plan document saved to `{bmad_builder_reports}` |
**What it covers:**
- Problem space exploration and creative brainstorming
- Architecture decision: single agent with capabilities vs. multiple skills vs. hybrid
- Standalone module or expansion of an existing module
- External dependencies (CLI tools, MCP servers)
- UI and visualization opportunities
- Setup skill extensions beyond configuration
- Per-skill capability definitions with help CSV metadata
- Configuration variables and sensible defaults
The plan document uses a resumable template with YAML frontmatter, so long brainstorming sessions survive context compaction.
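A resumable frontmatter block might look like this; the field names are illustrative, not the actual template's schema:

```yaml
# Illustrative frontmatter shape; field names are assumed.
status: in-progress
last_section: architecture-decision
module_code: mym
decisions:
  - "single agent with evolvable capabilities"
open_questions:
  - "external MCP server dependency?"
```

Because progress and open questions live in the frontmatter rather than in conversation history, a fresh context can pick up the session from `last_section`.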
**After ideation:** Build each planned skill using the Agent Builder (BA) or Workflow Builder (BW), then return to Create Module (CM) to scaffold the module.
### Create Module (CM)
Packages built skills as an installable BMad module. Auto-detects single-skill vs. multi-skill input and recommends the appropriate approach. Supports `--headless` / `-H`.
| Aspect | Detail |
| --------------- | ------------------------------------------------------------------------------------------- |
| **Interaction** | Guided or headless |
| **Input** | Path to a skills folder or single skill (or SKILL.md file), optional plan document |
| **Output** | Setup skill for multi-skill modules, or self-registration files for standalone modules |
**What it does:**
1. Reads the SKILL.md files to understand each skill
2. Detects single vs. multi-skill and confirms the packaging approach with the user
3. Collects module identity (name, code, description, version, greeting)
4. Defines help CSV entries: capabilities, menu codes, ordering, relationships
5. Captures configuration variables and external dependencies
6. Scaffolds the module infrastructure
**Multi-skill output:** A dedicated `{code}-setup/` folder with merge scripts, cleanup scripts, and a generic SKILL.md.
**Standalone output:** `assets/module-setup.md`, `assets/module.yaml`, and `assets/module-help.csv` embedded in the skill, plus merge scripts in `scripts/` and a `.claude-plugin/marketplace.json` for distribution. The skill's SKILL.md is updated to check for registration on activation.
### Validate Module (VM)
Verifies that a module's structure is complete and accurate. Auto-detects multi-skill modules (with setup skill) and standalone modules (with self-registration). Combines a deterministic validation script with LLM-based quality assessment.
| Aspect | Detail |
| --------------- | ------------------------------------------------------ |
| **Interaction** | Interactive |
| **Input** | Path to the module's skills folder or single skill |
| **Output** | Validation report |
**Structural checks** (script-driven):
| Check | What It Catches |
| ---------------------- | ------------------------------------------------------------------------------------------- |
| Module structure | Missing setup skill or standalone files (`module-setup.md`, merge scripts) |
| Coverage | Skills without CSV entries, orphan entries for nonexistent skills |
| Menu codes | Duplicate codes across the module |
| References | Before/after fields pointing to nonexistent capabilities |
| Required fields | Missing skill name, display name, menu code, or description in CSV rows |
| module.yaml | Missing code, name, or description |
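A subset of these structural checks can be sketched as a pure function over the help CSV. Column names (`skill`, `menu_code`) are assumptions for illustration, not the actual CSV schema:

```python
import csv
import io


def check_help_csv(csv_text: str, skill_names: set[str]) -> list[str]:
    """Flag duplicate menu codes, orphan entries, and uncovered skills."""
    issues, seen_codes, covered = [], set(), set()
    for row in csv.DictReader(io.StringIO(csv_text)):
        code = row["menu_code"]
        if code in seen_codes:
            issues.append(f"duplicate menu code: {code}")
        seen_codes.add(code)
        if row["skill"] not in skill_names:
            issues.append(f"orphan entry: {row['skill']}")
        covered.add(row["skill"])
    for skill in sorted(skill_names - covered):
        issues.append(f"skill without CSV entry: {skill}")
    return issues
```

Checks like coverage and duplicate codes are deterministic, which is why they run as a script while description quality is left to the LLM assessment below.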
**Quality assessment** (LLM-driven):
- Description accuracy: does each entry match what the skill actually does?
- Description quality: concise, action-oriented, specific, not overly verbose
- Completeness: are all distinct capabilities registered as separate rows?
- Ordering: do before/after relationships make sense?
- Menu codes: are they intuitive and memorable?
## Trigger Phrases
| Intent | Phrases | Builder | Route |
| --------- | ------------------------------------------------------- | -------- | --------------------------------- |
| Build new | "create/build/design an agent" | Agent | `prompts/build-process.md` |
| Build new | "create/build/design a workflow/skill/tool" | Workflow | `prompts/build-process.md` |
| Edit | "edit/modify/update an agent" | Agent | `prompts/build-process.md` |
| Edit | "edit/modify/update a workflow/skill" | Workflow | `prompts/build-process.md` |
| Convert | "convert this to a BMad agent" | Agent | `prompts/build-process.md` |
| Convert | "convert this to a BMad skill" | Workflow | `prompts/build-process.md` |
| Convert | `--convert ` | Workflow | `./references/convert-process.md` |
| Optimize | "quality check/validate/optimize/review agent" | Agent | `prompts/quality-optimizer.md` |
| Optimize | "quality check/validate/optimize/review workflow/skill" | Workflow | `prompts/quality-optimizer.md` |
| Ideate | "ideate module/plan a module/brainstorm a module" | Module | `./references/ideate-module.md` |
| Create | "create module/build a module/scaffold a module" | Module | `./references/create-module.md` |
| Validate | "validate module/check module" | Module | `./references/validate-module.md` |
# Reference
Technical documentation for BMad Builder configuration and schemas.
| Reference | Description |
| ---------------------------------------------------------------- | --------------------------------------------------------------------- |
| **[Builder Skills](/reference/builder-commands.md)** | Agent Builder and Workflow Builder skills, commands, and capabilities |
| **[Workflow & Skill Patterns](/reference/workflow-patterns.md)** | Structure types, design patterns, and execution models |
Reference for how the BMad Builder classifies and structures skills. Every skill falls into one of three types, each with a distinct structure and set of signals.
## Skill Type Taxonomy
| Type | Description | Structure |
| -------------------- | ----------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- |
| **Simple Utility** | Input/output building block. Headless, composable, often script-driven. May opt out of config loading for true standalone use | SKILL.md + `scripts/` |
| **Simple Workflow** | Multi-step process contained in a single SKILL.md. Loads config directly from module config.yaml. Minimal or no `prompts/` | SKILL.md + optional `resources/` |
| **Complex Workflow** | Multi-stage with progressive disclosure, stage prompts in `prompts/`, config integration. May support headless mode | SKILL.md (routing) + `prompts/` stages + `resources/` |
## Decision Tree
```
1. Is it a composable building block with clear input/output?
└─ YES → Simple Utility
└─ NO ↓
2. Can it fit in a single SKILL.md without progressive disclosure?
└─ YES → Simple Workflow
└─ NO ↓
3. Does it need multiple stages, long-running process, or progressive disclosure?
└─ YES → Complex Workflow
```
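The decision tree above reduces to two boolean questions; the function below restates it with illustrative parameter names:

```python
def classify_skill(composable_io: bool, fits_single_file: bool) -> str:
    """The classification decision tree as a function.
    composable_io: clear input/output building block (question 1).
    fits_single_file: one SKILL.md, no progressive disclosure (question 2)."""
    if composable_io:
        return "Simple Utility"
    if fits_single_file:
        return "Simple Workflow"
    return "Complex Workflow"
```

Note the ordering matters: a composable utility that also happens to fit in one file is still a Simple Utility, because question 1 is asked first.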
## Classification Signals
### Simple Utility
- Clear input → processing → output pattern
- No user interaction needed during execution
- Other skills and workflows call it
- Deterministic or near-deterministic behavior
- Could be a script but needs LLM judgment
- Examples: JSON validator, format converter, file structure checker
### Simple Workflow
- 3-8 numbered steps
- User interaction at specific points
- Uses standard tools (gh, git, npm, etc.)
- Produces a single output artifact
- No need to track state across compactions
- Examples: PR creator, deployment checklist, code review
### Complex Workflow
- Multiple distinct phases or stages
- Long-running (likely to hit context compaction)
- Progressive disclosure needed (too much for one file)
- Routing logic in SKILL.md dispatches to stage prompts
- Produces multiple artifacts across stages
- May support headless/autonomous mode
- Examples: agent builder, module builder, project scaffolder
## Structure Patterns
### Simple Utility
```
bmad-my-utility/
├── SKILL.md # Complete instructions, input/output spec
└── scripts/ # Core logic
├── process.py
└── tests/
```
### Simple Workflow
```
bmad-my-workflow/
├── SKILL.md # Steps inline, config loading, output spec
└── resources/ # Optional reference data
```
### Complex Workflow
```
bmad-my-complex-workflow/
├── SKILL.md # Routing logic, dispatches to prompts/
├── prompts/ # Stage instructions
│ ├── 01-discovery.md
│ ├── 02-planning.md
│ ├── 03-execution.md
│ └── 04-review.md
├── resources/ # Reference data, templates, schemas
├── agents/ # Subagent definitions for parallel work
└── scripts/ # Deterministic operations
└── tests/
```
## Execution Models
| Model | Applicable Types | Description |
| ------------------------- | --------------------------------- | ---------------------------------------------------------------- |
| **Interactive** | All | User invokes skill and interacts conversationally |
| **Headless / Autonomous** | Simple Utility, Complex Workflow | Runs without user interaction; takes inputs, produces outputs |
| **YOLO** | Simple Workflow, Complex Workflow | User brain-dumps; builder drafts the full artifact, then refines |
| **Guided** | Simple Workflow, Complex Workflow | Section-by-section discovery with soft gates at transitions |
## Module Context
Module membership is orthogonal to skill type. Any type can be standalone or part of a module.
| Context | Naming | Init |
| ---------------- | ------------------------------- | ------------------------------------------------------------------ |
| **Module-based** | `{modulecode}-{skillname}` | Loads config from module config.yaml |
| **Standalone** | `{skillname}` | Loads config from module config.yaml; simple utilities may opt out |