Writing TEA Features with TEA: A Meta-Development Approach
Fabricio Ceolin
Principal Engineer, The Edge Agent Project
Abstract
This article presents a meta-development methodology where The Edge Agent (TEA) orchestrates its own feature development. By combining BMad structured workflows for story creation with TEA-powered parallel execution, we achieve higher code quality and faster iteration cycles. The key insight is separating concerns: BMad agents excel at human-guided elicitation and documentation, while TEA excels at clean-context parallel execution. We demonstrate this approach using a real example—implementing an 8-story game feature epic—showing how dependency analysis enables 25% reduction in execution time through intelligent parallelization.
Keywords: TEA, BMad, Meta-Development, Parallel Execution, Story Validation, DOT Workflows
Reproducibility
This article uses code from a specific commit in The Edge Agent repository. To follow along:
git clone https://github.com/fabceolin/the_edge_agent.git
cd the_edge_agent
git checkout 2f5bdbb4e70bd47bdc7f28022656d7a58b69a817
Commit: 2f5bdbb4e70bd47bdc7f28022656d7a58b69a817
Repository: github.com/fabceolin/the_edge_agent
1. Introduction
When developing complex features for The Edge Agent, we face a paradox: the very tool we’re building could help us build it better. This article explores how we leverage TEA to accelerate TEA development, creating a virtuous cycle of meta-improvement.
The traditional AI-assisted development workflow looks like this:
Human → AI Chat → Code → Review → Iterate
Each iteration carries context from previous attempts, accumulating cognitive debt. The AI’s context window fills with failed approaches, abandoned directions, and incremental fixes.
What if we could guarantee clean context for each execution?
BMad: Create perfect story specifications
TEA: Execute each story with fresh context
DOT: Orchestrate parallel execution
This separation of concerns gives us the best of both worlds: BMad’s human-in-the-loop elicitation produces high-quality specifications, while TEA’s workflow engine guarantees isolated, reproducible execution.
1.1 The Problem with Context Accumulation
Consider developing a feature over multiple chat sessions:

| Session | Context State | Quality Impact |
|---|---|---|
| 1 | Fresh context | High quality decisions |
| 2 | Previous mistakes loaded | May repeat patterns that failed |
| 3 | Accumulated corrections | Defensive coding, over-engineering |
| 4 | Context window pressure | Forgotten requirements, inconsistencies |
By Session 4, the AI is juggling too much history. It may remember that “we tried X and it failed” but forget why it failed, leading to suboptimal alternatives.
2. The BMad Method: Getting Stories Right
BMad (Breakthrough Method for Agile AI-Driven Development) provides the structured foundation for high-quality AI-assisted development.
2.1 What Makes BMad Different
Unlike traditional AI coding assistants that “do the thinking for you, producing average results,” BMad positions AI agents as expert guides through structured workflows:
flowchart LR
A[Human Intent] --> B[BMad Agent]
B --> C{Elicitation}
C --> D[Structured Artifact]
D --> E[Validation]
E --> F[Ready for Implementation]
2.2 BMad Agent Roles
BMad deploys specialized agents for different concerns:

| Agent | Role | Key Contribution |
|---|---|---|
| PO (Sarah) | Product Owner | Story refinement, acceptance criteria |
| Architect | System Design | Technical decisions, dependency analysis |
| QA | Quality Assurance | Test design, risk identification |
| SM | Scrum Master | Story validation, Definition of Ready |
| Dev | Developer | Implementation, code quality |
2.3 The One-Shot Advantage
BMad’s elicitation process ensures stories are complete before implementation begins. This is critical because:
No mid-implementation pivots - Requirements are validated upfront
Clear acceptance criteria - The AI knows exactly what “done” looks like
Dependency mapping - Stories can be parallelized safely
Test design included - QA validates before code is written
Here’s an example of BMad-produced story structure:
# Story created by BMad PO Agent
name: TEA-GAME-001.1
title: Rust Game Engine Core
status: Draft
acceptance_criteria:
- AC-1: GameSession struct with required fields
- AC-2: GameRound struct with phrase and choices
- AC-3: generate_username() returns random pattern
- AC-4: calculate_score() implements weighted formula
- AC-5: adjust_difficulty() uses rolling window
- AC-6: Difficulty bounded to [0.1, 0.95]
- AC-7: Unit tests for all calculations
tasks:
- Create rust/src/games/mod.rs module
- Implement GameSession and GameRound structs
- Implement generate_username() with word lists
# ... detailed subtasks
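As an aside, the rolling-window behavior named in AC-5 and AC-6 can be sketched in a few lines. This is an illustrative Python sketch only (the story itself targets Rust); the five-round window, the 0.05 step, and the 0.3/0.7 win-rate thresholds are assumptions for illustration, not taken from the story.

```python
from collections import deque

def adjust_difficulty(difficulty: float, recent_results: deque, step: float = 0.05) -> float:
    """Raise difficulty when the rolling window shows mostly wins, lower it otherwise.

    `recent_results` holds booleans for the most recent rounds (the rolling
    window). The window size, step, and thresholds are illustrative assumptions.
    """
    if not recent_results:
        return difficulty
    win_rate = sum(recent_results) / len(recent_results)
    if win_rate > 0.7:        # player doing well: make it harder
        difficulty += step
    elif win_rate < 0.3:      # player struggling: make it easier
        difficulty -= step
    # AC-6: difficulty bounded to [0.1, 0.95]
    return min(0.95, max(0.1, difficulty))

window = deque([True, True, True, False, True], maxlen=5)
print(adjust_difficulty(0.93, window))  # 0.95 (clamped at the upper bound)
```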
3. The Context Problem: Why TEA Instead of BMad Workflows
BMad agents run within a conversational context (Claude Code, Cursor, etc.). While this enables rich human interaction, it creates a problem for autonomous parallel execution.
3.1 BMad’s Conversational Context
┌─────────────────────────────────────────────────────────┐
│ CLAUDE CODE SESSION │
├─────────────────────────────────────────────────────────┤
│ Turn 1: /po *create-story TEA-GAME-001.1 │
│ Turn 2: [PO creates story, asks clarifying questions] │
│ Turn 3: User provides answers │
│ Turn 4: [PO refines story] │
│ ... │
│ Turn 50: Context window filling up │
│ Turn 100: Earlier decisions forgotten │
└─────────────────────────────────────────────────────────┘
When running 8 stories sequentially in one session, context from Story 1 affects Story 8. This is sometimes desirable (learning from patterns) but often problematic (accumulated noise).
3.2 TEA’s Clean Context Guarantee
TEA workflows execute in isolated contexts:
┌─────────────────────────────────────────────────────────┐
│ TEA WORKFLOW │
├─────────────────────────────────────────────────────────┤
│ Node 1: Load story file (fresh context) │
│ Node 2: Execute validation (isolated) │
│ Node 3: Write results (clean output) │
│ [Process terminates, context released] │
└─────────────────────────────────────────────────────────┘
Each story validation runs in a completely fresh subprocess:
# bmad-story-validation.yaml
nodes:
- name: run_qa_test_design
uses: llm.call
with:
provider: shell
shell_provider: claude
model: claude
messages:
- role: user
content: |
CRITICAL: ACTIVATE PERSONA MODE.
{{ state.agent_persona }}
TASK: Perform *test-design for story {{ state.arg }}.
{{ state.task_definition }}
MODE: YOLO - Do NOT ask permission. Execute commands.
The shell_provider: claude spawns a fresh Claude Code instance for each node execution. No context bleeds between stories.
3.3 Comparison Table

| Aspect | BMad (Conversational) | TEA (Workflow) |
|---|---|---|
| Context | Accumulated across session | Fresh per execution |
| Parallelization | Sequential only | Parallel via DOT |
| Human Interaction | Rich, real-time | Pre-configured |
| Best For | Elicitation, refinement | Execution, validation |
| Reproducibility | Varies by session | Deterministic |
4. Parallelizing Story Validation with DOT
With BMad stories ready and TEA workflows defined, we can orchestrate parallel execution using DOT (Graphviz) graphs.
4.1 Dependency Analysis
Given an epic with 8 stories, the first step is mapping dependencies:

| Story | Depends On | Can Parallelize With |
|---|---|---|
| 1. Rust Core | None | - |
| 2. DuckDB Schema | Story 1 | Stories 4, 8 |
| 3. Embeddings | Story 2 | - |
| 4. LLM Phrase Gen | Story 1 | Stories 2, 8 |
| 5. Game Engine | Stories 3, 4 | - |
| 6. WASM Port | Story 5 | - |
| 7. Browser UI | Story 6 | - |
| 8. Opik Integration | Story 1 | Stories 2, 4 |
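The phase structure falls out of this table mechanically: a story's phase is one past the deepest phase among its dependencies. A short sketch (the dependency map below is transcribed from the table above, not read from the repository) computes the phases as topological levels:

```python
# Dependencies transcribed from the table: story number -> stories it depends on.
deps = {
    1: [], 2: [1], 3: [2], 4: [1],
    5: [3, 4], 6: [5], 7: [6], 8: [1],
}

def phases(deps):
    """Group stories into phases; a story runs one phase after its deepest dependency."""
    level = {}
    def depth(s):
        if s not in level:
            level[s] = 1 + max((depth(d) for d in deps[s]), default=0)
        return level[s]
    for s in deps:
        depth(s)
    out = {}
    for s, lvl in level.items():
        out.setdefault(lvl, []).append(s)
    return out

p = phases(deps)
print(p)       # {1: [1], 2: [2, 4, 8], 3: [3], 4: [5], 5: [6], 6: [7]}
print(len(p))  # 6 phases instead of 8 sequential steps
```

Phase 2 holds Stories 2, 4, and 8, which is exactly the parallel track the DOT graph below encodes.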
4.2 The DOT Workflow
digraph tea_game_001_validation {
rankdir=TB;
node [shape=box];
Start [label="Start", shape=ellipse];
End [label="End", shape=ellipse];
// Phase 1: Foundation
subgraph cluster_phase1 {
label="1. Foundation";
story_1 [label="TEA-GAME-001.1",
command="tea-python run examples/workflows/bmad-story-validation.yaml
--input-timeout 54000
--input '{\"arg\": \"docs/stories/TEA-GAME-001.1-rust-game-engine-core.md\"}'"];
}
// Phase 2: Parallel Track - Stories 2, 4, 8 run simultaneously
subgraph cluster_phase2 {
label="2. Parallel Track A";
story_2 [label="TEA-GAME-001.2", command="..."];
story_4 [label="TEA-GAME-001.4", command="..."];
story_8 [label="TEA-GAME-001.8", command="..."];
}
// Edges define execution order
Start -> story_1;
story_1 -> story_2;
story_1 -> story_4;
story_1 -> story_8;
// Story 8 completes independently
story_8 -> End;
// Continue dependency chain...
story_2 -> story_3;
story_3 -> story_5;
story_4 -> story_5;
story_5 -> story_6;
story_6 -> story_7;
story_7 -> End;
}
4.3 Visual Dependency Graph
┌─────────────────┐
│ Start │
└────────┬────────┘
│
┌────────▼────────┐
│ Story 1 │ Phase 1
│ (Rust Core) │
└────────┬────────┘
│
┌─────────────────┼─────────────────┐
│ │ │
┌──────▼──────┐ ┌──────▼──────┐ ┌──────▼──────┐
│ Story 2 │ │ Story 4 │ │ Story 8 │ Phase 2
│ (DuckDB) │ │ (LLM Gen) │ │ (Opik) │ PARALLEL
└──────┬──────┘ └──────┬──────┘ └──────┬──────┘
│ │ │
┌──────▼──────┐ │ │
│ Story 3 │ │ │ Phase 3
│ (Embeddings)│ │ │
└──────┬──────┘ │ │
│ │ │
└────────┬────────┘ │
│ │
┌──────▼──────┐ │
│ Story 5 │ │ Phase 4
│ (GameEngine)│ │
└──────┬──────┘ │
│ │
┌──────▼──────┐ │
│ Story 6 │ │ Phase 5
│ (WASM Port) │ │
└──────┬──────┘ │
│ │
┌──────▼──────┐ │
│ Story 7 │ │ Phase 6
│ (Browser UI)│ │
└──────┬──────┘ │
│ │
└────────────┬─────────────┘
│
┌────────────▼────────────┐
│ End │
└─────────────────────────┘
4.4 Efficiency Gain

| Metric | Sequential | Parallelized | Improvement |
|---|---|---|---|
| Total Phases | 8 | 6 | 25% reduction |
| Max Parallel | 1 | 3 | 3x throughput |
| Story 8 (Opik) | Waits for 1-7 | Runs with 2, 4 | ~70% faster |
5. Setup: Configuring Claude Code as Backend
Before running TEA workflows that use Claude Code as the LLM backend, you need to configure your environment.
5.1 Prerequisites
TEA Python installed:
cd python
pip install -e .[dev]
Claude Code CLI installed and authenticated:
# Install Claude Code (if not already installed)
npm install -g @anthropic-ai/claude-code
# Authenticate with your Anthropic API key
claude auth
BMad Core files present in the repository:
.bmad-core/
├── agents/
│   ├── qa.md           # QA agent persona
│   ├── sm.md           # Scrum Master agent persona
│   └── dev.md          # Developer agent persona
└── tasks/
    └── test-design.md  # Test design task definition
5.2 The Shell Provider Configuration
TEA’s llm.call action supports a shell_provider that spawns external CLI tools. For Claude Code:
nodes:
- name: run_validation
uses: llm.call
with:
provider: shell # Use shell-based LLM provider
shell_provider: claude # Spawn Claude Code CLI
model: claude # Model selection handled by Claude Code
timeout: 600 # 10 minutes per node
messages:
- role: user
content: |
Your prompt here...
How it works:
1. TEA invokes the claude CLI as a subprocess
2. The prompt is passed via stdin
3. Claude Code executes with fresh context (no history)
4. Output is captured and returned to TEA state
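These mechanics can be approximated in a few lines of Python. This is a simplified sketch of the subprocess pattern, not TEA's actual provider code; the default command uses the same claude --print invocation as the verification step in section 5.4, and the command parameter is an illustrative addition.

```python
import subprocess

def call_shell_llm(prompt: str, command=("claude", "--print"), timeout: int = 600) -> str:
    """Spawn a fresh CLI process per call: prompt in via stdin, reply out via stdout.

    Each invocation starts with no conversation history -- the clean-context
    guarantee the workflow relies on. `command` defaults to Claude Code's
    non-interactive mode; any stdin-to-stdout tool works for testing.
    """
    result = subprocess.run(
        list(command),
        input=prompt,         # prompt is passed via stdin
        capture_output=True,
        text=True,
        timeout=timeout,      # mirrors the node-level `timeout: 600`
    )
    result.check_returncode()
    return result.stdout      # captured and returned to TEA state

# Example (requires an authenticated Claude Code install):
# reply = call_shell_llm("Say hello")
```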
5.3 Environment Variables
Set these environment variables for optimal execution:
# Required: Anthropic API key (Claude Code will use this)
export ANTHROPIC_API_KEY="sk-ant-..."
# Optional: Control Claude Code behavior
export CLAUDE_CODE_AUTO_ACCEPT=1 # Auto-accept tool calls
export CLAUDE_CODE_NO_CONFIRM=1 # Skip confirmation prompts
5.4 Verify Setup
Test that Claude Code works as a shell provider:
# Simple test
echo "Say hello" | claude --print
# Test with TEA
tea-python run examples/workflows/bmad-story-validation.yaml \
--input '{"arg": "docs/stories/TEA-GAME-001.1-rust-game-engine-core.md"}' \
--dry-run
6. Execution: From DOT to Running Workflow
6.1 Generate YAML from DOT
# Convert DOT to executable YAML workflow
tea-python from dot examples/dot/tea-game-001-validation.dot \
--use-node-commands \
-o examples/dot/tea-game-001-validation.yaml
6.2 Execute the Workflow
# Run with extended timeout (15 hours for large epics)
tea-python run examples/dot/tea-game-001-validation.yaml \
--input-timeout 54000
6.3 What Happens During Execution
┌──────────────────────────────────────────────────────────────┐
│ tea-python run tea-game-001-validation.yaml │
├──────────────────────────────────────────────────────────────┤
│ │
│ [Phase 1] Story 1 ─────────────────────────► Complete │
│ │
│ [Phase 2] Story 2 ─────────────────────────►┐ │
│ Story 4 ─────────────────────────►├─► All Done │
│ Story 8 ─────────────────────────►┘ │
│ │
│ [Phase 3] Story 3 ─────────────────────────► Complete │
│ │
│ [Phase 4] Story 5 ─────────────────────────► Complete │
│ │
│ [Phase 5] Story 6 ─────────────────────────► Complete │
│ │
│ [Phase 6] Story 7 ─────────────────────────► Complete │
│ │
│ ═══════════════════════════════════════════════════════════ │
│ WORKFLOW COMPLETE: 8 stories validated, 6 phases │
└──────────────────────────────────────────────────────────────┘
7. Future: Parallelized Development
The same pattern applies to implementation, not just validation:
7.1 Development Workflow Structure
# bmad-story-development.yaml (future)
nodes:
- name: load_dev_context
description: Load developer agent and story
run: |
# Load BMad dev agent persona
# Load story requirements
- name: implement_story
uses: llm.call
with:
provider: shell
shell_provider: claude
messages:
- role: user
content: |
ACTIVATE: Developer Agent
STORY: {{ state.story_file }}
MODE: YOLO - Implement all acceptance criteria
- name: run_tests
uses: shell.run
with:
command: "cargo test --features game"
- name: update_status
description: Mark story as implemented
7.2 Parallel Development DOT
digraph tea_game_001_development {
// Same dependency structure as validation
// But running bmad-story-development.yaml
story_1 [command="tea-python run bmad-story-development.yaml
--input '{\"arg\": \"TEA-GAME-001.1\"}'"];
// Phase 2: Develop 3 stories in parallel
story_1 -> story_2;
story_1 -> story_4;
story_1 -> story_8;
// Each story gets fresh context
// No cross-contamination between implementations
}
8. Best Practices
8.1 When to Use Each Approach

| Scenario | Recommended Approach |
|---|---|
| Initial story creation | BMad (human elicitation) |
| Story refinement | BMad (interactive) |
| Bulk validation | TEA (parallel DOT) |
| Implementation | TEA (clean context) |
| Debugging failures | BMad (conversational) |
8.2 DOT Workflow Guidelines
Simple Labels - Use story IDs, not descriptions
Explicit Dependencies - Every edge represents a real constraint
Timeout Planning - Use --input-timeout 54000 (15 hours) for complex stories
Cluster Phases - Group parallel stories in subgraph cluster_* blocks
8.3 Context Isolation Principles
Each story = one execution - Never batch stories in one context
State via files - Stories communicate through file system, not memory
Idempotent operations - Re-running a story should produce same result
Explicit inputs - All dependencies declared in DOT edges
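The "state via files" principle can be as simple as each story writing a JSON result that later phases re-read from disk. A minimal sketch; the validation-results directory and field names are hypothetical, not TEA conventions:

```python
import json
from pathlib import Path

RESULTS = Path("validation-results")   # illustrative directory name
RESULTS.mkdir(exist_ok=True)

def write_result(story_id: str, status: str, notes: str) -> None:
    """Persist a story's outcome; no in-memory state survives the subprocess."""
    (RESULTS / f"{story_id}.json").write_text(
        json.dumps({"story": story_id, "status": status, "notes": notes})
    )

def read_result(story_id: str) -> dict:
    """Downstream phases re-read state from disk, which keeps re-runs idempotent."""
    return json.loads((RESULTS / f"{story_id}.json").read_text())

write_result("TEA-GAME-001.1", "PASS", "All 7 acceptance criteria validated")
print(read_result("TEA-GAME-001.1")["status"])  # PASS
```

Because every write is a full overwrite keyed by story ID, re-running a story simply replaces its own file, satisfying the idempotency principle above.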
9. Conclusion
The meta-development approach—using TEA to build TEA—provides several key advantages:
Quality through separation - BMad ensures story quality; TEA ensures execution quality
Parallelization through analysis - Dependency graphs enable safe concurrent execution
Clean context guarantee - Each story executes in isolation
Reproducibility - DOT workflows are version-controlled and repeatable
The TEA-GAME-001 epic demonstrates this in practice: 8 stories, 6 phases, 25% efficiency gain through parallelization, zero context contamination.
As TEA continues to evolve, this meta-development cycle accelerates: better TEA enables better TEA development, which produces better TEA.
10. References
BMad Method - Breakthrough Method for Agile AI-Driven Development
TEA Documentation - The Edge Agent official docs
DOT Workflow Guide - DOT-to-YAML conversion guide
Graphviz DOT Language - DOT syntax reference