# Agent Design Strategies
Four architecture patterns cover the majority of multi-agent workflows. Choose based on task complexity, required tool access, and output quality needs.
## Pattern 1: Single-purpose agent

A single-purpose agent does one thing. It has the minimum tool set required for its task and a system prompt scoped tightly to its role.
### Example: Code Reviewer

| Setting | Value |
|---|---|
| System prompt | “You are a code reviewer. Analyze the provided code for bugs, security vulnerabilities, and performance issues. Rate each finding as critical, warning, or info. Do not suggest stylistic changes unless they affect readability.” |
| Tools | File read, thinking/reasoning |
| Temperature | 0 |
| Context strategy | None (stateless — each review is independent) |
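Expressed as data, the settings table above might look like the following sketch. The `AgentConfig` fields and tool names are illustrative; no specific agent framework is assumed.

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    """Minimal agent definition; fields mirror the settings table above."""
    system_prompt: str
    tools: list[str]
    temperature: float = 0.0
    context_strategy: str = "none"

code_reviewer = AgentConfig(
    system_prompt=(
        "You are a code reviewer. Analyze the provided code for bugs, "
        "security vulnerabilities, and performance issues. Rate each finding "
        "as critical, warning, or info. Do not suggest stylistic changes "
        "unless they affect readability."
    ),
    tools=["file_read", "thinking"],  # minimum tool set for the task
    temperature=0.0,                  # deterministic, reproducible reviews
    context_strategy="none",          # stateless: each review is independent
)
```

Keeping the whole definition in one small structure makes it easy to audit that the agent has nothing beyond the minimum tool set.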
### When to Use

- The task has a clear start and end
- The agent does not need to coordinate with other agents
- You want deterministic, reproducible output
- The tool set is small (1–4 tools)
### Design Tips

- Remove every tool the agent does not need. Fewer tools mean fewer distractions for the model.
- Write the system prompt as if briefing a specialist. State what they review, what they ignore, and how they format output.
- Set temperature to 0 for factual/analytical tasks.
## Pattern 2: Orchestrator

An orchestrator agent receives a high-level request, breaks it into sub-tasks, and dispatches each sub-task to a specialist agent-tool. The orchestrator does not do the work itself — it plans, delegates, and synthesizes.
### Example: Project Manager

Orchestrator — Project Manager
| Setting | Value |
|---|---|
| System prompt | “You coordinate research and content creation. Break requests into research and writing phases. Dispatch the Research Agent for information gathering and the Writing Agent for document production. Synthesize results into a final deliverable.” |
| Tools | Research Agent (agent-tool), Writing Agent (agent-tool) |
| Temperature | 0.3 |
| Context strategy | Auto-compact |
Sub-agent — Research Agent (agents_only)
| Setting | Value |
|---|---|
| Input | topic (string), depth (string: “shallow” | “deep”) |
| Output | findings (string), source_count (number) |
| Tools | Web search, URL reader |
| Max recursion depth | 1 |
Sub-agent — Writing Agent (agents_only)
| Setting | Value |
|---|---|
| Input | content (string), format (string: “report” | “summary” | “email”) |
| Output | document (string), word_count (number) |
| Tools | File write, markdown formatter |
| Max recursion depth | 1 |
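The delegation flow can be sketched with plain functions standing in for the two agent-tools. The function names and return shapes are assumptions that mirror the input/output schemas in the tables above; in a real system each stub would dispatch to an LLM-backed agent.

```python
# Hypothetical sub-agent stubs; signatures mirror the schemas above.
def research_agent(topic: str, depth: str = "shallow") -> dict:
    findings = f"Findings on {topic} ({depth} pass)"
    return {"findings": findings, "source_count": 3}

def writing_agent(content: str, format: str = "report") -> dict:
    document = f"# {format.title()}\n\n{content}"
    return {"document": document, "word_count": len(document.split())}

def project_manager(request: str) -> str:
    """Orchestrator: plans, delegates, synthesizes -- does no domain work itself."""
    research = research_agent(topic=request, depth="deep")
    written = writing_agent(content=research["findings"], format="report")
    return written["document"]
```

Note that `project_manager` never touches web search or file write directly — the separation of concerns is visible in the call graph.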
### When to Use

- The task involves 2+ distinct skill sets (research + writing, analysis + visualization)
- Sub-tasks benefit from different models, tools, or temperature settings
- You want clear separation of concerns — each agent’s scope is auditable
### Design Tips

- The orchestrator’s system prompt should define the workflow, not the domain expertise. Domain knowledge lives in the sub-agents.
- Set sub-agents to `agents_only` availability unless they are also useful standalone.
- Keep sub-agent recursion depth at 1 unless they genuinely need to delegate further.
## Pattern 3: Chain

Chain agents when work must flow through distinct stages — each stage transforming the data before passing it to the next. Unlike an orchestrator, there is no central coordinator. The chain is defined by the input/output schemas linking each stage.
### Example: Research → Analysis → Writing Pipeline

Stage 1 — Research Agent
| Setting | Value |
|---|---|
| Input | query (string) |
| Output | raw_findings (string), source_urls (string) |
| Tools | Web search, URL reader |
| Temperature | 0.2 |
Stage 2 — Analysis Agent
| Setting | Value |
|---|---|
| Input | raw_findings (string), analysis_type (string: “compare” | “trend” | “swot”) |
| Output | structured_analysis (string), key_insights (string) |
| Tools | Thinking/reasoning |
| Temperature | 0.3 |
Stage 3 — Writing Agent
| Setting | Value |
|---|---|
| Input | structured_analysis (string), format (string), audience (string) |
| Output | document (string), word_count (number) |
| Tools | File write |
| Temperature | 0.5 |
An orchestrator agent wires these 3 stages together, calling each in sequence and passing the output of one as the input to the next.
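That wiring can be sketched as a plain sequential pipeline. The stage functions are stand-ins whose signatures mirror the schemas above; the point is that each stage’s output dict is the next stage’s input, and every intermediate result is available for inspection.

```python
# Stand-in stage functions; each signature mirrors the stage's schema above.
def research_stage(query: str) -> dict:
    return {"raw_findings": f"notes on {query}",
            "source_urls": "https://example.com"}

def analysis_stage(raw_findings: str, analysis_type: str = "trend") -> dict:
    return {"structured_analysis": f"{analysis_type} analysis of: {raw_findings}",
            "key_insights": "one key insight"}

def writing_stage(structured_analysis: str, format: str, audience: str) -> dict:
    doc = f"[{format} for {audience}] {structured_analysis}"
    return {"document": doc, "word_count": len(doc.split())}

def run_pipeline(query: str) -> str:
    """Calls each stage in sequence, passing output of one as input to the next."""
    r = research_stage(query)
    a = analysis_stage(raw_findings=r["raw_findings"], analysis_type="trend")
    w = writing_stage(a["structured_analysis"], format="report",
                      audience="engineers")
    return w["document"]
```

Because `r`, `a`, and `w` are ordinary dicts, logging them between stages gives the debuggable intermediate outputs the pattern promises.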
### When to Use

- Work flows in one direction with clear stage boundaries
- Each stage benefits from a different temperature, model, or tool set
- You want to inspect or debug intermediate outputs between stages
### Design Tips

- Design output schemas at each stage to contain everything the next stage needs. Do not rely on the orchestrator to bridge gaps.
- Keep each stage stateless — use context strategy `none` for pipeline stages.
- Name parameters consistently across stages. If Stage 1 outputs `raw_findings`, Stage 2 should input `raw_findings`, not `research_data`.
## Pattern 4: Reviewer

Two agents alternate: a creator produces work, a reviewer critiques it, and the creator revises based on the critique. The cycle repeats until the reviewer approves or a maximum iteration count is reached.
### Example: Writer + Editor Cycle

Creator — Writer Agent
| Setting | Value |
|---|---|
| Input | brief (string), feedback (string, optional — empty on first pass) |
| Output | draft (string), revision_number (number) |
| Tools | File write |
| Temperature | 0.6 |
Reviewer — Editor Agent
| Setting | Value |
|---|---|
| Input | draft (string), criteria (string) |
| Output | approved (boolean), feedback (string), issues_found (number) |
| Tools | Thinking/reasoning |
| Temperature | 0.2 |
An orchestrator manages the cycle:

1. Call Writer Agent with the brief (first pass: no feedback)
2. Call Editor Agent with the draft
3. If `approved` is `false`, call Writer Agent again with the editor’s `feedback`
4. Repeat until `approved` is `true` or 3 iterations complete
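The cycle above can be sketched as a bounded loop. The writer and editor functions are trivial stand-ins (a real editor would apply the `criteria` via an LLM call); what matters is the loop shape and the hard iteration cap.

```python
def writer_agent(brief: str, feedback: str = "") -> dict:
    # Stand-in creator: incorporates feedback so revisions visibly change.
    draft = brief if not feedback else f"{brief} (revised per: {feedback})"
    return {"draft": draft, "revision_number": 1 if not feedback else 2}

def editor_agent(draft: str, criteria: str) -> dict:
    # Stand-in reviewer: approves once the draft reflects a revision.
    approved = "revised" in draft
    feedback = "" if approved else "tighten the opening paragraph"
    return {"approved": approved, "feedback": feedback, "issues_found": 0 if approved else 1}

def review_cycle(brief: str, criteria: str, max_iterations: int = 3) -> dict:
    """Creator/reviewer loop with a hard cap to prevent infinite revisions."""
    feedback = ""
    for iteration in range(1, max_iterations + 1):
        draft = writer_agent(brief, feedback)["draft"]
        review = editor_agent(draft, criteria)
        if review["approved"]:
            return {"draft": draft, "approved": True, "iterations": iteration}
        feedback = review["feedback"]  # actionable instructions, not opinions
    return {"draft": draft, "approved": False, "iterations": max_iterations}
```

The loop terminates on approval or after `max_iterations`, whichever comes first, so an editor that never approves cannot stall the orchestrator.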
### When to Use

- Output quality matters more than speed
- The domain has clear quality criteria that can be expressed in a system prompt
- You want to separate the creative and evaluative roles to avoid self-approval bias
### Design Tips

- The reviewer must have a lower temperature than the creator. Critique should be consistent and precise.
- Define the `criteria` parameter explicitly: “Check for factual accuracy, logical coherence, and adherence to the style guide.”
- Set a maximum iteration count (2–4) to prevent infinite revision loops.
- The reviewer’s `feedback` field should contain actionable instructions, not vague opinions.
## Context Strategy by Role

Match the context strategy to how the agent operates:
| Agent Role | Strategy | Reasoning |
|---|---|---|
| Stateless tool (code reviewer, formatter) | None | Each invocation is independent — no history needed |
| Research agent | Auto-compact | Research accumulates context over many tool calls — auto-compact prevents window overflow |
| Cost-sensitive agents | Token budget | Hard ceiling on context size controls per-invocation cost |
| Orchestrator | Auto-compact | Orchestrators track multiple sub-agent results — auto-compact keeps the window manageable |
| Pipeline stage | None | Stages are stateless — they receive all needed context via input parameters |
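One way to make the three strategies concrete is a small dispatch function. This is a rough sketch under stated assumptions — tokens are approximated by whitespace-split words, and the auto-compact branch is a placeholder where a real implementation would summarize older turns with an LLM.

```python
def apply_context_strategy(history: list[str], strategy: str,
                           token_budget: int = 1000) -> list[str]:
    """Sketch of the three strategies from the table above."""
    if strategy == "none":
        return []  # stateless: discard all history
    if strategy == "token_budget":
        kept, used = [], 0
        for message in reversed(history):  # keep the most recent messages
            cost = len(message.split())    # crude token estimate
            if used + cost > token_budget:
                break
            kept.insert(0, message)
            used += cost
        return kept  # hard ceiling on context size
    if strategy == "auto_compact":
        # Placeholder: a real implementation would summarize older turns.
        return ["[summary of earlier turns]"] + history[-2:]
    raise ValueError(f"unknown strategy: {strategy}")
```

A pipeline stage would call this with `"none"`, a research agent with `"auto_compact"`, and a cost-sensitive agent with `"token_budget"`.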
## System Prompt Best Practices

- State the role first. “You are a [specific role] that [specific action].”
- Define constraints. What the agent must NOT do is as important as what it should do.
- Specify output format. If the agent is a tool, define the exact structure of the expected output.
- Include examples. For complex output formats, include 1–2 examples in the system prompt.
- Set boundaries. “If you cannot complete the task with available tools, return an error with field `reason` explaining what is missing.”
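These practices can be enforced mechanically with a small prompt builder. The function and its parameters are illustrative, not part of any particular framework; it simply assembles the pieces in the recommended order — role first, then constraints, output format, and boundaries.

```python
def build_system_prompt(role: str, action: str, constraints: list[str],
                        output_format: str, error_instruction: str) -> str:
    """Assemble a system prompt: role, constraints, format, boundaries."""
    lines = [f"You are a {role} that {action}."]
    lines += [f"Do NOT {c}." for c in constraints]
    lines.append(f"Output format: {output_format}")
    lines.append(error_instruction)
    return "\n".join(lines)

prompt = build_system_prompt(
    role="code reviewer",
    action="analyzes code for bugs and security issues",
    constraints=["suggest stylistic changes", "rewrite the code yourself"],
    output_format="one finding per line, rated critical/warning/info",
    error_instruction=(
        "If you cannot complete the task with available tools, return an "
        "error with field `reason` explaining what is missing."
    ),
)
```

A builder like this keeps every agent’s prompt structurally consistent, which makes a fleet of agents easier to review.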
## Temperature Guidance

| Temperature | Behavior | Use Cases |
|---|---|---|
| 0 | Deterministic — same input produces same output | Code review, factual analysis, data extraction, formatting |
| 0.3 | Low variance — mostly consistent with minor variation | Planning, structured writing, technical documentation |
| 0.5 | Balanced — reliable structure with creative phrasing | General-purpose writing, summaries, reports |
| 0.7 | Higher variance — more diverse phrasing and ideas | Creative writing, brainstorming alternatives, marketing copy |
| 1.0+ | High variance — unexpected connections and novel approaches | Free brainstorming, ideation, exploring unusual angles |
## Input/Output Schema Design Tips

When designing schemas for agent-tools:

- Name parameters descriptively: `analysis_type` over `type`, `target_audience` over `audience`.
- Use the description field. The parent agent reads these descriptions when deciding how to call the tool. Vague descriptions produce vague inputs.
- Mark optional parameters as optional. Do not force the parent agent to provide values it cannot determine.
- Keep output schemas flat. Nested objects increase parsing complexity. Prefer multiple string/number fields over a single complex object.
- Include a status or confidence field in outputs. This gives the parent agent a signal to decide whether to retry, escalate, or proceed.
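Applying these tips to the Research Agent from Pattern 2 might yield a schema like the sketch below. The JSON-Schema-style layout is an assumption — the exact dialect your platform expects may differ — but the tips are all visible: descriptive names, per-field descriptions, an optional parameter, a flat output, and a status field.

```python
# Illustrative schema for the Research Agent tool from Pattern 2.
research_agent_schema = {
    "input": {
        "type": "object",
        "properties": {
            "topic": {
                "type": "string",
                "description": "Subject to research, phrased as a noun "
                               "phrase, e.g. 'EU battery recycling rules'.",
            },
            "depth": {
                "type": "string",
                "enum": ["shallow", "deep"],
                "description": "shallow = top results only; "
                               "deep = follow and read sources.",
            },
        },
        "required": ["topic"],  # depth is optional: the parent may not know it
    },
    "output": {
        "type": "object",
        "properties": {  # flat fields: no nested objects to parse
            "findings": {"type": "string"},
            "source_count": {"type": "number"},
            "status": {
                "type": "string",
                "enum": ["ok", "partial", "failed"],
                "description": "Signal for the parent: retry, escalate, or proceed.",
            },
        },
    },
}
```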