
Task Decomposition Strategies for Complex Workflows

5,400 words · ≈ 27 min read

Task decomposition strategies for complex workflows is the architecture-level lens that Domain 1 of the Claude Certified Architect Foundations (CCA-F) exam applies to any non-trivial agent system. Task statement 1.6 asks candidates to "design task decomposition strategies for complex workflows," and because 27% of the exam weight sits in Agentic Architecture and Orchestration, task decomposition appears directly or indirectly in most scenarios an architect will face on the 60-question, 120-minute, 720-to-pass test. The Code Generation with Claude Code and Multi-Agent Research System scenario clusters in particular lean heavily on correct decomposition choices — fixed sequential pipelines versus dynamic adaptive plans, per-file local passes versus cross-file integration passes, coordinator-as-planner versus statically wired chains.

This study note walks through the full surface of task decomposition strategies a CCA-F architect is expected to master: why decomposition exists at all (reducing per-agent cognitive load, enabling parallelism, and mitigating attention dilution), the two dominant decomposition approaches (top-down hierarchical versus bottom-up functional), granularity calibration (tasks too large overwhelm a single agent, tasks too small drown in coordination overhead), dependency mapping (sequential chains versus independent parallel tracks), and subtask interface design (clean input/output contracts between decomposed steps). It then covers recursive decomposition (when a subtask is itself too complex), the coordinator-as-planner pattern, plan validation, the static-versus-dynamic decomposition split that is the single most-tested axis on the exam, fan-out parallelism, anti-patterns (chatty subtasks, circular dependencies, context loss), and the practical scenario anchors (CI/CD code review, legacy test coverage expansion, multi-agent research synthesis) the exam draws from. A clear CCA-F recognition depth note marks where architecture judgment ends and implementation-level detail begins.

Why Decompose? Reducing Cognitive Load Per Agent and Enabling Parallelism

Every non-trivial workflow you would hand to Claude has an implicit choice baked into it: pour the whole problem into a single prompt, or slice it into a sequence (or graph) of smaller problems. Task decomposition strategies live in the space between those extremes. Anthropic's own prompt-engineering guidance is explicit — chaining complex prompts improves performance because each sub-step gets Claude's full attention on a narrowed objective, instead of dividing attention across many loosely related asks at once.

There are three load-bearing reasons an architect decomposes a workflow:

  1. Attention focus. A single large prompt asking Claude to read twelve files, check security issues, rewrite three modules, draft a migration plan, and write the PR description will produce an answer that is shallowly correct on every axis and deeply correct on none. Decomposing into focused sub-steps gives each step the full context budget and reasoning budget.
  2. Reliability. Smaller steps are easier to validate. If a sub-step fails, you retry or reroute just that sub-step rather than re-running the whole pipeline. Error surfaces stay localized.
  3. Parallelism. Independent sub-steps can run concurrently on separate subagent sessions, cutting wall-clock time dramatically for tasks such as "review every file in this repository in parallel."

Task decomposition is the architectural practice of breaking a complex workflow into smaller, well-scoped sub-tasks that are composed through prompt chaining, subagent spawning, or coordinator-planner patterns. Decomposition reduces per-agent cognitive load, enables parallel execution of independent sub-tasks, isolates failure surfaces, and makes the overall system observable and debuggable.

The CCA-F exam repeatedly tests whether you recognize that decomposition is an architecture decision, not an ergonomic preference. When a scenario says "a single Claude Code session is asked to review thirty files and produce a combined summary," the right answer pattern on the exam involves decomposition — not a longer system prompt, not a bigger context window, not lowering temperature.

Decomposition Approaches: Top-Down Hierarchical vs Bottom-Up Functional

CCA-F recognizes two dominant approaches to shaping a decomposition:

Top-Down Hierarchical Decomposition

Start from the goal. Break it into major phases. Break each phase into concrete sub-tasks. Keep going until every leaf is a unit of work that a single agent can complete with high confidence. This is the natural shape for well-defined deliverables: "ship a feature," "audit a codebase," "produce a research brief."

Top-down decomposition is prescriptive — the architect (or a coordinator agent acting as architect) lays out the tree before execution starts. It pairs well with prompt chaining and plan mode.

Bottom-Up Functional Decomposition

Start from the available capabilities — the tools, subagent definitions, and MCP servers you already have. Ask "what useful compositions do these primitives enable?" and work upward toward the goal. This is the natural shape for exploratory workflows where the goal is fuzzy: "find anomalies in this log stream," "explore ways to improve test coverage."

Bottom-up pairs well with dynamic, adaptive decomposition because it embraces the possibility that the plan will reshape itself as more is learned during execution.

Picking Between Them

Top-down when the deliverable is crisp and the path is known. Bottom-up when the deliverable requires discovery or the tool mix is itself the most interesting variable. A mature system usually blends both: a top-down skeleton with adaptive bottom-up branching inside specific phases.

Granularity Calibration: Tasks Too Large vs Tasks Too Small

The single hardest judgment in task decomposition strategies is choosing the right granularity for a subtask.

Too Large

Signs a sub-task is too large:

  • A single prompt has to juggle several unrelated concerns (security + performance + style).
  • The subagent is given more than a handful of tools and cannot tell which to reach for.
  • The output becomes shallow because attention dilutes across too many goals.
  • You cannot write a one-sentence pass/fail criterion for the sub-task.

Too Small

Signs a sub-task is too small:

  • Coordination overhead (spawning a session, passing context, collecting results) dominates the actual work.
  • Sub-tasks have to hand large amounts of state back and forth because nothing can be decided locally.
  • The graph is bushy with nodes that do one-line transformations a single agent could have done inline.

The Calibrated Middle

A well-sized sub-task has a single clear objective, a focused tool set (two to five tools is a useful rule of thumb), an input contract that fits in a few hundred tokens, and an output contract that is either structured (JSON) or a short natural-language artifact. A sub-task that produces exactly one reviewable artifact — a per-file review, a function-level patch, a section of a research brief — is almost always correctly sized.

CCA-F scenarios that complain about "agents getting confused" or "inconsistent outputs across runs" almost always point toward granularity problems. Before reaching for a bigger model or a longer system prompt, ask whether the sub-task has too many concerns packed into it. The exam's preferred fix is almost always a tighter decomposition, not a more powerful model.

Dependency Mapping: Sequential Dependencies vs Independent Parallel Tracks

Once sub-tasks are identified, the next architectural question is how they depend on each other.

Sequential Dependencies

Sub-task B cannot start until sub-task A produces its output. Example: you cannot write the integration plan until you have the per-file analysis summaries. Sequential dependencies are naturally served by prompt chaining — the output of step one becomes part of the input to step two.

Independent Parallel Tracks

Sub-tasks that do not consume each other's outputs can run concurrently. Example: analyzing thirty files for code quality issues — each file review is independent. Parallel tracks fan out to multiple subagent sessions and then fan back in to a coordinator that aggregates results.

Mixed (DAG) Workflows

Real workflows almost always form a directed acyclic graph: some early sub-tasks fan out in parallel, their results converge into a synthesis step, that synthesis fans out again into the next parallel wave. Accurate dependency mapping is what makes a decomposition executable.

Mapping Drives Execution Shape

  • Pure sequential chain → prompt chaining with explicit handoff between steps.
  • Pure parallel fan-out → coordinator spawns N subagents via the Task tool and aggregates.
  • Mixed DAG → coordinator orchestrates waves, respecting dependency edges.
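The mapping from dependency edges to execution shape can be made mechanical. The sketch below, using illustrative task names and no real Claude APIs, groups sub-tasks into "waves" with a level-by-level variant of Kahn's algorithm: everything in one wave can fan out in parallel, and each wave waits on the one before it.

```python
# Sketch: deriving an execution shape from a dependency map.
# Task names and the dependency dict are illustrative, not a Claude API.
from collections import defaultdict

def execution_waves(deps):
    """Group tasks into waves: a task runs in the first wave after all of
    its dependencies have completed (Kahn's algorithm, by levels)."""
    indegree = {t: len(d) for t, d in deps.items()}
    dependents = defaultdict(list)
    for task, ds in deps.items():
        for d in ds:
            dependents[d].append(task)
    waves = []
    ready = sorted(t for t, n in indegree.items() if n == 0)
    while ready:
        waves.append(ready)
        next_ready = []
        for done in ready:
            for t in dependents[done]:
                indegree[t] -= 1
                if indegree[t] == 0:
                    next_ready.append(t)
        ready = sorted(next_ready)
    if sum(len(w) for w in waves) != len(deps):
        raise ValueError("circular dependency in decomposition")
    return waves

# Per-file reviews are independent; the synthesis step depends on all three.
deps = {
    "review_a.py": [],
    "review_b.py": [],
    "review_c.py": [],
    "synthesize": ["review_a.py", "review_b.py", "review_c.py"],
}
print(execution_waves(deps))
# → [['review_a.py', 'review_b.py', 'review_c.py'], ['synthesize']]
```

The first wave is the parallel fan-out; the second is the fan-in — exactly the mixed-DAG shape described above.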

Subtask Interface Design: Clean Input/Output Contracts

A decomposition is only as reliable as the contracts between its steps. Sloppy interfaces produce cascading errors.

Input Contract

What does a sub-task need to do its job, and no more? Over-stuffing the input with irrelevant context dilutes the sub-agent's attention. Under-providing forces the sub-agent to infer or hallucinate. A good input contract names the concrete artifact (file path, ticket ID, prior summary), the focused goal in one sentence, and the expected output shape.

Output Contract

The output of a sub-task is the input of whatever consumes it next. Structured outputs (JSON via tool use, or explicit XML sections in text) are preferred whenever a downstream step must parse the result. Natural-language outputs are fine when the downstream step is another Claude call that can interpret prose.
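One way to make these contracts concrete is to write them down as typed structures. The field names below are illustrative assumptions, not a Claude Code schema; the point is that everything crossing the boundary is explicit and serializable.

```python
# Sketch of an explicit input/output contract between decomposition steps.
# Field names are illustrative assumptions, not any official schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class SubTaskInput:
    artifact: str       # the concrete thing to work on (file path, ticket ID)
    goal: str           # the focused objective, in one sentence
    output_format: str  # the shape the downstream consumer expects

@dataclass
class SubTaskOutput:
    artifact: str
    findings: list      # structured results a downstream step can parse
    passed: bool        # the one-line pass/fail criterion, made explicit

task = SubTaskInput(
    artifact="src/auth/session.py",
    goal="Flag any insecure session-token handling in this file.",
    output_format="JSON list of findings with line numbers",
)
# Nothing implicit crosses the boundary — the whole contract serializes:
print(json.dumps(asdict(task), indent=2))
```

A coordinator that can only hand subagents a serialized `SubTaskInput` is structurally prevented from relying on shared conversation history.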

Why Contracts Matter

Claude Code subagents run with isolated context — they do not inherit the coordinator's full conversation history. Everything the subagent needs must cross the contract boundary explicitly. Forgetting this is one of the most common architect mistakes and a pain point flagged repeatedly in community pass reports for the CCA-F exam.

Recursive Decomposition: When a Subtask Is Still Too Complex

Sometimes the decomposition tree has to go deeper than one level. A "review this service" sub-task might itself need to decompose into "review the controller layer," "review the domain layer," "review the persistence layer" — each of which decomposes again into per-file passes.

When to Recurse

Recurse when a sub-task still fails the single-objective test or still has too many tools to reason about. The recursion naturally maps to coordinator-subagent nesting: a top-level coordinator spawns mid-level coordinators for each major phase; each mid-level coordinator spawns leaf subagents for individual units of work.

When to Stop Recursing

Stop when a sub-task can be completed in a single focused Claude session with a clear pass/fail criterion. Further recursion adds coordination overhead without adding focus.

Depth Limits

In practice, three levels of nesting is usually enough — root coordinator, phase coordinators, leaf workers. Going deeper quickly creates context-passing complexity that outweighs the focus benefit.
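The recursion rule above ("recurse until the single-objective test passes, with a depth limit") can be sketched directly. The splitter and leaf test below are stand-ins for real architectural judgment; the task names are invented for illustration.

```python
# Sketch: recursive decomposition with a depth limit. `is_leaf` stands in for
# the "single focused session with a clear pass/fail criterion" test.
def decompose(task, split, is_leaf, depth=0, max_depth=3):
    """Return a nested plan: leaves are executable sub-tasks,
    interior nodes are coordinator scopes. Stops at max_depth."""
    if is_leaf(task) or depth >= max_depth:
        return task
    return {task: [decompose(t, split, is_leaf, depth + 1, max_depth)
                   for t in split(task)]}

# Illustrative splitter: a service review breaks into layers, layers into files.
children = {
    "review service": ["review controller layer", "review domain layer"],
    "review controller layer": ["review routes.py", "review handlers.py"],
    "review domain layer": ["review models.py"],
}
plan = decompose(
    "review service",
    split=lambda t: children.get(t, []),
    is_leaf=lambda t: t not in children,
)
print(plan)
```

The resulting tree has exactly the three-level shape the text recommends: root coordinator, phase coordinators, leaf workers.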

Coordinator-as-Planner Pattern: Claude Generates Its Own Plan

A powerful pattern in task decomposition strategies is letting Claude itself generate the decomposition plan instead of hardcoding it.

How It Works

You give a coordinator agent the overall goal, the available tools, and optionally a sketch of constraints. The coordinator uses extended thinking (or plan mode) to enumerate sub-tasks, order them by dependency, and pick which tool or subagent handles each. The coordinator then executes the plan, spawning subagents or calling tools as the plan dictates.
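The plan-then-execute shape can be sketched as follows. `ask_model` is a placeholder for a real Claude call (in practice, a request asking for a JSON plan, with extended thinking or plan mode enabled); here it returns a canned plan so the control flow is visible without network access.

```python
# Sketch of coordinator-as-planner. `ask_model` is a stand-in for a real
# Claude API call that returns a JSON plan; canned here for illustration.
import json

def ask_model(goal, tools):
    # Placeholder: a real coordinator would send `goal` and the available
    # `tools` to Claude and request an ordered JSON plan.
    return json.dumps([
        {"step": "map the codebase", "tool": "glob"},
        {"step": "rank modules by missing coverage", "tool": "read"},
        {"step": "write tests for the top-priority module", "tool": "edit"},
    ])

def plan_and_execute(goal, tools, execute):
    plan = json.loads(ask_model(goal, tools))   # Claude generates the plan
    results = []
    for step in plan:                           # coordinator executes it
        results.append(execute(step["step"], step["tool"]))
    return results

results = plan_and_execute(
    "add comprehensive tests to this legacy codebase",
    tools=["glob", "read", "edit"],
    execute=lambda step, tool: f"ran {step!r} with {tool}",
)
print(results[0])
```

Note that the plan is data, not code: it can be logged, validated, or shown to a human before the execute loop runs.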

When to Use It

Use coordinator-as-planner when the exact shape of the work cannot be known in advance — e.g., "add comprehensive tests to this legacy codebase." The coordinator must first map the codebase, identify high-impact areas, and then decide which tests to write where. No static plan can predict the right answer without first seeing the code.

When Not to Use It

When the workflow is predictable and repeated — e.g., "review every PR on main for secrets and license issues" — a static, pre-written chain is cheaper, faster, and more auditable than re-planning each time.

Plan Validation: Verifying Decomposition Completeness Before Execution

A plan produced by a coordinator (or written by a human architect) should be inspected before execution starts on expensive long-running work.

What to Validate

  • Completeness: does every goal in the original ask map to at least one sub-task?
  • Non-overlap: do any two sub-tasks duplicate work?
  • Feasibility: does each sub-task have the tools, context, and permissions it needs?
  • Coverage of edge cases: are failure paths explicit (what happens if a sub-task returns an error)?

How to Validate

  • Plan mode in Claude Code shows the proposed plan before any tool executes, giving a human (or a second agent) the chance to approve, reject, or revise.
  • Plan-checker agents — a secondary agent reviews the plan and flags gaps before the first executor starts.
  • Dry-runs on a small representative slice — run the decomposition on three files before turning it loose on three hundred.
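The first three checks in the list above are mechanical enough to automate. A minimal sketch, with assumed field names (`covers`, `tools`) on each plan step — a plan-checker agent would still be needed for the semantic gaps these checks cannot see:

```python
# Sketch of mechanical plan validation before execution.
# The plan-step field names (`covers`, `tools`) are assumptions.
def validate_plan(goals, plan, available_tools):
    problems = []
    covered = {g for step in plan for g in step["covers"]}
    for goal in goals:                            # completeness
        if goal not in covered:
            problems.append(f"no sub-task covers goal: {goal}")
    seen = {}
    for step in plan:                             # non-overlap
        for g in step["covers"]:
            if g in seen:
                problems.append(f"{step['name']} duplicates {seen[g]} on {g}")
            seen.setdefault(g, step["name"])
    for step in plan:                             # feasibility
        missing = set(step["tools"]) - set(available_tools)
        if missing:
            problems.append(f"{step['name']} needs unavailable tools: {missing}")
    return problems

plan = [
    {"name": "security pass", "covers": ["security"], "tools": ["read"]},
    {"name": "style pass", "covers": ["style"], "tools": ["read", "lint"]},
]
issues = validate_plan(["security", "style", "performance"], plan, ["read"])
print(issues)
# Flags the uncovered "performance" goal and the unavailable "lint" tool.
```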

Dynamic Decomposition vs Static Decomposition: The Exam's Favorite Axis

This is the single most-tested distinction inside task statement 1.6 and the one candidates most often get wrong.

Static (Fixed Sequential Pipeline) Decomposition

The plan is fully determined before execution starts. Every step, every sub-task boundary, every tool assignment is known in advance. Prompt chaining is the canonical implementation: step 1's output feeds step 2, step 2's output feeds step 3.

Static decomposition is ideal for:

  • Multi-aspect reviews with predictable shape. Example: "for each file in this PR, run a security pass, a style pass, and a performance pass, then produce an integration summary." The work surface is known; the shape is stable.
  • CI/CD pipelines where reproducibility and auditability outweigh flexibility. Every run should traverse the same steps for the same kind of input.
  • High-volume repetitive work where you want cost and latency to be predictable.

Dynamic (Adaptive) Decomposition

The plan emerges during execution. Early sub-tasks produce information that reshapes later sub-tasks. The coordinator is an active planner that revises the remaining plan after each result.

Dynamic decomposition is ideal for:

  • Open-ended exploration. Example: "add comprehensive tests to this legacy codebase." The coordinator first maps the module structure, identifies high-impact areas that lack coverage, builds a prioritized plan, and then delegates per-module test authoring. No static plan could have chosen the right files without first looking.
  • Multi-agent research systems. The coordinator reads an initial set of sources, discovers unexpected threads, and spawns subagents to chase the most promising threads.
  • Incident response and debugging. The next step depends on what the current step finds.

The Architectural Trade-Off

Static is predictable and auditable; dynamic is flexible and context-aware. The CCA-F exam rewards choosing based on scenario cues, not preference:

  • "Predictable, repeatable, every PR goes through the same checks" → static / prompt chaining.
  • "Open-ended, we don't know the structure until we look" → dynamic / adaptive plan.

The static-versus-dynamic decomposition axis is the most frequently tested distinction inside task statement 1.6 on the CCA-F exam. The cue that selects static is predictability of the work surface. The cue that selects dynamic is discovery — you cannot know the right plan until early sub-tasks return information that shapes later sub-tasks. "Add comprehensive tests to this legacy codebase" is the canonical dynamic scenario; "review every file in this PR with a consistent checklist" is the canonical static scenario.

Decomposition for Parallelism: Identifying Fan-Out Opportunities

Parallelism is one of the largest practical wins from good decomposition.

Fan-Out Patterns

When sub-tasks are independent, the coordinator spawns them simultaneously via the Task tool (or the Agent SDK's subagent spawning APIs). Each subagent runs in its own isolated context. The coordinator collects results and synthesizes them in a downstream step.

Per-File Local Analysis + Cross-File Integration Pass

This pattern deserves explicit callout because it is the dominant exam pattern for code-generation scenarios. When a task requires reviewing many files:

  1. Per-file local pass (parallel fan-out). Each file gets its own subagent with focused context. The subagent analyzes that single file in depth, producing a structured local summary.
  2. Cross-file integration pass (sequential aggregation). A single coordinator reads all the per-file summaries and synthesizes cross-cutting findings — shared utilities, API contracts, inconsistent patterns — that no single per-file pass could have seen.

This two-stage shape beats both "review everything in one prompt" (attention dilutes) and "review each file entirely independently with no integration" (cross-file bugs are missed).
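The two-stage shape can be sketched in a few lines. `review_file` is a placeholder for a spawned subagent — in a real system each call would be a Task-tool invocation with isolated context — and the integration pass consumes only the structured summaries, never the raw files.

```python
# Sketch of per-file fan-out plus a cross-file integration pass.
# `review_file` stands in for a spawned subagent with isolated context.
from concurrent.futures import ThreadPoolExecutor

def review_file(path):
    # Placeholder local pass: one file, one focused objective.
    return {"file": path, "findings": [f"checked {path} in depth"]}

def integrate(reviews):
    # Cross-file pass: consumes structured summaries, not raw files,
    # so its context budget stays small even on a large codebase.
    return {
        "files_reviewed": sorted(r["file"] for r in reviews),
        "cross_cutting": "themes synthesized from per-file summaries",
    }

files = ["a.py", "b.py", "c.py"]
with ThreadPoolExecutor() as pool:          # parallel fan-out
    reviews = list(pool.map(review_file, files))
summary = integrate(reviews)                # sequential fan-in
print(summary["files_reviewed"])
# → ['a.py', 'b.py', 'c.py']
```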

Limits on Parallelism

Fan-out is not free. Each subagent consumes API quota, each spawned session incurs setup latency, and aggregation cost grows with the number of results. A useful mental ceiling is "parallelize when N is large enough that wall-clock matters, but not so large that aggregation becomes its own load problem."

Attention Dilution: The Load-Bearing Risk in Single-Pass Review

Attention dilution is the failure mode that motivates much of task decomposition strategies, and it is called out explicitly in the CCA-F core technical beats.

Attention dilution is the degradation in output quality that happens when a single Claude session is asked to attend to too many files, too many goals, or too many concerns at once. Each additional focus target reduces the share of reasoning attention any one target receives. Symptoms include shallow analysis that misses details within any given file, cross-file inconsistencies, and outputs that are plausible at a glance but fall apart on careful inspection. Decomposition into per-file or per-concern sub-tasks restores per-task attention.

How Attention Dilution Shows Up

  • A single-pass review of twenty files produces one paragraph per file that reads like a generic checklist, not a genuine inspection.
  • The coordinator repeats the same high-level observation in different words across many files because it cannot drill into any one of them.
  • Cross-file bugs are missed because the session cannot keep every file in working attention simultaneously.

How Decomposition Fixes It

Split into per-file sub-tasks, each with the full context budget focused on one file. Then add a dedicated integration pass whose only job is to look at the per-file outputs and find cross-cutting patterns. Two narrow passes beat one wide pass.

When Not to Decompose for Attention

Tiny workloads (two or three files) do not need decomposition — the coordination overhead exceeds the attention benefit. Decomposition becomes the clear winner somewhere around five-plus files, earlier when files are large or the checklist is rich.

Prompt Chaining: The Canonical Static Decomposition Tool

Prompt chaining is the simplest and most-tested implementation of static decomposition.

Prompt chaining is a technique where a complex task is split into a sequence of smaller prompts, each focused on a single sub-goal, with the output of one prompt feeding into the next. Prompt chaining is the canonical implementation of static, sequential decomposition: the full chain is defined in advance and every execution follows the same steps. It improves accuracy on multi-aspect tasks by giving Claude's full attention to each narrowed sub-goal.

When Prompt Chaining Fits

  • The task has a predictable sequence (analyze → summarize → translate → format).
  • Each step has a clear input and output.
  • Reliability and auditability matter more than flexibility.
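A minimal sketch of the fixed shape, with `call_claude` as a placeholder for a real API call (it echoes the first line of each prompt so the step-to-step handoff is visible): the chain is defined in advance, and every run traverses the same three steps.

```python
# Minimal sketch of a static prompt chain. `call_claude` is a placeholder
# for a real Claude API call; the fixed analyze → summarize → format shape
# is the point, not the stub's behavior.
def call_claude(prompt):
    # Placeholder: echoes a tag so each handoff is visible.
    return f"<done:{prompt.splitlines()[0]}>"

def chain(document):
    analysis = call_claude(f"Analyze the key claims.\n\n{document}")
    summary = call_claude(f"Summarize this analysis.\n\n{analysis}")
    formatted = call_claude(f"Format as a bulleted brief.\n\n{summary}")
    return formatted  # same three steps on every run: auditable, reproducible

print(chain("Quarterly report text..."))
# → <done:Format as a bulleted brief.>
```

Because each step's output is an explicit value handed to the next step, any step can be logged, validated, or retried in isolation.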

When Prompt Chaining Is Wrong

  • The next step genuinely depends on what the current step discovers (use dynamic decomposition instead).
  • The chain is a single trivial transformation best handled in one prompt.
  • The sub-steps are independent and could run in parallel (use fan-out instead of serial chaining).

Dynamic (Adaptive) Decomposition: Plan Emerges During Execution

Dynamic decomposition (also called adaptive decomposition) is a strategy where the coordinator generates and revises its plan during execution, based on what earlier sub-tasks return. Unlike static prompt chaining, no full plan exists at the start — the coordinator alternates between planning, executing a step, observing the result, and re-planning the remaining work. Dynamic decomposition is the right tool when the work surface is not known up front and when late information must reshape early decisions.

The Adaptive Loop

A dynamic decomposition agent runs a loop:

  1. Plan the next one-to-three sub-tasks based on current state.
  2. Execute the next sub-task (directly or via a spawned subagent).
  3. Observe the result and update the mental model of the remaining work.
  4. Re-plan the remaining work in light of what was just learned.
  5. Exit when the overall goal criterion is met.
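The five steps above can be sketched as a loop. `propose_next` stands in for a Claude planning call; here it re-plans from a simulated coverage map, but the control flow — plan, execute, observe, re-plan, exit on the goal criterion — is the same.

```python
# Sketch of the adaptive plan-execute-observe loop. `propose_next` is a
# stand-in for a Claude planning call; coverage numbers are simulated.
def propose_next(state):
    # Re-plan from current state: lowest-coverage module below the target.
    candidates = [m for m, cov in state["coverage"].items() if cov < 80]
    return min(candidates, key=lambda m: state["coverage"][m]) if candidates else None

def adaptive_loop(state, execute, max_steps=10):
    log = []
    for _ in range(max_steps):
        module = propose_next(state)        # 1. plan the next sub-task
        if module is None:                  # 5. goal criterion met → exit
            break
        result = execute(module)            # 2. execute (e.g. via a subagent)
        state["coverage"][module] = result  # 3. observe, update the model of work
        log.append(module)                  # 4. the next iteration re-plans
    return log

state = {"coverage": {"auth": 20, "billing": 55, "ui": 90}}
order = adaptive_loop(state, execute=lambda m: 95)  # pretend tests raise coverage
print(order)
# → ['auth', 'billing']
```

Note that no full plan ever exists up front: `auth` is chosen first only because the current state says it is riskiest, and `billing` is chosen only after `auth`'s result is observed.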

Practical Example: Comprehensive Test Coverage for a Legacy Codebase

The canonical dynamic decomposition scenario on the CCA-F exam. The coordinator first maps the directory structure, identifies modules with the lowest existing coverage and the highest production risk, builds a prioritized plan, and then delegates per-module test authoring to subagents. A static plan could not have picked the right priorities without first reading the code.

Dynamic Decomposition and Plan Mode

Plan mode is a lightweight form of dynamic decomposition: Claude Code plans what it will do before touching tools, surfaces the plan for human approval, and then executes. Plan mode is useful when a single human reviewer should see and approve the plan. Fully autonomous dynamic decomposition is useful when the loop must run without a human in the hot path.

The Task Tool and Subagent Spawning as Decomposition Primitives

Both Claude Code and the Agent SDK expose a Task tool (and related subagent-spawning APIs) that are the mechanical implementation of decomposition.

The Task Tool

The Task tool lets a coordinator agent spawn a subagent to handle a well-scoped sub-task. The coordinator supplies a description of what the subagent should do, and optionally a set of tools the subagent is allowed to use. The subagent runs in isolated context — it does not see the coordinator's conversation history, only what the coordinator explicitly passes in.

Subagents as Decomposition Units

Each subagent invocation corresponds to one sub-task in the decomposition tree. Good architecture aligns the decomposition with the subagent boundary — one leaf sub-task per subagent invocation, clean input and output contracts.

Context Isolation Is a Feature, Not a Bug

The CCA-F exam repeatedly tests whether candidates understand that subagent context isolation is intentional. Isolation prevents the coordinator's accumulated chatter from diluting the subagent's focus. Everything the subagent needs must be passed in explicitly. If you find yourself needing to "leak" context across the boundary, your decomposition is probably wrong.

Subagents spawned via the Task tool (or Agent SDK subagent APIs) operate with isolated context. They do NOT automatically inherit the coordinator's full conversation history. Every piece of context the subagent needs must be passed in explicitly through the sub-task description. This is one of the top pain points flagged in CCA-F community pass reports — architects who assume context flows down through the hierarchy design systems that silently fail.

Cross-File Integration: The Pass That Ties Per-File Results Together

After per-file local analysis fans out and completes, a dedicated cross-file integration pass synthesizes findings that no single per-file pass could have produced.

What Cross-File Integration Does

  • Detects patterns that span files: an API contract used inconsistently across consumers, a utility re-implemented in multiple places, a security pattern applied in some modules and forgotten in others.
  • Reconciles contradictions: if one per-file review flagged an import as unused while another flagged it as critical, the integration pass resolves which is correct.
  • Produces the final aggregated artifact — the PR comment, the architecture memo, the prioritized action list.

Why It Must Be a Separate Pass

If you ask a single agent to read every file and then also produce cross-file synthesis in the same prompt, attention dilutes across both jobs. Two narrow passes — each with the whole attention budget for its single job — outperform one wide pass.

Integration Pass Inputs

The integration pass consumes the structured outputs of the per-file passes, not the raw files. This keeps the integration pass's context budget manageable even when the underlying codebase is large.

Decomposition Anti-Patterns: Chatty Subtasks, Circular Dependencies, Lost Context

Mature task decomposition strategies are partly defined by the failure modes they avoid.

Anti-Pattern 1: Overly Chatty Subtasks

Sub-tasks that hand megabytes of context back and forth to compensate for over-fine granularity. Fix: coarsen the granularity so more work can be decided locally inside one sub-task.

Anti-Pattern 2: Circular Dependencies

Sub-task A needs output from B, which needs output from A. This usually means the decomposition boundary is in the wrong place. Fix: merge A and B into a single sub-task, or re-cut the boundary so the dependency flows in one direction.
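A cycle like this can be caught mechanically before any agent runs, as part of plan validation. A minimal sketch with invented task names, using a standard three-color depth-first search over the dependency map:

```python
# Sketch: detecting a circular dependency in a decomposition before
# execution. Task names are illustrative; deps maps task → prerequisites.
def find_cycle(deps):
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / in progress / done
    color = {t: WHITE for t in deps}

    def visit(t):
        color[t] = GRAY
        for d in deps.get(t, []):
            if color.get(d, WHITE) == GRAY:
                return [t, d]               # back edge: cycle found
            if color.get(d, WHITE) == WHITE and d in deps:
                found = visit(d)
                if found:
                    return [t] + found
        color[t] = BLACK
        return None

    for t in deps:
        if color[t] == WHITE:
            found = visit(t)
            if found:
                return found
    return None

# A needs B's output and B needs A's: the boundary is in the wrong place.
print(find_cycle({"A": ["B"], "B": ["A"], "C": []}))
# → ['A', 'B', 'A']
```

A non-`None` result is the signal to merge the offending sub-tasks or re-cut the boundary, per the fix above.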

Anti-Pattern 3: Lost Context at the Boundary

The coordinator forgets to pass a critical piece of context into the subagent's input, and the subagent either asks back (adding a round trip) or fabricates a plausible substitute. Fix: explicitly document and audit the input contract for every sub-task type.

Anti-Pattern 4: Decomposition That Is Never Reassembled

A pipeline that produces per-file outputs and then stops, expecting a human to do the cross-file integration. Fine if the human is always there; a bug if the pipeline is supposed to be autonomous. Fix: always include an explicit aggregation step in the design.

Anti-Pattern 5: One-Size-Fits-All Decomposition

Using the same fixed decomposition pattern for every scenario regardless of whether the work is predictable or exploratory. Fix: pick static-versus-dynamic per scenario based on the cues outlined earlier.

Plain-Language Explanation: Task Decomposition Strategies

Abstract decomposition patterns become intuitive when grounded in physical systems. Three analogies cover the full sweep of task decomposition strategies.

Analogy 1: The Restaurant Kitchen — Static Prompt Chaining

A busy restaurant kitchen during dinner service is static, sequential decomposition in physical form. The prep cook washes and chops ingredients. The sauté station cooks the proteins. The sauce station finishes with the pan sauce. The garnish station plates. Each station has one job, its own tools, and a predictable hand-off to the next station. The menu defines the chain in advance; every order of steak frites traverses the same stations in the same order. No station tries to do every other station's job — that would be attention dilution on a plate.

This is prompt chaining: each Claude sub-step is a kitchen station with a narrow objective, a focused tool set, and a clean hand-off artifact. Multi-aspect PR review works the same way — a security station, a style station, a performance station, and a final plating pass that produces the integrated review.

Analogy 2: Renovating a House — Dynamic Adaptive Decomposition

Adding comprehensive tests to a legacy codebase is like renovating an old house you have never been inside. You cannot write the full contractor schedule on day one — you do not know which walls are load-bearing, which wiring is rotten, which rooms are structurally sound. Instead, the general contractor spends the first week surveying the building, identifying the highest-risk problems, prioritizing them, and only then starts allocating sub-contractors to specific rooms. As the electrician finds unexpected knob-and-tube wiring, the plan re-shuffles again.

This is dynamic decomposition. The plan emerges in layers — survey first, identify priorities, delegate concrete tasks — and reshapes itself whenever a sub-task returns information that changes the picture. A statically scheduled renovation of a house you have never seen would be an expensive disaster.

Analogy 3: The Open-Book Exam with Many Chapters — Attention Dilution

Imagine sitting an open-book exam covering ten different chapters, with ninety minutes to write one single essay that addresses every chapter. A student trying to attend to all ten chapters in one continuous sweep produces a shallow essay that touches each topic superficially and misses the deep connections between them. The smarter strategy is to split the time: twenty minutes per chapter on focused notes, then a ten-minute synthesis pass that ties the chapter notes together into one coherent essay.

This is exactly per-file local analysis plus cross-file integration. Each "chapter pass" gets full attention on one file. The synthesis pass gets full attention on the integration. Two narrow passes beat one wide pass, because attention is a finite resource that cannot be subdivided without loss.

Which Analogy to Use on Exam Day

  • "Predictable pipeline, every run is the same shape" → kitchen analogy → prompt chaining / static decomposition.
  • "Open-ended, you have to look before you can plan" → renovation analogy → dynamic / adaptive decomposition.
  • "Reviewing many files, outputs feel shallow or miss cross-file issues" → open-book exam analogy → per-file decomposition plus integration pass.

Scenario Walkthrough: Code Generation with Claude Code

The Code Generation with Claude Code scenario cluster is one of the six CCA-F scenarios and a frequent home for task 1.6 questions.

Scenario Shape

A Claude Code session is asked to review a pull request spanning a dozen files and produce a single combined review comment covering security, style, and maintainability.

The Wrong Decomposition

A single prompt: "Review every file for security, style, and maintainability, then write the combined comment." Attention dilutes across twelve files times three concerns, producing shallow observations and missing cross-file issues.

The Right Decomposition

  • Per-file fan-out: one subagent per file, each producing a structured local review covering the three concerns for that file alone.
  • Cross-file integration pass: a single coordinator reads the twelve structured reviews, identifies cross-cutting themes, reconciles contradictions, and writes the combined PR comment.

This shape directly mirrors the per-file local plus cross-file integration pattern and is the preferred architecture in Claude's own documentation.

Scenario Walkthrough: Multi-Agent Research System

The Multi-Agent Research System scenario is the canonical home for dynamic decomposition questions.

Scenario Shape

A coordinator is asked to produce a research brief on an open-ended topic. The coordinator has access to web search, internal document retrieval, and the ability to spawn research subagents.

The Wrong Decomposition

A static chain: search → summarize → write. The coordinator cannot know in advance which threads are worth chasing without first reading the initial search results. A static chain forces a shallow brief because the promising threads are discovered too late to pursue.

The Right Decomposition

  • Initial scan: the coordinator runs a broad search and reads top-level results to map the territory.
  • Thread identification: the coordinator identifies three to five most-promising threads.
  • Parallel deep dives: the coordinator spawns a research subagent per thread, each with its own isolated context and focused tool set.
  • Synthesis: the coordinator aggregates thread summaries into the final brief.

The plan is generated after the initial scan, not before. That is dynamic decomposition.
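The look-then-plan sequence can be made concrete with a sketch. All four functions here (`initial_scan`, `identify_threads`, `deep_dive`, `research_brief`) are hypothetical placeholders for coordinator and subagent calls; the point is the ordering, not the implementation.

```python
def initial_scan(topic: str) -> list[str]:
    # Stand-in for a broad search pass that maps the territory.
    return [f"{topic}: lead {i}" for i in range(8)]

def identify_threads(leads: list[str], max_threads: int = 4) -> list[str]:
    # The plan is generated HERE, after the scan -- not before.
    return leads[:max_threads]

def deep_dive(thread: str) -> str:
    # Stand-in for a research subagent with isolated context
    # and a focused tool set.
    return f"summary of {thread}"

def research_brief(topic: str) -> str:
    leads = initial_scan(topic)                   # look first
    threads = identify_threads(leads)             # then plan
    summaries = [deep_dive(t) for t in threads]   # parallel in practice
    return "\n".join(summaries)                   # synthesis
```

A static chain would hard-code `threads` before `initial_scan` ever ran; the dynamic version derives it from observed results.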

Scenario Walkthrough: Claude Code for Continuous Integration

The CI/CD scenario cluster tests whether an architect recognizes when static, reproducible decomposition is the right call.

Scenario Shape

A team wants Claude Code to run on every pull request in CI and enforce a consistent review standard across the organization.

The Right Decomposition

A static prompt chain is almost always correct here. The steps are the same for every PR. Reproducibility and auditability dominate. Dynamic re-planning on every PR would make CI results non-deterministic and the pipeline un-reviewable.

Why Not Dynamic?

Dynamic decomposition in CI means every run could do something different, which breaks the core CI contract of "same inputs, same outputs." The -p (non-interactive) flag, static chains, and explicit step definitions line up with CI's need for reproducibility.
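For contrast, a static chain can be sketched as a fixed list of prompts walked in order. `run_step` is a hypothetical placeholder for a non-interactive `claude -p` invocation; the prompts are illustrative, not an official checklist.

```python
# The full sequence is fixed before execution: every PR traverses the
# same steps in the same order, which is what makes the run auditable.
CHAIN = [
    "Scan the diff for hard-coded secrets; report file and line.",
    "Check changed files for license-header compliance.",
    "Flag changed functions whose tests were not updated.",
    "Write one combined review comment from the prior findings.",
]

def run_step(prompt: str, context: str) -> str:
    # Hypothetical stand-in for a non-interactive Claude call;
    # a real pipeline would shell out or use the SDK here.
    return f"[result of: {prompt[:30]}...]"

def run_chain(diff: str) -> list[str]:
    results = []
    for prompt in CHAIN:
        # Each step receives the diff plus accumulated results,
        # but the step list itself never changes between runs.
        results.append(run_step(prompt, diff + "\n".join(results)))
    return results
```

Because `CHAIN` is data, it can be code-reviewed and versioned like any other CI configuration — exactly the auditability property the scenario demands.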

Do not default to "more adaptive is always better" on CI/CD scenarios. The exam rewards matching decomposition style to scenario requirements. Adaptive decomposition in a CI pipeline is an architecture smell, not a feature — CI needs deterministic, reproducible runs, and a static prompt chain delivers that. The same architect answer changes sign when the scenario switches to open-ended research or legacy test coverage expansion, where dynamic decomposition is correct. Read the scenario cues first, then pick the pattern.

Common Exam Traps: Decomposition Misconceptions CCA-F Exploits

Trap 1: Decomposition Does Not Automatically Eliminate Context Limits

A common distractor claims that decomposing a workflow always solves context window problems. Decomposition does help by giving each sub-task focused context, but it does not magically make context infinite. Each subagent still has a context window; passing fifty megabytes of code into one subagent's input fails just as it would in a monolithic agent. Decomposition is a focus tool, not a storage trick.

Trap 2: Plan Mode Is Not the Same as Decomposition

Plan mode is a Claude Code feature for reviewing a proposed plan before execution. Task decomposition is the architectural design choice about how to slice a workflow. You can use plan mode on a decomposed plan or on a monolithic plan. The exam sometimes presents plan mode as if it were the whole of decomposition — it is not.

Trap 3: Dynamic Decomposition Is Not Always Better

Community pass reports repeatedly flag the instinct to choose "adaptive" on every scenario. Adaptive is wrong for CI/CD, wrong for high-volume structured extraction, and wrong for any scenario where predictability outweighs flexibility. Read the cues — predictability selects static, discovery selects dynamic.

Trap 4: Subagent Context Is Not Inherited

Candidates who assume the coordinator's history flows into every subagent end up designing broken systems. The Task tool and Agent SDK subagent APIs both isolate context by default. This is a feature that improves focus, not a bug to work around.

Trap 5: Fan-Out Is Not Free

Parallelism looks "free" on paper, but each subagent consumes API quota, each spawn has setup latency, and aggregation complexity grows with the number of branches. The exam tests whether the candidate recognizes when fan-out actually saves wall-clock time and when it is overhead for overhead's sake.

Trap 6: Cross-File Integration Is Not Optional for Per-File Fan-Out

A decomposition that fans out to per-file subagents and then stops, expecting the user to read twelve separate reports, has not finished the job. The architect must always design the integration pass. Leaving it out is a frequent wrong answer pattern.

Practice Anchors: Task 1.6 Scenario Question Templates

CCA-F practice questions on task decomposition strategies cluster into five shapes. Detailed multi-question drills live in the ExamHub question bank.

Template A: Static vs Dynamic Selection (Code Generation)

A team wants Claude Code to review every pull request against a consistent checklist: secrets detection, license compliance, test coverage threshold. The checks are the same for every PR. Which decomposition strategy fits? Correct answer: static prompt chaining — predictable work surface, reproducibility and auditability dominate. Distractors claim dynamic / adaptive (wrong because the scenario signals predictability).

Template B: Static vs Dynamic Selection (Legacy Test Expansion)

An engineering team wants Claude Code to add comprehensive tests to a legacy Python codebase they did not write. The target modules, priorities, and test scopes are not known up front. Which decomposition strategy fits? Correct answer: dynamic adaptive decomposition — the coordinator must first map the codebase and identify high-impact areas before a plan is possible. Distractors claim static chaining (wrong because the plan cannot be written before the codebase is seen).

Template C: Attention Dilution Diagnosis (Multi-File Review)

A single Claude Code session is asked to review twenty files for security, style, and maintainability in one prompt. The output is superficial and misses cross-file issues. What is the most likely architectural fix? Correct answer: per-file decomposition plus a cross-file integration pass — restore per-task attention through fan-out, then synthesize. Distractors claim "longer system prompt" or "larger context window" (neither addresses attention dilution).

Template D: Subagent Context Isolation

A coordinator spawns a subagent to analyze a specific module and assumes the subagent can see the coordinator's earlier discussion of the project conventions. The subagent ignores the conventions. Root cause? Correct answer: subagent context is isolated; the coordinator did not pass the conventions explicitly through the Task tool input. Distractor claims "model is too small to follow instructions" (wrong root cause).

Template E: CI/CD Decomposition Shape

A CI pipeline using Claude Code must run the same review steps on every PR with deterministic behavior. Which decomposition pattern should the architect recommend? Correct answer: static prompt chain with -p non-interactive mode — reproducibility dominates. Distractor claims "coordinator-as-planner with dynamic re-planning per PR" (wrong because it breaks reproducibility).

CCA-F Depth: What Task 1.6 Tests and What It Does Not

CCA-F is an architecture-level, recognition-depth certification for task decomposition. You are expected to identify the right decomposition pattern for a given scenario, recognize the static-versus-dynamic axis, spot attention dilution, and design clean sub-task boundaries with explicit contracts.

What Task 1.6 Expects of You

  • Recognize when to decompose at all vs when a single prompt is fine.
  • Choose static (prompt chaining) vs dynamic (adaptive) based on scenario cues.
  • Design per-file local plus cross-file integration patterns.
  • Spot attention dilution symptoms and apply decomposition as the fix.
  • Recognize subagent context isolation as intentional and designed-for.
  • Identify decomposition anti-patterns (chatty, circular, lost-context, never-reassembled).
  • Match the CI/CD, Code Generation, and Multi-Agent Research scenarios to the correct decomposition shape.

What Task 1.6 Does NOT Expect of You

  • Implement a custom coordinator-planner loop in Python from scratch.
  • Tune specific parameters of the Task tool or Agent SDK spawning APIs at code level.
  • Choose between specific model weights or fine-tuned variants.
  • Handle streaming API details, vision inputs, or cloud-provider-specific deployments (Bedrock / Vertex / Azure).
  • Compute token budgets or rate-limit quotas for decomposed sub-tasks.

If your study drifts into those items, you have crossed into out-of-scope territory. Pull back to architecture-level judgment and move on.

Task Decomposition Strategies Frequently Asked Questions (FAQ)

What is the difference between prompt chaining and dynamic decomposition on the CCA-F exam?

Prompt chaining is static: the full sequence of sub-prompts is defined before execution and every run traverses the same steps. Dynamic (adaptive) decomposition is reshape-as-you-go: the coordinator plans the next one or two sub-tasks, executes them, observes results, and re-plans the remaining work based on what it learned. Prompt chaining fits predictable pipelines such as per-PR code review with a fixed checklist. Dynamic decomposition fits open-ended work such as expanding test coverage on a legacy codebase where the right plan cannot be known until the coordinator has surveyed the code.

When should an architect choose per-file decomposition plus a cross-file integration pass?

Choose this pattern whenever the work spans more than a handful of files and requires both within-file depth (security, style, maintainability per file) and cross-file synthesis (API contract consistency, shared utility duplication). A single-pass review of many files triggers attention dilution — the output becomes shallow and cross-file bugs are missed. Per-file fan-out restores focused attention per file; a dedicated integration pass restores cross-cutting synthesis. Two narrow passes beat one wide pass whenever N is roughly five files or larger, and earlier when files are large.

What is attention dilution and how does decomposition address it?

Attention dilution is the degradation in output quality that happens when a single Claude session is asked to attend to too many files, goals, or concerns at once. The reasoning budget gets split across every focus target, and each target receives only a fraction of Claude's actual capacity. Symptoms include shallow analysis, generic observations repeated across files, and missed cross-file patterns. Decomposition addresses it directly by splitting the overloaded prompt into focused sub-tasks, each with the full attention budget for its single objective, then adding an explicit integration pass if synthesis is required.

Do subagents spawned via the Task tool inherit the coordinator's conversation history?

No. Subagents spawned via the Task tool, or via the Agent SDK's subagent-spawning APIs, operate with isolated context. They see only what the coordinator explicitly passes in through the sub-task description. This is intentional: isolation prevents coordinator chatter from diluting subagent focus, and it keeps the sub-task's context budget clean. Architects who assume inheritance design systems that silently fail — the subagent follows wrong conventions, fabricates missing context, or asks unnecessary clarifying questions. Always treat the Task tool input as the complete context delivery for that sub-task.
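The practical consequence is that the coordinator must assemble a complete, self-contained input for each sub-task. A minimal sketch, assuming a hypothetical helper that builds the Task tool input string:

```python
def build_subtask_input(task: str, conventions: str, module_src: str) -> str:
    # Because subagent context is isolated, everything the subagent
    # needs must appear in this one input -- the coordinator's
    # earlier conversation is NOT visible to it.
    return (
        f"Project conventions (follow these exactly):\n{conventions}\n\n"
        f"Module source:\n{module_src}\n\n"
        f"Task:\n{task}"
    )
```

Treating this function as the sole channel into the subagent makes the failure mode from Template D impossible: conventions cannot be "assumed in" because nothing is inherited.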

How does plan mode relate to task decomposition?

Plan mode and task decomposition are related but distinct. Plan mode is a Claude Code feature that lets Claude propose its plan before touching tools, giving a human reviewer the chance to approve, reject, or revise. Task decomposition is the architectural design decision about how to slice a complex workflow into sub-tasks. Plan mode can be applied to a decomposed plan (useful) or to a monolithic plan (less interesting). Decomposition can proceed with or without plan mode depending on whether a human reviewer is part of the loop. The CCA-F exam frequently uses the two as distractors for each other — they are not substitutes.

Does dynamic decomposition always outperform static prompt chaining?

No. Dynamic decomposition excels on open-ended, discovery-heavy tasks where the plan cannot be known up front. It is actively wrong for CI/CD pipelines, high-volume structured extraction, and any workload where predictability, reproducibility, and audit trails matter more than flexibility. Choosing "adaptive" on every scenario is one of the most consistent wrong-answer patterns flagged in CCA-F pass reports. Read the scenario cues: predictability and repeatability select static; discovery and open-endedness select dynamic. Both are correct patterns for different jobs, and the exam rewards matching style to scenario.

What are the main anti-patterns to avoid in task decomposition?

Five anti-patterns recur in CCA-F scenarios. First, overly chatty sub-tasks that hand huge context back and forth because granularity is too fine — coarsen the cuts. Second, circular dependencies where A needs B and B needs A — re-cut the boundary or merge. Third, lost context at the boundary where the coordinator forgets to pass something critical — audit sub-task input contracts. Fourth, decomposition that fans out but never reassembles — always design the integration pass. Fifth, using one fixed decomposition pattern for every scenario regardless of whether the work is predictable or exploratory — pick static versus dynamic per scenario.
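The circular-dependency anti-pattern can be caught mechanically before execution. A minimal sketch: model the decomposition as a map from each sub-task to the sub-tasks whose output it needs, then run a standard depth-first cycle check (the function name and representation are illustrative, not part of any SDK).

```python
def has_cycle(deps: dict[str, list[str]]) -> bool:
    # deps maps each sub-task to the sub-tasks it needs output from.
    # A cycle (A needs B, B needs A) means the plan cannot be
    # scheduled and the boundary must be re-cut or merged.
    WHITE, GRAY, BLACK = 0, 1, 2
    color: dict[str, int] = {}

    def visit(task: str) -> bool:
        color[task] = GRAY
        for dep in deps.get(task, []):
            c = color.get(dep, WHITE)
            if c == GRAY:
                return True      # back edge: cycle found
            if c == WHITE and visit(dep):
                return True
        color[task] = BLACK
        return False

    return any(color.get(t, WHITE) == WHITE and visit(t) for t in deps)
```

Running this check during plan validation turns a silent deadlock into an immediate, explainable planning error.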

Further Reading

Related ExamHub topics: Multi-Agent Orchestration with Coordinator-Subagent Pattern, Multi-Step Workflows with Enforcement and Handoff, Agentic Loops for Autonomous Task Execution, Plan Mode vs Direct Execution.

Official sources