Task statement 2.3 of the Claude Certified Architect — Foundations (CCA-F) exam reads: "Distribute tools appropriately across agents and configure tool choice." It sits inside Domain 2 (Tool Design & MCP Integration, 18% of the scored exam) and, despite the modest domain weight, produces some of the densest trap-question yields in the community pass pool. The reason is straightforward: tool distribution and tool_choice sit at the intersection of agent architecture (Domain 1), prompt engineering (Domain 4), and reliability (Domain 5). A single wrong design decision here cascades into tool-selection errors, hallucinated function calls, structured-output drift, and runaway agentic loops — all of which are scored elsewhere on the exam. Getting Task 2.3 right is leverage.
This study note walks through the full surface a CCA-F candidate is expected to design at the architecture level: the principles that govern which tools each agent should own, the coordinator-versus-subagent split, the four values of the tool_choice parameter (auto, any, a forced tool object, none) and the situations that call for each, forced-tool-call patterns for schema enforcement, dynamic tool availability across workflow states, access-control mechanisms for keeping agents inside their lane, and the measurable performance impact of oversized tool catalogues. A Common Exam Traps section, Practice Anchors tied to the named exam scenarios, and an FAQ with five candidate questions close the note.
Tool Distribution Principles — Assign Only Relevant Tools to Each Agent
Tool distribution is the architectural decision of which tools each agent in your system is allowed to see. It is not a runtime concern; it is the schema you pass in the tools array on every Messages API call and the allowed_tools list you set in each subagent definition. The principle is almost comically simple in statement and surprisingly difficult in practice: give each agent only the tools it needs to complete its assigned task, and no more.
Why "Only Relevant Tools" Is a Production Requirement
Every tool an agent is allowed to see consumes three scarce resources:
- Context tokens — Each tool definition (name, description, input_schema) is injected into the system context on every request. A 40-tool catalogue adds several thousand tokens to every turn of every loop iteration.
- Selection-cognitive load — Claude must read the full catalogue, compare it against the user's goal, and pick. The larger the catalogue, the higher the probability of near-miss selection (picking search_customers when the correct call was search_accounts).
- Blast radius — Every tool an agent can call is a tool an agent can misuse. A research subagent that also has write access to the production database is a research subagent that can corrupt the production database.
Reducing the tool set shrinks all three pressures at once. This is why the official Anthropic tool-writing guidance treats aggressive scoping as a first-class design principle, not a performance micro-optimization.
Tool distribution is the architectural assignment of tools to agents in a multi-agent system. Each agent receives a deliberately scoped tools array (or allowed_tools list in a subagent definition) containing only the tools it requires for its task. The distribution decision is made at design time, not runtime, and it directly controls context cost, tool-selection accuracy, and blast-radius safety. CCA-F task 2.3 tests whether candidates can justify distribution decisions in scenario form, not merely recite the principle.
The Role-First Distribution Heuristic
Before enumerating tools, write a one-sentence description of the agent's role. "The billing-refund agent resolves customer refund requests." From that sentence, list the nouns and verbs that must be reachable — refund, charge, customer, account, policy_document — and map each to exactly one tool. Anything that is not called out by the role description should be omitted. Candidates who start from "here is our tool library, which agent can use which" end up with oversized catalogues; candidates who start from the role description end up with tight ones.
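The role-first heuristic can be sketched in code. Everything here is illustrative — the tool names, the billing-refund role, and the model id are placeholders, not a real catalogue:

```python
def tool(name: str) -> dict:
    """Minimal illustrative tool definition; real entries carry a
    meaningful description and a full input_schema (Task 2.1 concerns)."""
    return {
        "name": name,
        "description": f"{name} (description elided)",
        "input_schema": {"type": "object", "properties": {}},
    }

# The full platform library (hypothetical names).
TOOL_LIBRARY = {n: tool(n) for n in [
    "lookup_customer", "lookup_charge", "issue_refund",
    "read_policy_doc", "delete_user", "web_search",
]}

# Role sentence: "The billing-refund agent resolves customer refund
# requests." Each noun/verb maps to exactly one tool; nothing else
# from the library is included.
BILLING_REFUND_TOOLS = [
    TOOL_LIBRARY[n]
    for n in ("lookup_customer", "lookup_charge", "issue_refund", "read_policy_doc")
]

def build_request(user_message: str) -> dict:
    """Messages API payload for the billing-refund agent. The scoped
    tools array is decided at design time and passed on every call."""
    return {
        "model": "claude-sonnet-4-5",  # placeholder model id
        "max_tokens": 1024,
        "tools": BILLING_REFUND_TOOLS,
        "messages": [{"role": "user", "content": user_message}],
    }
```

Starting from the role sentence, delete_user and web_search never enter the catalogue; starting from TOOL_LIBRARY, they usually would.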
Why Fewer Tools per Agent — Reduces Ambiguity, Improves Selection Accuracy
The empirical observation behind "fewer tools per agent" is that Claude's tool-selection accuracy degrades as the tool catalogue grows, and the degradation is not linear — it accelerates once the catalogue exceeds the model's effective selection span. This is both intuitive (humans confronted with forty nearly-identical buttons also pick wrong more often) and directly observable in production traces.
Three Mechanisms That Drive Accuracy Loss
- Near-duplicate confusion — Tools with overlapping names or descriptions (search_users vs lookup_accounts vs find_customers) cause Claude to pick the wrong one when the user's phrasing is ambiguous.
- Description dilution — Long catalogues push each tool's description further down the system context; the "lost in the middle" effect degrades recall of tool boundaries.
- Over-eager tool calls — With many tools visible, Claude is more likely to call some tool even when the correct behaviour is to answer directly from context. This is the tool equivalent of "give a carpenter a bigger toolbox and they find more things to hammer."
The Fewer-Tools Rule Composes with Good Descriptions
Tool distribution is necessary but not sufficient. Even a tight catalogue fails if the descriptions are vague — this is the Domain 2.1 concern. The correct mental model is: Task 2.1 makes each tool's boundary clear; Task 2.3 makes sure only the right tools are visible to each agent in the first place. Both levers must be pulled.
Community pass reports consistently cite "oversizing agent tool counts" as a top-tier CCA-F mistake. When a scenario answer includes the option "give this agent access to all fifty platform tools so it can handle edge cases," that option is almost always wrong. The correct answer shrinks the catalogue to the minimum that covers the role.
Coordinator Tool Set vs Subagent Tool Set — Separation of Concerns
In a coordinator-subagent architecture, the coordinator and each subagent run with their own isolated tool catalogues. The distribution decision is therefore two-level: one catalogue for the coordinator and one catalogue per subagent role. Treating them as a single shared catalogue is a common design failure.
What the Coordinator Should Own
The coordinator's job is to decompose the task, dispatch subagents, and assemble the final answer. Its tool set should reflect this:
- Subagent-dispatch tools (the coordinator's primary interface — spawn research subagent, spawn extraction subagent).
- State-management tools for tracking the overall plan (task list read/write, session checkpointing).
- Final-answer-assembly tools that operate on aggregated subagent outputs.
The coordinator should almost never have direct access to the low-level tools its subagents use (file read, database write, external API calls). If the coordinator can do the subagent's job directly, the subagent is redundant and the architecture has collapsed.
What Each Subagent Should Own
Each subagent's catalogue is scoped to its single responsibility. A research subagent gets web search and summarization tools. An extraction subagent gets document read and structured-output tools. A verification subagent gets lookup and comparison tools. Subagents should not have spawn permissions for other subagents unless the architecture is explicitly hierarchical beyond two levels.
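The two-level split can be written down as a sketch — all tool names are illustrative placeholders for whatever dispatch, state, and low-level tools a real system defines:

```python
# Two-level distribution: one catalogue for the coordinator,
# one per subagent role. Note the coordinator owns dispatch,
# state, and assembly tools only; every low-level tool lives
# with exactly one subagent.
CATALOGUES = {
    "coordinator": [
        "spawn_research_subagent", "spawn_extraction_subagent",
        "read_task_list", "write_task_list", "assemble_final_answer",
    ],
    "research":     ["web_search", "summarize_page"],
    "extraction":   ["read_document", "emit_record"],
    "verification": ["lookup_record", "compare_records"],
}

def catalogue_for(agent: str) -> list[str]:
    """Return the tool names visible to one agent; anything absent
    is structurally invisible to that agent."""
    return CATALOGUES[agent]
```

If `"web_search"` ever appears in the coordinator's list, the architecture has collapsed in the sense described above: the coordinator can do the research subagent's job directly.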
Why the Split Matters on CCA-F
Exam scenarios regularly present a multi-agent research system and ask which agent should own which tool. A coordinator with direct file-read access in a scenario that otherwise splits work cleanly is a trap answer — the correct design pushes file-read into the research subagent. Similarly, a subagent with access to spawn other subagents is a trap answer unless the scenario explicitly describes a three-level hierarchy.
When diagramming a coordinator-subagent scenario on the exam, label each tool with its rightful owner before reading the answer options. Candidates who choose distribution based on the options rather than on first principles are easier to trick with near-correct distractors.
tool_choice Configuration — auto, any, and Forced (Specific Tool Name) Modes
tool_choice is a Messages API parameter that controls whether and which tool Claude is permitted to call on the next turn. The parameter has four valid shapes: auto, any, an explicit tool object (forced), and none. Misunderstanding the semantic difference between any two of them is one of the most repeatable traps in Domain 2.
The Four Shapes
- { "type": "auto" } — Claude decides whether to use a tool, and which one, based on the conversation. This is the default.
- { "type": "any" } — Claude must use some tool from the catalogue; it cannot respond with plain text this turn.
- { "type": "tool", "name": "<tool_name>" } — Claude must call the specific named tool on this turn.
- { "type": "none" } — Claude must not call any tool; it must respond with plain text.
tool_choice is the Messages API parameter that constrains Claude's tool-calling behaviour on the next response. Its four values — auto, any, a forced tool object { type: "tool", name: "..." }, and none — each express a different architectural intent. auto is the default and lets Claude decide; any forces a tool call without specifying which; the forced object removes Claude's choice entirely by naming the tool; none disables tool use for this turn. The parameter is set per API call, so it can vary across turns within the same agentic loop.
tool_choice Is Per-Call, Not Per-Agent
A frequent misconception on the exam is that tool_choice is a one-time agent configuration. It is not. tool_choice is set independently on every Messages API call, which means a single agentic loop can legitimately set tool_choice: any on the first turn to force an initial tool call, then switch to auto on subsequent turns to let Claude decide, then switch to a forced object on the final turn to emit a schema-compliant answer. Dynamic tool_choice across the loop is the design lever behind several task 2.3 and task 4.3 scenarios.
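That per-call variation can be sketched as a small selector function — emit_answer is a hypothetical final-emission tool, and the turn boundaries are illustrative:

```python
def tool_choice_for_turn(turn: int, final_turn: int) -> dict:
    """Vary tool_choice across one agentic loop:
    - turn 0: force some tool call (no premature text answer),
    - middle turns: let Claude decide,
    - final turn: force the named emission tool for a
      schema-compliant answer."""
    if turn == 0:
        return {"type": "any"}
    if turn >= final_turn:
        return {"type": "tool", "name": "emit_answer"}  # hypothetical tool
    return {"type": "auto"}
```

The application passes the returned value as the tool_choice parameter on each Messages API call; nothing about the agent itself is reconfigured.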
tool_choice: auto — Model Decides Whether and Which Tool to Use
tool_choice: auto is the default and the right answer for most open-ended agentic scenarios. Under auto, Claude reads the user's goal, compares it against the tool catalogue, and autonomously decides:
- Whether any tool call is required at all, or whether a direct text answer suffices.
- Which specific tool to call.
- Whether to emit multiple parallel tool calls in the same message.
When auto Is the Right Choice
auto is the right choice when the workflow is exploratory or multi-branch — the customer-support-resolution-agent working through an unpredictable ticket, the code-generation-with-claude-code agent investigating a bug across unknown files, the multi-agent-research-system coordinator deciding which subagent to dispatch. In each case, Claude's judgment about whether and which tool to call is a feature, not a bug.
When auto Is the Wrong Choice
auto is wrong when your architecture requires a tool call — for example, the final turn of a structured-extraction workflow where the only acceptable output shape is a specific tool's schema. Under auto, Claude may decide the answer is complete and emit a plain-text response that does not conform to any tool schema. This is why extraction pipelines switch to a forced tool_choice on the final turn.
auto and Parallel Tool Calls
Under auto, modern Claude models aggressively parallelize independent tool calls. A research agent looking up four unrelated facts will typically emit four tool_use blocks in one message. This is a default behaviour; it is not something tool_choice toggles.
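An application loop must therefore be prepared to execute several tool_use blocks from a single assistant message. A generic dispatch sketch — the block shapes follow the Messages API content-block format, while the handler names are illustrative:

```python
def run_tool_calls(assistant_content: list[dict], handlers: dict) -> list[dict]:
    """Execute every tool_use block in one assistant message and
    return the matching tool_result blocks for the next user turn.
    Non-tool blocks (e.g. text) are passed over."""
    results = []
    for block in assistant_content:
        if block.get("type") != "tool_use":
            continue
        output = handlers[block["name"]](**block["input"])
        results.append({
            "type": "tool_result",
            "tool_use_id": block["id"],  # must echo the block's id
            "content": str(output),
        })
    return results
```

A loop that assumes exactly one tool_use block per message silently drops the rest of a parallel batch, which surfaces as "the agent forgot half its lookups."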
tool_choice: any — Model Must Use Some Tool (Prevents Pure Text Response)
tool_choice: any says "Claude must call some tool, but I am not telling you which one." It is weaker than a forced tool object but stronger than auto. It is the correct choice in a narrow band of scenarios and a trap answer in a much wider set.
The Legitimate Use Case for any
any is appropriate when:
- The workflow must take a tool-mediated action this turn (for example, "always log this event via some logging tool") but the specific action depends on context.
- The agent must not respond with plain text yet because the business rule is "no direct-to-user response until an investigation tool has been invoked."
- You want to prevent Claude from short-circuiting a research phase with a confident-sounding but under-investigated text answer.
What any Does Not Guarantee
any does not guarantee which tool Claude picks. If the catalogue contains both search_knowledge_base and escalate_to_human, Claude may choose either, and your application must handle both outcomes. Answers that describe any as "forces Claude to use the right tool" or "forces Claude to stay on task" over-state the guarantee. any only removes the text-response option; it leaves tool selection inside Claude's judgement.
When any Is the Wrong Choice
any is the wrong answer on the exam when:
- The scenario requires a specific tool (use a forced tool object instead).
- The scenario merely wants good behaviour (use auto with better tool descriptions instead — any does not improve selection quality, it only removes the text option).
- The scenario is an open-ended investigation where refusing text responses would block reasonable outputs (use auto).
The highest-frequency tool_choice trap on the CCA-F community pool is treating any as a way to force a specific tool. any only forces some tool; it does not specify which one. If the scenario needs exact tool enforcement, the correct answer is { type: "tool", name: "<specific>" }, not any. Answers that equate any with "force the correct tool" are always wrong.
tool_choice: forced — Model Must Call a Specific Named Tool
The forced tool_choice value is an object of the shape { "type": "tool", "name": "<tool_name>" }. It removes Claude's selection judgment entirely for the current turn: the next response must contain exactly one tool_use block for the named tool. There is no negotiation, no fallback to text, no substitute tool.
What Forced tool_choice Is For
Forced tool_choice is the canonical mechanism for:
- Final-step schema enforcement — At the end of an extraction workflow, force the emit_structured_record tool to guarantee the output conforms to the tool's input_schema.
- Deterministic pipeline stages — In a fixed multi-step pipeline where step N must always invoke classify_intent, force that tool on step N.
- Breaking oscillation loops — When Claude alternates between two tools, forcing one of them terminates the oscillation.
What Forced tool_choice Is Not For
Forced tool_choice is not a way to make Claude think harder about a tool; it explicitly bypasses Claude's selection judgment. If the forced tool is wrong for the current state, Claude will still call it with whatever inputs it can construct, and those inputs may be nonsense. Forced tool_choice is a constraint on shape, not a constraint on correctness.
Interaction with Parallel Tool Calls
When tool_choice is a forced tool object, Claude emits exactly one tool_use block for the named tool. Parallel multi-tool emission is suppressed. This matters for scenario questions that compare latency of a forced-tool pipeline versus an auto pipeline — parallelism is lost under forced mode.
Interaction with Strict Mode
Forced tool_choice pairs naturally with strict: true on the tool definition. strict: true guarantees that the tool's input_schema is enforced exactly; forced tool_choice guarantees that this specific tool is what gets called. Together, the pair produce a schema-guaranteed structured output — the canonical pattern for Task 4.3 scenarios.
Forced tool_choice is the { type: "tool", name: "..." } form of the parameter. It removes Claude's tool-selection judgment entirely for one turn by mandating that the next response call exactly the named tool. It is used to enforce schema-compliant final outputs, to pin deterministic pipeline stages, and to break oscillation in agentic loops. It is distinct from any, which only forbids a text response without picking the tool. It bypasses — does not enhance — Claude's selection logic.
Forced Tool Choice Use Cases — Schema Enforcement, Structured Output via Tool
The primary CCA-F use case for forced tool_choice is structured-output enforcement via tool use. This is the canonical Task 4.3 pattern that Domain 2.3 is responsible for configuring.
The Pattern: Define a Tool Whose input_schema Is Your Output Schema
Instead of asking Claude to "return JSON matching this schema" in a prompt, define a tool whose input_schema is literally the output schema you want. Name the tool after the action it represents — emit_customer_record, submit_bug_report, record_extracted_entities. Set tool_choice to the forced-tool object naming this tool, and (ideally) set strict: true on the tool definition.
Because forced tool_choice makes Claude call exactly that tool, and strict: true guarantees that the tool inputs conform to the schema, the resulting tool-use block's input field is a schema-valid JSON object you can consume directly. There is no parsing ambiguity, no prose-wrapped JSON, no missing fields.
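A hedged sketch of the full payload. The tool name, its fields, and the model id are illustrative, and strict tool use may sit behind a beta flag depending on your API version — check current Anthropic documentation before relying on it:

```python
# Hypothetical extraction tool whose input_schema IS the output
# schema the downstream consumer needs.
EMIT_RECORD_TOOL = {
    "name": "emit_customer_record",
    "description": "Record one extracted customer as structured data.",
    "strict": True,  # API-level enforcement of the input schema
    "input_schema": {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "tier": {"type": "string", "enum": ["free", "pro", "enterprise"]},
        },
        "required": ["name", "tier"],
        "additionalProperties": False,  # strict mode wants a closed schema
    },
}

def extraction_request(document_text: str) -> dict:
    """Messages API payload for the forced-plus-strict pattern:
    tool_choice names the tool, strict guarantees the shape."""
    return {
        "model": "claude-sonnet-4-5",  # placeholder model id
        "max_tokens": 1024,
        "tools": [EMIT_RECORD_TOOL],
        "tool_choice": {"type": "tool", "name": "emit_customer_record"},
        "messages": [{"role": "user",
                      "content": f"Extract the customer: {document_text}"}],
    }
```

The tool_use block in the response carries an input field that is already a schema-valid object; the application consumes it directly.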
Why This Beats Prompt-Based JSON Instructions
Prompt-based "return JSON like this" instructions carry three recurring failure modes: missing or extra fields, trailing prose wrapped around the JSON, and soft schema drift on edge cases. Forced tool_choice plus strict: true eliminates all three by moving enforcement from prompt-instruction compliance (soft) to API-level schema validation (hard). Community pass reports consistently cite this as one of the "five mental models that matter most" on CCA-F.
When to Apply the Pattern
Apply forced tool_choice for structured output whenever:
- The downstream consumer requires strict schema compliance (pipeline parsing, data warehouse loads, contract APIs).
- You are running at high volume and cannot afford per-record validation-and-retry costs.
- The output shape is stable and known in advance.
Do not apply the pattern when the output shape varies per request (use auto and let Claude pick the appropriate tool from a small set) or when the output should include free-form explanation alongside the structured data (consider a two-turn pattern — forced tool for the record, auto for the explanation).
The combination of forced tool_choice and strict: true on the named tool is the CCA-F-canonical answer for "guarantee structured output compliance." Answers that propose "add stricter instructions to the system prompt" or "validate JSON after the fact and retry" are almost always inferior to the forced-tool pattern because they rely on prompt compliance rather than API-level enforcement. Programmatic enforcement beats prompt-based guidance.
Dynamic Tool Availability — Enabling/Disabling Tools Based on Workflow State
Tool availability does not have to be static across the life of an agentic loop. A well-designed loop frequently varies the visible tool set from turn to turn, so that only the tools appropriate to the current workflow state are exposed.
Why Dynamic Availability Exists
A single agent may progress through several distinct phases — plan, execute, verify, report. Different tools are appropriate to each phase. Exposing execution tools during the plan phase invites premature action; exposing report tools during execution invites Claude to short-circuit and emit a partial summary. Scoping the visible tools to the phase's intent keeps Claude inside the right conceptual lane.
Two Mechanisms for Varying Tool Availability
- Per-call tools array — The simplest mechanism. Your application passes a different tools array on each Messages API call based on the current workflow state. The smaller the state machine, the simpler this becomes.
- Per-call tool_choice — A softer mechanism. Leave the tools array constant but vary tool_choice — auto in the plan phase, a forced tool object for a named "execute" call in the execution phase, none in the final summary phase. Both mechanisms work in concert.
A Concrete Example — A Three-Phase Extraction Pipeline
- Turn 1 (plan): tools = [list_documents, read_document], tool_choice = auto. Claude lists and reads the relevant documents.
- Turn 2 (extract): tools = [read_document, emit_record], tool_choice = { type: "tool", name: "emit_record" }. Claude is forced into schema-compliant record emission.
- Turn 3 (close): tools = [], tool_choice = none. Claude writes a human-readable summary with no further tool calls possible.
This example illustrates that dynamic tool availability is not a micro-optimization; it is a structural technique for keeping Claude's behaviour aligned with the workflow's current intent.
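The three-phase pipeline can be expressed as a small state machine. Tool names are the illustrative ones from the example; one assumption to note is that when a phase exposes no tools, this sketch omits tools and tool_choice from the payload entirely rather than sending empty values:

```python
# Per-phase configuration for the three-phase extraction pipeline.
PHASES = {
    "plan":    {"tools": ["list_documents", "read_document"],
                "tool_choice": {"type": "auto"}},
    "extract": {"tools": ["read_document", "emit_record"],
                "tool_choice": {"type": "tool", "name": "emit_record"}},
    "close":   {"tools": [],  # nothing visible: no call is possible
                "tool_choice": {"type": "none"}},
}

def request_for_phase(phase: str, registry: dict, messages: list) -> dict:
    """Build the Messages API payload for the current workflow state.
    registry maps tool names to full tool definitions."""
    cfg = PHASES[phase]
    req = {"model": "claude-sonnet-4-5",  # placeholder model id
           "max_tokens": 1024,
           "messages": messages}
    if cfg["tools"]:
        req["tools"] = [registry[n] for n in cfg["tools"]]
        req["tool_choice"] = cfg["tool_choice"]
    return req
```

The calling loop advances the phase between turns; Claude never sees tools that belong to a phase it is not in.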
On CCA-F scenarios, the phrase "the agent must not take action until the investigation phase completes" telegraphs a dynamic-tool-availability answer. Remove execution tools from the catalogue (or set tool_choice: none) during investigation, then restore them afterwards. Answers that try to achieve the same effect with more-forceful system prompts are inferior because they rely on prompt compliance rather than API-level guarantees.
Tool Access Control — Preventing Agents from Accessing Out-of-Scope Tools
Tool distribution decisions double as access-control decisions. An agent that cannot see a tool cannot invoke that tool, regardless of what a clever prompt persuades it to do. This is why access control in an agent system is implemented at the tool-definition layer and not at the prompt layer.
The Three Layers of Tool Access Control
- Distribution-time scoping — The tools array on each Messages API call, or the allowed_tools list on each subagent definition, enumerates exactly which tools the agent can see. A tool not in the array is invisible to Claude.
- Server-side authorization — Your tool implementations still authorize every call. A delete_user tool that trusts its caller because "the agent is supposed to be scoped away from it" is a liability. Defense in depth.
- Hook-based interception — Agent SDK PreToolUse hooks can reject specific tool invocations at runtime (for example, reject any Bash invocation that references rm -rf). Hooks are a Task 1.5 concept but they close the access-control loop.
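The hook layer can be pictured with a standalone guard function. This is an illustrative shape only, not the Agent SDK's actual hook signature — consult the SDK documentation for the real interface:

```python
def pre_tool_use_guard(tool_name: str, tool_input: dict) -> tuple[bool, str]:
    """Veto specific invocations before execution. Returns
    (allowed, reason). Here: block any Bash command containing
    the destructive pattern from the example above."""
    if tool_name == "Bash" and "rm -rf" in tool_input.get("command", ""):
        return (False, "blocked: destructive command")
    return (True, "ok")
```

The point of the layer is that the veto runs in your code, before the tool executes, regardless of how persuasively the model was prompted.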
Why Prompt-Based Access Control Fails
"Do not call the delete_user tool unless the user is an admin" is a prompt-based constraint. Claude will usually obey it, but "usually" is not an access-control guarantee. The same constraint expressed as distribution (the non-admin agent simply does not have delete_user in its tools array) is a hard boundary. CCA-F consistently prefers the hard-boundary answer.
Subagent allowed_tools as an Access-Control Surface
Custom subagent definitions in Claude Code expose an allowed_tools list. This is the structural way to scope a subagent's capabilities. A "research" subagent defined with allowed_tools: [Read, Grep, Glob] physically cannot call Write, Edit, or Bash, even if its behaviour prompt accidentally invites them. This is the Task 2.3 lever that feeds Task 1.3 (subagent invocation).
Three tool access-control mechanisms, one sentence each:
- tools array / allowed_tools — Distribution-time scoping; invisible tools cannot be called.
- Server-side authorization in the tool implementation — Runtime enforcement; defense in depth.
- PreToolUse hooks — Programmatic veto of specific invocations before execution.
Rule of thumb: prefer distribution-time scoping for structural constraints (agent roles, multi-tenant isolation), server-side authorization for data-level constraints, and hooks for dynamic policy checks.
Tool Set Size Impact on Performance — Cognitive Load of Large Tool Catalogues
There is no official published threshold at which tool-selection accuracy collapses, and the exam does not test exact numbers. What the exam does test is the direction of the effect and the mitigation pattern.
The Direction of the Effect
As the number of tools in the catalogue grows, three measurable things get worse at the same time:
- Selection accuracy — The probability that Claude picks the correct tool for a given user goal declines.
- Input-schema adherence — With more tools in the context, tool-input field names and types are more often cross-contaminated between similar tools.
- Over-tool-call frequency — Claude is more likely to call a tool when a direct answer would have sufficed.
These effects interact. A 40-tool catalogue with several near-duplicates and vague descriptions produces qualitatively worse agent behaviour than a 10-tool catalogue with crisp descriptions — and fixing the tool descriptions in the 40-tool catalogue recovers some, but not all, of the lost accuracy. The fundamental constraint is catalogue size itself.
Three Mitigations in Order of Preference
- Consolidate overlapping tools — If search_users and lookup_accounts are both in the catalogue, collapse them into one search_accounts tool with a clearer boundary.
- Split the agent — If a single agent needs many tools because it plays several roles, split it into a coordinator plus per-role subagents and distribute tools accordingly.
- Use dynamic tool availability — If the same agent plays the same role but passes through phases, vary the visible tools per phase.
When You Cannot Shrink the Catalogue
Some legitimate use cases require large catalogues — IDE-style code assistants, for example, expose dozens of built-in tools. In these cases, the mitigation shifts to tool-search patterns: the MCP connector supports tool-search that defers tool definition loading until Claude queries for them. This keeps the in-context catalogue small while retaining access to a large physical library. Tool search is a Task 2.4 topic but its motivation lives here.
On CCA-F, any answer that solves "the agent keeps picking the wrong tool" by "add more tools to cover more cases" is wrong. The correct direction of the fix is fewer, better-scoped tools — either by consolidation, by splitting into subagents, or by dynamic availability. More tools never makes selection accuracy better.
Interaction with Strict Mode — The Forced-Plus-Strict Pattern
Strict mode (strict: true on a tool definition) is an orthogonal mechanism that guarantees the tool's input_schema is enforced exactly at the API layer. It composes with tool_choice in the most important CCA-F structured-output pattern.
What Strict Mode Does
With strict: true:
- The tool's input_schema must be closed (no additional properties beyond those declared).
- Type and enum constraints are enforced by the API; Claude cannot emit invalid tool inputs.
- Fields marked required must be present.
The Forced-Plus-Strict Pattern
The combination tool_choice: { type: "tool", name: "emit_record" } plus strict: true on emit_record produces three guarantees:
- Claude will call emit_record, not some other tool.
- Claude's input for that call will conform to the schema exactly.
- Your application can consume the input directly without validation overhead.
This is the architectural answer to Task 4.3 ("Enforce structured output using tool use and JSON schemas") and it lives inside Task 2.3 because the forced tool_choice is the selection-level half of the pattern. Candidates who learn these two pieces separately miss the exam-relevant combination.
When Strict Mode Is Not Enough
Strict mode enforces shape, not semantics. A strict: true tool with a discount_percent: number field will reject a string input but will happily accept -500 as a number. Semantic validation — discount must be between 0 and 100 — still belongs in your tool's server-side implementation. Claude can be trusted to produce well-shaped inputs; it cannot be trusted to produce always-correct inputs.
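The division of labour looks like this inside a tool handler — discount_percent is the example field from above, and the 0-to-100 rule is the semantic half that a JSON schema type constraint alone does not express:

```python
def apply_discount(discount_percent: float) -> dict:
    """Server-side handler for a hypothetical discount tool.
    Shape (a number) is already guaranteed upstream by strict mode;
    semantics (0-100) must be enforced here."""
    if not 0 <= discount_percent <= 100:
        # Returned to Claude as an error tool_result, not raised.
        return {"is_error": True,
                "content": "discount_percent must be between 0 and 100"}
    return {"is_error": False,
            "content": f"applied {discount_percent}% discount"}
```

A well-shaped but semantically wrong input like -500 passes the API layer and is caught only by this check.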
tool_choice and None — Suppressing Tool Calls Entirely
tool_choice: none forbids Claude from calling any tool on the next turn. The response must be plain text. This value exists to cleanly disable tool use without having to modify the tools array.
When to Use none
- Final-answer summary phase — After investigation and structured emission, the final human-readable summary should not trigger any more tool calls.
- Safety-sensitive turns — When an agent is producing a response a human is about to read and act on, disabling tools prevents any last-second action.
- Explanation turns — "Explain what you did" responses should not take further action.
When Not to Use none
- To prevent tool use in the "plan" phase — use dynamic tool availability (shrink the tools array) instead, because none can be overridden per-call whereas distribution is a structural boundary.
- To force a plain-text response when any tool call would be inappropriate — this is exactly what none is for, but double-check that auto would not do the same job with less ceremony (Claude will already emit text when tools are not useful).
Plain-English Explanation
Tool distribution and tool_choice are two sides of the same design coin: one determines which tools each agent can see, and the other determines whether and which tool the agent is permitted to call on a given turn. Three analogies from different domains cover the full sweep.
Analogy 1: The Hospital Department — Distribution as Role Scoping
Picture a hospital emergency department. The triage nurse has access to vitals monitors, the patient record system, and an internal "dispatch to specialist" form. The triage nurse does not have access to the surgical suite's scalpels, the radiology department's CT scanner, or the pharmacy's controlled-substance dispenser — not because the nurse is untrusted, but because those tools belong to other roles and including them in the triage workstation would cause confusion and expose surfaces that the nurse does not need to do their job. The cardiologist, the radiologist, and the pharmacist each have their own tool set, scoped to their role. This is exactly tool distribution in a coordinator-subagent architecture. Each agent (role) sees only the tools relevant to its responsibilities. The ED physician (coordinator) dispatches to specialists (subagents) but does not directly wield the specialist's tools. A hospital where every clinician had every tool would be catastrophic; an agent system where every agent has every tool is merely inefficient and error-prone, but the same principle applies.
Analogy 2: The Vending Machine Selector — tool_choice Modes
A vending machine has three different "purchase enforcement" configurations that map cleanly to tool_choice. In auto mode, you are free to walk up, browse the catalogue, and decide whether to buy anything at all — you might just be looking. In any mode, someone has given you a gift card that must be spent before you leave; you have to buy something from the machine, but you still pick which snack. In the forced mode, someone has reached over, typed the button code for you, and said "you are getting the peanut M&Ms — no negotiation." none is the fourth configuration: the machine is out of service and no selection of any kind is permitted. Each mode exists for a reason. auto is right when the purchase is optional; any is right when something must be bought but the choice is yours; forced is right when the specific item is non-negotiable; none is right when you should walk past the machine entirely. Confusing "any" with "forced" is like thinking a gift card locks you to one specific snack — it does not; it only forces a purchase.
Analogy 3: The Construction Site Permit — Access Control at the Permit Desk
On a regulated construction site, each worker carries a permit badge that enumerates exactly which equipment they are certified to operate. The framing crew has permits for saws and nailers. The electrical crew has permits for conduit and breakers. The crane operator has a permit for the crane and nothing else. When a worker scans into a piece of equipment, the reader checks the permit against the machine. A worker with a mismatched permit physically cannot start the equipment — the interlock refuses. This is tool access control expressed as permit-at-the-gate. The badge (the allowed_tools list) decides what a worker (an agent) can touch. "Please do not operate the crane unless you are certified" painted on the crane is a prompt-based constraint; permit-enforcement at the ignition is a distribution-based constraint. The exam consistently prefers the second because the first relies on compliance and the second is structurally incapable of violation. An agent cannot misuse a tool it does not have a permit for.
Which Analogy Fits Which Exam Question
- Questions about which tool each agent should own → hospital-department analogy.
- Questions about choosing auto, any, forced, or none → vending-machine analogy.
- Questions about preventing out-of-scope tool use → construction-permit analogy.
Common Exam Traps
CCA-F Task 2.3 exhibits two named traps in the outline plus several high-frequency implicit traps documented in community pass reports.
Trap 1: Treating any as "Force the Specific Tool"
The outline names this explicitly: tool_choice: any does not guarantee a specific tool. It only forbids a text response. If two or more tools are visible and the business logic requires exactly one of them, any is insufficient — the correct value is the forced tool object { type: "tool", name: "<specific>" }. Distractor answers regularly phrase any as "forces the agent to stay on task" or "forces the right tool call." Both framings are wrong. any is weaker than forced and must not be substituted for it.
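The trap's decision rule can be written down as a small helper: pick the weakest tool_choice that still satisfies the business constraint, and never let any stand in for the forced form. The function and the `issue_refund` tool name are illustrative:

```python
def choose_tool_choice(tool_call_required=False, required_tool=None):
    """Return the weakest tool_choice that satisfies the constraint.

    'any' only forbids a text response; it never pins a specific tool.
    Only the forced form {"type": "tool", "name": ...} does that.
    """
    if required_tool is not None:
        return {"type": "tool", "name": required_tool}  # forced: exactly this tool
    if tool_call_required:
        return {"type": "any"}                          # some tool; Claude still picks
    return {"type": "auto"}                             # tool use is optional
```

If the scenario says "exactly this tool must be called," the only correct return value is the forced object.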
Trap 2: Treating Forced tool_choice as "Claude Thinks Harder About This Tool"
Again from the outline: forced tool_choice bypasses Claude's selection judgment entirely. It does not improve Claude's reasoning about whether the tool is correct for the current state; it simply mandates the call and trusts your architecture to have set the state correctly. If the forced tool is wrong for the turn, Claude will still emit a best-effort invocation with whatever inputs it can construct. Answers that describe forced mode as "helps Claude choose the right tool" misstate the mechanism.
Trap 3: Oversized Tool Catalogues as the "Safe" Choice
A recurring distractor says "give the agent access to all platform tools so it can handle edge cases." Community pass reports consistently identify this as wrong. Oversized catalogues degrade selection accuracy, inflate context cost, and widen blast radius. The correct direction of the fix is always tighter scoping — consolidation, subagent split, or dynamic availability.
Trap 4: Prompt-Based Access Control as a Substitute for Distribution
Another recurring distractor proposes "add a system prompt instructing the agent not to call delete_user." This is a prompt-based constraint that relies on compliance. The correct answer removes delete_user from the agent's tools array entirely. Distribution is a hard boundary; prompts are a soft one. The exam prefers hard boundaries for access control.
Trap 5: Using any When auto Would Do
Candidates sometimes reach for any as a defensive default — "to make sure the agent uses its tools." In open-ended workflows, auto is almost always the right answer. Claude will use tools when tools are useful and will respond with text when text is appropriate. Forcing a tool call via any in scenarios where no tool is needed produces low-quality tool invocations with poor inputs. any is a narrow-purpose setting, not a safety default.
Practice Anchors
Tool distribution and tool_choice concepts appear in every CCA-F scenario cluster but carry the highest weight in two of them.
Developer-Productivity-With-Claude Scenario
A developer uses an SDK-driven agent to perform autonomous tasks across a large codebase. Different roles require different tool sets: an investigation agent needs Read, Grep, and Glob; a refactor agent needs Edit and Write; a test-authoring agent needs Write, Read, and Bash. Expect scenario questions that test distribution decisions — which agent in a coordinator-subagent split should own Bash, which should own Edit, which should own only Read. Expect tool_choice questions on pipeline stages: the investigation stage uses auto, the final-summary stage uses none, and any stage that must emit a specific structured artefact (for example, a refactor plan record) uses forced tool choice plus strict: true.
Multi-Agent-Research-System Scenario
In the multi-agent-research-system scenario, a coordinator dispatches research subagents, collects their findings, and assembles the final report. The tool-distribution decisions are the architectural spine: the coordinator owns subagent-dispatch tools and state management; each research subagent owns web-search and summarization tools; neither should cross-contaminate. Expect questions that test the coordinator-versus-subagent tool-set split, the decision to use forced tool_choice on the final report-emission step to enforce a schema, and the decision to use dynamic tool availability to prevent subagents from spawning further subagents. Customer-support-resolution-agent scenarios also routinely test distribution (does the refund agent own the issue_refund tool or only propose it?) and tool_choice (force escalate_to_human when confidence is low).
FAQ — Tool Distribution and tool_choice Top 5 Questions
What is the difference between tool_choice: any and a forced tool object?
tool_choice: any says "Claude must call some tool from the catalogue, but I am not specifying which." Claude still exercises selection judgment across all visible tools. The forced tool object { type: "tool", name: "<specific>" } says "Claude must call exactly this specific tool." There is no selection judgment under the forced form. The distinction matters because any is appropriate when a tool call is required but any of several would be acceptable, whereas forced is appropriate when schema enforcement or deterministic pipeline stages require exactly one tool. Treating them as interchangeable is one of the highest-frequency traps on CCA-F Task 2.3 — any never guarantees a specific tool.
How do I enforce a structured JSON output from Claude?
The CCA-F-canonical pattern combines three mechanisms: (1) define a tool whose input_schema matches the output schema you want, (2) set strict: true on that tool definition, and (3) on the final turn of the pipeline, set tool_choice to a forced tool object naming that tool. Together these produce three guarantees: Claude calls the right tool, the tool's inputs conform to the schema exactly, and your application can consume the output directly without parsing or validation overhead. Prompt-based instructions like "return JSON matching this schema" are inferior because they rely on prompt compliance rather than API-level enforcement — programmatic constraints beat prompt guidance.
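The three mechanisms compose into a single request body. The sketch below shows one plausible shape, assuming the `strict` field sits on the tool definition as described in the strict-tool-use documentation linked under Further Reading; the `emit_report` tool, its schema, and the model id are illustrative placeholders:

```python
# (1) a tool whose input_schema is the output schema you want,
# (2) strict: True on that tool definition,
# (3) a forced tool_choice naming that tool on the final turn.

report_tool = {
    "name": "emit_report",
    "description": "Record the final research report as structured data.",
    "strict": True,  # inputs must conform to input_schema exactly
    "input_schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "findings": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title", "findings"],
        "additionalProperties": False,
    },
}

request_kwargs = {
    "model": "claude-sonnet-4-5",  # placeholder model id
    "max_tokens": 1024,
    "tools": [report_tool],
    "tool_choice": {"type": "tool", "name": "emit_report"},  # (3): forced
}
```

With this shape, the tool_use block's input is the schema-conformant JSON your application consumes directly.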
How many tools should I give a single agent?
The exam does not specify a numeric threshold, but the direction is unambiguous: fewer is better, up to the point of covering the agent's role. Start from the agent's one-sentence role description, enumerate the nouns and verbs that must be reachable, and assign exactly one tool per reachable concept. Catalogues that exceed the role's needs degrade selection accuracy, inflate context cost, and widen blast radius. If an agent requires a large tool set, consider splitting it into a coordinator plus subagents, consolidating overlapping tools, or using dynamic tool availability to scope visible tools to the current workflow phase.
Can I vary tool_choice across turns in the same agentic loop?
Yes, and this is a first-class design pattern. tool_choice is set per Messages API call, so a single loop can legitimately use auto during exploratory investigation turns, switch to a forced tool object to enforce a schema-compliant emission turn, and switch to none for a final natural-language summary turn. The pattern is used in structured-extraction pipelines, multi-phase research agents, and any workflow where different turns carry different behavioural constraints. Dynamic tool_choice is orthogonal to dynamic tool availability (varying the tools array per call); both can be used together for strong state-dependent constraints.
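One way to organize this is a per-phase request fragment in which both the tools array (dynamic availability) and tool_choice vary together. Phase names and tool definitions below are illustrative, not exam-mandated:

```python
# Hypothetical minimal tool definitions for a three-phase loop.
SEARCH_TOOL = {"name": "web_search", "description": "Search the web.",
               "input_schema": {"type": "object",
                                "properties": {"query": {"type": "string"}},
                                "required": ["query"]}}
EMIT_TOOL = {"name": "emit_findings", "description": "Record findings as structured data.",
             "input_schema": {"type": "object",
                              "properties": {"findings": {"type": "array",
                                                          "items": {"type": "string"}}},
                              "required": ["findings"]}}


def turn_config(phase):
    """Per-turn request fragment: tools array and tool_choice both vary by phase."""
    if phase == "investigate":
        return {"tools": [SEARCH_TOOL], "tool_choice": {"type": "auto"}}
    if phase == "emit":
        return {"tools": [EMIT_TOOL],
                "tool_choice": {"type": "tool", "name": "emit_findings"}}
    if phase == "summarize":
        return {"tools": [], "tool_choice": {"type": "none"}}  # text-only closing turn
    raise ValueError(f"unknown phase: {phase}")
```

Merging `turn_config(phase)` into each call's kwargs gives every turn exactly the visibility and the constraint its phase demands.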
How do I prevent an agent from calling a tool it should not have access to?
Remove the tool from the agent's tools array (or from its subagent allowed_tools list). Invisibility at the distribution layer is a hard, structural boundary — the tool cannot be called because Claude cannot see it. Prompt-based instructions such as "do not call the delete_user tool unless the user is an admin" are a soft boundary that relies on compliance and fails occasionally by design. For defense in depth, combine distribution-time scoping with server-side authorization inside the tool implementation and (optionally) PreToolUse hooks that can veto specific invocations at runtime. CCA-F consistently prefers the hard-boundary answer over prompt-based access control.
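The server-side layer of that defense-in-depth stack might look like the sketch below: the tool implementation re-checks authorization itself and returns the refusal as an error payload rather than crashing. The role names, permission table, and return shape are hypothetical:

```python
# Defense in depth, layer two: even if distribution-time scoping is
# misconfigured and a delete_user call arrives, the server refuses.
PERMISSIONS = {"admin": {"delete_user"}, "support_agent": set()}


def handle_delete_user(caller_role, user_id):
    """delete_user implementation with its own authorization gate.

    The refusal is returned as an error payload so it can be surfaced to
    the model as an error tool_result -- a clean failure, not a crash.
    """
    if "delete_user" not in PERMISSIONS.get(caller_role, set()):
        return {"is_error": True,
                "content": f"role '{caller_role}' is not authorized to delete users"}
    # ... real deletion would happen here ...
    return {"is_error": False, "content": f"user {user_id} deleted"}
```

Distribution keeps the tool invisible; this gate keeps it harmless even when visibility goes wrong.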
Further Reading
- Define tools — best practices for tool descriptions and tool_choice: https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/implement-tool-use
- Tool use with Claude — overview and agentic loop: https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/overview
- Handle tool calls — tool_result format and error responses: https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/handle-tool-calls
- Create custom subagents — Claude Code Docs: https://docs.anthropic.com/en/docs/claude-code/sub-agents
- Strict tool use — schema-guaranteed output: https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/strict-tool-use
- Writing tools for agents — Anthropic Engineering Blog: https://www.anthropic.com/engineering/writing-tools-for-agents
Related ExamHub topics: Tool Interface Design — Descriptions and Boundaries, Multi-Agent Orchestration with Coordinator-Subagent Patterns, Subagent Invocation, Context Passing, and Spawning, MCP Server Integration into Claude Code.