IAM and Bedrock security is the set of AWS Identity and Access Management constructs, Amazon Bedrock resource policies, network controls, and encryption settings that decide who can invoke a foundation model, which foundation model they can invoke, where the inference traffic travels, and how the data flowing into model customization jobs and knowledge bases stays protected. On Amazon Bedrock, identity and access management is never optional — every InvokeModel, InvokeAgent, GetFoundationModel, and CreateKnowledgeBase API call is routed through AWS IAM, optionally filtered by a VPC Interface Endpoint policy, and optionally encrypted at rest with an AWS KMS customer-managed key. If any of those four layers is misconfigured, a generative AI workload can leak prompt data, over-spend on expensive foundation models, or expose proprietary fine-tuning data to the wrong principal.
On the AIF-C01 exam, Domain 5 (Security, Compliance, and Governance for AI Solutions) expects you to recognize the exact IAM actions Amazon Bedrock publishes, know when to attach a Bedrock resource-based policy vs an identity-based IAM policy, understand how a Bedrock VPC Interface Endpoint keeps inference traffic off the public internet, and be able to explain why AWS KMS customer-managed keys are preferred for fine-tuning datasets and knowledge base embeddings. This guide walks the full IAM and Bedrock security stack in plain language, layers in three distinct analogies, and closes with a least-privilege playbook and FAQ tuned for AIF-C01.
What is IAM and Bedrock Security?
IAM and Bedrock security is the customer-owned layer of the AWS Shared Responsibility Model that governs access to Amazon Bedrock foundation models, agents, knowledge bases, and customization jobs. AWS operates and secures the underlying Amazon Bedrock service plane — the inference fleet, the model weights AWS hosts for you, the physical data centers — but you are responsible for deciding which IAM principals can call bedrock:InvokeModel, which foundation model ARNs they can reach, whether traffic rides a VPC Interface Endpoint, and whether data at rest is wrapped in an AWS KMS customer-managed key.
Every Amazon Bedrock API request follows the same three checkpoints. First, AWS IAM evaluates the identity-based policy attached to the calling principal and any Bedrock resource-based policies attached to the target model, agent, or knowledge base. Second, if the request arrives through a VPC Interface Endpoint, the endpoint policy applies an additional allow/deny filter. Third, when the operation reads or writes persistent data — training files, custom model artifacts, knowledge base vector embeddings, provisioned throughput metadata — AWS KMS enforces the chosen encryption key policy. A failure at any one of the three checkpoints denies the request. This is exactly why IAM and Bedrock security is heavily weighted inside the AIF-C01 Security, Compliance, and Governance domain.
- IAM and Bedrock security: the end-to-end customer-owned control plane for authenticating, authorizing, encrypting, and auditing every Amazon Bedrock API call.
- bedrock:InvokeModel: the IAM action that permits synchronous inference against a foundation model.
- bedrock:InvokeAgent: the IAM action that permits invoking an Amazon Bedrock Agent, which can orchestrate multi-step tool use.
- Bedrock VPC Interface Endpoint: an AWS PrivateLink endpoint that keeps Amazon Bedrock API traffic on the AWS private network instead of traversing the public internet.
- AWS KMS customer-managed key (CMK): an encryption key you own, rotate, and gate with a key policy — preferred over AWS-managed keys for Bedrock model customization data and knowledge base embeddings.
- Reference: https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html
Why IAM and Bedrock security matters for AIF-C01
The AIF-C01 Security, Compliance, and Governance for AI Solutions domain is weighted at 14% of the exam, and IAM and Bedrock security questions dominate the domain because they cut across every other generative AI topic. A prompt-engineering question turns into a security question the moment an IAM policy blocks a specific foundation model. A knowledge base question turns into a security question the moment you encrypt the embeddings with an AWS KMS customer-managed key. Mastering IAM and Bedrock security therefore pays dividends across multiple domains, not just Domain 5.
Plain-Language Explanation: IAM and Bedrock Security
Abstract IAM policies are easier to remember when they map to something physical. Here are three distinct analogies that together cover every major concept in IAM and Bedrock security.
Analogy 1: The Restaurant Kitchen with a Locked Walk-In
Picture a high-end restaurant kitchen where Amazon Bedrock is the kitchen pass — the counter where cooked dishes leave the kitchen. Each foundation model (Anthropic Claude, Meta Llama, Amazon Titan, Cohere Command) is a specialist chef standing at that pass. An IAM policy is the printed order ticket that tells the head waiter which chefs they may order from and which ingredients they may request. If the ticket says "you may order from chef Claude only, and only dishes under $30", the waiter literally cannot walk over to chef Llama, no matter how politely they ask.
The walk-in refrigerator holds the raw ingredients — this is the Amazon S3 bucket of training data, the knowledge base vector store, and the custom model artifacts. It is padlocked with an AWS KMS customer-managed key, and only staff whose badges appear on the key policy can open the door. The back alley delivery gate is the Bedrock VPC Interface Endpoint — deliveries of raw ingredients arrive through that private gate, never through the front dining room, so competitors cannot photograph the shipment. AWS CloudTrail is the kitchen's CCTV system, recording every time a waiter pulled a ticket, every time the walk-in opened, and every time a specialist chef was booked.
Analogy 2: The Library Rare-Books Room
Amazon Bedrock foundation models are like rare books locked behind the librarian's desk. An IAM policy with bedrock:InvokeModel scoped to a specific model ARN is the call slip that names a single title — "Claude 3 Sonnet, 3 February 2024 edition" — and nothing else. bedrock:GetFoundationModel is merely the card-catalog lookup (you can read the metadata, but you still cannot open the book). bedrock:CreateKnowledgeBase is the librarian's permission to compile a new reference index from a set of source documents — a very different privilege from simply reading a book, which is why it lives on a separate IAM action.
A Bedrock resource-based policy is the handwritten note taped to the book itself: "Loan permitted only to Research Team, only through the internal mail chute". A VPC Interface Endpoint is that internal mail chute — it is the library's private tube system that bypasses the public lobby. The AWS KMS customer-managed key is the vault combination used for one-of-a-kind manuscripts (fine-tuning datasets, knowledge base embeddings); you — not the library — hold the combination, and you can re-key the vault at any time to instantly cut off access.
Analogy 3: The Corporate Mailroom and Courier Service
Treat Amazon Bedrock as a corporate courier service and each IAM principal as an employee who wants to send packages. An identity-based IAM policy is the sender authorization form — it lists which courier services the employee may book, to which destinations, on which accounts. A Bedrock resource-based policy is the recipient-side acceptance rule — the receiving department can refuse deliveries that don't arrive from a pre-approved sender account. Together they form a two-way handshake that is essential for cross-account model access patterns.
The VPC Interface Endpoint is the internal inter-office mail tube — packages ride the private pneumatic system and never leave the building. AWS KMS is the tamper-evident courier pouch — only someone with the matching key can open the pouch at the destination. AWS CloudTrail is the mailroom logbook — every InvokeModel, InvokeAgent, and GetFoundationModel call is stamped with caller, timestamp, source IP, and request ID, and the logbook itself is immutable.
When an AIF-C01 question mentions an IAM policy scoped to a model ARN, mentally picture the library call slip that names a single title. When a question mentions a VPC Interface Endpoint for Bedrock, picture the back-alley delivery gate or the internal mail tube. The physical image defends against the most common distractor pattern on AIF-C01: answers that conflate InvokeModel with GetFoundationModel, or conflate a VPC Interface Endpoint with a public Bedrock API call. Reference: https://docs.aws.amazon.com/bedrock/latest/userguide/security-iam.html
The Amazon Bedrock IAM Action Surface
Amazon Bedrock publishes dozens of IAM actions, but a small subset shows up again and again on AIF-C01. Knowing which action each API call maps to — and which resource ARN format each action accepts — is the single biggest lever on your IAM and Bedrock security score.
Inference actions
- bedrock:InvokeModel — synchronous inference against a foundation model. Resource ARN format: arn:aws:bedrock:<region>::foundation-model/<model-id> for AWS-provided models, or arn:aws:bedrock:<region>:<account>:custom-model/<base-model>/<custom-model-id> for your fine-tuned models. This is the single most-tested IAM action in IAM and Bedrock security questions.
- bedrock:InvokeModelWithResponseStream — the streaming variant; granting InvokeModel does not automatically grant the streaming version. Least-privilege policies should list both when streaming is required.
- bedrock:InvokeAgent — invoke an Amazon Bedrock Agent, which can call action groups and knowledge bases on your behalf. Resource ARN format: arn:aws:bedrock:<region>:<account>:agent-alias/<agent-id>/<alias-id>.
- bedrock:Retrieve and bedrock:RetrieveAndGenerate — knowledge base retrieval actions used by Retrieval-Augmented Generation (RAG) patterns.
Read-only and discovery actions
- bedrock:GetFoundationModel — returns metadata about a specific foundation model (context window, modalities, licensing). No inference is performed.
- bedrock:ListFoundationModels — lists models the calling account has access to in the current Region.
- bedrock:GetModelInvocationLoggingConfiguration — reads the account-level invocation logging configuration.
Customization and knowledge base actions
- bedrock:CreateModelCustomizationJob — starts a fine-tuning or continued-pretraining job. This action needs access to the training data in Amazon S3 plus an IAM service role Bedrock can assume.
- bedrock:CreateKnowledgeBase — creates a new knowledge base and the associated vector store configuration.
- bedrock:AssociateAgentKnowledgeBase — attaches a knowledge base to an agent.
- bedrock:CreateProvisionedModelThroughput — provisions dedicated throughput for a base or custom model.
An IAM policy that grants bedrock:InvokeModel on Resource: "*" lets a principal invoke every foundation model in the account's current Region — including premium models that cost orders of magnitude more per 1,000 tokens. Always scope the Resource to an explicit foundation-model ARN list so the policy both enforces approved models and caps runaway inference spend. Reference: https://docs.aws.amazon.com/bedrock/latest/userguide/api-permissions-reference.html
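The scoping rule above can be automated as a small lint pass over policy documents before they are deployed. This is an illustrative sketch in Python, not an AWS-provided tool; the function name and sample policies are invented for the example.

```python
# Illustrative lint: flag Allow statements that pair a Bedrock invocation
# action with an unscoped Resource, per the guidance above.

def unscoped_invoke_statements(policy: dict) -> list:
    """Return the Sids of Allow statements granting invocation on Resource '*'."""
    findings = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        invokes = any(a in ("bedrock:InvokeModel", "bedrock:*", "*") for a in actions)
        if stmt.get("Effect") == "Allow" and invokes and "*" in resources:
            findings.append(stmt.get("Sid", "<no-sid>"))
    return findings

risky = {"Version": "2012-10-17", "Statement": [
    {"Sid": "TooBroad", "Effect": "Allow",
     "Action": "bedrock:InvokeModel", "Resource": "*"}]}
scoped = {"Version": "2012-10-17", "Statement": [
    {"Sid": "ClaudeOnly", "Effect": "Allow", "Action": "bedrock:InvokeModel",
     "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0"}]}
```

Run something like this in CI against every policy file so a wildcard invocation grant never reaches production unnoticed.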
Identity-Based IAM Policies Scoped to Bedrock Actions
The default pattern for IAM and Bedrock security is an identity-based IAM policy attached to the principal that needs to call Amazon Bedrock — typically an IAM role assumed by an AWS Lambda function, an Amazon ECS task, an Amazon EKS pod via IRSA, or an Amazon EC2 instance profile.
A minimal InvokeModel policy
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "InvokeClaudeSonnetOnly",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0"
    }
  ]
}
```
This policy grants access to exactly one foundation model, in exactly one Region, for the attached principal. It intentionally omits bedrock:*, Resource: "*", and GetFoundationModel. If the application later needs to list available models for a drop-down UI, add a separate bedrock:ListFoundationModels statement rather than widening the invocation scope.
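With that policy attached to the workload role, the authorized call looks like the boto3 sketch below. The request-body fields follow the Anthropic Messages format used on Bedrock; treat the exact field names as assumptions to verify against the model provider's current documentation, since body shapes vary by model family.

```python
import json

MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # must match the policy's Resource ARN

def build_invoke_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble InvokeModel kwargs; no network call is made here."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",  # Messages-format marker (assumed)
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return {
        "modelId": MODEL_ID,
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps(body),
    }

req = build_invoke_request("Summarize the AWS Shared Responsibility Model in one sentence.")
# Real call would be: boto3.client("bedrock-runtime").invoke_model(**req)
```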
Adding model-ARN conditions
IAM conditions let you express even tighter rules. The Bedrock condition key bedrock:FoundationModelId and global condition keys like aws:RequestedRegion, aws:SourceVpce, and aws:PrincipalTag combine into powerful guardrails.
```json
{
  "Effect": "Allow",
  "Action": "bedrock:InvokeModel",
  "Resource": "arn:aws:bedrock:*::foundation-model/*",
  "Condition": {
    "StringEquals": {
      "aws:RequestedRegion": ["us-east-1", "us-west-2"],
      "aws:SourceVpce": "vpce-0123456789abcdef0"
    },
    "StringLike": {
      "bedrock:FoundationModelId": [
        "anthropic.claude-3-*",
        "amazon.titan-*"
      ]
    }
  }
}
```
The policy now requires three simultaneous conditions: the call must target either the US East (N. Virginia) or US West (Oregon) Region, the foundation model must match the Anthropic Claude 3 family or any Amazon Titan model, and the call must originate from a specific Bedrock VPC Interface Endpoint. Miss any one condition and the request is denied.
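A toy evaluator makes the all-three requirement concrete. It mirrors only this policy's semantics (region allow-list, model-ID wildcards, endpoint pin), not the full IAM evaluation engine:

```python
from fnmatch import fnmatch

ALLOWED_REGIONS = {"us-east-1", "us-west-2"}
ALLOWED_MODEL_PATTERNS = ["anthropic.claude-3-*", "amazon.titan-*"]
ALLOWED_VPCE = "vpce-0123456789abcdef0"

def request_allowed(region: str, model_id: str, source_vpce: str) -> bool:
    """All three guardrails must hold simultaneously, as in the policy above."""
    return (
        region in ALLOWED_REGIONS
        and any(fnmatch(model_id, pat) for pat in ALLOWED_MODEL_PATTERNS)
        and source_vpce == ALLOWED_VPCE
    )
```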
Separating InvokeAgent from InvokeModel
Granting bedrock:InvokeAgent does not grant bedrock:InvokeModel, and vice versa. Amazon Bedrock Agents invoke their own foundation model internally using the agent's service role, so the calling principal only needs bedrock:InvokeAgent on the agent alias ARN. This separation keeps the blast radius of the calling principal narrow — compromise of the front-end Lambda function cannot directly invoke any foundation model outside the agent's approved list.
CreateKnowledgeBase and the service role pattern
bedrock:CreateKnowledgeBase needs three things: the caller's identity-based IAM policy must allow bedrock:CreateKnowledgeBase, the caller must have iam:PassRole permission for the Bedrock service role that the knowledge base will assume at runtime, and that Bedrock service role must itself grant access to the underlying Amazon S3 data source, the Amazon OpenSearch Serverless collection (or other vector store), and any AWS KMS customer-managed key involved.
A recurring AIF-C01 trap shows a scenario where bedrock:CreateKnowledgeBase is granted but knowledge base creation still fails. The missing permission is almost always iam:PassRole on the Bedrock service role, not another bedrock:* action. Any IAM principal that creates or updates a knowledge base, an agent, or a model customization job must be allowed to pass the service role Bedrock will later assume. Reference: https://docs.aws.amazon.com/bedrock/latest/userguide/security_iam_id-based-policy-examples.html
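A caller-side policy that avoids the trap pairs the Bedrock action with iam:PassRole on the service role. The role name and account ID below are placeholders for illustration; the iam:PassedToService condition key restricts which service the role may be passed to.

```python
# Sketch: the two statements a knowledge-base creator needs.
SERVICE_ROLE_ARN = "arn:aws:iam::111122223333:role/BedrockKnowledgeBaseRole"  # placeholder

caller_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "CreateKb", "Effect": "Allow",
         "Action": "bedrock:CreateKnowledgeBase", "Resource": "*"},
        {"Sid": "PassServiceRole", "Effect": "Allow",
         "Action": "iam:PassRole", "Resource": SERVICE_ROLE_ARN,
         # Only pass this role to the Bedrock service, nothing else.
         "Condition": {"StringEquals": {"iam:PassedToService": "bedrock.amazonaws.com"}}},
    ],
}

granted_actions = {s["Action"] for s in caller_policy["Statement"]}
```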
Bedrock Resource-Based Policies
A Bedrock resource-based policy is attached to the Bedrock resource itself — a custom model, a provisioned throughput unit, or an agent — rather than to the calling principal. Resource-based policies are the mechanism Amazon Bedrock uses to enable cross-account model access without requiring the external account to assume a role in the model-owning account.
Where resource-based policies apply
Amazon Bedrock supports resource-based policies primarily on custom models (fine-tuned variants of a base foundation model). When Account A fine-tunes a model and wants Account B to invoke it, Account A attaches a resource-based policy to the custom model ARN allowing bedrock:InvokeModel from a specific principal in Account B. Account B still needs an identity-based IAM policy that allows bedrock:InvokeModel on the same custom model ARN — cross-account access always requires both sides of the handshake.
Example: allowing a peer account to invoke a custom model
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPartnerAccountInvoke",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::222233334444:role/PartnerInferenceRole"
      },
      "Action": "bedrock:InvokeModel",
      "Resource": "arn:aws:bedrock:us-east-1:111122223333:custom-model/anthropic.claude-3-haiku-20240307-v1:0/my-custom-model-id",
      "Condition": {
        "StringEquals": {
          "aws:SourceVpce": "vpce-0abcdef1234567890"
        }
      }
    }
  ]
}
```
This resource-based policy grants exactly one external IAM role the right to invoke exactly one custom model, and only from one specific Bedrock VPC Interface Endpoint. Combine with an identity-based IAM policy in Account B that mirrors the same Resource and the cross-account model access pattern is complete.
When to use a resource-based policy vs an assume-role pattern
- Resource-based policy — best when the external account simply needs to invoke the custom model inline, without managing additional STS sessions. Lower operational overhead.
- Assume-role pattern — best when the external account needs broader access (list, invoke, update tags, monitor metrics) and you want a single IAM role to capture all of that.
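The two-sided handshake behind the resource-based-policy route reduces to a small truth table, sketched here as a toy model of standard IAM cross-account evaluation (not a full policy engine):

```python
def cross_account_allowed(identity_allows: bool, resource_allows: bool,
                          same_account: bool = False) -> bool:
    """Across an account boundary, BOTH policies must allow the call;
    within one account, either an identity or a resource allow suffices."""
    if same_account:
        return identity_allows or resource_allows
    return identity_allows and resource_allows
```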
Network Isolation with Bedrock VPC Interface Endpoints
By default, the Amazon Bedrock API is reached through public endpoints (bedrock.<region>.amazonaws.com for control-plane operations, bedrock-runtime.<region>.amazonaws.com for inference). Public in this context means resolved through public DNS — traffic from an Amazon EC2 instance without a public IP still routes through a NAT gateway to reach the public endpoint. For regulated workloads, that extra hop is unacceptable.
What a Bedrock VPC Interface Endpoint does
An Amazon Bedrock VPC Interface Endpoint, powered by AWS PrivateLink, creates private IP addresses inside your VPC for the Bedrock runtime API and the Bedrock agent runtime API. Requests from principals inside the VPC resolve to those private IPs, travel across the AWS backbone, and never traverse the public internet or a NAT gateway. Two separate endpoint service names cover the common use cases:
- com.amazonaws.<region>.bedrock-runtime — for InvokeModel and InvokeModelWithResponseStream.
- com.amazonaws.<region>.bedrock-agent-runtime — for InvokeAgent, Retrieve, and RetrieveAndGenerate.
Endpoint policies as an extra guardrail
A VPC Interface Endpoint carries its own endpoint policy, a small IAM-syntax document evaluated in addition to identity-based and resource-based policies. A common hardening pattern is an endpoint policy that only allows the exact foundation model ARNs approved for that VPC, so a compromised principal inside the VPC cannot invoke any unapproved model even if its identity-based policy is overly generous.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "bedrock:InvokeModel",
      "Resource": [
        "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-text-express-v1"
      ]
    }
  ]
}
```
Keeping inference traffic off the internet
Pair the VPC Interface Endpoint with two supporting controls to truly keep inference traffic private. First, attach an IAM condition like aws:SourceVpce or aws:SourceVpc on the identity-based IAM policy so calls from outside the expected VPC or endpoint are denied even with valid credentials. Second, remove any NAT gateway egress for the subnets that host the generative AI workload — if a misconfigured principal somehow tries to reach the public Bedrock endpoint, the packet has nowhere to go.
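The aws:SourceVpce condition from the first control is often phrased as an explicit Deny so it overrides any stray Allow elsewhere. Sketched below as a Python dict; the endpoint ID is a placeholder. Note that StringNotEquals also denies calls where the key is absent, meaning calls that never touched a VPC endpoint, which is exactly the intent:

```python
vpce_guardrail = {
    "Sid": "DenyOutsideApprovedEndpoint",
    "Effect": "Deny",
    "Action": "bedrock:InvokeModel",
    "Resource": "*",
    "Condition": {
        # Denies any call whose source VPCE is missing or different.
        "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
    },
}
```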
Generative AI prompts frequently contain customer data. A Bedrock VPC Interface Endpoint is the single most important network control for keeping that prompt traffic off the public internet end to end. For AIF-C01, remember that PrivateLink does not by itself encrypt payloads beyond what TLS already provides — it changes the network path, not the cryptography. Pair it with TLS 1.2+ and AWS KMS at rest for defense in depth. Reference: https://docs.aws.amazon.com/bedrock/latest/userguide/vpc-interface-endpoints.html
Encryption with AWS KMS Customer-Managed Keys
Amazon Bedrock encrypts all persistent data at rest by default using AWS-managed keys in AWS KMS. That is fine for low-sensitivity workloads, but for regulated data — proprietary fine-tuning corpora, embeddings derived from confidential documents, provisioned throughput associations — AWS best practice and AIF-C01 guidance both point to AWS KMS customer-managed keys (CMKs).
Which Bedrock assets can use a customer-managed key
- Model customization job input — the training and validation data you upload to Amazon S3 should already live in a bucket encrypted with a customer-managed key; Bedrock reads those objects through the service role's kms:Decrypt permission.
- Custom model artifacts — the output of a fine-tuning job. Configure the customization job with a customer-managed key and Bedrock will wrap the resulting custom model weights with that key.
- Knowledge base embeddings — vectors stored in Amazon OpenSearch Serverless, Amazon Aurora PostgreSQL (pgvector), or other supported stores should be encrypted with a customer-managed key whose key policy aligns with the knowledge base service role.
- Agent session state — for Amazon Bedrock Agents that retain state, the session data can be encrypted with a customer-managed key.
- Provisioned throughput metadata — configuration data associated with provisioned throughput units.
The key policy is as important as the IAM policy
A customer-managed key is only useful if its key policy correctly authorizes the Bedrock service role to call kms:Decrypt, kms:GenerateDataKey, and kms:DescribeKey. A common mistake is granting the principal (the IAM role) access in the IAM policy but forgetting to grant the same principal access in the KMS key policy. Without the key policy grant, the decrypt fails and the knowledge base ingestion or the customization job errors out.
```json
{
  "Sid": "AllowBedrockServiceRoleToDecrypt",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/AmazonBedrockExecutionRoleForKnowledgeBase"
  },
  "Action": [
    "kms:Decrypt",
    "kms:GenerateDataKey",
    "kms:DescribeKey"
  ],
  "Resource": "*"
}
```
Customer-managed keys vs AWS-managed keys
| Aspect | AWS-managed key | Customer-managed key (CMK) |
|---|---|---|
| Who owns the key policy? | AWS | You |
| Can you rotate on demand? | No (AWS auto-rotates annually) | Yes, automatic or on-demand |
| Can you audit every use? | Limited | Full AWS CloudTrail events |
| Can you schedule deletion to revoke access? | No | Yes (7-30 day window) |
| AIF-C01 default answer for sensitive AI data | Rarely | Almost always |
If the scenario mentions proprietary training data, PII in a knowledge base, regulatory audit requirements, the ability to revoke all access immediately, or per-tenant isolation in a multi-tenant generative AI SaaS — the answer is AWS KMS customer-managed key. AWS-managed keys are the right answer only when the workload has no special residency or audit requirement and you want zero operational overhead. Reference: https://docs.aws.amazon.com/bedrock/latest/userguide/encryption.html
Cross-Account Model Access Patterns
Large organizations typically run one AWS account for shared machine-learning assets and separate AWS accounts for each product team that consumes those assets. IAM and Bedrock security has two patterns for that split.
Pattern A: Resource-based policy on the custom model
Account A (the producer) fine-tunes a custom model and attaches a Bedrock resource-based policy that names specific IAM principals in Account B (the consumer). Account B attaches an identity-based IAM policy that also references the same custom model ARN. Requests from Account B carry Account B's credentials and reach the custom model directly. Because the request crosses an account boundary, both policies are evaluated and both must grant access for the call to succeed (within a single account, either an identity-based or a resource-based allow can suffice on its own).
- Pros: simple, no STS sessions, low operational cost.
- Cons: only works for resources that support resource-based policies (primarily custom models); does not extend to every Bedrock API.
Pattern B: Cross-account IAM role with sts:AssumeRole
Account A creates an IAM role with a trust policy that allows principals from Account B to assume it. The role has an identity-based IAM policy with bedrock:InvokeModel scoped to the relevant model ARNs. Account B's workload calls sts:AssumeRole, receives short-term credentials, and invokes Amazon Bedrock using those credentials — the call appears to originate from Account A.
- Pros: works for every Bedrock API, centralizes observability in Account A's AWS CloudTrail, enables attribute-based access control using session tags.
- Cons: extra hop (STS), short-term credential management on the consumer side.
Hybrid pattern with Amazon Bedrock Agents
For Amazon Bedrock Agents, a frequent pattern is to deploy the agent in a central account with action groups that target resources in consumer accounts. The agent's service role assumes cross-account IAM roles into each consumer account using STS. This keeps the generative AI reasoning layer centralized and auditable while still allowing the agent to reach data that legally must live in a specific consumer account.
Least-Privilege Playbook for Generative AI Applications
A concrete playbook turns the scattered IAM and Bedrock security concepts into a reproducible checklist. Apply the steps in order for any new generative AI workload on AWS.
Step 1 — Map the foundation models your workload actually needs
List the exact foundation model IDs (and versions) your application depends on. Do not include models "just in case" — each extra model widens the blast radius and inflates the potential bill. If the product manager wants optionality, list two or three approved models, not the entire catalog.
Step 2 — Author an identity-based IAM policy per workload role
Create one IAM role per logical workload (web tier, background worker, fine-tuning job, knowledge base ingestion) and attach an identity-based IAM policy that lists only the Bedrock actions that role needs, scoped to the model ARNs from Step 1. Avoid bedrock:* everywhere except in emergency break-glass roles that are normally disabled.
Step 3 — Add deny guardrails for expensive or unapproved models
Layer an explicit-deny statement that blocks specific high-cost or policy-restricted foundation models. Because an explicit deny in any applicable IAM policy overrides every allow, a single deny statement at the permissions boundary or SCP level is a durable cost-control lever.
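Step 3 in dict form: a sketch of an explicit-deny statement for a hypothetical premium-model list (the Opus wildcard is an illustrative choice, not a pricing claim). Because explicit deny wins over every allow, this is safe to layer on top of broader grants:

```python
premium_model_deny = {
    "Sid": "DenyPremiumModels",
    "Effect": "Deny",
    "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",  # deny the streaming variant too
    ],
    "Resource": ["arn:aws:bedrock:*::foundation-model/anthropic.claude-3-opus-*"],
}
```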
Step 4 — Require the VPC Interface Endpoint
Add aws:SourceVpce or aws:SourceVpc conditions on the identity-based IAM policy so every successful call must have traversed the approved Bedrock VPC Interface Endpoint. Combine with an endpoint policy that mirrors the allowed model ARN list.
Step 5 — Encrypt persistent data with a customer-managed key
Create (or reuse) an AWS KMS customer-managed key for the workload, grant the Bedrock service role kms:Decrypt and kms:GenerateDataKey in the key policy, and point every Amazon S3 bucket, Amazon OpenSearch Serverless collection, and knowledge base at that key.
Step 6 — Turn on Bedrock invocation logging and AWS CloudTrail data events
Enable model invocation logging in Amazon Bedrock (writes prompts and completions to Amazon CloudWatch Logs or Amazon S3 with your customer-managed key) and ensure the account-level AWS CloudTrail trail captures management events. This closes the audit loop so every InvokeModel, InvokeAgent, CreateKnowledgeBase, and GetFoundationModel is attributable.
Step 7 — Review quarterly with IAM Access Analyzer
Run AWS IAM Access Analyzer unused-access analysis against the generative AI IAM roles every quarter and trim any action or model ARN that has not been exercised. Least privilege is not a one-time activity; it is a recurring hygiene task.
Every time an engineer asks for a new Bedrock action or model ARN to be added to the IAM policy, require an AWS CloudTrail event or a documented design decision as evidence. Widening an IAM policy without evidence is how generative AI workloads drift back to bedrock:* on Resource: "*". Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
AWS CloudTrail Logging for Amazon Bedrock
AWS CloudTrail is the audit system of record for every Amazon Bedrock API call. For IAM and Bedrock security, CloudTrail provides two distinct capabilities that should never be confused.
Management events vs data events
- Management events are control-plane API calls — CreateKnowledgeBase, CreateAgent, CreateModelCustomizationJob, PutModelInvocationLoggingConfiguration, DeleteCustomModel. Management events are recorded automatically for every AWS account in the CloudTrail Event history (90 days by default).
- Data events are high-volume data-plane calls. For Amazon Bedrock, InvokeModel, InvokeModelWithResponseStream, InvokeAgent, Retrieve, and RetrieveAndGenerate are data events and must be explicitly enabled on a CloudTrail trail if you want them captured.
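The split above is easy to encode in an audit script; the helper below simply mirrors the lists in this section:

```python
# Bedrock data-plane calls that the default trail will NOT capture.
BEDROCK_DATA_EVENTS = {
    "InvokeModel", "InvokeModelWithResponseStream",
    "InvokeAgent", "Retrieve", "RetrieveAndGenerate",
}

def needs_data_event_logging(api_name: str) -> bool:
    """True when the call requires CloudTrail data events to be enabled."""
    return api_name in BEDROCK_DATA_EVENTS
```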
Model invocation logging
In addition to AWS CloudTrail, Amazon Bedrock offers model invocation logging, a separate logging feature that captures the full prompt, the full model response, and token usage metrics. Model invocation logging can write to Amazon CloudWatch Logs, Amazon S3, or both, and the destination should itself be encrypted with an AWS KMS customer-managed key so prompts containing sensitive data are protected end to end.
Detecting abuse
Common abuse patterns that AWS CloudTrail and model invocation logging together surface include: a burst of InvokeModel calls against a premium model from a principal that normally uses a cheaper model, GetFoundationModel enumeration across dozens of model IDs (reconnaissance), CreateModelCustomizationJob from an unexpected principal (data exfiltration through fine-tuning), and AssumeRole chains ending in Bedrock calls from IP addresses outside the approved VPC Interface Endpoint.
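The first pattern, a premium-model burst, can be sketched as a counting pass over parsed log records. The field names, model ID, and threshold are all assumptions for illustration, not a CloudTrail schema:

```python
from collections import Counter

PREMIUM_MODELS = {"anthropic.claude-3-opus-20240229-v1:0"}  # illustrative

def premium_burst_principals(events, threshold=10):
    """events: dicts with 'principal' and 'modelId' parsed from invocation logs."""
    counts = Counter(e["principal"] for e in events if e["modelId"] in PREMIUM_MODELS)
    return {p for p, n in counts.items() if n >= threshold}

sample = ([{"principal": "role/app", "modelId": "anthropic.claude-3-opus-20240229-v1:0"}] * 12
          + [{"principal": "role/batch", "modelId": "amazon.titan-text-express-v1"}] * 3)
```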
For AIF-C01, remember that the high-value Bedrock API calls (InvokeModel, InvokeAgent, Retrieve) are data events and are not captured by the default CloudTrail trail. A common audit finding is an organization that believed it had full Bedrock visibility but had only captured management events. Enabling CloudTrail data events for Amazon Bedrock is an explicit configuration step. Reference: https://docs.aws.amazon.com/bedrock/latest/userguide/logging-using-cloudtrail.html
Common Traps and Exam Pitfalls
AIF-C01 IAM and Bedrock security questions reuse a tight set of distractor patterns. Recognizing them saves time on exam day.
Trap 1 — Confusing InvokeModel with GetFoundationModel
InvokeModel runs inference and costs money. GetFoundationModel returns metadata and is free. A scenario that says "read the context window of a model without running inference" is asking for GetFoundationModel, not InvokeModel.
Trap 2 — Assuming bedrock:* in the service role is safe
A service role with bedrock:* gives the knowledge base, agent, or customization job permission to invoke any foundation model, including expensive ones. Always scope the service role to the specific foundation model ARN it uses to embed, summarize, or generate.
Trap 3 — Forgetting KMS grants on cross-account reads
Cross-account access to an Amazon S3 training dataset encrypted with a customer-managed key requires the consumer account to also be allowed in the KMS key policy. An IAM policy allowing s3:GetObject is not enough — without kms:Decrypt at the key policy level, the read fails with an access-denied error that looks like an S3 problem but is actually a KMS problem.
Trap 4 — Assuming VPC Interface Endpoint implies encryption
A Bedrock VPC Interface Endpoint changes the network path, not the encryption. Payloads on the wire are still protected by TLS, and data at rest is still protected by AWS KMS — the endpoint alone is not a substitute for either. AIF-C01 likes to ask the question both ways.
Trap 5 — Using an IAM user for a production generative AI workload
Production Amazon Bedrock callers should always be IAM roles (Lambda execution role, ECS task role, EKS IRSA, EC2 instance profile, or an identity-federated role through IAM Identity Center). An IAM user with long-term access keys embedded in code is the classic anti-pattern that AIF-C01 expects you to reject.
Monitoring and Observability for IAM and Bedrock Security
Observability completes the IAM and Bedrock security loop. Four signals deserve dashboards in every production generative AI workload.
Bedrock invocation metrics in Amazon CloudWatch
Amazon Bedrock emits Amazon CloudWatch metrics per foundation model for Invocations, InvocationLatency, InputTokenCount, OutputTokenCount, and InvocationClientErrors. A sudden spike in InvocationClientErrors — typically paired with AccessDeniedException entries in the corresponding CloudTrail records — is the first signal that an IAM policy change broke a workload or a malicious caller is probing the API.
IAM Access Analyzer findings
AWS IAM Access Analyzer continuously scans resource-based policies and surfaces findings when a resource such as an Amazon S3 bucket holding training data, or the AWS KMS key protecting it, is accessible from outside your trust boundary. Findings should be triaged within hours; an unintentionally public fine-tuning dataset is a serious data leak.
AWS Config rules
AWS Config rules — managed where available, custom where not — can check whether Amazon Bedrock customization jobs and knowledge bases are configured with customer-managed keys, whether VPC Interface Endpoints exist in each Region that uses Bedrock, and whether AWS CloudTrail data events are enabled for Bedrock. Continuous compliance checks catch drift faster than quarterly audits.
Cost anomaly detection
Unexpected bedrock:InvokeModel activity often shows up in cost dashboards before it shows up in security dashboards. AWS Cost Anomaly Detection with a monitor scoped to the Amazon Bedrock service is a cheap, high-signal layer that can surface a credential compromise of a generative AI workload in minutes rather than days.
FAQ — IAM and Bedrock Security for AIF-C01
Q1: What is the minimum IAM permission required to call Amazon Bedrock inference?
At minimum, the calling principal needs bedrock:InvokeModel on the specific foundation model ARN being invoked. For streaming responses, also grant bedrock:InvokeModelWithResponseStream. No other Bedrock action is required for pure inference, and Resource: "*" should be avoided so the permission is scoped to approved foundation models only.
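That minimal grant can be sketched as a policy document. The Region and model ARN below are illustrative placeholders for an organization's approved-models list:

```python
# Sketch: the minimal identity-based policy for pure Bedrock inference.
# The single ARN in APPROVED_MODELS is an illustrative placeholder.
import json

APPROVED_MODELS = [
    "arn:aws:bedrock:us-east-1::foundation-model/"
    "anthropic.claude-3-haiku-20240307-v1:0",
]

inference_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeApprovedModelsOnly",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            # Explicit ARN list instead of Resource: "*"
            "Resource": APPROVED_MODELS,
        }
    ],
}

print(json.dumps(inference_policy, indent=2))
```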
Q2: Does a Bedrock VPC Interface Endpoint encrypt my prompts?
No. A VPC Interface Endpoint changes the network path so that Amazon Bedrock traffic never traverses the public internet — it does not add encryption on top of what TLS already provides. Prompts on the wire are protected by TLS 1.2+ regardless of whether you use the public endpoint or the interface endpoint. For encryption at rest, you still need AWS KMS customer-managed keys on the underlying storage. Treat the VPC Interface Endpoint as a network control and AWS KMS as an encryption control.
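Although the endpoint itself is a network control, it does carry its own policy, which can narrow which Bedrock actions are allowed to traverse it. A minimal sketch, assuming an inference-only endpoint (the Region in the ARN is a placeholder):

```python
# Sketch: a VPC Interface Endpoint policy that lets only inference
# actions pass through this endpoint. The Region is a placeholder;
# this restricts the network path, it does not encrypt anything.
import json

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InferenceOnlyThroughThisEndpoint",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/*",
        }
    ],
}

print(json.dumps(endpoint_policy, indent=2))
```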
Q3: When should I use a Bedrock resource-based policy vs an assume-role pattern?
Use a resource-based policy when the only goal is to let an external AWS account invoke a specific custom model and you want to avoid managing STS sessions. Use the assume-role pattern when the external account needs multiple Bedrock APIs, when you want cross-account calls to appear in the model-owning account's AWS CloudTrail, or when you need session tags to drive attribute-based access control. Both can coexist in the same architecture.
Q4: Why does CreateKnowledgeBase need iam:PassRole?
Because the knowledge base you are creating runs under a Bedrock service role at runtime — it needs to read Amazon S3 data sources, write to the vector store, and call kms:Decrypt. AWS IAM only lets a principal assign a service role to a new resource if that principal is explicitly granted iam:PassRole on the service role ARN. This is a universal AWS IAM safety mechanism, not a Bedrock-specific quirk.
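The iam:PassRole grant itself can be sketched as a single statement. The account ID and role name are hypothetical, and the iam:PassedToService condition keeps the role passable only to Bedrock:

```python
# Sketch: the iam:PassRole statement a knowledge-base creator needs.
# Account ID and role name are hypothetical placeholders.
import json

pass_role_statement = {
    "Sid": "PassOnlyTheKBServiceRoleToBedrock",
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": "arn:aws:iam::111111111111:role/BedrockKnowledgeBaseRole",
    "Condition": {
        # Role can only be handed to the Bedrock service, nothing else.
        "StringEquals": {"iam:PassedToService": "bedrock.amazonaws.com"}
    },
}

print(json.dumps(pass_role_statement, indent=2))
```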
Q5: Is Amazon Bedrock itself free? What about its IAM, VPC Interface Endpoint, and KMS components?
AWS IAM is free. AWS KMS charges per customer-managed key per month plus per 10,000 API calls. VPC Interface Endpoints charge per endpoint per hour plus data processed. Amazon Bedrock itself bills per 1,000 input and output tokens per foundation model, plus provisioned throughput commitments where applicable. CloudTrail management events are free; data events carry a per-event fee.
Q6: Can I block a specific foundation model organization-wide?
Yes. Attach an AWS Organizations service control policy (SCP) with an explicit Deny on bedrock:InvokeModel scoped to the undesired foundation model ARN. Because an explicit deny in an SCP overrides every allow in every member account, the model becomes unreachable even if a member account's IAM policy would otherwise allow it. This is the durable control for enforcing an approved-models list across a large organization.
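A minimal SCP sketch, using a deliberately hypothetical model ID as the blocked target:

```python
# Sketch: an SCP that makes one foundation model unreachable org-wide.
# "unapproved.model-v1" is a hypothetical model ID; the wildcard Region
# blocks the model in every Region.
import json

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnapprovedFoundationModel",
            "Effect": "Deny",  # explicit deny overrides every allow
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "arn:aws:bedrock:*::foundation-model/unapproved.model-v1",
        }
    ],
}

print(json.dumps(scp, indent=2))
```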
Q7: How do I audit which foundation models were invoked last month?
Enable AWS CloudTrail data events for Amazon Bedrock and/or Amazon Bedrock model invocation logging. CloudTrail data events give you structured InvokeModel records suitable for Amazon Athena queries; model invocation logging gives you the full prompts and responses suitable for detailed review. The two are complementary — CloudTrail answers "who, what model, when" at scale, and invocation logging answers "what exactly was said" for a specific audit window.
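The "who, what model, when" question can be sketched as an Amazon Athena query over the CloudTrail events. The table name and date range are assumptions, and the exact requestParameters layout should be confirmed against your own trail's records:

```python
# Sketch: an Athena query counting InvokeModel calls per model and
# caller for a month. "cloudtrail_logs" and the date range are
# placeholder assumptions.
ATHENA_QUERY = """
SELECT
    json_extract_scalar(requestparameters, '$.modelId') AS model_id,
    useridentity.arn AS caller,
    count(*) AS invocations
FROM cloudtrail_logs
WHERE eventsource = 'bedrock.amazonaws.com'
  AND eventname = 'InvokeModel'
  AND eventtime BETWEEN '2025-01-01' AND '2025-02-01'
GROUP BY 1, 2
ORDER BY invocations DESC
"""

print(ATHENA_QUERY)
```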
Q8: What is the single highest-impact IAM and Bedrock security control for a new workload?
Scoping bedrock:InvokeModel to an explicit list of foundation model ARNs in the identity-based IAM policy attached to the workload's IAM role. That one change simultaneously enforces an approved-models list, caps runaway inference spend, and aligns with the AWS least-privilege principle. Every other control in the IAM and Bedrock security stack — resource-based policies, VPC Interface Endpoints, AWS KMS customer-managed keys, AWS CloudTrail data events — builds on that foundation.
Further Reading
- Amazon Bedrock — Identity and Access Management
- Amazon Bedrock — Identity-Based Policy Examples
- Amazon Bedrock — Actions, Resources, and Condition Keys
- Amazon Bedrock — VPC Interface Endpoints (AWS PrivateLink)
- Amazon Bedrock — Data Encryption with AWS KMS
- Amazon Bedrock — Logging API Calls with AWS CloudTrail
- AWS IAM — Policies and Permissions
- AWS KMS — Key Concepts
- AWS AIF-C01 Exam Guide (PDF)