AI security compliance governance is the coordinated discipline that keeps AI workloads on AWS safe, auditable, and aligned with external law. It is not a single AWS service, and it is not a single regulation. AI security compliance governance is the umbrella phrase the AIF-C01 exam uses to describe how a layered boundary model (identity, network, data, model output, audit trail) composes with a regulatory landscape (EU AI Act, NIST AI RMF, ISO/IEC 42001, GDPR, HIPAA) and an internal program (AI governance committees, model risk management, incident response) so that any AI decision on AWS can be explained, defended, and rolled back. This hub topic ties Domain 5 together — every sibling topic in D5 clicks into one of the boundaries or one of the programs you will read about below. Master AI security compliance governance as a framework and you will rarely miss a D5 scenario question.
What is AI Security, Compliance & Governance on AWS?
AI security compliance governance is a three-word umbrella. AI security is the set of controls that prevent unauthorized access, tampering, or abuse of AI systems (prompt injection, model theft, data exfiltration through model output). AI compliance is the mapping of those controls to externally audited standards (ISO/IEC 42001, SOC 2, HIPAA, FedRAMP) and binding regulations (EU AI Act, GDPR). AI governance is the internal organizational layer — policies, committees, approval gates, risk registers — that decides what models you deploy, how, and with which guardrails.
At the AIF-C01 foundational tier you are not expected to write SCPs or draft EU AI Act conformity assessments. You are expected to recognize which AWS service, which regulatory concept, and which governance artifact solves which problem, and to describe how they compose. Exam questions take the shape "A company is building a generative AI chatbot that processes EU customer data — which framework, which AWS service, which governance artifact applies?" The answer almost always combines one boundary control + one regulation + one audit service.
Why AI security compliance governance matters for AIF-C01
Domain 5 carries 14% of the AIF-C01 weight. Compared to CLF-C02 the topics bite deeper because generative AI introduces brand-new failure modes (prompt injection, hallucination leaking PII, model inversion) that classic cloud security courses never covered. AI security compliance governance is the only framework that keeps all of those failure modes in one mental model. Candidates who memorize individual services (Bedrock Guardrails, Macie, SageMaker Clarify) without the hub framework end up guessing on scenario questions; candidates who learn the hub first pick the right answer by elimination.
Scope of this hub topic vs adjacent D5 siblings
This hub topic connects the five D5 siblings: AI Threat Model and Attack Types covers the attack surface; IAM and Bedrock Security covers the identity and network boundaries; Bedrock Guardrails and Controls covers the model output boundary; Data Governance and PII covers the data boundary; and this hub covers the regulatory and programmatic layer (governance committees, risk management, incident response, audit composition). Read this hub first, then click into the sibling that matches the exam scenario.
AI Security, Compliance & Governance in Plain Language
Academic descriptions of AI governance quickly get abstract. Three analogies from different domains pin the concepts down.
Analogy 1 — The hospital operating theatre
An AI model in production is like a surgeon in a hospital operating theatre. AI security is the sterile gown, the scrubbed-in protocol, and the locked door to the theatre — only people with the right credentials get near the patient. That maps to IAM, VPC endpoints for Bedrock, and KMS encryption. AI compliance is the medical licence hanging on the wall and the hospital accreditation certificate in the lobby — the surgeon's training was audited by an external body (EU AI Act conformity, ISO/IEC 42001) and the hospital's procedures were audited by another (SOC 2, HIPAA BAA via AWS Artifact). AI governance is the hospital morbidity-and-mortality committee that meets weekly to review every complication, decide whether to suspend a procedure, and update the consent form — that is your internal AI review board, model risk management committee, and incident response playbook. If any of the three layers fails the patient is harmed. No analogy better captures why AI security compliance governance must be treated as one system rather than three independent checklists.
Analogy 2 — The commercial kitchen with a health inspector
Imagine a commercial kitchen that now uses a robot chef (the AI model). The locked pantry, the staff keycards, and the CCTV above every workstation are AI security — they stop outsiders from poisoning the ingredients or stealing the recipe, and they record every action the robot takes (that is CloudTrail for Bedrock invocations and Config for SageMaker configurations). The framed health-department certificate on the wall is AI compliance — an external regulator inspected the kitchen and certified it against a published standard (FDA for food, EU AI Act for AI). The manager's binder listing every dish on the menu, which supplier each ingredient came from, which chef last modified the recipe, and the date of every customer complaint is AI governance — the model inventory, data lineage, approval gates, and incident log that the AIF-C01 exam wants you to associate with words like "model registry," "data catalog," and "audit trail." A kitchen with cameras but no binder fails its inspection; a kitchen with a binder but no locks gets robbed. You need all three.
Analogy 3 — The airline cockpit certification
A modern commercial airliner illustrates the layered boundary model perfectly. There is a physical boundary (the locked cockpit door — that is your VPC perimeter and Bedrock VPC endpoint). There is an identity boundary (pilot licence, crew badge, two-person rule for cabin access — that is IAM, MFA, and Organizations SCP). There is a data boundary (sealed flight data recorder, encrypted ACARS messages — that is KMS, S3 bucket encryption, Macie for training data). There is an output boundary (autopilot limits, stall warnings, ground proximity warning system that overrides commands — that is Bedrock Guardrails, SageMaker Clarify, and Amazon A2I human review). And there is an audit trail (flight data recorder and cockpit voice recorder that regulators download after every incident — that is CloudTrail, Config, Audit Manager). Above all of that sits a regulatory tier: FAA, EASA, ICAO — mirrored in AI by the FTC, EU AI Office, NIST. And above that sits an internal safety culture: every airline has a safety board, every airline has a tail-number-level risk register, every airline has a grounding playbook — that is your internal AI governance committee. The entire system is what makes a plane trustworthy; any single layer alone would not. AI security compliance governance works exactly the same way.
The Layered Boundary Model for AI on AWS
AI security compliance governance on AWS is most cleanly taught as five concentric boundaries. The exam rarely asks "name the five boundaries" verbatim, but every scenario fits into one of them.
Boundary 1 — Identity
The identity boundary answers "who may call the AI service?" On AWS this is AWS Identity and Access Management (IAM) combined with AWS Organizations. IAM policies scope actions like bedrock:InvokeModel, sagemaker:CreateEndpoint, and comprehend:DetectPiiEntities to specific principals. Service Control Policies (SCPs) from AWS Organizations constrain the maximum permissions of every IAM principal in a member account — preventing, for example, any account in the Sandbox OU from calling Bedrock at all. Bedrock resource-based policies can further restrict which cross-account callers may invoke a particular model. Details live in the IAM and Bedrock Security sibling topic.
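To make the identity boundary concrete, here is a minimal Python sketch of the two policy documents described above. The action names (bedrock:InvokeModel) follow real AWS naming; the model ARN is illustrative, and the evaluator is a toy, not the real IAM engine:

```python
# Sketch of the identity boundary: an IAM policy that scopes a role to
# one Bedrock action on one model, and an SCP that caps every principal
# in a Sandbox OU so no Bedrock call is possible at all.

iam_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["bedrock:InvokeModel"],
        # Resource-level scoping: only this one foundation model (ARN illustrative).
        "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
    }],
}

scp_deny_bedrock = {
    "Version": "2012-10-17",
    "Statement": [{
        # SCPs constrain MAXIMUM permissions: even an account admin in
        # the Sandbox OU cannot call any Bedrock API while this applies.
        "Effect": "Deny",
        "Action": "bedrock:*",
        "Resource": "*",
    }],
}

def is_denied_by_scp(action: str, scp: dict) -> bool:
    """Toy evaluator: does any Deny statement's action pattern match?"""
    for stmt in scp["Statement"]:
        if stmt["Effect"] != "Deny":
            continue
        pattern = stmt["Action"]
        if pattern == action or (pattern.endswith("*") and action.startswith(pattern[:-1])):
            return True
    return False

print(is_denied_by_scp("bedrock:InvokeModel", scp_deny_bedrock))  # True
print(is_denied_by_scp("s3:GetObject", scp_deny_bedrock))         # False
```

The key exam point the sketch illustrates: the IAM policy grants, but the SCP wins — the effective permission is the intersection of the two.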
Boundary 2 — Network
The network boundary answers "how does the traffic reach the AI service?" By default, Bedrock and SageMaker endpoints are reachable over the public internet. For regulated AI workloads you add VPC endpoints (AWS PrivateLink) so traffic never leaves the AWS backbone; you enable SageMaker network isolation for training jobs so the training container has no outbound internet access; and you put AWS WAF in front of any public-facing chat UI. Without a network boundary, stolen credentials let an attacker invoke the model from anywhere; with one, the attack must originate from inside your VPC.
Boundary 3 — Data
The data boundary answers "what data may enter or leave the AI system and how is it protected?" AWS Key Management Service (KMS) encrypts training data at rest (S3, EBS), model artifacts, Bedrock knowledge-base indexes, and SageMaker Feature Store entries. Amazon Macie scans S3 training datasets for personally identifiable information (PII) before it reaches a training job. AWS Glue Data Catalog maintains data lineage so you can trace every model back to the datasets that trained it. Data residency is enforced by Region choice plus SCPs. The Data Governance and PII sibling goes deeper.
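The "scan before training" pattern can be sketched as a simple gate. In a real pipeline the findings would come from Amazon Macie; here they are a plain list with a simplified shape (not Macie's real finding schema):

```python
# Sketch of a data-boundary gate: block a training job when the PII
# scan of its input dataset returned findings at or above a threshold.
# Finding shape and severity labels are simplified for illustration.

def pii_gate(findings: list, severity_threshold: str = "MEDIUM") -> bool:
    """Return True (allow training) only if no finding meets the threshold."""
    order = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}
    for finding in findings:
        if order[finding["severity"]] >= order[severity_threshold]:
            return False  # data boundary holds: training does not start
    return True

findings = [
    {"bucket": "training-data", "type": "CREDIT_CARD_NUMBER", "severity": "HIGH"},
]
print(pii_gate(findings))  # False -> redact with Comprehend/Guardrails first
print(pii_gate([]))        # True  -> dataset may proceed to training
```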
Boundary 4 — Model Output
This boundary is what makes AI security compliance governance different from classic cloud security. Even after identity, network, and data are locked down, the model itself can still emit harmful content (hate speech, PII leaked from training data, hallucinated medical advice) or be manipulated through prompt injection. Amazon Bedrock Guardrails apply content filters, denied-topic blocks, PII redaction, and grounding checks at both input and output. Amazon SageMaker Clarify surfaces bias and feature attribution. Amazon A2I (Augmented AI) routes low-confidence predictions to human reviewers. See the Bedrock Guardrails and Controls sibling for the service-level detail.
Boundary 5 — Audit Trail
The audit trail boundary answers "can you prove, after the fact, what happened?" AWS CloudTrail logs every Bedrock model invocation, SageMaker API call, and IAM decision. AWS Config tracks the configuration history of every SageMaker endpoint, Bedrock guardrail version, and knowledge-base data source. AWS Audit Manager continuously collects evidence mapped to control frameworks (SOC 2, HIPAA, ISO, NIST). Without a reliable audit trail your compliance position collapses — you cannot prove controls were operating, you cannot run incident forensics, and you cannot satisfy a regulator's right-to-explanation request.
AI security compliance governance is multiplicative, not additive. An airtight Bedrock Guardrail (output boundary) does nothing if an attacker with leaked credentials can drop the guardrail (identity boundary failure). Solid IAM does nothing if training data contains unredacted PII that the model memorizes (data boundary failure). Exam questions that give you a scenario failing in one boundary expect an answer from the matching layer — not a generic "turn on encryption". Reference: https://docs.aws.amazon.com/wellarchitected/latest/machine-learning-lens/machine-learning-lens.html
The Regulatory Landscape — Recognition Level
AIF-C01 does not require you to interpret legal text. It requires you to recognize each regulation by name, know its risk-classification mental model, and map it to the AWS primitives that help you comply.
EU AI Act — Risk-Based Tiers
The EU Artificial Intelligence Act is the first major horizontal AI law. It classifies every AI system into one of four risk tiers. Unacceptable-risk systems (social scoring, real-time biometric identification in public spaces) are banned outright. High-risk systems (AI in hiring, credit scoring, medical devices, critical infrastructure) require a conformity assessment, technical documentation, human oversight, logging, and post-market monitoring — this is where AWS security compliance governance tooling (CloudTrail, Config, SageMaker Model Cards, A2I) directly maps to legal obligations. Limited-risk systems (chatbots, deepfakes) require transparency — users must be told they are talking to AI. Minimal-risk systems have no specific obligations. For the exam, memorize the four tiers and recognize that high-risk systems drive most of the logging and human-review requirements.
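The four-tier mental model can be drilled with a toy classifier. The keyword lists below are illustrative study aids, not a legal taxonomy — real classification requires legal review:

```python
# Toy mapping of an AI use case onto the four EU AI Act risk tiers.
# Keyword lists are illustrative; they mirror the examples in the text.

UNACCEPTABLE = {"social scoring", "real-time biometric identification"}
HIGH = {"hiring", "credit scoring", "medical device", "critical infrastructure"}
LIMITED = {"chatbot", "deepfake"}

def eu_ai_act_tier(use_case: str) -> str:
    uc = use_case.lower()
    if any(k in uc for k in UNACCEPTABLE):
        return "unacceptable"   # banned outright
    if any(k in uc for k in HIGH):
        return "high"           # conformity assessment, logging, human oversight
    if any(k in uc for k in LIMITED):
        return "limited"        # transparency: users must be told it is AI
    return "minimal"            # no specific obligations

print(eu_ai_act_tier("Hiring recommendation engine"))  # high
print(eu_ai_act_tier("Internal spam filter"))          # minimal
```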
NIST AI Risk Management Framework (AI RMF)
The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary US framework organized around four functions: Govern, Map, Measure, Manage. Govern establishes the organizational AI risk policy; Map identifies context and AI system risks; Measure analyzes and tracks risk; Manage allocates resources and responds to incidents. The NIST AI RMF is not a certification — you cannot be "NIST AI RMF certified" — it is a reference model many organizations adopt voluntarily and many US federal RFPs now require. On the exam, recognize the four-function vocabulary and distinguish the AI RMF from the older NIST Cybersecurity Framework (which uses Identify, Protect, Detect, Respond, Recover).
ISO/IEC 42001 — AI Management System Standard
ISO/IEC 42001:2023 is the first international management-system standard for AI. Unlike the NIST AI RMF it is certifiable — an external auditor can grant your organization an ISO/IEC 42001 certificate, much like ISO 27001 for information security. ISO/IEC 42001 requires a documented AI management system (AIMS) with policies, objectives, risk assessment, internal audit, and continual improvement. Organizations selling AI into the EU often pursue ISO/IEC 42001 as evidence they can meet EU AI Act obligations for high-risk systems.
GDPR When AI Processes Personal Data
The EU General Data Protection Regulation predates AI-specific laws but still governs any AI system that processes personal data of EU residents. Three GDPR provisions matter most for AI security compliance governance. Article 22 gives data subjects the right not to be subject to a solely automated decision with significant effects — practical implication: either add human review (Amazon A2I) or do not fully automate the decision. Article 15 creates a right to access and an implicit right to explanation — practical implication: you must be able to describe in plain language why the model made a specific decision (SageMaker Clarify feature attribution helps). Data-minimization and purpose-limitation principles mean you cannot train on more PII than necessary and cannot repurpose the data — practical implication: scan training data with Amazon Macie, redact PII with Comprehend or Bedrock Guardrails, document purpose in the SageMaker Model Card.
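The Article 22 implication is easy to express as routing logic. This sketch mirrors what an Amazon A2I human loop does with a confidence threshold; the 0.90 threshold and the dict shape are illustrative assumptions:

```python
# Sketch of GDPR Article 22 human oversight: decisions with significant
# effects are never fully automated, and low-confidence predictions are
# also routed to human review (the A2I pattern). Threshold is illustrative.

def route_decision(prediction: dict, significant_effect: bool,
                   confidence_threshold: float = 0.90) -> str:
    if significant_effect:
        return "human_review"   # Article 22: no solely automated decision
    if prediction["confidence"] < confidence_threshold:
        return "human_review"   # low confidence -> human-in-the-loop
    return "automated"

print(route_decision({"label": "approve", "confidence": 0.97}, significant_effect=True))
# human_review -- credit/hiring decisions always get a human
print(route_decision({"label": "cat", "confidence": 0.55}, significant_effect=False))
# human_review -- below threshold
print(route_decision({"label": "cat", "confidence": 0.97}, significant_effect=False))
# automated
```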
Industry-Specific Regulations
On top of the horizontal frameworks sit industry-specific regulations. HIPAA applies when AI processes US Protected Health Information — sign the HIPAA Business Associate Addendum through AWS Artifact and use only HIPAA-eligible services (Bedrock, SageMaker, Comprehend Medical are eligible). FINRA and the SEC govern AI used in US financial advice and trading. PCI DSS still applies if an AI chatbot touches payment card data. The FDA governs AI as a medical device in the US. On the exam these appear as scenario qualifiers ("a healthcare company", "a broker-dealer") — your job is to recognize the right framework, not to quote clauses.
EU AI Act is a binding regulation for AI systems touching the EU market, with mandatory risk tiers. NIST AI RMF is a voluntary US framework with Govern/Map/Measure/Manage functions, not certifiable. ISO/IEC 42001 is an international standard for an AI management system, certifiable by external auditors. They are complementary: many companies adopt NIST AI RMF internally, certify against ISO/IEC 42001, and demonstrate EU AI Act conformity using both as evidence. Reference: https://www.nist.gov/itl/ai-risk-management-framework
Internal AI Governance Programs
Regulations and AWS services alone do not govern AI — people and process do. Every serious AI deployment on AWS needs an internal governance program, and the AIF-C01 exam tests recognition of its components.
AI Governance Committee (Review Board)
A cross-functional AI governance committee typically includes representatives from legal, security, data protection (DPO), product, engineering, and executive leadership. The committee approves or rejects each new AI use case, reviews high-risk model deployments, arbitrates between business urgency and responsible-AI concerns, and signs off on incident response outcomes. In regulated industries the committee's charter is itself a compliance artifact that auditors request during an ISO/IEC 42001 or EU AI Act assessment.
Responsible AI Policy
A responsible AI policy is the written document that states how your organization applies the AWS Responsible AI pillars (fairness, explainability, privacy, safety, controllability, veracity, governance). It lists forbidden use cases, required controls per risk tier, and escalation paths. On AWS, policies operationalize through a mix of SCPs (for preventive controls), AWS Config rules (for detective controls), and SageMaker Model Cards (for documentation). The policy is what an auditor reads first.
AI Use-Case Intake Process
Every new AI initiative should pass through an intake form that captures purpose, data sources, affected stakeholders, risk tier (per EU AI Act or internal taxonomy), chosen foundation model, and required guardrails. The intake triggers the approval gate. Without an intake process, shadow AI proliferates — teams build models the governance committee never sees until an incident.
Training and Awareness
AI governance fails without trained developers and business owners. AWS publishes AI Service Cards for each managed AI service describing intended use and limitations; internal teams should extend those with organization-specific guidance. Training programs must cover prompt-injection awareness, PII-handling obligations, and incident escalation — particularly for non-engineers using Amazon Q Business or low-code tools like SageMaker Canvas.
If an exam scenario mentions approval of a new AI model, rollback of a deployed model, or executive oversight of AI risk, the correct answer involves an internal AI governance committee plus a model registry with approval gates (SageMaker Model Registry), not a single technical control. Reference: https://docs.aws.amazon.com/whitepapers/latest/aws-caf-for-ai/aws-caf-for-ai.html
AI Model Risk Management
Model risk management (MRM) is the structured practice of identifying, rating, approving, and monitoring every model in production. It originated in banking (Federal Reserve SR 11-7 in the US, PRA SS1/23 in the UK) and has become the backbone of AI governance everywhere.
Model Inventory
The model inventory is the master list of every AI model the organization uses — whether built in-house, fine-tuned from a foundation model, or consumed as a third-party API. Each entry records purpose, owner, risk tier, data sources, training date, deployment environment, and current status. On AWS, the SageMaker Model Registry plus a tag standard maintained through AWS Config is the usual technical foundation. For third-party foundation models consumed via Amazon Bedrock, the inventory entry should record the model family, version, and the applicable Bedrock Guardrail.
Risk Rating
Every inventory entry gets a risk rating. Organizations commonly adopt a three-tier scheme (low, medium, high) or align directly to the EU AI Act tiers. Risk drivers include decision autonomy (does a human approve each output?), stakeholder impact (does it affect hiring, credit, health?), data sensitivity (does it touch PII or PHI?), and model explainability (can you describe why the model decided?). The rating determines the depth of review before deployment and the monitoring cadence after.
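The four risk drivers compose naturally into a scoring function. The weights and tier cut-offs below are invented for illustration — a real MRM program defines these in its policy document:

```python
# Toy risk-rating function combining the four drivers named above.
# Weights and cut-offs are illustrative, not a published standard.

def risk_rating(autonomous: bool, high_stakes_domain: bool,
                touches_pii: bool, explainable: bool) -> str:
    score = 0
    score += 2 if autonomous else 0          # no human approves each output
    score += 3 if high_stakes_domain else 0  # hiring, credit, health
    score += 2 if touches_pii else 0         # PII/PHI in scope
    score += 1 if not explainable else 0     # opaque model raises risk
    if score >= 5:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

# A fully automated credit-scoring model on PII, hard to explain:
print(risk_rating(True, True, True, False))    # high
# A human-reviewed internal document summarizer, no PII:
print(risk_rating(False, False, False, True))  # low
```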
Approval Gates
High-risk models do not deploy without an approval gate: governance committee review, independent validation, and executive sign-off. AWS provides the plumbing — SageMaker Model Registry supports an explicit approval status (Approved, Rejected, PendingManualApproval) that you can wire into SageMaker Pipelines so an unapproved model cannot reach a production endpoint. For Bedrock-based applications the approval gate is typically enforced at the application deployment pipeline (CodePipeline + AWS Config rules).
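The gate check a pipeline runs before promotion is a one-line status test. The three status strings are the real SageMaker Model Registry values; the registry itself is mocked as a dict here rather than called through boto3, and the ARN is a placeholder:

```python
# Sketch of the approval-gate check before promoting a model package.
# "Approved"/"Rejected"/"PendingManualApproval" are the real
# ModelApprovalStatus values; everything else here is mocked.

VALID_STATUSES = {"Approved", "Rejected", "PendingManualApproval"}

def may_deploy(model_package: dict) -> bool:
    status = model_package["ModelApprovalStatus"]
    assert status in VALID_STATUSES, f"unknown status: {status}"
    return status == "Approved"   # anything else blocks the pipeline

candidate = {
    "ModelPackageArn": "arn:aws:sagemaker:us-east-1:111122223333:model-package/fraud-model/3",
    "ModelApprovalStatus": "PendingManualApproval",
}
print(may_deploy(candidate))  # False -- still waiting on committee sign-off

candidate["ModelApprovalStatus"] = "Approved"
print(may_deploy(candidate))  # True  -- pipeline may promote to production
```

The design point to remember for the exam: the committee makes the Approved/Rejected decision; the pipeline merely enforces it mechanically.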
Post-Deployment Monitoring
Once deployed, models drift. Data distributions shift, user behaviour evolves, and the world changes. SageMaker Model Monitor continuously compares production inputs and outputs against a baseline and alerts on data-quality drift, model-quality drift, bias drift, and feature-attribution drift. For Bedrock models, CloudWatch metrics plus custom evaluation jobs (Bedrock Model Evaluation) play the same role. When drift is detected the model re-enters the risk-rating and approval workflow.
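A minimal drift check in the spirit of SageMaker Model Monitor: compare a production window against the training baseline and alert on shift. Real Model Monitor uses richer statistics (distribution distances, per-feature constraints); the 10% tolerance here is an illustrative assumption:

```python
# Toy drift detector: alert when the relative mean shift of a feature
# between baseline and production exceeds a tolerance.

def mean_drift(baseline: list, production: list, tolerance: float = 0.10) -> bool:
    """Return True when relative mean shift exceeds the tolerance."""
    b = sum(baseline) / len(baseline)
    p = sum(production) / len(production)
    return abs(p - b) / abs(b) > tolerance

baseline = [100.0, 102.0, 98.0, 101.0]      # training-time feature mean ~100
production = [121.0, 119.0, 124.0, 118.0]   # production mean ~120 (+20%)
print(mean_drift(baseline, production))     # True  -> re-enter approval workflow
print(mean_drift(baseline, baseline))       # False -> keep serving
```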
Third-Party Model Risk
Foundation models are often consumed rather than built. Third-party model risk asks: who trained it, on what data, with what safety testing, and can we audit any of that? AWS AI Service Cards and Bedrock's published model documentation provide provider-side transparency, but the customer remains accountable. Your governance policy should explicitly cover which third-party models are approved, which guardrails are mandatory on top, and what happens when a provider updates a model version without notice.
Model inventory is the master list. Risk rating tiers models (low/medium/high or EU AI Act tiers). Approval gates block unapproved models from production (SageMaker Model Registry + Pipelines). Post-deployment monitoring detects drift (SageMaker Model Monitor, Bedrock Model Evaluation). Third-party model risk covers foundation models you did not train yourself. Reference: https://aws.amazon.com/ai/responsible-ai/
AI Incident Response Patterns
AI incidents differ from classic security incidents. A classic incident is a breach; an AI incident can be a biased output, a hallucination that misled a customer, a successful prompt injection, or a leaked training record — all of which can occur without a single credential being stolen.
Detection
Detection sources include Bedrock Guardrails violation logs (a user triggered a denied topic), CloudWatch anomaly metrics (invocation rate spiked 50x), SageMaker Model Monitor alerts (bias drift crossed threshold), customer complaints funnelled through a feedback channel, and regulator notifications. A mature program feeds all of these into a single incident queue. Note: Amazon GuardDuty does not detect AI-specific incidents — do not pick it as the first answer in a scenario describing model bias or hallucination.
Containment
Containment for AI incidents has two unique patterns beyond classic containment. First, model rollback — revert the production endpoint to a previous approved model version (SageMaker endpoint in-place update or Bedrock provisioned throughput switch). Second, guardrail tightening — temporarily raise Bedrock Guardrail strictness or apply a narrower topic-denial policy while root cause is investigated. Only after containment do you move to root cause.
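The rollback pattern reduces to "find the most recent previously approved version". This sketch models the version history as a plain list; a real rollback would then update the SageMaker endpoint (or switch Bedrock provisioned throughput) to the returned version:

```python
# Sketch of model-rollback containment: given a version history, pick
# the most recent Approved version older than the one being contained.

def rollback_target(versions, current):
    """Most recent approved version older than the suspect one, or None."""
    candidates = [v["version"] for v in versions
                  if v["status"] == "Approved" and v["version"] < current]
    return max(candidates) if candidates else None

history = [
    {"version": 1, "status": "Approved"},
    {"version": 2, "status": "Approved"},
    {"version": 3, "status": "Approved"},   # currently serving, now suspect
]
print(rollback_target(history, current=3))  # 2
```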
Root Cause Analysis
Root cause for AI incidents usually falls into one of four categories: data root cause (biased or poisoned training data), model root cause (fine-tuning degraded behaviour), prompt root cause (system prompt allowed exploitation), or operational root cause (guardrail misconfigured). CloudTrail and Config give you the timeline; SageMaker Clarify and feature-attribution reports give you the behavioural explanation; the model registry and data catalog give you the lineage.
Remediation and Communication
Remediation may require retraining (if data root cause), re-tuning (if model root cause), updating the system prompt or prompt template (if prompt root cause), or reconfiguring the guardrail. Communication is its own workstream — under GDPR Article 33 a personal-data breach must be notified to the supervisory authority within 72 hours, and the EU AI Act introduces incident-reporting obligations for high-risk AI systems. Your incident response playbook must include legal and DPO touchpoints.
Post-Incident Review
Post-incident review feeds back into model risk management: the risk rating may rise, the approval gate may tighten, the responsible AI policy may add a new forbidden use case. Auditors expect to see evidence that the governance committee reviewed every material incident within a defined SLA.
A biased output affecting a protected class, a hallucinated financial recommendation that misled a customer, or a successful prompt-injection that leaked confidential context each constitutes an AI incident — even if no credentials were stolen and no database exfiltrated. Your incident response plan must cover these, not only classic breaches. Reference: https://artificialintelligenceact.eu/
Composing CloudTrail, Config, and Audit Manager Across the AI Stack
The AWS audit trio — CloudTrail, Config, Audit Manager — underpins every claim you make to an auditor or regulator. Understanding how they compose across the AI stack is a common exam target.
AWS CloudTrail for AI Workloads
CloudTrail records every API call that changes or uses AI resources. Management events cover IAM grants on Bedrock, creation of SageMaker endpoints, and updates to Bedrock Guardrails. Data events (opt-in, higher volume) cover individual model invocations — InvokeModel on Bedrock, InvokeEndpoint on SageMaker, content sent to Amazon Comprehend or Amazon Rekognition. Enabling data events is what makes per-invocation forensics possible; disabling them leaves you able to see who configured the model but not who used it.
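The management-vs-data split can be drilled on simplified records. eventSource, eventName, and userIdentity are real CloudTrail record fields; the records below are trimmed illustrations, not full CloudTrail JSON:

```python
# Classify simplified CloudTrail records into the management vs data
# event split described above.

DATA_EVENT_NAMES = {"InvokeModel", "InvokeEndpoint"}   # per-invocation usage

def event_kind(record: dict) -> str:
    if record["eventName"] in DATA_EVENT_NAMES:
        return "data"         # who USED the model (opt-in, high volume)
    return "management"       # who CONFIGURED the model

records = [
    {"eventSource": "bedrock.amazonaws.com", "eventName": "InvokeModel",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:role/app"}},
    {"eventSource": "sagemaker.amazonaws.com", "eventName": "CreateEndpoint",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:role/mlops"}},
]
for r in records:
    print(r["eventName"], "->", event_kind(r))
# InvokeModel -> data
# CreateEndpoint -> management
```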
AWS Config for AI Workloads
Config tracks configuration state and configuration history. For AI workloads the resource types of interest include AWS::SageMaker::Endpoint, AWS::SageMaker::Model, AWS::SageMaker::NotebookInstance, and Bedrock resources once they appear in Config coverage. Config rules can enforce policy — for example, a rule requiring every SageMaker endpoint to have VPC configuration attached, or every Bedrock knowledge base to use a customer-managed KMS key. Conformance packs bundle dozens of such rules aligned to frameworks like NIST 800-53 or HIPAA.
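The "every SageMaker endpoint must have VPC configuration" rule mentioned above can be sketched as the evaluation logic of a custom Config rule. The resource dict mimics, in simplified form, the configuration item a real rule's Lambda function would receive; COMPLIANT/NON_COMPLIANT are the real Config evaluation values:

```python
# Sketch of a detective control: custom Config rule logic checking that
# a SageMaker model resource has a VPC configuration attached.

def evaluate_vpc_rule(configuration_item: dict) -> str:
    vpc = configuration_item.get("configuration", {}).get("VpcConfig")
    if vpc and vpc.get("Subnets"):
        return "COMPLIANT"
    return "NON_COMPLIANT"    # reachable outside the network boundary

item = {"resourceType": "AWS::SageMaker::Model",
        "configuration": {"VpcConfig": {"Subnets": ["subnet-0abc"],
                                        "SecurityGroupIds": ["sg-0def"]}}}
print(evaluate_vpc_rule(item))                   # COMPLIANT
print(evaluate_vpc_rule({"configuration": {}}))  # NON_COMPLIANT
```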
AWS Audit Manager for AI Workloads
AWS Audit Manager continuously collects evidence from CloudTrail, Config, Security Hub, and manual uploads, then maps that evidence to controls in a framework. AWS publishes pre-built frameworks (SOC 2, PCI DSS, HIPAA, GDPR, ISO 27001, NIST CSF) and you can build custom frameworks aligned to NIST AI RMF or ISO/IEC 42001. The output is an assessment report suitable for an external auditor — effectively turning continuous telemetry into a signed PDF.
AWS Artifact for AI Workloads
AWS Artifact distributes AWS's own externally audited compliance reports (SOC 1/2/3, ISO 27001, ISO 27017, ISO 27018, PCI DSS, FedRAMP) and legal agreements (HIPAA BAA, GDPR DPA). Artifact does not make your AI workload compliant, but it lets you inherit AWS's controls for the infrastructure layer and sign the agreements that flip legal switches (for example, the HIPAA BAA which is required before you store PHI on Bedrock or SageMaker).
How They Compose — The Audit Chain
An auditor reviewing a high-risk AI system typically asks a chain of questions. "Was the model approved?" — SageMaker Model Registry approval status plus the governance committee minutes. "What data was it trained on?" — SageMaker Lineage plus AWS Glue Data Catalog plus Macie findings. "Who has been invoking it?" — CloudTrail data events. "Is it still configured the way you approved?" — AWS Config history. "Show me your evidence mapped to ISO 27001 control A.8.1.1" — AWS Audit Manager assessment. "Is AWS itself certified?" — AWS Artifact SOC 2 report. Every question has an answer because every layer was wired in from the start.
CloudTrail = who made API calls and who invoked the model. Config = what the configuration looks like now and how it changed. Audit Manager = evidence collection mapped to a control framework. Artifact = AWS's own compliance reports and legal agreements. A common trap: candidates pick Artifact when the question wants Config, or Config when the question wants CloudTrail. Drill the verbs: "call" is CloudTrail, "configuration" is Config, "evidence for audit" is Audit Manager, "download AWS report" is Artifact. Reference: https://docs.aws.amazon.com/audit-manager/latest/userguide/what-is.html
AI Security, Compliance & Governance vs Adjacent Topics
This hub intersects with several adjacent topics; keep the boundaries clear.
Hub vs AI Threat Model (5.1)
This hub frames the boundaries and the external regulations. The AI Threat Model and Attack Types sibling catalogues the specific attacks — prompt injection, jailbreaking, model inversion, data poisoning, adversarial examples. A question describing an attack technique belongs to the threat-model topic; a question about the regulatory or programmatic response belongs here.
Hub vs IAM and Bedrock Security (5.1)
IAM and Bedrock Security owns the identity and network boundary plumbing: IAM policies, Bedrock resource policies, VPC endpoints, KMS keys. This hub owns how those primitives serve a bigger governance program.
Hub vs Bedrock Guardrails (5.1)
Bedrock Guardrails and Controls owns the model-output boundary — content filters, denied topics, PII redaction, grounding checks. This hub owns how guardrails fit into a regulated program (mandatory for EU AI Act high-risk systems, mapped to ISO/IEC 42001 operational controls).
Hub vs Data Governance (5.2)
Data Governance and PII owns the data-boundary plumbing: Macie, Glue Data Catalog, lineage, PII detection, training-data quality. This hub owns the regulatory and program layer that consumes those signals (GDPR obligations, model risk management, incident response).
Hub vs Responsible AI Principles (4.1)
Responsible AI Principles owns the seven AWS pillars (fairness, explainability, privacy, safety, controllability, veracity, governance) at the design level. This hub operationalizes the governance pillar specifically — committees, policies, risk management, incident response.
Hub vs Transparency and Explainability (4.2)
Transparency and Explainability covers the technical distinction and tools (SageMaker Clarify, A2I, Model Cards). This hub uses those tools as evidence supporting GDPR Article 22, EU AI Act human-oversight obligations, and ISO/IEC 42001 documentation controls.
Key Numbers and Must-Memorize Facts
AIF-C01 does not demand precise numeric mastery in D5, but a handful of anchors recur in scenarios.
- EU AI Act defines four risk tiers: unacceptable, high, limited, minimal.
- NIST AI RMF has four core functions: Govern, Map, Measure, Manage.
- ISO/IEC 42001:2023 is the first certifiable AI management system standard.
- GDPR breach notification window to the supervisory authority is 72 hours (Article 33).
- CloudTrail keeps 90 days of management events in Event History for free; data events require an explicit trail.
- Bedrock Guardrails apply at both input and output — double-layer filtering.
- SageMaker Model Registry approval statuses: Approved, Rejected, PendingManualApproval.
- AWS supports 140+ compliance programs; the full list is at AWS Compliance Programs.
Common Exam Traps
Beyond the CloudTrail/Config/Audit Manager trap called out above, several other D5 confusions burn candidates.
AWS compliance vs customer compliance
AWS Artifact publishes AWS's audit reports for AWS infrastructure. It does not make your AI application compliant. You remain responsible for IAM, encryption choices, guardrails, training-data governance, and model documentation. The exam tests this inheritance boundary repeatedly.
NIST AI RMF vs NIST Cybersecurity Framework
Both are published by NIST and both use the word "framework", but they are different documents with different function names. AI RMF uses Govern/Map/Measure/Manage; CSF uses Identify/Protect/Detect/Respond/Recover. Questions that invoke AI risk should match AI RMF; generic cyber questions match CSF.
EU AI Act vs GDPR
GDPR is about personal data processing regardless of whether AI is involved. EU AI Act is about AI systems regardless of whether personal data is involved. They overlap when an AI system processes personal data — both apply. A common trap answer invokes one when the question wants the other.
Governance committee vs technical control
"How do we ensure high-risk models are approved before deployment?" has a governance answer (committee + approval gate) and a technical enabler (SageMaker Model Registry approval status). The exam typically wants the combined answer — the committee decides, the technology enforces.
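The "committee decides, the technology enforces" pairing can be sketched in code: a committee sign-off becomes enforceable when someone flips the model package's approval status in the SageMaker Model Registry. The package ARN and minutes reference below are hypothetical, and the boto3 call is shown but not executed:

```python
# Sketch: a governance committee decision recorded as a Model Registry status
# change. The package ARN and minutes reference are hypothetical.

VALID_STATUSES = {"Approved", "Rejected", "PendingManualApproval"}

def approval_update(package_arn: str, decision: str, minutes_ref: str) -> dict:
    """Build the UpdateModelPackage request recording a committee decision."""
    if decision not in VALID_STATUSES:
        raise ValueError(f"unknown status: {decision}")
    return {
        "ModelPackageArn": package_arn,
        "ModelApprovalStatus": decision,
        # ApprovalDescription links the technical gate back to governance evidence
        "ApprovalDescription": f"AI governance committee, minutes {minutes_ref}",
    }

req = approval_update(
    "arn:aws:sagemaker:eu-west-1:111122223333:model-package/credit-scoring/3",
    "Approved", "2025-06-12")

# import boto3
# boto3.client("sagemaker").update_model_package(**req)
```

Deployment automation that only promotes packages with `Approved` status then turns the committee minutes into a hard gate rather than a convention.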
ISO/IEC 42001 vs ISO 27001
ISO 27001 is the information security management standard; ISO/IEC 42001 is the AI management standard. They stack rather than substitute — a serious AI vendor typically holds both.
AI incident vs classic security incident
A biased output, a prompt injection, or a hallucinated answer is an AI incident even when no data was exfiltrated and no credentials were stolen. Incident response playbooks must cover these; GuardDuty alone will not.
Practice Question Links — Task 5.2 Mapped Exercises
Expect AIF-C01 exam items on AI security, compliance, and governance in these shapes.
- "A company building a hiring recommendation AI for the EU market asks which framework applies." Answer: EU AI Act (hiring is high-risk) plus GDPR (processing personal data).
- "Which AWS service continuously collects evidence mapped to a compliance framework for AI workloads?" Answer: AWS Audit Manager.
- "An auditor wants to know who invoked a specific Bedrock model last Tuesday." Answer: CloudTrail data events.
- "A company needs to demonstrate its AI management system is externally certified against an international standard." Answer: ISO/IEC 42001.
- "Which AWS document do you sign before storing PHI in a SageMaker training job?" Answer: HIPAA Business Associate Addendum via AWS Artifact.
- "How does a company ensure a fine-tuned model cannot reach a production endpoint without executive sign-off?" Answer: SageMaker Model Registry with `PendingManualApproval` status in a Pipelines gate, plus the governance committee.
- "Which framework organizes AI risk around Govern, Map, Measure, Manage?" Answer: NIST AI Risk Management Framework.
- "A biased output from a production LLM is discovered. Which AWS service is first in the incident response chain for forensics?" Answer: CloudTrail (who invoked, when) plus SageMaker Model Monitor alerts (bias drift).
FAQ — AI Security, Compliance & Governance Top Questions
Q1. Is the EU AI Act the same thing as GDPR?
No. GDPR (2016/679) regulates processing of personal data about EU residents, whether or not AI is involved. The EU AI Act (2024) regulates AI systems placed on the EU market, whether or not personal data is involved. They stack: an AI system processing EU personal data must comply with both. GDPR drives PII handling, consent, and right-to-explanation concerns; the EU AI Act drives risk-based tiering, conformity assessment, logging, and human oversight obligations. On AWS, GDPR obligations are typically addressed through Macie, Comprehend PII detection, Bedrock Guardrails PII redaction, and the AWS GDPR Data Processing Addendum in AWS Artifact; EU AI Act obligations add SageMaker Model Cards, A2I for human oversight, and Audit Manager for continuous evidence.
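The Bedrock Guardrails PII-redaction piece of that answer can be sketched as a guardrail configuration. Field names follow the `CreateGuardrail` API as we understand it, and the guardrail name and entity list are illustrative assumptions; the boto3 call is commented out:

```python
# Sketch: a Bedrock guardrail sensitive-information policy that anonymizes PII
# on both input and output. Name and entity types are hypothetical choices.

def pii_guardrail_config(name: str, pii_types: list[str]) -> dict:
    """Build a CreateGuardrail request body with a PII anonymization policy."""
    return {
        "name": name,
        "blockedInputMessaging": "Request blocked by content policy.",
        "blockedOutputsMessaging": "Response blocked by content policy.",
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [
                # ANONYMIZE masks the entity rather than blocking the whole turn
                {"type": t, "action": "ANONYMIZE"} for t in pii_types
            ]
        },
    }

cfg = pii_guardrail_config("gdpr-chatbot-guardrail", ["EMAIL", "NAME", "PHONE"])

# import boto3
# boto3.client("bedrock").create_guardrail(**cfg)
```

Because guardrails evaluate both the prompt and the model response, a single configuration like this covers the "double-layer filtering" fact from the key-numbers list.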
Q2. Do I need to be ISO/IEC 42001 certified to deploy AI on AWS?
No. ISO/IEC 42001 is a voluntary certification that demonstrates your organization has a mature AI management system. AWS itself is working through multiple AI-related certifications and publishes status in AWS Artifact. Your own certification decision depends on market requirements — EU customers increasingly ask for it, and it strengthens evidence for EU AI Act high-risk system conformity. For the AIF-C01 exam, recognize ISO/IEC 42001 as the certifiable AI management system standard and distinguish it from the voluntary NIST AI RMF.
Q3. What does an AI governance committee actually do on a weekly basis?
A mature AI governance committee typically reviews new AI use-case intake forms, approves or rejects models pending deployment, reviews incident reports from the past week, updates the responsible AI policy when new risks emerge, and signs off on third-party foundation model additions. On AWS, the committee's approvals manifest technically as SageMaker Model Registry status changes, SCP updates in AWS Organizations, AWS Config rule adjustments, and Bedrock Guardrail configuration changes. Auditors ask for committee minutes as primary evidence of active governance.
Q4. How does CloudTrail differ from Audit Manager for AI compliance?
CloudTrail is the raw API-call log — every action on every resource, captured event by event. Audit Manager is a higher-level service that consumes CloudTrail (plus Config, Security Hub, and manual uploads) and maps findings to controls in a compliance framework (SOC 2, HIPAA, ISO, NIST). You use CloudTrail for forensics ("who invoked this model"), Audit Manager for compliance reports ("here is our evidence for SOC 2 control CC6.1"). They complement each other — Audit Manager without CloudTrail has no raw data; CloudTrail without Audit Manager requires manual evidence mapping.
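The forensics half of that split ("who invoked this model") maps to a CloudTrail `LookupEvents` query. A minimal sketch with hypothetical dates; the paginated boto3 call is commented out so the query shape can be examined without credentials:

```python
# Sketch: "who invoked this model last Tuesday" as a CloudTrail lookup.
# Dates are hypothetical placeholders.
from datetime import datetime

def invocation_lookup(event_name: str, start: datetime, end: datetime) -> dict:
    """Build a LookupEvents request filtered to one API action and time window."""
    return {
        "LookupAttributes": [
            {"AttributeKey": "EventName", "AttributeValue": event_name},
        ],
        "StartTime": start,
        "EndTime": end,
    }

query = invocation_lookup("InvokeModel",
                          datetime(2025, 6, 10), datetime(2025, 6, 11))

# import boto3
# paginator = boto3.client("cloudtrail").get_paginator("lookup_events")
# for page in paginator.paginate(**query):
#     for event in page["Events"]:
#         print(event["Username"], event["EventTime"])  # the "who" and "when"
```

Audit Manager would then map events like these to framework controls automatically, which is exactly the complementarity described above.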
Q5. What is an AI incident and when must it be reported externally?
An AI incident is any event where an AI system causes harm or near-harm — biased output affecting a protected group, hallucinated advice leading a customer astray, prompt injection leaking confidential context, or model drift degrading accuracy below an agreed threshold. External reporting obligations depend on jurisdiction and incident type. Under GDPR, a personal-data breach must be notified to the supervisory authority within 72 hours (Article 33) and to affected data subjects without undue delay when there is a high risk. Under the EU AI Act, providers of high-risk AI systems must report serious incidents to the competent national authority. Sector regulators add their own rules (FDA for medical AI, FINRA for investment AI). Your internal incident response playbook must encode these triggers and involve legal counsel and the DPO from detection onward.
Q6. Can I use AWS Control Tower guardrails as EU AI Act compliance evidence?
Partially. AWS Control Tower mandatory guardrails (for example, prohibiting changes to CloudTrail) establish the foundational logging and governance posture that EU AI Act high-risk systems require. But Control Tower guardrails alone do not satisfy AI-specific obligations around training-data quality, model documentation, human oversight, or conformity assessment. Treat Control Tower as the strong foundation of AI security compliance governance on AWS and layer AI-specific controls (Bedrock Guardrails, SageMaker Clarify, Model Cards, Audit Manager AI evidence) on top.
Q7. How do I handle third-party foundation model risk on Amazon Bedrock?
Three steps. First, restrict via IAM — scope bedrock:InvokeModel to only the specific model identifiers your governance committee has approved. Second, layer Bedrock Guardrails on top of the vendor's built-in safety training to enforce your organization's content policy. Third, document the model in your inventory with provider, version, training cutoff, AWS AI Service Card reference, and approval date; route any vendor model-version update back through the approval gate. Model provenance for Bedrock models also means recognizing that AWS itself does not train third-party models — Anthropic, Meta, Mistral, Stability, and Cohere do — so your supply-chain due diligence extends to them.
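Step one above (IAM restriction) can be sketched as a policy document that allows `bedrock:InvokeModel` only on committee-approved foundation models. The region and model identifier are hypothetical; foundation-model ARNs omit the account ID:

```python
# Sketch: an IAM policy scoping Bedrock invocation to an approved-model list.
# Region and model identifier are hypothetical examples.
import json

APPROVED_MODELS = [
    "arn:aws:bedrock:eu-west-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
]

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "InvokeApprovedModelsOnly",
        "Effect": "Allow",
        "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
        # anything not in the approved list is implicitly denied
        "Resource": APPROVED_MODELS,
    }],
}

print(json.dumps(policy, indent=2))
```

When the committee approves a new model or retires an old one, the change lands here — the approved-model list is the machine-readable twin of the model inventory described above.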
Further Reading
- AWS Well-Architected Machine Learning Lens: https://docs.aws.amazon.com/wellarchitected/latest/machine-learning-lens/machine-learning-lens.html
- AWS Responsible AI — policies and resources: https://aws.amazon.com/ai/responsible-ai/
- AWS Cloud Adoption Framework for AI/ML/Generative AI: https://docs.aws.amazon.com/whitepapers/latest/aws-caf-for-ai/aws-caf-for-ai.html
- AWS Compliance Programs overview: https://aws.amazon.com/compliance/programs/
- AWS Audit Manager User Guide: https://docs.aws.amazon.com/audit-manager/latest/userguide/what-is.html
- AWS CloudTrail User Guide: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
- AWS Config Developer Guide: https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html
- AWS Artifact User Guide: https://docs.aws.amazon.com/artifact/latest/ug/what-is-aws-artifact.html
- Amazon Bedrock Security: https://docs.aws.amazon.com/bedrock/latest/userguide/security.html
- Amazon SageMaker Security: https://docs.aws.amazon.com/sagemaker/latest/dg/security.html
- NIST AI Risk Management Framework (AI RMF 1.0): https://www.nist.gov/itl/ai-risk-management-framework
- ISO/IEC 42001:2023 — AI management system: https://www.iso.org/standard/81230.html
- EU Artificial Intelligence Act: https://artificialintelligenceact.eu/
- EU General Data Protection Regulation: https://gdpr-info.eu/
- AWS Certified AI Practitioner Exam Guide (AIF-C01): https://d1.awsstatic.com/training-and-certification/docs-ai-practitioner/AWS-Certified-AI-Practitioner_Exam-Guide.pdf