Improving the security posture of an existing AWS architecture is fundamentally different from designing a greenfield workload. In a new build you choose the rails up front — default encryption on, Secrets Manager from day one, IAM Identity Center instead of IAM users, ACM-issued certificates on every listener. In an existing system you inherit whatever the previous team (or the acquired company) already shipped: a sprawl of unencrypted EBS volumes, hardcoded database credentials in a Docker image tagged latest-prod, a single IAM user named jenkins-deploy with an access key issued in 2019, two hundred S3 buckets no one has inventoried, a legacy OpenVPN appliance that lives outside any SSO, and a root account that still has access keys attached. SAP-C02 Task 3.2 tests whether you can walk into that situation, run a diagnostic, and prescribe the remediation sequence in the right order — not just list every AWS security service you know.
This guide assumes Associate-level familiarity with IAM, KMS, and CloudTrail and focuses on the diagnose-then-remediate posture the Professional exam rewards. We will cover the Security Hub FSBP baseline as the scoring surface, Inspector v2 as the vulnerability retrofit engine, Macie for PII discovery across legacy buckets, IAM Access Analyzer's three complementary jobs, an encrypt-at-rest migration playbook that survives stateful resources, the secrets rotation pattern that gets you off hardcoded credentials, legacy VPN to AWS Verified Access + IAM Identity Center, TLS enforcement retrofit on existing ALBs, runtime protection rollout with GuardDuty, and an incident response playbook that ends in Amazon Detective. Each section ends with the SAP-C02 trap signals and the auto-remediation glue (EventBridge → Config → SSM Automation) that the exam expects you to know cold.
What Security Posture Improvement Means on SAP-C02
Security posture improvement is the discipline of taking an AWS environment that is already running production traffic and raising its security baseline — closing gaps, enforcing encryption, rotating credentials, enabling detective controls, and driving continuous compliance — without forcing a full rewrite. The Professional exam frames Task 3.2 as a diagnostic problem: given a set of symptoms (CVEs in an EC2 fleet, public S3 bucket discovered by an auditor, hardcoded AWS access key in a Git repo, unrotated KMS key, disabled CloudTrail in a member account), identify the correct combination of AWS services to reach a remediated state, and sequence those services so high-blast-radius findings are closed first.
Crucially, Domain 3 questions are written in a different voice than Domain 2 questions. Domain 2 asks "a company is building a new SaaS product, what should the security architecture be?" Domain 3 asks "a company already has a SaaS product that fails its first SOC 2 audit, what should they do next, in order?" The correct answer always respects three constraints the exam reuses: minimise downtime on stateful resources, prefer native managed services over third-party agents, and favour preventive + detective control pairs over point fixes.
- Security Hub FSBP (AWS Foundational Security Best Practices): a managed security standard (100+ controls) that scores your account on conformance with AWS's baseline and produces individual control findings you can remediate; functions as the Professional exam's default "where are we today?" posture score.
- Inspector v2: the automated vulnerability management service that continuously scans EC2 instances, Lambda functions (code and dependencies), and ECR container images for CVEs and unintended network exposure.
- IAM Access Analyzer: the zone-of-trust analyzer that finds resource policies granting external access, flags unused IAM roles/users/access keys, and generates least-privilege IAM policies from CloudTrail history.
- Amazon Macie: the sensitive-data discovery service that samples S3 objects with managed data identifiers (PII, PHI, credentials) and generates findings with file-level precision.
- AWS Config auto-remediation: the Config Rule + SSM Automation pairing that evaluates resources for compliance and, on non-compliance, triggers a runbook that fixes the resource automatically.
- Encrypt-at-rest retrofit: the migration playbook that inventories existing unencrypted EBS/S3/RDS resources, enables account-level defaults for new resources, then performs snapshot-restore or copy-based migration to bring legacy resources under KMS CMK encryption.
- AWS Verified Access: the zero-trust application access service that replaces VPNs by evaluating each HTTPS request against identity (IAM Identity Center / OIDC IdP) and device posture before brokering access to an internal application.
- Reference: https://docs.aws.amazon.com/securityhub/latest/userguide/fsbp-standard.html
Security Improvement of Existing Systems, Explained in Plain Language
Existing-system security work is half archaeology and half plumbing. Three analogies from very different domains make the workflow memorable.
Analogy 1: The Home Inspection Before a Renovation
You just bought a 1980s house. Before you pick wallpaper you hire a home inspector who runs down a checklist — smoke detectors, electrical panel, roof, plumbing, asbestos, termite damage — and hands you a report with severity ratings. AWS Security Hub with FSBP is exactly this inspector: it enumerates the 100+ checks you should have been doing and gives you a compliance score and a prioritised finding list. Amazon Inspector v2 is the termite + pest inspector specifically for your "wooden" resources (EC2 AMIs, Lambda code, ECR images) — it won't judge your paint colour, but it will find every CVE crawling inside the walls. Amazon Macie is the document auditor you hire separately to open every drawer and file cabinet (S3 bucket) and tell you which ones contain old passports, tax returns, or credit-card numbers the previous owner left behind. IAM Access Analyzer is the locksmith who walks every door, identifies which keys open which rooms, notices that the pool cleaner's key also opens the master bedroom (external access finding), and notes which keys haven't been used in 90 days (unused access). Once you have all four reports, you schedule contractors in the right order: fix the missing smoke detectors first (CloudTrail + MFA), then re-key the locks (rotate access keys), then encrypt the "safe" (RDS/S3/EBS), then renovate the bathroom (Verified Access cutover).
Analogy 2: Taking Over a Messy Kitchen
A new head chef takes over a failing restaurant. The previous team left expired ingredients in the walk-in (unused IAM users), the master key to the liquor cabinet hanging on a hook in the hallway (root account access keys), handwritten recipes taped above the stove that include supplier passwords (hardcoded credentials), and no CCTV (no CloudTrail data events). The chef cannot close the restaurant — it still serves dinner tonight. So she works in order of blast radius. Day 1: change the walk-in lock and liquor-cabinet key first (revoke root keys, enforce MFA). Day 1-2: install CCTV before anything else so she can watch what's happening while she fixes it (enable CloudTrail organization trail + Security Hub + GuardDuty). Week 1: inventory every ingredient (Macie on S3, Inspector on EC2, Access Analyzer on IAM). Week 2: move all supplier passwords into a locked safe that rotates combinations weekly (Secrets Manager). Week 3-4: start replacing non-food-safe cookware (encrypt-at-rest migration via snapshot-restore). Month 2: replace the delivery-driver side-door (legacy VPN) with a badge reader that checks identity per entry (Verified Access). The restaurant never closes; each fix is a small change with a checkpoint.
Analogy 3: The Airport Upgrading Its Security After a Breach
An international airport has been running for twenty years and just failed a regulator audit. The fix is not "build a new airport." It is a layered rollout. AWS Config rules + SSM Automation are the automatic baggage recheck conveyor — if a bag reaches the gate without the right tag, the conveyor routes it back for scanning automatically, no human needed. Inspector v2 is the X-ray machine at every checkpoint, continuously scanning each passenger's bags (EC2 AMI, Lambda package, container image) against a threat database that updates hourly. GuardDuty runtime protection is the undercover officers walking the concourse watching for suspicious behaviour after a passenger passed the checkpoint (a container making DNS queries to a known C2 domain, a Lambda function spawning an unexpected shell). Amazon Detective is the investigation room — once something trips an alarm, the chief security officer sits down with a single pane of glass that stitches together CloudTrail, VPC Flow Logs, and GuardDuty findings to build a timeline. AWS Verified Access is the new priority-pass kiosk that replaces the old unreliable staff side-door — every entry requires identity + device posture check per request, not a one-time badge swipe.
For SAP-C02 Task 3.2, the home inspection analogy is best when the question lists a collection of findings and asks "what is the prioritisation?" — you treat Security Hub as the inspector's report. The messy kitchen analogy is best when the question emphasises "the system is in production, cannot take downtime" — you sequence remediations with stateful resources left for snapshot-restore windows. The airport analogy is best when the question mixes preventive + detective + runtime + forensic services. Reference: https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html
Diagnostic Entry Point: Where to Start on an Unfamiliar Account
The Professional exam rewards candidates who start with the diagnostic, not the remediation. When a question begins "a company recently acquired a smaller competitor and inherited the competitor's AWS account — what is the first action?", the correct answer is almost never "enable encryption" or "rotate keys" — it is "get visibility." The first objective is to know what you own, what is exposed, and which findings are already critical; only then can you prioritise.
The four-question diagnostic
- Do I have an inventory? Enable AWS Config (recorder + delivery channel) across all regions in the account. Config records the full resource snapshot every time a resource changes. Without Config, you are asking questions blind.
- Do I have audit logs? Enable a CloudTrail organization trail into a central log archive account with S3 Object Lock or MFA delete. Management events must be on; data events for S3 object-level and Lambda invocation come next. Without CloudTrail, you cannot answer "who did that?"
- What is my baseline score? Enable AWS Security Hub with the FSBP standard (and CIS AWS Foundations and PCI DSS if applicable). Security Hub ingests Config rule results, GuardDuty, Inspector, Macie, IAM Access Analyzer, and Firewall Manager findings into a unified model (ASFF — AWS Security Finding Format) and produces a percentage compliance score per standard.
- What's currently on fire? Enable GuardDuty (with S3, EKS, Lambda, RDS, and Runtime Monitoring features) to light up anything actively malicious. GuardDuty analyses CloudTrail, VPC Flow Logs, DNS logs, and runtime sensors; it is the fastest "do I have an active intrusion right now?" signal.
Only after those four foundations are on should you start issuing remediation work orders. On the exam, if a question asks "the first step after inheriting an unfamiliar account," the correct answer is almost always one of: enable CloudTrail, enable Security Hub + standards, enable Config, enable GuardDuty — often grouped under "enable the foundational services and aggregate findings into the security tooling account."
The acquired-company diagnostic scenario
Scenario: Your company just acquired a 50-person competitor. The acquired AWS account shows 200 S3 buckets with unknown contents, the root user has an active access key last used seven days ago, no accounts have MFA enforced, CloudTrail is enabled in us-east-1 only, there is one IAM group called Admins with 14 members all using long-lived access keys, and Security Hub has never been turned on. What do you do on Day 1?
The SAP-C02 correct answer is a sequence, not a single action:
- Secure the front door: rotate or, ideally, delete the root access key immediately (AWS best practice is that the root user should have no access keys); enable MFA on the root user; add a hardware or virtual MFA to every IAM user with console access, and enforce MFA via an SCP or IAM policy condition (aws:MultiFactorAuthPresent). If the root access key was leaked, also revoke all session credentials for every IAM role used that day.
- Turn on the cameras before doing anything else: enable a multi-region CloudTrail trail with log-file validation, and push logs to an S3 bucket in the central log-archive account with Object Lock. Enable Config in every region. The reason to do this before remediation is that any subsequent key rotation, policy change, or resource modification will leave a forensic trail you can later audit.
- Enable detective services: turn on GuardDuty, Security Hub with FSBP and CIS standards, IAM Access Analyzer at organization scope, and Inspector v2 with all three scans (EC2 + Lambda + ECR). At this point you have the baseline score and the full finding inventory.
- Triage findings by blast radius: critical/high findings first — public S3 buckets, IAM users with AdministratorAccess + no MFA, unencrypted RDS instances with public snapshots, security groups with 0.0.0.0/0 to port 22 or 3389, CloudTrail disabled anywhere, KMS keys with deletion scheduled.
- Begin the scoped remediation waves: each wave targets a related control set (credentials, encryption, network exposure, secrets, runtime) and runs with its own rollback plan.
SAP-C02 questions that include multiple-choice options where one choice is "immediately enable EBS default encryption" and another choice is "first enable CloudTrail, Config, Security Hub, GuardDuty, and Access Analyzer" will reward the visibility-first answer. You cannot prioritise what you cannot see. Reference: https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html
Security Hub FSBP Baseline and Compliance Drift Remediation
AWS Security Hub is the scoreboard. The AWS Foundational Security Best Practices (FSBP) standard is a curated set of over 100 controls that map to the highest-signal security hygiene items across IAM, S3, EC2, RDS, KMS, CloudTrail, Config, EFS, Lambda, ELB, and more. Each control maps to one or more AWS Config rules under the hood; Security Hub runs those rules, collects findings in AWS Security Finding Format (ASFF), and computes a per-standard compliance score.
How Security Hub drives improvement
On an existing account, Security Hub's job is threefold:
- Baseline: give you a today-score (e.g., "58% compliant on FSBP") so you can measure progress week over week.
- Prioritise: rank findings by severity (Critical / High / Medium / Low / Informational) and attach a remediation link for each.
- Automate: feed findings into EventBridge so custom actions, SNS notifications, or SSM Automation runbooks can trigger remediation without human touch.
Key architectural moves on the exam:
- Aggregator account: designate the Audit / Security Tooling account as the delegated administrator for Security Hub via AWS Organizations, then enable cross-region aggregation so every finding in every region in every member account funnels into one console and one S3 bucket for downstream ingestion.
- Cross-Region aggregation: the linking-Regions feature allows the aggregator region to receive copies of findings from every other linked region so a single dashboard reflects the entire estate.
- Finding lifecycle: findings have a workflow status (NEW / NOTIFIED / SUPPRESSED / RESOLVED) and automation rules that can flip the workflow, assign severity, or route via EventBridge.
Compliance drift and auto-remediation via Config + SSM Automation
"Drift" on the Professional exam means a resource was once compliant and later changed. Example: someone added a new security group rule opening port 22 to 0.0.0.0/0. The remediation architecture is always the same three-step pattern:
- Detect — an AWS Config rule evaluates the resource against a desired configuration. The managed rule restricted-ssh flags any security group with ingress on port 22 from 0.0.0.0/0.
- Route — an EventBridge rule matches Config compliance change events (source: aws.config, detail-type: Config Rules Compliance Change) with newEvaluationResult.complianceType = NON_COMPLIANT.
- Remediate — Config's native remediation action invokes an SSM Automation document (managed or custom) that removes the offending rule, or EventBridge routes to a Lambda function that does the same.
AWS provides a catalog of pre-built SSM Automation runbooks for common remediations: AWSConfigRemediation-RemoveSecurityGroupIngressRule, AWSConfigRemediation-EnableEbsEncryptionByDefault, AWSConfigRemediation-EnableCloudTrail, AWSConfigRemediation-EnableKeyRotation, AWSConfigRemediation-EnableS3BucketEncryption, and more. You attach them to the Config rule as the remediation target, optionally with automatic execution (with or without approval).
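As a sketch, wiring one of these runbooks to a Config rule in CloudFormation might look like the following. The role ARN and the runbook's parameter names are illustrative — verify the required parameters against the specific runbook's documentation before deploying.

```yaml
Resources:
  RestrictedSshRemediation:
    Type: AWS::Config::RemediationConfiguration
    Properties:
      ConfigRuleName: restricted-ssh            # the managed rule that detects the drift
      TargetType: SSM_DOCUMENT
      TargetId: AWSConfigRemediation-RemoveSecurityGroupIngressRule
      Automatic: true                           # fix without waiting for human approval
      MaximumAutomaticAttempts: 3
      RetryAttemptSeconds: 60
      Parameters:
        AutomationAssumeRole:
          StaticValue:
            Values:
              - arn:aws:iam::111122223333:role/ConfigRemediationRole  # illustrative
        SecurityGroupId:                        # exact parameter name depends on the runbook
          ResourceValue:
            Value: RESOURCE_ID                  # pass the non-compliant resource's ID
```

The ResourceValue / RESOURCE_ID indirection is what lets one remediation configuration serve every security group the rule ever flags.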
The full detect-to-remediate chain the exam expects:
- Detect with AWS Config rule (managed or custom).
- Aggregate in Security Hub via the FSBP/CIS standard.
- Route via EventBridge rule on Security Hub Findings - Imported or Config Rules Compliance Change.
- Remediate via Config remediation action invoking an SSM Automation runbook, or via a direct EventBridge → SSM / Lambda target.
- Notify via SNS → email/Slack/PagerDuty for findings above a severity threshold.
- Audit the remediation itself in CloudTrail and in SSM Automation execution history.
- Reference: https://docs.aws.amazon.com/config/latest/developerguide/remediation.html
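The routing step above can be expressed as an EventBridge event pattern. This sketch matches newly imported Security Hub findings at CRITICAL or HIGH severity; the field paths follow the ASFF shape Security Hub publishes to EventBridge.

```json
{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"],
  "detail": {
    "findings": {
      "Severity": { "Label": ["CRITICAL", "HIGH"] },
      "Workflow": { "Status": ["NEW"] }
    }
  }
}
```

Attach an SSM Automation runbook, Lambda function, or SNS topic as the rule's target depending on whether the finding class is safe to auto-fix.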
Exam trap: Security Hub alone does not remediate
Security Hub produces findings; it does not fix them. The remediation verb is always delegated — usually to AWS Systems Manager Automation or AWS Lambda, glued by EventBridge. If a question asks "which service automatically remediates a finding?", the answer is Config remediation action or SSM Automation, not Security Hub. Security Hub's value is normalisation and aggregation, not action.
AWS Security Hub standards and most findings are Regional. You must enable Security Hub in every Region where you have resources, and then use cross-Region aggregation to pull findings into one aggregator Region. A single-Region Security Hub enablement is a common SAP-C02 wrong-answer distractor. Reference: https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cross-region-aggregation.html
Inspector v2 for EC2, Lambda, and ECR Vulnerability Retrofit
Amazon Inspector v2 is the automated continuous vulnerability management service. On an existing architecture, Inspector is the tool that turns "we have 200 EC2 instances, who knows their patch level" into a ranked CVE list with an Inspector score per finding. Unlike its predecessor Inspector Classic, Inspector v2 requires no dedicated agent — it piggybacks on the SSM Agent already present on managed EC2 instances — and scans three resource types.
What Inspector v2 scans
- EC2 instances — operating-system CVEs and, optionally, network reachability findings (an EC2 in a public subnet with port 22 open and a critical OpenSSH CVE is a very different severity than the same instance behind an ALB in a private subnet). Scans are continuous when the instance is running and SSM agent is installed.
- ECR container images — base OS CVEs and programming-language package CVEs (Python, Node, Java, Ruby, Go, .NET). Enabling Inspector upgrades the registry from basic to enhanced scanning; enhanced scanning runs on push and continuously against the CVE database, so an image becomes non-compliant automatically when a new CVE is published against a package it bundles.
- Lambda functions — two modes: standard scanning (package CVEs in the deployment ZIP or layer) and code scanning (static analysis of your handler code for injection, hardcoded secrets, weak crypto).
Retrofit rollout pattern
On an existing account:
- Enable Inspector v2 via the delegated administrator model (typically the Audit / Security Tooling account).
- Auto-enable for new accounts in AWS Organizations so new member accounts inherit scanning.
- Triage findings by Inspector score and exploitability. Inspector scoring adjusts CVSS to reflect the environment (network reachability, whether the package is in a running process, whether an exploit is publicly known).
- Prioritise public-facing + critical CVEs first. If Inspector says "this instance is reachable from the Internet and has a critical RCE in Apache", that is Day-1 remediation.
- Push findings to Security Hub so a single aggregator view tracks both vulnerability findings and misconfiguration findings.
- Automate patching via Systems Manager Patch Manager — define a patch baseline per OU or per tag, schedule a maintenance window, and use Inspector findings to validate that post-patch runs drop the CVE count.
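The "public-facing + critical CVEs first" triage rule in steps 3–4 reduces to a two-key sort. A minimal sketch with illustrative field names (real Inspector findings carry far more structure, including the adjusted Inspector score):

```python
# Rank order for severity labels, worst first.
SEVERITY_RANK = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}

def triage(findings):
    """Sort findings so internet-reachable critical CVEs land at the top:
    primary key is severity, secondary key is network reachability."""
    return sorted(
        findings,
        key=lambda f: (SEVERITY_RANK[f["severity"]], not f["internet_reachable"]),
    )

findings = [
    {"id": "CVE-A", "severity": "HIGH", "internet_reachable": False},
    {"id": "CVE-B", "severity": "CRITICAL", "internet_reachable": True},
    {"id": "CVE-C", "severity": "CRITICAL", "internet_reachable": False},
]
print([f["id"] for f in triage(findings)])  # → ['CVE-B', 'CVE-C', 'CVE-A']
```

Inspector effectively bakes this weighting into its score, which is why sorting the console by Inspector score (rather than raw CVSS) gives you the remediation queue directly.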
Generating an SBOM
Inspector v2 can export a Software Bill of Materials (SBOM) per resource in CycloneDX or SPDX format. SBOM export is the audit artifact increasingly required by enterprise customers and regulators — on the exam, SBOM is the correct answer when a scenario says "we must provide a machine-readable inventory of every third-party library in every container we ship."
The SAP-C02 pattern for "200 EC2 instances with CVEs": (1) Inspector v2 enabled to produce the CVE finding list, (2) EventBridge rule on Inspector findings above a severity threshold, (3) Systems Manager Patch Manager with a patch baseline and a maintenance window, (4) State Manager association to enforce agent presence, (5) Inspector re-scan after patch window to close the finding. Never pick "launch a new AMI" when the fleet can be patched in place. Reference: https://docs.aws.amazon.com/inspector/latest/user/what-is-inspector.html
Macie for S3 PII Discovery Across Legacy Buckets
Amazon Macie is purpose-built for one question that constantly appears on SAP-C02: "this S3 bucket has existed for five years and nobody knows what is in it — does it contain PII?" Macie uses managed data identifiers (names, addresses, government IDs, credit-card numbers, health records, AWS credentials) and optional custom data identifiers (regex + keyword) to sample objects and produce findings with object-level precision.
Macie discovery pattern on legacy buckets
- Enable Macie at the organization level via the delegated administrator model.
- Use automated sensitive data discovery (the default continuous, low-cost sampling mode) to get a quick posture view across every bucket in the estate. This mode uses intelligent sampling to give you a heatmap of which buckets are likely to hold sensitive data.
- Run targeted discovery jobs on high-risk buckets identified by the heatmap. A discovery job can be one-time or scheduled (daily/weekly/monthly) and can be scoped by tag, size, or object-key prefix.
- Review findings: SensitiveData:S3Object/Credentials (AWS keys, OAuth tokens), SensitiveData:S3Object/Financial, SensitiveData:S3Object/Personal, plus policy findings like Policy:IAMUser/S3BucketPublic.
- Feed findings to Security Hub via the Macie ↔ Security Hub integration, and trigger remediation via EventBridge → Lambda → bucket policy update / object tagging / object move to a quarantine bucket.
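A sketch of the EventBridge pattern for that remediation path, matching Macie sensitive-data findings by type prefix (Macie publishes findings with the detail-type "Macie Finding"):

```json
{
  "source": ["aws.macie"],
  "detail-type": ["Macie Finding"],
  "detail": {
    "type": [{ "prefix": "SensitiveData:S3Object" }]
  }
}
```

The prefix match catches the Credentials, Financial, and Personal finding subtypes in one rule; route it to the quarantine Lambda.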
Scenario: the 200-bucket problem
When a scenario says "200 unscanned buckets in the acquired account," the correct sequence is:
- Enable Macie with automated sensitive data discovery — this is the scalable, cost-efficient first pass.
- Add S3 Block Public Access at the account level immediately (via aws s3control put-public-access-block) to prevent new accidental exposure while discovery runs.
- Enable S3 server access logging + CloudTrail data events for S3 so you have an audit history of object-level reads.
- Let Macie sample for a week, then run discovery jobs on any bucket flagged with high sensitivity.
- Remediate: lock down confirmed PII buckets with a bucket policy that blocks public access and denies any request where aws:SecureTransport is false, enable default encryption with a dedicated KMS CMK, apply ABAC tags for data-classification-driven access, and integrate with Lake Formation if consumers are analytics users.
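The deny-insecure-transport piece of that bucket policy might look like this (bucket name illustrative; pair it with Block Public Access rather than trying to encode "no public access" in the policy alone):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-pii-bucket",
        "arn:aws:s3:::example-pii-bucket/*"
      ],
      "Condition": { "Bool": { "aws:SecureTransport": "false" } }
    }
  ]
}
```

Note the Resource array covers both the bucket ARN and the object ARN pattern — omitting either is a classic mistake that leaves ListBucket or GetObject outside the deny.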
S3 Block Public Access prevents accidental public exposure at the account/bucket level; Macie discovers what sensitive data already exists inside the objects regardless of exposure. The exam often gives both as options and expects you to apply Block Public Access immediately as a preventive control while Macie scans run in parallel. You need both, not one. Reference: https://docs.aws.amazon.com/macie/latest/user/what-is-macie.html
IAM Access Analyzer: External Access, Unused Access, and Policy Generation
IAM Access Analyzer is one of the most under-appreciated services on SAP-C02 because it does three distinct jobs that are each relevant to the improvement workflow. Understanding what each job does — and what it does not do — is a frequent exam trap.
Job 1: External access analyzer (zone of trust)
The external access analyzer continuously inspects resource-based policies on S3 buckets, IAM roles (trust policies), KMS keys, Lambda functions, SQS queues, Secrets Manager secrets, SNS topics, IAM Identity Center applications, RDS snapshots, ECR repositories, EFS filesystems, and more. Given a zone of trust — either a single AWS account or an entire AWS Organization — it emits a finding whenever a resource grants access to a principal outside the zone.
This is the correct tool when the question asks: "identify every resource in our estate that grants access to someone outside our organization." For an SAP-C02 scenario where an acquired company has been sharing S3 buckets, KMS keys, and IAM role trusts with external consultants for years, the external access analyzer produces the complete outside-access inventory in minutes.
Job 2: Unused access analyzer
The unused access analyzer examines IAM entities in your account and reports:
- Unused IAM users (no activity within the tracking period).
- Unused IAM roles (the role has not been assumed in N days).
- Unused access keys (no API call recorded for N days).
- Unused permissions inside an in-use role — i.e., the role has s3:* but has only ever used s3:GetObject and s3:ListBucket in the tracking window.
- Unused console passwords.
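Conceptually, the unused-permissions report is the set difference between granted actions and actions observed in the tracking window. A toy illustration (real findings are computed from CloudTrail-derived last-accessed data, not a literal set diff):

```python
# Actions the role's policy grants vs. actions observed in the tracking window.
granted = {"s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"}
used = {"s3:GetObject", "s3:ListBucket"}

# The analyzer surfaces the delta as "unused permissions" findings.
unused = sorted(granted - used)
print(unused)  # → ['s3:DeleteObject', 's3:PutObject']
```

Each item in that delta is a candidate to strip from the policy, which is exactly the blast-radius reduction the exam scenario asks for.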
The unused access analyzer is the correct tool for the question "reduce blast radius on our IAM identities without breaking workloads." It is a paid feature (per resource per month) separate from the free external access analyzer — on the exam this is a distinguishing detail.
Job 3: Policy generation from CloudTrail
The policy generation feature reads up to 90 days of CloudTrail history for a principal (IAM role or user) and produces a least-privilege IAM policy containing only the actions the principal actually invoked. This is the tool to use when a scenario says "we have a legacy IAM role with AdministratorAccess that we want to tighten without breaking the application." You run policy generation, review the generated policy, attach it as the replacement, and optionally keep AdministratorAccess as a fallback for one week of observation before detaching.
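Conceptually, policy generation collapses observed CloudTrail calls into an allow-list. A toy illustration of the idea — not the real algorithm, which also scopes resources and applies service-specific templates:

```python
import json

def policy_from_events(events):
    """Build a minimal Allow statement from observed CloudTrail events:
    dedupe (service, action) pairs and emit them as IAM actions."""
    actions = sorted(
        {f"{e['eventSource'].split('.')[0]}:{e['eventName']}" for e in events}
    )
    return {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": actions, "Resource": "*"}],
    }

# Illustrative trail: the role only ever read from S3.
trail = [
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject"},
    {"eventSource": "s3.amazonaws.com", "eventName": "ListBucket"},
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject"},
]
print(json.dumps(policy_from_events(trail), indent=2))
```

The generated statement here allows only s3:GetObject and s3:ListBucket — which is the point: whatever the role never did in 90 days simply does not appear.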
What IAM Access Analyzer does NOT do
Access Analyzer's analyzers do not evaluate identity-based IAM policies for over-broad permissions (its separate policy validation feature checks policy grammar and best practices at authoring time; last-accessed data on identity policies is IAM Access Advisor and AWS Config rule territory). It does not detect credentials leaked in code (that is Macie, Inspector Lambda code scanning, or third-party tools like GitHub secret scanning). It does not scan Active Directory or Identity Center for weak passwords.
A very common SAP-C02 distractor: "run IAM Access Analyzer to find IAM users with overly broad identity-based policies." False — the external access analyzer only inspects resource-based policies and trust policies, not identity-based permissions. Use IAM Access Advisor for last-accessed services on identity policies, or AWS Config managed rules like iam-policy-no-statements-with-admin-access. Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html
Encrypt-at-Rest Retrofit Playbook
Retrofitting encryption is the single most common "existing system" remediation on the exam because the in-place upgrade path differs per service. The playbook has four phases.
Phase 1: Inventory
Use AWS Config advanced queries (or Security Hub FSBP controls) to list:
- EBS volumes where encrypted = false.
- S3 buckets where ServerSideEncryptionConfiguration is absent or set to SSE-S3 when the policy mandates SSE-KMS.
- RDS instances where StorageEncrypted = false.
- RDS snapshots that are public or unencrypted.
- EFS filesystems where Encrypted = false.
- DynamoDB tables using the default AWS-owned key when the policy mandates a customer-managed key (CMK).
- Redshift clusters, ElastiCache Redis, SageMaker domains, and other data stores where encryption is optional.
Phase 2: Enable account-level defaults (prevent new exposure)
- EBS default encryption: account-level setting, per Region. Enable it immediately — from that point forward every new volume, every new snapshot, and every new AMI is encrypted with the designated KMS key.
- S3 default encryption: every new bucket created with default encryption on (SSE-S3 by default; upgrade to SSE-KMS with your CMK for higher control).
- RDS: there is no global "default encrypt new DB instances" toggle, but you can enforce it via an SCP denying rds:CreateDBInstance unless StorageEncrypted is true, or detect violations with the Config rule rds-storage-encrypted and auto-remediation (Config cannot block creation; it flags and fixes after the fact).
- AWS Organizations SCP: deny unencrypted resource creation across OUs as a hard preventive layer.
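A sketch of such an SCP, assuming the rds:StorageEncrypted and ec2:Encrypted condition keys are supported for these actions (verify condition-key support per action before deploying organization-wide):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedRds",
      "Effect": "Deny",
      "Action": "rds:CreateDBInstance",
      "Resource": "*",
      "Condition": { "Bool": { "rds:StorageEncrypted": "false" } }
    },
    {
      "Sid": "DenyUnencryptedEbsVolumes",
      "Effect": "Deny",
      "Action": "ec2:CreateVolume",
      "Resource": "*",
      "Condition": { "Bool": { "ec2:Encrypted": "false" } }
    }
  ]
}
```

The SCP is the preventive half; the Config rules above remain necessary as the detective half for resources that predate the policy.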
Phase 3: Migrate existing unencrypted resources (the hard part)
Each stateful resource requires a different migration because encryption is an immutable property at creation for most AWS data stores. The exam expects you to know the pattern per service.
- EBS volume migration: (a) create a snapshot of the unencrypted volume, (b) copy the snapshot with --encrypted --kms-key-id <CMK> specified, (c) create a new volume from the encrypted snapshot, (d) stop the instance, detach the old volume, attach the new encrypted volume, start the instance. Downtime is typically minutes; the sequence can be scripted as an SSM Automation runbook. For root volumes, produce an encrypted AMI from the encrypted snapshot and re-launch.
- S3 object migration: enable default encryption on the bucket, then use S3 Batch Operations with a CopyObject job to re-write every existing object encrypted (copy-in-place). For very large buckets, S3 Batch scales to billions of objects; the operation can be tagged, billed, and audited.
- RDS migration: encryption cannot be enabled on a running unencrypted RDS instance. The pattern is (a) take a snapshot, (b) copy the snapshot with encryption enabled and your CMK, (c) restore the encrypted snapshot into a new instance, (d) cut traffic over (DNS CNAME, read-replica promotion, or AWS DMS for near-zero downtime). For Aurora, the same snapshot-copy-restore pattern applies.
- EFS migration: create a new encrypted filesystem and use AWS DataSync to copy data over, then re-mount.
- DynamoDB: encryption is always on; to change from the default AWS-owned key to a customer-managed CMK, modify the table via UpdateTable — no data copy needed, the change is metadata-level. This is one of the nicer migration paths.
- Redshift: modify the cluster to enable encryption — this launches a background re-encryption that can take hours to days for large clusters; plan a change window and reduce load on the cluster while it runs.
- ElastiCache Redis: encryption at rest is set at cluster create; migration requires a backup-and-restore into a new encrypted cluster, then client cutover.
Phase 4: KMS CMK strategy
Use customer-managed CMKs (not the AWS-managed `aws/<service>` keys) when you need key-policy-level access control, cross-account use, rotation control (automatic annual rotation or on-demand), or per-key usage auditing in CloudTrail. For multi-region workloads, use KMS Multi-Region Keys (MRKs) so the same key ID exists in multiple Regions with cryptographically interoperable ciphertexts — this is the right answer when a DR strategy requires encrypted snapshots to be decryptable in the DR Region without re-wrapping.
You cannot flip encryption on in place for an existing RDS or Aurora instance. The only supported path is snapshot → copy snapshot with encryption → restore new instance from encrypted snapshot → application cutover. The exam repeatedly tests this in a question like "enable encryption on an existing RDS database with minimal downtime" and the correct answer is always "snapshot, copy with encryption, restore, cut traffic over (DMS if near-zero downtime)" — never "modify the DB instance to enable encryption." Reference: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
Secret Rotation Migration: From Hardcoded Credentials to Secrets Manager
The single most common audit finding in an acquired account is hardcoded credentials — database passwords in source code, third-party API keys in environment variables, AWS access keys in Dockerfiles. The remediation is to migrate every such secret to AWS Secrets Manager (or SSM Parameter Store SecureString for non-rotating values) with automatic rotation where supported.
Discovery
- Amazon Macie can detect credentials inside S3 objects, including old backup tarballs and archived logs.
- Amazon Inspector Lambda code scanning detects hardcoded secrets in Lambda handler code.
- GitHub / Bitbucket secret scanning plus AWS CodeGuru Security catch secrets in source repositories.
- IAM Credentials Report enumerates every IAM user's access keys with creation date and last-used date — any key older than 90 days or never rotated is a candidate for immediate rotation.
Migration pattern
- Move the secret into Secrets Manager with a descriptive name and tags (`App=billing`, `Env=prod`, `Owner=team-foo`).
- Update the application to fetch the secret at runtime via the AWS SDK (`GetSecretValue`). For Lambda, add an IAM policy allowing `secretsmanager:GetSecretValue` on the specific secret ARN and cache the secret in the execution environment for the Lambda warm lifetime.
- Remove the hardcoded value from source and container images. Scrub git history with BFG Repo-Cleaner or `git filter-repo`, force-push, and invalidate any image tag that ever contained the secret.
- Enable automatic rotation. Secrets Manager ships built-in rotation Lambda templates for RDS (MySQL, PostgreSQL, Oracle, MSSQL, MariaDB), DocumentDB, and Redshift, plus a "generic" rotation function you customise for third-party APIs.
- Define a rotation schedule — 30 days is typical for service accounts, 90 for third-party API keys. Rotation is handled by a Lambda function that creates the new secret version, tests it, updates the target resource, and flips the `AWSCURRENT` staging label atomically.
- Wire the old credential's last use to a CloudWatch alarm — if the app is still reading the old hardcoded value from anywhere, the alarm flags it and you can trace back to the un-migrated caller.
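The warm-lifetime caching step above can be sketched generically. The fetch function is injected here so the cache behaviour is shown without an AWS call; in a real Lambda handler it would wrap boto3's `get_secret_value`, and the TTL is an assumption you tune to your rotation schedule.

```python
# Sketch: cache a secret for the Lambda execution environment's warm lifetime.
# The fetcher is injected (in production it would call Secrets Manager).
import time

class SecretCache:
    def __init__(self, fetch, ttl_seconds: float = 300.0):
        self._fetch = fetch          # e.g. lambda: client.get_secret_value(...)["SecretString"]
        self._ttl = ttl_seconds
        self._value = None
        self._fetched_at = 0.0

    def get(self):
        now = time.monotonic()
        if self._value is None or now - self._fetched_at > self._ttl:
            self._value = self._fetch()   # refresh on cold start or TTL expiry
            self._fetched_at = now
        return self._value

calls = []
cache = SecretCache(lambda: calls.append(1) or "s3cret")
first, second = cache.get(), cache.get()
```

The point of the TTL is that a rotated secret is picked up within one TTL window even on a long-lived warm environment, without a fetch per invocation.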
Secrets Manager vs Parameter Store trade-off
- Secrets Manager: built-in rotation, cross-account resource policies, multi-region replication, higher per-secret cost. Use for anything that rotates.
- SSM Parameter Store SecureString: cheaper, no built-in rotation, integrates with CloudFormation / SSM easily. Use for configuration values that happen to be sensitive but don't rotate (encrypted environment variables, feature flags).
When an RDS instance is created (or modified) with "Manage master user password in AWS Secrets Manager" enabled, AWS handles rotation entirely — no custom Lambda. For existing RDS instances, turn this option on via ModifyDBInstance. This is the exam's preferred answer when a scenario says "minimise operational overhead of rotating RDS master passwords." Reference: https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
Legacy VPN to Verified Access + IAM Identity Center Migration
Most legacy AWS architectures route employee/contractor access to internal web apps through an OpenVPN appliance on EC2, AWS Site-to-Site VPN, or AWS Client VPN. These work, but the model is "trust the network tunnel" — once a user is on the VPN they have broad network reachability, sessions are long-lived, and posture is checked at connect time only. The modern zero-trust replacement is AWS Verified Access.
Verified Access model
Verified Access evaluates every HTTPS request against a policy expressed in Cedar, with inputs from:
- Identity via a trust provider — either IAM Identity Center (recommended for AWS-centric orgs) or an OIDC-compliant external IdP (Okta, Auth0, Entra ID / Azure AD, Ping).
- Device posture via an optional device trust provider (e.g., Jamf or CrowdStrike, integrated through a browser extension). Posture signals include disk encryption state, OS version, and EDR presence.
The Verified Access policy looks like permit(principal, action, resource) when { context.identity.groups.contains("finance-admins") && context.device.os == "macOS" && context.device.disk_encryption == "enabled" };.
Migration pattern from VPN
- Identify the applications currently behind the VPN — typically internal admin UIs, Jira/Confluence, Grafana dashboards, Jenkins, legacy financial tools.
- Place each application behind an ALB in a private subnet (if not already).
- Create a Verified Access instance, a trust provider (IAM Identity Center), an endpoint per application (pointing at the ALB), and an access policy.
- Point your corporate DNS (public or via Route 53 private hosted zone) to the Verified Access endpoint URL.
- Migrate users in waves — start with one application and a pilot group, validate audit logs (Verified Access logs every decision to CloudWatch Logs / S3 / Kinesis Firehose in OCSF format), then scale.
- Decommission the VPN once all applications are behind Verified Access and users confirm no broken access paths.
- Enforce IAM Identity Center MFA with phishing-resistant methods (hardware keys / platform passkeys) at the IdP layer so every Verified Access decision is MFA-backed.
Why this is better than VPN
- Per-request authorization instead of per-session network tunnel.
- No IP-allowlist maintenance — access is identity-based, not source-IP-based.
- Device posture per request catches lost/stolen devices within minutes of the MDM marking the device non-compliant.
- Full audit log of every decision (allow/deny) with identity, device signals, source IP, and policy evaluation trace — OCSF-normalised into Security Lake.
IAM Identity Center as the identity single source
When migrating, IAM Identity Center should also replace per-account IAM users for console / API access. The sequence is: (1) enable Identity Center at the org management account, (2) connect external IdP via SCIM + SAML for user/group provisioning, (3) create permission sets per role (ReadOnly, DataAnalyst, SecurityAdmin), (4) assign permission sets to accounts via groups, (5) revoke IAM user access keys as teams migrate. This also unlocks attribute-based access control (ABAC) where IdP-sourced tags propagate to the session and IAM policies condition on them.
- VPN (legacy): network-centric, perimeter-based, "inside the tunnel = trusted." One-time auth, long-lived session.
- Zero trust (Verified Access + Identity Center): identity-and-device-centric, per-request policy, no perimeter assumption. Continuous evaluation via short-lived tokens and device posture refresh.
- SAP-C02 correct answer when a scenario says "replace a VPN for internal web apps": Verified Access + IAM Identity Center, never "upgrade the VPN to Client VPN."
- Reference: https://docs.aws.amazon.com/verified-access/latest/ug/what-is-verified-access.html
TLS Enforcement Retrofit: ACM, SSL Policies, and ALB Hardening
Enforcing TLS on an existing environment has three moving parts: every public-facing endpoint must have a valid certificate, every listener must enforce a strong SSL policy, and every plain-HTTP listener must redirect or be removed.
Step 1: Certificate estate
- Inventory: ACM Console per Region, plus anywhere certificates are uploaded directly to load balancers, CloudFront, API Gateway, or EC2.
- Replace self-managed / third-party certs with ACM-issued certs where possible. ACM handles renewal automatically for resources it integrates with (ALB, NLB, CloudFront, API Gateway).
- Private internal TLS: use ACM Private CA to issue certificates to internal services (microservice-to-microservice TLS, internal ALBs, service mesh sidecars). Private CA provides full hierarchy control, short-lived certificates, and cross-account sharing via RAM.
Step 2: ALB / NLB SSL policy hardening
Every HTTPS listener on an ALB and every TLS listener on an NLB references an SSL policy — a named bundle of protocol versions and cipher suites. Existing legacy listeners commonly run ELBSecurityPolicy-2016-08 which permits TLS 1.0 and 1.1. The retrofit:
- Replace it with `ELBSecurityPolicy-TLS13-1-2-2021-06` (TLS 1.2 + 1.3, modern cipher suites) or the FIPS-compliant variant if required.
- Disable TLS 1.0 and TLS 1.1 explicitly — many compliance frameworks (PCI DSS, FedRAMP) mandate this.
- Add an HTTP listener on port 80 configured as a 301 redirect to HTTPS, so legacy clients that hit HTTP get moved to TLS rather than served plaintext.
Step 3: Enforce TLS on S3 and API endpoints
- S3 bucket policy: add a `Deny` statement with condition `"Bool": {"aws:SecureTransport": "false"}` to reject any non-HTTPS request. This is a two-line change that satisfies an FSBP control.
- CloudFront: set `ViewerProtocolPolicy` to `redirect-to-https` for every cache behaviour, and the minimum TLS version to `TLSv1.2_2021`.
- API Gateway: set the minimum TLS version per custom domain, require SigV4 or JWT authorization, and enforce TLS via regional / edge endpoints with proper certificates.
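The S3 deny statement can be sketched as a small policy builder — a minimal sketch with a hypothetical bucket name; the condition key `aws:SecureTransport` and the 2012-10-17 policy version are standard IAM.

```python
import json

# Sketch: build the Deny-non-TLS bucket policy described above.
# The bucket name is a hypothetical placeholder.
def secure_transport_policy(bucket: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            # Deny applies to the bucket itself and every object in it
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }

policy = secure_transport_policy("legacy-data-bucket")
print(json.dumps(policy, indent=2))
```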
Step 4: Monitor and audit
- AWS Config rule `alb-http-to-https-redirection-check` flags ALB listeners that do not redirect HTTP → HTTPS.
- Config rule `elb-tls-https-listeners-only` flags load balancers with non-TLS listeners.
- Config rule `s3-bucket-ssl-requests-only` verifies the bucket policy requires SecureTransport.
- Pair each rule with an SSM Automation remediation so drift is closed within minutes.
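The routing half of that pairing is an EventBridge rule. A sketch of the event pattern that matches NON_COMPLIANT evaluations for two of the rules above — the pattern fields follow the `aws.config` "Config Rules Compliance Change" event shape; the remediation target (SSM Automation document ARN) would be wired up in IaC and is omitted here.

```python
# Sketch: EventBridge event pattern routing NON_COMPLIANT Config evaluations
# for the TLS rules above to a remediation target.
import json

event_pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {
        "configRuleName": [
            "alb-http-to-https-redirection-check",
            "s3-bucket-ssl-requests-only",
        ],
        # only fire when a resource newly becomes non-compliant
        "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
    },
}
print(json.dumps(event_pattern))
```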
If a previous engineer uploaded a third-party certificate directly to an ALB (rather than importing to ACM or issuing through ACM), AWS will not renew it. Expired certs cause production outages at midnight UTC. During security remediation, re-issue via ACM and swap the listener certificate; confirm by listing certs in the ACM console and ensuring the certificate ID the ALB references is ACM-managed. Reference: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html
Runtime Protection Rollout: GuardDuty EKS, Lambda, RDS, and S3 Malware Protection
Amazon GuardDuty covers threats that misconfiguration-focused services miss: active intrusions, credential abuse, lateral movement, cryptomining, and malware. On an existing account you enable GuardDuty once at the organization level, then progressively turn on the protection plans relevant to your workloads.
Protection plans to evaluate
- GuardDuty Foundational (the original): CloudTrail, VPC Flow Logs, and DNS logs analysis. Always enable first.
- S3 Protection: data-event-based monitoring for anomalous S3 API usage (unusual downloads, public-access-block disablement, unusual cross-account access).
- Malware Protection for S3: on-upload or on-demand object scanning for malware. Useful when a bucket accepts user uploads (customer support attachments, ingestion zones).
- Malware Protection for EC2: agentless scan of EBS volumes when GuardDuty detects suspicious behaviour; can run proactively on demand.
- EKS Protection: EKS audit log monitoring for Kubernetes API abuse patterns (privilege escalation, suspicious exec, service account token misuse).
- EKS Runtime Monitoring (now unified with ECS / EC2 under Runtime Monitoring): process-level and network-level telemetry from a GuardDuty security agent (EKS add-on or EC2 SSM distributor package) catches container escapes, reverse shells, cryptomining binaries.
- RDS Protection: login anomaly detection on RDS Aurora MySQL/PostgreSQL — detects brute force, anomalous geo/user patterns.
- Lambda Protection: network activity monitoring of Lambda functions — flags a function reaching a C2 domain or unusual data exfiltration to non-corporate destinations.
Rollout sequence
- Enable GuardDuty Foundational via delegated administrator at the org level; auto-enable for new accounts.
- Enable S3 Protection org-wide.
- Enable EKS Protection for clusters that have public endpoints or sensitive workloads.
- Deploy Runtime Monitoring agents via the managed EKS add-on; verify the security agent pods are running and produce findings (GuardDuty provides sample findings for validation).
- Enable Lambda Protection after establishing a baseline — Lambda Protection costs scale with invocation volume, so enable by tag if cost is a concern.
- Enable RDS Protection for Aurora MySQL/Postgres DBs exposed to application logins.
- Enable Malware Protection for S3 on buckets that accept user uploads.
Tune suppression rules carefully
A mature GuardDuty rollout requires suppression rules to mute known benign findings (penetration test source IP, security team's internal scanner). Store suppression rules as IaC (CloudFormation / Terraform) so they are reviewable and expire automatically.
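As a sketch of what such a rule looks like in code, here are `CreateFilter`-style parameters that archive findings from a known scanner IP — the IP, finding type, and rule name are illustrative assumptions, and in practice this dict would be emitted from CloudFormation or Terraform rather than hand-built.

```python
# Sketch: a GuardDuty suppression rule expressed as CreateFilter parameters,
# auto-archiving brute-force findings from the security team's scanner.
# Detector ID, IP, and finding type are illustrative.
suppression_rule = {
    "DetectorId": "<detector-id>",
    "Name": "internal-scanner-ssh",
    "Action": "ARCHIVE",          # ARCHIVE = suppress; NOOP = keep visible
    "FindingCriteria": {
        "Criterion": {
            "service.action.networkConnectionAction.remoteIpDetails.ipAddressV4": {
                "Eq": ["203.0.113.10"]
            },
            "type": {"Eq": ["UnauthorizedAccess:EC2/SSHBruteForce"]},
        }
    },
}
```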
- Foundational: CloudTrail + VPC Flow Logs + DNS (always on, baseline).
- S3 Protection: S3 data-event anomalies.
- Malware Protection for S3: object content scanning on upload.
- EKS Protection: K8s audit log analysis.
- Runtime Monitoring (EKS + ECS + EC2): on-host process/network telemetry.
- RDS Protection: Aurora login anomaly detection.
- Lambda Protection: Lambda network activity.
- All plans feed into Security Hub and EventBridge as the common finding sink.
- Reference: https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html
Incident Response Playbook with Amazon Detective
When a GuardDuty finding fires at 02:00, the on-call engineer needs one screen with the full story. Amazon Detective is that screen. Detective ingests CloudTrail, VPC Flow Logs, GuardDuty findings, EKS audit logs, and Security Lake data, stitches them into a behavior graph, and lets an analyst pivot on any IP, principal, role, or resource to see every related event in a time window.
Detective enablement
- Enable Detective via the delegated administrator account (same pattern as GuardDuty / Security Hub).
- Link all member accounts so Detective's graph spans the full organisation.
- Link every Region you operate in.
- Pre-built finding groups cluster related findings (same principal, same target, overlapping time window) so you triage campaigns rather than individual alerts.
Playbook sequence (acquired-company example)
A Detective-driven IR playbook for "GuardDuty flagged UnauthorizedAccess:IAMUser/ConsoleLoginSuccess.B for a long-dormant IAM user":
- Containment — immediately deactivate the IAM user's access keys via `aws iam update-access-key --status Inactive`, and revoke any active assumed-role sessions by attaching an inline deny policy conditioned on `aws:TokenIssueTime` (this is what the console's "Revoke active sessions" action does).
- Investigation in Detective — open the finding, pivot to the IAM user's profile, and inspect login locations, API call volumes, and new resource creations over the last 24 hours.
- Scope assessment — did the user create new IAM users, new access keys, new EC2 instances, change CloudTrail configuration? Detective surfaces all these within the same view.
- Eradication — delete any newly created identities, terminate unauthorized resources, rotate any secrets the user could have accessed.
- Recovery and hardening — enforce MFA on all IAM users (or better, migrate to IAM Identity Center with phishing-resistant MFA and remove the IAM user entirely), enable GuardDuty suspicious-login notifications, update SCP to deny IAM user creation going forward.
- Post-incident — export CloudTrail + GuardDuty + Detective timeline to Security Lake for long-term evidence retention, open a finding in Security Hub with a suppression reason if the event is confirmed benign.
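The session-revocation step of the containment phase can be sketched as a policy builder — a minimal sketch; the cutoff timestamp is whatever moment you declare the incident, and attaching this inline policy to the role denies every session whose token was issued before it.

```python
# Sketch: the inline deny policy that invalidates role sessions issued before
# a cutoff — equivalent to the IAM console's "Revoke active sessions" action.
from datetime import datetime, timezone

def revoke_sessions_policy(cutoff: datetime) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            # Denies any session whose STS token predates the cutoff
            "Condition": {
                "DateLessThan": {"aws:TokenIssueTime": cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")}
            },
        }],
    }

policy = revoke_sessions_policy(datetime(2024, 6, 1, 2, 0, tzinfo=timezone.utc))
```

New sessions assumed after the cutoff are unaffected, so legitimate work can resume once the credential source is fixed.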
Detective vs Security Hub vs CloudTrail Lake
- Security Hub is the scoreboard and router (findings and severity).
- CloudTrail Lake is the SQL-queryable event store for forensic ad-hoc queries.
- Detective is the pre-built graph for interactive investigation (no SQL required), with behavioural baselines that highlight anomalies.
All three complement each other; on the exam a single-pane-of-glass forensic investigation scenario points to Detective, a SQL-queryable event archive points to CloudTrail Lake, and a normalised multi-source SIEM-style lake points to Security Lake.
You cannot enable Detective without GuardDuty enabled for at least 48 hours in the same account — Detective uses GuardDuty behavioural baselines as one of its graph inputs. On an existing account you cannot meet an "investigate the intrusion now" requirement if GuardDuty was never on; you must enable GuardDuty first and accept a 48-hour learning window. Reference: https://docs.aws.amazon.com/detective/latest/userguide/detective-investigation-about.html
Remediation Sequence for the Acquired-Company Scenario
We promised a walk-through of the diagnostic scenario introduced earlier. Here is the sequenced remediation the Professional exam expects to see — in the right order.
Scenario recap: the acquired AWS account has 200 unscanned S3 buckets, root access keys in active use, no MFA enforced on any user, CloudTrail enabled in a single Region only, 14 members of an Admins group with long-lived access keys, and Security Hub has never been enabled.
Day 0 — Emergency containment (first 2 hours)
- Root account hardening: delete the root access key, enable hardware MFA on the root user, and lock the root credentials in a physical safe. Root should never have access keys per AWS best practice.
- Enable S3 Block Public Access at the account level immediately — single API call, prevents any new accidental public exposure while discovery runs.
- Enable MFA enforcement via IAM policy — attach a deny-all-without-MFA policy to the `Admins` group until every member has MFA.
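A sketch of that deny-all-without-MFA policy, trimmed for brevity — the AWS-documented version whitelists a few more self-service IAM actions so users can still enrol a device; treat this as an illustrative subset, not the canonical policy.

```python
# Sketch: deny everything except MFA enrolment when no MFA is present.
# BoolIfExists covers requests where the MFA context key is absent entirely.
deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptMfaSetupWhenNoMfa",
        "Effect": "Deny",
        "NotAction": [
            "iam:CreateVirtualMFADevice",
            "iam:EnableMFADevice",
            "iam:ListMFADevices",
            "iam:ResyncMFADevice",
            "sts:GetSessionToken",
        ],
        "Resource": "*",
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}
```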
Day 0-1 — Visibility foundation
- Enable a multi-region CloudTrail organization trail with log-file validation, target S3 bucket in a dedicated log-archive account with Object Lock in compliance mode, and S3 data events for the critical buckets.
- Enable AWS Config in every Region with a delivery channel to the log-archive account.
- Enable AWS Security Hub with FSBP and CIS AWS Foundations standards; designate the audit account as the delegated administrator; enable cross-Region aggregation.
- Enable GuardDuty via delegated administrator, all foundational features on plus S3 Protection.
- Enable IAM Access Analyzer at organization scope (external access analyzer is free; unused access analyzer enabled on the top-risk accounts).
- Enable Amazon Macie with automated sensitive data discovery across all 200 buckets.
- Enable Inspector v2 for EC2, Lambda, and ECR.
Day 2-7 — Triage and highest-impact fixes
- Review Security Hub severity board: remediate every Critical finding first (public buckets, unencrypted RDS snapshots public, CloudTrail disabled somewhere, root access key still present anywhere).
- Run IAM Access Analyzer unused-access analysis on the `Admins` group and every service account: disable, then delete, every access key unused for 30+ days.
- Use Access Analyzer policy generation on the `Admins` role to produce a least-privilege replacement policy; review it, apply it in "observe" mode, then replace AdministratorAccess.
- Migrate every `Admins` member to an IAM Identity Center user/permission set; remove IAM users entirely as teams migrate.
- Turn on EBS default encryption and S3 default encryption in every Region.
- Begin Macie discovery job scheduling on any bucket flagged with high sensitivity; remediate each confirmed PII bucket with default SSE-KMS encryption, SecureTransport-required bucket policy, and access restricted via VPC endpoint + Block Public Access.
Week 2-4 — Encrypt-at-rest retrofit
- Inventory unencrypted EBS / RDS / EFS via Config queries.
- Snapshot → encrypted-copy → restore migration for each RDS instance during scheduled maintenance windows.
- Use S3 Batch Operations to re-encrypt existing objects where SSE-KMS is newly required.
- For EBS volumes on running instances, script snapshot-copy-encrypt-replace via SSM Automation, scheduled during low-traffic windows.
Week 3-5 — Secrets and TLS
- Discover hardcoded secrets via Macie (S3), Inspector Lambda code scanning, and (outside AWS) source-repo secret scanning.
- Migrate each secret to Secrets Manager with automatic rotation (RDS master passwords use the native integration; third-party API keys use a custom rotation Lambda).
- Retrofit TLS on every ALB: upgrade SSL policy to TLS 1.3, add HTTP-to-HTTPS redirect, enforce SecureTransport on every S3 bucket.
- Issue ACM certificates for every internal service; for internal TLS, stand up ACM Private CA.
Week 4-8 — VPN replacement and runtime protection
- Replace legacy VPN with Verified Access: create Verified Access instance, trust provider (Identity Center), per-application endpoints behind ALBs, Cedar policies tied to IdP groups and device posture; migrate users in waves; decommission OpenVPN appliance.
- Enable GuardDuty Runtime Monitoring for EKS clusters and ECS services via the managed add-on.
- Enable GuardDuty RDS Protection on Aurora MySQL/Postgres endpoints.
- Enable GuardDuty Lambda Protection after reviewing invocation-volume cost.
- Enable Malware Protection for S3 on any bucket that accepts user uploads.
Ongoing — Compliance drift and auto-remediation
- Pair each FSBP control with an AWS Config auto-remediation using pre-built SSM Automation runbooks where available, custom runbooks otherwise.
- Route all Security Hub findings above High severity to SNS → ticketing / Slack.
- Enable Amazon Detective for IR investigations; drill GameDay scenarios quarterly with AWS FIS.
- Track weekly Security Hub FSBP score as a KPI; tie remediation cadence to business SLAs.
Many SAP-C02 Domain 3 questions ask for "the best order" or "what should be done first." The recurring right answer: containment of exposed credentials and public endpoints → visibility (CloudTrail, Config, Security Hub, GuardDuty, Access Analyzer, Inspector, Macie) → triage by severity → remediation waves (identity, encryption, secrets, network exposure, runtime). Picking an expensive remediation before enabling visibility is almost always wrong on the exam. Reference: https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html
Common Traps on SAP-C02 Task 3.2
A selection of exam-recurring traps students miss:
- "Enable encryption on existing RDS" ≠ ModifyDBInstance. You must snapshot, copy-with-encryption, and restore. No in-place encryption enablement for RDS/Aurora at-rest.
- IAM Access Analyzer does not audit identity-based IAM policies. The external access analyzer looks at resource-based policies and trust policies only.
- Security Hub produces findings; it does not remediate. The remediator is AWS Config remediation action, Systems Manager Automation, or Lambda glued via EventBridge.
- Macie automated discovery ≠ full scanning. Automated sensitive data discovery samples objects intelligently to give posture quickly and cheaply. Explicit discovery jobs are required for full-object coverage on a scheduled cadence.
- GuardDuty Runtime Monitoring requires the security agent. For EKS, deploy via the managed add-on; for ECS / EC2, via SSM distributor package. Without the agent, you get only the control-plane signals.
- Detective requires 48 hours of GuardDuty data before its graph is useful.
- Verified Access is per-HTTP-application, not a network tunnel. Each application gets its own endpoint.
- ACM certificates auto-renew only for ACM-integrated resources. A cert uploaded directly to an ALB (not via ACM) is the customer's renewal problem.
- Config rules are Regional — you need them in every Region with workloads, aggregated via Config aggregator or via Security Hub cross-Region aggregation.
- Organization CloudTrail trails vs per-account trails: an organization trail is created in the management account and applies across every member account; you cannot delete it from a member account. This is both a feature (consistency) and a trap (member account admins cannot tamper with it).
- Macie + Security Hub integration is native — enable both and findings flow automatically. No Lambda glue needed.
- Secrets Manager RDS native integration for master passwords removes the need for a custom rotation Lambda. On the exam, "minimise operational overhead for RDS password rotation" = this feature.
SCPs restrict member accounts going forward; they cannot retroactively undo an action. If an acquired account's root user already created a public S3 bucket before you attached the organization and the deny SCP, you still have to go remediate the existing resource. SCPs are preventive, not corrective. Reference: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
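The preventive SCP referenced above is short enough to show whole — a sketch; the statement ID is an arbitrary choice, and you would typically scope this to an OU rather than the whole organization during migration.

```python
# Sketch: SCP blocking new IAM users and access keys in member accounts.
# Preventive only — it does not touch users that already exist.
scp_deny_iam_users = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyNewIamUsersAndKeys",
        "Effect": "Deny",
        "Action": ["iam:CreateUser", "iam:CreateAccessKey"],
        "Resource": "*",
    }],
}
```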
FAQ
Q1: If I enable EBS default encryption in a Region, are my existing unencrypted volumes suddenly encrypted?
No. Default encryption applies only to new EBS volumes and new snapshots created after the setting is enabled. Existing unencrypted volumes remain unencrypted until you explicitly migrate them via the snapshot → copy-with-encryption → create-new-volume → detach/attach pattern (or create an encrypted AMI for root volumes and relaunch). The same applies for S3 default encryption — new objects get encrypted, existing objects remain in whatever encryption state they were written. For existing S3 objects, use S3 Batch Operations with a CopyObject job to re-encrypt in place. For RDS, the snapshot-copy-restore pattern is the only supported path.
Q2: How do I prioritise which S3 buckets to remediate first when Macie reports findings on 200 buckets?
Build the priority matrix from three signals combined: (1) exposure — is the bucket publicly accessible or accessible from outside the organization? (S3 Block Public Access state + IAM Access Analyzer external access findings); (2) sensitivity — does Macie report Critical / High findings for PII, credentials, financial data, or PHI? (Macie finding severity); (3) business criticality — what application owns the bucket, measured by tags. Critical-exposure + critical-sensitivity buckets are P0: apply Block Public Access immediately, encrypt with a dedicated CMK, enforce SecureTransport via bucket policy, and restrict access to a specific VPC endpoint. Medium buckets are P1: schedule within two weeks. Low buckets are P2: automate via Config auto-remediation and review monthly.
Q3: What is the exam's correct answer when a question says "inspect every IAM role for external access across 50 accounts in under an hour"?
IAM Access Analyzer with an organization-level zone of trust. Create the analyzer in the delegated administrator account with zone of trust set to "AWS organization." The analyzer evaluates every resource-based policy and trust policy across every member account and every Region, emits findings for any principal outside the organization boundary, and lets you archive known benign external access (e.g., a third-party SaaS integration) via archive rules. This is dramatically faster and more comprehensive than manually crawling every role's trust policy. Note the analyzer does not inspect identity-based IAM policies — for those use IAM Access Advisor and Config rules.
Q4: Inspector v2 finds a critical CVE in a running EC2 instance. What is the auto-remediation pattern on the exam?
Pipe the Inspector finding into EventBridge, filter on severity Critical or High, and target a Systems Manager Automation runbook that runs AWS-PatchInstanceWithRollback or a custom runbook that (a) snapshots the instance, (b) invokes Patch Manager to apply the patch baseline in the next maintenance window, (c) validates Inspector re-scans the instance and finds the CVE closed, (d) notifies on failure. For container images, the pattern is different: Inspector scans ECR, pushes a finding on image push, EventBridge triggers a pipeline stage that blocks deployment of the vulnerable image tag and alerts the service owner to rebuild with a patched base image. Never pick "launch a new AMI and migrate all workloads" — the exam rewards in-place patching via Patch Manager.
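The severity filter in that chain is again an EventBridge pattern. A sketch matching Critical/High Inspector v2 findings — the pattern follows the `aws.inspector2` event shape, and the runbook target is assumed to be attached separately in IaC.

```python
# Sketch: EventBridge pattern for the Inspector finding -> EventBridge ->
# SSM Automation chain described above, filtered to Critical/High severity.
import json

inspector_pattern = {
    "source": ["aws.inspector2"],
    "detail-type": ["Inspector2 Finding"],
    "detail": {"severity": ["CRITICAL", "HIGH"]},
}
print(json.dumps(inspector_pattern))
```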
Q5: How does Security Hub relate to Security Lake? Do I need both?
They serve different purposes and complement each other. Security Hub is the near-real-time findings aggregator — normalised into AWS Security Finding Format (ASFF), scored against standards (FSBP, CIS, PCI, NIST), routed via EventBridge for remediation. Amazon Security Lake is the long-term, SQL-queryable data lake for security telemetry normalised into the Open Cybersecurity Schema Framework (OCSF) — it ingests CloudTrail, VPC Flow Logs, Route 53 DNS logs, Security Hub findings, EKS audit logs, Lambda events, and custom sources, stores them in Apache Parquet on S3, and exposes them through Lake Formation to subscribers (Athena, QuickSight, third-party SIEMs). In practice, enterprises use both: Security Hub as the operational scoreboard and remediation router; Security Lake as the analytical and long-retention audit store for the SOC team and incident response.
Q6: A scenario says "migrate 14 IAM admin users with static access keys to a secure identity model." What is the expected answer?
Migrate to IAM Identity Center with permission sets, backed by either the Identity Center built-in directory or an external IdP (Azure AD, Okta) via SAML 2.0 + SCIM. Each admin is provisioned as an Identity Center user (or synced from the corporate IdP), assigned to the Admins permission set via a group, with MFA enforced at the IdP — ideally phishing-resistant (hardware keys / platform passkeys). Console and CLI access use short-lived session credentials (permission-set session duration defaults to 1 hour and is configurable up to 12 hours). All IAM users and their access keys are then deleted, and an SCP is added to prevent iam:CreateUser and iam:CreateAccessKey going forward. For programmatic access from CI/CD or applications, replace IAM user access keys with IAM roles assumed via OIDC federation (GitHub Actions, GitLab, etc.) or IRSA/Pod Identity for Kubernetes workloads.
Q7: What replaces a Site-to-Site VPN when the goal is to give employees access to internal web apps, and why not just keep the VPN?
AWS Verified Access paired with IAM Identity Center is the replacement. VPNs have four structural weaknesses the exam emphasises: (1) coarse network-level trust once connected, (2) static source-IP allowlists that break for remote workers on changing networks, (3) no per-request policy evaluation — device posture is checked at connect time only, (4) long-lived tunnels hide lateral movement. Verified Access inverts each: per-HTTPS-request policy evaluation, identity-and-device-centric rather than network-centric, short-lived token-based sessions, and full OCSF audit logging of every access decision. That said, VPN / Client VPN is still relevant for machine-to-machine scenarios that are not HTTPS — Verified Access covers only HTTP/HTTPS applications, not generic IP connectivity.
Exam Signal and Summary
Task 3.2 — improve security posture of an existing architecture — is one of the highest-weight areas in Domain 3 and a frequent topic in real SAP-C02 exam items. The examiners reuse a small number of signature scenarios: the acquired-company account, the audit-finding-driven remediation, the "rotate credentials without breaking production," the "encrypt RDS without downtime," the "legacy VPN to zero trust," the "patch 200 EC2 instances with CVEs," the "200 buckets with unknown contents," and the "disabled CloudTrail in a member account." For each, the winning answer pattern is the same shape:
- Visibility first (CloudTrail + Config + Security Hub + GuardDuty + Access Analyzer + Inspector + Macie).
- Containment of exposed credentials and public endpoints.
- Triage by severity.
- Remediation waves grouped by control category (identity, encryption, secrets, network, runtime).
- Auto-remediation via Config rule + SSM Automation for drift after the initial sweep.
- Security improvement as a continuous KPI (weekly FSBP score, open-finding count by severity).
If you can walk into the "acquired company" scenario and sequence the 30-step remediation above without notes, you will be in the strongest position for Task 3.2 on exam day. The trick is resisting the temptation to jump to "enable encryption everywhere" — the exam rewards the candidate who enables visibility first, prioritises by blast radius, and treats every remediation as a chain of detect → route → remediate → verify where AWS-native services do the work.