What Migration Target Infrastructure Means on SAP-C02
Migration target infrastructure is the pre-migration landing zone that receives every workload you lift, replatform, or refactor from on-premises into AWS. Before a single server, database, or file share can leave the datacenter, the target environment has to already exist in production-ready form: accounts vended, guardrails applied, hybrid connectivity lit, identity federated, observability centralized, encryption defaults enforced, and cost visibility wired up. On SAP-C02, the exam rarely asks "what is a landing zone?" — it asks "given a 3-datacenter, 5-business-unit enterprise that must begin migrating in 60 days, what is the minimum set of migration target infrastructure decisions that will not need to be retrofitted later?"
This topic sits inside Domain 4 (Accelerate Workload Migration and Modernization, task statement 4.3) but pulls heavily from Domain 1 governance and Domain 2 security foundations. The migration target infrastructure surface includes AWS Control Tower, Account Factory, Account Factory Customization, Landing Zone Accelerator (LZA), Customizations for Control Tower (CfCT), service control policies, tag policies, backup policies, AWS Organizations OUs, AWS Transit Gateway, AWS Direct Connect, Site-to-Site VPN, Route 53 Resolver inbound and outbound endpoints, centralized inspection VPC with AWS Network Firewall, IAM Identity Center, AD Connector, Trusted Identity Propagation, org-wide CloudTrail, Amazon Security Lake, GuardDuty delegated administration, AWS Config aggregator, AWS KMS with per-OU key policies, S3 Bucket Keys, AWS Budgets, and cost allocation reports. Mastering migration target infrastructure at Pro depth means knowing which of these must exist before wave one begins, which can land in wave two, and which are genuinely optional.
This note assumes you already know Associate-level multi-account basics (what an OU is, what a member account is, what sts:AssumeRole does). Everything here is written at Professional tier — sizing Direct Connect for cutover bandwidth, choosing between Account Factory Customization and LZA, deciding when Trusted Identity Propagation beats classic SAML federation, and building guardrail baselines that do not strangle the migration teams trying to move 300 workloads in a year.
Plain-Language Explanation: Migration Target Infrastructure
Migration target infrastructure has a lot of moving parts. Three plain-English analogies make the landing zone, connectivity, and identity pieces stick before we dive into the technical mechanics.
Analogy 1 — The New Office Building Before Move-In Day
Imagine your company has signed a lease on a 30-floor office building and is moving 5 business units out of 3 old offices in 60 days. You do not start moving desks on day one. First, the building manager (AWS Control Tower) sets up the lobby, elevators, and security desk (management account, log archive account, audit account). Then the facilities team (Account Factory + Landing Zone Accelerator) prepares each floor (member account) with a standard layout: network jacks wired (VPC), fire alarms (Config rules), access-card readers at every door (IAM Identity Center), CCTV recording to a central vault (org-wide CloudTrail), and smoke detectors on every ceiling (GuardDuty). The fiber trunk line from the old office (Direct Connect) is lit up and tested. Only after all that is working do you schedule the actual furniture move (MGN replication, DMS migration). If you move the desks first and wire up the fire alarms later, you spend six months retrofitting sprinklers around occupied cubicles — exactly the anti-pattern SAP-C02 scenarios punish.
Analogy 2 — The Hospital Opening a New Wing
Migration target infrastructure is like building a new hospital wing that will receive 200 patients transferred from three aging facilities. Before any patient arrives, you need isolation rooms prepared (OUs for Prod / Non-Prod / Sandbox), the pharmacy stocked (shared services account with AMIs, base images, and KMS keys), the records room with HIPAA-compliant filing (log archive account with S3 Object Lock), nurse stations networked (Transit Gateway hub with per-department VLANs enforced by route tables), dedicated ambulance bays (Direct Connect circuits sized for cutover bandwidth), staff badges that work across every wing (IAM Identity Center with AD Connector federating the hospital's Active Directory), and one central nursing station dashboard (AWS Config aggregator + Security Hub) that shows the state of every room at once. The chief of medicine (the security delegated administrator) has visibility across the entire building without walking into each patient's room. Only when the wing passes its inspection (guardrail baseline applied and drift-free) can Transfer Day proceed.
Analogy 3 — The Shipping Port Opening a New Container Terminal
Migration target infrastructure is the brand-new container terminal your shipping company must stand up before vessels from three legacy ports can redirect to it. The master harbor plan (AWS Organizations + OU tree) decides which berths serve which business units. The customs gate (SCPs + guardrails) screens every container before it touches AWS soil — no untagged containers, no containers from denied regions, no containers that disable the camera. The rail and truck connections to inland depots (Transit Gateway + Direct Connect) must exist before the first ship docks, or containers pile up on the quayside. The port authority office (IAM Identity Center + Trusted Identity Propagation) issues one badge that works at every berth and forwards the dockworker's identity all the way down to the specific pallet they touch (Redshift row-level, S3 Access Grants). The harbor master's radar room (CloudTrail + GuardDuty + Security Lake) sees every ship movement across every berth from one screen. The fuel depot (KMS) stores keys that every berth uses to seal its cargo. Without any one of these, the terminal cannot accept traffic — and retrofitting them during active operations is how ports shut down for weeks.
If the SAP-C02 question mentions "production-ready landing zone in N days" or "onboard the first workloads safely", the office-building analogy maps cleanest — it foregrounds the sequence (prepare before move). If the question emphasizes compliance, auditability, or regulated workloads, use the hospital analogy. If the question is dominated by connectivity bandwidth and cutover orchestration, use the shipping port. Reference: https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/organizing-your-aws-environment.html
The 60-Day Landing Zone Scenario
SAP-C02 migration target infrastructure questions almost always bury the constraints inside a scenario. We will anchor the rest of this note to a canonical example that mirrors real exam stems and real customer engagements.
Scenario. A manufacturing enterprise operates 3 on-premises datacenters (US-East colocation primary, EU-West regional DR, APAC regional edge). It has 5 business units (Corporate IT, Manufacturing, Supply Chain, R&D, Customer Support), each with its own engineering team, its own non-production environment, and its own production workloads. The CIO has approved a 3-year migration and demands a production-ready landing zone within 60 days so that the first wave of low-risk workloads (Corporate IT intranet, dev environments) can begin migrating immediately while waves two and three are still being planned. The landing zone must satisfy:
- Regulated-workload compliance (SOC 2, ISO 27001) from day one.
- Encryption at rest and in transit enforced across every account.
- A single sign-on experience for the 2,400 engineers in the corporate Active Directory.
- Central visibility for the 12-person Security team without giving them management-account access.
- Zero "mystery bills" — every dollar traceable to a business unit and environment.
- Hybrid connectivity to all three datacenters with a cutover bandwidth budget for wave one of roughly 2 Gbps sustained.
At Professional level, AWS assumes a landing zone already exists or must be designed before any migration workload lands. Any answer that starts migrating workloads into the management account, or into ad-hoc member accounts without guardrails, is wrong on SAP-C02. The exam will happily offer such a choice to punish candidates who skipped this topic. Reference: https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html
Landing Zone Design — Control Tower Account Factory vs Landing Zone Accelerator
The first decision is which landing zone tool. AWS provides two supported paths and they are not mutually exclusive — most enterprise migrations layer both.
AWS Control Tower — the opinionated baseline
AWS Control Tower is the managed landing zone service. In about an hour it provisions the AWS Organizations structure, creates the Log Archive and Audit accounts in a Security OU, enables IAM Identity Center, turns on an organization-wide CloudTrail, deploys an AWS Config aggregator, and applies a baseline of mandatory guardrails to every OU. Its primary interface is Account Factory, a self-service account-vending pipeline that produces a new member account, wires it into the correct OU, applies the OU's guardrails, and registers it with the central identity provider — all with a form submission and an approval. Control Tower is the right starting point for virtually every SAP-C02 migration scenario because it eliminates weeks of foundational YAML and because the exam's canonical answers assume it.
Landing Zone Accelerator on AWS (LZA) — the customization layer
Landing Zone Accelerator on AWS is an AWS-published solution (maintained by AWS Solutions) that layers enterprise-scale customizations on top of Control Tower: opinionated multi-region CloudTrail, detailed Config rules, pre-built centralized logging, centralized Network Firewall inspection VPC, AWS Backup vault policies, IAM permission sets aligned to common roles, and — importantly — codified configuration files (accounts.yaml, organizational-units.yaml, network.yaml, security.yaml, etc.) that drive a CI/CD pipeline so that landing zone changes are version-controlled and peer-reviewed like application code. LZA is especially strong for regulated industries (healthcare, finance, government) because it ships config baselines aligned to CMMC, HIPAA, NIST 800-53, and PCI DSS.
Customizations for Control Tower (CfCT) — the middle ground
CfCT is the older first-party customization framework: a CodePipeline that applies CloudFormation StackSets and SCPs to new accounts as they are vended by Account Factory. CfCT remains fully supported and is the correct answer when the exam asks "how do I add a customization on top of Control Tower without adopting a full accelerator?"
- Control Tower Account Factory: self-service account provisioning pipeline that vends new member accounts into the correct OU with guardrails applied.
- Account Factory Customization (AFC): blueprint-based customization (CloudFormation templates) applied to each new account during vending.
- Customizations for Control Tower (CfCT): reference-architecture pipeline that applies CloudFormation StackSets + SCPs to accounts post-vending.
- Landing Zone Accelerator on AWS (LZA): AWS Solutions-published accelerator with opinionated security, network, and logging baselines driven by YAML config files and a CodePipeline.
- Landing Zone baseline: the set of accounts, OUs, guardrails, logging, identity, and network infrastructure that must exist before workloads land.
- Reference: https://docs.aws.amazon.com/solutions/latest/landing-zone-accelerator-on-aws/solution-overview.html
Decision matrix for the 60-day scenario
For the canonical scenario, the production-ready path in 60 days is: Control Tower + Account Factory + Landing Zone Accelerator. Control Tower gives you the validated baseline in an afternoon, Account Factory gives the business units self-service vending within two weeks, and LZA gives you the compliance-aligned customizations (org-wide encryption, Network Firewall inspection, Backup policies) that would otherwise take a 5-person team three months to write from scratch. CfCT would be the substitute if the team had strong CloudFormation skills but no appetite for LZA's opinionated defaults.
A common SAP-C02 trap is to offer "use AWS Organizations to deploy a landing zone" as a plausible answer. AWS Organizations is the underlying management and policy service — it provides OUs, SCPs, and consolidated billing, but it does not provision accounts, configure Log Archive and Audit, or apply guardrail baselines. You always need Control Tower (or a fully hand-built equivalent) on top of Organizations to call the result a "landing zone". Reference: https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html
Account Vending Strategy — Per-Workload, Per-Environment, Per-BU
Once Control Tower and Account Factory are in place, the next design decision is how many accounts to create and along which axis. This is the single most-tested topic in the migration target infrastructure section on SAP-C02.
The three canonical vending axes
- Per-workload: one account per application (e.g., `erp-prod`, `ecommerce-prod`). Maximizes blast-radius isolation and quota separation but explodes account count.
- Per-environment: one account per lifecycle stage within a BU (e.g., `manufacturing-dev`, `manufacturing-test`, `manufacturing-prod`). Simpler to manage but mixes workloads that may need independent quotas.
- Per-business-unit: one account per BU, with environments collapsed inside VPCs or namespaces. Simple but weak isolation.
Real migration landing zones combine all three. The AWS whitepaper "Organizing Your AWS Environment Using Multiple Accounts" recommends a matrix: per-BU × per-environment at minimum, with per-workload accounts introduced for workloads that are regulated, revenue-critical, or have independent quota needs.
For the 60-day scenario with 5 BUs × 3 environments (dev, test, prod), you land on 15 BU accounts plus shared accounts (log archive, audit, network, shared services, security tooling, backup, CI/CD) for a day-one total of roughly 22 accounts. Wave-two and wave-three migrations add per-workload accounts for regulated apps, landing the 12-month target closer to 50–80 accounts — exactly the scale Control Tower and LZA are optimized for.
Shared services accounts
Beyond the BU accounts, your landing zone needs a handful of shared services accounts that host cross-cutting infrastructure:
- Network account — owns the Transit Gateway, centralized VPCs shared via AWS RAM, Route 53 Resolver rules, Network Firewall policies, Direct Connect gateways.
- Shared services account — golden AMIs, base container images in ECR, centralized CI/CD tooling, Systems Manager inventories, AD Connector (or AWS Managed Microsoft AD).
- Security tooling account — delegated administrator for GuardDuty, Security Hub, Inspector, Macie, Config aggregator, IAM Access Analyzer.
- Log archive account — immutable S3 buckets for CloudTrail and Config (Object Lock, MFA delete, cross-account bucket policy).
- Backup account — central AWS Backup vault for cross-account recovery points and vault lock (WORM).
The AWS Organizations management account is immune to SCPs, which means any workload running there operates outside the guardrails you worked so hard to design. On SAP-C02, every answer that puts migration workloads, KMS keys, or security tooling in the management account is wrong. The management account exists to manage the org; that is it. Reference: https://docs.aws.amazon.com/controltower/latest/userguide/best-practices.html
Baseline Guardrails — Mandatory, Strongly Recommended, Elective, and SCP Baselines
Guardrails (officially called controls in modern Control Tower) are the enforceable policies the landing zone applies to every account. SAP-C02 tests your ability to choose the right control type for the right compliance requirement.
The four control categories
- Preventive controls — backed by SCPs, stop a non-compliant API call before it happens (e.g., "deny `cloudtrail:StopLogging` on the organization trail"). Denied calls surface in CloudTrail with an `AccessDenied` `errorCode` citing the service control policy.
- Detective controls — backed by AWS Config rules, observe state after the fact and mark resources non-compliant (e.g., "S3 buckets without default encryption").
- Proactive controls — backed by CloudFormation Hooks, inspect IaC templates before deployment and block non-compliant stacks at provisioning time.
- Governance intent categories — orthogonal to the above, Control Tower also tags each control as Mandatory (always enabled, cannot be disabled — e.g., "disallow public write access to log archive"), Strongly Recommended (recommended best practice, optional to enable — e.g., "enable MFA for root user"), or Elective (opt-in — e.g., "disallow S3 buckets from being publicly readable").
The baseline guardrail set for the 60-day scenario
On day one, the landing zone should apply roughly 30–40 guardrails across the OU tree:
- Root OU: all Mandatory controls (untampered Log Archive, untampered Audit CloudTrail, no public S3 on the two security accounts).
- Security OU: extra Mandatory controls protecting the Audit and Log Archive accounts from destructive operations.
- Workloads OU: Strongly Recommended detective controls for encryption at rest, MFA on root, CloudTrail enabled.
- Sandbox OU: additional preventive SCP denying production services (RDS, EKS, Aurora) to prevent expensive accidents.
- Suspended OU: full deny-all SCP for accounts scheduled for closure.
Custom SCPs to layer on top
On top of the Control Tower controls, LZA or CfCT typically applies custom SCPs:
- `DenyRegionsExceptApproved` — allow only `us-east-1`, `us-west-2`, `eu-west-1` at start (prevents accidental multi-region sprawl).
- `DenyRootUserActions` — block IAM actions taken by the root user.
- `DenyDisableSecurityServices` — deny `guardduty:Delete*`, `securityhub:Disable*`, `config:Delete*`, `macie:Disable*`.
- `RequireIMDSv2` — deny EC2 launches without IMDSv2.
- `DenyLeaveOrganizations` — block `organizations:LeaveOrganization` from member accounts.
- `RequireEncryptedEBS` — deny `RunInstances` without encrypted EBS.
- `RequireTLSForS3` — deny S3 `PutObject` without `aws:SecureTransport = true`.
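To make the pattern concrete, here is a minimal boto3 sketch that creates and attaches `DenyRegionsExceptApproved`; the OU ID is hypothetical, and the `NotAction` list is abbreviated (global services such as IAM and Organizations must be exempted or the policy breaks org-level administration — extend the list per the AWS sample region-deny policy).

```python
import json
import boto3

org = boto3.client("organizations")

# DenyRegionsExceptApproved: block API calls outside the approved regions.
# Global services must be carved out via NotAction; this list is abbreviated.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRegionsExceptApproved",
        "Effect": "Deny",
        "NotAction": [
            "iam:*", "organizations:*", "route53:*",
            "budgets:*", "support:*", "cloudfront:*",
        ],
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "aws:RequestedRegion": ["us-east-1", "us-west-2", "eu-west-1"]
            }
        },
    }],
}

policy = org.create_policy(
    Name="DenyRegionsExceptApproved",
    Description="Restrict activity to approved regions during wave one",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach at the Workloads OU so shared-services accounts are unaffected.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-workloads",  # hypothetical OU ID
)
```

In LZA, the same policy would live in the repository as a JSON file referenced from the security configuration, so it is version-controlled like the rest of the landing zone.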
| Need | Control type | Backed by |
|---|---|---|
| Stop the action before it happens | Preventive | SCP |
| See which accounts drifted | Detective | AWS Config rule |
| Block the CloudFormation template at deploy time | Proactive | CloudFormation Hook |
| Cannot be disabled by member | Mandatory category | Control Tower managed |
| Recommended best practice, opt-in | Strongly Recommended | Control Tower managed |
| Opt-in only | Elective | Control Tower managed |
Reference: https://docs.aws.amazon.com/controltower/latest/userguide/controls.html
Hybrid Connectivity for Migration — Direct Connect, VPN, Transit Gateway
Migration target infrastructure is dead on arrival without working hybrid connectivity. The connectivity layer is what differentiates a usable landing zone from a landing zone that looks good on a slide.
Direct Connect sizing for cutover bandwidth
The 60-day scenario's wave-one cutover needs ~2 Gbps sustained to replicate 15 servers of steady-state deltas while full baseline copies happen over Snowball Edge. AWS Direct Connect delivers dedicated fiber at 1 Gbps, 10 Gbps, or 100 Gbps per port. Two principles apply:
- Size for peak cutover plus steady-state, not for average. Cutovers spike when MGN final-sync runs and DMS switches over.
- Size for BGP failover headroom. If you plan active-active across two Direct Connect circuits, each circuit must carry 100% of traffic alone because the other can fail — so two 10 Gbps ports for 10 Gbps of actual demand.
The AWS Direct Connect resiliency recommendations define four tiers:
- Development (99.9% SLA) — a single Direct Connect connection with a Site-to-Site VPN backup. Acceptable for wave-zero but not for production cutover.
- High Resilience (99.99% SLA) — two Direct Connect connections to two different devices in two different Direct Connect locations. The minimum for production migration traffic.
- Maximum Resilience (99.999% SLA) — separate connections terminating on separate devices in multiple Direct Connect locations, separate AWS regions if applicable.
- Maximum Resilience with SiteLink — SiteLink enables on-prem-to-on-prem over the AWS backbone, useful when the three datacenters need to talk to each other during the migration.
For the scenario, the correct answer is two 10 Gbps Direct Connect connections from the primary US-East datacenter to two Direct Connect locations, plus a pair of Site-to-Site VPN tunnels as the emergency fallback that together meet the 99.99% tier while giving roughly 10 Gbps sustainable cutover headroom.
Transit Gateway as the migration hub
You do not point every Direct Connect circuit at every VPC — you point them at a Transit Gateway in the network account. The TGW becomes the hub of the migration target infrastructure: spoke VPCs for each BU environment attach, the Direct Connect Gateway attaches, the Site-to-Site VPN attaches, inter-region peering attaches. TGW route tables enforce segmentation — Dev spokes cannot reach Prod spokes even though they share the hub. This is the same pattern covered in depth in our Transit Gateway and Hybrid Networking topic; here the important point is that the TGW must exist in the network account before wave one, with inter-account sharing via AWS RAM already configured.
Resolver endpoints for DNS
Hybrid DNS is the quiet killer of migration projects. On-prem applications need to resolve AWS private hostnames (*.vpce.amazonaws.com, *.rds.amazonaws.com), and AWS workloads need to resolve on-prem hostnames (corp.internal, ad.corp.internal) during the migration cut-in period. The solution is Route 53 Resolver endpoints in the network account:
- Inbound endpoints — an ENI per AZ in a VPC that accepts DNS queries from on-prem DNS servers and resolves records in AWS private hosted zones. Shared to other accounts' VPCs via VPC association or Resolver rule RAM shares.
- Outbound endpoints — an ENI per AZ that forwards queries from AWS to on-prem DNS based on Resolver rules (`corp.internal -> 10.1.1.53`).
For the 60-day scenario, the network account needs one inbound + one outbound endpoint per region the landing zone will host workloads in, with rules shared org-wide via RAM.
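As a sketch, creating and associating the `corp.internal` forwarding rule with boto3; the outbound endpoint ID and VPC ID are placeholders:

```python
import uuid
import boto3

r53r = boto3.client("route53resolver", region_name="us-east-1")

# Forward corp.internal queries from AWS to the on-prem DNS servers.
rule = r53r.create_resolver_rule(
    CreatorRequestId=str(uuid.uuid4()),  # idempotency token
    Name="forward-corp-internal",
    RuleType="FORWARD",
    DomainName="corp.internal",
    TargetIps=[{"Ip": "10.1.1.53", "Port": 53}],
    ResolverEndpointId="rslvr-out-exampleid",  # hypothetical outbound endpoint
)

# Associate with the network account's VPC. Other accounts receive the rule
# via an AWS RAM resource share rather than direct association.
r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",  # hypothetical
)
```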
Centralized inspection VPC
Enterprise-grade migration landing zones funnel all east-west and north-south traffic through a centralized inspection VPC in the network account. This VPC contains AWS Network Firewall (or a third-party firewall fleet via Gateway Load Balancer), and TGW route tables redirect all inter-spoke and egress traffic through it. LZA deploys this pattern by default; rolling your own takes weeks of routing-table work.
Retrofitting a centralized inspection VPC after workloads are live is brutal — every spoke needs new routes, every application team has to test for breakage, and every cutover window has to account for firewall policy gaps. On SAP-C02, the correct answer to "how should the inspection layer be introduced?" is in the landing zone before wave one, not after migration completes. Reference: https://docs.aws.amazon.com/network-firewall/latest/developerguide/arch-centralized-deployment.html
Identity Foundation — IAM Identity Center, AD Connector, Trusted Identity Propagation
Migration target infrastructure lives or dies on identity. If engineers cannot SSO into the new accounts on day one, the migration team reverts to per-account IAM users, hard-coded access keys, and spreadsheet permission tracking — a decade of technical debt incurred in three weeks.
IAM Identity Center as the org-wide SSO plane
AWS IAM Identity Center (formerly AWS SSO) is the central identity plane for the entire AWS organization. Enabled org-wide through AWS Organizations, it publishes permission sets (reusable IAM role templates) and assigns them to group × account pairs. Engineers log into https://d-1234567890.awsapps.com/start with their corporate credentials and see a console showing only the accounts and permission sets they have access to. Every login is logged in CloudTrail in the Identity Center home region.
For the 60-day scenario with 2,400 engineers, Identity Center is non-negotiable. Per-account IAM users would require 22 × 2,400 = 52,800 user objects on day one and grow unbounded.
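A hedged sketch of what Account Factory or LZA automates here: creating one permission set and assigning an AD-synced group to a member account via the `sso-admin` API. The instance ARN, account ID, and group ID are placeholders.

```python
import boto3

sso = boto3.client("sso-admin")
instance_arn = "arn:aws:sso:::instance/ssoins-example"  # hypothetical

# One reusable permission set; Identity Center materializes it as an IAM role
# in every account it is assigned to.
ps = sso.create_permission_set(
    InstanceArn=instance_arn,
    Name="ReadOnly",
    Description="Org-wide read-only access for migration engineers",
    SessionDuration="PT8H",  # ISO 8601 duration: 8-hour sessions
)
ps_arn = ps["PermissionSet"]["PermissionSetArn"]

sso.attach_managed_policy_to_permission_set(
    InstanceArn=instance_arn,
    PermissionSetArn=ps_arn,
    ManagedPolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)

# Assign the AD group (synced through AD Connector) to one member account;
# in practice this loop runs per account from the landing zone pipeline.
sso.create_account_assignment(
    InstanceArn=instance_arn,
    TargetId="111122223333",      # hypothetical member account
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=ps_arn,
    PrincipalType="GROUP",
    PrincipalId="a1b2c3d4-example",  # hypothetical Identity Center group ID
)
```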
Identity sources — AD Connector vs Managed AD vs external IdP
Identity Center supports three identity sources:
- Identity Center directory (built-in) — fine for small orgs, no sync from corporate AD. Wrong choice for a 2,400-person enterprise.
- Active Directory — AWS Managed Microsoft AD (fully managed in AWS) or AD Connector (lightweight proxy forwarding LDAP and Kerberos to existing on-prem AD). For the scenario, AD Connector is the classic migration answer: it reuses the corporate AD that already has all 2,400 users, groups, and password policies, so engineers log in with the same credentials they use for email.
- External IdP via SAML 2.0 — Okta, Azure AD / Entra ID, PingFederate. Used when the corporate identity lives outside AD. Modern enterprises increasingly choose this path, but the scenario says "corporate Active Directory", so AD Connector wins.
Trusted Identity Propagation — the pro-tier feature
Trusted Identity Propagation (TIP) is the 2023 Identity Center capability that forwards the actual human user identity from Identity Center through AWS services down to data-level authorization. The engineer logs in as alice@corp → Identity Center issues a token → the token flows through QuickSight, Redshift, S3 Access Grants, EMR, and Lake Formation → each service enforces row-level / column-level / prefix-level authorization for alice@corp specifically, not for a shared role. Before TIP, every service saw one IAM role for "all BI analysts" and had to inspect SAML attributes or session tags to approximate user identity.
For migration target infrastructure, TIP matters when the target architecture includes shared data platforms (the scenario's R&D BU will migrate a data lake). Without TIP, you end up writing clunky ABAC policies per service; with TIP, Lake Formation and Redshift natively honor AD group memberships.
- Permission set: reusable IAM policy template in Identity Center that is materialized as an IAM role in each member account when assigned.
- AD Connector: AWS Directory Service mode that proxies LDAP/Kerberos to on-prem Active Directory without replicating data into AWS.
- AWS Managed Microsoft AD: a fully managed Microsoft AD in AWS, typically used when AWS needs its own AD (e.g., FSx for Windows, Amazon WorkSpaces) or when there is no on-prem AD.
- Trusted Identity Propagation (TIP): capability that forwards the Identity Center user identity through AWS analytics services for fine-grained authorization at the data layer.
- Delegated administrator (Identity Center): member account authorized to manage permission sets and assignments without management-account access.
- Reference: https://docs.aws.amazon.com/singlesignon/latest/userguide/trustedidentitypropagation.html
AD Connector is a proxy, not a cache. Every authentication round-trips from AWS to the on-prem AD controllers. The hybrid connectivity (Direct Connect + Site-to-Site VPN backup, Route 53 Resolver for DNS) must be lit before AD Connector is provisioned, or Identity Center silently fails authentication and migration engineers get locked out. On SAP-C02, any answer sequence that sets up AD Connector before connectivity is wrong. Reference: https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_ad_connector.html
Observability Foundation — CloudTrail, Security Lake, GuardDuty, Config Aggregator
Observability is the second-biggest day-one investment after identity. Every account must emit its logs, findings, and configuration state into the central security tooling account and the log archive — on day one, not retroactively.
Organization-wide CloudTrail
Control Tower creates an organization trail by default: one CloudTrail in the management account that records management events for every member account, delivering to an S3 bucket in the log archive account. On top of that, LZA adds data events (S3 object-level, Lambda invocations, DynamoDB item-level) for regulated workload accounts, plus CloudTrail Lake for SQL queries over historical events.
For the scenario, the baseline is: organization trail, data events on log archive buckets, CloudTrail Lake with a 365-day retention event data store.
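A minimal sketch of the CloudTrail Lake piece, assuming it runs from the account that owns the organization trail; the store name is illustrative:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# CloudTrail Lake event data store collecting org-wide events with the
# 365-day retention called for in the baseline.
store = cloudtrail.create_event_data_store(
    Name="org-management-events",
    MultiRegionEnabled=True,
    OrganizationEnabled=True,  # requires management or delegated admin account
    RetentionPeriod=365,       # days
)
print(store["EventDataStoreArn"])
```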
Amazon Security Lake for normalized logs
Amazon Security Lake ingests logs from AWS services (CloudTrail, VPC Flow Logs, Route 53 Resolver, Security Hub findings, EKS Audit Logs) plus custom sources, normalizes them to OCSF (Open Cybersecurity Schema Framework), and stores them as partitioned Parquet in S3 in the security tooling account. Subscribers (SIEMs like Splunk, Datadog, or custom Athena queries) consume the normalized data without each tool having to parse raw logs.
For the scenario, Security Lake is the right day-one answer because the 12-person security team wants a single queryable surface and at least one third-party SIEM will subscribe later.
GuardDuty with delegated administrator
GuardDuty is enabled org-wide via the Organizations integration, with the security tooling account as the delegated administrator. Once delegated, the security team can enable GuardDuty (and its protection plans: S3, EKS, Malware, RDS, Lambda) across all current and future member accounts without touching the management account. Findings aggregate in the delegated admin and route to Security Hub.
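A sketch of the delegation flow with placeholder account IDs; note that in practice the second call runs under the security tooling account's credentials, not the management account's:

```python
import boto3

# From the management account: designate the security tooling account as
# the GuardDuty delegated administrator for the organization.
guardduty = boto3.client("guardduty", region_name="us-east-1")
guardduty.enable_organization_admin_account(
    AdminAccountId="444455556666"  # hypothetical security tooling account
)

# From the security tooling account: auto-enroll all current and future
# member accounts under its detector.
gd_admin = boto3.client("guardduty", region_name="us-east-1")
detector_id = gd_admin.list_detectors()["DetectorIds"][0]
gd_admin.update_organization_configuration(
    DetectorId=detector_id,
    AutoEnableOrganizationMembers="ALL",
)
```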
AWS Config aggregator
AWS Config must run in every region in every account to evaluate compliance against Config rules (including the detective guardrails from Control Tower). Each account's Config ships to the log archive bucket. A Config aggregator in the security tooling account pulls state from every account × region into one searchable view, so the security team can answer "which accounts have unencrypted EBS?" with a single query.
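The "which accounts have unencrypted EBS?" question from the paragraph above, expressed as a single advanced query against a hypothetical aggregator name:

```python
import boto3

config = boto3.client("config", region_name="us-east-1")

# One query across every account x region in the aggregator.
resp = config.select_aggregate_resource_config(
    ConfigurationAggregatorName="org-aggregator",  # hypothetical
    Expression=(
        "SELECT accountId, awsRegion, resourceId "
        "WHERE resourceType = 'AWS::EC2::Volume' "
        "AND configuration.encrypted = 'false'"
    ),
)
for row in resp["Results"]:
    print(row)  # each result is a JSON string describing one matching volume
```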
SAP-C02 scenarios frequently describe an auditor arriving three months into a migration asking "show me every admin login across the 40 accounts last Tuesday". If CloudTrail, Security Lake, and GuardDuty were not enabled before wave one, you have nothing. The correct answer is always to enable these before the first migrated workload lands, not after the first incident. Reference: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html
Encryption Foundation — KMS Per-OU Policies and S3 Bucket Keys
Encryption at rest is easy to add and excruciating to retrofit. Existing unencrypted RDS requires a snapshot-and-restore migration; existing S3 buckets require object re-upload. The landing zone must enforce encryption defaults before workloads land.
KMS key organization
Landing zone encryption design typically follows three layers:
- Per-account KMS keys for account-scoped services (EBS default encryption, RDS, Secrets Manager, Parameter Store). Created by LZA during account vending, with key policies restricting use to the owning account.
- Per-OU shared keys for cross-account workflows (e.g., the encryption key for backups shared to the backup account, the encryption key for logs shared to the log archive). Key policies use `aws:PrincipalOrgID` conditions to restrict access to the org.
- Per-service KMS keys in the security tooling account for centralized encryption of Security Lake, Config aggregation, and Security Hub findings.
S3 Bucket Keys to cut KMS cost
Every S3 PUT with SSE-KMS generates a KMS API call ($0.03 per 10,000). High-volume log buckets (CloudTrail, VPC Flow Logs) can emit millions of PUTs/day — a five-figure monthly KMS bill purely from encryption overhead. S3 Bucket Keys solve this by generating an intermediate bucket-level data key that encrypts objects for a time window, reducing KMS requests by up to 99%. Enable S3 Bucket Keys on every high-throughput bucket in the landing zone — it is a one-line configuration change with no downside.
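A sketch of that one-line change via `put_bucket_encryption`, with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Enable SSE-KMS with a Bucket Key on a high-throughput log bucket.
s3.put_bucket_encryption(
    Bucket="org-log-archive-cloudtrail",  # hypothetical
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/example",
            },
            # One bucket-level data key replaces per-object KMS calls.
            "BucketKeyEnabled": True,
        }]
    },
)
```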
Encryption enforcement via SCP
The landing zone enforces encryption via preventive SCPs attached at the root:
- Deny `s3:PutObject` without `s3:x-amz-server-side-encryption`.
- Deny `ec2:RunInstances` without encrypted EBS.
- Deny `rds:CreateDBInstance` without `StorageEncrypted=true`.
- Deny `dynamodb:CreateTable` without an SSE-KMS specification.
Plus detective Config rules to catch anything that slips past (though, with properly written SCPs, nothing should).
A single API call (`ec2:EnableEbsEncryptionByDefault`) per region per account flips every future EBS volume to encrypted. Control Tower and LZA both include this in the account-baseline customization. On SAP-C02, any answer that relies on developers to remember `--encrypted` per volume is wrong — the control must be enforced at the account level. Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default
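A sketch of the per-region loop using the documented EC2 APIs; the region list mirrors the approved-region SCP, and key handling is illustrative:

```python
import boto3

REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]  # approved-region list

for region in REGIONS:
    ec2 = boto3.client("ec2", region_name=region)
    ec2.enable_ebs_encryption_by_default()  # all future volumes encrypted
    # Optionally point the default at a per-region CMK (or a multi-Region
    # key replica) instead of the AWS-managed aws/ebs key:
    # ec2.modify_ebs_default_kms_key_id(KmsKeyId=region_cmk_arn[region])
    status = ec2.get_ebs_encryption_by_default()
    print(region, status["EbsEncryptionByDefault"])
```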
Cost and Tagging Foundation — Tag Policies, Budgets, Cost Allocation
The last pillar of the landing zone is financial. If the migration completes and the CFO cannot answer "what did Manufacturing BU spend last quarter?", the landing zone was a failure regardless of how clean its security posture is.
Tag policies enforced by Organizations
Tag policies (a policy type in AWS Organizations, analogous to SCPs but for tags) define allowed tag keys, allowed tag values per key, and inheritance. The landing zone should define a mandatory tag set on day one:
- `cost-center` (one of a fixed list per BU)
- `environment` (`dev`, `test`, `prod`)
- `workload` (application identifier)
- `data-classification` (`public`, `internal`, `confidential`, `restricted`)
- `owner-email` (corporate email address)
Tag policies do not block untagged resources (that requires an SCP). The landing zone therefore pairs tag policies with an SCP that denies `ec2:RunInstances` and `rds:Create*` without the mandatory tags, and a Config rule that flags retroactive drift.
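For illustration, a sketch of one tag-policy entry created through the Organizations API; the policy body follows the documented tag-policy syntax (`tag_key`, `tag_value`, `enforced_for` with `@@assign` operators), and the policy name is hypothetical:

```python
import json
import boto3

org = boto3.client("organizations")

# Tag policy for the environment key: fixed value list, compliance enforced
# for EC2 instances. Value validation is the tag policy's job; presence
# enforcement still belongs to the paired SCP.
tag_policy = {
    "tags": {
        "environment": {
            "tag_key": {"@@assign": "environment"},
            "tag_value": {"@@assign": ["dev", "test", "prod"]},
            "enforced_for": {"@@assign": ["ec2:instance"]},
        }
    }
}

org.create_policy(
    Name="mandatory-environment-tag",
    Description="Allowed values for the environment tag key",
    Type="TAG_POLICY",
    Content=json.dumps(tag_policy),
)
```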
Cost allocation tags activated org-wide
In the billing console of the management account, every tag key that drives chargeback must be activated as a cost allocation tag before it appears in Cost Explorer or the Cost and Usage Report. Missing this step is the single most common reason "we tagged everything but still cannot slice the bill by BU" — on SAP-C02 it is a stock trap.
AWS Budgets with actions and SNS alarms
Every BU account receives a monthly AWS Budget sized to expected spend, with thresholds at 50%, 80%, and 100% delivering SNS notifications to the BU's FinOps lead. Production accounts add a Budget Action that auto-applies a restrictive SCP (blocking new resource creation but not disrupting running workloads) at 120%, preventing runaway bills from a stuck Lambda or a leaked access key.
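A sketch of one BU budget with its 80% actual-spend SNS alert; account ID, amount, and topic ARN are placeholders, and the 120% Budget Action is a separate `create_budget_action` call not shown here:

```python
import boto3

budgets = boto3.client("budgets")

# Monthly cost budget for one BU production account.
budgets.create_budget(
    AccountId="111122223333",  # hypothetical BU account
    Budget={
        "BudgetName": "manufacturing-prod-monthly",
        "BudgetType": "COST",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "50000", "Unit": "USD"},
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{
            "SubscriptionType": "SNS",
            "Address": "arn:aws:sns:us-east-1:111122223333:finops-alerts",
        }],
    }],
)
```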
Cost and Usage Report to the log archive
The management account writes the Cost and Usage Report (CUR) in Parquet format to the log archive bucket, partitioned by date. Athena + QuickSight in the security/finance tooling account then build the FinOps dashboards — cost by BU, by environment, by workload, by service — all derived from the tag policies you enforced on day one.
On SAP-C02, the "business wants to track migration spend per BU" scenario resolves to tag policy + SCP + activated cost allocation tags + CUR → Athena. Any answer that uses only Cost Explorer manually, or relies on account-level splits without tags, is wrong because it cannot scale beyond a few accounts or sustain audit. Reference: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/configurecostallocreports.html
60-Day Landing Zone Build Plan
For the scenario, here is the week-by-week plan that SAP-C02 expects you to reproduce.
Weeks 1–2: Foundation
- Day 1–2: Deploy AWS Control Tower in the management account. Validate log archive and audit accounts.
- Day 3–4: Deploy Landing Zone Accelerator via CloudFormation, commit initial config YAML to CodeCommit, run pipeline.
- Day 5–10: Design OU tree (Security, Infrastructure, Workloads → Prod/Non-Prod, Sandbox, Suspended), review with BU leads.
- Day 10–14: Vend shared services accounts (network, shared services, security tooling, backup, CI/CD). Delegate GuardDuty, Security Hub, Config aggregator, Macie, IAM Access Analyzer to the security tooling account.
Weeks 3–4: Connectivity and Identity
- Day 15–17: Provision two 10 Gbps Direct Connect connections from US-East datacenter to two DX locations. Establish Site-to-Site VPN backup.
- Day 18–20: Deploy Transit Gateway in the network account, attach Direct Connect Gateway, attach VPN, configure route tables.
- Day 21–23: Provision Route 53 Resolver inbound + outbound endpoints in the network account, create forwarding rules for
corp.internal, share via RAM. - Day 24–26: Integrate IAM Identity Center with AD Connector pointing at on-prem AD (DNS already works via Resolver). Test SSO login.
- Day 27–30: Define permission sets (Admin, PowerUser, ReadOnly, BillingReadOnly, SecurityAuditor, NetworkAdmin). Assign to AD groups. Enable Trusted Identity Propagation for the upcoming data lake workload.
Weeks 5–6: Security, Encryption, Cost
- Day 31–35: Deploy centralized inspection VPC with AWS Network Firewall. Update TGW route tables to funnel east-west and egress traffic through inspection.
- Day 36–40: Enable org-wide CloudTrail with data events, CloudTrail Lake, Security Lake. Activate GuardDuty (all protection plans), Security Hub, Inspector, Macie across all accounts.
- Day 41–44: Define KMS key hierarchy. Enable EBS default encryption, S3 default encryption, RDS force SSL across accounts via SCP. Turn on S3 Bucket Keys on log buckets.
- Day 45–48: Define tag policies. Apply mandatory-tag SCPs. Activate cost allocation tags. Configure AWS Budgets with actions.
Weeks 7–8: Hardening and Wave Zero
- Day 49–52: Run guardrail compliance audit. Remediate drift. Run GameDay exercise (simulated account compromise, simulated Direct Connect outage).
- Day 53–56: Onboard wave-zero workload (Corporate IT intranet test environment). Validate end-to-end: SSO → network → deploy → logs in log archive → findings in Security Hub → cost in Budgets.
- Day 57–60: Document runbooks, publish the landing zone handbook, train BU migration leads. Hand off to wave-one migration kickoff.
A common mistake is to turn on Macie, Inspector, GuardDuty Malware Protection, Detective, and every Config conformance pack simultaneously on day 1. The cost spike plus finding volume will overwhelm the 12-person security team. The correct sequence for the 60-day plan is: GuardDuty + Security Hub + Config + CloudTrail first; then Macie for S3 PII discovery; Inspector for vulnerability scanning; Detective only if there is a clear investigative use case. SAP-C02 rewards the phased answer. Reference: https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-standards.html
Common Traps on SAP-C02 Migration Target Infrastructure Questions
Professional-tier migration target infrastructure questions are built around the exact same set of traps. Memorize these and you will eliminate at least one wrong answer per question.
- Control Tower Landing Zone vs LZA vs CfCT confusion. Control Tower = the baseline service. LZA = the AWS Solutions-maintained accelerator. CfCT = the legacy but still supported customization pipeline. They are not synonyms.
- Putting workloads in the management account. Always wrong. SCPs do not apply to the management account, and any compromised admin there owns the entire org.
- Assuming Account Factory alone is "a landing zone". Account Factory vends accounts but does not provide connectivity, identity, or observability. LZA or equivalent customization layer is required.
- Forgetting that AD Connector needs reachable AD. No hybrid DNS + no Direct Connect/VPN = broken Identity Center. Connectivity is a prerequisite for identity.
- Single Direct Connect for production cutover. A single DX at 99.9% SLA is for development. Production cutovers require the 99.99% tier (two DX, two DX locations).
- Ignoring resolver endpoints. Hybrid apps cannot resolve each other's hostnames without Route 53 Resolver inbound/outbound endpoints. Missing this breaks day-one.
- Enabling GuardDuty/Security Hub without delegated administrator. Makes the management account a daily-use account for the security team, breaking the hygiene principle.
- Retrofitting encryption after migration. RDS must be re-created from snapshot to add encryption. Always enforce encryption before data lands.
- Relying on manual tagging. Tag policies declare allowed tag keys/values; SCPs enforce presence. Both are required or tagging coverage silently degrades.
- Confusing Customer Managed Key with AWS Managed Key. Customer managed keys support editable key policies, rotation control, cross-account access, and granular permissions. AWS managed keys allow none of that: their key policies cannot be modified and they cannot be used cross-account. Regulated workloads need CMKs.
SAP-C02 answer choices will pair "AWS Control Tower" with "Landing Zone Accelerator" and expect you to pick the right one. Control Tower is the opinionated baseline (OUs, guardrails, Log Archive, Audit, Identity Center). LZA is the customization layer that sits on top. The correct answer for "production-ready landing zone in 60 days for a regulated enterprise" is both together, not either alone. Reference: https://docs.aws.amazon.com/solutions/latest/landing-zone-accelerator-on-aws/solution-overview.html
Related Topics and Migration Target Infrastructure Boundaries
Migration target infrastructure is intentionally broad because SAP-C02 tests the boundary between domains. Related topics to cross-reference:
- Multi-Account Governance — the OU design, SCP evaluation, and Control Tower control internals in depth.
- Transit Gateway and Hybrid Networking — TGW route tables, DX resilience tiers, overlapping CIDRs, central egress.
- Migration Assessment and 7Rs — how you classify the workloads that will land in this infrastructure.
- Migration Tooling (MGN, DMS, DataSync, Snow family) — how workloads actually move onto this infrastructure.
- Centralized Security Logging — deeper dive on CloudTrail Lake, Security Lake OCSF, cross-account observability.
- Encryption and Certificate Management — KMS multi-region keys, ACM Private CA, CloudHSM.
The migration target infrastructure topic owns the pre-migration design — account vending, guardrails, connectivity, identity, observability, encryption, cost. It does not own the migration execution (that is the tooling topic) or the application target architecture (that is migration-target-infrastructure's sibling, application target architecture, which picks EC2 instance types, RDS engines, FSx variants).
FAQ
Q1. Why can't I just use AWS Organizations plus a few SCPs instead of AWS Control Tower?
You can technically build a landing zone with hand-written CloudFormation and AWS Organizations, but the effort is enormous: bootstrapping Log Archive and Audit accounts, writing and maintaining 30+ guardrail SCPs and Config rules, integrating Identity Center, configuring org trail, configuring Config aggregator, designing drift detection. Teams that take this path spend 3–6 months building what Control Tower does in an afternoon, and they rarely keep the hand-built setup as current as Control Tower's managed updates. On SAP-C02, "build it from scratch" is almost never the correct answer for a 60-day deadline.
Q2. Should every business unit get its own OU or just its own accounts inside a shared Workloads OU?
Both, layered. The Workloads OU typically splits into Prod and Non-Prod sub-OUs, and inside those you create per-BU groupings. The reason is policy inheritance: a guardrail that says "production workloads must use encrypted EBS" is inherited by every BU's prod account automatically. If you split by BU at the top level instead, you end up duplicating prod-specific policies across 5 BU OUs and forgetting to update one when the rule changes. SAP-C02 favors policy-driven OU design over org-chart-mirroring OU design.
Q3. For the 60-day scenario, would Landing Zone Accelerator, CfCT, or pure Account Factory Customization be correct?
Landing Zone Accelerator is the best match. The scenario demands compliance alignment (SOC 2, ISO 27001), centralized inspection, org-wide encryption, backup policies, and governance from day one — all of which LZA ships pre-built. CfCT requires your team to write the CloudFormation baselines themselves, which blows the 60-day timeline. Account Factory Customization alone only handles per-account blueprint customization during vending; it cannot do the network inspection VPC or the Security Lake deployment. On SAP-C02, LZA is the defensible choice when speed and compliance coexist.
Q4. How should I handle the transition from AD Connector to a future cloud-native IdP like Azure AD?
Build Identity Center with AD Connector first because that is what matches the scenario's "corporate Active Directory". Plan the Azure AD (Entra ID) transition as a separate project: you change the Identity Center identity source from AD to external SAML IdP, migrate permission set assignments from AD groups to Entra groups (SCIM sync), and retire AD Connector. The migration target infrastructure does not need to anticipate this — Identity Center supports switching identity sources without tearing down permission sets. On SAP-C02, do not overcomplicate day-one design with speculative future identity changes.
Q5. Why Security Lake if CloudTrail Lake already supports SQL queries?
CloudTrail Lake is CloudTrail-only. Security Lake ingests CloudTrail plus VPC Flow Logs, Route 53 Resolver query logs, Security Hub findings, EKS audit logs, and custom OCSF-formatted sources into a unified schema. When a SIEM or the security team investigates an incident, they want to join CloudTrail events with VPC Flow Logs with GuardDuty findings — only Security Lake makes that a one-query operation. CloudTrail Lake is great for CloudTrail-only compliance queries but does not replace a normalized log lake on SAP-C02-level answers.
Q6. What is the cost trade-off of enabling S3 Bucket Keys on every bucket?
S3 Bucket Keys reduce KMS requests per object operation by up to 99% for buckets with SSE-KMS, producing substantial savings on high-throughput log, analytics, and backup buckets. The only trade-off is that the bucket-level data key is cached for a time window, so if you revoke KMS key grants, it takes up to 24 hours for the revocation to fully propagate to the bucket. For the vast majority of landing zone use cases (log archive, Security Lake, backup) that latency is acceptable. For scenarios with second-level revocation requirements (classified workloads), use object-level SSE-KMS without Bucket Keys for those specific buckets.
Q7. How do I size Direct Connect for migration cutover without over-provisioning for steady state?
Model three traffic components. First, baseline replication: MGN agents push deltas from on-prem to AWS, typically 100–500 Mbps per active replication. Second, final sync during cutover: a short burst when all pending writes flush, often 2–4× the baseline for 15–60 minutes. Third, steady-state hybrid traffic: user sessions between on-prem and migrated apps during the coexistence period, which can be 100 Mbps to several Gbps. Add the three at expected peak overlap, double for active-active redundancy, then round up to the next Direct Connect speed tier. For the 60-day scenario's wave one, two 10 Gbps connections are correct — overkill for the replication itself but appropriately sized when steady-state hybrid traffic and redundancy are layered in. Once all workloads have migrated, Direct Connect bandwidth can be reviewed and downgraded.
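A back-of-envelope version of that model in Python; the inputs are illustrative, not facts from the scenario text:

```python
# Direct Connect sizing following the three-component model above.
DX_PORT_TIERS_MBPS = [1_000, 10_000, 100_000]  # dedicated port speeds

def size_dx_port(baseline_mbps: float, burst_multiplier: float,
                 steady_state_mbps: float) -> int:
    """Smallest DX port tier where one circuit alone carries peak traffic."""
    peak = baseline_mbps * burst_multiplier + steady_state_mbps
    # Active-active pair: each circuit must carry 100% if the other fails,
    # so the per-circuit requirement equals the full peak.
    return next(t for t in DX_PORT_TIERS_MBPS if t >= peak)

# Wave one, hypothetical inputs: ~2 Gbps replication baseline, 3x final-sync
# burst, ~2 Gbps steady-state hybrid traffic -> 8 Gbps peak per circuit.
print(size_dx_port(2_000, 3.0, 2_000))  # 10000 Mbps, i.e. two 10 Gbps ports
```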
Exam Signal Summary
Migration target infrastructure sits in Domain 4 (20% weight) and in the 60-day scenario lens, SAP-C02 expects you to:
- Pick Control Tower + Landing Zone Accelerator as the landing zone tool pair when speed and compliance both matter.
- Design OU tree as Security + Infrastructure + Workloads (Prod/Non-Prod) + Sandbox + Suspended, never mirroring org chart.
- Size Direct Connect to the 99.99% resilience tier for production cutover, plus VPN backup, all terminating on a Transit Gateway in the network account.
- Deploy Route 53 Resolver inbound + outbound endpoints before AD Connector.
- Use IAM Identity Center with AD Connector for the corporate AD case, and enable Trusted Identity Propagation when data-lake style workloads migrate.
- Enable org-wide CloudTrail, Security Lake, GuardDuty (delegated admin), Config aggregator before wave one.
- Enforce encryption via account-level defaults (EBS, S3), SCPs, and S3 Bucket Keys on high-throughput buckets.
- Pair tag policies with SCPs and activated cost allocation tags; alarm via AWS Budgets with Budget Actions.
If an answer choice skips any of these on the grounds of "do it later", it is almost certainly the SAP-C02 wrong answer. The landing zone is the migration's foundation, and the exam scores candidates who treat it that way.