What Is Transit Gateway and Hybrid Networking on SAP-C02
Transit Gateway and hybrid networking is the single most heavily weighted networking topic on SAP-C02, surfacing in 8 to 12 of the 75 scored questions and hitting a heat score of 0.90 in our exam telemetry. When an exam scenario mentions "50 VPCs," "two AWS Regions," "on-premises datacenter," "overlapping CIDRs," "central egress," "inspection," "shared services," or "isolated dev/prod," the correct answer almost always involves Transit Gateway and hybrid networking components working together. SAP-C02 does not test whether you have heard of Transit Gateway — it tests whether you can design Transit Gateway and hybrid networking architectures that satisfy bandwidth, latency, resilience, segmentation, and cost constraints at the same time.
Hybrid networking in AWS means every path a packet can take between your on-premises datacenter, a colocation, another cloud, and an AWS VPC. The Transit Gateway and hybrid networking surface includes AWS Transit Gateway itself (the regional hub), AWS Direct Connect (dedicated private fiber), AWS Site-to-Site VPN (IPSec over public internet), AWS Direct Connect SiteLink (on-prem to on-prem via AWS backbone), Virtual Private Gateway (legacy single-VPC attachment), Direct Connect Gateway (multi-Region Private VIF aggregator), Transit VIF (Transit Gateway pairing with Direct Connect), Transit Gateway inter-Region peering, Transit Gateway Connect for SD-WAN, multicast, appliance mode, and the Gateway Load Balancer inspection pattern. Master these Transit Gateway and hybrid networking components together — not in isolation — and roughly one third of the SAP-C02 networking questions become mechanical.
This Transit Gateway and hybrid networking note is written at Pro depth. It covers every attachment type, every route-table behavior, every Direct Connect resilience tier with the exact SLA percentages, overlapping CIDR remediation patterns, central egress blueprints, inspection VPC with GWLB, segmentation for regulated workloads, and a canonical "50-VPC org with isolated dev/prod" scenario walk-through. We also include a decision matrix, a diagnostic flow for troubleshooting asymmetric routing, and a full set of SAP-C02-style traps around Transit Gateway and hybrid networking behaviors that experienced engineers still get wrong.
Transit Gateway and Hybrid Networking Explained in Plain Language
Transit Gateway and hybrid networking is heavy on jargon, but the underlying mental model is simple once you map it to everyday systems. The following analogies anchor Transit Gateway and hybrid networking concepts before we get into route tables, BGP, and inspection pipelines.
Analogy 1 — The Central Bus Terminal (Transit Gateway as Hub)
Imagine every VPC is a neighborhood and every Site-to-Site VPN or Direct Connect link is a bus route from another city. Before Transit Gateway, you had to run a direct shuttle van between every pair of neighborhoods (VPC peering). Ten neighborhoods meant 45 shuttle vans. Fifty neighborhoods meant 1,225 shuttle vans — impossible to operate. Transit Gateway is a central bus terminal in the middle of the city. Every neighborhood runs one bus into the terminal, every long-distance route (VPN, Direct Connect) arrives at the terminal, and the terminal dispatches passengers to the correct onward platform. Transit Gateway route tables are the platform assignment boards at the terminal: "passengers from Dev neighborhood may only board buses to Shared-Services; passengers from Prod may board any bus." Transit Gateway peering is a sister terminal in another city — a direct luxury coach runs on the AWS backbone between the two, no rerouting through the public streets. Transit Gateway Connect is a tour-bus operator (SD-WAN appliance) that plugs into the terminal and sells connectivity with its own ticket system (GRE tunnels and BGP) without you having to manage individual VPN tunnels.
Analogy 2 — The Electricity Grid (Direct Connect Resilience Tiers)
AWS Direct Connect is like the electricity grid feeding your datacenter. A single-location, single-connection Direct Connect is one power cable from one substation — if the cable is cut or the substation trips, you lose all power. AWS offers a 99.9% SLA on this setup, which sounds impressive until you remember 0.1% is about eight hours of darkness per year. The Highly Resilient model (99.99% SLA) is two cables from two separate substations in the same metro area — a construction crew cutting one street does not black out your building. The Maximum Resilient model (99.99% SLA) is four cables from two substations spread across two different metros — even a regional grid failure leaves you lit. Link Aggregation Groups (LAG) are bundling four skinny cables into one fat trunk so you can pull more current; Bidirectional Forwarding Detection (BFD) is the fast-acting circuit breaker that trips within a second when a cable fails instead of waiting ninety seconds for the slower fuses (BGP hold timers). SiteLink is the utility selling your spare grid capacity to a neighboring building — your two offices talk to each other over the AWS backbone instead of through the local distribution network.
Analogy 3 — The Office Building Floor Plan (Segmentation and Inspection VPC)
Think of your multi-VPC AWS footprint as a corporate office tower. Each VPC is a floor. Dev, Staging, Prod, and Shared-Services are separate floors. Transit Gateway route tables are the key-card access matrix programmed into the elevator: the Dev key card only stops at Dev and Shared-Services; the Prod key card only stops at Prod and Shared-Services; neither can press the other floor's button. The inspection VPC with Gateway Load Balancer is the mailroom and security-guard floor that every outbound package must pass through — a GENEVE tunnel wraps each envelope, the guard (third-party firewall appliance) screens it for contraband, and the envelope is handed back for onward delivery. Appliance mode is the rule that says "whichever guard checked your outbound envelope must also check the reply" so the conversation is not split between two guards who cannot see each other's logs (symmetric flow). Central egress VPC is the single loading dock for the entire building — every floor's outbound internet traffic converges there so you only pay for one set of NAT Gateways and one fleet of firewalls instead of replicating them on every floor.
With the bus terminal, the electricity grid, and the office floor plan anchored, Transit Gateway and hybrid networking scenarios become a matter of identifying which metaphor applies and following its logic.
Transit Gateway Attachments — The Six Types You Must Know
Every Transit Gateway and hybrid networking design begins with attachments. A Transit Gateway is a regional object; you attach spokes to it, and each attachment has a type. SAP-C02 will test you on the tradeoffs and limitations of each attachment type, often by describing a workload and asking which attachment or combination is correct.
VPC Attachment
A VPC attachment creates an Elastic Network Interface in one subnet per Availability Zone inside the spoke VPC. You then add routes in the VPC's route tables pointing spoke-bound CIDRs at the Transit Gateway — unlike a Virtual Private Gateway, a Transit Gateway does not propagate routes into VPC route tables. Traffic from the VPC toward any CIDR owned by another spoke flows to the Transit Gateway ENI, across the hub, and into the destination. A VPC attachment is the workhorse of Transit Gateway and hybrid networking designs. Attach in every AZ the VPC uses — an AZ without an attachment subnet leaves its workloads unable to reach the Transit Gateway at all.
VPN Attachment
A VPN attachment terminates a Site-to-Site VPN directly on the Transit Gateway instead of on a legacy Virtual Private Gateway. Two IPSec tunnels are provisioned automatically for resilience. BGP is strongly recommended (dynamic) over static routes so failover is automatic. VPN attachments can be accelerated — traffic hits the nearest AWS edge Point-of-Presence and rides the AWS backbone to the Region, dramatically reducing jitter for customer gateways in remote geographies. Accelerated VPN costs more and is not available in every Region but is the default answer for "customer gateway in Africa, workload in us-east-1, VPN latency is unstable."
Direct Connect Gateway Attachment (via Transit VIF)
You cannot attach Direct Connect directly to a Transit Gateway. The topology is: Direct Connect physical connection -> Transit VIF -> Direct Connect Gateway -> Transit Gateway. A single Direct Connect Gateway can peer with up to three Transit Gateways (across up to three Regions), enabling multi-Region hybrid networking over one set of Direct Connect ports. Transit VIF is limited to one per Direct Connect connection — this is a common SAP-C02 trap because candidates assume VIFs are unlimited.
Transit Gateway Peering Attachment
Two Transit Gateways in different Regions (or the same Region, in different accounts) can peer with each other. Inter-Region Transit Gateway peering rides the AWS global backbone, is encrypted at the AWS hardware layer, supports static routes only (no BGP between Transit Gateways), and enables multi-Region Transit Gateway and hybrid networking designs without running VPN tunnels between Regions. Same-Region peering across accounts is equally valid and is often used during mergers and acquisitions.
Transit Gateway Connect Attachment
Connect attachments are for SD-WAN appliances and third-party network virtual appliances running inside a VPC. The appliance terminates GRE tunnels against the Transit Gateway and exchanges routes via BGP. Connect attachments deliver up to 5 Gbps per GRE tunnel and up to 20 Gbps aggregate per Connect attachment, far above Site-to-Site VPN limits. If a scenario describes "Cisco Meraki SD-WAN" or "VeloCloud appliance needs high-bandwidth BGP," Transit Gateway Connect is the correct answer.
Transit Gateway Multicast Domain (Attachment-adjacent)
Multicast is not a traditional attachment type but a feature that rides on VPC attachments. Transit Gateway can act as a multicast router, letting one source send a packet to many receivers across attached VPCs. This is narrow — financial market data feeds, legacy enterprise applications, IPTV — but SAP-C02 has asked about it because no other AWS service provides native multicast between VPCs.
Transit Gateway Attachment — A Transit Gateway attachment is the connection point between a spoke (VPC, VPN, Direct Connect Gateway, another Transit Gateway, or a Connect peer) and the regional Transit Gateway hub. Each attachment is associated with exactly one Transit Gateway route table, and its routes can be propagated into zero or more route tables. Attachments are the atomic unit of segmentation in Transit Gateway and hybrid networking architectures. Reference: https://docs.aws.amazon.com/vpc/latest/tgw/tgw-attachments.html
Transit Gateway Route Tables, Association, and Propagation
Transit Gateway route tables are the control plane that turns a hub into a segmentation engine. Every Transit Gateway attachment has exactly one association with a route table — that association determines which route table's routes the attachment consults when deciding where to send a packet. Propagation is the inverse: an attachment can propagate its CIDRs into many route tables, making those CIDRs reachable by any attachment associated with those tables. Association is "where I read routes from"; propagation is "where I advertise my routes to."
Static vs Propagated Routes
Propagated routes are learned automatically (from a VPC attachment's CIDR, or from BGP on a VPN or Direct Connect attachment). Static routes are manually entered. Between different prefixes, the most specific prefix wins; when a static route and a propagated route cover the same prefix, the static route wins. Static routes are critical for blackholing a prefix (route to a blackhole target) and for overriding BGP when you must force traffic through an inspection VPC.
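The selection rule is easy to verify with a short sketch (hypothetical CIDRs and attachment targets; the standard-library ipaddress module does the prefix math):

```python
import ipaddress

def select_route(routes, dest_ip):
    """Pick the route for dest_ip: longest prefix wins; for identical
    prefixes, a static route beats a propagated one."""
    ip = ipaddress.ip_address(dest_ip)
    candidates = [r for r in routes if ip in ipaddress.ip_network(r["cidr"])]
    if not candidates:
        return None
    # Rank by prefix length first, then static over propagated.
    return max(candidates,
               key=lambda r: (ipaddress.ip_network(r["cidr"]).prefixlen,
                              r["type"] == "static"))

routes = [
    {"cidr": "10.1.0.0/16", "type": "propagated", "target": "vpc-prod"},
    {"cidr": "10.1.0.0/16", "type": "static",     "target": "inspection"},
    {"cidr": "10.1.7.0/24", "type": "propagated", "target": "vpc-prod-az2"},
]
print(select_route(routes, "10.1.7.9")["target"])    # /24 is more specific
print(select_route(routes, "10.1.200.1")["target"])  # static wins the /16 tie
```

This is why a static `0.0.0.0/0` toward an inspection VPC reliably overrides a propagated default learned from BGP.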
The Isolated vs Shared Route Table Pattern
The canonical Transit Gateway and hybrid networking segmentation design uses one route table per domain:
- Prod Route Table: Prod VPC attachments associate here. Prod propagates into Prod, Shared-Services, and On-prem (so on-prem traffic has a return route to Prod).
- Dev Route Table: Dev VPC attachments associate here. Dev propagates into Dev and into Shared-Services.
- Shared-Services Route Table: Shared-Services VPC attachment associates here. Shared-Services propagates into Prod, Dev, and itself.
- On-prem Route Table: Direct Connect Gateway and VPN attachments associate here. They propagate the on-premises CIDRs into Prod and Shared-Services but not into Dev.
Result: Prod cannot reach Dev, Dev cannot reach Prod, both can reach Shared-Services, on-prem can reach Prod but not Dev. No Security Group rules or NACLs are needed for this segmentation — it is enforced by the absence of a route.
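A minimal Python model of association and propagation (hypothetical names and CIDRs; Prod is assumed to also propagate into the On-prem table so on-prem has a return route to Prod) shows the isolation falling out of the route tables alone:

```python
from collections import defaultdict

# Toy model: association decides which table an attachment reads;
# propagation decides which tables learn its CIDR.
attachments = {
    "prod-vpc":   {"cidr": "10.1.0.0/16",    "assoc": "prod",
                   "propagate_to": ["prod", "shared", "onprem"]},
    "dev-vpc":    {"cidr": "10.2.0.0/16",    "assoc": "dev",
                   "propagate_to": ["dev", "shared"]},
    "shared-vpc": {"cidr": "10.9.0.0/16",    "assoc": "shared",
                   "propagate_to": ["prod", "dev", "shared"]},
    "dx-gw":      {"cidr": "192.168.0.0/16", "assoc": "onprem",
                   "propagate_to": ["prod", "shared"]},
}

route_tables = defaultdict(dict)            # table -> {cidr: attachment}
for name, att in attachments.items():
    for table in att["propagate_to"]:
        route_tables[table][att["cidr"]] = name

def can_reach(src, dst):
    """src reaches dst iff dst's CIDR was propagated into the route
    table that src's attachment is associated with."""
    table = route_tables[attachments[src]["assoc"]]
    return table.get(attachments[dst]["cidr"]) == dst

print(can_reach("dev-vpc", "shared-vpc"))  # True
print(can_reach("dev-vpc", "prod-vpc"))    # False: isolated by absent route
print(can_reach("dx-gw", "dev-vpc"))       # False: on-prem never sees Dev
```

No firewall rule appears anywhere in the model — reachability is purely a function of which CIDRs were propagated where.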
Transit Gateway Route Tables Are the Real Segmentation Boundary — At scale, Security Groups and NACLs become hard to audit across dozens of accounts. Transit Gateway and hybrid networking segmentation belongs at the route table layer: if Dev has no route to Prod's CIDR in its associated Transit Gateway route table, Dev cannot reach Prod regardless of what firewall rules exist. Auditors can read the Transit Gateway route tables in one place and prove isolation. This is how regulated enterprises satisfy PCI-DSS segmentation requirements for Transit Gateway and hybrid networking. Reference: https://docs.aws.amazon.com/vpc/latest/tgw/tgw-route-tables.html
Blackhole Routes
A blackhole route in a Transit Gateway route table silently drops traffic matching the prefix. Blackholes are used to preempt accidental reachability (for example, explicitly blackhole Prod CIDRs from the Dev route table so that a future misconfiguration cannot route Dev -> Prod), and to absorb DDoS-like traffic during incident response.
AWS Direct Connect Deep Dive — Physical, Logical, and Resilience
Direct Connect is the private-fiber half of Transit Gateway and hybrid networking. SAP-C02 probes Direct Connect at four layers: physical connection, logical VIF, Direct Connect Gateway, and resilience tier.
Dedicated vs Hosted Connections
A Dedicated Connection is an entire physical port (1 Gbps, 10 Gbps, or 100 Gbps) allocated to you at a Direct Connect location. You own the port, pay the port-hour charge directly, and can create up to 50 VIFs on it. A Hosted Connection is a logical slice of a partner's physical port, available in granular bandwidths from 50 Mbps to 10 Gbps. Hosted connections are faster to procure (hours to days vs weeks to months), carry only one VIF, and are billed through the partner. When a SAP-C02 scenario says "needs 500 Mbps in one week," Hosted Connection is the answer; "needs 100 Gbps and already has a cross-connect" points to Dedicated.
Virtual Interfaces (VIFs) — Three Flavors
- Private VIF: reaches one Amazon VPC via a Virtual Private Gateway, or reaches a Direct Connect Gateway that aggregates many VPCs across Regions. Private VIFs carry RFC1918 traffic.
- Public VIF: reaches all AWS public service endpoints (S3, DynamoDB, KMS, public APIs) in the home Region plus every Region globally via the AWS backbone, but not the public internet. Public VIFs use public AS numbers and announce your public prefixes to AWS.
- Transit VIF: reaches a Direct Connect Gateway that is associated with up to three Transit Gateways. One Transit VIF per Direct Connect connection — this is a hard limit.
Link Aggregation Group (LAG)
A LAG bundles up to four Dedicated Connections of the same bandwidth at the same Direct Connect location into a single logical link using LACP. LAGs improve throughput (2x, 3x, or 4x) and provide member-level resilience (lose one cable, keep N-1). LAGs do not span locations — that is the Highly Resilient model, discussed below.
Bidirectional Forwarding Detection (BFD)
BFD is a sub-second link-liveness protocol layered on BGP. Without BFD, BGP relies on hold timers of 90 seconds by default, meaning a failed link continues to black-hole traffic for up to 90 seconds before BGP notices. With BFD enabled, failure detection drops to around one second. AWS enables BFD on the AWS side by default; you must enable it on your customer router. Enable BFD on every production Direct Connect and VPN attachment — the SAP-C02 correct answer for "convergence time is 90 seconds, reduce it" is always "enable BFD."
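The arithmetic behind that answer, using the AWS-documented BFD defaults of a 300 ms hello interval and a detect multiplier of 3:

```python
# Failure-detection time: BGP hold timer alone vs BGP with BFD.
bgp_hold_timer_s = 90   # link can black-hole traffic this long
bfd_interval_ms = 300   # AWS default hello interval
bfd_multiplier = 3      # AWS default detect multiplier

bfd_detect_s = bfd_interval_ms * bfd_multiplier / 1000
print(f"BGP alone: up to {bgp_hold_timer_s} s of black-holed traffic")
print(f"With BFD : ~{bfd_detect_s:.1f} s detection "
      f"({bgp_hold_timer_s / bfd_detect_s:.0f}x faster)")
```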
Direct Connect Resilience Models — Exact SLAs
AWS publishes three reference models for Direct Connect and hybrid networking resilience:
| Model | Topology | SLA |
|---|---|---|
| Development and Test | One connection, one location | 99.9% |
| High Resilience | Two connections, two devices, one location (or two locations, one connection each) | 99.99% |
| Maximum Resilience | Two connections each at two separate Direct Connect locations (four connections total) | 99.99% |
The jump from 99.9% to 99.99% means about 8 hours of permitted downtime per year drops to about 52 minutes per year. Maximum Resilience is the only model that survives a Direct Connect location-level event (fire, power failure, metro fiber cut). Regulated industries (finance, healthcare) default to Maximum Resilience.
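The downtime figures follow directly from the SLA percentages:

```python
def allowed_downtime(sla_percent, hours_per_year=365 * 24):
    """Hours of downtime per year permitted by an availability SLA."""
    return hours_per_year * (1 - sla_percent / 100)

for model, sla in [("Dev/Test (1 connection, 1 location)", 99.9),
                   ("High Resilience (2 connections)",      99.99),
                   ("Maximum Resilience (4 connections)",   99.99)]:
    hours = allowed_downtime(sla)
    print(f"{model}: {sla}% SLA -> {hours:.2f} h/year ({hours * 60:.0f} min)")
```

99.9% permits roughly 8.76 hours per year; 99.99% permits roughly 52.6 minutes — the "8 hours vs 52 minutes" contrast quoted above.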
Direct Connect Resilience Numbers — Memorize Before the Exam
- Dev/Test model: 1 connection, 1 location, 99.9% SLA (~8h 45m downtime/year allowed).
- High Resilience: 2 connections at 1 or 2 locations, 99.99% SLA (~52m downtime/year).
- Maximum Resilience: 2 connections at each of 2 locations (4 total), 99.99% SLA, survives location failure.
- Connection bandwidths: Dedicated 1/10/100 Gbps; Hosted 50 Mbps to 10 Gbps.
- VIFs per Dedicated Connection: 50. VIFs per Hosted Connection: 1.
- Transit VIFs per connection: 1 (hard limit).
- DX Gateway to Transit Gateway associations: 3 maximum.
- BFD default hello: 300 ms, multiplier 3 -> ~1s failover. Reference: https://docs.aws.amazon.com/directconnect/latest/UserGuide/resiliency_recommendation.html
Direct Connect Gateway — The Multi-Region Aggregator
A Direct Connect Gateway is a global AWS construct (not tied to a Region) that sits between a Direct Connect VIF and one or more Virtual Private Gateways or Transit Gateways. One Direct Connect Gateway supports VGW associations in every AWS Region globally, and it supports Transit Gateway associations in up to three Regions. The common SAP-C02 design is:
One Direct Connect connection in us-east-1 Ashburn -> Transit VIF -> Direct Connect Gateway -> Transit Gateway in us-east-1 + Transit Gateway in eu-west-1 (via inter-Region DX Gateway association).
This lets a single pair of fiber connections in Ashburn reach VPCs in multiple Regions, using the AWS backbone for the cross-Region hop. No VPN tunnels, no second Direct Connect deployment.
AWS Direct Connect SiteLink
SiteLink is the on-premises to on-premises over the AWS backbone feature. Two Direct Connect locations (for example, London and Singapore) can exchange traffic via the AWS global network without the traffic ever touching an AWS VPC. SiteLink replaces private MPLS for multi-site enterprises, often at a fraction of the cost. SiteLink is enabled on Private VIFs or Transit VIFs — not Public VIFs — and is billed as a per-hour SiteLink charge plus data transfer. When a scenario says "Tokyo office needs to talk to Frankfurt office, AWS Direct Connect is already in both," the answer is SiteLink.
Direct Connect Is Not Encrypted by Default — Add MACsec or IPSec — Direct Connect rides dedicated fiber but does not encrypt frames at the link layer by default. For compliance regimes that mandate encryption in transit, enable MACsec (AES-256 GCM, available on 10 Gbps and 100 Gbps Dedicated Connections with supported devices) or run an IPSec VPN over the Private/Transit VIF. A candidate who assumes Direct Connect is private-therefore-encrypted will miss the HIPAA/PCI-oriented SAP-C02 question every single time. Reference: https://docs.aws.amazon.com/directconnect/latest/UserGuide/MACsec.html
AWS Site-to-Site VPN — Static, BGP, and Accelerated
Site-to-Site VPN is the IPSec-over-internet half of Transit Gateway and hybrid networking. Every VPN connection from AWS provides two tunnels terminating in two different AWS Availability Zones for the same Region — both tunnels are active, and your Customer Gateway (CGW) should load-balance or actively use both.
Static Routing vs BGP
Static routing VPNs require you to enter each remote CIDR manually on the AWS side and each AWS CIDR manually on the CGW. Failover between tunnels is not automatic — the CGW must detect the tunnel is down and shift traffic. BGP VPNs exchange routes dynamically over each tunnel, and failover is automatic based on BGP keep-alives (or BFD). Always choose BGP for production Transit Gateway and hybrid networking designs. Static is acceptable only for legacy CGWs that do not speak BGP.
Customer Gateway (CGW)
The Customer Gateway is an AWS-side object describing your on-prem router — it records the public IP, BGP ASN (or "static"), and optional certificate. The CGW object does not configure your actual router; you must apply the configuration AWS generates to your physical or virtual router (Cisco ASA, Juniper SRX, Palo Alto, pfSense, etc.).
Accelerated Site-to-Site VPN
Accelerated VPN attaches the VPN to AWS Global Accelerator, so the IPSec tunnel terminates at the nearest AWS edge location and rides the AWS backbone to the Region. For customers in Africa, South America, Southeast Asia, or Oceania, accelerated VPN typically halves RTT and eliminates jitter caused by congested transit paths. Requires the VPN to be attached to a Transit Gateway (not a Virtual Private Gateway) and incurs Global Accelerator charges.
Tunnel Throughput Ceiling
A single IPSec tunnel is limited to approximately 1.25 Gbps of throughput. Equal-Cost Multipath (ECMP) over a Transit Gateway VPN attachment lets you scale out by aggregating multiple VPN connections — up to roughly 50 Gbps in practice. Beyond that, switch to Direct Connect or Transit Gateway Connect.
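The scale-out arithmetic (each Site-to-Site VPN connection contributes two tunnels, and with ECMP both can carry traffic):

```python
import math

def tunnels_needed(target_gbps, per_tunnel_gbps=1.25):
    """ECMP-active IPSec tunnels needed for a target aggregate
    throughput, given the ~1.25 Gbps per-tunnel ceiling."""
    return math.ceil(target_gbps / per_tunnel_gbps)

for gbps in (5, 10, 50):
    t = tunnels_needed(gbps)
    print(f"{gbps} Gbps -> {t} tunnels (~{math.ceil(t / 2)} VPN connections)")
```

At 50 Gbps you are managing dozens of tunnels, which is exactly the point where Direct Connect or Transit Gateway Connect becomes the simpler answer.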
Use BGP and BFD on Every Production VPN — For Transit Gateway and hybrid networking production workloads, always configure BGP with BFD enabled on both VPN tunnels. BGP gives automatic failover; BFD drops failure detection from 90 seconds to ~1 second; ECMP across multiple VPN connections scales past the 1.25 Gbps per-tunnel ceiling. This combination is the default pattern in every AWS reference architecture for VPN-based Transit Gateway and hybrid networking. Reference: https://docs.aws.amazon.com/vpn/latest/s2svpn/VPNTunnels.html
Topology Patterns — Hub-and-Spoke vs Mesh vs Hybrid
Full Mesh (VPC Peering)
With N VPCs, a full mesh requires N(N-1)/2 peerings. At N=50 that is 1,225 peerings, each with its own route-table entry in every VPC. Full mesh is legitimate only when N is small (under about 10) and when routing must be absolutely lowest-latency with no intermediate hop. VPC Peering is non-transitive, so a mesh is the only peering-based option that works at all.
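The mesh explosion versus a hub is quick to quantify:

```python
def full_mesh_links(n):
    """VPC peerings needed for a full mesh of n VPCs: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 50):
    print(f"{n} VPCs: {full_mesh_links(n)} peerings "
          f"vs {n} hub attachments")
```

Quadratic growth versus linear growth is the entire economic case for a hub.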
Hub-and-Spoke (Transit Gateway)
Transit Gateway collapses the mesh: N attachments, not N*(N-1)/2 peerings. Every VPC has one route toward the Transit Gateway's range of spoke CIDRs, and the Transit Gateway does the routing. Hub-and-spoke adds a single hop of latency (typically sub-millisecond inside a Region) but makes segmentation, logging, and billing centrally manageable. This is the default Transit Gateway and hybrid networking topology for anything beyond a handful of VPCs.
Partial Mesh with Transit Gateway
For two or three latency-critical VPC pairs, combine Transit Gateway with targeted VPC Peerings. Traffic between the mesh-peered VPCs takes the direct peering; all other traffic rides Transit Gateway. More specific prefix wins at the VPC route table, so design the peering prefixes narrower than the Transit Gateway prefixes.
Cloud WAN (Adjacent Pattern)
AWS Cloud WAN is a newer managed-WAN service that abstracts Transit Gateway, Direct Connect, and VPN into a single global network with a policy document. Cloud WAN is in SAP-C02 scope as a recognize-and-distinguish item. For multi-Region, multi-account Transit Gateway and hybrid networking designs that span more than ~5 Regions, Cloud WAN is often the correct answer over stitching Transit Gateways together with inter-Region peering.
Shared Services VPC Pattern
Every enterprise Transit Gateway and hybrid networking footprint has a Shared Services VPC that hosts resources consumed by all other VPCs — Active Directory domain controllers, internal DNS resolvers (Route 53 Resolver endpoints), package mirrors, monitoring collectors, PKI services, license servers. The pattern:
- Create a Shared Services VPC.
- Attach it to the Transit Gateway.
- Associate its attachment with a Shared-Services route table.
- Propagate the Shared Services VPC's CIDR into every other route table (Prod, Dev, Sandbox, On-prem).
- Propagate every other attachment's CIDR into the Shared-Services route table so that Shared Services can initiate reply traffic.
- Use Route 53 Resolver inbound/outbound endpoints inside the Shared Services VPC so on-prem can resolve AWS private DNS and vice versa.
This makes Shared Services reachable from every workload VPC without mesh connectivity, and it centralizes DNS, identity, and observability — a major Transit Gateway and hybrid networking cost and security win.
Central Egress VPC Pattern
Each VPC with its own NAT Gateway pays hourly charges per NAT Gateway plus per-GB processing. With 50 VPCs running 2 NAT Gateways each for AZ redundancy, that is 100 NAT Gateways billing 24x7. The central egress pattern consolidates:
- Create a dedicated Egress VPC with NAT Gateways (two or three, one per AZ).
- Attach the Egress VPC to the Transit Gateway.
- In every spoke VPC's route table, point 0.0.0.0/0 at the Transit Gateway.
- In the Transit Gateway route table used by spokes, add a static route 0.0.0.0/0 -> Egress VPC attachment.
- In the Egress VPC's own route table (the VPC route table), 0.0.0.0/0 points at the NAT Gateway; traffic returning from the internet goes NAT -> Transit Gateway -> originating spoke.
Savings are material: one set of NAT Gateways for the whole organization instead of N. Be mindful of NAT Gateway throughput ceiling (45 Gbps per NAT Gateway) — scale out by adding more NAT Gateways if aggregate egress exceeds this.
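A back-of-envelope comparison makes the consolidation concrete. The hourly NAT Gateway rate below is an assumed illustrative figure (pricing varies by Region), and per-GB data processing is excluded since roughly the same bytes flow in either design:

```python
NAT_HOURLY_USD = 0.045   # assumed illustrative rate, not a quote
HOURS_PER_MONTH = 730

def nat_fleet_monthly(gateway_count):
    """Monthly hourly charges for a fleet of NAT Gateways."""
    return gateway_count * NAT_HOURLY_USD * HOURS_PER_MONTH

per_vpc_model = nat_fleet_monthly(50 * 2)  # 2 per VPC across 50 VPCs
central_model = nat_fleet_monthly(3)       # one per AZ in the egress VPC
print(f"per-VPC NAT   : ${per_vpc_model:,.0f}/month")
print(f"central egress: ${central_model:,.0f}/month")
```

Central egress does add Transit Gateway attachment and data-processing charges, so the real saving is smaller than the raw NAT delta, but the direction of the tradeoff holds at any realistic rate.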
Appliance Mode Is Mandatory for Central Egress and Inspection VPCs — When a VPC attachment carries stateful traffic (central egress through NAT, inspection through a firewall appliance), you must enable appliance mode on that Transit Gateway attachment. Without appliance mode, the Transit Gateway may pick different AZs for the forward and return paths of the same flow, causing asymmetric routing that breaks stateful devices. Appliance mode pins both directions of a flow to the same AZ ENI. This is a top-3 Transit Gateway and hybrid networking trap on SAP-C02. Reference: https://docs.aws.amazon.com/vpc/latest/tgw/tgw-appliance-mode.html
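One way to picture appliance mode is a direction-agnostic flow hash: normalize the 5-tuple so a flow and its reply hash to the same AZ ENI. This sketch is purely illustrative (AWS does not publish its actual hashing), but it shows why normalization makes forward and return paths symmetric:

```python
import hashlib

AZS = ["use1-az1", "use1-az2"]

def appliance_mode_az(src, dst, sport, dport, proto):
    """Hash a direction-normalized 5-tuple to an AZ, so a flow and
    its reply land on the same appliance AZ. Illustrative only."""
    a, b = sorted([(src, sport), (dst, dport)])  # erase direction
    key = f"{a}{b}{proto}".encode()
    return AZS[int(hashlib.sha256(key).hexdigest(), 16) % len(AZS)]

fwd = appliance_mode_az("10.1.0.5", "10.2.0.9", 40001, 443, "tcp")
rev = appliance_mode_az("10.2.0.9", "10.1.0.5", 443, 40001, "tcp")
print(fwd == rev)  # True: both directions pick the same AZ
```

Without the sorted() normalization, the two directions could hash to different AZs — which is exactly the asymmetric-routing failure that breaks stateful firewalls when appliance mode is off.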
Inspection VPC and Gateway Load Balancer
Regulated workloads (PCI-DSS, HIPAA, financial services) often require every packet to be inspected by a firewall appliance — Palo Alto, Fortinet, Check Point, Cisco, or a custom IDS. The Transit Gateway and hybrid networking blueprint for this is the Inspection VPC with Gateway Load Balancer (GWLB).
Gateway Load Balancer (GWLB)
GWLB is a Layer 3/4 load balancer that operates at the IP layer and uses the GENEVE protocol (UDP 6081) to tunnel the original packet — unchanged — to a fleet of third-party virtual firewall appliances. The appliances inspect the packet, make a pass/drop decision, and send allowed traffic back into the tunnel. Because GENEVE preserves the original 5-tuple and payload, the appliance sees exactly what a bump-in-the-wire physical firewall would see.
The Blueprint
- Create an Inspection VPC with GWLB endpoints (one per AZ) and a fleet of firewall EC2 appliances behind GWLB.
- Attach the Inspection VPC to Transit Gateway with appliance mode enabled.
- In the spoke Transit Gateway route table, point 0.0.0.0/0 (or east-west prefixes, or both) at the Inspection VPC attachment.
- In the Inspection VPC, GWLB endpoints receive the traffic, GENEVE-tunnel it to the firewall fleet, get the pass verdict, and return it onward via the Transit Gateway.
- Optionally chain Inspection VPC -> Egress VPC so all outbound internet traffic is inspected then NATed centrally.
Result: every packet between any two spokes (or out to the internet) passes through the firewall fleet. Segmentation is enforced by routing and by the appliance's rules. This is the canonical Transit Gateway and hybrid networking answer to "PCI workload in AWS, security team requires stateful firewall inspection."
Gateway Load Balancer (GWLB) — Gateway Load Balancer is an AWS-managed transparent Layer 3/4 load balancer that distributes traffic to a fleet of third-party virtual network appliances using GENEVE (UDP 6081) encapsulation. GWLB endpoints in each consumer VPC act as next-hop targets, making the inspection fleet appear as a bump-in-the-wire to spoke VPCs. GWLB is the native AWS building block for Transit Gateway and hybrid networking inspection architectures. Reference: https://docs.aws.amazon.com/elasticloadbalancing/latest/gateway/introduction.html
Overlapping CIDR Remediation
Transit Gateway will not route between two attachments that advertise overlapping CIDR blocks — one of them will be dropped or cause unpredictable routing. Overlapping CIDRs commonly arise from mergers, acquisitions, lift-and-shift migrations, and naive "every VPC is 10.0.0.0/16" defaults. SAP-C02 has a dedicated question pattern for this. Remediation options, in order of preference:
- Re-IP one side (best but expensive and slow — requires application-level coordination and often downtime).
- Private NAT Gateway in an overlap-bridge VPC, mapping overlapping source/dest to unique proxy CIDRs. Transit Gateway routes between the bridge VPC and both sides using the unique proxy CIDRs.
- PrivateLink (VPC Endpoint Services) — if only a handful of services need to cross the overlap, publish them via PrivateLink. PrivateLink is CIDR-agnostic because it operates at the service endpoint level, not the IP layer.
- Network Address Translation at a third-party firewall sitting in an inspection VPC — NAT source and destination so both sides see unique IPs.
- Last resort: route only non-overlapping sub-prefixes via Transit Gateway and accept partial connectivity (rarely acceptable).
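Since Transit Gateway will not flag the overlap for you, it is worth detecting it yourself before attaching anything. A small sketch with hypothetical VPC names, using the standard-library ipaddress module:

```python
import ipaddress

def overlap_report(vpc_cidrs):
    """Return attachment pairs whose CIDRs overlap, so they can be
    caught before they share a Transit Gateway route table."""
    nets = {name: ipaddress.ip_network(c) for name, c in vpc_cidrs.items()}
    names = sorted(nets)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if nets[a].overlaps(nets[b])]

cidrs = {
    "acquired-vpc": "10.0.0.0/16",  # the classic post-acquisition default
    "legacy-vpc":   "10.0.0.0/16",
    "prod-vpc":     "10.64.0.0/16",
}
print(overlap_report(cidrs))  # [('acquired-vpc', 'legacy-vpc')]
```

Running a check like this across an organization's IPAM export is a cheap way to find every "every VPC is 10.0.0.0/16" collision before a migration, not during it.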
Transit Gateway Silently Breaks on Overlapping CIDRs — Transit Gateway does not "helpfully warn you" about overlapping CIDRs. It will accept the attachments, sometimes accept the routes, and then route traffic unpredictably or black-hole silently. Candidates who assume "Transit Gateway will figure it out" get the SAP-C02 overlap question wrong. The correct first answer is almost always PrivateLink (for specific services) or Private NAT Gateway in a bridge VPC (for broader connectivity) — never "just attach both VPCs." Reference: https://docs.aws.amazon.com/vpc/latest/tgw/tgw-limits.html
Inter-Region Transit Gateway Peering
When a workload spans two or more AWS Regions, Transit Gateway peering is the preferred Transit Gateway and hybrid networking approach.
Properties
- Runs across the AWS global backbone (not the public internet).
- Encrypted automatically at the AWS hardware layer (AES-256).
- Supports static routes only across the peering — no BGP between two Transit Gateways.
- Inter-Region data transfer is billed per GB (check current pricing per Region pair).
- Latency is roughly the inter-Region AWS backbone latency (for example, us-east-1 <-> eu-west-1 around 70-75 ms).
The Design Pattern
One Transit Gateway per Region; each Region's workloads attach to the local Transit Gateway; the two Transit Gateways peer with each other. On-prem connectivity can terminate in one Region and reach the other Region via the peering, but this concentrates cross-Region traffic — for high-throughput multi-Region on-prem access, deploy Direct Connect in both Regions and use a Direct Connect Gateway associated with both Transit Gateways.
Transit Gateway Multicast
Multicast is an under-discussed Transit Gateway and hybrid networking capability, but it shows up on SAP-C02. Transit Gateway multicast lets you create a multicast domain that spans multiple attached VPCs, with sources and receivers registered either statically (via the API) or dynamically via IGMPv2. Use cases:
- Financial market data distribution: a ticker source multicasts to dozens of analytics consumers.
- Legacy enterprise apps: some ERP, trading, and broadcast platforms require multicast.
- IPTV or streaming fan-out inside the AWS footprint.
Key constraints: multicast on Transit Gateway requires Nitro-based instances, does not cross Transit Gateway peerings (Region-local only), and traffic cannot leave to VPN or Direct Connect. When the question is "multicast across VPCs in one Region without rearchitecting," Transit Gateway multicast is the answer.
Transit Gateway Connect (SD-WAN Integration)
Transit Gateway Connect is the SD-WAN bridge. Your SD-WAN virtual appliance (Cisco, VMware/VeloCloud, Aruba Silver Peak, Aviatrix, Fortinet, Versa, etc.) runs on EC2 in a transit VPC, terminates GRE tunnels against the Transit Gateway, and speaks BGP to exchange routes. Advantages over VPN attachments:
- Up to 20 Gbps aggregate per Connect attachment (vs ~1.25 Gbps per VPN tunnel).
- GRE plus BGP — no IPsec overhead, because the appliance handles encryption or the traffic rides Direct Connect.
- Native SD-WAN policy from the appliance's controller — no double configuration.
Common SAP-C02 scenario: "Enterprise runs Cisco SD-WAN across 200 branches, wants to integrate with AWS without replacing SD-WAN." The answer is Transit Gateway Connect (plus SiteLink if inter-branch-through-AWS is also needed).
Scenario Patterns — How SAP-C02 Asks Transit Gateway and Hybrid Networking
SAP-C02 questions follow recognizable patterns. Drill these so you can pattern-match in under 30 seconds.
Pattern 1 — "Five or More VPCs Need to Communicate"
Keywords: "ten VPCs", "multi-account", "complexity of peerings". Correct answer: Transit Gateway with RAM sharing. VPC Peering is a distractor (non-transitive, mesh explosion).
Pattern 2 — "Reduce NAT Gateway Cost Across Many VPCs"
Keywords: "each VPC has its own NAT Gateway", "cost-optimize egress". Correct answer: Central egress VPC behind Transit Gateway with appliance mode enabled. Two or three NAT Gateways total, not N.
Pattern 3 — "PCI/HIPAA Compliance, Stateful Inspection Required"
Keywords: "stateful firewall", "third-party appliance", "inspect east-west traffic". Correct answer: Inspection VPC with Gateway Load Balancer, Transit Gateway routes all traffic through it, appliance mode enabled, appliances in Auto Scaling Group behind GWLB.
Pattern 4 — "Two Regions, Hybrid, Minimize Latency and Complexity"
Keywords: "us-east-1 and eu-west-1", "on-prem in both geographies". Correct answer: Transit Gateway per Region + Transit Gateway peering + Direct Connect in each Region + Direct Connect Gateway associations with each Transit Gateway.
Pattern 5 — "Overlapping 10.0.0.0/16 from Acquired Company"
Keywords: "overlapping CIDRs", "cannot re-IP in short term". Correct answer: PrivateLink for specific services or Private NAT Gateway in a bridge VPC. Never "just attach to Transit Gateway."
Pattern 6 — "Customer Gateway in Africa, Latency to us-east-1 Is Unstable"
Keywords: "high jitter", "unpredictable VPN latency", "customer is in remote geography". Correct answer: Accelerated Site-to-Site VPN — traffic enters nearest AWS edge, rides backbone. Requires Transit Gateway attachment (not VGW).
Pattern 7 — "Convergence Takes 90 Seconds During Failover"
Keywords: "failover is too slow", "BGP default timers". Correct answer: Enable BFD on the customer router. Drops detection to ~1 second.
Pattern 8 — "On-Prem Sites Want to Talk to Each Other via AWS"
Keywords: "two offices, both have Direct Connect, replace MPLS". Correct answer: AWS Direct Connect SiteLink on Private or Transit VIFs.
Pattern 9 — "SD-WAN Appliance, Needs High Bandwidth BGP Into AWS"
Keywords: "Cisco SD-WAN", "VeloCloud", "needs more than 1.25 Gbps". Correct answer: Transit Gateway Connect with GRE and BGP.
Pattern 10 — "Multicast Stock Ticker Across Three VPCs"
Keywords: "multicast", "financial data feed". Correct answer: Transit Gateway multicast domain with Nitro instances.
Canonical Scenario — 50-VPC Organization with Central Egress and Isolated Dev/Prod
This is the reference SAP-C02 scenario for Transit Gateway and hybrid networking. Memorize the full architecture.
Requirements
- 50 VPCs spread across 8 AWS accounts, all in us-east-1.
- Workloads segmented into Prod (20 VPCs), Dev (20 VPCs), Shared Services (1 VPC), Security/Inspection (1 VPC), Egress (1 VPC), DR placeholder (7 VPCs).
- Prod and Dev must be completely isolated from each other at the network layer (auditor requirement).
- Shared Services VPC provides Active Directory, internal DNS, and monitoring to all VPCs.
- On-prem datacenter connects via Direct Connect with Maximum Resilience, and optionally via VPN as cold backup.
- All outbound internet traffic must egress through a single point for IP allowlisting on SaaS providers.
- All east-west traffic between VPCs must pass through a stateful firewall inspection fleet (PCI-DSS requirement).
- Eventually needs to extend to eu-west-1 for DR; reuse the design.
The Design
- One Transit Gateway in us-east-1, shared across all 8 accounts via AWS Resource Access Manager (RAM). Turn off default route-table association and default propagation — we will manage both explicitly.
- Transit Gateway Route Tables (five):
  - rt-prod: Prod VPC attachments associate here.
  - rt-dev: Dev VPC attachments associate here.
  - rt-shared: Shared Services VPC attachment associates here.
  - rt-onprem: Direct Connect Gateway attachment and VPN attachments associate here.
  - rt-inspection-egress: Inspection VPC and Egress VPC attachments associate here.
- Propagation Matrix:
  - Prod attachments propagate into rt-shared, rt-onprem, and rt-inspection-egress.
  - Dev attachments propagate into rt-shared and rt-inspection-egress (not rt-onprem — auditors bar on-prem from reaching Dev).
  - Shared Services attachment propagates into rt-prod, rt-dev, and rt-onprem.
  - On-prem attachments propagate their on-prem CIDRs into rt-prod and rt-shared (not rt-dev).
  - Inspection and Egress reach rt-prod and rt-dev via static 0.0.0.0/0 routes, not via propagation.
- Static Routes for Central Egress:
  - rt-prod and rt-dev: 0.0.0.0/0 -> Inspection VPC attachment.
  - Inspection VPC route table: 0.0.0.0/0 -> Egress VPC attachment (chain inspection, then egress), or keep inspection and egress as one VPC for simpler designs.
  - Egress VPC's own VPC route table: 0.0.0.0/0 -> NAT Gateway; return path via the Transit Gateway.
- Appliance Mode: enabled on the Inspection VPC attachment and on the Egress VPC attachment — non-negotiable for stateful flows.
- Direct Connect: Maximum Resilience model. Two Direct Connect locations in the Ashburn metro, two Dedicated Connections per location (10 Gbps), a LAG bundling each pair. A Transit VIF on each connection terminates on a single Direct Connect Gateway, which associates with the us-east-1 Transit Gateway. BFD is enabled on all VIFs. Cold-backup VPN attachments (two, on different Customer Gateways) are also attached to the Transit Gateway and associated with rt-onprem, with BGP MED tuned so Direct Connect is preferred.
- Blackhole Routes: in rt-prod, add a static blackhole route for every Dev VPC CIDR (defense in depth). Mirror in rt-dev.
- Shared Services DNS: Route 53 Resolver inbound and outbound endpoints in the Shared Services VPC. Outbound forwards corp.example.com to on-prem DNS over the Direct Connect. Inbound accepts queries from on-prem for AWS private hosted zones.
- Future DR in eu-west-1: deploy a second Transit Gateway in eu-west-1 with the same route-table structure. Peer the two Transit Gateways. Associate the existing Direct Connect Gateway with the eu-west-1 Transit Gateway too (one Direct Connect Gateway, two Transit Gateway associations — within the three-association limit). Cross-Region data replication (RDS, S3 CRR) rides the peering.
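The association and propagation rules above can be modeled as a small lookup table, which is a useful way to convince yourself (or an auditor) that the matrix really isolates Prod from Dev. A toy Python sketch using the segment and route-table names from this walkthrough (a simplified model, not the actual TGW data plane):

```python
# One-to-one: which TGW route table each segment's attachments associate with.
association = {
    "prod": "rt-prod", "dev": "rt-dev", "shared": "rt-shared",
    "onprem": "rt-onprem", "inspection-egress": "rt-inspection-egress",
}

# Which route tables each segment propagates its CIDRs into.
propagation = {
    "prod":   {"rt-shared", "rt-onprem", "rt-inspection-egress"},
    "dev":    {"rt-shared", "rt-inspection-egress"},
    "shared": {"rt-prod", "rt-dev", "rt-onprem"},
    "onprem": {"rt-prod", "rt-shared"},
}

def can_reach(src, dst):
    """src sees dst only if dst propagates into src's associated route table."""
    return association[src] in propagation.get(dst, set())

assert can_reach("prod", "shared")    # Prod -> Shared Services: allowed
assert not can_reach("prod", "dev")   # Prod -> Dev: isolated by design
assert not can_reach("dev", "onprem") # auditors bar on-prem from reaching Dev
```

Every reachability question in the scenario reduces to one dictionary lookup, which is exactly why route-table segmentation is auditable in one place.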
Why This Design Wins the Exam
- Segmentation is enforced at Transit Gateway route tables — auditable in one place, independent of Security Groups.
- Central egress collapses NAT cost from ~100 NAT Gateways to ~3.
- Inspection VPC with GWLB handles the PCI-DSS stateful-firewall requirement.
- Appliance mode prevents asymmetric-routing-induced incidents.
- Maximum Resilience Direct Connect meets the 99.99% SLA.
- BFD delivers ~1-second failover.
- Direct Connect Gateway scales to the second Region without new fiber.
- Transit Gateway peering for DR replication without VPN.
- RAM centralizes the Transit Gateway across 8 accounts.
Decision Matrix — Which Transit Gateway and Hybrid Networking Primitive?
| Requirement | Primary Answer | Why |
|---|---|---|
| 2-5 VPCs, static topology, lowest possible latency | VPC Peering | No transit hop; no monthly hub fee. |
| 6+ VPCs or multi-account growth | Transit Gateway + RAM | Collapses mesh; centralizes segmentation. |
| Private fiber, 1 Gbps+, consistent latency, audit | Direct Connect Dedicated + DX Gateway + Transit VIF | Deterministic SLA; encryption via MACsec if required. |
| Connectivity in days, modest bandwidth | Hosted Connection or Site-to-Site VPN | Faster provisioning; lower cost. |
| Sub-second BGP failover | Enable BFD | Drops detection from 90s to ~1s. |
| Customer router in distant geography, unstable internet | Accelerated Site-to-Site VPN | Edge entry + AWS backbone. |
| On-prem to on-prem via AWS backbone | Direct Connect SiteLink | Replaces MPLS. |
| SD-WAN integration with high BGP throughput | Transit Gateway Connect | 20 Gbps aggregate; GRE + BGP. |
| Multicast between VPCs | Transit Gateway Multicast | Only native AWS option. |
| Central internet egress across many VPCs | Egress VPC + Transit Gateway + appliance mode | Collapses NAT cost; IP allowlisting. |
| East-west stateful inspection | Inspection VPC + GWLB + Transit Gateway + appliance mode | Bump-in-the-wire for PCI/HIPAA. |
| Overlapping CIDRs, cannot re-IP | PrivateLink or Private NAT Gateway bridge VPC | CIDR-agnostic service publishing. |
| Two Regions, one on-prem ingress | Direct Connect Gateway associated with Transit Gateway in each Region | Single fiber serves both. |
| DR between Regions, avoid internet | Transit Gateway Inter-Region Peering | AWS backbone, encrypted, static routes. |
| 99.99% resilience target that survives a DX-location failure | Maximum Resilience Direct Connect (4 connections, 2 locations) | Only tier that survives a location event. |
Diagnostic Flow — Troubleshooting Transit Gateway and Hybrid Networking
When a production Transit Gateway and hybrid networking problem lands on your desk (or an SAP-C02 scenario describes a broken reachability case), work through this checklist in order.
Step 1 — Is the Attachment Healthy?
Check the Transit Gateway console. Every attachment should show available. A VPN attachment with one tunnel up and one tunnel down still works but has lost redundancy — fix it, but continue troubleshooting.
Step 2 — Is the Attachment Associated With the Expected Route Table?
Associations are one-to-one. An attachment associated with the wrong route table will consult an unexpected set of routes (or none). Check the Associations tab of each Transit Gateway route table.
Step 3 — Is the Destination CIDR Propagated or Statically Present?
In the associated route table, is there an entry for the destination CIDR? If not, either (a) propagation from the destination attachment is not enabled, or (b) you need a static route. Remember: more specific beats less specific, and static beats propagated for ties.
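The precedence rule (longest prefix first, static over propagated on a tie) can be sketched in a few lines. A simplified model of route selection, with a hypothetical route table, not the actual TGW implementation:

```python
from ipaddress import ip_address, ip_network

# (cidr, origin) entries in a TGW route table; origin breaks prefix-length ties.
routes = [
    ("10.0.0.0/8", "propagated"),
    ("10.1.0.0/16", "propagated"),
    ("10.1.0.0/16", "static"),
    ("0.0.0.0/0", "static"),
]

def best_route(dest, table):
    """Longest-prefix match; static wins over propagated at equal length."""
    matches = [(c, o) for c, o in table if ip_address(dest) in ip_network(c)]
    return max(matches, key=lambda r: (ip_network(r[0]).prefixlen, r[1] == "static"))

print(best_route("10.1.2.3", routes))   # ('10.1.0.0/16', 'static')
print(best_route("192.0.2.1", routes))  # ('0.0.0.0/0', 'static')
```

Note that 10.1.2.3 matches three entries but the static /16 wins, which is how an accidental static route can silently override a propagated one.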
Step 4 — Is There Symmetric Propagation?
For bidirectional traffic, the return path needs its own propagation. The source may reach the destination, but the destination's route table must also contain the source's CIDR or replies are dropped. This is the classic half-working-flow bug.
Step 5 — Spoke VPC Route Table: Is Transit Gateway the Next Hop?
The VPC route table (not the Transit Gateway route table) must send the destination CIDR to the Transit Gateway attachment (an ENI in each AZ). A missing route here means packets never reach the Transit Gateway at all.
Step 6 — Security Groups and NACLs
Security Group on the source instance allows outbound; on the destination, allows inbound from the source CIDR. NACLs are stateless — both directions must be allowed. This is the CLF-C02 trap that carries into SAP-C02.
Step 7 — Appliance Mode Where Stateful
If traffic traverses an inspection or egress VPC, confirm appliance mode is enabled on those attachments. Asymmetric AZ selection produces flows that half-work — pings succeed but TCP handshakes fail or randomly drop.
Step 8 — Overlapping CIDRs
Two attachments advertising the same or overlapping CIDRs will cause silent drops or unpredictable routing. Run describe-transit-gateway-route-tables and inspect the route entries for conflicts.
Step 9 — Direct Connect BGP Session
show ip bgp summary on the customer router. Is the session Established? If stuck in Active or Idle, check ASN, MD5 password, and VLAN tag. BFD status should be up.
Step 10 — VPC Flow Logs
Enable VPC Flow Logs on the source, destination, and any transit subnet. Filter for REJECT vs ACCEPT. REJECT on a Security Group shows exactly which packet was dropped and why. Also enable Transit Gateway Flow Logs (yes, they exist separately) for end-to-end visibility.
Real-World Cost and Scaling Considerations
Transit Gateway and hybrid networking cost adds up fast. Key levers:
Transit Gateway Pricing Components
- Attachment fee per hour per attachment, per AZ (for VPC attachments) — roughly $0.05/hour per attachment-AZ in us-east-1.
- Data processing fee per GB of data that traverses the Transit Gateway — roughly $0.02/GB in us-east-1.
- Inter-Region peering data transfer — per-GB cross-Region rates.
For 50 VPC attachments each spanning 3 AZs, attachment-hours alone run about $5,400/month before data processing. This is material — designs that avoid Transit Gateway for a handful of VPCs (use VPC Peering or PrivateLink) can save thousands.
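That figure is simple arithmetic and worth being able to reproduce quickly. A sketch using the per-attachment-AZ billing model and rates quoted above (both are this note's assumptions; verify them against current AWS pricing before relying on the numbers):

```python
HOURS_PER_MONTH = 730
ATTACHMENT_AZ_RATE = 0.05  # assumed us-east-1 rate, $/attachment-AZ-hour

def monthly_attachment_cost(vpcs, azs_per_vpc):
    """Attachment-hour cost only; excludes data-processing and peering fees."""
    return vpcs * azs_per_vpc * ATTACHMENT_AZ_RATE * HOURS_PER_MONTH

print(monthly_attachment_cost(50, 3))  # 5475.0 — the "~$5,400/month" above
```

Add the per-GB data-processing fee on top of this to get the full Transit Gateway bill.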
Direct Connect Pricing Components
- Port-hour for Dedicated Connections: scales with speed, around $0.30/hour for 1 Gbps, $2.25/hour for 10 Gbps in US locations.
- Data transfer out over Direct Connect is roughly $0.02/GB, versus $0.09/GB for internet egress from us-east-1 — massive savings for egress-heavy workloads.
- Direct Connect pays back quickly when monthly egress exceeds roughly 50 TB, depending on speed tier.
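The payback claim can be sanity-checked with the rates quoted above (illustrative figures; confirm current pricing). A hypothetical monthly-saving calculation for a single 10 Gbps dedicated port:

```python
PORT_HOURLY, HOURS = 2.25, 730   # assumed 10 Gbps dedicated port rate
DX_GB, INET_GB = 0.02, 0.09      # assumed per-GB egress rates from the text

def monthly_saving(egress_tb):
    """Egress savings over internet rates, minus the DX port-hour cost."""
    gb = egress_tb * 1000
    return gb * (INET_GB - DX_GB) - PORT_HOURLY * HOURS

print(monthly_saving(50) > 0)  # True — comfortably past breakeven at 50 TB
print(monthly_saving(10) > 0)  # False — port cost dominates at low egress
```

With these assumed rates the port pays for itself well before 50 TB/month; the exact crossover shifts with the speed tier chosen.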
NAT Gateway Ceiling
NAT Gateway caps at 45 Gbps and ~55,000 concurrent connections per destination IP/port. For central egress designs pushing beyond this, deploy multiple NAT Gateways per AZ and ECMP across them at the Egress VPC, or substitute a NAT fleet on EC2 behind a Network Load Balancer.
Common SAP-C02 Transit Gateway and Hybrid Networking Traps
Consolidated trap list — every item here has burned experienced candidates on SAP-C02.
- VPC Peering is non-transitive. A <-> B, B <-> C does not imply A <-> C. Transit Gateway is the answer.
- Direct Connect is not encrypted by default. Add MACsec or run IPSec over it.
- Transit VIFs are limited to 1 per Direct Connect connection. Plan accordingly.
- Direct Connect Gateway caps at 3 Transit Gateway associations. For more Regions, use Cloud WAN or multiple DX Gateways.
- Appliance mode is required for central egress and inspection VPC attachments — missing it breaks stateful flows.
- Overlapping CIDRs are not auto-resolved — use PrivateLink or Private NAT Gateway.
- Static routes beat propagated routes at the same prefix length — easy to override routing by accident.
- Transit Gateway peering is static-only — no BGP between two Transit Gateways.
- Site-to-Site VPN caps at ~1.25 Gbps per tunnel — use ECMP or Transit Gateway Connect for more.
- BFD is off on the customer side by default — you must enable it on your router for sub-second failover.
- Dev/Test Direct Connect is 99.9% SLA, not 99.99% — only High Resilience and Maximum Resilience reach 99.99%.
- Accelerated VPN requires Transit Gateway, not a Virtual Private Gateway.
- Transit Gateway Multicast does not cross inter-Region peering — Region-local only.
- Route 53 Resolver endpoints are zonal — deploy one per AZ for redundancy in Shared Services VPC.
- Central egress savings assume appliance mode + single Egress VPC — replicating egress per AZ-segment negates the savings.
Frequently Asked Questions
Q1 — When should I choose Transit Gateway over VPC Peering?
Choose VPC Peering when you have 2 to 5 VPCs, the topology is static, you need the absolute lowest latency (no transit hop), and you do not expect to scale significantly. Choose Transit Gateway and hybrid networking once you have 6 or more VPCs, span multiple AWS accounts, need transitive routing to on-prem, or want centralized segmentation and logging. The breakeven on cost is roughly 10 VPCs; below that, peering often wins on monthly fees. Above that, Transit Gateway's operational simplicity dominates.
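The mesh-explosion argument behind that breakeven is plain combinatorics: full-mesh peering needs n(n-1)/2 connections, while a Transit Gateway needs one attachment per VPC. A quick sketch:

```python
def peering_links(n):
    """Full-mesh VPC peering connections needed for n VPCs."""
    return n * (n - 1) // 2

def tgw_attachments(n):
    """Hub-and-spoke: one Transit Gateway attachment per VPC."""
    return n

for n in (3, 5, 10, 50):
    print(n, peering_links(n), tgw_attachments(n))
# At 5 VPCs a mesh is 10 peerings; at 50 it is 1,225 — the "mesh explosion".
```

The dollar breakeven depends on current attachment pricing, but the operational breakeven is visible in the quadratic growth alone.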
Q2 — Direct Connect Dedicated vs Hosted — how do I decide?
Decide on three axes: bandwidth, procurement time, and VIF count. Dedicated is the answer when you need 1 Gbps, 10 Gbps, or 100 Gbps, can wait weeks for cross-connect, and want multiple VIFs on one port (up to 50). Hosted is the answer when you need a specific bandwidth (50 Mbps, 200 Mbps, 500 Mbps, 1 Gbps) quickly through a partner's existing infrastructure, are fine with one VIF per Hosted Connection, and do not need LAG. SAP-C02 scenarios that say "needs to be live next week" almost always point to Hosted Connection or Site-to-Site VPN, not Dedicated.
Q3 — Does my Transit Gateway and hybrid networking design need appliance mode everywhere?
No — only on Transit Gateway attachments whose spoke VPC contains stateful devices that must see both directions of a flow. That means Inspection VPC attachments (firewall, IDS), central Egress VPC attachments (NAT Gateway is stateful for new connections), and any third-party load balancer appliance VPC attachment. Regular workload VPCs (app servers, databases) do not need appliance mode. Enabling appliance mode needlessly pins flows to a single AZ's Transit Gateway ENI, reducing cross-AZ load distribution — so use it surgically.
Q4 — How do I handle overlapping CIDRs in a merger scenario without re-IPing?
Rank by acquired-company application criticality. For a small number of specific services that need to cross the overlap, publish each service via AWS PrivateLink — the service appears as an Endpoint in the consumer VPC with a locally-unique IP, so CIDR overlap is irrelevant. For broader connectivity, deploy a Bridge VPC with a Private NAT Gateway that translates overlapping source and destination CIDRs to unique proxy ranges; route between the two sides via the bridge and Transit Gateway. Long-term, plan re-IP migrations, typically application by application, behind a DNS cutover and a short maintenance window per app.
Q5 — What is the difference between Direct Connect Gateway and Transit Gateway — do I need both?
They solve different problems and you usually want both in a multi-Region Transit Gateway and hybrid networking design. Direct Connect Gateway is a global aggregator that lets a single Direct Connect connection reach Virtual Private Gateways or Transit Gateways across many AWS Regions. It is not a data-plane hub — it exists purely to multiplex hybrid traffic across Regions. Transit Gateway is the regional data-plane hub that connects VPCs, VPNs, Direct Connect Gateways, other Transit Gateways, and Connect peers within one Region. In the canonical multi-Region design, Direct Connect connects to Direct Connect Gateway, and Direct Connect Gateway associates with one Transit Gateway per Region.
Q6 — How does Transit Gateway and hybrid networking integrate with DNS?
Put Route 53 Resolver inbound and outbound endpoints in the Shared Services VPC. Outbound endpoints forward specific domains (corp.example.com) to on-premises DNS servers over the Direct Connect or VPN path — on-prem DNS servers receive queries as if from the Shared Services VPC's ENIs. Inbound endpoints let on-premises DNS servers forward AWS private-hosted-zone queries into the VPC. Combined with Route 53 Resolver rules shared via RAM across all workload VPCs, this creates a seamless hybrid DNS experience across the entire Transit Gateway and hybrid networking footprint — on-prem names and AWS private names resolve everywhere without forwarding loops.
Q7 — Can I use Transit Gateway to run multicast to my on-premises datacenter?
No. Transit Gateway multicast is Region-local and VPC-only. Multicast packets do not traverse VPN attachments, Direct Connect attachments, or inter-Region peering. For on-prem to AWS multicast, you would need a multicast-aware overlay (PIM-based) running on your own routers or SD-WAN appliances, not native Transit Gateway multicast. If the SAP-C02 question includes "multicast to on-prem," the answer is typically "not supported natively — use an overlay" or "use a third-party SD-WAN appliance via Transit Gateway Connect."
Q8 — How do I minimize the monthly cost of a large Transit Gateway and hybrid networking deployment?
Four levers, in order of impact. First, consolidate egress — a single central Egress VPC often collapses monthly NAT Gateway hours and processing charges by 80%+ versus per-VPC NAT. Second, adopt PrivateLink for cross-account service access where appropriate — PrivateLink traffic is billed at endpoint rates and sometimes avoids Transit Gateway data-processing fees entirely. Third, push egress-heavy workloads over Direct Connect — the per-GB rate is roughly a quarter of the internet-egress rate, paying back the port-hour quickly above ~50 TB/month egress. Fourth, audit Transit Gateway attachments — an attachment you do not use still bills the attachment-hour; detach anything decommissioned.
Q9 — When would I choose AWS Cloud WAN over Transit Gateway for hybrid networking?
Cloud WAN is the right call when your Transit Gateway and hybrid networking footprint spans many Regions (five or more), you want a single policy document to describe segmentation globally instead of stitching Transit Gateway peerings and per-Region route tables, and you want native integration with SD-WAN partners and global routing attributes. Cloud WAN internally uses Transit Gateways but abstracts them behind a global network object. For small (one or two Regions) deployments, direct Transit Gateway is still simpler and cheaper. For 2026-era multi-Region enterprises, Cloud WAN is increasingly the default.
Q10 — Does SiteLink replace my MPLS provider, and how do I size it?
SiteLink can replace MPLS for inter-site connectivity when both sites already have Direct Connect or can procure it. It runs inter-site traffic over the AWS global backbone — typically lower latency and more consistent than public internet paths, and often cheaper than traditional Tier-1 MPLS at comparable bandwidths. Size SiteLink by measuring current MPLS utilization at the 95th percentile, then provision Direct Connect bandwidth at each site to cover that plus headroom. SiteLink pricing is a flat per-hour fee per enabled VIF plus data transfer; compare it to your MPLS monthly recurring cost and you often see a 30-60% saving for multi-site enterprises. Keep a backup path (a traditional ISP or a small MPLS circuit) until you have validated SiteLink under peak load.
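The sizing guidance (p95 utilization plus headroom) is easy to mechanize. A sketch using a nearest-rank percentile over hypothetical per-interval utilization samples; the 30% headroom is an assumed buffer, not an AWS figure:

```python
def p95(samples):
    """Nearest-rank 95th percentile (integer math avoids float-ceil hazards)."""
    s = sorted(samples)
    return s[(95 * len(s) + 99) // 100 - 1]

def sitelink_capacity(samples_mbps, headroom=1.3):
    """Provision Direct Connect per site: p95 utilization plus ~30% headroom."""
    return p95(samples_mbps) * headroom

# Hypothetical MPLS utilization samples in Mbps, one per measurement interval.
util = [120, 180, 150, 900, 210, 240, 200, 170, 160, 220]
print(sitelink_capacity(util))
```

Note how a single 900 Mbps burst drives the p95 for a short sample window, which is why you should measure over a full billing cycle before sizing.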
Final Study Checklist for Transit Gateway and Hybrid Networking
Before sitting SAP-C02, confirm you can do each of the following from memory:
- Name all six Transit Gateway attachment types and a distinguishing constraint for each.
- Draw the association-vs-propagation matrix for Prod/Dev/Shared/On-prem segmentation.
- Recite the three Direct Connect resilience models and their exact SLAs.
- Explain why appliance mode is required for central egress and inspection VPCs.
- List three remediation patterns for overlapping CIDRs, in preference order.
- Diagram the canonical 50-VPC central-egress-plus-inspection design.
- Distinguish Direct Connect Gateway (global aggregator) from Transit Gateway (regional hub).
- Explain when to use Accelerated VPN, Transit Gateway Connect, and SiteLink.
- Diagnose an asymmetric-routing bug in under 5 checklist steps.
- Estimate the cost delta between central egress and per-VPC NAT for 50 VPCs.
If every item above is second nature, the Transit Gateway and hybrid networking questions on SAP-C02 will feel like pattern recognition, not problem solving — which is exactly the level at which Pro depth must be practiced.