Amazon VPC (Virtual Private Cloud) is the foundation of every AWS network design. Every Amazon EC2 instance, Amazon RDS database, Amazon ECS task, AWS Lambda function with VPC access, and Amazon ElastiCache cluster runs inside a VPC. On SAA-C03, VPC concepts appear in roughly one out of every three scenario questions — not just in the Security domain, but also in Resilience (Multi-AZ subnet placement), Performance (Transit Gateway, PrivateLink), and Cost Optimization (Gateway Endpoints vs NAT Gateway). If you only deeply learn one networking topic for SAA-C03, make it Amazon VPC.
This guide is the canonical VPC reference for every other SAA-C03 topic on ExamHub. When later notes mention "put the database in a private subnet", "use an Interface Endpoint to reach AWS services privately", or "attach a VPC to AWS Transit Gateway", the mechanics are defined here. Task Statement 1.2 ("Design secure workloads and applications") expects you to fluently combine CIDR planning, subnetting, routing, security groups, NACLs, and hybrid connectivity into a production-grade network design. This guide walks through every primitive, highlights the most common SAA-C03 traps, and gives you memorable analogies for the vocabulary.
What is Amazon VPC and Why It Matters for SAA-C03
Amazon VPC is the AWS service that lets you provision a logically isolated virtual network inside an AWS Region, complete with your own private IP address range, subnets, route tables, internet gateways, and security controls. A VPC is effectively your own software-defined data-center network, scoped to one AWS Region and spanning all of that region's Availability Zones (AZs). Every AWS account starts with a default VPC in each AWS Region, and you can create up to 5 custom VPCs per region by default (a soft limit that can be raised).
A VPC gives you three simultaneous superpowers: isolation (your VPC's IP space is walled off from every other AWS customer), control (you pick the CIDR, subnets, routes, and filtering rules), and connectivity (you decide what reaches the internet, what goes through on-premises, and what stays fully private). That combination is why Amazon VPC is the foundational layer of every SAA-C03 Well-Architected design.
- Amazon VPC: an isolated virtual network in one AWS Region, defined by a primary IPv4 CIDR block.
- Subnet: a slice of a VPC's CIDR bound to a single Availability Zone; can be public or private.
- Route table: the per-subnet lookup table AWS uses to forward packets.
- Internet Gateway (IGW): the horizontally scaled, highly available VPC attachment that lets a subnet reach the public internet (and vice versa).
- NAT Gateway: a managed device that lets private subnets initiate outbound internet traffic without accepting inbound.
- Security Group: a stateful, instance-level virtual firewall.
- Network ACL (NACL): a stateless, subnet-level firewall.
- Reference: https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html
Why Amazon VPC is the foundational SAA-C03 topic
The SAA-C03 Design Secure Architectures domain carries 30% exam weight, and Task Statement 1.2 (Design secure workloads and applications) explicitly tests VPC design patterns — public/private subnet tiers, Security Groups, NACLs, VPC Endpoints, PrivateLink. But VPC also surfaces inside Domain 2 (Multi-AZ subnet placement for RDS failover), Domain 3 (Transit Gateway hub-and-spoke for performance and scale), and Domain 4 (Gateway Endpoints to avoid NAT Gateway data-processing charges). Almost every other ExamHub SAA-C03 topic links back to this one; learning Amazon VPC deeply pays off across all four domains.
Plain-Language Explanation: Amazon VPC
Abstract networking concepts get much easier when you map them to physical spaces. Here are three distinct analogies that together cover every major Amazon VPC construct on SAA-C03.
Analogy 1: The Office Building with Zoned Floors
Picture a private corporate office building. The building itself is your Amazon VPC — one physical structure, on one plot of land (one AWS Region), with a single street-address range (the VPC CIDR like 10.0.0.0/16). Each floor is a subnet, and each floor is deliberately assigned to one specific fire zone in the building (one Availability Zone) so that if a fire isolates zone A, zone B floors stay operational. Public floors (public subnets) are floors with street-facing windows and a door directly to the sidewalk — that door is the Internet Gateway (IGW). Private floors (private subnets) have no street-facing door; staff who need to step outside have to use the freight exit on the loading dock — the NAT Gateway — which lets them go out and come back with deliveries but nobody walks in off the street.
Route tables are the directory posted next to every elevator: "To reach floor 3, press 3; to reach the sidewalk, press L; to reach the sister building next door, press the skybridge button." The Security Group is the security guard stationed at each individual office door — they check every visitor against a personalized allowlist and remember who walked in, so those people can walk out again without being re-checked (stateful). The Network ACL (NACL) is the turnstile at the floor elevator lobby — it checks everyone who steps onto or off the floor against a numbered rulebook, and it has no memory of who just came in (stateless), so return traffic needs its own explicit rule.
A VPC Endpoint is a private corridor drilled straight from your floor to the AWS service wing without ever stepping outside the building — you reach Amazon S3 or Amazon DynamoDB via an internal hallway instead of going out to the street. VPC Peering is a skybridge between your building and a partner's building next door — fast, direct, but only connects those two specific buildings. AWS Transit Gateway is the central bus terminal downtown — every building in the district attaches to the terminal once, and buses route between them via the hub.
Analogy 2: The Gated Community with House Rules
Imagine a large gated residential community. The community gate with the guard booth at the entrance is the Internet Gateway — that gate is the only way cars can drive in from the public road. The internal cul-de-sacs are subnets, each assigned to one emergency-response precinct (an AZ). Houses on street-facing cul-de-sacs (public subnets) can order pizza and have the driver pull right up; houses on back cul-de-sacs (private subnets) have no direct driveway from the main road, so the residents have to call the community shuttle (NAT Gateway) to drive them out to run errands. The shuttle brings them back, but pizza delivery cars cannot spontaneously drive to the back cul-de-sac.
House rules posted on every front door are Security Groups — the homeowner writes "the plumber, the electrician, and grandma may enter", the rule travels with the house, and the bouncer at the door remembers whoever walked in so they can leave unchallenged. Cul-de-sac HOA rules at the street entrance are the NACL — numbered from 1 upward, evaluated in order, stopping at the first match; anyone entering or leaving the cul-de-sac must be explicitly allowed in both directions because the HOA rulebook doesn't remember visitors.
VPC Flow Logs are the community's CCTV system — every car that enters or leaves any cul-de-sac is logged with time, direction, plate number (source/destination IP), and whether the gate let them through (accept) or turned them away (reject).
Analogy 3: The Shipping Port with Customs Zones
An international shipping port captures Amazon VPC perfectly. The port (VPC) sits on a coastline (an AWS Region) and is divided into multiple docks (subnets), each assigned to a specific operational zone (AZ) so that if zone-A cranes break, zone-B docks keep running. The harbor mouth opening to the sea is the Internet Gateway — cargo ships enter and exit here. Public docks (public subnets) are directly on the harbor — ships can moor straight up. Private docks (private subnets) are inland warehouses with no sea access; their outbound cargo rides a shuttle truck (NAT Gateway) down to the harbor and comes back through the same shuttle.
The port's roadmap posted at every dock (route table) tells forklifts where each destination lives. The on-dock customs officer checking every container is the Security Group — stateful, remembers each inspection. The perimeter checkpoint between the port zones is the NACL — numbered rules, stateless, no memory. A private rail spur to the Amazon service depot is a Gateway Endpoint (reaches Amazon S3 or Amazon DynamoDB without going to sea) or an Interface Endpoint / PrivateLink (a private freight elevator to most other AWS services and third-party SaaS providers). An undersea cable to one specific partner port is VPC Peering; the regional cargo hub that connects every port in the district via one attachment is AWS Transit Gateway. An encrypted tunnel through the public sea lanes is an AWS Site-to-Site VPN, and a dedicated physical underwater cable laid to your on-premises port is AWS Direct Connect.
On exam day, when you see "private subnet with outbound internet" in a question, mentally picture the community shuttle or the freight-truck loading dock — that reminds you to place a NAT Gateway in a public subnet and point the private subnet's route table at it. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
VPC Basics — CIDR Blocks, Address Reservations, and Region Scope
Before you can lay out a VPC, you need a solid grip on IPv4 CIDR math and the five addresses AWS silently reserves in every subnet.
VPC CIDR block sizing
An Amazon VPC is defined by a primary IPv4 CIDR block between /16 (65,536 addresses) and /28 (16 addresses). The most common production choice is /16 because it leaves plenty of room for future subnets and future multi-VPC designs. Allowed private ranges per RFC 1918 are:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
AWS also lets you attach secondary CIDR blocks to a VPC (up to 5 by default) if you run out of addresses, and you can dual-stack with an IPv6 /56 block. For SAA-C03, assume IPv4 unless the question specifies IPv6.
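The CIDR arithmetic above can be checked with Python's standard-library `ipaddress` module. This is a minimal sketch with illustrative addresses, not an AWS API call:

```python
import ipaddress

# A /16 VPC — the most common production choice.
vpc = ipaddress.ip_network("10.0.0.0/16")
print(vpc.num_addresses)            # 65536 addresses

# Carve the VPC into /24 subnets (256 addresses each).
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))                 # 256 possible /24 subnets
print(subnets[0])                   # 10.0.0.0/24

# Verify the VPC CIDR sits inside an RFC 1918 private range.
rfc1918 = [ipaddress.ip_network(c) for c in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
print(any(vpc.subnet_of(r) for r in rfc1918))   # True
```

Running the same check against a public range (say 52.0.0.0/16) returns False, which is exactly the planning mistake AWS warns against when choosing a primary CIDR.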
Subnet CIDR sizing and AZ binding
Every subnet is a subset of the VPC CIDR and is bound to exactly one Availability Zone. Subnet sizes range from /28 (16 addresses) to /16 (same as the VPC). A subnet is either public (its route table has a default route to an Internet Gateway) or private (it does not). The public/private label is not a property of the subnet object itself — it is decided entirely by the route table attached to it.
The five reserved addresses per subnet
AWS reserves 5 IP addresses in every subnet regardless of size. For a 10.0.1.0/24 subnet:
| Address | Reserved for |
|---|---|
| 10.0.1.0 | Network address |
| 10.0.1.1 | VPC router (default gateway) |
| 10.0.1.2 | AWS DNS (Amazon-provided DNS) |
| 10.0.1.3 | Reserved for future AWS use |
| 10.0.1.255 | Network broadcast address (not usable in VPC) |
A /28 subnet therefore has 16 − 5 = 11 usable IP addresses. This matters when you size subnets for Auto Scaling groups, EKS pods, or ENI-heavy workloads.
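The "total minus five" arithmetic is worth automating when you size subnets. A minimal helper using only the stdlib `ipaddress` module:

```python
import ipaddress

AWS_RESERVED_PER_SUBNET = 5  # network, router, DNS, future use, broadcast

def usable_ips(cidr: str) -> int:
    """Usable addresses in a VPC subnet after AWS's 5 reserved IPs."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED_PER_SUBNET

print(usable_ips("10.0.1.0/28"))  # 11  — the smallest allowed subnet
print(usable_ips("10.0.1.0/24"))  # 251
```

Eleven usable addresses in a /28 disappear fast once an Auto Scaling group, a NAT Gateway ENI, or a handful of Interface Endpoints land in the subnet — which is why most designs default to /24 or larger.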
- VPC CIDR size: /16 to /28.
- Subnet CIDR size: /28 (16) to /16 — must sit inside a VPC CIDR.
- 5 reserved IPs per subnet: network, router, DNS, future, broadcast.
- Default soft limit: 5 VPCs per region, 200 subnets per VPC.
- Default soft limit: 5 secondary CIDRs per VPC.
- Amazon VPC is regional; subnets are AZ-scoped.
- Reference: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-subnets-commands-example.html
Subnets: Public vs Private — Route Tables, Internet Gateway, and NAT Gateway
Public vs private is the single most-tested subnet distinction on SAA-C03. Understanding exactly what makes a subnet "public" unlocks most networking scenario questions.
What makes a subnet public
A subnet is public when three conditions are all true:
- Its route table has a default route (`0.0.0.0/0`) pointing to an Internet Gateway (IGW).
- The IGW is attached to the VPC.
- Resources in the subnet have public IPv4 addresses (either auto-assigned via the subnet's `MapPublicIpOnLaunch` setting, or an Elastic IP attached manually).
Remove any one of those and the subnet is effectively private.
What makes a subnet private
A private subnet has no default route to an IGW. Resources there have only private IP addresses and cannot be reached directly from the public internet. Private subnets are where you put databases (Amazon RDS, Amazon ElastiCache), application servers behind a load balancer, and internal microservices — anything that should never accept unsolicited connections from the internet.
Internet Gateway (IGW)
An Internet Gateway is a horizontally scaled, redundant, highly available VPC component that performs two jobs: it acts as a target in route tables for internet-bound traffic, and it performs 1:1 NAT between private and public IPv4 addresses for EC2 instances that have public IPs. You attach exactly one IGW per VPC. There is no charge for the IGW itself, but you pay for the data transfer flowing through it.
NAT Gateway — outbound-only internet for private subnets
A NAT (Network Address Translation) Gateway is a managed AWS service that lets instances in a private subnet initiate outbound connections to the public internet (to download OS patches, reach third-party APIs, call AWS service endpoints that don't have a VPC endpoint) without accepting inbound connections from the internet. The NAT Gateway itself lives in a public subnet and uses an Elastic IP. Private-subnet route tables send 0.0.0.0/0 to the NAT Gateway, which then forwards those packets through the IGW on behalf of the originators.
NAT Gateway key properties SAA-C03 tests repeatedly:
- Zonal — each NAT Gateway lives in one AZ. For high availability you need one NAT Gateway per AZ with each private subnet's route table pointing at the NAT Gateway in its own AZ.
- Up to 45 Gbps bandwidth per NAT Gateway, scales automatically up to that ceiling.
- Charged per hour + per GB processed, which makes NAT Gateway a frequent cost-optimization target.
- Managed — no patching, no HA design inside the AZ.
NAT Instance (legacy)
A NAT Instance is a self-managed EC2 instance running NAT software. AWS still supports it, but it is considered legacy — no auto-HA, no auto-scaling, you patch it yourself. On SAA-C03, default to NAT Gateway unless a question explicitly asks about lowest-cost, very low traffic, or a specialized scenario where you need to run your own software on the NAT host.
A common SAA-C03 resilience mistake is deploying a single NAT Gateway in one AZ and routing every private subnet (in every AZ) to it. If that AZ fails, every private subnet loses outbound internet — you have introduced a cross-AZ single point of failure. The correct pattern is one NAT Gateway per AZ, and each private subnet's route table points at the NAT Gateway in its own AZ. This also avoids cross-AZ data transfer charges. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
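The one-NAT-Gateway-per-AZ pattern can be sketched as a simple mapping from each private subnet to the NAT Gateway in its own AZ. All subnet IDs, NAT Gateway IDs, and AZ names here are hypothetical:

```python
# One NAT Gateway per AZ, each living in that AZ's public subnet.
nat_per_az = {
    "us-east-1a": "nat-aaaa",
    "us-east-1b": "nat-bbbb",
}

# Each private subnet is bound to one AZ.
private_subnets = {
    "subnet-priv-a": "us-east-1a",
    "subnet-priv-b": "us-east-1b",
}

# HA pattern: every private subnet's default route targets the
# NAT Gateway in its OWN AZ — no cross-AZ single point of failure.
routes = {subnet: {"0.0.0.0/0": nat_per_az[az]}
          for subnet, az in private_subnets.items()}

print(routes["subnet-priv-a"]["0.0.0.0/0"])  # nat-aaaa
print(routes["subnet-priv-b"]["0.0.0.0/0"])  # nat-bbbb
```

The anti-pattern the exam tests is collapsing `nat_per_az` to a single entry: all private subnets then share one NAT Gateway, and an outage in that AZ takes out outbound internet everywhere.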
Route Tables — How Traffic Finds Its Way Out of a Subnet
Route tables are where most network mis-configurations hide. Every subnet has exactly one route table in effect, and that table's contents decide whether the subnet is public or private, whether it can reach peered VPCs, and whether it can reach on-premises.
Main route table vs custom route tables
Every VPC has a main route table created automatically. Subnets that are not explicitly associated with a custom route table inherit the main route table. For production, the recommended pattern is to explicitly associate a custom route table with every subnet so that a misconfigured main route table cannot accidentally change your topology.
Route evaluation: most specific match wins
When traffic leaves a subnet, Amazon VPC matches the destination against every route in the table and picks the most specific (longest prefix) match. A route to 10.0.2.0/24 beats a 10.0.0.0/16 route, which beats 0.0.0.0/0. The default local route (for the VPC's own CIDR) is always present and always wins for in-VPC traffic.
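Longest-prefix matching can be sketched with the stdlib `ipaddress` module. The route table below mirrors a typical private subnet; the target IDs are hypothetical:

```python
import ipaddress

# Destination CIDR -> target, as in a private subnet's route table.
routes = {
    "10.0.0.0/16": "local",      # VPC CIDR — always present
    "10.1.0.0/16": "pcx-1234",   # peered VPC
    "0.0.0.0/0":   "nat-5678",   # default route via NAT Gateway
}

def lookup(dest_ip: str) -> str:
    """Return the target of the most specific (longest prefix) match."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [ipaddress.ip_network(cidr) for cidr in routes
               if ip in ipaddress.ip_network(cidr)]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[str(best)]

print(lookup("10.0.2.15"))      # local    — in-VPC traffic wins
print(lookup("10.1.9.9"))       # pcx-1234 — /16 peer route beats 0.0.0.0/0
print(lookup("93.184.216.34"))  # nat-5678 — only the default route matches
```

The second lookup is the key intuition: even though 10.1.9.9 also matches `0.0.0.0/0`, the /16 peering route is more specific, so the packet goes to the peered VPC, not the NAT Gateway.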
Common route table entries SAA-C03 tests
| Destination | Target | Purpose |
|---|---|---|
| 10.0.0.0/16 (VPC CIDR) | local | Automatic, in-VPC routing — always present |
| 0.0.0.0/0 | igw-xxxx | Public subnet default route to internet |
| 0.0.0.0/0 | nat-xxxx | Private subnet default route via NAT Gateway |
| 10.1.0.0/16 | pcx-xxxx | Route to peered VPC |
| 10.1.0.0/16 | tgw-xxxx | Route to Transit Gateway attachment |
| 192.168.0.0/16 | vgw-xxxx | Route to on-premises via VPN |
| pl-xxxx (prefix list) | vpce-xxxx | Route to Gateway VPC Endpoint (S3 or DynamoDB) |
Security Groups vs Network ACLs — The Most-Tested Distinction
This distinction shows up in at least one SAA-C03 question on almost every exam attempt. Master it and you will earn easy points across Domain 1 and Domain 3.
Security Group — stateful, instance-level
A Security Group (SG) is a virtual firewall attached to elastic network interfaces (ENIs) — which means it lives at the instance level, not the subnet level. Each ENI can have up to 5 security groups, and each security group can have up to 60 inbound and 60 outbound rules (1,000 rules maximum per SG across both directions with a limit increase).
Key SG properties:
- Stateful — if you allow an inbound request, the return response is automatically allowed back out. You don't need a matching outbound rule for the reply.
- Allow rules only — you cannot write a deny rule. You only specify what is permitted; everything else is implicitly denied.
- References — a security group rule can reference another security group as its source/destination. This lets you say "allow inbound TCP 3306 from instances in the `app-tier-sg`" without hardcoding IPs.
- Default outbound — a new security group starts with an outbound allow-all rule; it has no inbound rules by default.
- Applied at ENI — the same SG can be attached to instances across multiple subnets and AZs.
Network ACL — stateless, subnet-level
A Network ACL (NACL) is a firewall that sits at the subnet boundary. Every packet entering or leaving the subnet is evaluated against the NACL's rules.
Key NACL properties:
- Stateless — inbound and outbound traffic are evaluated independently. If you allow inbound TCP 443, you must also write an outbound rule for the ephemeral port range (1024-65535 for most Linux, 49152-65535 for Windows) to let the response leave.
- Numbered rules, lowest number wins — rules are evaluated in order from lowest to highest rule number; the first match decides and AWS stops evaluating. Convention: space rules in 100s (100, 200, 300...) so you can insert later.
- Allow AND deny rules — NACLs support explicit deny, which is useful for blocking a known bad IP range.
- One NACL per subnet — a subnet can only be associated with one NACL, though one NACL can cover many subnets.
- Default NACL — allows all traffic in and out. Custom NACL — denies all traffic by default until you add rules.
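The numbered, first-match-wins evaluation can be sketched in a few lines. Rule numbers, ports, and actions below are illustrative, not from any real NACL:

```python
# A custom NACL's inbound rules: (rule number, protocol, port, action).
nacl_inbound = [
    (100, "tcp", 443, "ALLOW"),
    (200, "tcp", 80,  "ALLOW"),
    (300, "tcp", 22,  "DENY"),   # explicit deny — impossible in an SG
]

def evaluate(rules, proto, port):
    """Evaluate rules lowest-number first; the first match decides."""
    for _num, r_proto, r_port, action in sorted(rules):
        if r_proto == proto and r_port == port:
            return action        # stop at the first match
    return "DENY"                # the implicit deny-all (*) catch-all

print(evaluate(nacl_inbound, "tcp", 443))   # ALLOW
print(evaluate(nacl_inbound, "tcp", 22))    # DENY (explicit rule 300)
print(evaluate(nacl_inbound, "tcp", 3306))  # DENY (implicit catch-all)
```

Remember that this evaluation runs independently in each direction: allowing inbound 443 here does nothing for the reply packets, which need their own outbound rule covering the ephemeral port range.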
Side-by-side — SG vs NACL
| Aspect | Security Group | Network ACL |
|---|---|---|
| Scope | ENI (instance) | Subnet |
| State | Stateful | Stateless |
| Rule types | Allow only | Allow and Deny |
| Rule order | All evaluated (implicit deny last) | Numbered, first match wins |
| Default inbound | Deny all | Default NACL: allow; Custom: deny |
| Default outbound | Allow all | Default NACL: allow; Custom: deny |
| Can reference other SG? | Yes | No (CIDR only) |
| Typical use | Fine-grained app-tier filtering | Coarse subnet-perimeter guardrails |
Defense in depth — use both
Security Groups and NACLs are not either/or — AWS well-architected designs use both. Security Groups handle fine-grained, reference-based rules at the instance layer (web tier talks to app tier, app tier talks to DB tier), and NACLs provide a coarse-grained subnet-perimeter guardrail (no traffic from known-bad IP ranges, deny all SMB/NetBIOS at the subnet edge).
The single most common SAA-C03 network trap is forgetting that NACLs are stateless. If a question shows a NACL inbound rule allowing HTTPS 443 but omits the outbound ephemeral port rule, the return traffic gets dropped and the connection fails — even though the inbound rule looks correct. Security Groups, by contrast, always allow return traffic automatically. When a question describes "connection times out even though inbound rule is allowed", suspect the NACL ephemeral port range. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
- Security Group = Stateful = Instance level = Allow only = References other SGs.
- NACL = Stateless = Subnet level = Allow + Deny = Numbered rules, first match wins.
- Return traffic through NACL requires an explicit ephemeral port range rule (usually 1024-65535).
- Security Groups: default deny inbound, allow outbound.
- Custom NACLs: default deny both directions until you add rules.
- Reference: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
VPC Endpoints — Reach AWS Services Privately
A VPC Endpoint is a virtual device that lets resources in your VPC reach AWS services (and SaaS partners via AWS PrivateLink) without traversing the public internet, without needing an Internet Gateway, without needing a NAT Gateway, and without needing a VPN. Two distinct endpoint types exist and SAA-C03 tests both.
Gateway Endpoint — S3 and DynamoDB only, free
A Gateway Endpoint is a route-table target that lets you reach Amazon S3 or Amazon DynamoDB privately. You create the endpoint, AWS adds a prefix-list route to the route tables you pick, and traffic to S3 or DynamoDB now flows through the endpoint instead of the IGW/NAT path.
Gateway Endpoint key properties:
- Supported services: Amazon S3 and Amazon DynamoDB — only these two.
- No additional charge — Gateway Endpoints are free and do not incur data processing fees.
- VPC-scoped — works only inside the VPC, no cross-region or cross-account connection.
- Prefix-list routing — AWS maintains the list of S3/DynamoDB IPs; you just reference the `pl-xxxx` prefix list.
- Great for cost optimization — if your private subnet's primary outbound traffic is to S3, a Gateway Endpoint can dramatically cut your NAT Gateway data-processing bill.
Interface Endpoint — AWS PrivateLink, most AWS services and SaaS
An Interface Endpoint is an ENI placed in one or more of your subnets with a private IP from the subnet's range. Traffic to the endpoint reaches the target service privately via AWS PrivateLink. Interface Endpoints support most AWS services (Amazon EC2 API, AWS KMS, AWS Systems Manager, Amazon SQS, Amazon SNS, AWS Secrets Manager, Amazon ECR, and 100+ others) plus third-party SaaS providers published through AWS PrivateLink (Datadog, Snowflake, MongoDB Atlas, and many more).
Interface Endpoint key properties:
- Charged per hour per AZ + per GB processed.
- Each endpoint lives in the subnets you pick — for HA, pick subnets in multiple AZs.
- Resolves via private DNS to the PrivateLink ENI IP when "Enable Private DNS" is on, so your application code doesn't need to change the endpoint URL.
- Controlled by endpoint policies (a resource-based IAM policy scoping which API actions are allowed through the endpoint) and Security Groups on the endpoint ENI.
Gateway vs Interface Endpoint — the SAA-C03 decision
| Aspect | Gateway Endpoint | Interface Endpoint (PrivateLink) |
|---|---|---|
| Supported services | S3, DynamoDB only | Most AWS services + SaaS partners |
| Mechanism | Route-table prefix list | ENI with private IP |
| Cost | Free | Per hour + per GB processed |
| Scope | In-VPC only | In-VPC, cross-region, cross-account |
| DNS | No change needed | Private DNS resolves to ENI IP |
| Use when | Reaching S3/DynamoDB from private subnets | Reaching SQS, KMS, SSM, Secrets Manager, etc., privately |
A classic SAA-C03 cost-optimization question describes a private workload making heavy requests to Amazon S3 and paying high NAT Gateway data-processing fees. The answer is a Gateway Endpoint for Amazon S3 — it is free, removes S3 traffic from the NAT Gateway path entirely, and requires only a route-table change. If the question mentions S3 or DynamoDB, default to Gateway Endpoint. If it mentions any other AWS service (KMS, Secrets Manager, SSM, SQS, SNS, ECR), default to Interface Endpoint. Reference: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html
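The decision rule above reduces to a two-line lookup, which makes a handy mnemonic. Service names here are illustrative shorthand, not official endpoint identifiers:

```python
# Only these two services support the free Gateway Endpoint.
GATEWAY_ONLY = {"s3", "dynamodb"}

def endpoint_type(service: str) -> str:
    """SAA-C03 rule of thumb: S3/DynamoDB -> Gateway, everything else -> Interface."""
    return ("Gateway Endpoint" if service.lower() in GATEWAY_ONLY
            else "Interface Endpoint")

print(endpoint_type("S3"))              # Gateway Endpoint
print(endpoint_type("DynamoDB"))        # Gateway Endpoint
print(endpoint_type("SecretsManager"))  # Interface Endpoint
```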
AWS PrivateLink — Private Service Exposure Without the Public Internet
AWS PrivateLink is the underlying technology that powers Interface Endpoints. It lets a service provider (another AWS account, or a SaaS vendor) expose a service as an endpoint service backed by a Network Load Balancer, and lets service consumers reach it via Interface Endpoints in their own VPCs — without either party's traffic ever touching the public internet, and without requiring VPC Peering or Transit Gateway between them.
Why PrivateLink matters for SAA-C03
- No CIDR overlap constraint — unlike VPC Peering or Transit Gateway, PrivateLink works even if the provider and consumer VPCs have overlapping IP ranges, because each consumer only sees the ENI IPs in its own subnet.
- Unidirectional — the consumer initiates to the provider; there is no return path the other way unless the provider also sets up PrivateLink going the opposite direction.
- One-to-many — one service provider can publish to many consumer VPCs across many AWS accounts.
- SaaS use case — AWS customers expose internal microservices to other business units, and SaaS vendors sell services over PrivateLink to keep customer data off the internet.
Typical SAA-C03 PrivateLink scenarios
- "A company wants to expose an internal application to partner VPCs without using VPC Peering or the public internet" → AWS PrivateLink endpoint service.
- "A SaaS vendor delivers its API to customer VPCs privately with no CIDR planning coordination" → AWS PrivateLink.
- "A workload in a private subnet needs to call AWS Secrets Manager without going through the NAT Gateway" → Interface Endpoint (which is PrivateLink under the hood).
VPC Peering — Non-Transitive One-to-One Connection
VPC Peering is a networking connection between exactly two VPCs that allows traffic to route between them using private IPv4 or IPv6 addresses. Peering works in the same AWS Region, across AWS Regions (inter-region peering), and across AWS accounts.
VPC Peering key properties
- CIDR blocks must not overlap. Peering will not route between overlapping IP ranges; you must plan CIDRs up front.
- Non-transitive. If VPC A is peered with VPC B, and VPC B is peered with VPC C, VPC A cannot reach VPC C through B. You must create a separate peering between A and C.
- Route table configuration is required on both sides — each VPC needs a route for the peer's CIDR pointing at the `pcx-xxxx` peering connection.
- Security Groups can reference peer SGs only within the same region; cross-region peering requires CIDR-based rules.
- No single point of failure, no bandwidth bottleneck — peering uses the AWS backbone directly and scales automatically.
- Pricing — no hourly charge, you pay only for inter-AZ or inter-region data transfer on traffic traversing the peering.
When to use VPC Peering vs AWS Transit Gateway
| Scenario | Recommended |
|---|---|
| 2-3 VPCs, simple point-to-point connectivity, no overlap | VPC Peering |
| Many VPCs (5+), hub-and-spoke or full mesh | AWS Transit Gateway |
| You need transitive routing between multiple VPCs | AWS Transit Gateway |
| You want to connect on-premises to many VPCs over one VPN/DX | AWS Transit Gateway |
A recurring SAA-C03 trap is a three-VPC scenario: A peered to B, B peered to C, question asks whether A can reach C. Answer: no — VPC Peering is non-transitive. The correct fix is either a direct A-to-C peering (which doesn't scale) or moving to AWS Transit Gateway (which is transitive by design). If the scenario mentions "many VPCs" or "transitive routing", the answer is almost always Transit Gateway. Reference: https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
AWS Transit Gateway — Hub-and-Spoke for Many VPCs
AWS Transit Gateway (TGW) is a regional, highly available network transit hub that you attach many VPCs, VPN connections, Direct Connect gateways, and Transit Gateway peerings to. Instead of the N×(N−1)/2 peerings a full mesh of N VPCs would require, TGW reduces the design to N attachments hanging off a single hub.
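The N×(N−1)/2 scaling argument is quick to verify numerically — a minimal sketch comparing full-mesh peering counts with hub attachments:

```python
def mesh_peerings(n: int) -> int:
    """Full mesh of N VPCs: every pair needs its own peering connection."""
    return n * (n - 1) // 2

def tgw_attachments(n: int) -> int:
    """Transit Gateway hub: one attachment per VPC."""
    return n

for n in (3, 5, 10, 50):
    print(f"{n} VPCs: {mesh_peerings(n)} peerings vs "
          f"{tgw_attachments(n)} TGW attachments")
```

At 10 VPCs the mesh already needs 45 peerings versus 10 attachments, and at 50 VPCs it balloons to 1,225 — which is why "many VPCs" in a scenario almost always points to Transit Gateway.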
Transit Gateway key properties
- Regional — one TGW per region, scales to thousands of VPC attachments.
- Transitive routing — any attachment can route to any other attachment (subject to route-table rules).
- Route tables — a Transit Gateway has its own route tables, separate from VPC route tables. Each attachment is associated with one TGW route table and can propagate routes to others.
- Inter-region TGW peering — link two TGWs in different regions to build a global backbone.
- Direct Connect integration — connects your on-premises network to every attached VPC via one Direct Connect gateway.
- Bandwidth — up to 50 Gbps per attachment.
- Pricing — hourly per-attachment fee plus per-GB data-processing fee. Transit Gateway is powerful but noticeably more expensive than peering.
When Transit Gateway is the right answer
- 5+ VPCs need connectivity between each other.
- Hybrid network where on-premises needs to reach many VPCs.
- Multi-account landing zone with dozens of workload accounts sharing a central egress or shared-services VPC.
- Transitive routing required (VPC A ↔ VPC B ↔ VPC C over one hub).
For SAA-C03, if the scenario describes a company with many AWS accounts (via AWS Organizations) and many VPCs that need to share connectivity, AWS Transit Gateway is almost always the correct answer. VPC Peering is acceptable only for a small number of VPCs with no transitive requirement. Do watch the cost trade-off — TGW charges per attachment-hour and per GB processed, which is why cost-optimization questions sometimes still prefer peering for a 2-VPC setup. Reference: https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html
AWS Site-to-Site VPN — Encrypted Tunnel Over the Public Internet
AWS Site-to-Site VPN creates two IPsec tunnels (for redundancy) between your on-premises customer gateway device and an AWS-side Virtual Private Gateway (VGW) attached to a VPC, or a Transit Gateway. Traffic flows over the public internet but is encrypted end-to-end.
Site-to-Site VPN key properties
- Two tunnels per VPN connection, each terminated on a different AWS availability endpoint for HA.
- Up to 1.25 Gbps per tunnel (aggregate throughput typically capped around this range on the AWS side).
- BGP or static routing — BGP allows dynamic failover; static routing requires manual route updates.
- Quick to stand up — provisioned in minutes, no cross-connect ordering.
- Encrypted over the internet — traffic is encrypted, but performance depends on internet path quality (variable latency, occasional jitter).
- Cost — low hourly charge + standard data-transfer-out pricing.
AWS Client VPN vs Site-to-Site VPN
These are two different AWS services. AWS Site-to-Site VPN connects entire on-premises networks (whole data centers) to AWS. AWS Client VPN is an OpenVPN-based managed service that connects individual user devices (laptops, phones) to a VPC for remote-worker access. SAA-C03 questions about "branch office to AWS" mean Site-to-Site VPN; questions about "developers connecting to a private VPC from home" mean Client VPN.
AWS Direct Connect — Dedicated Private Fiber to AWS
AWS Direct Connect (DX) is a physical, dedicated network connection between your on-premises data center (or a colocation cage) and an AWS Direct Connect location. It bypasses the public internet entirely, delivering consistent low latency and predictable bandwidth.
Direct Connect key properties
- Port speeds: 1 Gbps, 10 Gbps, 100 Gbps (dedicated connections); sub-1 Gbps available via Direct Connect partners (hosted connections).
- Physical cross-connect at a Direct Connect location — takes weeks to months to provision (order, ship, install fiber).
- Not encrypted by default — Direct Connect is private but the physical link does not encrypt Layer 2. For encryption, layer AWS Site-to-Site VPN over Direct Connect (sometimes called "DX + VPN").
- Virtual Interfaces (VIFs): a private VIF reaches one VPC's VGW or a Direct Connect gateway; a public VIF reaches AWS public service endpoints (such as S3 and DynamoDB) over the Direct Connect link instead of the public internet; a transit VIF attaches to a Direct Connect gateway that fronts a Transit Gateway.
- Direct Connect Gateway lets one Direct Connect connection reach multiple VPCs in multiple regions.
- Resilience tiers — AWS publishes "Maximum Resiliency" (4 DX connections across 2 locations) and "High Resiliency" (2 DX connections across 2 locations) reference architectures.
VPN vs Direct Connect — the SAA-C03 decision
| Requirement | Site-to-Site VPN | Direct Connect |
|---|---|---|
| Provisioning time | Minutes | Weeks to months |
| Bandwidth | Up to 1.25 Gbps/tunnel | 1, 10, or 100 Gbps |
| Latency consistency | Variable (internet) | Consistent, low |
| Cost | Lowest hourly + data transfer | Highest — port fee + cross-connect + data transfer |
| Encryption | Built-in (IPsec) | Not by default (add VPN on top) |
| Use case | Low/medium traffic, fast to provision, short-term | High-throughput, steady-state, latency-sensitive |
If a SAA-C03 scenario says "a company needs to connect on-premises to AWS today" or mentions "short-term" or "backup connection", the answer is Site-to-Site VPN. If the scenario mentions "sustained 10 Gbps", "consistent latency for real-time trading", "large database replication from on-premises", or "reduce long-term data transfer cost at scale", the answer is AWS Direct Connect — optionally with a VPN on top for encryption. Reference: https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html
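The decision table above can be condensed into a rule-of-thumb helper. This is purely an illustrative sketch of the exam heuristic — the function name and thresholds are my own simplification, not an AWS API:

```python
def hybrid_connectivity_choice(gbps_needed: float,
                               needs_consistent_latency: bool,
                               must_provision_in_days: bool) -> str:
    """Rule-of-thumb from the VPN vs Direct Connect table (illustrative only)."""
    if must_provision_in_days:
        # Direct Connect takes weeks to months; only VPN can be live quickly.
        return "Site-to-Site VPN"
    if gbps_needed > 1.25 or needs_consistent_latency:
        # Sustained high throughput or predictable latency points to DX.
        # Layer a VPN on top if the scenario also demands encryption.
        return "Direct Connect"
    return "Site-to-Site VPN"
```

Reading the scenario for these three signals (bandwidth, latency consistency, urgency) resolves most VPN-vs-DX questions before you look at the answer choices.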
VPC Flow Logs — Visibility for Security and Troubleshooting
VPC Flow Logs capture metadata about every IP packet flowing to or from network interfaces in your VPC. You can enable Flow Logs at the VPC level, subnet level, or individual ENI level, and publish the records to Amazon CloudWatch Logs, Amazon S3, or Amazon Kinesis Data Firehose.
What a Flow Log record contains
Each record captures (in the default format): source IP, destination IP, source port, destination port, protocol, packets, bytes, the start/end capture window, the action (ACCEPT or REJECT), and the log status. Custom formats let you add fields such as the VPC ID, subnet ID, instance ID, AWS Region, and TCP flags.
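A default-format record is a single space-separated line, so it is easy to parse. A minimal sketch — the field names follow the version-2 default format, and the sample record below is invented for illustration:

```python
# Field order of the version-2 default Flow Log format.
FIELDS = ["version", "account_id", "interface_id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "packets", "bytes",
          "start", "end", "action", "log_status"]

def parse_flow_log(line: str) -> dict:
    """Split a default-format VPC Flow Log record into named fields."""
    record = dict(zip(FIELDS, line.split()))
    # Numeric fields arrive as strings; convert the common ones.
    for key in ("srcport", "dstport", "protocol", "packets", "bytes"):
        record[key] = int(record[key])
    return record

# Hypothetical record: an app server rejected on its way to PostgreSQL (5432).
sample = ("2 123456789010 eni-0a1b2c3d 10.0.1.15 10.0.2.40 "
          "49152 5432 6 12 3480 1700000000 1700000060 REJECT OK")
rec = parse_flow_log(sample)
# rec["action"] == "REJECT" and rec["dstport"] == 5432
```

Filtering parsed records for `action == "REJECT"` is exactly the workflow behind the "why can't A reach B?" troubleshooting scenarios below.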
What Flow Logs can and cannot do
Flow Logs do capture:
- Traffic allowed or blocked by security groups and NACLs.
- Packets to and from Elastic Network Interfaces including NAT Gateway and load balancer ENIs.
- Rejected traffic — extremely useful for diagnosing "why can't A reach B?" issues.
Flow Logs do not capture:
- Traffic to and from the Amazon-provided DNS resolver (169.254.169.253 or the VPC ".2" resolver address).
- Traffic to the Amazon EC2 instance metadata service (169.254.169.254).
- DHCP traffic.
- Mirrored traffic (for that, use VPC Traffic Mirroring).
- The payload of packets — Flow Logs are metadata only, not a packet capture.
Typical SAA-C03 Flow Logs scenarios
- "Troubleshoot why an EC2 instance cannot connect to an RDS database in a private subnet" → enable VPC Flow Logs and check for REJECT records on the relevant NACL or security group.
- "Forensic / compliance requirement to retain a record of all network traffic in the VPC for a year" → Flow Logs published to S3 with lifecycle to Glacier.
- "Real-time network monitoring and anomaly detection" → Flow Logs to Kinesis Data Firehose into OpenSearch, or let Amazon GuardDuty consume them automatically for threat detection.
For SAA-C03, remember that Flow Logs show who talked to whom and whether it was allowed, not what they said. If a scenario needs the actual packet contents (intrusion analysis, deep packet inspection), the correct service is VPC Traffic Mirroring, not Flow Logs. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html
Network Segmentation Strategies — Tiered Subnet Architecture
A production Amazon VPC for SAA-C03 usually follows a three-tier subnet architecture per Availability Zone:
- Public subnet tier — holds internet-facing load balancers (Application Load Balancer, Network Load Balancer), NAT Gateways, and bastion hosts. Route table has 0.0.0.0/0 → IGW.
- Private application subnet tier — holds EC2 application servers, ECS/EKS worker nodes, container tasks, and Lambda functions with VPC access. No direct internet ingress. Route table has 0.0.0.0/0 → NAT Gateway.
- Private data subnet tier — holds Amazon RDS, Amazon ElastiCache, and Amazon OpenSearch. No outbound internet unless required. Security group accepts traffic only from the application-tier security group.
Deploy the same three tiers in each AZ (typically 2-3 AZs for HA). That gives you 6-9 subnets per VPC, which is the canonical SAA-C03 layout and matches the AWS Well-Architected reference architecture.
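The 9-subnet layout (3 tiers × 3 AZs) can be carved out of a /16 with Python's standard `ipaddress` module. The /20 sizing and AZ names here are arbitrary choices for illustration, not an AWS requirement:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
tiers = ["public", "private-app", "private-data"]
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]

# Take the first 9 /20 blocks out of the /16 (each /20 = 4,096 addresses).
blocks = list(vpc.subnets(new_prefix=20))[: len(tiers) * len(azs)]

plan = {}
for i, (tier, az) in enumerate((t, a) for t in tiers for a in azs):
    plan[(tier, az)] = blocks[i]

for (tier, az), cidr in plan.items():
    print(f"{tier:13s} {az}: {cidr}")
```

Planning all subnets up front like this avoids the classic mistake of carving ad-hoc CIDRs that later overlap with a peered VPC or an on-premises range.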
Key Numbers and Must-Memorize Facts for Amazon VPC
Memorize this short list — it covers the majority of VPC numeric and categorical traps on SAA-C03.
- VPC CIDR range: /16 (65,536 IPs) to /28 (16 IPs). Use RFC 1918 ranges for private addressing.
- Reserved IPs per subnet: 5 (network, router, DNS, future, broadcast).
- Soft limits: 5 VPCs/region, 200 subnets/VPC, 5 SGs/ENI, 60 rules/SG direction.
- Security Group = stateful = instance level = allow only.
- NACL = stateless = subnet level = allow and deny = numbered rules.
- One IGW per VPC, horizontally scaled, free (pay data transfer only).
- NAT Gateway = zonal = up to 45 Gbps = hourly + per-GB fee.
- Gateway Endpoint = S3 and DynamoDB only = FREE.
- Interface Endpoint = PrivateLink = most services = hourly + per-GB fee.
- VPC Peering = non-transitive, no CIDR overlap, no hourly fee.
- Transit Gateway = transitive, regional hub, hourly + per-GB fee.
- Site-to-Site VPN = 2 tunnels, up to 1.25 Gbps/tunnel, IPsec encryption.
- Direct Connect = dedicated fiber, 1/10/100 Gbps, not encrypted by default.
- VPC Flow Logs = metadata only, publishable to CloudWatch, S3, or Firehose.
- Reference: https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html
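The "/16 to /28" and "5 reserved IPs per subnet" facts above combine into simple usable-address arithmetic worth being able to do on sight:

```python
def usable_ips(prefix_len: int) -> int:
    """Usable addresses in a VPC subnet: total minus the 5 AWS-reserved
    addresses (network, VPC router, DNS, future use, broadcast)."""
    if not 16 <= prefix_len <= 28:
        raise ValueError("VPC subnets must be between /16 and /28")
    total = 2 ** (32 - prefix_len)
    return total - 5

# /28 is the smallest allowed subnet: 16 addresses, 11 usable.
# /24 gives 256 - 5 = 251 usable; /16 gives 65,536 - 5 = 65,531.
```

Exam traps occasionally hinge on this: a /28 subnet cannot host 16 instances, only 11.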
Common Exam Traps — Amazon VPC
Expect at least two of these on every SAA-C03 attempt. Learn to spot the pattern before reading the answer choices.
Trap 1: Security Group stateful vs NACL stateless
Answer choices that imply you need a matching outbound rule in a Security Group for return traffic are distractors — SGs are stateful. Conversely, answers that omit the NACL outbound ephemeral-port rule break return traffic in scenarios with custom NACLs. If the question shows a NACL rule set and the connection is failing, suspect missing ephemeral ports (1024-65535).
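The stateless evaluation behind this trap can be shown with a toy first-match evaluator — a simplified model for intuition, not the AWS implementation:

```python
def nacl_allows(rules, port: int) -> bool:
    """Evaluate numbered NACL rules lowest-first; first match wins.
    rules: list of (rule_number, (port_low, port_high), action) tuples."""
    for number, (low, high), action in sorted(rules):
        if low <= port <= high:
            return action == "ALLOW"
    return False  # implicit deny (the '*' rule at the end of every NACL)

# Outbound rules for a web subnet — the ephemeral range was forgotten.
outbound = [(100, (443, 443), "ALLOW")]

# An HTTPS client connects from ephemeral port 50123; because the NACL is
# stateless, the *reply* leaving the subnet is checked against outbound rules.
reply_allowed = nacl_allows(outbound, 50123)          # False: traffic breaks

fixed = outbound + [(200, (1024, 65535), "ALLOW")]    # add ephemeral range
reply_allowed_fixed = nacl_allows(fixed, 50123)       # True: traffic flows
```

A Security Group would have needed no such rule — it tracks connection state and allows the reply automatically.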
Trap 2: VPC Peering is non-transitive
A three-VPC scenario where A ↔ B and B ↔ C but the question asks whether A reaches C through B. The answer is no, and the fix is AWS Transit Gateway, not another peering.
Trap 3: Gateway Endpoint covers only S3 and DynamoDB
Any scenario involving "private access to AWS service X without NAT Gateway" defaults to Interface Endpoint (PrivateLink) — except when X is Amazon S3 or Amazon DynamoDB, in which case the Gateway Endpoint is free and is the correct answer.
Trap 4: NAT Gateway HA requires one per AZ
Deploying a single NAT Gateway and routing all private subnets to it creates a cross-AZ single point of failure. The correct HA pattern is one NAT Gateway per AZ, each private subnet's route table pointing at the NAT Gateway in its own AZ.
Trap 5: Direct Connect is not encrypted by default
Direct Connect is private (not internet-routed) but its Layer 2 is not encrypted. For compliance requirements that mandate encryption, layer Site-to-Site VPN on top of Direct Connect.
Trap 6: Subnet = one AZ only
Subnets cannot span AZs. For Multi-AZ RDS, Auto Scaling across AZs, and similar resilience requirements, you create one subnet per AZ. Answer choices that describe "a subnet spanning multiple Availability Zones" are distractors.
Trap 7: VPC is regional, subnet is AZ-scoped
Amazon VPC is a regional construct — you do not create a VPC per AZ. Each subnet inside the VPC is AZ-scoped. Answers that imply "one VPC per AZ" are wrong.
VPC as the Foundational Topic — How Other SAA-C03 Domains Reference It
This topic is the foundation that downstream SAA-C03 topics link back to. Here is how each domain builds on the VPC concepts in this note:
- Domain 1 — application-security-protection: AWS WAF sits in front of Application Load Balancers placed in public subnets. AWS Shield protects public-facing ENIs. Amazon GuardDuty consumes VPC Flow Logs for threat detection. AWS Network Firewall sits in an inspection subnet.
- Domain 1 — data-encryption-key-management: AWS KMS is accessed privately from workloads in private subnets via an Interface Endpoint.
- Domain 2 — high-availability-multi-az: Multi-AZ RDS, ElastiCache, and Auto Scaling groups require one subnet per AZ in each tier from this note's tiered subnet architecture.
- Domain 2 — disaster-recovery-strategies: Cross-region VPC Peering or inter-region Transit Gateway peering connects primary and DR regions.
- Domain 3 — high-performing-network-architectures: Deep-dives into Transit Gateway, PrivateLink performance, Direct Connect virtual interfaces, and CIDR planning at scale.
- Domain 3 — data-transfer-solutions: AWS DataSync and AWS DMS are placed in private subnets and reach S3 via a Gateway Endpoint.
- Domain 4 — cost-optimized-network: Gateway Endpoints to avoid NAT Gateway fees, shared NAT Gateway patterns, Transit Gateway attachment cost versus peering, Direct Connect vs VPN cost analysis.
Every one of these downstream topics assumes you know the primitives in this note. When in doubt, come back here.
FAQ — Amazon VPC Top Questions
Q1: What is the difference between a Security Group and a NACL?
A Security Group is a stateful, instance-level firewall attached to ENIs. It supports allow rules only, can reference other Security Groups as sources, and automatically permits return traffic. A NACL is a stateless, subnet-level firewall with numbered rules (evaluated lowest first, first match wins), supports both allow and deny, and requires explicit rules for both inbound and outbound directions — including the ephemeral port range on the return path. Use Security Groups for fine-grained instance-to-instance rules and NACLs for coarse subnet-perimeter guardrails. They complement each other and a strong SAA-C03 design uses both.
Q2: What is the difference between a public subnet and a private subnet?
A public subnet has a route table entry sending 0.0.0.0/0 to an Internet Gateway, and resources in it have public IPv4 addresses. A private subnet does not have that default route — its resources have only private IPs and cannot be reached directly from the internet. Private subnets typically route 0.0.0.0/0 to a NAT Gateway (in a public subnet) so that outbound calls still work, but inbound connections from the internet are impossible. The distinction is entirely about the route table and the IGW — there is no "public" flag on the subnet object itself.
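Since "public" is purely a property of the route table, the classification can be expressed as a one-line check. A toy model — route tables are represented here as simple (destination, target) tuples, which is my simplification, not the EC2 API shape:

```python
def is_public_subnet(routes) -> bool:
    """A subnet is 'public' iff its route table sends 0.0.0.0/0 to an
    Internet Gateway — there is no flag on the subnet object itself.
    routes: list of (destination_cidr, target_id) tuples (toy model)."""
    return any(dest == "0.0.0.0/0" and target.startswith("igw-")
               for dest, target in routes)

public_rt  = [("10.0.0.0/16", "local"), ("0.0.0.0/0", "igw-0abc123")]
private_rt = [("10.0.0.0/16", "local"), ("0.0.0.0/0", "nat-0def456")]
# is_public_subnet(public_rt) -> True; is_public_subnet(private_rt) -> False
```

Note the private route table still has a 0.0.0.0/0 route — but it targets a NAT Gateway, which permits outbound-initiated traffic only.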
Q3: When should I use a Gateway Endpoint vs an Interface Endpoint?
Use a Gateway Endpoint when you want private access to Amazon S3 or Amazon DynamoDB — these are the only two services Gateway Endpoints support, and they are free. Use an Interface Endpoint (AWS PrivateLink) for every other AWS service (AWS KMS, AWS Secrets Manager, AWS Systems Manager, Amazon SQS, Amazon SNS, Amazon ECR, etc.) and for third-party SaaS services published through PrivateLink. Interface Endpoints charge an hourly fee per AZ plus per-GB processed, but they scale much more broadly. For cost optimization, always check whether an S3 Gateway Endpoint can replace NAT Gateway data-processing charges first.
Q4: When should I use VPC Peering vs AWS Transit Gateway?
Use VPC Peering when you have a small number of VPCs (typically 2-4), no transitive routing requirement, and non-overlapping CIDRs. Peering has no hourly charge and uses the AWS backbone with no bandwidth ceiling. Use AWS Transit Gateway when you have many VPCs (5+), a hub-and-spoke or fully-meshed topology, or a transitive routing requirement (VPC A needs to reach VPC C through a central hub). Transit Gateway also consolidates hybrid connectivity — one Direct Connect gateway or VPN attachment can reach every spoke VPC. Transit Gateway charges per attachment-hour and per GB processed, which makes it costlier than peering at very small scale but far more manageable at medium to large scale.
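The scaling argument has a concrete formula behind it: full-mesh peering without transitive routing needs one connection per VPC pair, i.e. n(n-1)/2, while a Transit Gateway hub needs only n attachments:

```python
def full_mesh_peerings(n_vpcs: int) -> int:
    """Peering connections for any-to-any connectivity without
    transitive routing: one per unordered pair of VPCs."""
    return n_vpcs * (n_vpcs - 1) // 2

def tgw_attachments(n_vpcs: int) -> int:
    """Transit Gateway: one attachment per spoke VPC."""
    return n_vpcs

# 4 VPCs  -> 6 peerings (still manageable, peering may win on cost)
# 10 VPCs -> 45 peerings vs 10 TGW attachments
# 50 VPCs -> 1,225 peerings vs 50 TGW attachments
```

The quadratic-vs-linear gap is why "many accounts, many VPCs" scenarios on SAA-C03 almost always resolve to Transit Gateway.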
Q5: When should I use Site-to-Site VPN vs AWS Direct Connect?
Use AWS Site-to-Site VPN when you need to stand up hybrid connectivity in minutes to hours, have low-to-medium bandwidth requirements (up to roughly 1.25 Gbps per tunnel), or need an encrypted link over the public internet. Use AWS Direct Connect when you need consistent low latency, 1-100 Gbps of dedicated bandwidth, sustained high-volume data transfer, or a provably private network path — but plan for weeks to months of provisioning time. Many real architectures use both: Direct Connect as the primary steady-state path and VPN as a backup, or VPN layered on top of Direct Connect to add encryption on top of the dedicated fiber.
Q6: How do VPC Flow Logs help with troubleshooting and security?
VPC Flow Logs capture metadata about every IP packet traversing ENIs in your VPC — source/destination IP and port, protocol, bytes, packets, and whether it was ACCEPTed or REJECTed by the combined security group and NACL evaluation. That makes Flow Logs the first tool to reach for when a workload cannot connect to another workload in the VPC: search for REJECT records on the relevant ENI to find the offending NACL or Security Group rule. Flow Logs also feed Amazon GuardDuty for automated threat detection. Publish Flow Logs to Amazon CloudWatch Logs for live querying, to Amazon S3 for long-term archive, or to Amazon Kinesis Data Firehose for real-time pipelines. Remember: Flow Logs are metadata, not packet capture — for full-packet inspection, use VPC Traffic Mirroring.
Q7: How do I design a Multi-AZ, production-ready Amazon VPC for SAA-C03?
Start with a /16 VPC CIDR in an RFC 1918 range. Split it into three tiers per AZ over at least 2-3 AZs: a public subnet per AZ for internet-facing load balancers and NAT Gateways; a private application subnet per AZ for EC2/ECS/EKS workloads, with the NAT Gateway in the same AZ as its default route; and a private data subnet per AZ for RDS, ElastiCache, and OpenSearch. Deploy one NAT Gateway per AZ to avoid cross-AZ failure. Add Gateway Endpoints for S3 and DynamoDB to cut NAT costs, and Interface Endpoints for any AWS service your private workloads call frequently. Attach the VPC to AWS Transit Gateway if you expect multiple VPCs, and use AWS Direct Connect with a transit VIF for hybrid connectivity. Enable VPC Flow Logs to S3 for audit. This is the canonical SAA-C03 Well-Architected VPC design.