AWS network security on SCS-C02 is not the same conversation as it is on SAA-C03. Where the architect exam asks "which subnet should the database go in?", the Security Specialty exam asks "the security team must inspect every TCP flow leaving the inspection VPC, decrypt TLS to look for exfiltration patterns, log every dropped packet to a tamper-evident store, and apply the same baseline rule set to every account in the AWS Organization without letting a workload owner disable it." That is a layered design problem that draws together security groups, network ACLs, AWS Network Firewall, Transit Gateway appliance mode, VPC endpoints with restrictive policies, Traffic Mirroring sessions, VPC Flow Logs v5 fields, Site-to-Site VPN with BGP, Direct Connect with MACsec, Lambda VPC attachment behavior, and AWS Firewall Manager policies — and SCS-C02 routinely tests every one of those boxes inside a single 5-line scenario.
This topic is Domain 3 (Infrastructure Security, 20 percent of the exam) Task Statement 3.2 in its entirety. The official SCS-C02 Exam Guide v1.0 lists the knowledge bullets verbatim: "VPC security mechanisms (security groups, network ACLs, AWS Network Firewall)", "Inter-VPC connectivity (AWS Transit Gateway, VPC endpoints)", "Security telemetry sources (Traffic Mirroring, VPC Flow Logs)", "VPN technology, terminology, and usage", and "On-premises connectivity options (AWS VPN, AWS Direct Connect)". The skills bullets push you to design network segmentation, design controls that permit or prevent traffic, keep data off the public internet, choose the right telemetry source, and manage the whole thing centrally with AWS Firewall Manager. This guide walks the entire stack from the most granular control (security group rule) to the most centralised control (Firewall Manager policy enforced via AWS Organizations).
Why Network Security Is the Linchpin of SCS-C02 Domain 3
Domain 3 is worth 20 percent of SCS-C02 — more than any other domain — and Task Statement 3.2 is the single broadest task in the exam guide, naming AWS Network Firewall, Transit Gateway, VPC endpoints, Traffic Mirroring, VPC Flow Logs, AWS VPN, AWS Direct Connect, MACsec, and AWS Firewall Manager in the knowledge and skills lists. Expect six to ten exam questions in this exact territory, almost every one of them in scenario form rather than recall. A typical question describes a four-account organization with an inspection VPC, asks which Transit Gateway feature ensures a flow is inspected on its return path, and offers four answers that look superficially similar.
Network security is also the layer the other Domain 3 tasks plug into. Edge protection (3.1) terminates at an ALB whose security group governs east-west reachability into the VPC tier; compute hardening (3.3) relies on the host-based firewall, instance metadata service, and IAM instance profile, but the network reachability that an attacker would exploit is governed by 3.2 controls; troubleshooting network security (3.4) is literally the same constructs in failure mode. So mastering 3.2 is the highest-leverage study activity in the entire infrastructure security domain.
The framing across the topic is defense in depth at the network layer. SCS-C02 will rarely accept a single answer like "use a security group" — the right answer combines two or three layers: SG plus NACL, NACL plus Network Firewall, Network Firewall plus VPC endpoint policy, VPN plus Direct Connect with MACsec. If a candidate's mental model has a single perimeter, they will lose half of these questions to distractors that mention only the most-common control. The mental model SCS-C02 rewards is layered control with central governance and rich telemetry — exactly the AWS Security Reference Architecture inspection-VPC pattern.
Plain-Language Explanation: Network Security in AWS VPCs
VPC network security stacks five distinct constructs (SG, NACL, Network Firewall, VPC endpoint policies, Firewall Manager) plus a transit fabric (TGW, VPN, Direct Connect) plus telemetry (Flow Logs, Traffic Mirroring). Three analogies anchor the moving parts.
Analogy 1: The Office Building With Layered Access Control
Think of every VPC as a multi-tenant office building. The security group is the smart-lock badge reader on each office door — stateful, allow-only, and personalised: each employee (instance) carries a badge (security-group membership) that lists the doors they may open. Once a badge has unlocked a door, the same person can leave through it without scanning again — this is the stateful return-traffic property. The network ACL is the building's front-desk security guard with a clipboard — stateless, allow-and-deny, and indiscriminate. The guard checks every entrance and exit independently, every time, and consults the clipboard's numbered rules in order; an entry-pass on the way in does not give you an exit-pass on the way out. AWS Network Firewall is the central security checkpoint in the lobby with metal detectors, X-ray belts, and a Suricata-trained dog that sniffs payloads as packets cross — it inspects deep into the application protocol, can decrypt sealed packages with the right keys (TLS inspection), and logs every alert. Transit Gateway with appliance mode is the convention-center campus rule that every visitor walking between two buildings on campus must pass through the lobby checkpoint both ways, so the dog never misses the return trip. VPC endpoints are the inter-floor pneumatic tube system that connects your office directly to the AWS service "supply room" without anyone having to walk outside onto the street (the public internet). Firewall Manager is the building management company that enforces the same lobby checkpoint and badge policy across every building (account) the company owns, and refuses to let an individual office change the lock standard.
Analogy 2: The Shipping Port
A VPC is a shipping port with multiple piers (subnets). Cargo containers (packets) arrive, get unloaded, and are forwarded to warehouses (instances). The security group is the per-warehouse loading-dock guard who only opens the door for trucks on the approved list — this guard remembers which trucks they let in and lets the same trucks back out without rechecking. The NACL is the port-perimeter customs inspector who stamps every container in and out independently, refers to a numbered tariff schedule (rule numbers in evaluation order), and slams the gate on banned shipments. AWS Network Firewall is the specialised cargo X-ray and bomb-sniffing facility in the middle of the port — every flagged container gets opened (Suricata stateful rule), and high-risk shipments must be decrypted-and-re-encrypted (TLS inspection) so the inspector can see what is inside. Traffic Mirroring is the CCTV footage copied off to a separate analyst office for forensic review of past shipments — without disrupting the port. VPC Flow Logs are the port's manifest log of every container that came and went, who tried to enter and was refused, and what the reject reason was. Site-to-Site VPN is the dedicated armored convoy that drives between the port and your factory across public roads, and Direct Connect with MACsec is the private rail line with steel-encased rails (layer 2 encryption) running directly between the two locations on land you own.
Analogy 3: The Hospital With Triage and Forensics
A VPC's network controls are a hospital's access regime. Security groups are the doctor-and-nurse ID badges that allow access to specific wards and patients (resource-to-resource); the badge is checked at every door and the system remembers when staff entered and lets them leave. Network ACLs are the hospital perimeter security gates with a posted no-entry list — anyone on the list is refused, anyone else passes; the gate is dumb and stateless. AWS Network Firewall is the infection-control checkpoint that swabs every visitor for known pathogens (Suricata signatures), can mandate antibody testing (TLS inspection), and quarantines anyone matching a known-malware signature. Transit Gateway appliance mode is the rule that every patient transfer between hospital wings must go through the same infection-control checkpoint both on the way out and on the way back — without it, return paths could bypass screening. Traffic Mirroring is the medical record duplication sent to an external lab for forensic analysis without disturbing patient care. VPC Flow Logs are the hospital admission and discharge ledger showing who entered, who left, and which entries were refused. Firewall Manager is the hospital network's central infection-control office that mandates the same screening protocol across every hospital in the system.
For SCS-C02, the office building analogy is the most useful when a question mixes security groups, NACLs, and Network Firewall in the same scenario — the layered locks, guard, and lobby checkpoint map cleanly. For Transit Gateway appliance mode and inspection VPCs, the convention-center campus rule sub-analogy is the highest-yield mental model. For TLS inspection and Suricata rule groups, the bomb-sniffing dog with X-ray image makes the deep-packet semantics intuitive. Reference: https://docs.aws.amazon.com/network-firewall/latest/developerguide/what-is-aws-network-firewall.html
Security Groups vs Network ACLs: Stateful vs Stateless
Security groups and network ACLs are the two foundational VPC network controls and the most-tested pair on SCS-C02. They are not interchangeable, they are not redundant, and they fail in different ways.
Security groups — stateful, allow-only, attached to ENIs
A security group is attached to an elastic network interface (ENI), not to a subnet. It is stateful: when an inbound rule allows a connection, the return traffic on the ephemeral port range is automatically allowed without an explicit outbound rule. Security groups support allow rules only — there is no deny rule. The default security group on a new VPC allows all inbound from itself (same security-group source) and all outbound to anywhere. A custom security group starts with no inbound rules and "allow all outbound to 0.0.0.0/0".
Security groups support up to 60 inbound and 60 outbound rules per group by default (raisable via Service Quotas, subject to a combined maximum of 1,000 rules per ENI across all attached groups), and each ENI can have up to 5 security groups (raisable to 16). A rule's source or destination can be a CIDR, another security group, or a prefix list (customer-managed or AWS-managed). Referencing another security group as a source is the SCS-C02 sanctioned pattern for east-west micro-segmentation: the database SG allows port 5432 from the application-tier SG, not from a CIDR.
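That pattern can be expressed directly in the AWS CLI. A minimal sketch, using hypothetical group IDs for the database and application tiers:

```shell
# Allow PostgreSQL (5432) into the database SG only from members of the
# app-tier SG. The group IDs below are placeholders for illustration.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0db1234567890abcd \
  --ip-permissions 'IpProtocol=tcp,FromPort=5432,ToPort=5432,UserIdGroupPairs=[{GroupId=sg-0app123456789abcd}]'
```

Because the source is a group rather than a CIDR, any instance that joins or leaves the app-tier SG gains or loses database reachability automatically, with no address bookkeeping.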
Network ACLs — stateless, allow-and-deny, attached to subnets
A network ACL is attached to a subnet and applies to every ENI in that subnet. It is stateless: inbound and outbound rules are evaluated independently, and you must explicitly allow the ephemeral port range (1024–65535) on the return path. NACLs support both allow and deny rules, evaluated in rule-number order, lowest first. The default NACL allows all inbound and all outbound; a custom NACL denies everything by default.
NACLs are most useful when you need an explicit deny that a security group cannot express — for example, blocking a specific source CIDR (a known attacker, a noisy scanner, a sanctioned country) at the subnet boundary so the workload owner cannot accidentally re-enable it from a security group rule.
The single most-tested mistake on SCS-C02 is forgetting that NACLs are stateless. A candidate adds an inbound NACL rule allowing port 443 from 0.0.0.0/0 and assumes the response can leave — it cannot, because the response leaves on a high ephemeral port (1024–65535), which the outbound NACL must explicitly allow. Linux kernels use 32768–60999 by default; Windows uses 49152–65535; AWS NLB uses 1024–65535. The safe default outbound NACL allow rule is 1024–65535. If a question describes a working SG configuration, a correctly attached NACL inbound rule, and connections still failing — the answer is the missing ephemeral-port outbound allow on the NACL. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
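The statefulness gap becomes concrete with a toy evaluator. A minimal sketch (not AWS's implementation, and matching on destination port only) of stateless NACL semantics: rules are checked in ascending rule-number order, the first match wins, and the implicit final rule denies everything:

```python
def evaluate(rules, port):
    """Return 'allow' or 'deny' for a packet hitting the given port.

    rules: {rule_number: (port_low, port_high, action)} -- evaluated in
    ascending rule-number order, first match wins.
    """
    for num, (lo, hi, action) in sorted(rules.items()):
        if lo <= port <= hi:
            return action
    return "deny"  # the implicit '*' deny-all rule at the end of every NACL

# Inbound allows HTTPS, but outbound has no ephemeral-port rule.
inbound = {100: (443, 443, "allow")}
outbound = {100: (443, 443, "allow")}

print(evaluate(inbound, 443))     # allow -> the request arrives
print(evaluate(outbound, 54321))  # deny  -> the response, leaving on an
                                  #          ephemeral port, is dropped

# The fix: an explicit outbound allow for the ephemeral range.
outbound[110] = (1024, 65535, "allow")
print(evaluate(outbound, 54321))  # allow
```

Note that rule numbering matters: a deny at rule 90 beats an allow at rule 100 for the same port, which is exactly how an explicit-deny NACL overrides a broader allow below it.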
When to layer SG + NACL
The SCS-C02 expectation is layered, not either/or. Use the security group for resource-to-resource allow rules with stateful semantics (the natural place to express application architecture). Use the NACL for subnet-wide explicit denies — known-bad CIDRs, sanctioned regions, internal segments that must never reach a sensitive subnet. The NACL provides defense-in-depth when a workload owner accidentally over-permissions a security group; the security group provides flexibility and statefulness without forcing the operator to think about ephemeral ports.
- Security group: stateful, allow-only, ENI-scoped firewall.
- Network ACL: stateless, allow-or-deny, subnet-scoped firewall with rule-number ordering.
- Stateful: return traffic to an allowed connection is automatically permitted.
- Stateless: every packet evaluated independently in each direction.
- Ephemeral port range: 1024–65535 is the safe range to allow on NACLs; the actual source port is chosen by the client OS and varies by kernel.
- Prefix list: a managed set of CIDRs you can reference by name; AWS-managed prefix lists exist for S3, DynamoDB, CloudFront, and EC2 instance-connect.
- Default deny: NACLs and Network Firewall stateful default to "drop everything not explicitly allowed".
- Reference: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
AWS Network Firewall: Suricata-Compatible Stateful Inspection
AWS Network Firewall is a managed network firewall service that runs inside your VPC and inspects all north-south and (with appropriate routing) east-west traffic at layer 3 through layer 7. It is the AWS-managed, horizontally scaled equivalent of a Palo Alto, Fortinet, or Check Point appliance, and on SCS-C02 it is the answer whenever a scenario asks for deep packet inspection, domain-name filtering, Suricata-style intrusion detection, or TLS inspection inside a VPC.
Architecture — firewall, firewall policy, rule groups
Three constructs nest together. A firewall is the deployed instance — one firewall per VPC, with a firewall endpoint elastic network interface (ENI) in each Availability Zone you protect. A firewall policy is a reusable bundle of stateless and stateful rule groups plus default actions. A rule group is the unit of authoring — either stateless (5-tuple match, evaluated like an ACL with priority) or stateful (Suricata-compatible signatures with flow tracking). Stateless groups are evaluated first, then stateful.
Stateless rule groups — 5-tuple match
Stateless rules match on source/destination IP and port and protocol, with priority-ordered actions: pass, drop, forward to stateful, or custom actions. Use stateless rules when the decision is purely 5-tuple — block all traffic to a known C2 IP, drop UDP from a spoofable source, fast-path TCP/443 to bypass stateful inspection for a known-trusted destination.
Stateful rule groups — Suricata-compatible
Stateful rules track connection state and can match on domain names (Server Name Indication for HTTPS, Host header for HTTP, DNS query name), Suricata signatures (the open-source IDS rule format), or 5-tuple with flow tracking. Suricata syntax means commercial threat-intelligence feeds and open-source rule sets (ET Open, ET Pro, Talos) drop into Network Firewall with minimal translation. Domain-name filtering is the most-tested SCS-C02 capability — "allow only *.amazonaws.com, *.example-corp.com, and *.windowsupdate.com; drop all other outbound HTTPS" is a one-stateful-rule-group answer.
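That one-rule-group answer can be sketched in Suricata syntax. A hedged illustration (SIDs are arbitrary, one of the three domains is omitted for brevity, and the group should run in strict order so the final drop evaluates after the passes; AWS's managed domain-list rule type generates equivalent rules for you):

```
# Pass TLS flows whose SNI ends with an approved domain; drop all other TLS.
pass tls $HOME_NET any -> $EXTERNAL_NET any (tls.sni; dotprefix; content:".amazonaws.com"; endswith; msg:"allow AWS endpoints"; sid:100001; rev:1;)
pass tls $HOME_NET any -> $EXTERNAL_NET any (tls.sni; dotprefix; content:".example-corp.com"; endswith; msg:"allow corp SaaS"; sid:100002; rev:1;)
drop tls $HOME_NET any -> $EXTERNAL_NET any (msg:"drop non-allow-listed SNI"; sid:100003; rev:1;)
```

The dotprefix and endswith keywords anchor the match so that "evil-amazonaws.com" does not slip past the ".amazonaws.com" allow.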
TLS inspection
Without TLS inspection, Network Firewall sees only the SNI on encrypted flows; it cannot inspect the payload. TLS inspection terminates the TLS session at the firewall, decrypts, applies stateful rules to the cleartext, and re-encrypts to the destination. Requirements: for inbound inspection, the server's own certificate (ACM-issued or imported); for outbound inspection, a CA certificate in ACM that the firewall uses to issue certificates for the destinations it impersonates, which means clients must trust that internal CA. TLS inspection currently supports TLS 1.2 and 1.3 with specific cipher suite restrictions; some flows (mTLS, certain pinned applications) cannot be inspected and must be allow-listed by SNI.
Decrypting TLS at a firewall is technically feasible but operationally consequential. You will break: certificate-pinned mobile apps, mTLS APIs that authenticate the client, applications using QUIC/HTTP3 (UDP, not yet supported by AWS Network Firewall TLS inspection at the time of writing), and any flow where the destination requires a specific client cert. Legally, decrypting employee or third-party traffic implicates data-protection regimes (GDPR, HIPAA) — get legal sign-off before enabling, and document the inspection in your acceptable use policy. SCS-C02 questions that mention "decrypt and inspect" want you to recognise both the capability and the gotchas. Reference: https://docs.aws.amazon.com/network-firewall/latest/developerguide/tls-inspection.html
Deployment in an inspection VPC
The canonical SCS-C02 deployment is the dedicated inspection VPC described in the AWS Security Reference Architecture. All spoke VPCs route through Transit Gateway to the inspection VPC, which contains the Network Firewall endpoints in each AZ. The inspection VPC has no compute workloads, just the firewall and a NAT gateway for egress to the internet (if outbound is allowed). This gives a single chokepoint, a single rule-policy author, a single logging destination, and a single place to make rate-limit decisions.
- One firewall per VPC, with a firewall endpoint ENI per protected AZ.
- Stateless evaluated first, then stateful — order matters in policy authoring.
- Suricata-compatible stateful rules — drop in commercial feeds (ET Pro, Talos).
- Domain-name list filtering for HTTP Host and HTTPS SNI without TLS inspection.
- TLS inspection requires ACM cert + CA bundle; supports TLS 1.2/1.3 only.
- Logs: alert logs (matched stateful rules) and flow logs (every flow), to S3, CloudWatch Logs, or Kinesis Data Firehose.
- Stateful default actions: drop strict (drop anything not explicitly passed), drop established, alert strict, or alert established.
- No mTLS or QUIC inspection at time of writing — allow-list by SNI.
- Reference: https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-groups.html
Transit Gateway Security Patterns and Appliance Mode
AWS Transit Gateway (TGW) is the hub-and-spoke transit fabric for VPC-to-VPC, VPC-to-on-prem, and VPC-to-VPN connectivity. On SCS-C02 the testable security patterns are inspection VPC routing, appliance mode for symmetric paths, route table isolation for blast-radius control, and resource sharing across accounts via AWS RAM.
Inspection VPC pattern
Every spoke VPC has a default route (0.0.0.0/0) pointing to the TGW. The TGW route table sends spoke traffic through an inspection VPC attachment before forwarding to other spokes or to the internet. Network Firewall in the inspection VPC inspects the flows. Return traffic from the destination spoke also routes through TGW back to the inspection VPC. Without appliance mode, asymmetric routing can cause TGW to send the request through one Network Firewall AZ and the response through a different AZ — and Network Firewall (being stateful) will drop the response because it never saw the request.
Appliance mode — the must-know feature
Enabling appliance mode on the inspection VPC's TGW attachment forces TGW to keep both directions of a flow on the same attachment ENI in the same AZ. This is the SCS-C02 canonical answer to "the inspection VPC is dropping return traffic on multi-AZ scenarios". Appliance mode is a per-attachment toggle, applied at attachment creation or afterwards via the ModifyTransitGatewayVpcAttachment API. Without it, stateful firewalls in inspection VPCs will not work for cross-AZ flows.
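Enabling the toggle on an existing attachment is a single API call. A sketch, with a placeholder attachment ID:

```shell
# Enable appliance mode on the inspection VPC's existing TGW attachment
# (the attachment ID is a placeholder for illustration).
aws ec2 modify-transit-gateway-vpc-attachment \
  --transit-gateway-attachment-id tgw-attach-0123456789abcdef0 \
  --options ApplianceModeSupport=enable
```

The same ApplianceModeSupport option can be set at attachment creation time instead.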
Route table isolation
TGW supports multiple route tables, each with its own associations and propagations. The standard isolation pattern: production VPCs associate to a "prod" TGW route table that propagates only prod and shared-services routes; non-production VPCs associate to a separate "non-prod" route table; the inspection VPC sits in a third table that sees everything. This prevents a non-prod VPC from routing directly to a prod VPC even if both are attached to the same TGW.
A frequent SCS-C02 distractor: a scenario describes inspection traffic working perfectly within one AZ, but failing on cross-AZ flows. Candidates jump to security group, NACL, or Network Firewall rule causes — but the actual root cause is the TGW attachment not being in appliance mode. AWS publishes this exact warning in the inspection-VPC reference architecture. Memorise the symptom-to-fix mapping: "stateful firewall, asymmetric drop on cross-AZ flow" → enable TGW attachment appliance mode. Reference: https://docs.aws.amazon.com/vpc/latest/tgw/tgw-appliance-scenario.html
Cross-account TGW with AWS RAM
A central networking account owns the Transit Gateway. AWS Resource Access Manager (RAM) shares the TGW ID with member accounts so they can attach their own VPCs without recreating connectivity per account. The central network team controls TGW route tables; member accounts control their own VPC route tables. This is the AWS Security Reference Architecture canonical layout and is heavily tested on the Specialty exam.
VPC Endpoints: Gateway vs Interface (PrivateLink)
VPC endpoints keep traffic between a VPC and supported AWS services on the AWS network without traversing the public internet. There are two flavours and they are not interchangeable.
Gateway endpoints — S3 and DynamoDB only
A gateway endpoint is a routing target added to a VPC route table. Only Amazon S3 and Amazon DynamoDB are accessible via gateway endpoints. The endpoint is free, region-scoped, and does not have an ENI — it is a route-table prefix list entry that intercepts traffic destined to the service's public IP ranges and routes it across the AWS backbone. Gateway endpoints support endpoint policies to restrict which buckets, tables, or principals are reachable through them.
Interface endpoints — PrivateLink for everything else
An interface endpoint is one or more ENIs in your subnets that proxy requests to the AWS service or a third-party SaaS. Powered by AWS PrivateLink, interface endpoints support hundreds of AWS services (KMS, Secrets Manager, STS, Systems Manager, ECR, SNS, SQS, etc.) plus customer- and partner-published services. Each ENI has a private IP in your subnet; you point clients at the endpoint DNS name (which resolves to the ENI IPs) or enable Private DNS on the endpoint to override the public service DNS for in-VPC clients.
Interface endpoints cost per-AZ per-hour plus per-GB processed. They support security groups on the ENI (control which clients can reach the endpoint) and endpoint policies (control what those clients can do).
Endpoint policies — the data perimeter lever
A VPC endpoint policy is an IAM policy attached to the endpoint that restricts which principals, actions, and resources can flow through it. The most powerful SCS-C02 pattern is the data perimeter: an S3 gateway endpoint policy that only allows access to buckets in the organisation's own account list (using aws:PrincipalOrgID and s3:ResourceAccount conditions), preventing exfiltration to attacker-controlled buckets even if a compromised credential is used inside the VPC.
SCS-C02 expects you to recognise the three-layer data perimeter. (a) VPC endpoint policy restricts which buckets the network path can reach. (b) S3 bucket policy with aws:SourceVpce condition requires access to come from a known endpoint. (c) Service Control Policy (SCP) denies access from outside the organisation entirely. Together they prevent both data exfiltration to external buckets and data ingestion from external accounts — a closed two-way perimeter. A question describing "prevent any S3 GetObject to a bucket outside the organisation" wants this triad, not just the endpoint policy alone. Reference: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html
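A sketch of layer (a), the endpoint-policy leg of that perimeter; the organisation ID and account numbers below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyOrgPrincipalsToOrgBuckets",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalOrgID": "o-exampleorgid",
          "s3:ResourceAccount": ["111122223333", "444455556666"]
        }
      }
    }
  ]
}
```

Even a fully valid credential stolen from inside the VPC cannot push objects to an attacker-owned bucket through this endpoint, because the bucket's owning account fails the s3:ResourceAccount check.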
Interface endpoint and security groups
Because interface endpoints have ENIs, they have security groups. The endpoint security group must allow inbound from the client subnets (typically TCP 443). If you do not specify a security group when creating an interface endpoint, AWS associates the VPC's default security group, which is usually too permissive for production — replace it with a tightly scoped SG.
Traffic Mirroring for Forensics and Deep Packet Inspection
VPC Traffic Mirroring copies network packets from a source ENI to a target for out-of-band analysis. It is the SCS-C02 answer to "capture full packet payloads for forensic review without disrupting production".
Components — source, target, filter, session
A mirror source is an ENI on a Nitro-based EC2 instance (older instance types are not supported). A mirror target is a Network Load Balancer, Gateway Load Balancer, or another ENI — typically pointing to a security analytics appliance, a Suricata IDS cluster, or a Zeek sensor. A mirror filter defines which traffic to copy (5-tuple match plus rule number ordering). A mirror session ties source + target + filter together with a session number that determines precedence when an ENI matches multiple sessions.
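Tying source, target, and filter together is one API call. A sketch with placeholder resource IDs, assuming the target and filter already exist:

```shell
# Mirror a suspect instance's ENI to an existing analysis target through an
# existing filter (all IDs below are placeholders). Lower session numbers
# take precedence when an ENI matches multiple sessions.
aws ec2 create-traffic-mirror-session \
  --network-interface-id eni-0123456789abcdef0 \
  --traffic-mirror-target-id tmt-0123456789abcdef0 \
  --traffic-mirror-filter-id tmf-0123456789abcdef0 \
  --session-number 1
```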
Traffic Mirroring captures full packet contents (layer 2 onward), VXLAN-encapsulated, supporting payload inspection that VPC Flow Logs cannot provide. The downside is cost: every mirrored byte is duplicated, and high-traffic instances can saturate the target NLB.
Use cases
- Forensic capture during an incident — mirror an instance suspected of compromise to a forensic VPC for offline analysis.
- Threat hunting — feed mirrored traffic to a Suricata IDS for signature and behavioural detection beyond GuardDuty.
- Compliance packet retention — some regulated industries require N days of full-packet capture; Traffic Mirroring feeding a collector appliance that writes to S3 (for example via Kinesis Data Firehose) is a common pattern.
- Deep performance debugging — capture and replay TCP flows to investigate retransmits and protocol-level issues.
A common SCS-C02 distractor pairs Traffic Mirroring with VPC Flow Logs as if they are alternatives. They are complements: Flow Logs are aggregated 5-tuple records (source IP, dest IP, ports, protocol, action, bytes, packets) — cheap, always-on, perfect for GuardDuty and broad telemetry; Traffic Mirroring is full packet capture — expensive, targeted, perfect for forensic investigation. The right answer to "we need to inspect HTTP request bodies for a known XSS payload pattern" is Traffic Mirroring, not Flow Logs. The right answer to "detect an unusual volume of outbound connections to TCP 22 from a database subnet" is Flow Logs. Reference: https://docs.aws.amazon.com/vpc/latest/mirroring/what-is-traffic-mirroring.html
VPC Flow Logs: Telemetry for Network Security
VPC Flow Logs capture metadata about IP traffic flowing through ENIs, subnets, or entire VPCs. They are the foundational network-security telemetry source on AWS and the data feed for Amazon GuardDuty, Amazon Detective, and most third-party SIEMs.
Capture types and destinations
A flow log can capture ACCEPT records (allowed traffic), REJECT records (dropped traffic by SG or NACL), or ALL (both). Destinations: CloudWatch Logs for near-real-time queries with Logs Insights, S3 for cheap long-term storage and Athena queries, or Kinesis Data Firehose for streaming into a SIEM or data lake.
v2 vs v3 vs v4 vs v5 record formats
Flow Logs support multiple record-format versions, each adding fields:
- v2 (default) — original 14 fields: version, account-id, interface-id, srcaddr, dstaddr, srcport, dstport, protocol, packets, bytes, start, end, action, log-status.
- v3 — adds VPC ID, subnet ID, instance ID, TCP flags, traffic type, packet src/dst.
- v4 — adds region, Availability Zone ID, sublocation type, and sublocation ID (Outposts, Wavelength, Local Zones).
- v5 — adds source and destination AWS service names, flow direction, and traffic path — the richest of these formats for attributing flows to AWS services (ECS task fields arrived in a later format version).
Pick v5 when you need the full field set for SIEM ingestion or service-attribution analysis. The cost is identical regardless of format — Flow Logs charge per ingested GB.
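The v2 layout is easy to work with programmatically. A minimal parser sketch over the 14 default fields, run against a made-up sample record:

```python
# Default (v2) VPC Flow Log records: 14 space-separated fields, in this order.
V2_FIELDS = [
    "version", "account-id", "interface-id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log-status",
]

def parse_v2(line: str) -> dict:
    """Split one v2 flow log record into a field-name -> value dict."""
    parts = line.split()
    if len(parts) != len(V2_FIELDS):
        raise ValueError(f"expected {len(V2_FIELDS)} fields, got {len(parts)}")
    return dict(zip(V2_FIELDS, parts))

# Hypothetical sample record: an inbound SSH attempt that was rejected.
sample = ("2 123456789010 eni-0a1b2c3d 203.0.113.12 10.0.1.5 "
          "54321 22 6 3 180 1698000000 1698000060 REJECT OK")
rec = parse_v2(sample)
print(rec["action"], rec["dstport"])  # REJECT 22
```

Protocol is the IANA protocol number as a string ("6" is TCP), and timestamps are Unix epoch seconds for the start and end of the aggregation window.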
Flow Logs and rejected traffic
A REJECT record is logged when a security group or NACL drops the packet. A NACL drop and an SG drop both produce REJECT entries, but Flow Logs do not distinguish which control did the dropping — that requires inference (if the source is allowed by the SG and dropped, it must have been the NACL). For Network Firewall drops you need Network Firewall flow logs and alert logs in addition; Network Firewall drops do not appear as VPC Flow Log REJECTs.
SCS-C02 distractor: candidates assume Flow Logs are exhaustive. They are not. Flow Logs explicitly exclude: traffic to and from the Amazon DNS server (169.254.169.253), Windows license activation traffic, instance metadata service requests (169.254.169.254 — including IMDSv2), Amazon Time Sync (169.254.169.123), DHCP traffic, traffic to the VPC router reserved address, and traffic between endpoints when using a Gateway Load Balancer. Forensic completeness for these cases requires Traffic Mirroring or host-based logging. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html
Flow Logs in a security-account aggregation pattern
The AWS Security Reference Architecture pattern: every VPC in every member account writes Flow Logs to a centralised S3 bucket in the Log Archive account, partitioned by account/region/date. The bucket is immutable (Object Lock + bucket policy denying delete) and queryable by Athena from the Security Tooling account. This is the gold standard SCS-C02 expects when a question asks "centralise network telemetry across an organisation".
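Once the logs sit in that bucket and a table is defined over them, Athena becomes the investigation interface. A hedged example; the table and column names follow one common Glue schema and will vary with your setup:

```sql
-- Top rejected flows for one member account: who is probing what?
-- (Table name, column names, and partition layout are illustrative.)
SELECT srcaddr, dstaddr, dstport, COUNT(*) AS reject_count
FROM vpc_flow_logs
WHERE action = 'REJECT'
  AND accountid = '111122223333'
GROUP BY srcaddr, dstaddr, dstport
ORDER BY reject_count DESC
LIMIT 20;
```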
Site-to-Site VPN: IPsec, BGP, and Accelerated VPN
AWS Site-to-Site VPN establishes IPsec tunnels between an AWS Virtual Private Gateway (VGW) or Transit Gateway and an on-premises customer gateway device. It is the cheap, fast-to-stand-up option for hybrid connectivity and the SCS-C02 answer when the scenario does not justify Direct Connect.
IPsec tunnel design
Each Site-to-Site VPN connection comprises two IPsec tunnels terminating on different AWS endpoints in different physical infrastructure for high availability. Both tunnels are active by default with active/active routing when using BGP, or active/passive with one preferred and one standby. The customer gateway must be configured to use both — using only one halves your availability and produces immediate impact if AWS performs maintenance on the active endpoint.
Default encryption: AES-256, SHA-2 integrity, DH group 14 or higher, IKEv2. SCS-C02 expects you to recognise these as modern defaults; legacy IKEv1 with SHA-1 should be flagged as non-compliant in any question describing a fresh deployment.
BGP routing
Border Gateway Protocol (BGP) routing on the VPN tunnels is the recommended pattern. BGP enables dynamic route propagation, automatic failover between tunnels, and adding/removing on-prem CIDRs without recreating routes in the VPC route table. Static routing is the fallback for customer gateways that do not support BGP and is acceptable on SCS-C02 only if explicitly stated.
Accelerated Site-to-Site VPN
Accelerated VPN routes the IPsec tunnels through the AWS Global Accelerator edge network, reducing latency and jitter for distant on-prem sites. Enabled per-VPN-connection at creation, accelerated VPN is the answer when latency-sensitive workloads (real-time gaming, VoIP, financial trading) traverse a Site-to-Site VPN. Restrictions apply: accelerated VPN is incompatible with VGW (Transit Gateway only), and there is an additional per-hour fee.
- Two IPsec tunnels per VPN connection, terminating on different AWS endpoints.
- BGP recommended, static routing as fallback for non-BGP customer gateways.
- AES-256, SHA-2, DH 14+, IKEv2 modern defaults.
- Accelerated VPN routes through AWS Global Accelerator, TGW only, extra fee.
- Customer gateway is the on-prem device (router or firewall); AWS provides config templates for major vendors (Cisco, Juniper, Palo Alto, etc.).
- Reference: https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html
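The modern-defaults checklist above can be expressed as a small validation sketch. This is a hypothetical helper for illustration only — real tunnel options are configured via the EC2 `ModifyVpnTunnelOptions` API or the console, and the option names below are invented labels, not AWS parameter names.

```python
# Sketch: check proposed Phase-1 tunnel options against the modern defaults
# this guide lists (IKEv2, AES-256, SHA-2 integrity, DH group 14+).
# Hypothetical helper and field names -- not an AWS API.

def tunnel_options_compliant(opts: dict) -> list:
    """Return a list of findings; an empty list means compliant."""
    findings = []
    if opts.get("ike_version") != "ikev2":
        findings.append("use IKEv2, not legacy IKEv1")
    if opts.get("encryption") not in {"AES256", "AES256-GCM-16"}:
        findings.append("use AES-256 encryption")
    if not str(opts.get("integrity", "")).startswith("SHA2"):
        findings.append("use SHA-2 integrity, not SHA-1")
    if int(opts.get("dh_group", 0)) < 14:
        findings.append("use DH group 14 or higher")
    return findings

legacy = {"ike_version": "ikev1", "encryption": "AES128",
          "integrity": "SHA1", "dh_group": 2}
modern = {"ike_version": "ikev2", "encryption": "AES256",
          "integrity": "SHA2-256", "dh_group": 14}
assert len(tunnel_options_compliant(legacy)) == 4   # all four defaults violated
assert tunnel_options_compliant(modern) == []       # fresh-deployment baseline
```

The `legacy` dictionary is exactly the configuration an SCS-C02 question flags as non-compliant for a fresh deployment.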
Direct Connect with MACsec: Layer 2 Encryption at the Cross-Connect
AWS Direct Connect is a dedicated, private, layer-2 connection between an on-premises router and an AWS Direct Connect location. MACsec (IEEE 802.1AE) encrypts at layer 2 on the dedicated link, providing line-rate encryption that does not consume CPU on either end.
When Direct Connect, when MACsec
Direct Connect is the answer when scenarios mention predictable bandwidth (1, 10, or 100 Gbps), consistent low latency, regulatory requirements for non-internet transit, or bulk data transfer at high volume. Without MACsec, a Direct Connect link is private but unencrypted at layer 2 — IPsec (VPN over Direct Connect) is required for confidentiality. With MACsec, the layer 2 link itself is encrypted with AES-256-GCM at line rate, making layered IPsec optional or unnecessary for confidentiality requirements.
MACsec requirements: a dedicated 10 Gbps or 100 Gbps Direct Connect connection at a MACsec-capable Direct Connect location, both ends supporting IEEE 802.1AE-2006 with the GCM-AES-256 cipher suite, and a pre-shared Connectivity Association Key (CAK) configured on both routers.
Public, private, and transit VIFs
A Direct Connect connection carries virtual interfaces (VIFs) of three types:
- Public VIF — access to AWS public services (S3, DynamoDB) over the private link, bypassing the public internet without using VPC endpoints.
- Private VIF — access to a single VPC via a Virtual Private Gateway.
- Transit VIF — access to a Direct Connect Gateway, which fans out to multiple VPCs across regions via Transit Gateways.
For multi-VPC, multi-region access, the transit VIF + Direct Connect Gateway + Transit Gateway combination is the SCS-C02 canonical answer.
Direct Connect resilience
A single Direct Connect connection is a single point of failure. The AWS-recommended resilience pattern is two Direct Connect connections at two Direct Connect locations, with BGP path manipulation (local preference or AS-PATH prepending) for active/active or active/passive failover. Site-to-Site VPN as a backup to a single Direct Connect is acceptable for less-critical workloads. SCS-C02 questions that mention "tier 1 mission-critical hybrid connectivity" expect dual Direct Connect plus a VPN as tertiary failover.
A subtle SCS-C02 distinction: MACsec encrypts only the dedicated cross-connect from your router to the AWS Direct Connect device. It does not encrypt traffic beyond the AWS edge — within AWS, traffic on the Direct Connect Gateway is on the AWS backbone (private but not encrypted by MACsec). For end-to-end encryption from your data center to a specific VPC instance, you still need application-layer TLS or a layered IPsec VPN over the Direct Connect. MACsec satisfies "encrypt the cross-connect"; it does not satisfy "encrypt all data in transit end-to-end". Reference: https://docs.aws.amazon.com/directconnect/latest/UserGuide/MACsec.html
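The requirement-to-option mapping in the last two sections can be compressed into a small decision sketch. The boolean requirement labels are illustrative, not AWS terms; the logic mirrors the text above (MACsec covers only the cross-connect, VPN over Direct Connect gives IPsec through the private link).

```python
# Sketch: map scenario requirements onto the hybrid connectivity options
# discussed above. Requirement names are invented labels for illustration.

def hybrid_choice(needs_private_transit: bool,
                  needs_ipsec_through_the_link: bool,
                  needs_cross_connect_encryption: bool) -> str:
    if needs_private_transit and needs_ipsec_through_the_link:
        # Canonical answer for the most stringent regulated industries.
        return "Direct Connect + Site-to-Site VPN over it (IPsec over DX)"
    if needs_private_transit and needs_cross_connect_encryption:
        # Line-rate 802.1AE on the dedicated link only.
        return "Direct Connect with MACsec"
    if needs_private_transit:
        return "Direct Connect (private, unencrypted at layer 2)"
    # Cheap, fast-to-stand-up default.
    return "Site-to-Site VPN (IPsec over the internet)"

assert hybrid_choice(True, True, False).startswith("Direct Connect + Site")
assert hybrid_choice(True, False, True) == "Direct Connect with MACsec"
assert hybrid_choice(False, False, False).startswith("Site-to-Site VPN")
```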
Lambda in VPC: ENI Behavior and Outbound Egress
AWS Lambda functions configured to access VPC resources create elastic network interfaces inside the VPC's subnets. SCS-C02 tests this for two reasons: ENI scaling behavior under sudden load, and outbound egress paths for VPC-attached functions.
ENI provisioning — Hyperplane
Older Lambda VPC integration created one ENI per function-concurrency unit, causing slow cold starts and ENI exhaustion at high concurrency. Modern Lambda uses the AWS Hyperplane network proxy: ENIs are shared across function invocations and pre-warmed, so cold start latency is measured in hundreds of milliseconds rather than tens of seconds. The number of ENIs scales with unique combinations of subnet × security group, not concurrency.
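The scaling rule above — ENIs per unique subnet × security-group combination, not per concurrent invocation — can be sketched in a few lines. This is an illustrative model of the counting logic, not an AWS API.

```python
# Sketch: under Hyperplane, Lambda ENI count scales with unique
# (subnet, security-group set) pairings; concurrency is irrelevant.
# Illustrative calculation only.

def hyperplane_eni_combinations(subnet_ids, security_group_ids):
    # The function's security groups act as one set; each subnet paired
    # with that set needs one shared Hyperplane ENI.
    sg_set = frozenset(security_group_ids)
    return {(subnet, sg_set) for subnet in subnet_ids}

combos = hyperplane_eni_combinations(
    ["subnet-a", "subnet-b", "subnet-c"], ["sg-app", "sg-db-client"])
# 3 subnets x 1 SG set = 3 shared ENIs, whether concurrency is 1 or 10,000.
assert len(combos) == 3
```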
Outbound egress
Attaching a Lambda function to a VPC removes its default internet access: it can reach the public internet only via a NAT gateway in a public subnet, with a route to it from the function's private subnets. Without NAT, the function cannot reach the internet — including public AWS service endpoints, unless those services have interface endpoints in the function's VPC. The SCS-C02-sanctioned design for a VPC-attached function that calls Secrets Manager, STS, or KMS is to add interface VPC endpoints for those services rather than route through NAT — keeping the call on the AWS backbone and reducing per-GB NAT cost.
Security groups
Each Lambda function has at least one security group attached. The function's outbound rules govern where it can reach. The destination service (RDS database, EC2 instance) must allow inbound from the Lambda security group — referencing it by SG ID is the canonical pattern.
An SCS-C02 design pattern: when a VPC-attached Lambda function only needs to call AWS services (Secrets Manager, STS, KMS, Systems Manager Parameter Store, S3 via gateway endpoint), provision interface VPC endpoints for those services plus an S3 gateway endpoint, and the function needs no NAT gateway at all. The function's subnets are pure private (no internet route), all AWS calls go over PrivateLink, and the data perimeter is tight. NAT is reserved for outbound to non-AWS internet destinations — most Lambda functions do not need this. Reference: https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html
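An endpoint policy like the one this pattern implies might look as follows. The account ID, role name, and secret ARN are placeholders; the JSON shape is standard IAM policy grammar attached to the interface endpoint (e.g. via `aws ec2 modify-vpc-endpoint --policy-document`).

```python
import json

# Sketch: restrict a Secrets Manager interface endpoint to one function
# role and one secret prefix. All ARNs below are placeholder values.

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-fn-role"},
        "Action": ["secretsmanager:GetSecretValue"],
        "Resource": [
            "arn:aws:secretsmanager:eu-west-1:111122223333:secret:app/*"
        ]
    }]
}

doc = json.dumps(endpoint_policy, indent=2)
parsed = json.loads(doc)
assert parsed["Statement"][0]["Action"] == ["secretsmanager:GetSecretValue"]
```

Because the endpoint only allows this role and this secret prefix, a compromised workload elsewhere in the VPC cannot use the endpoint to read other secrets — one leg of the data perimeter.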
AWS Firewall Manager: Org-Wide Policy Enforcement
AWS Firewall Manager is the central management plane for network security policies across an AWS Organization. It is the SCS-C02 answer to "enforce the same baseline firewall configuration across every account, every VPC, every region, and prevent workload owners from disabling it".
What Firewall Manager manages
Firewall Manager applies four policy families:
- AWS WAF policies — apply WAF web ACLs to ALBs, CloudFront distributions, API Gateway, and Cognito user pools across the org.
- AWS Shield Advanced policies — apply Shield Advanced protections to specific resources org-wide.
- AWS Network Firewall policies — deploy a centrally authored Network Firewall policy and rule groups into every VPC in the org, automatically.
- Security group policies — three sub-types: common security groups (apply baseline SG to specified resources), audit existing security groups (flag and optionally remediate non-compliant SGs), and usage audit (find unused SGs).
Prerequisites
Firewall Manager requires: AWS Organizations enabled with all features (not just consolidated billing), AWS Config enabled in every member account and region (Firewall Manager uses Config to discover resources), and a delegated administrator account designated for Firewall Manager (typically the Security Tooling account, not the management account).
Common patterns
- Mandatory baseline Network Firewall — every VPC in the org must have Network Firewall with a centrally authored "block known-bad domains, allow only enterprise SaaS SNIs" policy. Firewall Manager auto-deploys when a new VPC is created.
- Mandatory WAF baseline — every ALB and CloudFront in the org must have the AWS Managed Rule sets (Core rule set, SQL injection, known-bad inputs) applied, with auto-remediation if a workload owner removes them.
- Audit security groups — flag any security group with 0.0.0.0/0 on port 22 or 3389 and remediate by removing those rules.
- Block public S3 access via an SCP + Firewall Manager combination (the SCP is the IAM-layer enforcement; Firewall Manager handles the network-layer enforcement).
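The audit logic behind the "flag 0.0.0.0/0 on 22 or 3389" policy can be sketched against rule dictionaries shaped loosely like the EC2 `DescribeSecurityGroups` response. Illustrative only — Firewall Manager implements this as a managed audit policy; you do not write this code yourself.

```python
# Sketch: find security groups that expose SSH (22) or RDP (3389) to the
# world. Input mimics a trimmed DescribeSecurityGroups response.

ADMIN_PORTS = {22, 3389}

def flag_open_admin_rules(security_groups):
    findings = []
    for sg in security_groups:
        for rule in sg.get("IpPermissions", []):
            open_world = any(r.get("CidrIp") == "0.0.0.0/0"
                             for r in rule.get("IpRanges", []))
            hits_admin = any(rule.get("FromPort", -1) <= p <= rule.get("ToPort", -1)
                             for p in ADMIN_PORTS)
            if open_world and hits_admin:
                findings.append(sg["GroupId"])
    return findings

groups = [
    {"GroupId": "sg-bad", "IpPermissions": [
        {"FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]},
    {"GroupId": "sg-ok", "IpPermissions": [
        {"FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "10.0.0.0/8"}]}]},
]
assert flag_open_admin_rules(groups) == ["sg-bad"]
```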
A common SCS-C02 distractor: a multi-account scenario asks for a baseline WAF or Network Firewall configuration across the org, and the answer choices include "deploy via CloudFormation StackSets" and "deploy via Firewall Manager". StackSets work but lack the auto-remediation and policy-violation detection that Firewall Manager provides. The exam favors Firewall Manager because it composes with Organizations, Config, and Security Hub natively. StackSets is the second-best answer when Firewall Manager is not in the option list. Reference: https://docs.aws.amazon.com/waf/latest/developerguide/fms-chapter.html
Common Traps Recap — Network Security on SCS-C02
Every SCS-C02 attempt encounters most of these distractors.
Trap 1: NACLs are stateful
Wrong. NACLs are stateless. The ephemeral port outbound allow is mandatory for any inbound flow that expects a response.
Trap 2: Security groups have deny rules
Wrong. Security groups are allow-only. To deny, use a NACL or a Network Firewall stateful drop.
Trap 3: TGW inspection works without appliance mode
Wrong. Cross-AZ asymmetric routing breaks stateful inspection in inspection VPCs without appliance mode enabled.
Trap 4: VPC Flow Logs capture all traffic
Wrong. VPC Flow Logs exclude some traffic — Amazon DNS resolver queries, instance metadata requests to 169.254.169.254, and other link-local traffic. For complete packet capture use Traffic Mirroring.
Trap 5: Gateway endpoints exist for KMS, Secrets Manager, etc.
Wrong. Only S3 and DynamoDB have gateway endpoints. Everything else uses interface endpoints (PrivateLink).
Trap 6: Endpoint policy alone closes the data perimeter
Insufficient. The data perimeter requires endpoint policy plus bucket policy (aws:SourceVpce) plus an SCP at the org level. Each layer alone has a bypass.
Trap 7: MACsec encrypts end-to-end across AWS
Wrong. MACsec encrypts only the dedicated cross-connect. Within the AWS backbone, traffic is private but not MACsec-encrypted.
Trap 8: Site-to-Site VPN single tunnel is enough
Wrong. Both tunnels must be configured at the customer gateway for HA; otherwise an AWS-side endpoint maintenance kills connectivity.
Trap 9: Lambda in VPC always needs NAT
Wrong. If the function only calls AWS services, interface endpoints replace NAT entirely and tighten the perimeter.
Trap 10: Network Firewall drops appear in VPC Flow Logs
Wrong. Network Firewall has its own flow logs and alert logs, separate from VPC Flow Logs. SG/NACL drops appear as REJECT in VPC Flow Logs; Network Firewall drops do not.
Trap 11: Firewall Manager works without AWS Config
Wrong. Firewall Manager depends on AWS Config in every member account and region for resource discovery.
Trap 12: Traffic Mirroring works on all instance types
Wrong. Traffic Mirroring sources must be ENIs on Nitro-based EC2 instances. Older instance types (m4, c4, etc.) cannot be mirror sources.
Decision Matrix — Network Security Construct for Each SCS-C02 Goal
Use this lookup during the exam.
| Security goal | Primary construct | Notes |
|---|---|---|
| Allow east-west between specific tiers | Security group with SG-as-source | Stateful, allow-only, ENI-scoped. |
| Block known-bad CIDR at subnet boundary | NACL deny rule | Stateless; remember ephemeral port range. |
| Deep packet inspection / Suricata IDS | AWS Network Firewall stateful rule group | Suricata-compatible. |
| Domain-name allow-list for outbound HTTPS | Network Firewall stateful with SNI/Host filtering | TLS inspection optional. |
| Symmetric flow through inspection VPC | TGW attachment in appliance mode | Mandatory for stateful firewalls. |
| Private access to S3/DynamoDB | Gateway endpoint | Free, region-scoped, route-table entry. |
| Private access to KMS, Secrets Manager, etc. | Interface endpoint (PrivateLink) | Per-AZ ENI, has SG and policy. |
| Block exfiltration to non-org buckets | Endpoint policy + bucket policy + SCP triad | Data perimeter. |
| Forensic full packet capture | Traffic Mirroring to NLB / appliance | Nitro-only sources. |
| Network metadata for SIEM | VPC Flow Logs v5 to S3 / Firehose | Cheap, always-on, partitioned by account/region. |
| Hybrid connectivity, low setup cost | Site-to-Site VPN with BGP | Two tunnels, encrypted IPsec. |
| Latency-sensitive VPN to distant site | Accelerated Site-to-Site VPN | TGW only, extra fee. |
| Predictable bandwidth, regulated transit | Direct Connect with private/transit VIF | Layer 2 dedicated. |
| Encrypt the cross-connect at line rate | Direct Connect with MACsec | 10/100G dedicated, 802.1AE. |
| Lambda calling AWS services privately | VPC-attached Lambda + interface endpoints | NAT not required. |
| Org-wide WAF / Network Firewall baseline | Firewall Manager policy | Requires Org + Config. |
| Detect non-compliant security groups | Firewall Manager audit security group policy | Auto-remediation optional. |
FAQ — Network Security in AWS VPCs
Q1: When does a stateless NACL ephemeral port range cause connection failures versus when does the security group's statefulness rescue me?
NACLs are stateless and security groups are stateful. This means the security group automatically allows return traffic on the ephemeral port range without any explicit outbound rule, but the NACL does not — every direction requires its own NACL rule. So a fully working SG configuration plus a misconfigured NACL with no ephemeral outbound allow leads to inbound packets succeeding and the response being dropped. The fix is an outbound NACL rule allowing TCP and UDP on ports 1024–65535 (or the specific kernel-defined ephemeral range — 32768–60999 on modern Linux, 49152–65535 on Windows). The SCS-C02 exam version of this question typically shows working SG rules and a partial NACL with only the inbound allow set; the right answer is "add the ephemeral outbound NACL rule".
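The failure mode in that answer can be modelled with a toy stateless evaluator. This is an illustrative model of NACL semantics (ascending rule-number order, first match wins, implicit deny), not an AWS API.

```python
# Sketch: stateless NACL evaluation for the return leg of an inbound HTTPS
# connection. Rules are (rule_number, direction, (port_lo, port_hi), action)
# tuples; first match in ascending rule-number order wins.

def nacl_decision(rules, direction, port):
    for _, rule_dir, (lo, hi), action in sorted(rules):
        if rule_dir == direction and lo <= port <= hi:
            return action
    return "deny"  # the implicit trailing '*' rule

# Inbound 443 allowed, but no outbound ephemeral allow: the reply is dropped.
broken = [(100, "inbound", (443, 443), "allow")]
assert nacl_decision(broken, "inbound", 443) == "allow"
assert nacl_decision(broken, "outbound", 54321) == "deny"   # response dies here

# The fix named above: an outbound ephemeral-range allow.
fixed = broken + [(110, "outbound", (1024, 65535), "allow")]
assert nacl_decision(fixed, "outbound", 54321) == "allow"
```

A security group with the same inbound 443 allow needs no outbound rule at all — its statefulness tracks the connection — which is exactly the asymmetry the exam probes.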
Q2: When should I use AWS Network Firewall versus a Gateway Load Balancer with a third-party appliance?
Use AWS Network Firewall when the requirements are met by Suricata-style stateful rules, domain-name filtering, and basic TLS inspection — typical enterprise outbound egress filtering, malware C2 blocking, and prevention of data exfiltration via DNS or HTTPS to known-bad domains. The setup is fully managed, scales horizontally, and integrates with Firewall Manager for org-wide deployment. Use Gateway Load Balancer + a third-party appliance (Palo Alto, Fortinet, Check Point, Aviatrix) when you need vendor-specific features Network Firewall does not have: advanced sandboxing, application-aware policy with custom protocol decoders, deep mTLS handling, or strict regulatory requirements that mandate a specific commercial stack. For SCS-C02, default to Network Firewall unless the scenario explicitly calls for a third-party feature.
Q3: Why do I need both a VPC endpoint policy and an S3 bucket policy for the data perimeter?
Because each layer protects a different vector. The VPC endpoint policy restricts which buckets the network path through the endpoint can reach — a defence against an in-VPC compromise where the attacker has valid AWS credentials but the endpoint refuses the call to an outside-org bucket. The S3 bucket policy with aws:SourceVpce condition restricts which endpoints can reach this bucket — a defence against access from the public internet or from an unauthorised VPC. Together they form a closed two-way perimeter: no exfiltration to outside-org buckets, no ingestion to your buckets except from authorised VPCs. Add an org-level SCP denying any S3 action with aws:ResourceAccount not in the org account list, and the attacker's path is closed at the IAM authorisation layer too. SCS-C02 expects all three.
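The bucket-policy leg of that triad might look as follows — a deny-unless pattern keyed on `aws:SourceVpce`. The bucket name and endpoint ID are placeholders.

```python
import json

# Sketch: deny all S3 access to the bucket unless the request arrives via
# the approved VPC endpoint. Bucket name and vpce ID are placeholder values.

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnlessFromApprovedEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-data-bucket",
            "arn:aws:s3:::example-data-bucket/*"
        ],
        "Condition": {
            "StringNotEquals": {"aws:SourceVpce": "vpce-0abc1234"}
        }
    }]
}

doc = json.dumps(bucket_policy, indent=2)
assert "aws:SourceVpce" in doc
```

An explicit Deny wins over any Allow elsewhere, so even a principal with `s3:*` on this bucket is refused from the public internet or an unauthorised VPC — the second layer the answer describes.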
Q4: Does Transit Gateway appliance mode affect performance or cost?
Appliance mode forces both directions of a flow onto the same Transit Gateway attachment ENI in the same AZ. This eliminates asymmetric routing (which breaks stateful firewalls) at no extra cost — appliance mode itself has no additional charge, and the per-GB Transit Gateway data processing fee is unchanged. The only practical impact is slightly less flexibility in cross-AZ path selection, but for inspection VPC scenarios this is the desired behavior. Always enable appliance mode on the TGW attachment for any VPC that hosts a stateful inspection function (Network Firewall, third-party firewall via Gateway Load Balancer, Palo Alto VM-Series). Forgetting appliance mode is the highest-frequency cause of "intermittent connection drops on multi-AZ flows" troubleshooting questions on SCS-C02.
Q5: What is the difference between a Public VIF, Private VIF, and Transit VIF on Direct Connect?
A Public VIF carries traffic destined to AWS public service IPs (S3, DynamoDB, public-IP load balancers) over the private cross-connect, bypassing the internet — useful when regulatory rules forbid internet transit even to AWS-owned addresses. A Private VIF carries traffic to a single VPC via a Virtual Private Gateway, used in single-region single-VPC hybrid deployments. A Transit VIF carries traffic to a Direct Connect Gateway, which fans out to one or more Transit Gateways across multiple AWS regions, enabling multi-VPC multi-region hybrid connectivity with one VIF. SCS-C02 expects you to recognise that Transit VIF is the canonical pattern for any modern multi-region multi-account architecture; Private VIF is legacy single-region; Public VIF is niche.
Q6: How do I architect a Lambda function in a VPC that needs to call KMS, Secrets Manager, and an external HTTPS API?
Put the function in private subnets. Add interface VPC endpoints for KMS, Secrets Manager, STS, and any other AWS service the function calls — these route over PrivateLink, do not need NAT, are cheaper at scale, and are auditable in CloudTrail. Add a NAT gateway in a public subnet (with route from the function's private subnets) only for the external HTTPS API call, and tightly scope the NAT to a specific destination via security groups on the destination — or better, use a Network Firewall with a domain-name allow-list as the egress filter, with the NAT gateway downstream. For the data perimeter, attach an endpoint policy to the interface endpoints restricting principals to the function's role and resources to the expected secrets and keys. This pattern is the SCS-C02 best-practice envelope for "secure VPC-attached Lambda with selective internet egress".
Q7: Which VPC Flow Logs version should I choose, and what does ALL capture that ACCEPT and REJECT individually do not?
Choose v5 or later with a custom format — the fields added in versions 3 through 5 include VPC ID, subnet ID, instance ID, flow direction, and traffic path (ECS task metadata arrives in later versions), and richer formats cost the same per ingested GB as the default format. ACCEPT captures only allowed flows; REJECT captures only blocked flows; ALL captures both. Choose ALL for security investigations and SIEM ingestion — denied flows (REJECT) are critical signals for detecting reconnaissance scans, misconfigured clients, and policy violations, while allowed flows (ACCEPT) are needed for behavioral baselines and lateral-movement detection. Choose REJECT alone only when storage cost is the binding constraint and you accept losing the baseline. SCS-C02 default expectation: v5 + ALL + S3 destination + Athena partitioning.
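A custom-format record is just a space-delimited line in the field order you chose. A minimal parsing sketch, with an invented sample record (the field names are a subset of real flow log field names; the values are made up):

```python
# Sketch: parse a space-delimited custom-format VPC Flow Logs record.
# Field order must match the format string configured on the flow log.

FIELDS = ["vpc-id", "subnet-id", "instance-id", "srcaddr", "dstaddr",
          "srcport", "dstport", "protocol", "action", "flow-direction",
          "traffic-path"]

def parse_record(line: str) -> dict:
    return dict(zip(FIELDS, line.split()))

# Invented sample: an egress HTTPS flow that was accepted.
sample = ("vpc-0a1 subnet-0b2 i-0c3 10.0.1.5 203.0.113.9 "
          "49712 443 6 ACCEPT egress 8")
rec = parse_record(sample)
assert rec["action"] == "ACCEPT"
assert rec["flow-direction"] == "egress"
assert rec["protocol"] == "6"        # IP protocol number 6 = TCP
assert rec["traffic-path"] == "8"    # numeric traffic-path code
```

In practice Athena does this parsing for you via a table definition over the S3 prefix; the sketch only shows what the record itself contains.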
Q8: When should I use Traffic Mirroring instead of VPC Flow Logs, and what are the gotchas?
Use Traffic Mirroring when payload inspection is required — full-packet capture for forensic analysis, IDS signature matching on packet contents, replay debugging, or compliance requirements for full packet retention. Use Flow Logs for metadata-only telemetry — connection records, security analytics baselines, GuardDuty input, broad observability. Traffic Mirroring gotchas: source ENIs must be on Nitro-based instances (m5/m6i/c5/c6i and newer); the target NLB or appliance must scale to the mirrored bandwidth (which doubles your effective network load); mirrored traffic is billed both for the mirror session and the target processing; mirrored packets are VXLAN-encapsulated, so the analytics tool must handle the encapsulation. Traffic Mirroring is also point-in-time — it does not retroactively capture; turn it on before the incident, or be ready to enable it during.
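The encapsulation gotcha is concrete: mirrored packets arrive inside VXLAN (outer UDP, destination port 4789), and the 8-byte VXLAN header carries a 24-bit virtual network identifier that maps back to the mirror session. A minimal parse of a hand-built header, as a sketch of what the analytics tool must strip:

```python
# Sketch: parse the 8-byte VXLAN header that wraps each mirrored packet.
# Layout: 1 byte flags (0x08 = VNI valid), 3 reserved bytes,
# 3-byte VNI, 1 reserved byte. Hand-built sample header below.

def parse_vxlan_header(buf: bytes) -> dict:
    if len(buf) < 8:
        raise ValueError("VXLAN header is 8 bytes")
    flags = buf[0]
    vni = int.from_bytes(buf[4:7], "big")   # 24-bit VNI; last byte reserved
    return {"vni_valid": bool(flags & 0x08), "vni": vni}

# Flags=0x08 (VNI valid), VNI=0x0000AB = 171.
hdr = bytes([0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0xAB, 0x00])
info = parse_vxlan_header(hdr)
assert info["vni_valid"] and info["vni"] == 171
# The inner (original) packet begins at offset 8 of the UDP payload.
```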
Q9: How does Firewall Manager differ from CloudFormation StackSets for org-wide network security policy?
Both deploy resources to multiple accounts. CloudFormation StackSets is generic infrastructure-as-code: you author a template, push it to N accounts, and drift detection alerts you when someone changes a stack-managed resource. Firewall Manager is purpose-built for security policy: it understands WAF, Shield, Network Firewall, and security group semantics, supports automatic remediation when violations are detected (StackSets can detect drift but not auto-fix), composes with AWS Config for resource discovery, integrates with Security Hub findings, and is tightly bound to the AWS Organizations policy hierarchy. SCS-C02 expects Firewall Manager for any "enforce a security baseline org-wide" question; StackSets is the second-best answer only when Firewall Manager does not support the resource type (e.g., custom VPC settings).
Q10: What is the right way to encrypt traffic between an on-premises data center and a workload in a private VPC subnet?
Multiple layered options, by stringency of requirements. (a) Site-to-Site VPN with IPsec — encrypted at IPsec layer, sufficient for most regulatory regimes, low setup cost, two tunnels for HA. (b) Direct Connect alone — private but unencrypted; pair with MACsec for layer-2 encryption on the cross-connect (line-rate, low latency, strong confidentiality on the dedicated link). (c) Direct Connect plus Site-to-Site VPN over it (VPN over DX) — IPsec encryption end-to-end through Direct Connect, satisfying both "private transit" and "encrypted at IPsec" requirements; the canonical answer for the most stringent regulated industries. (d) Application-layer TLS end-to-end — orthogonal to all of the above; the workload itself terminates TLS, so even AWS-side compromise cannot decrypt. For SCS-C02, (c) is the highest-credit answer when scenarios mention "regulated", "compliance", "FedRAMP", or "must encrypt all data in transit even on private links". For (d), look for "end-to-end encryption" or "AWS cannot decrypt" phrasing.
Further Reading and Related Operational Patterns
- VPC Security Groups — User Guide
- Network ACLs — User Guide
- What is AWS Network Firewall
- Suricata-Compatible Rule Groups for Network Firewall
- TLS Inspection in AWS Network Firewall
- Transit Gateway Appliance in a Shared Services VPC
- AWS PrivateLink and VPC Endpoints
- Gateway Endpoints for S3
- VPC Endpoint Policies
- VPC Traffic Mirroring
- VPC Flow Logs
- VPC Flow Logs Record Format Examples
- Site-to-Site VPN
- Accelerated Site-to-Site VPN
- Direct Connect MACsec
- Configuring Lambda Functions to Access VPC Resources
- AWS Firewall Manager Administrator Guide
- AWS SCS-C02 Exam Guide v1.0 (PDF)
Once VPC network security is in place, the natural next operational layers on SCS-C02 are: edge protection with CloudFront, AWS WAF, and AWS Shield Advanced (Domain 3.1) for the public-facing tier in front of the VPC; VPC endpoint policies and the data perimeter for the IAM-meets-network-policy boundary; GuardDuty threat detection consuming VPC Flow Logs and DNS logs for behavioral threats; Security Hub aggregation of Network Firewall, GuardDuty, and Firewall Manager findings into a single pane; and KMS encryption in transit and at rest to layer cryptographic confidentiality on top of the network controls.