
VPC Security Essentials for Developers

6,400 words · ≈ 32 min read

VPC security for developers is the slice of Amazon VPC knowledge that shows up in application-layer DVA-C02 scenario questions: how a Lambda function reaches Amazon RDS in a private subnet, how an ECS task on Fargate calls AWS Secrets Manager without a NAT Gateway, and how a security group chains one microservice to another. The exam usually tests it through the lens of "my Lambda times out" or "my container cannot reach DynamoDB" rather than through CIDR-planning puzzles. On DVA-C02, the topic surfaces inside Domain 2 (Security, Tasks 2.1 and 2.2), but it also shows up indirectly throughout Domain 1 (event source wiring) and Domain 4 (troubleshooting cold starts and connection failures). A solutions architect cares about subnet tiers; a developer cares about why their code cannot talk to the thing it needs to talk to, and VPC security for developers answers exactly that.

This guide is the developer-oriented companion to the broader SAA-C03 VPC notes. It deliberately narrows VPC security for developers to the concepts the DVA-C02 exam tests in application scenarios: security groups as a first line of defense, a brief NACL reminder, VPC endpoints for calling AWS APIs privately, Lambda-in-VPC mechanics including Hyperplane ENIs and cold-start implications, Fargate awsvpc networking, ECS service discovery via AWS Cloud Map, private-subnet outbound patterns, runtime dependencies on Interface Endpoints, cross-VPC access, and stateful reply-port behavior. VPC security for developers is a narrower topic than full VPC networking, and that narrower framing is exactly how DVA-C02 tests it.

What VPC Security for Developers Actually Means on DVA-C02

Developers rarely design VPCs from scratch on DVA-C02. Instead, the exam puts you inside an already-built VPC and asks you to make application-level decisions: which security group should your Lambda function use, which endpoint does your container need, why is the SDK call hanging. That is the lens for VPC security for developers.

The developer's four questions

Every VPC security for developers scenario on DVA-C02 reduces to one of four developer questions:

  1. Can my compute reach its dependency? Can Lambda reach RDS, can Fargate reach ElastiCache, can EC2 reach DynamoDB. The answer lives in security groups, route tables, and VPC endpoints.
  2. Can my compute reach AWS APIs privately? When Lambda calls AWS Secrets Manager, does the traffic traverse the internet, a NAT Gateway, or an Interface Endpoint. VPC security for developers frames this as a cost and latency question with a privacy backstop.
  3. Can services discover each other? ECS tasks come and go; how does order-service find payment-service without hardcoding IPs. Answer: AWS Cloud Map and private hosted zones.
  4. Why is my code timing out? A Lambda connects to RDS for 15 seconds and dies. Is it the security group, the subnet, the ENI, the cold start, or DNS. VPC security for developers gives you the diagnostic ladder.
Key terms used throughout this guide:

  • Security Group (SG): stateful, ENI-level virtual firewall. First line of defense in VPC security for developers.
  • NACL: stateless, subnet-level firewall. Developers rarely touch it but must know it exists.
  • Gateway Endpoint: a route-table entry that privately reaches Amazon S3 or Amazon DynamoDB; free.
  • Interface Endpoint: an ENI in your subnet powered by AWS PrivateLink that privately reaches most other AWS services; charged hourly per AZ plus per GB.
  • Hyperplane ENI: the shared, fleet-level ENI AWS Lambda uses for VPC-attached functions since 2019.
  • awsvpc network mode: the ECS/Fargate mode that gives every task its own ENI and its own security group.
  • AWS Cloud Map: the service-discovery registry ECS integrates with to map a service name to current task IPs.
  • Reference: https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html

Why VPC security for developers has its own DVA-C02 topic

The Solutions Architect exam tests CIDR blocks and Transit Gateway; the Developer exam tests "my Lambda cannot reach Secrets Manager." Both exams share the same underlying primitives, but the scenarios differ sharply. VPC security for developers on DVA-C02 centers on the application edge: the ENI attached to Lambda, the task ENI on Fargate, the security group wrapping RDS. That is where developers configure things, debug things, and where the exam expects you to know the correct answer in under 90 seconds.

Plain-Language Explanation: VPC Security for Developers

Three different analogies clarify VPC security for developers. Pick the one that sticks for you and mentally snap back to it during the exam.

Analogy 1: The Office Building with Door Guards and a Private Elevator

Picture your application as a multi-floor office building. The building itself is the VPC, and each floor is a subnet. VPC security for developers is about the guards at each office door (security groups) and the private elevators (VPC endpoints) that connect your floor to a secure service wing without stepping outside.

Each office door has a bouncer with a personalized guest list — that is a security group. The bouncer is stateful: if the bouncer lets the pizza delivery person walk in, the bouncer automatically remembers them and lets them walk back out carrying an empty box. A guard who forgets faces (stateless, like a NACL) would stop the delivery person on the way out and make them prove they belonged. Developers only configure the bouncer; the operations team configures the stateless guards at the floor turnstiles.

A private elevator that runs from your office straight to the AWS service wing is a VPC endpoint. There are two kinds: the freight elevator for S3 and DynamoDB only, which is free (a Gateway Endpoint), and the regular elevator that reaches every other office in the service wing, which charges a small hourly fee plus a fee per passenger (an Interface Endpoint powered by AWS PrivateLink). Without these elevators, staff have to walk out the front door, cross the street to the bus depot (NAT Gateway), ride across town, and back — slower and more expensive.

A Lambda function inside the VPC is a temp worker desk that AWS sets up on your floor. In the old days AWS rented a new desk for every temp (one ENI per concurrency unit, slow cold start). Since 2019 AWS uses a Hyperplane ENI, which is a shared hot-desk pool on your floor — when a new temp arrives they sit at an already-prepared desk and start working immediately. That is the cold-start improvement VPC security for developers hinges on.

Analogy 2: The Restaurant Kitchen with Vendor Passes

A large restaurant has a central kitchen (your application tier), a pantry (RDS), a spice rack (ElastiCache), and a supplier wall (AWS services like S3, DynamoDB, Secrets Manager). VPC security for developers is about which staff can walk where, and how ingredients enter the kitchen.

Each station has a color-coded vendor pass — that is a security group. The pantry only opens for staff holding the app-tier-sg pass; the app-tier bouncer only accepts orders from staff with the web-tier-sg pass. This chain of passes is a security group reference chain (one group naming another group as its allowed source), and it is the cleanest way to wire microservices because it does not hardcode IPs. When a container restarts with a new IP, the pass (the security group) is still valid and the chain still works.

The loading dock door to the street is the NAT Gateway — staff can walk out to pick up a delivery but the public cannot walk in. The internal pneumatic tubes to the AWS supplier wall are VPC endpoints — the chef shoots an order to Secrets Manager and the sealed capsule comes back with the ingredient without anyone leaving the kitchen.

When a Lambda function (a freelance line cook) is hired to work inside this kitchen, AWS does not set up a brand-new workstation every time — there is a shared shelf of pre-staged knives (the Hyperplane ENI) the cook grabs from. If the cook has to step outside for a napkin (call Secrets Manager), they should take the pneumatic tube, not walk to the loading dock — that is the VPC endpoint vs NAT Gateway choice in VPC security for developers.

Analogy 3: The Airport with Airside Corridors

An airport (your VPC) has terminals (subnets) each assigned to a specific concourse (AZ). Passengers (packets) need passes to enter each gate (security groups), and the whole airport is watched by perimeter fences (NACLs) that only matter when someone tries to go over the fence rather than through a gate.

Airside corridors connect every gate to a secure vendor zone where concessions (AWS APIs) live. The corridor to the duty-free megastore (S3/DynamoDB) is free — that is the Gateway Endpoint. The corridor to individual boutique shops (Secrets Manager, KMS, SQS) charges an access fee per hour per concourse and a fee per passenger — that is the Interface Endpoint. Without these airside corridors, a traveler has to leave the secure area, go landside, hail a taxi (NAT Gateway), and come back through security.

An ECS task on Fargate is like a chartered aircraft that parks at its own dedicated gate — each task gets its own ENI with its own IP and its own gate pass (security group) under awsvpc network mode. A Lambda function is a gate-share arrangement where AWS pre-stages the jet bridge (Hyperplane ENI) so boarding is fast.

When a DVA-C02 question asks "why does my Lambda time out connecting to RDS", mentally picture the bouncer at the pantry door. Is the pantry's security group (db-sg) expecting the Lambda's security group (lambda-sg) on the guest list? If not, the connection dies at the door. The bulk of VPC questions on DVA-C02 are security-group-chain questions of exactly this shape. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

Security Groups — The Developer's First Line of Defense

Security groups are where VPC security for developers starts and where most DVA-C02 questions live. A security group is a stateful, ENI-level virtual firewall, and developers configure them constantly — on Lambda, on ECS task definitions, on RDS instances, on ElastiCache, on load balancers.

Stateful behavior and why developers love it

A security group is stateful. If you allow inbound TCP 5432 from lambda-sg to db-sg, the reply packets from PostgreSQL back to Lambda are automatically permitted. You never write an outbound rule on db-sg for the reply, and you never write an inbound rule on lambda-sg for the reply. This stateful behavior is the single biggest reason VPC security for developers feels simpler than raw firewall management — the state machine does the reply logic for you.
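The stateful behavior above can be sketched in pure Python. This is a simplified model for intuition, not how AWS implements connection tracking; the group names come from this guide's Lambda-to-RDS example.

```python
# Sketch: how a stateful security group admits reply traffic automatically.
# Rules are allow-only; a connection-tracking set handles the replies.

class StatefulSecurityGroup:
    def __init__(self, name, inbound_allows):
        self.name = name
        # inbound_allows: set of (source_sg, port) pairs this group permits
        self.inbound_allows = set(inbound_allows)
        self.tracked = set()  # established flows: (peer_sg, port)

    def accept_inbound(self, source_sg, port):
        """A new inbound packet needs an explicit allow rule."""
        if (source_sg, port) in self.inbound_allows:
            self.tracked.add((source_sg, port))  # remember the flow
            return True
        return False

    def accept_reply(self, peer_sg, port):
        """Reply packets of a tracked flow need no extra rule."""
        return (peer_sg, port) in self.tracked

# db-sg allows TCP 5432 only from lambda-sg
db_sg = StatefulSecurityGroup("db-sg", [("lambda-sg", 5432)])

print(db_sg.accept_inbound("lambda-sg", 5432))  # explicit allow
print(db_sg.accept_reply("lambda-sg", 5432))    # tracked: no outbound rule written
print(db_sg.accept_inbound("web-sg", 5432))     # not on the guest list
```

A stateless NACL, by contrast, would run the reply through its rule list again with no memory of the original request.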

Contrast with a NACL (covered next): a NACL is stateless, so you would have to explicitly allow outbound ephemeral ports (1024–65535) on the DB subnet and inbound ephemeral ports on the Lambda subnet to let the reply home. Developers almost never think about this because SGs handle it.

Allow-only, no deny rules

Security groups support allow rules only — you cannot write "deny IP 10.0.5.0/24" on a security group. If you need to block a specific source, you either tighten your allow rules or push the deny down to the NACL. For the vast majority of VPC security for developers scenarios, allow-only is fine because you are expressing positive application intent ("the app tier calls the data tier on port 5432").

Source references — the self-reference and chain patterns

The most powerful feature in VPC security for developers is the ability for a security group rule to reference another security group as its source or destination. You do not hardcode IPs; you say "allow inbound TCP 5432 from any ENI tagged with security group app-tier-sg". When the app tier scales, re-IPs, or redeploys, the rule still works.

A self-reference (a security group allowing its own sg-id as source) is the standard pattern for same-tier peer traffic — ECS tasks in the same service talking to each other, ElastiCache nodes talking to each other in a cluster, Redis Sentinel nodes gossiping. VPC security for developers covers this because self-referencing SGs are the only correct way to allow "any instance with this SG to talk to any other instance with this SG" without hardcoding the VPC CIDR (which would over-grant permissions).
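As a sketch of what a self-reference looks like on the wire, here is the `IpPermissions` payload shape boto3's `authorize_security_group_ingress` accepts. The group ID is a hypothetical placeholder; the point is that the rule's source is the group's own ID.

```python
# Sketch: a self-referencing ingress rule as a boto3 IpPermissions payload.
# sg_id is a hypothetical placeholder ID.

sg_id = "sg-0123456789abcdef0"

self_reference_rule = {
    "IpProtocol": "tcp",
    "FromPort": 6379,  # e.g. Redis nodes gossiping within a cluster
    "ToPort": 6379,
    # Source is the SAME group: any ENI carrying this SG may connect
    "UserIdGroupPairs": [{"GroupId": sg_id}],
}

# Applying it requires AWS credentials, so it is shown but not executed:
# import boto3
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId=sg_id, IpPermissions=[self_reference_rule]
# )

print(self_reference_rule["UserIdGroupPairs"][0]["GroupId"] == sg_id)
```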

Default outbound and default inbound

A brand-new security group has no inbound rules (nothing allowed in) and an outbound 0.0.0.0/0 allow-all rule (everything allowed out). Developers often forget the outbound default is wide open — if you need to restrict egress (e.g., compliance says "only call api.stripe.com"), you must replace the default outbound rule with a narrower one.

Rule limits and SG limits developers hit in practice

  • Up to 5 security groups per ENI (can be raised to 16).
  • Up to 60 inbound rules and 60 outbound rules per SG by default; the quota can be raised, but rules per SG multiplied by SGs per ENI cannot exceed 1,000.
  • Up to 10,000 security groups per region (soft limit).

In VPC security for developers, the idiomatic pattern is to assign a security group per tier (web-sg, app-sg, db-sg, cache-sg) and then reference them in rules: db-sg inbound allows TCP 5432 from app-sg, cache-sg inbound allows TCP 6379 from app-sg. Never allow the VPC CIDR broadly — that is over-privileging. Never hardcode instance IPs — they change. Always reference security groups. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

Security-group chaining example — Lambda to RDS to ElastiCache

lambda-sg:
  inbound: (none — Lambda is invoked, not contacted directly)
  outbound: allow-all (default)

db-sg (attached to RDS):
  inbound: TCP 5432 from lambda-sg
  outbound: allow-all (default)

cache-sg (attached to ElastiCache Redis):
  inbound: TCP 6379 from lambda-sg
  outbound: allow-all (default)

That is the entire security configuration for a Lambda function that reads from RDS and caches in ElastiCache. Clean, referentially stable, passes security review. VPC security for developers rewards this pattern.

NACLs — What Developers Need to Know (Just Enough)

Developers rarely configure NACLs on DVA-C02, but the exam expects you to recognize when a NACL is the culprit versus a security group. Keep this section short and sharp.

Stateless, subnet-level, numbered

A NACL sits at the subnet boundary and evaluates every packet in and out independently. Because it is stateless, you must write both inbound and outbound rules. Because it is numbered, evaluation runs from lowest rule number upward and stops at the first match — whether that match is allow or deny.

Default vs custom NACLs

  • Default NACL: allows all traffic in both directions. Most VPCs use this.
  • Custom NACL: denies all traffic until you add rules. You almost never need one unless you are implementing subnet-level guardrails.

When VPC security for developers blames the NACL

Most of the time, a connection failure in VPC security for developers is a security group problem, not a NACL problem. But if a question says:

  • Inbound rule looks right but connections still fail — suspect the NACL is missing the ephemeral-port outbound rule (1024–65535).
  • Packet from one subnet to another is dropped — suspect the NACL at the destination subnet's inbound direction.

SG vs NACL one-table cheat

Aspect                  | Security Group                 | NACL
Scope                   | ENI (instance/Lambda/task)     | Subnet
State                   | Stateful                       | Stateless
Rules                   | Allow only                     | Allow + Deny
Order                   | All rules evaluated            | Numbered, first match wins
SG reference as source? | Yes                            | No, CIDR only
Default inbound (new)   | Deny                           | Default NACL allows; custom NACL denies

A DVA-C02 VPC security for developers question describes an inbound NACL rule allowing HTTPS 443 but omits the outbound ephemeral-ports rule. The symptom is "connection times out even though the rule looks right." The fix is to add an outbound NACL rule allowing TCP 1024–65535 back to the caller. Security groups do not have this trap because they are stateful. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
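The ephemeral-port trap above can be modeled in a few lines. This is a simplified simulator of numbered, first-match-wins, stateless evaluation; the rule numbers and port values are illustrative.

```python
# Sketch: stateless NACL evaluation — numbered rules, first match wins,
# and replies need their own outbound ephemeral-port rule.

def evaluate(rules, port):
    """rules: list of (number, (port_lo, port_hi), action). First match wins."""
    for _num, (lo, hi), action in sorted(rules):
        if lo <= port <= hi:
            return action
    return "deny"  # the implicit '*' deny at the end of every NACL

inbound         = [(100, (443, 443), "allow")]       # HTTPS in: looks right
outbound_broken = []                                 # forgot the ephemeral rule
outbound_fixed  = [(100, (1024, 65535), "allow")]    # reply ports allowed

print(evaluate(inbound, 443))            # allow: the request gets in
print(evaluate(outbound_broken, 50321))  # deny: the reply dies, seen as a timeout
print(evaluate(outbound_fixed, 50321))   # allow: the reply goes home
```

The "connection times out even though the rule looks right" symptom is exactly the middle case: the request passes inbound, the reply hits the implicit deny outbound.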

VPC Endpoints for Calling AWS APIs Privately

This is the highest-value section of VPC security for developers for DVA-C02. Expect at least one question on Gateway vs Interface endpoints, and at least one question on "how does my Lambda reach Secrets Manager without going over the internet."

Why developers care

When your Lambda function calls secretsmanager.us-east-1.amazonaws.com, it is calling a public AWS endpoint. From a VPC-attached Lambda in a private subnet, that public endpoint is reachable only if there is a path to the internet — either via a NAT Gateway (costs money per GB) or via an Interface Endpoint that lives inside the VPC (also costs money, but less at scale, plus it never leaves AWS's network). VPC security for developers is largely the decision matrix between these paths.

Gateway Endpoint — free, S3 and DynamoDB only

A Gateway Endpoint is a route-table target. You create one for Amazon S3 or Amazon DynamoDB, AWS inserts a prefix-list route into the route tables you select, and traffic to those services now leaves the subnet through the endpoint instead of going out to the internet.

Gateway Endpoint properties developers must know:

  • Supported services: Amazon S3 and Amazon DynamoDB — nothing else.
  • Free. No hourly charge, no per-GB charge.
  • Route-table based. Your code does not change; the SDK keeps calling the public endpoint hostname, DNS resolves normally, but the packets take the private route.
  • VPC-scoped. No cross-region, no cross-account.
  • Huge cost-optimization win. If your NAT Gateway bill is dominated by S3 traffic, adding a Gateway Endpoint cuts that to zero immediately.

Interface Endpoint — PrivateLink ENIs for most other services

An Interface Endpoint is an ENI placed in your subnet with a private IP from the subnet's CIDR, powered by AWS PrivateLink. Most AWS services support Interface Endpoints, including:

  • AWS Secrets Manager
  • AWS Systems Manager (Parameter Store)
  • AWS KMS
  • Amazon SQS
  • Amazon SNS
  • AWS Step Functions
  • Amazon ECR (API + Docker)
  • Amazon CloudWatch Logs
  • AWS Lambda (to invoke other Lambdas privately)
  • Amazon API Gateway (private APIs)
  • Amazon EventBridge

Interface Endpoint properties developers must know:

  • Charged hourly per AZ plus per GB processed. The hourly fee per AZ adds up, so for small workloads you pick only the AZs you use.
  • Enable Private DNS. When the "Enable DNS name" checkbox is on, AWS overrides DNS for the service's public hostname inside your VPC so your SDK code does not change.
  • Security group on the endpoint ENI. Yes, the endpoint ENI itself has a security group. It must allow inbound HTTPS 443 from your application's security group.
  • Endpoint policy. A resource-policy-style IAM document that scopes which actions are allowed through the endpoint (e.g., "only secretsmanager:GetSecretValue, nothing else").

Gateway vs Interface Endpoint decision table

Scenario                                          | Answer
Lambda in private subnet calls S3                 | Gateway Endpoint (free)
Lambda in private subnet calls DynamoDB           | Gateway Endpoint (free)
Lambda in private subnet calls Secrets Manager    | Interface Endpoint
ECS task on Fargate calls KMS                     | Interface Endpoint
Anything involving a SaaS vendor over PrivateLink | Interface Endpoint
Cross-region private access to an AWS service     | Interface Endpoint (Gateway is VPC-local)
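The decision table collapses to one rule: only S3 and DynamoDB get Gateway Endpoints, everything else gets an Interface Endpoint. A sketch as a function (the service labels are this guide's shorthand, not official endpoint service names):

```python
# Sketch: Gateway vs Interface Endpoint decision.
# Only two services have Gateway Endpoints; everything else is PrivateLink.

GATEWAY_SERVICES = {"s3", "dynamodb"}

def private_access_endpoint(service: str) -> str:
    if service.lower() in GATEWAY_SERVICES:
        return "Gateway Endpoint (free, route-table entry)"
    return "Interface Endpoint (PrivateLink ENI, hourly per AZ + per GB)"

print(private_access_endpoint("s3"))
print(private_access_endpoint("secretsmanager"))
print(private_access_endpoint("kms"))
```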

Cost example that shows up on DVA-C02

A Lambda function calls AWS Secrets Manager 1 million times a day from a private subnet. Without an Interface Endpoint, every call goes through the NAT Gateway: you pay NAT Gateway hourly plus data processing per GB. With an Interface Endpoint, you pay the endpoint hourly per AZ plus data processing per GB, and your traffic never leaves AWS's network. At moderate volume, the Interface Endpoint wins on both cost and latency — and it removes the hard dependency on a NAT Gateway, which is a single point of failure design concern.
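The shape of that cost comparison can be made concrete. The rates below are assumed, illustrative numbers chosen only to show the structure of the math — check the current AWS pricing pages for real figures in your region.

```python
# Illustrative cost comparison: NAT Gateway path vs Interface Endpoint path.
# All rates here are ASSUMED example numbers, not current AWS pricing.

HOURS_PER_MONTH = 730

def nat_monthly(gb, hourly=0.045, per_gb=0.045):
    """NAT Gateway: hourly charge + per-GB data processing."""
    return hourly * HOURS_PER_MONTH + per_gb * gb

def interface_endpoint_monthly(gb, azs=2, hourly_per_az=0.01, per_gb=0.01):
    """Interface Endpoint: hourly charge per AZ + per-GB data processing."""
    return hourly_per_az * azs * HOURS_PER_MONTH + per_gb * gb

gb = 500  # moderate monthly AWS API traffic
print(f"NAT Gateway path:        ${nat_monthly(gb):.2f}/month")
print(f"Interface Endpoint path: ${interface_endpoint_monthly(gb):.2f}/month")
```

With these example rates the endpoint path is cheaper at any realistic volume, and it also removes the NAT Gateway as a hard runtime dependency.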

When a DVA-C02 question in VPC security for developers mentions S3 or DynamoDB and asks about private access, the answer is Gateway Endpoint — free, no code change. When it mentions any other AWS service (KMS, Secrets Manager, SQS, SNS, ECR, Step Functions, EventBridge), the answer is Interface Endpoint — hourly per AZ plus per GB processed. If the question adds "cross-account" or "SaaS partner", it is PrivateLink under the hood, which is an Interface Endpoint. Reference: https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html

Lambda in a VPC — ENIs, Hyperplane, and Cold Starts

This is the single most-asked VPC security for developers question on DVA-C02: when do I put a Lambda function in a VPC, what does it cost me, and why is my cold start suddenly slow.

Default: Lambda is not in your VPC

By default, a Lambda function runs in AWS's own service VPC, not yours. It has internet access out of the box, but it cannot reach private resources in your VPC (RDS, ElastiCache, private EC2 instances, internal ALBs). If the Lambda needs to talk to any of those, you attach the function to your VPC.

Attaching Lambda to a VPC

When you configure a Lambda function's VPC settings, you pick:

  • One or more private subnets (pick subnets in at least two AZs for HA).
  • One or more security groups for the Lambda ENI.

Lambda then places ENIs in those subnets on your behalf so the function has a path into the VPC.

Hyperplane ENI — the 2019 cold-start win

Pre-2019, AWS Lambda provisioned one ENI per concurrency unit when a VPC-attached function scaled up. ENI creation took 10–30 seconds, which showed up as a devastating cold-start penalty — a 15-second spike on the first invocation of a scaled-up concurrency slot was not unusual.

In 2019 AWS introduced Hyperplane ENIs: a shared, fleet-level ENI that multiple execution environments share. Hyperplane ENIs are pre-provisioned asynchronously and amortized across many Lambda concurrency units. The impact on VPC security for developers:

  • Attaching Lambda to a VPC no longer imposes a meaningful cold-start penalty; cold starts are now comparable to the non-VPC case.
  • ENI creation still happens at the function level during configuration and at scale boundaries, but the execution-time ENI attach is effectively zero-cost now.
  • IP address consumption from the subnet is lower because ENIs are shared. You no longer have to allocate massive /20 subnets just to fit Lambda concurrency.

For DVA-C02: if a question asks about historical Lambda-in-VPC cold-start pain, the correct answer references Hyperplane ENIs as the fix. If a question asks about current cold-start mitigation, the answer is provisioned concurrency plus code-level optimizations, and the VPC attachment is usually not the primary culprit anymore.

VPC-attached Lambda reaching AWS APIs

Once a Lambda is in your VPC, it loses default internet access. If the function needs to call AWS Secrets Manager or AWS KMS, your options are:

  1. Add an Interface Endpoint for Secrets Manager, KMS, and any other AWS service the function calls. The Lambda reaches the endpoint via its private IP. No NAT, no internet.
  2. Route 0.0.0.0/0 through a NAT Gateway in a public subnet. The Lambda reaches the public AWS endpoint over the internet from AWS's network. Works, but costs per GB.

VPC security for developers on DVA-C02 almost always prefers option 1: Interface Endpoints for the AWS services the Lambda depends on. The NAT Gateway route is fine for one-off calls to external HTTPS APIs (e.g., a Stripe webhook callback) but wasteful for AWS API traffic.
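For reference, this is the `VpcConfig` shape Lambda expects when you attach a function to your VPC. The subnet and security-group IDs are hypothetical placeholders, and the boto3 call is shown but not executed since it needs credentials.

```python
# Sketch: attaching a Lambda function to a VPC.
# Subnet/SG IDs below are hypothetical placeholders.

vpc_config = {
    "SubnetIds": [
        "subnet-0aaa111122223333a",  # private subnet, AZ a
        "subnet-0bbb444455556666b",  # private subnet, AZ b — two AZs for HA
    ],
    # lambda-sg: no inbound rules needed, default outbound allow-all
    "SecurityGroupIds": ["sg-0ccc777788889999c"],
}

# Applying it requires AWS credentials, so it is shown but not executed:
# import boto3
# boto3.client("lambda").update_function_configuration(
#     FunctionName="orders-fn", VpcConfig=vpc_config
# )

print(len(vpc_config["SubnetIds"]) >= 2)  # at least two AZs for HA
```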

Lambda VPC checklist

  • Pick at least two private subnets in different AZs for HA.
  • Attach a security group with no inbound rules needed (Lambda is invoked by the service, not contacted directly) and outbound allow-all by default.
  • Add the target's security group inbound rule to allow the Lambda SG as a source (e.g., db-sg inbound TCP 5432 from lambda-sg).
  • For AWS API calls, add Interface Endpoints for the specific services the function uses. Avoid NAT Gateway for AWS API traffic at scale.
  • For public internet calls (Stripe, Twilio, etc.), route 0.0.0.0/0 to a NAT Gateway in a public subnet.
  • Default Lambda runs outside your VPC — has internet, no private VPC access.
  • Attaching to VPC requires subnets (pick 2+ AZs) and security groups for the Lambda ENI.
  • Hyperplane ENI (since 2019) removed the historic cold-start penalty for VPC-attached Lambdas.
  • VPC-attached Lambda loses default internet — use Interface Endpoints (preferred) or NAT Gateway (fallback) for AWS API calls.
  • Reference: https://docs.aws.amazon.com/lambda/latest/dg/foundation-networking.html

Fargate Networking — awsvpc Mode and Task ENIs

Fargate (and ECS on EC2 with awsvpc mode) behaves differently from classic ECS on EC2. Every task gets its own ENI, its own private IP, and its own security group. VPC security for developers on DVA-C02 tests this because it changes how you chain security groups and consume IPs.

awsvpc gives every task its own ENI

Under awsvpc network mode:

  • Each ECS task receives a dedicated ENI attached to a subnet you pick in the task definition / service configuration.
  • Each task has its own private IP from that subnet's CIDR.
  • Each task carries its own security group separate from the EC2 host or the Fargate infrastructure.

Compare with the legacy bridge mode where many containers shared the EC2 host's ENI and port mappings. awsvpc is simpler, more secure, and is required for Fargate. On DVA-C02, assume awsvpc unless the question explicitly says otherwise.

Implications for VPC security for developers

  • Security-group granularity per task. If you run three services in the same cluster, each can have its own SG. Chains like orders-sg → payments-sg are trivial.
  • IP consumption is linear with task count. A service running 100 tasks consumes 100 IPs from the subnet. Size your subnets accordingly (a /24 has 256 addresses; AWS reserves 5, leaving 251 usable).
  • The task, not the host, is the networking unit. On Fargate the host is abstracted away entirely; each task gets exactly one ENI, and that is the point the exam tests.
  • Task role for AWS API calls. Each task assumes an IAM task role (different from the ECS task execution role used to pull images and push logs). The task role is what governs which AWS APIs the app code can call.
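The subnet-sizing arithmetic from the list above, as a sketch (the `headroom` parameter is an illustrative addition for IPs you want to reserve for endpoint ENIs and the like):

```python
# Sketch: usable IPs per subnet under awsvpc, where each task burns one IP.
# AWS reserves 5 addresses in every subnet (network, VPC router, DNS,
# future use, broadcast).

AWS_RESERVED = 5

def usable_ips(prefix_len: int) -> int:
    return 2 ** (32 - prefix_len) - AWS_RESERVED

def max_tasks(prefix_len: int, headroom: int = 0) -> int:
    """How many awsvpc tasks fit, keeping `headroom` IPs for other ENIs."""
    return usable_ips(prefix_len) - headroom

print(usable_ips(24))     # 251
print(max_tasks(24, 20))  # 231 tasks if 20 IPs are held back for endpoints etc.
```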

Fargate reaching AWS APIs privately

Same pattern as Lambda:

  • S3 or DynamoDB → Gateway Endpoint, free.
  • Any other AWS service → Interface Endpoint, hourly per AZ plus per GB.
  • Pulling images from ECR → Interface Endpoints for ecr.api, ecr.dkr, and S3 (Gateway Endpoint) because ECR stores layers in S3.
  • Writing logs to CloudWatch Logs → Interface Endpoint for logs if you want to avoid the NAT path.
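Those requirements can be gathered into one helper. The endpoint service names follow the real `com.amazonaws.<region>.<service>` pattern, with `<region>` left as a placeholder:

```python
# Sketch: endpoints a private-subnet Fargate task needs to pull from ECR
# and log to CloudWatch without a NAT Gateway.

def endpoints_for_fargate(pull_from_ecr=True, cloudwatch_logs=True):
    eps = []
    if pull_from_ecr:
        eps += [
            ("com.amazonaws.<region>.ecr.api", "interface"),
            ("com.amazonaws.<region>.ecr.dkr", "interface"),
            ("com.amazonaws.<region>.s3", "gateway"),  # ECR layers live in S3
        ]
    if cloudwatch_logs:
        eps.append(("com.amazonaws.<region>.logs", "interface"))
    return eps

for name, kind in endpoints_for_fargate():
    print(kind, name)
```

Forgetting the S3 Gateway Endpoint here is a classic failure: the ECR API calls succeed but the actual layer downloads hang.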

ECS Service Discovery via AWS Cloud Map and Private Hosted Zones

Services come and go. A naive design hardcodes IPs, which breaks the moment ECS replaces a task. VPC security for developers expects you to know how ECS handles this problem.

AWS Cloud Map + Route 53 private hosted zones

ECS integrates with AWS Cloud Map to automatically register and deregister service instances. Cloud Map can back a service with:

  • DNS A-record entries in a Route 53 private hosted zone scoped to your VPC.
  • SRV records for port-aware discovery.
  • API-based discovery via the DiscoverInstances API for more advanced use cases.

When you configure serviceRegistries on an ECS service, Cloud Map creates an entry like orders.internal. with A-records pointing to every current task ENI IP. The client just does a DNS lookup on orders.internal. and round-robins across the current task IPs.
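Client-side, that pattern reduces to "resolve a name, rotate across whatever IPs come back." A minimal sketch, with a plain dict standing in for the Route 53 private hosted zone and hypothetical task IPs:

```python
# Sketch: client-side discovery over multi-value DNS, the way an app
# consumes Cloud Map A-records. The registry dict stands in for the
# private hosted zone; IPs are hypothetical task ENI addresses.

import itertools

registry = {
    "orders.internal": ["10.0.1.15", "10.0.2.23", "10.0.1.87"],
}

def make_resolver(name):
    """Round-robin across the IPs the name currently resolves to.
    (A real client re-resolves periodically as tasks come and go.)"""
    cycle = itertools.cycle(registry[name])
    return lambda: next(cycle)

next_ip = make_resolver("orders.internal")
print(next_ip())  # 10.0.1.15
print(next_ip())  # 10.0.2.23
```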

Why this matters for VPC security for developers

  • No hardcoded IPs. Your code reads a hostname from an environment variable and DNS does the rest.
  • Security groups still gate the traffic. DNS discovery tells the client where to connect; the security group on the target task decides whether to accept the connection.
  • Works with awsvpc only because every task needs its own IP for the A-record.
  • Private hosted zone scoping. The private hosted zone is associated with one or more VPCs; Cloud Map entries resolve only from inside those VPCs.

Alternative: ALB with path or host-based routing

For HTTP services, putting an internal Application Load Balancer in front of each service and using host-based or path-based routing is the more common pattern. Cloud Map is preferred for non-HTTP protocols (gRPC with custom ports, raw TCP services) and for cases where you want client-side load balancing without a middlebox.

Private Subnet Outbound — NAT Gateway vs Interface Endpoints

Private subnets by definition have no default route to an Internet Gateway. Developers have two practical outbound options, and VPC security for developers tests when to pick which.

NAT Gateway — outbound to anything, charged per GB

A NAT Gateway in a public subnet gives private-subnet resources outbound access to the internet and to public AWS service endpoints. It is the right choice when:

  • Your code calls third-party APIs over the public internet (Stripe, SendGrid, GitHub, etc.).
  • You call a long tail of AWS services where adding Interface Endpoints for every one would cost more than the NAT path.
  • You prefer operational simplicity and the NAT bill is manageable.

Developer gotchas:

  • One NAT Gateway per AZ for resilience. A single NAT Gateway is a cross-AZ single point of failure.
  • Per-GB processing charge — S3 or DynamoDB traffic flowing through NAT is wasteful when a free Gateway Endpoint would do.
  • Hourly charge per NAT Gateway.

Interface Endpoints — private, cheaper at scale for AWS APIs

When your Lambda or Fargate task talks primarily to AWS APIs, Interface Endpoints are almost always the right answer:

  • Traffic never leaves AWS's network.
  • No NAT Gateway dependency — one less single point of failure.
  • At sustained volume, cheaper than NAT Gateway data processing.
  • Endpoint policy gives you an extra IAM-style guardrail.

Combined pattern — Endpoints for AWS, NAT for external

A production VPC security for developers pattern:

  1. Gateway Endpoint for Amazon S3 and Amazon DynamoDB (free, easy win).
  2. Interface Endpoints for the specific AWS services your application calls heavily (Secrets Manager, KMS, SQS, SNS, ECR, CloudWatch Logs).
  3. NAT Gateway only for public internet egress to third-party APIs.

That combination minimizes NAT cost and NAT dependency while keeping the developer ergonomics.

Runtime Dependencies on VPC Endpoints

Once you adopt VPC endpoints, your application has new runtime dependencies that VPC security for developers exams often probe.

Security-group traps on the endpoint ENI

Interface Endpoints have their own ENI with their own security group. If your Lambda or task cannot reach the endpoint, the first suspect is:

  • Endpoint SG inbound rule missing HTTPS 443 from your application's SG.
  • Application SG outbound rule restricted and not allowing HTTPS 443 to the endpoint's private IP or to the endpoint's SG.

Both SGs must permit the flow. Because SG defaults are open outbound and closed inbound, the usual missing rule is the inbound rule on the endpoint.
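The "both SGs must permit the flow" rule is a plain conjunction, which makes the usual failure mode easy to see:

```python
# Sketch: reaching an Interface Endpoint requires BOTH security groups
# to permit the flow — the caller's outbound and the endpoint ENI's inbound.

def can_reach_endpoint(app_outbound_allows_443: bool,
                       endpoint_inbound_allows_app_sg: bool) -> bool:
    return app_outbound_allows_443 and endpoint_inbound_allows_app_sg

# Defaults: app SG outbound is wide open; endpoint SG inbound starts
# empty, so the usual missing rule is the endpoint's inbound.
print(can_reach_endpoint(True, False))  # the classic trap
print(can_reach_endpoint(True, True))   # fixed: inbound 443 from app-sg added
```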

DNS — Enable Private DNS

When you create an Interface Endpoint, make sure Enable Private DNS is on (the default). That overrides the public AWS service hostname (e.g., secretsmanager.us-east-1.amazonaws.com) inside your VPC so the name resolves to the endpoint's private IP. Without this, you have to point your SDK at a custom endpoint URL, which is fragile.

For Private DNS to work, the VPC must have both enableDnsSupport and enableDnsHostnames set to true. These are default-true on new VPCs but sometimes disabled in custom setups.

Endpoint policies — IAM-style allowlists

Every VPC endpoint has an endpoint policy (a resource-based policy document). By default it allows all actions; for zero-trust VPC security for developers designs, you scope it down:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["secretsmanager:GetSecretValue"],
    "Resource": "arn:aws:secretsmanager:us-east-1:111122223333:secret:prod/app-*"
  }]
}

That says "only GetSecretValue on secrets under the prod/app-* namespace is allowed through this endpoint." A misconfigured Lambda that tries to read a different secret will be denied at the endpoint.
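The gating behavior of that policy can be sketched with a simplified evaluator — an illustrative model, not the real IAM evaluation engine (glob matching stands in for IAM's ARN matching):

```python
# Sketch: an endpoint policy passes a call only if both the action and
# the resource match an Allow statement. Simplified model for illustration.
from fnmatch import fnmatch

POLICY = {
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["secretsmanager:GetSecretValue"],
        "Resource": "arn:aws:secretsmanager:us-east-1:111122223333:secret:prod/app-*",
    }]
}

def endpoint_allows(action: str, resource: str) -> bool:
    for stmt in POLICY["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        if action in stmt["Action"] and fnmatch(resource, stmt["Resource"]):
            return True
    return False            # implicit deny: nothing matched

ok = endpoint_allows(
    "secretsmanager:GetSecretValue",
    "arn:aws:secretsmanager:us-east-1:111122223333:secret:prod/app-db")
blocked = endpoint_allows(
    "secretsmanager:GetSecretValue",
    "arn:aws:secretsmanager:us-east-1:111122223333:secret:staging/app-db")
print(ok, blocked)  # True False
```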

Region mismatch

Interface Endpoints are regional. An endpoint in us-east-1 does not serve a request bound for us-west-2. If your code explicitly constructs cross-region endpoint URLs, the VPC endpoint will not intercept them. For cross-region private access you need a different pattern (Transit Gateway + cross-region endpoint, or a cross-region PrivateLink setup).

Cross-VPC Access: VPC Peering vs AWS PrivateLink

Developers regularly need their workload in one VPC to call a service in another VPC — a shared-services account, a partner SaaS, another microservice team's VPC. VPC security for developers on DVA-C02 tests the decision between VPC Peering and AWS PrivateLink.

VPC Peering

A VPC Peering Connection links two VPCs at the network layer. Properties:

  • Non-transitive. A peered to B and B peered to C does not give A access to C.
  • CIDRs must not overlap.
  • Both sides update route tables to include the peer's CIDR.
  • Security groups can reference peer SGs in the same region.
  • No hourly fee; you pay for data transfer.
  • All ports, all protocols — once routed, the two VPCs can address each other as if they were one.

When to pick peering: two trusted VPCs in the same account or same organization, where broad any-to-any connectivity is intended, and CIDRs are planned.

AWS PrivateLink

AWS PrivateLink exposes a service via an Interface Endpoint in the consumer VPC. Properties:

  • Unidirectional. The consumer initiates; the provider does not reach into the consumer VPC.
  • No CIDR overlap constraint. Each side only sees the endpoint's private IP in its own subnet.
  • Scoped to a specific service (an NLB in the provider VPC), not the whole VPC.
  • One-to-many. One service can be consumed by many customer VPCs.
  • Charged per hour per AZ plus per GB on the consumer side.
  • SaaS-friendly. Most VPC-security-for-developers scenarios involving a SaaS vendor imply PrivateLink.

When to pick PrivateLink: you expose one specific service (an API, a database proxy) to many consumers; CIDRs may overlap; you want a least-privilege connection, not a full network merge.

Decision table

  Requirement                      VPC Peering           PrivateLink
  CIDRs overlap                    Not allowed           Allowed
  Expose one specific service      Awkward               Natural
  Full any-to-any connectivity     Natural               Not supported
  SaaS vendor to customer          Not appropriate       Ideal
  Many consumers, many producers   Scales poorly         Scales cleanly
  Cost                             Data transfer only    Hourly per AZ + per GB

A classic VPC security for developers trap: VPC A peered to hub B, hub B peered to VPC C, question asks whether A reaches C. Answer: no, VPC Peering is non-transitive. The fix is either explicit A-to-C peering, or Transit Gateway, or — if you only need one specific service — PrivateLink from A into C. Reference: https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
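Non-transitivity reduces to a one-line check: peering reachability is "is there a direct edge", never a multi-hop path. A toy model of the trap above:

```python
# Sketch: VPC Peering reachability. Only a direct peering connection
# counts -- there is no transitive routing through a middle VPC.

def reachable_via_peering(peerings: set, a: str, b: str) -> bool:
    return (a, b) in peerings or (b, a) in peerings

# The hub-and-spoke from the trap: A <-> hub, hub <-> C, no A <-> C.
peerings = {("vpc-a", "vpc-hub"), ("vpc-hub", "vpc-c")}

print(reachable_via_peering(peerings, "vpc-a", "vpc-hub"))  # True
print(reachable_via_peering(peerings, "vpc-a", "vpc-c"))    # False: non-transitive
```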

Stateful Reply Ports and the Ephemeral Range

VPC security for developers leans heavily on the stateful nature of security groups, so it is worth being precise about what "stateful" means in practice.

How connection tracking works

When a client on an ENI protected by client-sg opens a TCP connection to a server on server-sg:

  1. The client sends a SYN from client-ip:50123 (an ephemeral source port from the OS's pool) to server-ip:443.
  2. server-sg must have an inbound rule allowing TCP 443 from client-sg.
  3. server-sg is stateful, so the reverse 5-tuple (server-ip:443 → client-ip:50123) is automatically permitted to return.
  4. client-sg is also stateful on its outbound side — the SYN-ACK reply comes back without a specific inbound rule.

There is nothing for the developer to configure on the reply path. This is the fundamental difference between security groups and NACLs for VPC security for developers.
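The connection-tracking steps above can be sketched as a toy stateful tracker — an illustrative model, not the real SG implementation:

```python
# Toy connection tracker: an outbound connection records its 5-tuple,
# and the reply matching the reversed tuple is allowed back in with no
# inbound rule needed. Illustrative model only.

class StatefulSG:
    def __init__(self, inbound_rules):
        self.inbound_rules = inbound_rules   # e.g. {("tcp", 443)}
        self.tracked = set()                 # expected reply tuples

    def allow_outbound(self, proto, src_ip, src_port, dst_ip, dst_port):
        # Default SG outbound is allow-all; remember the expected reply.
        self.tracked.add((proto, dst_ip, dst_port, src_ip, src_port))
        return True

    def allow_inbound(self, proto, src_ip, src_port, dst_ip, dst_port):
        if (proto, src_ip, src_port, dst_ip, dst_port) in self.tracked:
            return True                      # tracked reply: stateful allow
        return (proto, dst_port) in self.inbound_rules

client_sg = StatefulSG(inbound_rules=set())  # no inbound rules at all
client_sg.allow_outbound("tcp", "10.0.1.5", 50123, "10.0.2.9", 443)

# The SYN-ACK reply from the server is permitted purely by state:
print(client_sg.allow_inbound("tcp", "10.0.2.9", 443, "10.0.1.5", 50123))  # True
```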

Where stateless breaks: NACLs

If the traffic also crosses a NACL, remember:

  • The outbound NACL on the server subnet must allow TCP 1024–65535 (the ephemeral reply range) back to the client.
  • The inbound NACL on the client subnet must allow TCP 1024–65535 from the server.

If a VPC security for developers scenario has a NACL and the application "times out" even though the SG looks correct, this is almost always the culprit.

Ephemeral range specifics

  • Linux typically uses 32768–60999 (kernel default).
  • Windows (newer) uses 49152–65535.
  • AWS recommendation for NACLs is to allow 1024–65535 to cover both.

In VPC security for developers, your default mental model should be: "security groups handle replies for me." Only when a NACL is explicitly in the architecture do you need to reason about the ephemeral port range. If exam text mentions a NACL, check its outbound rule for ephemeral ports; if it does not, assume SG-only and trust the stateful return path. Reference: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
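The stateless NACL behavior can be sketched the same way: numbered rules, first match wins, and the reply needs its own rule covering the ephemeral range. A minimal model:

```python
# Sketch of stateless NACL evaluation: rules are numbered, evaluated in
# order, first match wins, implicit deny at the end. Illustrative only.

def nacl_evaluate(rules, proto, port):
    # rules: list of (rule_number, proto, (port_lo, port_hi), "allow"/"deny")
    for _, r_proto, (lo, hi), action in sorted(rules):
        if r_proto == proto and lo <= port <= hi:
            return action == "allow"
    return False                             # implicit deny

client_inbound_nacl = [
    (100, "tcp", (1024, 65535), "allow"),    # ephemeral reply range
    (32767, "tcp", (0, 65535), "deny"),      # explicit catch-all deny
]

print(nacl_evaluate(client_inbound_nacl, "tcp", 50123))  # reply on 50123: allowed
print(nacl_evaluate(client_inbound_nacl, "tcp", 443))    # unsolicited 443: denied
```

Delete rule 100 and every reply is dropped even though the security groups are correct — exactly the "times out with a correct SG" symptom the exam describes.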

Key Numbers and Must-Memorize Facts for VPC Security for Developers

  • 5 security groups per ENI (default), up to 16 with limit increase.
  • 60 inbound + 60 outbound rules per SG by default.
  • Security Group = stateful = ENI-level = allow only = can reference other SGs.
  • NACL = stateless = subnet-level = allow + deny = numbered, first-match-wins.
  • Gateway Endpoint supports only Amazon S3 and Amazon DynamoDB, and it is free.
  • Interface Endpoint (AWS PrivateLink) covers most AWS services, charged hourly per AZ + per GB.
  • Hyperplane ENI (2019) removed the Lambda-in-VPC cold-start penalty.
  • VPC-attached Lambda loses default internet; add Interface Endpoints or a NAT Gateway.
  • Fargate awsvpc mode: one ENI per task, own IP, own SG.
  • ECS Service Discovery uses AWS Cloud Map + Route 53 private hosted zone.
  • VPC Peering: non-transitive, no CIDR overlap, free (data transfer only).
  • PrivateLink: unidirectional, overlap-safe, charged per AZ hour + per GB.
  • NAT Gateway: zonal, HA requires one-per-AZ, charged per hour + per GB.
  • Ephemeral ports for NACL replies: allow TCP 1024–65535.
  • Reference: https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html

Common Exam Traps — VPC Security for Developers

Expect at least one of these on DVA-C02. Learn to spot the pattern before reading the options.

Trap 1: Lambda in VPC cannot reach Secrets Manager

A question describes a Lambda function recently attached to a VPC that now fails to call AWS Secrets Manager. The instinct is "add a NAT Gateway." The better answer is "add an Interface Endpoint for Secrets Manager" — cheaper at sustained volume, stays inside AWS's network, removes the NAT dependency. The NAT Gateway answer is acceptable but suboptimal, and DVA-C02 distractors often include both.

Trap 2: Security-group self-reference confusion

A scenario has ElastiCache Redis nodes that need to gossip among themselves. The wrong fix is "allow the VPC CIDR" on port 6379. The right fix is the cache SG references itself as a source on the cluster gossip ports. Self-reference is the idiomatic VPC security for developers pattern.

Trap 3: Gateway vs Interface Endpoint picker

A question says "the team wants to save on NAT Gateway cost by privately reaching DynamoDB." Answer: Gateway Endpoint (free). Then the same exam throws a second question: "the team wants to save on NAT Gateway cost by privately reaching Secrets Manager." Answer: Interface Endpoint (because Gateway does not support Secrets Manager). Know the two-service list (S3, DynamoDB) cold.
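The picker logic reduces to a two-element lookup (assuming, as the exam does, that the Gateway Endpoint list is exactly S3 and DynamoDB):

```python
# Tiny picker for the Gateway-vs-Interface question. The two-service
# Gateway list (S3, DynamoDB) is the fact to memorize.

GATEWAY_SERVICES = {"s3", "dynamodb"}

def endpoint_type(service: str) -> str:
    return "Gateway" if service.lower() in GATEWAY_SERVICES else "Interface"

print(endpoint_type("DynamoDB"))        # Gateway (free)
print(endpoint_type("secretsmanager"))  # Interface (PrivateLink, paid)
```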

Trap 4: Lambda VPC attachment blamed for cold starts

A distractor suggests "remove the Lambda from the VPC to fix cold starts." Post-Hyperplane (2019), VPC attachment is not the primary cold-start cause. The correct fix is usually provisioned concurrency, smaller deployment packages, or moving expensive initialization outside the handler. VPC security for developers no longer pays a cold-start tax just for being VPC-attached.

Trap 5: Fargate task role confused with execution role

In awsvpc mode, each task has two IAM roles: the task role (what the app code assumes to call AWS APIs) and the task execution role (what the Fargate infrastructure uses to pull images from ECR and push logs to CloudWatch). If the task cannot start, the execution role is the suspect; if the task runs but GetSecretValue fails, the task role is the suspect. VPC security for developers tests this distinction alongside the networking layer.

Trap 6: Peering is non-transitive

A three-VPC hub-and-spoke drawn with peering will not give spoke-to-spoke connectivity. For DVA-C02 VPC security for developers, the answer is either Transit Gateway (for broad connectivity) or PrivateLink (for a single exposed service).

Trap 7: Direct VPC CIDR allow over SG reference

Some distractors suggest "allow the VPC CIDR on port 5432 in db-sg" to fix a Lambda-to-RDS problem. This works but over-grants. The correct answer is always "allow lambda-sg as the source" — VPC security for developers rewards least privilege.

Trap 8: Missing Enable Private DNS on Interface Endpoint

A Lambda is VPC-attached, an Interface Endpoint for Secrets Manager exists, but SDK calls still fail. Suspect: Enable Private DNS is off on the endpoint, so the SDK's hostname still resolves to the public endpoint, which is unreachable because the Lambda has no internet path. Turning on Private DNS fixes it.

How VPC Security for Developers Connects to Other DVA-C02 Topics

VPC security for developers shows up as a dependency in most other DVA-C02 topics. Here is how each downstream topic builds on this note:

  • iam-roles-policies: Lambda execution role and Fargate task role are assumed by VPC-attached compute; the role controls AWS API access while VPC endpoints control the network path.
  • lambda-fundamentals: When a function needs to reach a private resource, the VPC configuration on the function object and the Hyperplane ENI mechanics in this note apply directly.
  • lambda-performance-optimization: Cold-start mitigation sometimes involves the VPC attachment; the Hyperplane ENI conversation lives here.
  • kms-encryption: KMS is usually reached via Interface Endpoint when the application is in a private subnet.
  • secrets-manager-parameter-store: Both Secrets Manager and SSM Parameter Store are commonly reached via Interface Endpoints from VPC-attached compute.
  • container-deployment: ECS task networking in awsvpc mode is covered here; ECR image pull from a private subnet uses Interface Endpoints for ecr.api and ecr.dkr plus a Gateway Endpoint for S3.
  • xray-and-debugging: X-Ray traces show which hop in the VPC security for developers chain stalled — SG block, DNS, or endpoint — and Interface Endpoint for X-Ray keeps trace data private.

Every one of these downstream topics assumes the primitives in this note. When in doubt, come back here.

FAQ — VPC Security for Developers Top Questions

Q1: Does putting a Lambda function in a VPC still cause slow cold starts in 2026?

No, not the way it used to. Before 2019, attaching a Lambda function to a VPC added a 10–30 second ENI-provisioning penalty on every cold start at scale. Since 2019, AWS Lambda uses Hyperplane ENIs — a shared, pre-provisioned, fleet-level network interface — that effectively eliminates the VPC-attachment cold-start penalty. VPC-attached Lambdas now have cold start times comparable to non-VPC Lambdas. The remaining cold-start factors are init code, deployment package size, runtime choice, and provisioned concurrency. On DVA-C02, if a distractor says "remove the Lambda from the VPC to fix cold starts", it is usually wrong — fix the init code or add provisioned concurrency instead.

Q2: When should my Lambda call AWS services through an Interface Endpoint instead of a NAT Gateway?

Almost always, if the calls are sustained. Interface Endpoints keep traffic inside AWS's network, remove the NAT Gateway as a single point of failure, and are cheaper per GB than NAT data processing at sustained volume. The typical VPC security for developers pattern is: Gateway Endpoint for S3 and DynamoDB (free), Interface Endpoints for Secrets Manager, KMS, SQS, SNS, ECR, CloudWatch Logs, and any other AWS API your function calls heavily, and NAT Gateway only for external internet calls to third-party APIs (Stripe, Twilio, GitHub). For a function that only calls AWS APIs, the NAT Gateway can often be removed entirely.

Q3: What is the right way to let ECS task A talk to ECS task B securely?

Use security group chaining with awsvpc network mode. Task A runs with sg-a, task B runs with sg-b. On sg-b, add an inbound rule allow TCP <port> from sg-a. That is it. For service discovery so that task A knows where task B lives, register task B in AWS Cloud Map with a Route 53 private hosted zone so task A can DNS-resolve task-b.internal. to the current task IPs. This combination — Cloud Map for discovery, security groups for authorization — is the idiomatic VPC security for developers pattern for intra-cluster service communication.

Q4: What is the difference between a security group on the endpoint ENI and a security group on my application?

Both must allow the traffic. The application security group (on your Lambda ENI or Fargate task ENI) controls outbound to the endpoint — its default outbound allow-all rule is usually sufficient. The endpoint security group (on the Interface Endpoint's ENI) controls inbound to the endpoint — you must add a rule allowing TCP 443 from your application's security group or from the VPC CIDR. If the endpoint security group is missing the inbound rule, SDK calls fail with connection errors and it looks like a DNS or IAM issue when it is actually a security group issue. In VPC security for developers, always check both SGs.

Q5: When should I use VPC Peering, and when AWS PrivateLink?

Use VPC Peering when you want broad, any-to-any network connectivity between two VPCs and you control both sides' CIDR planning. Peering is free (data transfer only), but non-transitive and blocked by CIDR overlap. Use AWS PrivateLink when you want to expose one specific service (an API behind an NLB) to many consumer VPCs, when CIDR overlap is possible (especially across accounts), or when you are a SaaS vendor serving customer VPCs. PrivateLink is unidirectional — the consumer reaches the provider, not the reverse — which is usually a security advantage. For DVA-C02 VPC security for developers, think "peering merges networks; PrivateLink exposes a service."

Q6: Why does my VPC-attached Lambda fail to resolve secretsmanager.us-east-1.amazonaws.com even after I created the Interface Endpoint?

Three likely causes. First, Enable Private DNS is off on the Interface Endpoint — turn it on so the SDK's default hostname resolves to the endpoint's private IP. Second, the VPC's DNS settings (enableDnsSupport and enableDnsHostnames) must both be true for Private DNS to work. Third, the endpoint security group is missing an inbound rule for HTTPS 443 from your Lambda's security group. Check all three — in VPC security for developers, this is the most common Interface Endpoint misconfiguration.

Q7: If my private subnet has no internet and no VPC endpoints, can my code still call AWS services?

No. A private subnet by definition has no default route to the internet (no 0.0.0.0/0 → IGW). Without a NAT Gateway or an Interface Endpoint, there is no path to AWS's public service endpoints. Your SDK call will hang until the configured timeout and then fail. The fix is either (a) add a NAT Gateway in a public subnet and point the private subnet's route table at it, or (b) add Interface Endpoints for the specific AWS services your code needs. For S3 and DynamoDB, always add a free Gateway Endpoint. VPC security for developers assumes this basic connectivity check as step zero of any scenario.

Further Reading

Official sources