
AWS Lambda Fundamentals

7,100 words · ≈ 36 min read

AWS Lambda is the single most tested service on the entire AWS Certified Developer Associate (DVA-C02) exam. Task statement 1.2 — "Develop code for AWS Lambda" — is only one line in the exam guide, but the scenarios it spawns cover every corner of AWS Lambda: handler signatures per runtime, synchronous vs asynchronous vs poll-based event sources, memory and timeout tuning, /tmp ephemeral storage, Lambda Layers, Lambda Extensions, container images, VPC networking, execution roles, destinations, versions, aliases, SnapStart, Provisioned Concurrency, Reserved Concurrency, X-Ray active tracing, and power tuning. This chapter trains you to recognize every AWS Lambda concept the DVA-C02 exam can throw at you and to map each to the correct answer under time pressure. AWS Lambda deserves 10–15 percent of your total study time; mastering AWS Lambda fundamentals is the single highest-leverage move you can make on DVA-C02.

What Is AWS Lambda?

AWS Lambda is a serverless compute service that runs your code in response to events. With AWS Lambda, you upload a function, configure a trigger, and AWS Lambda automatically provisions execution environments, scales concurrency, patches the underlying operating system, and bills you only for the milliseconds your code runs. AWS Lambda removes every layer of infrastructure management below the handler function. The DVA-C02 exam treats AWS Lambda as the default compute primitive for event-driven workloads, API backends, stream processors, and glue code between other AWS services.

How AWS Lambda Fits the DVA-C02 Exam Map

On DVA-C02, AWS Lambda appears across every domain:

  • Domain 1 (Development, 32%): handler signatures, event sources, concurrency, Layers, environment variables, destinations, versions, aliases.
  • Domain 2 (Security, 26%): AWS Lambda execution role, resource-based policies, environment variable encryption with AWS KMS, VPC configuration.
  • Domain 3 (Deployment, 24%): AWS Lambda container images, AWS SAM templates, CodeDeploy canary and linear traffic shifting with AWS Lambda aliases.
  • Domain 4 (Troubleshooting, 18%): AWS Lambda cold starts, Provisioned Concurrency, Reserved Concurrency, X-Ray active tracing, Lambda Power Tuning.

No matter which domain a question targets, AWS Lambda is usually a correct answer or a supporting actor. Memorize AWS Lambda fundamentals and Domain 1 scoring becomes a walkover.

The AWS Lambda Execution Model at 30,000 Feet

Every AWS Lambda invocation follows the same lifecycle: (1) AWS Lambda receives the invocation via the AWS SDK or an event source; (2) AWS Lambda selects an existing warm execution environment or initializes a new one (cold start); (3) the runtime loads your handler; (4) the handler processes the event and returns a response or throws an error; (5) AWS Lambda freezes the environment for reuse. Understanding this lifecycle unlocks every optimization question on the exam, because AWS Lambda billing, cold-start behavior, connection reuse, and /tmp persistence all derive from it.
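
The lifecycle above is why heavy initialization belongs outside the handler. A minimal sketch (the expensive_client function is a stand-in, not a real SDK call):

```python
import time

# Code at module scope runs once per execution environment (the INIT
# phase of a cold start); the handler body runs once per invocation.
INIT_STARTED_AT = time.time()

def expensive_client():
    """Stand-in for a slow SDK client or database connection setup."""
    time.sleep(0.1)  # simulate slow initialization
    return {"connected": True}

CLIENT = expensive_client()  # initialized once, reused across warm invocations

def lambda_handler(event, context):
    # Warm invocations skip INIT entirely and reuse CLIENT as-is.
    return {"statusCode": 200, "warm_for": time.time() - INIT_STARTED_AT}
```

A second invocation on the same environment never re-runs expensive_client — the frozen environment thaws with CLIENT already in memory.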

Analogy 3 — The Airport Conveyor Belt

The AWS Lambda event model is like an airport baggage system. A synchronous call is like the service counter — the passenger (the API Gateway request) stands at the counter waiting for AWS Lambda to answer, for at most 29 seconds (the API Gateway limit). An asynchronous call is like checked baggage — you drop the bag at the S3 counter and walk away; AWS Lambda retries up to 2 times automatically, and on final failure the event goes to a DLQ / Destination as lost-and-found. A poll-based event source is like the sensors on the conveyor belt — the AWS Lambda service itself patrols SQS, Kinesis, and DynamoDB Streams, bundles a batch of bags, and processes them in one go.

String the three analogies together and AWS Lambda's four defining traits — serverless, event-driven, pre-warmed, batch-processing — all come into focus.

An AWS Lambda execution environment is the isolated micro-VM (Firecracker) that hosts a single concurrent invocation. It contains the runtime, your deployment package (or container image), the /tmp filesystem, and any Lambda Extensions. AWS Lambda freezes the environment between invocations and reuses it for warm starts until the environment is recycled. Reference: https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html

The AWS Lambda Handler: Signatures Per Runtime

Every AWS Lambda function has exactly one entry point — the handler. The handler signature is runtime-specific, and DVA-C02 loves to ask which signature matches which language.

Node.js Handler Signature

For Node.js (currently Node.js 18.x and 20.x), AWS Lambda expects an async function or a callback-style function:

exports.handler = async (event, context) => {
  // event: the invocation payload
  // context: runtime metadata (requestId, remaining time, log group)
  return { statusCode: 200, body: "ok" };
};

The two standard parameters are event (invocation payload, already JSON-parsed) and context (Lambda runtime metadata). Node.js supports async/await natively; older callback style (event, context, callback) still works.

Python Handler Signature

For Python (3.9, 3.10, 3.11, 3.12), AWS Lambda expects a two-parameter function in a module:

def lambda_handler(event, context):
    return {"statusCode": 200, "body": "ok"}

Configure the handler as module_name.lambda_handler in the AWS Lambda console or SAM template.
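
For illustration, a hypothetical AWS SAM fragment wiring that handler up (the file is assumed to be src/app.py — names are placeholders):

```yaml
# The Handler value is <module_name>.<function_name>:
# app.py containing lambda_handler => app.lambda_handler
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.12
      CodeUri: src/
```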

Java Handler Signature

Java (11, 17, 21) supports two styles. The AWS Lambda interface RequestHandler<I, O>:

public class Handler implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
  public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent event, Context context) { ... }
}

Or the raw RequestStreamHandler for binary streams. Java AWS Lambda functions have historically suffered the worst cold starts — AWS Lambda SnapStart exists to fix this specifically.

.NET Handler Signature

.NET (6, 8) handlers are static or instance methods:

public class Function {
  public APIGatewayProxyResponse FunctionHandler(APIGatewayProxyRequest request, ILambdaContext context) { ... }
}

Configure the handler as AssemblyName::Namespace.Function::FunctionHandler.

Go Handler Signature

Go previously had a dedicated managed runtime (go1.x, now deprecated); today AWS Lambda runs Go on the provided.al2023 custom runtime or as a container image, using the AWS Lambda Go library:

func Handler(ctx context.Context, event events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) { ... }
func main() { lambda.Start(Handler) }

Ruby Handler Signature

Ruby (3.2, 3.3) expects a top-level method:

def lambda_handler(event:, context:)
  { statusCode: 200, body: "ok" }
end

All AWS Lambda handlers accept (event, context). Node.js returns a value or Promise; Python returns a dict; Java implements RequestHandler<I, O>; .NET uses ILambdaContext; Go uses lambda.Start(Handler); Ruby uses keyword args (event:, context:). Memorize "event + context, runtime-shaped" and you will never miss a handler question on DVA-C02. Reference: https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html

AWS Lambda Runtime Support

AWS Lambda provides managed runtimes that include the language interpreter, the AWS SDK, and the Lambda runtime API shim. DVA-C02 expects you to recognize every supported runtime family.

Managed Runtimes

AWS Lambda currently offers managed runtimes for:

  • Node.js — 18.x, 20.x (LTS releases).
  • Python — 3.9, 3.10, 3.11, 3.12.
  • Java — 11, 17, 21 (Corretto).
  • .NET — 6, 8 (LTS).
  • Ruby — 3.2, 3.3.

Older runtimes (Node.js 16, Python 3.7, Java 8, .NET Core 3.1) are either deprecated or already removed. On DVA-C02, any question mentioning a "deprecated Lambda runtime" wants you to migrate to a supported version.

Custom Runtimes and provided.al2 / provided.al2023

AWS Lambda supports custom runtimes via the provided.al2 and provided.al2023 base runtimes (Amazon Linux 2 and Amazon Linux 2023). You implement the Lambda runtime API — an HTTP contract with /runtime/invocation/next, /runtime/invocation/{requestId}/response, and /runtime/invocation/{requestId}/error endpoints — and deliver the runtime as a Lambda Layer or as part of the deployment package. Go, Rust, and any custom language use this path.

Container Image Deployment

AWS Lambda accepts container images up to 10 GB in size from Amazon ECR. You build a Docker image from the AWS-provided base images (e.g., public.ecr.aws/lambda/python:3.12) or from scratch as long as you implement the Runtime API. Container images unlock large ML models, custom system libraries, and existing container-based CI pipelines. Everything else — triggering, concurrency, billing — stays identical to ZIP-deployed AWS Lambda functions.

Choose ZIP deployment for most AWS Lambda functions under 250 MB unzipped. Choose container image when your dependencies exceed 250 MB unzipped, you need custom system libraries, or your team already ships container images through a unified pipeline. The 10 GB container image limit buys huge dependency headroom (think ML inference with PyTorch). Reference: https://docs.aws.amazon.com/lambda/latest/dg/images-create.html
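
A minimal Dockerfile sketch for the container path, following the shape AWS documents for its Python base image (file names are illustrative):

```dockerfile
# Package app.py's lambda_handler as a Lambda container image.
FROM public.ecr.aws/lambda/python:3.12
# LAMBDA_TASK_ROOT is set by the base image (/var/task).
COPY requirements.txt ${LAMBDA_TASK_ROOT}
RUN pip install -r requirements.txt
COPY app.py ${LAMBDA_TASK_ROOT}
# CMD is the handler string - same module.function format as ZIP deploys.
CMD ["app.lambda_handler"]
```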

AWS Lambda Event Sources: Synchronous vs Asynchronous vs Poll-Based

This is the single most-tested AWS Lambda concept on DVA-C02. Every event source falls into one of three categories, and each category has different retry, DLQ, and concurrency behavior.

Synchronous Invocation

In synchronous AWS Lambda invocation, the caller holds the connection open and receives the response. The caller is responsible for retries. Common synchronous event sources:

  • Amazon API Gateway (REST, HTTP, WebSocket).
  • Application Load Balancer (ALB target).
  • Amazon Cognito (triggers).
  • AWS Lambda function URLs.
  • AWS SDK Invoke API with InvocationType=RequestResponse (the default).
  • Amazon Lex, Alexa, CloudFront (Lambda@Edge).

Synchronous payload limit: 6 MB request and response.

Asynchronous Invocation

In asynchronous AWS Lambda invocation, the caller hands the event to AWS Lambda, AWS Lambda queues it internally, and the caller gets an immediate 202 Accepted. AWS Lambda then invokes the function. Common asynchronous event sources:

  • Amazon S3 event notifications.
  • Amazon SNS.
  • Amazon EventBridge (rules and scheduler).
  • AWS SDK Invoke with InvocationType=Event.
  • AWS CodeCommit, AWS Config, AWS IoT, Amazon SES (receive rule).

Asynchronous payload limit: 256 KB per event.

Asynchronous invocation retries up to 2 times (3 total attempts) with exponential backoff when the function returns an error. On final failure, AWS Lambda sends the event to the Dead-Letter Queue (DLQ) or to a Lambda Destination (on-failure).
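
The retry-then-DLQ control flow can be sketched as a toy simulation — this is not the Lambda service, just the policy it applies (1 initial attempt, up to 2 retries, then dead-letter):

```python
def invoke_async(handler, event, max_retries=2):
    """Toy model of Lambda's async retry policy: one initial attempt plus
    up to max_retries retries; on final failure the event goes to the
    DLQ / on-failure destination. Real backoff waits minutes between tries."""
    dlq = []
    for attempt in range(1 + max_retries):   # 3 total attempts by default
        try:
            return handler(event), dlq
        except Exception:
            pass                             # Lambda backs off here, then retries
    dlq.append(event)                        # all attempts failed
    return None, dlq

# A handler that always fails, to demonstrate the DLQ path:
calls = []
def always_fails(event):
    calls.append(event)
    raise RuntimeError("boom")
```

Running invoke_async(always_fails, event) attempts the handler three times and lands the event in the simulated DLQ.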

Poll-Based Invocation (Event Source Mappings)

Poll-based, also called event source mappings, means AWS Lambda itself polls the source and delivers batches. Common poll-based sources:

  • Amazon SQS (Standard and FIFO queues).
  • Amazon Kinesis Data Streams.
  • Amazon DynamoDB Streams.
  • Amazon MQ (Apache ActiveMQ, RabbitMQ).
  • Self-managed / Amazon MSK (Kafka).
  • Amazon DocumentDB streams.

AWS Lambda batches messages (configurable batch size), invokes the function synchronously with the batch, and — on success — deletes the messages from the source (SQS) or advances the shard iterator (Kinesis / DynamoDB Streams).
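
For SQS sources, the batch-success model has an important refinement: with ReportBatchItemFailures enabled on the event source mapping, the handler can report only the failed messages so successes are deleted and only failures return to the queue. A sketch (the "poison" field is an invented stand-in for a real processing error):

```python
import json

def lambda_handler(event, context):
    """Process an SQS batch and return a partial batch response: Lambda
    deletes the successful messages and returns only the listed failures
    to the queue. Requires ReportBatchItemFailures on the mapping."""
    failures = []
    for record in event["Records"]:
        try:
            body = json.loads(record["body"])
            if body.get("poison"):            # stand-in for a real failure
                raise ValueError("cannot process")
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

Without this response shape, one poison message fails the whole batch and every message in it returns to the queue.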

Synchronous = caller waits, caller retries, 6 MB payload. Asynchronous = caller fire-and-forget, AWS Lambda retries 2 times, 256 KB payload, supports DLQ/Destinations. Poll-based = AWS Lambda polls, delivers batches, checkpoint-based. Memorize which source belongs to which category — this one table answers 30+ distinct DVA-C02 scenario patterns. Reference: https://docs.aws.amazon.com/lambda/latest/dg/lambda-invocation.html

AWS Lambda Memory, CPU, and Timeout

AWS Lambda bills you for memory allocated multiplied by duration — so memory choice is also a CPU and cost decision.

Memory: 128 MB to 10,240 MB

You configure AWS Lambda memory between 128 MB and 10,240 MB (10 GB), in 1 MB increments. CPU is allocated proportionally to memory — at approximately 1,769 MB your function receives one full vCPU, and at 10,240 MB it receives roughly six vCPUs. This means AWS Lambda performance tuning is really memory tuning.
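
The billing formula behind "memory tuning is cost tuning" is simple GB-second arithmetic. A back-of-envelope sketch — the rates below are the commonly cited x86 us-east-1 figures and should be treated as assumptions (check the current pricing page):

```python
def invocation_cost(memory_mb, duration_ms,
                    gb_second_rate=0.0000166667, request_fee=0.0000002):
    """Back-of-envelope Lambda cost: allocated memory (GB) x billed
    duration (s) x GB-second rate, plus a flat per-request fee.
    Rates are illustrative assumptions, not authoritative pricing."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * gb_second_rate + request_fee

# Doubling memory doubles GB-second cost for the same duration - but more
# memory also means more CPU, which can shrink the duration.
```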

Timeout: Up to 900 Seconds

Every AWS Lambda function has a configurable timeout from 1 second up to 900 seconds (15 minutes). The default is 3 seconds — a frequent source of production bugs. If your function exceeds the timeout, AWS Lambda kills the invocation and logs Task timed out after X.XX seconds.

Ephemeral Storage (/tmp): Up to 10 GB

Every AWS Lambda execution environment ships with /tmp, a writable ephemeral filesystem. The default size is 512 MB; you can configure up to 10,240 MB (10 GB). /tmp persists across warm invocations on the same execution environment (useful for caching downloaded files) but is lost when the environment is recycled.

Payload Limits

  • Synchronous invocation: 6 MB request and response.
  • Asynchronous event: 256 KB per event.
  • Deployment package: 50 MB zipped direct upload, 250 MB unzipped from S3.
  • Container image: 10 GB.

A common production bug in AWS Lambda: leaving the default 3-second timeout and then calling a downstream service that takes 5 seconds. Your function times out, the client retries, and you pile on duplicate work. On DVA-C02, if a scenario mentions "timeout," always compute: does the default 3 s plus any downstream latency exceed the need? Adjust timeout deliberately per function. Reference: https://docs.aws.amazon.com/lambda/latest/dg/configuration-function-common.html

AWS Lambda Environment Variables and Encryption

AWS Lambda environment variables let you inject configuration (database URLs, feature flags, regional endpoints) without rebuilding code.

Plain-Text vs Encrypted Variables

By default, AWS Lambda encrypts environment variables at rest with an AWS-managed KMS key. For extra protection — or to keep values encrypted client-side in the Lambda console — you can enable encryption helpers and supply a customer-managed KMS key; your handler then calls kms:Decrypt to recover the sensitive values at runtime. Best practice on DVA-C02: use AWS Secrets Manager or AWS Systems Manager Parameter Store for true secrets, and reserve environment variables for non-sensitive configuration.

Total Size Limit: 4 KB

All AWS Lambda environment variables combined cannot exceed 4 KB. If you need more configuration, mount it from Parameter Store or read it from /tmp at startup.

Reserved Environment Variables

AWS Lambda injects several reserved environment variables — AWS_REGION, AWS_LAMBDA_FUNCTION_NAME, AWS_LAMBDA_FUNCTION_VERSION, AWS_LAMBDA_LOG_GROUP_NAME, AWS_LAMBDA_FUNCTION_MEMORY_SIZE, and credential variables. You cannot override them.

AWS Lambda Layers: Sharing Code and Dependencies

Lambda Layers let you package shared dependencies separately from your function code.

What Is a Lambda Layer?

A Lambda Layer is a ZIP archive of libraries, custom runtimes, or other function dependencies that you upload once and attach to many AWS Lambda functions. At invocation time, AWS Lambda extracts the Layer into /opt inside the execution environment. Your handler imports from /opt as if the files were bundled in the function.

Layer Limits

  • A single AWS Lambda function can reference up to 5 Layers.
  • The total unzipped size of function code plus all attached Layers cannot exceed 250 MB.
  • Layers are versioned — each publish produces a new immutable version.
  • Layers can be shared cross-account via resource-based policies.

Typical Layer Use Cases

  • Shared utility libraries across a microservice fleet.
  • Third-party dependencies too heavy to inline in each function (NumPy, Pandas, Sharp).
  • Custom runtimes (Go, Rust, Bash) for the provided.al2023 base.
  • AWS Lambda Extensions (see next section).

5 Layers maximum per AWS Lambda function. 250 MB maximum total unzipped (function code + all Layers combined). These two numbers answer most DVA-C02 Layer questions directly. Reference: https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html

AWS Lambda Extensions

Lambda Extensions are companion processes that run alongside your handler inside the execution environment.

What Extensions Do

Extensions hook into the AWS Lambda lifecycle — INIT, INVOKE, SHUTDOWN — and run in parallel with your function. They are perfect for observability agents, secrets caching, and configuration fetchers that should not block the handler. Popular extensions include the AWS AppConfig agent, the AWS Parameters and Secrets Lambda Extension, Datadog, New Relic, Dynatrace, and Lumigo.

Internal vs External Extensions

  • External extensions run as separate processes. AWS Lambda starts each executable it finds under /opt/extensions/; the extension then subscribes to lifecycle events and communicates with AWS Lambda via the Extensions API. External extensions can keep running during SHUTDOWN (up to 2 seconds) to flush telemetry.
  • Internal extensions run inside the runtime process (e.g., as JVM agents for Java).

Extensions are delivered as Lambda Layers, which is why they share the 5-Layer and 250 MB unzipped limits.

AWS Lambda VPC Configuration

AWS Lambda functions run in an AWS-managed VPC by default. If you need access to private resources — Amazon RDS in a private subnet, ElastiCache, internal services behind PrivateLink — you enable VPC access.

How VPC-Connected Lambda Works

When you attach AWS Lambda to a VPC, you pick one or more subnets and one or more security groups. AWS Lambda uses Hyperplane ENIs — shared elastic network interfaces — so that scaling to thousands of concurrent invocations does not create thousands of ENIs. Hyperplane ENIs are provisioned lazily on the first function creation / configuration change and persist for reuse.

Cold Start Implications

Historically (pre-2019) VPC AWS Lambda cold starts added 10+ seconds because AWS Lambda attached a dedicated ENI per sandbox. Today, Hyperplane ENIs reduced VPC cold start overhead to tens of milliseconds. Still, the DVA-C02 exam may test this history — remember that attaching AWS Lambda to a VPC is no longer a cold-start showstopper, but it does require subnets in multiple AZs for HA and IP capacity planning.

Outbound Internet Access from VPC-Attached Lambda

A VPC-attached AWS Lambda cannot reach the public internet unless the subnet routes outbound traffic through a NAT Gateway or through a VPC Endpoint for the specific AWS service. Common deployment pattern: VPC-attached AWS Lambda + VPC Endpoint for Amazon S3, DynamoDB, Secrets Manager, etc., to avoid NAT data costs.

A classic DVA-C02 trap: a VPC-attached AWS Lambda cannot reach Stripe / Twilio / a public API because the private subnet has no route to the internet. Add a NAT Gateway in a public subnet, or better, use a VPC Endpoint if the target is an AWS service (S3, DynamoDB, Secrets Manager, KMS, STS). Reference: https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html

AWS Lambda Execution Role and Resource Policies

AWS Lambda permissions flow through two different IAM constructs: the execution role and the resource-based policy.

The Lambda Execution Role

The execution role is an IAM role that AWS Lambda assumes to call AWS APIs from inside the function. Its trust policy allows lambda.amazonaws.com to assume it; its permission policy grants whatever your code needs — dynamodb:PutItem, s3:GetObject, kms:Decrypt, logs:CreateLogStream, etc. At minimum, every AWS Lambda function needs the AWS-managed AWSLambdaBasicExecutionRole policy (CloudWatch Logs write).

The Lambda Resource-Based Policy

The resource-based policy (also called the function policy) controls who can invoke the AWS Lambda function. It is attached directly to the function. When Amazon S3, API Gateway, SNS, or EventBridge wants to invoke your function, the resource-based policy must explicitly allow the calling service principal. You typically set it with aws lambda add-permission or via SAM Events: blocks.

Two Roles, Two Questions

  • Execution role answers: "What can the AWS Lambda function do to AWS?"
  • Resource-based policy answers: "Who is allowed to invoke this AWS Lambda function?"

On DVA-C02, when a question asks "why can S3 not trigger my Lambda?" the answer is almost always the resource-based policy (the function policy), not the execution role. When a question asks "why can my Lambda not read from DynamoDB?" the answer is the execution role. These two flip all the time in scenario wording. Reference: https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html
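
For illustration, here is the shape of a resource-based policy statement that lets S3 invoke a function — the kind of statement aws lambda add-permission produces. Account ID, region, function name, and bucket are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowS3Invoke",
    "Effect": "Allow",
    "Principal": { "Service": "s3.amazonaws.com" },
    "Action": "lambda:InvokeFunction",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:MyFunction",
    "Condition": {
      "ArnLike": { "AWS:SourceArn": "arn:aws:s3:::my-bucket" }
    }
  }]
}
```

Note the direction: this policy grants the S3 service principal permission to invoke the function; it grants the function no permissions at all — that is the execution role's job.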

AWS Lambda Destinations and Dead-Letter Queues

For asynchronous invocations, AWS Lambda can route the result (success or failure) to another AWS service automatically.

Lambda Destinations

Lambda Destinations let you configure an on-success destination and an on-failure destination per function. Supported destinations include Amazon SQS, Amazon SNS, Amazon EventBridge, and another AWS Lambda function. Destinations carry richer metadata than DLQs (the original event, response payload or error info, invocation context) and are the modern best practice.

Dead-Letter Queues (DLQ)

The older mechanism is the Dead-Letter Queue — an SQS queue or SNS topic attached to the function that receives events after all retry attempts fail. DLQs still work and still appear on the exam, but AWS recommends Destinations for new work because the payload is richer.

Retry Behavior Summary

  • Synchronous: caller retries. AWS Lambda itself does not retry.
  • Asynchronous: AWS Lambda retries up to 2 times with exponential backoff (maximum age configurable, default 6 hours). After failure, send to Destination (on-failure) or DLQ.
  • Poll-based:
    • SQS: the message returns to the queue after the visibility timeout expires and goes to the SQS DLQ once maxReceiveCount is exceeded.
    • Kinesis / DynamoDB Streams: AWS Lambda retries with exponential backoff up to MaximumRetryAttempts or until MaximumRecordAgeInSeconds elapses; optional BisectBatchOnFunctionError bisects the batch to isolate poison pills; failure records can go to an on-failure destination.

A surprisingly common DVA-C02 trap: "API Gateway → Lambda fails, does AWS Lambda retry automatically?" Answer: no. Synchronous AWS Lambda returns the error to the caller. If API Gateway is the caller, the client must retry (or API Gateway can be configured with a Step Functions integration that retries). DLQs and Destinations do NOT fire on synchronous failures. Reference: https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html

AWS Lambda Versions and Aliases

AWS Lambda versions and aliases are the foundation of safe AWS Lambda deployment.

Lambda Versions

Every time you publish an AWS Lambda version, AWS Lambda freezes the code and configuration as an immutable snapshot with a numeric ID (1, 2, 3, ...). $LATEST is the mutable working copy — not suitable for production traffic. Published versions expose a unique ARN: arn:aws:lambda:REGION:ACCOUNT:function:NAME:VERSION.

Lambda Aliases

A Lambda alias is a named pointer (e.g., prod, staging, canary) that targets a specific version or splits weighted traffic across two versions. Aliases provide a stable ARN that callers use regardless of which underlying version is currently deployed.

Weighted Aliases for Traffic Shifting

Aliases support weighted routing — e.g., 95% of invocations go to version 7, 5% to version 8. This is the primitive that AWS CodeDeploy uses to implement canary (one shift at X%, wait, then 100%) and linear (N% every M minutes) traffic shifting for AWS Lambda deployments. DVA-C02 frequently tests canary vs linear vs all-at-once AWS Lambda deployment configurations.
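
The weighted split can be modeled as a simple random draw — a toy sketch of the concept, not Lambda's actual routing algorithm:

```python
import random

def route(primary, routing_config, rng=random.random):
    """Toy model of a weighted Lambda alias: routing_config maps an
    additional version to its traffic fraction (e.g. {"8": 0.05});
    the remaining traffic goes to the primary version."""
    for version, weight in routing_config.items():
        if rng() < weight:
            return version
    return primary

# A 95/5 canary between version 7 and version 8:
# route("7", {"8": 0.05})
```

CodeDeploy automates moving that weight over time: canary shifts once and then cuts over; linear shifts in equal steps on a schedule.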

AWS Lambda SnapStart

AWS Lambda SnapStart is the silver bullet for Java cold-start latency.

How SnapStart Works

SnapStart (available for Java 11, 17, and 21 managed runtimes, plus Python and .NET in newer releases) snapshots the initialized execution environment after INIT phase — including the JVM heap, loaded classes, and any runtime-initialized state — and caches it. On cold invocation, AWS Lambda resumes from the snapshot instead of running INIT from scratch. Java cold starts drop from 2–6 seconds to 200–400 milliseconds.

SnapStart Caveats

  • Requires published versions (not $LATEST).
  • Uniqueness must be re-seeded after resume (random numbers, crypto RNG, cached credentials, DB connection IDs). Use the CRaC runtime hooks (beforeCheckpoint / afterRestore on the org.crac Resource interface) to refresh state.
  • No additional cost (at launch in late 2022). Check current pricing page for updates.

Enable AWS Lambda SnapStart for any latency-sensitive Java function behind API Gateway or ALB. It is the highest-ROI performance switch in AWS Lambda for the Java ecosystem. For Node.js and Python, cold starts are already short enough that SnapStart matters less — prefer Provisioned Concurrency if you need sub-100 ms guaranteed warm starts. Reference: https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html

AWS Lambda Concurrency: Provisioned vs Reserved

Concurrency is the most commonly confused AWS Lambda topic on DVA-C02. These are two different knobs with different purposes.

Account Concurrency Limits

Every AWS account starts with a regional default limit of 1,000 concurrent AWS Lambda executions (raisable via a Service Quotas increase request). Beyond the initial burst of 500–3,000 (region-dependent), AWS Lambda adds concurrency at 1,000 per minute.

Reserved Concurrency

Reserved Concurrency carves out a guaranteed slice of the account pool for one function. If you set Reserved = 100 on function A:

  • Function A can use up to 100 concurrent executions (and only up to 100).
  • The other 900 is shared by every other function in the account.
  • Reserved Concurrency acts as both a floor (guarantees capacity) and a ceiling (caps the function).

Use Reserved Concurrency to (a) protect a critical function from starvation and (b) rate-limit a function that talks to a downstream service with connection limits (e.g., RDS).

Provisioned Concurrency

Provisioned Concurrency pre-initializes N execution environments so they are fully warm when traffic arrives. If you set Provisioned = 50 on alias prod:

  • 50 sandboxes are pre-warmed, fully through the INIT phase.
  • Up to 50 concurrent invocations hit warm environments with near-zero cold start.
  • Invocations beyond 50 fall back to on-demand (and may cold-start) unless you combine with Reserved Concurrency.
  • You pay a small per-GB-second fee for pre-warmed capacity, plus normal invocation cost.

Provisioned Concurrency integrates with Application Auto Scaling for scheduled (business-hours) or target-tracking scaling.

Reserved Concurrency = slice of the concurrency pool (floor + ceiling), free, does NOT remove cold starts. Provisioned Concurrency = pre-warmed sandboxes, costs money, removes cold starts. Use both together when you need guaranteed latency AND guaranteed capacity. Confusing the two is one of the most-missed exam questions. Reference: https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html

AWS Lambda X-Ray Active Tracing

AWS X-Ray active tracing turns AWS Lambda functions into first-class citizens in distributed traces.

Enabling Active Tracing

Toggle TracingConfig: Active on the function (console checkbox, SAM Tracing: Active, or CLI --tracing-config Mode=Active). Two things happen:

  • AWS Lambda automatically wraps each invocation in an X-Ray segment and records init, invocation, and any AWS SDK calls (if you use the X-Ray SDK instrumented client).
  • The sampling decision is made by AWS Lambda itself — approximately 1 request per second plus 5% of additional requests (configurable via X-Ray sampling rules).

Segments, Subsegments, Annotations, Metadata

  • Segment: the whole AWS Lambda invocation.
  • Subsegment: a downstream call (DynamoDB query, HTTPS call).
  • Annotations: indexed key-value pairs (filter by in the X-Ray console).
  • Metadata: non-indexed structured data.

The X-Ray topic explores this in depth; for AWS Lambda fundamentals, remember that active tracing is a single switch and requires the execution role to have AWSXRayDaemonWriteAccess (or equivalent xray:PutTraceSegments, xray:PutTelemetryRecords).

AWS Lambda Power Tuning

AWS Lambda Power Tuning is an open-source state machine (Step Functions) that finds the optimal memory setting for a function.

How Power Tuning Works

You deploy the Power Tuning state machine from the AWS Serverless Application Repository. You pass in the function ARN, a payload, and a list of memory sizes to test (e.g., 128, 512, 1024, 1536, 3008). The state machine invokes the function N times at each memory setting, measures duration and cost, and returns a visualization showing the sweet spot — often counterintuitive, because higher memory gives more CPU and can make a function both faster and cheaper.
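
The "sweet spot" computation reduces to comparing compute cost per invocation across memory sizes. A toy sketch with invented duration measurements (the rate is an illustrative assumption):

```python
def cheapest_config(measurements, gb_second_rate=0.0000166667):
    """Toy version of what Lambda Power Tuning computes: given measured
    average duration (ms) per memory size (MB), return the memory size
    with the lowest per-invocation compute cost."""
    def cost(mb, ms):
        return (mb / 1024) * (ms / 1000) * gb_second_rate
    return min(measurements, key=lambda mb: cost(mb, measurements[mb]))

# Invented measurements: more memory -> more CPU -> shorter duration,
# until the function stops being CPU-bound and the curve flattens.
measured = {128: 4200, 512: 1100, 1024: 520, 1536: 500, 3008: 490}
```

With these invented numbers, 1024 MB beats 128 MB on cost despite allocating eight times the memory — the counterintuitive result Power Tuning exists to surface.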

Power Tuning Strategies

  • Speed: optimize purely for shortest duration.
  • Cost: optimize for lowest invocation cost.
  • Balanced: weighted blend of both.

On DVA-C02, recognize AWS Lambda Power Tuning as the recommended approach for empirical memory sizing rather than guessing.

AWS Lambda Limits Cheat Sheet

Memorize these AWS Lambda hard limits cold:

  • Timeout: 1 s – 900 s (15 min) per invocation.
  • Memory: 128 MB – 10,240 MB (10 GB).
  • Ephemeral /tmp: 512 MB default, up to 10,240 MB (10 GB).
  • Deployment package: 50 MB zipped direct, 250 MB unzipped from S3, 10 GB container image.
  • Payload: 6 MB synchronous, 256 KB asynchronous.
  • Layers: 5 per function, 250 MB unzipped total (function + layers).
  • Environment variables: 4 KB total.
  • Concurrent executions: 1,000 default per region per account (soft limit).
  • Burst concurrency: 500–3,000 instant, then +1,000 per minute.
  • Async retry attempts: 0, 1, or 2 (default 2).
  • Async event max age: 60 s – 6 h (default 6 h).
  • Function / layer / alias name length: 64 characters.

AWS Lambda Common Exam Traps

Knowing the traps is worth as many points as knowing the docs.

Trap 1 — Synchronous DLQ Does Not Fire

DLQs and Destinations fire only on asynchronous failures. Synchronous failures (API Gateway, ALB, direct SDK RequestResponse) return the error to the caller — AWS Lambda does not retry, DLQ does not receive anything.

Trap 2 — Cold Start Mitigation Ladder

When a question asks "how to remove cold starts":

  1. Trim deployment package, move heavy init outside the handler.
  2. For Java, enable SnapStart.
  3. For any runtime, enable Provisioned Concurrency (costs money).
  4. Reserved Concurrency alone does not remove cold starts — distractor.

Trap 3 — Reserved vs Provisioned Concurrency Naming

"Reserved" sounds like it pre-reserves capacity, but it does not pre-warm sandboxes. "Provisioned" is the one that pre-warms. Re-read slowly whenever these two words appear.

Trap 4 — VPC Lambda Outbound Internet

VPC-attached AWS Lambda cannot reach the public internet from a private subnet without a NAT Gateway. Prefer VPC Endpoints for AWS-service targets.

Trap 5 — Execution Role vs Function Resource Policy

"Why can S3 not invoke my Lambda?" → resource-based policy. "Why can my Lambda not write to DynamoDB?" → execution role. Different directions.

Trap 6 — Layers and Total Unzipped Size

5 Layers is the count limit. 250 MB is the combined unzipped limit of function code plus all Layers. Forgetting the combined rule catches candidates.

Trap 7 — Container Image vs ZIP Limits

ZIP deployment = 50 MB direct / 250 MB unzipped. Container image = 10 GB. If the scenario has "ML model > 250 MB," the answer is container image.

Trap 8 — Async Retries Number

AWS Lambda asynchronous invocation retries twice (total 3 attempts), not three. This is configurable to 0, 1, or 2.

Asynchronous AWS Lambda retries are 0, 1, or 2 additional attempts — default 2 additional attempts, which is 3 total executions. Writing "3 retries" on the exam is wrong. Memorize: 2 additional retries, 3 total attempts by default. Reference: https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html

AWS Lambda vs EC2 vs Fargate — When to Choose Each

DVA-C02 frames many compute questions as three-way choices. Use this decision tree.

Choose AWS Lambda When

  • Workload is event-driven or bursty.
  • Each execution runs under 15 minutes.
  • You want zero server management.
  • Billing should scale to zero when idle.
  • Payload is under 6 MB (sync) or 256 KB (async).

Choose AWS Fargate When

  • Workload runs in a container.
  • Each task may run hours or days.
  • You want zero host management but need container APIs.
  • You already use Amazon ECS or Amazon EKS.

Choose Amazon EC2 When

  • Workload needs specific OS tuning, persistent connections, or GPUs.
  • You have Reserved Instances or Savings Plans to apply.
  • You need Spot Instances for cost optimization.
  • You run stateful workloads that survive restarts.
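The three bullet lists above reduce to a small decision function. A deliberately simplified sketch — real scenarios weigh more factors (payload size, existing Savings Plans, team skills), but this captures the exam heuristics:

```python
def choose_compute(event_driven, max_runtime_minutes, containerized,
                   needs_gpu_or_os_tuning):
    """Toy decision tree for the Lambda / Fargate / EC2 three-way choice."""
    if needs_gpu_or_os_tuning:
        return "EC2"       # OS tuning, GPUs, persistent state -> EC2
    if event_driven and max_runtime_minutes <= 15:
        return "Lambda"    # bursty, short-lived, scale-to-zero
    if containerized:
        return "Fargate"   # long-running containers, no host management
    return "EC2"
```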

AWS Lambda Observability: CloudWatch Logs and Metrics

Every AWS Lambda invocation writes to CloudWatch Logs and emits built-in metrics.

Built-In CloudWatch Metrics

  • Invocations — count of invocations.
  • Duration — execution time in ms.
  • Errors — count of failures.
  • Throttles — count of throttled invocations (over concurrency limit).
  • IteratorAge (stream sources: Kinesis, DynamoDB Streams) — age of the last record processed; a rising value means the function is falling behind the stream.
  • ConcurrentExecutions — current concurrency.
  • ProvisionedConcurrencyUtilization — used fraction of Provisioned Concurrency.
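These built-in metrics are usually combined with CloudWatch metric math for alarms — a common expression is an error rate of `100 * Errors / Invocations`. A local sketch of that arithmetic (the thresholds are illustrative, not AWS defaults):

```python
def error_rate(errors, invocations):
    """Error rate as a percentage, mirroring the CloudWatch metric-math
    expression 100 * Errors / Invocations used in alarm definitions."""
    if invocations == 0:
        return 0.0  # no traffic, no error rate
    return 100.0 * errors / invocations

# e.g. 3 errors across 600 invocations -> 0.5% error rate,
# comfortably under a hypothetical 1% alarm threshold.
```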

CloudWatch Logs Auto-Provisioning

Every AWS Lambda function writes to /aws/lambda/FUNCTION_NAME. AWS Lambda auto-creates the log group on the first invocation, not at deploy time; pre-create it in IaC if you want a specific retention policy applied from day one.
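Pre-creating the group in IaC looks like this CloudFormation fragment — a sketch, with a hypothetical function name; the only requirement is that LogGroupName matches the /aws/lambda/FUNCTION_NAME convention:

```yaml
# Hypothetical template fragment: create the log group yourself so the
# 14-day retention applies from the very first invocation.
Resources:
  ProcessorLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /aws/lambda/order-processor   # must match the function name
      RetentionInDays: 14
```

Without this resource, Lambda creates the group with "Never expire" retention, and logs accumulate cost indefinitely until someone changes it.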

FAQ — AWS Lambda Fundamentals

Q1. What is AWS Lambda in one sentence for DVA-C02?

AWS Lambda is a serverless compute service that runs your handler code in response to synchronous, asynchronous, or poll-based events without any server provisioning or patching, bills per millisecond of execution time multiplied by allocated memory, and caps each invocation at 15 minutes and 10 GB of memory. For DVA-C02, AWS Lambda is the default answer for "event-driven," "no server management," and "pay-per-use" scenarios.

Q2. What is the difference between synchronous, asynchronous, and poll-based AWS Lambda event sources?

Synchronous sources (API Gateway, ALB, AWS SDK RequestResponse) make the caller wait for the AWS Lambda response and handle retries themselves. Asynchronous sources (S3, SNS, EventBridge, SDK Event) hand the event off; AWS Lambda retries up to 2 times and can route failures to a DLQ or Destination. Poll-based sources (SQS, Kinesis, DynamoDB Streams, MSK, MQ) are pulled by AWS Lambda itself, which delivers batches and checkpoints on success. This three-way split determines retry behavior, DLQ availability, payload limits, and concurrency dynamics on DVA-C02.

Q3. How do Reserved Concurrency and Provisioned Concurrency differ in AWS Lambda?

Reserved Concurrency is a free floor-and-ceiling setting that reserves a slice of the account's concurrency pool for one AWS Lambda function — it guarantees capacity and simultaneously caps the function, but it does not remove cold starts. Provisioned Concurrency pre-initializes execution environments so that invocations up to N arrive on warm sandboxes with near-zero cold-start latency; it costs money and can scale with Application Auto Scaling. Combine both when you need guaranteed capacity AND guaranteed warm latency.

Q4. What memory range and timeout does AWS Lambda support, and how do they interact?

AWS Lambda supports memory from 128 MB to 10,240 MB in 1 MB steps, and timeout from 1 second to 900 seconds (15 minutes). CPU is allocated proportionally to memory — at ~1,769 MB you get one full vCPU, and at 10,240 MB roughly six vCPUs. So increasing memory usually makes AWS Lambda faster, sometimes cheaper, and is the primary performance tuning knob. Use the AWS Lambda Power Tuning tool to find the empirical sweet spot.
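The "faster, sometimes cheaper" claim follows from the billing formula: cost per invocation is GB-seconds times a per-GB-second rate. A sketch with an illustrative rate (check current regional pricing) showing that doubling memory can lower the bill when duration drops by more than half:

```python
def invocation_cost(memory_mb, duration_ms, price_per_gb_second=0.0000166667):
    """Cost of one invocation: (memory in GB) * (duration in s) * rate.
    The rate here is illustrative; check current regional pricing."""
    return (memory_mb / 1024) * (duration_ms / 1000) * price_per_gb_second

# 512 MB running for 1000 ms vs 1024 MB running for 400 ms: the bigger
# configuration gets proportionally more CPU, finishes sooner, and is
# cheaper per invocation despite double the memory.
small = invocation_cost(512, 1000)   # 0.5 GB * 1.0 s
large = invocation_cost(1024, 400)   # 1.0 GB * 0.4 s
```

This is exactly the trade-off the AWS Lambda Power Tuning tool explores empirically across memory settings.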

Q5. What are AWS Lambda Layers and what limits apply?

A Lambda Layer is a ZIP of shared code or dependencies that extracts into /opt inside the execution environment. An AWS Lambda function can reference up to 5 Layers, and the combined unzipped size of function code plus all attached Layers cannot exceed 250 MB. Layers are versioned and shareable cross-account via resource-based policies. Use Layers for shared utilities, heavy third-party dependencies, custom runtimes, and AWS Lambda Extensions.

Q6. What is the difference between the AWS Lambda execution role and the resource-based policy?

The execution role is the IAM role AWS Lambda assumes to call AWS APIs from inside the function — it answers "what can the code do?" The resource-based policy (function policy) attaches to the function and answers "who can invoke the function?" When S3, SNS, or API Gateway triggers AWS Lambda, the resource-based policy must allow the calling service; when the handler calls DynamoDB or S3, the execution role must allow those actions. Swapping these two is one of the most common DVA-C02 traps.

Q7. How does AWS Lambda in a VPC affect cold starts and internet access?

Modern VPC-attached AWS Lambda uses Hyperplane ENIs that are provisioned lazily and shared across invocations, so the multi-second VPC cold start of 2018 is gone. Current VPC cold start overhead is tens of milliseconds. However, a VPC-attached AWS Lambda in a private subnet has no outbound internet unless you add a NAT Gateway; for AWS-service targets, prefer VPC Endpoints (S3, DynamoDB Gateway Endpoints; Secrets Manager, KMS, STS, and others via Interface Endpoints) to avoid NAT data charges.

Q8. What does AWS Lambda SnapStart do and when should I use it?

AWS Lambda SnapStart snapshots the post-INIT execution environment (heap, classes, connections) and resumes from the snapshot on cold invocations. For Java 11+, SnapStart cuts cold starts from seconds to hundreds of milliseconds at no additional cost at launch. Use SnapStart for latency-sensitive Java functions behind API Gateway or ALB. Remember to re-seed uniqueness (RNG, DB connection IDs) in afterRestore hooks. Python and .NET SnapStart support has expanded in newer runtime releases.

Q9. How do AWS Lambda versions and aliases enable safe deployments?

Publishing an AWS Lambda version creates an immutable snapshot (code + configuration) with a numeric ID. Aliases are named pointers (prod, staging) that target a version — or split weighted traffic between two versions. CodeDeploy uses weighted aliases to implement canary (10% shift, bake, then 100%) and linear (10% every minute) traffic shifting for AWS Lambda. Point callers (API Gateway, EventBridge) at the alias ARN so you can deploy without changing the caller.
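Weighted alias routing can be modeled as a weighted random pick per invocation. A toy simulation — not the Lambda service's routing code — of a 10% canary where the alias sends roughly one in ten invocations to the new version:

```python
import random

def pick_version(weights, rng):
    """Toy model of a weighted alias: weights maps version -> traffic
    share, e.g. {"1": 0.9, "2": 0.1} during a 10% canary shift."""
    r = rng.random()
    cumulative = 0.0
    for version, share in weights.items():
        cumulative += share
        if r < cumulative:
            return version
    return version  # guard against floating-point rounding

rng = random.Random(42)  # seeded for reproducibility
counts = {"1": 0, "2": 0}
for _ in range(10_000):
    counts[pick_version({"1": 0.9, "2": 0.1}, rng)] += 1
# counts["2"] lands near 1000 of 10000 -> roughly the 10% canary share
```

CodeDeploy automates exactly this: shift the weight, bake while watching alarms, then move to 100% or roll back by resetting the weights.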

Q10. What AWS Lambda limits must I memorize for DVA-C02?

Memorize: 15 min max timeout, 10 GB max memory, 10 GB max /tmp, 50 MB / 250 MB / 10 GB deployment package limits (zip direct / unzipped S3 / container), 6 MB sync / 256 KB async payload, 5 Layers per function (250 MB combined unzipped), 4 KB environment variables total, 1000 default concurrent executions, 2 async retries (3 total attempts), and 500–3000 initial burst concurrency plus +1000 per minute afterwards. Those numbers answer more than half of raw recall questions on AWS Lambda.

Summary — AWS Lambda Fundamentals at a Glance

  • AWS Lambda is the serverless compute backbone of DVA-C02; Task 1.2 plus overlap with security, deployment, and troubleshooting domains.
  • Handler signatures are runtime-shaped but always (event, context); Node.js, Python, Java, .NET, Go (via provided.al2023), Ruby, and custom runtimes are all supported, plus container images up to 10 GB.
  • Event sources split into synchronous (6 MB, caller retries), asynchronous (256 KB, 2 Lambda retries, DLQ/Destinations), and poll-based (batched, checkpointed).
  • Memory 128 MB – 10,240 MB drives both CPU and cost; timeout caps at 900 seconds; /tmp scales to 10 GB.
  • Environment variables max 4 KB total and are encrypted at rest with KMS; real secrets belong in Secrets Manager or Parameter Store.
  • Lambda Layers share code across functions — 5 layers per function, 250 MB combined unzipped.
  • Lambda Extensions run alongside handlers for observability and secrets caching.
  • VPC attachment uses Hyperplane ENIs; no public internet without NAT or VPC Endpoints.
  • Execution role = what the function does; resource-based policy = who can invoke the function.
  • Destinations (on-success/on-failure) are the modern evolution of DLQs for async.
  • Versions are immutable snapshots; aliases are weighted pointers — the foundation of canary/linear traffic shifting.
  • SnapStart crushes Java cold starts; Provisioned Concurrency pre-warms sandboxes; Reserved Concurrency guarantees capacity.
  • X-Ray active tracing + Lambda Power Tuning are the DVA-C02-blessed observability and performance tuning combo.

Master these AWS Lambda fundamentals and Task 1.2 becomes your highest-accuracy section on the DVA-C02 exam — and the mental model transfers directly to the SAA-C03 and SOA-C02 serverless questions if you continue along the AWS certification path.

Official sources