
Development Testing and Amazon Q Developer


Development testing and Amazon Q Developer together form Task Statement 3.2 — "Test applications in development environments" — of the AWS Certified Developer Associate (DVA-C02) exam. The December 2024 Exam Guide Version 2.1 shrank the CI/CD surface area but added Amazon Q Developer as an in-scope generative AI pair-programmer. That single change rewires how the exam frames development testing: you are now expected to know how to run Lambda locally with AWS SAM local, how to mock AWS SDK calls, how to integration-test against a sandboxed AWS account, how to tail CloudWatch Logs from the terminal, and how Amazon Q Developer accelerates every one of those loops with /dev, /review, and /test. This chapter walks through every development testing concept DVA-C02 can test, from the humble sam local invoke to Amazon Q Developer customizations and CodeGuru Profiler flame graphs. If you have already studied the IaC chapter on AWS SAM and CloudFormation, this topic is its natural companion — SAM builds the template, and development testing proves the template works before sam deploy ever runs.

What Is Development Testing on AWS?

Development testing on AWS is the practice of validating serverless and cloud-native application code before it reaches a shared environment. The DVA-C02 exam frames it around three overlapping loops. The inner loop is sub-second: you save a file, run a unit test, get a red/green signal, and iterate. The middle loop is seconds-to-minutes: you spin up sam local start-api or sam local invoke, hit it with curl or a Lambda test event, and verify the handler against a local Docker-based Lambda emulation. The outer loop is minutes-to-tens-of-minutes: you deploy to a personal sandbox AWS account with sam sync or sam deploy, run integration tests against real AWS services, tail CloudWatch Logs with sam logs or aws logs tail, and iterate. Development testing is the art of staying in the fastest loop that still gives you a trustworthy signal — and Amazon Q Developer is the V2.1 acceleration layer across all three loops.

How Development Testing Fits the DVA-C02 Exam Map

Development testing appears across every DVA-C02 domain:

  • Domain 1 (Development, 32%): invoking Lambda handlers locally, supplying Lambda test events, mocking AWS SDK calls, and wiring API Gateway against sam local start-api.
  • Domain 2 (Security, 26%): Amazon Q Developer security scans, CodeGuru Reviewer pull-request security findings, and the anti-pattern of hardcoded credentials caught before commit.
  • Domain 3 (Deployment, 24%): SAM Accelerate (sam sync) for rapid iteration, test hooks in buildspec.yml, and sandbox-account deployment as a deployment-testing gate. This is the primary home of Task 3.2.
  • Domain 4 (Troubleshooting, 18%): CloudWatch Logs tailing with sam logs and aws logs tail, CodeGuru Profiler for runtime performance hot spots, and reproducing production traces locally.

Even though the central task statement lives in Domain 3, development testing questions are scattered across the whole exam. Memorize the toolchain and you unlock a measurable boost across roughly 10 percent of the question bank.

The Three Testing Loops at 30,000 Feet

Every development testing workflow on AWS cycles through the same pattern: (1) express the intended behavior as a test — unit, integration, or end-to-end; (2) execute against a test double (mock), a local emulator (SAM local, LocalStack), or a real sandbox account; (3) collect a signal — pass/fail, log output, trace, or profiler flame graph; (4) feed that signal back into the code. Amazon Q Developer bolts onto this cycle — it generates the test in step 1 (/test), reviews the code in step 1 and step 4 (/review), and explains failures in step 3 (workspace chat). Keep this cycle as your mental anchor through the chapter.

Development Testing on AWS, in Plain Language

Put plainly, development testing is "testing the brakes and the steering wheel in your own garage before taking the car onto the highway." The three analogies below untangle the roles of development testing, SAM Local, Amazon Q Developer, and CodeGuru in one pass.

Analogy 1 — The Restaurant Test Kitchen

Imagine the AWS production environment as the main kitchen of a Michelin-starred restaurant: every dish that leaves it affects revenue. Development testing is the test kitchen next door: the chef tries a new dish ten times there, adjusts the seasoning, tests the plating, and only brings it into the main kitchen once it is solid. sam local invoke is the small oven in your home kitchen: it bakes the same way, but at tiny scale and one dish at a time. sam local start-api moves the entire takeout window into the test kitchen: a customer rings the service bell (an HTTP request) and you can plate the dish and test the full flow immediately. LocalStack is like moving the whole restaurant (S3, DynamoDB, SQS, Lambda, API Gateway) into a toy kitchen in the basement: the full menu is available, but the taste is not 100% identical to the real Michelin kitchen. Amazon Q Developer is a sous-chef at your elbow: say "brainstorm three sauces for me" and it writes them on the spot (/dev); say "check whether this dish is safe" and it sniffs it for you (/review plus security scanning); say "draft a tasting questionnaire" and it generates unit tests (/test). CodeGuru Reviewer is the restaurant's health inspector, sweeping in the moment a PR is opened; CodeGuru Profiler is the thermal camera mounted in the kitchen, showing you which burner keeps running empty and wasting gas (a CPU hot method).

Development testing = the test kitchen, Amazon Q Developer = the AI sous-chef, CodeGuru = the inspector plus the thermal camera.

Analogy 2 — The Flight Simulator

Now picture a Lambda function as an airplane, with production as the open sky. Taking a plane straight into the sky on its first flight is extremely dangerous, which is why airlines use flight simulators. sam local invoke is the most basic desktop simulator: it can only rehearse a short stretch of takeoff. sam local start-api is the full-cockpit simulator: HTTP requests come in, responses go out, and the experience is close to real API Gateway + Lambda, but it lacks real IAM, VPC, and cross-service latency. LocalStack is an entire simulated airport (S3, DynamoDB, SQS, and other AWS services all emulated) where you can practice cross-crew procedures, but the airport is a plastic model and will not reflect real weather 100%. A sandboxed dev account is a small runway built out in the desert (your personal development AWS account): real plane, real runway, but you are the only passenger, and a crash does not affect the main airport. Amazon Q Developer is the instructor AI in the co-pilot seat, suggesting flight paths (/dev), catching missed checklist items (/review), and writing practice exams for you (/test). CodeGuru Profiler is the flight data recorder: after landing, it tells you which leg of the flight burned the most fuel (a CPU / wall-clock hot spot).

The spirit of development testing: crashing ten times in the simulator is far cheaper than crashing once in the real sky.

Analogy 3 — The Open-Book Exam Study Session

DVA-C02 is itself an exam, so the exam analogy fits best of all. When studying, you do not sit the official paper cold; you take mock exams first. A unit test is practicing a single multiple-choice question and checking the answer immediately; sam local invoke is working through a whole practice booklet closed-book; sam local start-api is a fully timed run of the entire API interaction; LocalStack is drilling a complete set of past papers; a sandbox account is having a friend stage a mock exam with the real exam-hall setup, just not the official sitting. Amazon Q Developer is the AI tutor beside you: it explains a confusing line of code in plain language (workspace chat), writes ten fresh practice questions for you (/test), helps you review your wrong answers (/review), and even tailors its answers to your company codebase's style (customizations). CodeGuru Reviewer is the teaching assistant who grades homework, circling mistakes in red the moment the PR is submitted; CodeGuru Profiler is the post-exam video replay showing which question you were slow to write on.

String the three analogies together and you will remember: development testing is "a multi-layer filter before going live," Amazon Q Developer is "your personal AI assistant," and CodeGuru is "peer review plus a performance microscope."

SAM Local is the AWS SAM CLI subcommand group (sam local invoke, sam local start-api, sam local start-lambda, sam local generate-event) that uses Docker to emulate the AWS Lambda execution environment on your workstation. SAM Local reads your template.yaml, downloads (or reuses) the matching Lambda runtime container image, mounts your code, and runs the handler under a realistic /var/task layout with the right environment variables and IAM-free local credentials. Reference: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-invoke.html

SAM Local: Running Lambda Functions on Your Laptop

SAM Local is the first thing DVA-C02 expects you to reach for when you want to test a Lambda handler without deploying. It ships as part of the AWS SAM CLI, depends on Docker being installed locally, and exposes three core subcommands that cover almost every inner-loop testing scenario.

sam local invoke — One-Shot Lambda Invocation

sam local invoke runs a single Lambda function once with a JSON event file, then exits. The command:

sam local invoke ProcessOrderFunction --event events/order-created.json

SAM CLI reads template.yaml, finds the ProcessOrderFunction resource, pulls the Lambda runtime Docker image matching its Runtime property (for example public.ecr.aws/lambda/python:3.12), mounts your packaged code, and passes the event JSON to the handler as its event argument. The handler runs, prints logs to your terminal, and exits. This is the fastest local feedback loop for an event-driven function triggered by S3, EventBridge, SQS, or DynamoDB Streams — you capture or hand-craft the event JSON once and re-run the handler as many times as you want.

You can generate representative event payloads instead of hand-rolling them:

sam local generate-event s3 put --bucket my-bucket --key file.txt > events/s3.json
sam local generate-event dynamodb update > events/ddb.json
sam local generate-event apigateway aws-proxy > events/apigw.json

sam local generate-event is the exam-quotable shortcut for "how do I get a realistic test event payload for my Lambda function?"
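For orientation, the generated S3 put payload looks roughly like the trimmed sketch below. This is a field subset only; the real generate-event output includes additional fields such as awsRegion, eventTime, and requestParameters.

```json
{
  "Records": [
    {
      "eventSource": "aws:s3",
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "my-bucket" },
        "object": { "key": "file.txt", "size": 1024 }
      }
    }
  ]
}
```

Your handler typically reads only Records[0].s3.bucket.name and Records[0].s3.object.key, which is why a trimmed fixture like this is usually enough for unit tests.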

sam local start-api — Local API Gateway + Lambda

sam local start-api emulates API Gateway on http://127.0.0.1:3000 and routes HTTP requests to the Lambda functions declared in your template.yaml with AWS::Serverless::Function Events of type Api or HttpApi. You then curl your local endpoint:

sam local start-api --port 3000
curl http://127.0.0.1:3000/orders -d '{"sku":"ABC"}' -H 'Content-Type: application/json'

Every request spins up a fresh Docker container per invocation (unless warm containers are configured), which closely models the real Lambda cold-start-per-request behavior. This is the tool to reach for when you want to exercise the same API contract your front end or your integration tests will hit.

sam local start-lambda — Local AWS Lambda Service

sam local start-lambda stands up a mock AWS Lambda service endpoint on http://127.0.0.1:3001. Point an AWS SDK client at this endpoint:

import boto3
client = boto3.client("lambda", endpoint_url="http://127.0.0.1:3001",
                     region_name="us-east-1", aws_access_key_id="x", aws_secret_access_key="x")
client.invoke(FunctionName="ProcessOrderFunction", Payload=b'{}')

This lets your integration test driver call the Lambda function through the SDK exactly as production code would — useful when you want to test client-side retry logic, SDK pagination, or a Step Functions workflow that calls Lambda via the Invoke API.

Debugging Locally with SAM

SAM Local supports attaching a debugger to the running Lambda container with --debug-port:

sam local invoke --debug-port 5858 ProcessOrderFunction

VSCode, PyCharm, and JetBrains IDEs can then attach and hit breakpoints inside your handler, even though the handler is executing inside a Docker container that mimics /var/task.

DVA-C02 loves the distractor "SAM Local runs Lambda in a built-in Node process." It does not. SAM Local requires Docker Desktop (or an equivalent) because it pulls the real Lambda base images and executes handlers inside containers that approximate /var/task, /opt, and the Runtime API. Without Docker, sam local invoke errors out. Memorize: SAM Local = Docker-based Lambda emulator. Reference: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-invoke.html

SAM Accelerate and sam sync — The Middle-Loop Accelerator

SAM Accelerate is the V2.1-era answer to "I changed one line of Lambda code — why did my deploy take 90 seconds?" sam sync is the headline command. Instead of re-packaging, re-uploading, and re-running CloudFormation on every save, sam sync detects which resources changed and uses direct service APIs to patch them in place. Lambda code updates use UpdateFunctionCode; Step Functions state machine updates use UpdateStateMachine; API Gateway updates use the REST API directly. A typical round-trip drops from 60–90 seconds to under 10 seconds.

sam sync --stack-name my-app-dev --watch

With --watch, SAM CLI stays running and redeploys on every file save — the cloud equivalent of live-reload. This is explicitly labeled as "for development only" and is not a substitute for sam deploy producing a reviewable CloudFormation change set in CI/CD.

When to Use sam sync vs sam deploy

  • sam sync --watch: inner-loop iteration in your personal sandbox account. Speed over safety.
  • sam deploy: promotion to shared environments, change-set previews, and CI/CD pipelines. Safety over speed.

DVA-C02 frames this as a straight Task 3.2 question: "Which SAM CLI command accelerates iteration during local development?" Answer: sam sync.

sam sync deliberately drifts the CloudFormation stack state from what the template describes, because it updates resources directly via service APIs. That is fine for your personal dev stack, but mixing sam sync with a CodePipeline stage that later runs sam deploy causes drift conflicts. Keep sam sync --watch in the inner loop; keep sam deploy --no-confirm-changeset in automated pipelines. Reference: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/accelerate.html

LocalStack: The Community AWS Emulator

LocalStack is a third-party (community-edition open source, pro-edition commercial) local AWS cloud emulator. It runs in Docker and exposes AWS-compatible endpoints for services including S3, DynamoDB, SQS, SNS, Lambda, API Gateway, Kinesis, Step Functions, Secrets Manager, Parameter Store, and many more. You point your AWS SDK at http://localhost:4566 and call the AWS APIs as usual.

import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:4566",
                  region_name="us-east-1", aws_access_key_id="test", aws_secret_access_key="test")
s3.create_bucket(Bucket="my-bucket")
s3.put_object(Bucket="my-bucket", Key="k", Body=b"hello")

LocalStack trades fidelity for speed: it is excellent for integration tests that exercise cross-service flows (S3 → Lambda → DynamoDB), but it is not a one-to-one reproduction of AWS. IAM enforcement, eventual consistency timing, and a handful of newer features are approximated, not simulated.

LocalStack vs SAM Local

  • SAM Local emulates only Lambda (and fronts it with a local API Gateway in start-api). It uses the real Lambda runtime base images under Docker, so your handler runs in an environment very close to production.
  • LocalStack emulates the broader AWS service surface. SQS queues, DynamoDB tables, S3 buckets, EventBridge rules, and more are all reachable at localhost:4566. Fidelity is lower service-by-service.
  • Use both: SAM Local for Lambda-centric handler tests, LocalStack when your test needs to round-trip through an S3 bucket or DynamoDB table without hitting a real AWS account.

DVA-C02 will not grill you on LocalStack internals, but the exam guide V2.1 language about "local emulation of AWS services" is compatible with LocalStack. Know that it exists, know it is community-maintained (not an AWS product), and know it is an alternative to a real sandbox account for integration tests.

The LocalStack vs AWS Sandbox Trade-Off

  • LocalStack: free of AWS bills, extremely fast, runs on a laptop offline, lower fidelity.
  • AWS sandbox account: full AWS fidelity, costs real money, requires internet, slower to provision.

Mature teams usually pick one or the other per test category: LocalStack for developer inner-loop integration tests, sandbox account for the CI stage that gates a PR merge.

A DVA-C02 distractor pattern: "Which AWS-owned service runs Lambda locally?" LocalStack is not AWS-owned — it is a third-party project. The AWS-owned local Lambda emulator is AWS SAM CLI's sam local invoke. If a scenario mentions "AWS-provided local testing," the answer is SAM Local, not LocalStack. Reference: https://docs.localstack.cloud/overview/

SDK Mocking Patterns: Unit Tests Without Network Calls

Integration tests are valuable, but the fastest test loop is a pure unit test with no network traffic. AWS SDK mocking libraries let you swap out the real client with a stub that returns canned responses. DVA-C02 knows three canonical patterns: botocore.stub.Stubber for Python (boto3), aws-sdk-client-mock for Node.js v3 SDK, and cassette-based replay libraries like VCR.

boto3 Stubber (Python)

botocore.stub.Stubber is the built-in way to stub AWS SDK calls in Python tests. It lets you queue expected responses and errors against a client:

import boto3
from botocore.stub import Stubber

client = boto3.client("dynamodb", region_name="us-east-1")
stubber = Stubber(client)
stubber.add_response("get_item",
                     {"Item": {"pk": {"S": "user#1"}, "name": {"S": "Alice"}}},
                     expected_params={"TableName": "Users", "Key": {"pk": {"S": "user#1"}}})
stubber.activate()

# code under test calls client.get_item(...) → returns canned response

Stubber also supports add_client_error for injecting ThrottlingException, ConditionalCheckFailedException, or ProvisionedThroughputExceededException to test retry and backoff logic. It is included in boto3 — no extra dependency.
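Injecting a throttling error with add_client_error is exactly how you exercise retry logic. The sketch below shows the shape of such a test using a hand-rolled fake client so it stays dependency-free; FlakyDynamo and get_item_with_retry are illustrative names, not library APIs, and in a real suite you would point the same retry wrapper at a Stubber-wrapped boto3 client instead.

```python
import time

class ThrottlingError(Exception):
    """Stand-in for botocore's ClientError with code ThrottlingException."""

class FlakyDynamo:
    """Fake client: raises ThrottlingError twice, then succeeds."""
    def __init__(self):
        self.calls = 0

    def get_item(self, **kwargs):
        self.calls += 1
        if self.calls <= 2:
            raise ThrottlingError("throttled")
        return {"Item": {"pk": {"S": "user#1"}}}

def get_item_with_retry(client, retries=3, base_delay=0.0, **kwargs):
    """Exponential backoff around get_item; base_delay=0 keeps tests fast."""
    for attempt in range(retries):
        try:
            return client.get_item(**kwargs)
        except ThrottlingError:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

client = FlakyDynamo()
resp = get_item_with_retry(client, TableName="Users", Key={"pk": {"S": "user#1"}})
assert client.calls == 3          # two throttles, then success
assert resp["Item"]["pk"]["S"] == "user#1"
```

The test asserts both the final response and the call count, which is precisely the signal you need to verify that backoff actually retried.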

aws-sdk-client-mock (Node.js v3 SDK)

For the modular AWS SDK for JavaScript v3, aws-sdk-client-mock is the community-standard mocking library. Every service client (DynamoDBClient, S3Client, SQSClient) can be mocked with a fluent API:

import { mockClient } from "aws-sdk-client-mock";
import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";

const ddbMock = mockClient(DynamoDBClient);
ddbMock.on(GetItemCommand).resolves({ Item: { pk: { S: "user#1" } } });

// code under test calls new DynamoDBClient().send(new GetItemCommand(...)) → mocked

You can match on input parameters, reject with specific error names, and reset between tests. For Node.js handlers, aws-sdk-client-mock is the exam-relevant answer to "how do I unit-test a handler that calls DynamoDB?"

VCR / Cassette Patterns

VCR-style libraries (vcrpy for Python, node-vcr / nock for Node.js) record real AWS HTTPS responses on first run and replay them on subsequent runs. The recorded traffic lives in a YAML/JSON "cassette" file checked into the repository. Pros: extremely high fidelity since the recorded response is the real AWS response. Cons: cassettes must be refreshed when the AWS API changes, and secrets must be scrubbed before commit. Use cassettes sparingly — usually for contract tests that assert your code correctly parses a specific AWS response shape.

Choosing a Mocking Strategy

  • Unit test of business logic around an AWS call: Stubber / aws-sdk-client-mock.
  • Contract test against a real AWS response shape: cassette/VCR.
  • Full cross-service flow: LocalStack or sandbox account integration test.
  • Lambda handler end-to-end: sam local invoke with a generated event.

A clean unit test mocks the AWS SDK client, not your own wrapper functions around it. If you wrap DynamoDB in a UserRepository class and then mock the repository, you are not actually testing the AWS integration — you are testing your mock. Prefer stubbing DynamoDBClient.send or boto3.client("dynamodb").get_item so the test covers your query construction and response parsing. Reference: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/stubber.html

Integration Testing Against a Dev Account

At some point local mocks lose fidelity and you need real AWS. The DVA-C02-blessed pattern is a sandboxed dev account — a separate AWS account (ideally provisioned through AWS Organizations or Control Tower) that every developer gets for personal experimentation. Integration tests deploy into this account, exercise real AWS services, assert outcomes, and tear down. Key practices:

  • Isolate by account, not by region. Region isolation does not protect against IAM or quota accidents.
  • Stack names include a developer/branch suffix so multiple developers can coexist: my-app-jaric-feature-x.
  • sam deploy --stack-name ... --capabilities CAPABILITY_IAM in the test script keeps the stack reproducible.
  • Always tear down with sam delete after the test (or on a scheduled cleanup) to avoid lingering resource costs.
  • Use AWS Budgets alerts on the dev account so a runaway test does not cost $1,000 overnight.

Integration tests in this pattern look like ordinary test cases that call real AWS APIs — boto3.client("dynamodb").put_item(...), wait, read back, assert. No mocks. This catches IAM permission drift, real eventual-consistency windows, and service-limit interactions that mocks never expose.

Hooking Tests Into CI/CD

Inside buildspec.yml for AWS CodeBuild:

phases:
  install:
    runtime-versions: { python: 3.12 }
    commands:
      - pip install -r requirements-test.txt
  pre_build:
    commands:
      - pytest tests/unit
  build:
    commands:
      - sam build && sam deploy --stack-name test-$CODEBUILD_BUILD_NUMBER --no-confirm-changeset
      - pytest tests/integration
  post_build:
    commands:
      - sam delete --stack-name test-$CODEBUILD_BUILD_NUMBER --no-prompts

This is the classic three-stage shape DVA-C02 expects: unit tests gate the build, deploy to an ephemeral stack, integration tests gate the promote, and teardown runs regardless. See the CI/CD pipeline tools topic for deeper buildspec coverage.

CloudWatch Logs During Development — sam logs and aws logs tail

When a Lambda function fails in development, your first instinct should be to tail its CloudWatch Logs. Two commands matter for DVA-C02.

sam logs

sam logs filters and tails logs for functions in your SAM stack by the logical ID in template.yaml:

sam logs -n ProcessOrderFunction --stack-name my-app-dev --tail
sam logs -n ProcessOrderFunction --stack-name my-app-dev --start-time '10min ago' --filter ERROR

The --tail flag keeps the stream open and prints new log events as they arrive — indispensable while running sam sync --watch in another terminal. The --filter flag accepts CloudWatch Logs filter patterns.

aws logs tail

aws logs tail is the native AWS CLI command (not SAM-specific) for any CloudWatch Logs group:

aws logs tail /aws/lambda/ProcessOrderFunction --follow --since 10m
aws logs tail /aws/lambda/ProcessOrderFunction --filter-pattern '{ $.level = "ERROR" }'

--follow is the CLI equivalent of tail -f. JSON filter patterns like { $.level = "ERROR" } work when your Lambda logs are emitted in JSON (which the structured-logging topic strongly recommends). These are the two exam-relevant commands for live log tailing during development.
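Those JSON filter patterns only match if the function actually emits one JSON object per log line. A minimal stdlib-only sketch of that structured-logging shape (the log helper is illustrative, not a library API):

```python
import json
import sys

def log(level, message, **fields):
    """Emit one JSON object per line so { $.level = "ERROR" } can match.
    Returns the serialized line to make testing easy."""
    line = json.dumps({"level": level, "message": message, **fields})
    sys.stdout.write(line + "\n")
    return line

log("INFO", "order received", order_id="o-123")
err = log("ERROR", "DynamoDB put failed", order_id="o-123",
          error="ThrottlingException")
assert json.loads(err)["level"] == "ERROR"
```

With logs in this shape, aws logs tail --filter-pattern '{ $.level = "ERROR" }' surfaces only the failing invocations.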

Lambda Test Events in the Console

The Lambda console's Test tab lets you save named event templates — s3-put, sqs-batch, api-gateway-proxy — and fire them against the currently deployed version of the function. It is essentially the web-UI cousin of sam local generate-event. DVA-C02 references "Lambda test events" and "the API Gateway test console" as first-class development-testing primitives even when there is no local setup.

API Gateway Test Console

For REST APIs, each method in the API Gateway console has a Test button that synthesizes a request (query string, path parameter, headers, body), invokes the backing integration, and shows the mapped response plus execution logs. This is the fastest way to validate a mapping template or a Lambda authorizer during development without wiring up curl.

Three commands every DVA-C02 candidate must be able to recognize by one-line description:

  • sam local invoke FUNC --event e.json — run a Lambda locally in Docker, once.
  • sam logs -n FUNC --stack-name S --tail — tail CloudWatch Logs of a SAM-managed Lambda.
  • aws logs tail /aws/lambda/FUNC --follow — tail any CloudWatch Log group by name.

Memorize those three signatures and you answer most Task 3.2 command-matching questions. Reference: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-logs.html

Amazon Q Developer: The V2.1 AI Pair-Programmer

Amazon Q Developer is the generative-AI pair-programmer that the DVA-C02 Exam Guide V2.1 explicitly added to scope. It ships as an IDE plugin for Visual Studio Code and JetBrains IDEs (IntelliJ IDEA, PyCharm, WebStorm, Rider, PhpStorm, GoLand, RubyMine), plus a CLI. There is also a separate Amazon Q Developer experience in the AWS Management Console. For the exam, focus on the IDE capabilities because Task 3.2 is about development testing.

Inline Code Suggestions

As you type, Amazon Q Developer streams ghost-text completions — from single-line hints to whole function bodies. Tab accepts. The suggestions are trained on public code plus internal AWS best practices, so snippets tend to use modern AWS SDK idioms (SDK for JavaScript v3, boto3 with paginators, etc.) out of the box.

/dev — Feature Development Agent

/dev turns Amazon Q Developer into a multi-step coding agent. You describe the feature in natural language in the chat panel, and /dev plans, edits, and creates files across your workspace to implement it. Typical prompt:

/dev Add a new API route GET /orders/{id} backed by a Lambda function that reads from the Orders DynamoDB table. Update template.yaml, add the handler in src/orders/get.py, and add a unit test.

/dev produces a diff you can review, accept, or reject file by file. For DVA-C02, /dev represents the V2.1 "AI generates code" capability.

/review — Code Review Agent

/review analyses your current workspace or uncommitted changes for quality, correctness, security, and code-style issues. It surfaces findings with severity and explanation, similar to a senior engineer reviewing a PR. /review is complementary to CodeGuru Reviewer — /review runs on demand at the developer's IDE; CodeGuru Reviewer runs on the pull request in the repository.

/test — Unit Test Generation

/test generates unit tests for a selected function or file. You highlight a handler, type /test in the chat, and Amazon Q Developer drafts pytest or Jest test cases covering happy paths and error conditions. This is the single most exam-relevant Amazon Q Developer capability — Task 3.2 is about testing, and /test is the V2.1 headline for "AI writes your tests for you."

# Selected: src/orders/get.py:get_handler
# Generated by /test into tests/test_get_handler.py
from src.orders.get import get_handler

def test_get_handler_returns_404_when_item_missing(mocker):
    ddb = mocker.patch("src.orders.get.ddb_client")
    ddb.get_item.return_value = {}
    resp = get_handler({"pathParameters": {"id": "missing"}}, None)
    assert resp["statusCode"] == 404

Security Scanning

Amazon Q Developer performs automated security scans on your workspace, flagging issues like hardcoded credentials, insecure cryptographic APIs, injection risks, and AWS-specific anti-patterns (e.g., overly broad IAM policies in a SAM template). Security findings appear inline in the IDE with a one-click fix when possible. For DVA-C02, this satisfies the "sensitive data in code" (Task 2.3) intersection — Amazon Q Developer catches hardcoded secrets before commit.

Documentation Generation

Amazon Q Developer can generate README files, JSDoc/docstring comments, and architectural summaries from existing code. /doc is a newer command in the Amazon Q Developer family that produces project-level documentation aligned to your code's actual behavior.

Workspace Chat

The chat panel is grounded in your open workspace — Amazon Q Developer indexes open files and can answer questions like "Where is the DynamoDB table defined?" or "Explain what this regex does." This makes onboarding a new codebase much faster and is the V2.1 "explain existing code" capability.

Customizations — Aligning Q Developer with Your Codebase

Amazon Q Developer customizations let a team point Amazon Q Developer at their own private codebase (through AWS CodeStar Connections) so suggestions reflect the team's internal APIs, naming conventions, and utility libraries. Customizations are a paid feature and are administered through IAM Identity Center. For DVA-C02, you only need to know that customizations exist and that they make Amazon Q Developer suggestions closer to your own team's code style.

Amazon Q Developer vs GitHub Copilot — Exam Framing

DVA-C02 will not ask you to compare AI pair-programmers commercially, but it will test that Amazon Q Developer is the AWS-integrated option and that its exam-relevant capabilities are: inline suggestions, /dev, /review, /test, security scan, doc generation, workspace chat, and customizations. Memorize those eight.

Amazon Q Developer can do many things, but Task 3.2 — "Test applications in development environments" — makes /test the single most exam-weighted capability. If a DVA-C02 scenario asks "which AWS service automatically generates unit tests for your Lambda handler from the IDE?" the answer is Amazon Q Developer. Runner-up: /review for code quality and security feedback. Amazon Q Developer is not a deployment tool — do not pick it for scenarios about CI/CD pipelines or blue/green deployments. Reference: https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/test-generation.html

Amazon CodeGuru Reviewer and Profiler

Amazon CodeGuru has two sub-services relevant to development testing: CodeGuru Reviewer for static analysis and PR review, and CodeGuru Profiler for runtime profiling. Both remain in scope for DVA-C02 V2.1 even though some sources claim CodeGuru was removed — the exam guide language retains "code quality" and "performance optimization" responsibilities that map directly to CodeGuru.

CodeGuru Reviewer — Automated Code Review on Pull Requests

CodeGuru Reviewer connects to your source repository (CodeCommit, GitHub, Bitbucket, GitHub Enterprise) and automatically analyses pull requests. It surfaces findings for:

  • AWS best-practice violations (wrong SDK usage, missing retry logic, poor Lambda patterns).
  • Security vulnerabilities (hardcoded credentials, overly permissive IAM, unsafe deserialization).
  • Code quality issues (resource leaks, thread safety, concurrency bugs).

Findings appear as inline PR comments with explanations and code examples. This complements Amazon Q Developer /review — CodeGuru Reviewer runs at the repository layer on every PR, while /review runs at the IDE layer on demand.

CodeGuru Profiler — Runtime Performance Analysis

CodeGuru Profiler runs as an agent inside your production or pre-production application (Java, Python, JVM-based workloads) and samples CPU and wall-clock time. It emits flame graphs and recommendations identifying:

  • CPU-hot methods consuming more cycles than expected.
  • Inefficient hot spots (for example, inefficient string concatenation in a tight loop).
  • Specific AWS SDK anti-patterns (creating a new client per request instead of reusing).

For Lambda, CodeGuru Profiler attaches as a layer or via the AWS Distro for OpenTelemetry. Profiler recommendations feed directly back into the development loop — a profiler finding about an inefficient DynamoDB query becomes a new unit test case ("this query should use a GSI"), which becomes a fix, which becomes a re-profile.

CodeGuru vs Amazon Q Developer Security Scan

  • CodeGuru Reviewer: automated at the PR level, language-limited (primarily Java and Python), tuned for repository-wide review.
  • Amazon Q Developer security scan: IDE-level, pre-commit, broader language support, faster turnaround.
  • Use both: Q Developer scan pre-commit, CodeGuru Reviewer at PR time, CodeGuru Profiler post-deployment.

The mature DVA-C02-blessed development workflow chains all three tools. Amazon Q Developer /review runs in your IDE as you type. CodeGuru Reviewer auto-comments on your pull request before a human reviewer arrives. CodeGuru Profiler runs in the deployed environment and feeds performance findings back to new tickets. Each tool catches a different class of problem at a different loop speed. Reference: https://docs.aws.amazon.com/codeguru/latest/profiler-ug/what-is-codeguru-profiler.html

Unit Test Patterns for Lambda Handlers

Unit-testing a Lambda handler is just unit-testing an ordinary function — but the handler signature (event, context) and the heavy AWS SDK usage create recurring patterns worth memorizing.

Pattern 1 — Factor Business Logic Out of the Handler

A testable Lambda splits the handler into a thin adapter and a pure business function:

# src/orders/get.py
import json
import boto3

def _fetch_order(ddb, order_id: str) -> dict | None:
    r = ddb.get_item(TableName="Orders", Key={"id": {"S": order_id}})
    return r.get("Item")

def handler(event, context):
    order_id = event["pathParameters"]["id"]
    item = _fetch_order(boto3.client("dynamodb"), order_id)
    return {"statusCode": 404 if not item else 200, "body": json.dumps(item or {})}

Tests cover _fetch_order with a Stubber and a tiny handler-level test with a mocked client — that keeps 90% of the logic in pure-function tests.

Pattern 2 — Use a Stubber, Not a Module-Level Patch

botocore.stub.Stubber is cleaner than mocker.patch("boto3.client") because it validates call parameters and response shape.

Pattern 3 — Inject the Client, Do Not Import Singletons

Create clients outside the handler (for execution-context reuse) but pass them into business functions so tests can override them. This is the "dependency injection light" pattern that plays well with Lambda's cold-start optimization guidance.

Pattern 4 — Fixture-Driven Event Payloads

Store realistic event JSON under tests/fixtures/ (generated by sam local generate-event) and load them in tests. This gives unit tests the same event payloads that sam local invoke would use, bridging unit and integration tests.
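A sketch of the fixture loader; the temporary directory and payload below stand in for a tests/fixtures/ file recorded once with sam local generate-event (file name and event keys are illustrative):

```python
import json
import tempfile
from pathlib import Path

def load_event(fixtures_dir: Path, name: str) -> dict:
    # Fixtures are recorded once, e.g.:
    #   sam local generate-event apigateway aws-proxy > tests/fixtures/<name>.json
    return json.loads((fixtures_dir / f"{name}.json").read_text())

# Demo with a temporary directory standing in for tests/fixtures/:
with tempfile.TemporaryDirectory() as d:
    fixtures = Path(d)
    (fixtures / "apigw_get_order.json").write_text(
        json.dumps({"httpMethod": "GET", "pathParameters": {"id": "o-1"}})
    )
    event = load_event(fixtures, "apigw_get_order")
    assert event["pathParameters"]["id"] == "o-1"
```

Tests then mutate only the fields they care about (a path parameter, a body) while the rest of the payload stays faithful to what API Gateway actually sends.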

Pattern 5 — Freeze Time for Idempotency Keys

Lambda handlers that use uuid.uuid4() or datetime.now() for idempotency keys must be tested with a mocked clock (freezegun in Python, jest.useFakeTimers() in Jest) so tests are deterministic.

VCR / Cassette Patterns for AWS APIs

VCR-style cassette libraries are the "record once, replay forever" testing tool. vcrpy (Python) and recorder-style libraries such as nock (Node.js) intercept HTTPS calls, save the real request/response pair to a cassette file (YAML for vcrpy), and replay from the cassette on subsequent runs.

import boto3
import vcr

@vcr.use_cassette("cassettes/list_buckets.yaml",
                  filter_headers=["authorization", "x-amz-security-token"])
def test_list_buckets():
    s3 = boto3.client("s3", region_name="us-east-1")
    response = s3.list_buckets()
    assert "Buckets" in response

Use cassettes when:

  • You need a real AWS response shape to test your parser.
  • You want one-time recording and then fast deterministic replays.
  • You can accept refreshing the cassette when AWS API versions change.

Avoid cassettes when:

  • The request body changes between runs (timestamps, UUIDs).
  • The test needs to inject failure modes — Stubber is better for that.
  • You cannot reliably scrub secrets before committing the cassette.

vcrpy records exactly what was sent over the wire, including Authorization headers with SigV4-signed credentials. Always configure filter_headers and filter_post_data_parameters so access keys, session tokens, and bearer tokens are replaced with placeholders before the cassette lands in git. Leaked credentials in cassette files are a real incident class, not a theoretical risk. Reference: https://vcrpy.readthedocs.io/en/latest/

Environment-Specific Configuration for Testing

Every development testing strategy eventually needs environment-specific configuration — different DynamoDB table names, feature flags, endpoint URLs for LocalStack vs real AWS. DVA-C02 expects you to know three canonical approaches:

  • SAM template Parameters + !Ref: pass Environment: dev|test|prod and branch resource names off it.
  • AWS Systems Manager Parameter Store: read /myapp/${env}/dbTable at runtime inside the handler.
  • AWS AppConfig: for runtime feature flags with instant rollback.

In tests, pin the environment name to test, write a setup fixture that populates test values into Parameter Store (via LocalStack or a real sandbox account), and run the handler exactly as it would run in production.

Common Exam Traps — Development Testing and Amazon Q Developer

DVA-C02 rotates the same handful of development testing traps across its question bank. Learn them cold.

Trap 1 — SAM Local Does Not Emulate IAM

sam local invoke runs your handler with your developer AWS credentials (or --profile), not with the Lambda function's declared execution role. That means a function that will fail in production due to a missing IAM permission might pass locally. Always run integration tests in a real AWS account to catch IAM drift.

Trap 2 — Amazon Q Developer Is Not a Deployment Tool

Amazon Q Developer generates code, tests, reviews, and docs. It does not deploy — sam deploy, CodePipeline, and CodeDeploy do that. If a scenario frames Amazon Q Developer as "handles blue/green deployment," that is a distractor.

Trap 3 — LocalStack Is Not an AWS Product

"Which AWS-owned tool emulates AWS services locally?" — the answer is AWS SAM CLI (sam local), not LocalStack. LocalStack is third-party.

Trap 4 — sam sync Drifts CloudFormation State

Using sam sync then sam deploy on the same stack produces drift errors because sam sync bypasses CloudFormation. Keep sam sync in dev-only stacks.

Trap 5 — /test Generates, Does Not Execute

Amazon Q Developer /test writes test files. You still run them with pytest, jest, or your test harness of choice. The generation is AI; the execution is your CI/CD or local runner.

Trap 6 — CodeGuru Reviewer Is Not Real-Time

CodeGuru Reviewer runs on pull requests, not on every save. Real-time IDE feedback is Amazon Q Developer territory.

Trap 7 — Stubber Does Not Make Network Calls

botocore.stub.Stubber intercepts calls before they leave the client — your test does not need network access, AWS credentials, or environment variables. If the test seems to hang, check for a code path that bypasses the stubbed client.

Trap 8 — Lambda Console Test Events Hit Real AWS

The Lambda console's Test button invokes the deployed function in AWS, not a local copy. If your handler writes to S3, the test will create a real S3 object. For local isolation, use sam local invoke instead.

sam local invoke runs Lambda code in a Docker container on your laptop, but when that code calls boto3.client("dynamodb").put_item(...), the call goes to real AWS using whichever credentials SAM CLI found in your environment or ~/.aws/credentials. It is easy to forget this and accidentally write to a production table during local tests. Always scope local dev credentials to a sandbox account or use LocalStack endpoints during local testing. Reference: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-cli-command-reference-sam-local-invoke.html

FAQ — Development Testing and Amazon Q Developer

Q1. What is SAM Local in one sentence for DVA-C02?

SAM Local is the AWS SAM CLI subcommand group (sam local invoke, sam local start-api, sam local start-lambda, sam local generate-event) that uses Docker to emulate the AWS Lambda execution environment on your laptop so you can test handlers, exercise API Gateway locally, and generate realistic event payloads without deploying. It is the AWS-owned answer to "how do I run Lambda locally?" and depends on Docker being installed.

Q2. When should I use sam sync versus sam deploy?

Use sam sync --watch for inner-loop iteration in your personal sandbox account — it bypasses CloudFormation change sets and directly patches Lambda code, Step Functions, and API Gateway via service APIs, cutting round-trips from 60+ seconds to under 10 seconds. Use sam deploy for promotion to shared environments, change-set previews, and any CI/CD pipeline. Never mix the two on the same stack or you will get CloudFormation drift errors.

Q3. How does Amazon Q Developer accelerate development testing?

Amazon Q Developer adds four testing-relevant capabilities to your IDE: /test generates unit tests for a selected function, /review performs code-quality and security review on demand, /dev implements a feature as a multi-file diff with tests included, and its security scanner flags hardcoded credentials and unsafe patterns before commit. On DVA-C02, /test is the headline because Task 3.2 is explicitly about testing.

Q4. What is the difference between Amazon Q Developer /review and Amazon CodeGuru Reviewer?

/review runs in the IDE on demand against your current workspace or uncommitted changes — instant, developer-initiated feedback. CodeGuru Reviewer runs at the pull-request layer in your source repository (CodeCommit, GitHub, Bitbucket) — automated, team-visible feedback on every PR. They are complementary: use /review pre-commit and CodeGuru Reviewer post-commit.

Q5. How do I unit-test a Lambda handler that calls DynamoDB without hitting AWS?

For Python, use botocore.stub.Stubber to queue canned responses against a real boto3 client — no network calls, no credentials needed. For Node.js v3 SDK, use aws-sdk-client-mock with mockClient(DynamoDBClient).on(GetItemCommand).resolves(...). Both libraries let you inject happy-path responses, error responses (throttling, conditional-check failures), and assert on input parameters. Factor business logic out of the handler so most tests can target pure functions rather than the adapter.

Q6. What is the difference between SAM Local and LocalStack?

SAM Local is AWS-owned and emulates only Lambda (with a local API Gateway in start-api). It pulls real Lambda base images, so handler fidelity is high. LocalStack is a third-party community emulator covering the broader AWS service surface (S3, DynamoDB, SQS, SNS, Step Functions, Secrets Manager, and more) behind a single endpoint http://localhost:4566. Fidelity is lower per service. Use SAM Local for Lambda-centric tests and LocalStack for cross-service integration flows that you do not want to run in a real AWS account.

Q7. How do I tail Lambda logs during development?

Two commands. sam logs -n FUNCTION --stack-name STACK --tail tails CloudWatch Logs for a SAM-managed Lambda by logical ID in your template.yaml. aws logs tail /aws/lambda/FUNCTION --follow is the generic AWS CLI command that tails any CloudWatch Log group by name, SAM-managed or otherwise. Both accept filter patterns — use JSON filter patterns like { $.level = "ERROR" } when your Lambda logs are emitted in JSON.

Q8. What does Amazon CodeGuru Profiler do and when should I run it?

CodeGuru Profiler is a runtime profiling agent that samples CPU and wall-clock time of a deployed application and surfaces flame graphs plus remediation recommendations for hot methods, resource leaks, and AWS SDK anti-patterns like "new client per request." Run it in staging or production (not local dev) because it needs representative load. Profiler findings feed back into development as new unit tests and fix tickets, closing the performance loop that local testing cannot.

Q9. Can I use Amazon Q Developer customizations with my own private codebase?

Yes. Amazon Q Developer customizations let an admin connect a private repository through AWS CodeStar Connections and train a customization that makes Q Developer suggestions use your team's internal APIs, naming conventions, and utility libraries. Customizations are an administrator-managed feature delivered through IAM Identity Center, and they are billable. For DVA-C02, know the capability exists — you will not be asked about billing or setup steps.

Q10. What DVA-C02 development-testing commands and tools must I memorize?

Memorize: sam local invoke, sam local start-api, sam local start-lambda, sam local generate-event, sam sync --watch, sam logs --tail, aws logs tail --follow; botocore.stub.Stubber (Python), aws-sdk-client-mock (Node.js v3); Amazon Q Developer /dev, /review, /test, security scan, workspace chat, customizations; Amazon CodeGuru Reviewer (PR-time static analysis) and CodeGuru Profiler (runtime flame graphs); the Lambda console Test tab and the API Gateway test console. Those names and signatures answer the majority of Task 3.2 questions.

Summary — Development Testing and Amazon Q Developer at a Glance

  • Development testing on AWS splits into three loops: inner (unit tests, seconds), middle (SAM local + Docker, seconds-to-minutes), outer (sandbox account, minutes).
  • sam local invoke, sam local start-api, and sam local start-lambda are the AWS-owned way to run Lambda locally under Docker.
  • sam sync --watch is the V2.1 middle-loop accelerator; it bypasses CloudFormation for dev-only speed. Never mix with sam deploy on the same stack.
  • LocalStack is a third-party emulator covering the broader AWS service surface; useful for cross-service integration without a real AWS account.
  • botocore.stub.Stubber (Python) and aws-sdk-client-mock (Node.js v3) are the exam-blessed SDK mocking libraries.
  • VCR/cassette libraries record and replay real AWS HTTPS traffic — good for contract tests, dangerous for secret leakage.
  • Integration tests against a sandboxed dev AWS account catch IAM, quota, and eventual-consistency issues that mocks cannot.
  • sam logs --tail and aws logs tail --follow are the two command-line log tailing primitives DVA-C02 recognizes.
  • Amazon Q Developer is the V2.1 AI pair-programmer: inline suggestions, /dev for feature development, /review for code review, /test for test generation, security scanner, workspace chat, and customizations for team codebase alignment.
  • /test is the single most exam-weighted Amazon Q Developer capability because Task 3.2 is about testing.
  • CodeGuru Reviewer adds PR-layer automated code review; CodeGuru Profiler adds runtime flame graphs and AWS SDK anti-pattern detection.
  • Common traps: SAM Local does not emulate IAM, Amazon Q Developer does not deploy, LocalStack is not AWS-owned, /test generates but does not execute, and sam sync drifts CloudFormation state.

Master these development testing concepts and Task 3.2 becomes one of your highest-accuracy sections on the DVA-C02 exam — and the mental model carries directly into SAM + CloudFormation deployments, CI/CD pipeline design, and post-deployment troubleshooting with CloudWatch and X-Ray.

Official Sources