AWS CI/CD pipeline tools are the backbone of DVA-C02 Task Statement 3.4 ("Deploy code by using AWS CI/CD services"). A developer-associate candidate must fluently describe how AWS CodePipeline orchestrates stages and actions, how AWS CodeBuild compiles artifacts from a buildspec.yml, how AWS CodeDeploy pushes those artifacts onto Amazon EC2 fleets, AWS Lambda functions, and Amazon ECS services using appspec.yml lifecycle hooks, how Amazon ECR stores container images, how Amazon CodeCatalyst wraps the whole toolchain into project-level workflows, and how AWS CodeStar Notifications wires the pipeline into Slack, Amazon SNS, and AWS Chatbot. This guide walks through every AWS CI/CD pipeline tool the DVA-C02 exam tests, with exam traps, callouts, and an FAQ at the end.
What Are AWS CI/CD Pipeline Tools?
AWS CI/CD pipeline tools are the managed services AWS provides to implement continuous integration and continuous delivery (CI/CD) for software running on AWS. Continuous integration means every source change is automatically built and tested. Continuous delivery means every passing build can be automatically deployed to a target environment. The DVA-C02 exam packages these capabilities into seven AWS CI/CD pipeline tools:
- AWS CodePipeline — the orchestrator. Defines stages, actions, transitions, and artifact flow.
- AWS CodeBuild — the builder. Executes buildspec.yml phases in an ephemeral container and emits artifacts.
- AWS CodeDeploy — the deployer. Rolls artifacts onto EC2, Lambda, or ECS using appspec.yml lifecycle hooks.
- AWS CodeCommit — managed Git source control with native triggers into Lambda and SNS.
- AWS CodeArtifact — private package registry for npm, Maven, PyPI, NuGet, and generic formats.
- Amazon ECR — private and public container image registry with lifecycle policies and vulnerability scanning.
- Amazon CodeCatalyst — unified DevOps platform wrapping source, workflows, environments, and issues.
Around these seven sits AWS CodeStar Notifications, the cross-service notification rule layer that pushes pipeline, build, and deploy events into Amazon SNS, AWS Chatbot, and Slack.
A valid DVA-C02 answer almost always names one of the seven AWS CI/CD pipeline tools above. When the exam shows a scenario such as "the team wants to build a Java project, publish the JAR to an internal registry, and roll it out to production with automatic rollback on CloudWatch alarm," you should already be sketching a CodePipeline diagram in your head: CodeCommit → CodeBuild → CodeArtifact push → CodeDeploy with alarm rollback.
AWS CI/CD pipeline tools are the family of managed AWS services that automate the end-to-end flow of source code from a developer's commit to a production deployment. AWS CodePipeline is the orchestration service; AWS CodeBuild executes compile and test phases; AWS CodeDeploy performs the actual deployment onto compute targets; AWS CodeCommit hosts the source; AWS CodeArtifact and Amazon ECR host dependencies and container images; Amazon CodeCatalyst offers a unified project-level wrapper. All AWS CI/CD pipeline tools integrate natively with IAM, CloudWatch Events / EventBridge, CloudTrail, and KMS. See: https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html
Plain-English Explanation of AWS CI/CD Pipeline Tools
To lock the AWS CI/CD pipeline tools into memory, anchor each service to an everyday analogy.
Analogy 1 — The industrial kitchen (assembly-line analogy). Think of a central kitchen that pushes out ten thousand meals a day. AWS CodePipeline is the conveyor belt that moves trays from station to station. Each station is a stage: prep (source), cooking (build), plating (test), and delivery (deploy). AWS CodeBuild is the oven at the cooking station — it takes raw ingredients (source), follows a recipe (the buildspec.yml), and outputs cooked dishes (artifacts). AWS CodeDeploy is the delivery driver who takes the finished tray to the customer's table (EC2 fleet, Lambda alias, or ECS service) and follows a handoff checklist (appspec.yml hooks) at every step. Amazon ECR is the walk-in freezer where pre-portioned container images are kept. AWS CodeArtifact is the pantry of trusted ingredients (npm, Maven, PyPI, NuGet) that every station can draw from. Amazon CodeCatalyst is the kitchen manager's clipboard that tracks every ticket, recipe, oven, and driver in one unified dashboard. AWS CodeStar Notifications is the intercom system that beeps the manager on Slack whenever a tray is burned or delayed.
Analogy 2 — The airport baggage handling system (logistics analogy). Your source commit is a suitcase. AWS CodeCommit is the check-in counter. AWS CodePipeline is the conveyor belt that routes the suitcase through X-ray (build), customs (test), and finally onto the aircraft (deploy). A CodePipeline execution mode of Superseded is like a smart conveyor that lets a newer suitcase jump ahead of an older one when both are waiting for the same slot. Queued means every suitcase is processed strictly first-come-first-served. Parallel means the conveyor can run multiple suitcases simultaneously on separate belts. A CodeDeploy blue/green deployment on Amazon ECS is like taxiing an entirely new aircraft to the gate, checking passengers onto the new aircraft (test listener), and only then towing the old aircraft away (terminating the old task set).
Analogy 3 — The open-book exam (developer productivity analogy). The developer associate's daily work is an open-book exam with the AWS documentation as reference. AWS CI/CD pipeline tools automate the grading. CodeCommit is the answer sheet. CodeBuild is the auto-grader that runs unit tests. CodeDeploy is the registrar who publishes results. CodePipeline is the proctor who makes sure each step happens in order and stops the exam if any step fails. Manual approval actions are the proctor pausing to ask the department chair for a signature before releasing a grade. The whole system is version-controlled, reproducible, and audited in CloudTrail — which is exactly what a senior developer is expected to build for production.
Stringing the three analogies together: CodePipeline is the conveyor / proctor / clipboard; CodeBuild is the oven / X-ray / auto-grader; CodeDeploy is the delivery driver / aircraft tow / registrar; CodeCommit + CodeArtifact + ECR are the ingredients, pantry, and freezer; CodeCatalyst is the unified manager; CodeStar Notifications is the intercom. Every DVA-C02 question about AWS CI/CD pipeline tools reduces to mapping a scenario onto one of those roles.
AWS CodePipeline — The Orchestrator
AWS CodePipeline is the orchestration layer of the AWS CI/CD pipeline tools family. A pipeline is a directed chain of stages; each stage contains one or more actions; actions consume and produce artifacts stored in an S3 artifact bucket (or, optionally, an AWS CodeArtifact-adjacent store for specific integrations).
CodePipeline Stages and Actions
A CodePipeline pipeline always begins with a Source stage and typically ends with a Deploy stage. Between them you can compose any number of Build, Test, Approval, and Invoke stages. A pipeline must have at least two stages; the first must be a Source stage.
Action categories supported by AWS CodePipeline:
- Source — AWS CodeCommit, Amazon S3, Amazon ECR, GitHub (via AWS CodeStar Connections or CodeConnections), GitHub Enterprise Server, GitLab, and Bitbucket.
- Build — AWS CodeBuild, Jenkins, TeamCity.
- Test — AWS CodeBuild, AWS Device Farm, Ghost Inspector, third-party test providers.
- Deploy — AWS CodeDeploy, AWS Elastic Beanstalk, AWS CloudFormation, Amazon ECS, Amazon ECS with blue/green via CodeDeploy, AWS AppConfig, AWS OpsWorks, Amazon S3, AWS Service Catalog.
- Approval — manual approval by an IAM principal with optional Amazon SNS notification and URL for review.
- Invoke — AWS Lambda and AWS Step Functions.
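The stage-and-action model above can be sketched as a CloudFormation fragment. This is a hypothetical minimal two-stage pipeline (CodeCommit source, CodeBuild build); the resource names, bucket, repository, and role ARN are placeholders, not values from this guide.

```yaml
# Hypothetical sketch: a minimal two-stage CodePipeline definition.
# All names and ARNs are placeholders.
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    RoleArn: arn:aws:iam::111111111111:role/PipelineServiceRole
    ArtifactStore:
      Type: S3
      Location: my-pipeline-artifact-bucket   # the artifact store
    Stages:
      - Name: Source
        Actions:
          - Name: AppSource
            ActionTypeId:
              Category: Source
              Owner: AWS
              Provider: CodeCommit
              Version: "1"
            Configuration:
              RepositoryName: my-app
              BranchName: main
            OutputArtifacts:
              - Name: SourceOutput          # handed to the next stage
      - Name: Build
        Actions:
          - Name: AppBuild
            ActionTypeId:
              Category: Build
              Owner: AWS
              Provider: CodeBuild
              Version: "1"
            Configuration:
              ProjectName: my-app-build
            InputArtifacts:
              - Name: SourceOutput          # consumed by logical name
            OutputArtifacts:
              - Name: BuildOutput
```

Note how artifacts are wired purely by logical name; CodePipeline resolves the S3 object references between actions for you.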
Artifacts and the Artifact Store
Every CodePipeline pipeline uses an artifact store, an Amazon S3 bucket that holds the zipped inputs and outputs exchanged between actions. Each action declares input artifacts and output artifacts by logical name. CodePipeline passes the correct S3 object references between stages automatically. For multi-region pipelines you can attach one artifact store per Region; CodePipeline replicates artifacts into the destination Region before running actions there.
Manual Approval Actions
A Manual Approval action pauses the pipeline until an authorized IAM identity clicks Approve or Reject. You can attach an Amazon SNS topic so reviewers receive an email with a direct approval URL. Manual approval is the recommended AWS CI/CD pipeline tools pattern for the last step before production — a human gate that prevents automated rollouts of risky changes.
Cross-Account and Cross-Region Pipelines
AWS CodePipeline supports cross-account actions by assuming an IAM role in the target account. The source account's pipeline role must be granted sts:AssumeRole on the target account's role, and the target account's role must have the permissions required to perform the action (deploy a stack, push to ECR, and so on). A common pattern is a central pipeline in a "tools" account deploying into segregated dev, staging, and prod accounts.
AWS CodePipeline supports cross-region actions by configuring additional artifact stores in each Region you deploy to. The pipeline itself lives in one Region (the "home Region") but invokes actions in other Regions by name. CodePipeline replicates the input artifacts to the remote Region before the action runs.
Pipeline Triggers, Filters, and Polling vs Events
A pipeline can start automatically on source change via three mechanisms:
- CloudWatch Events / Amazon EventBridge (recommended) — event-driven, low-latency. CodeCommit emits a referenceUpdated event; an EventBridge rule starts the pipeline instantly.
- Amazon S3 source events — for S3 sources, an EventBridge rule on Object Created is the modern recommendation; the legacy path was CodePipeline polling, which is slower and costs more API calls.
- Webhook (for GitHub / Bitbucket / GitLab via CodeConnections) — Git providers push events into CodePipeline.
Pipeline triggers and filters let you restrict which source changes start the pipeline: trigger on specific branches, tags, file paths, or pull-request events. Filters are evaluated at the trigger layer, so irrelevant pushes never consume CodeBuild minutes. This is the V2 pipeline type feature every DVA-C02 candidate should know by name.
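Trigger filtering can be sketched as a V2 pipeline fragment. This is a hypothetical example under the assumption of a CodeConnections-based source action named AppSource; branch and path patterns are placeholders.

```yaml
# Hypothetical fragment: start the pipeline only for pushes to main
# that touch files under services/api/**. Requires PipelineType: V2
# and a CodeConnections (CodeStarSourceConnection) source action.
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    PipelineType: V2
    Triggers:
      - ProviderType: CodeStarSourceConnection
        GitConfiguration:
          SourceActionName: AppSource
          Push:
            - Branches:
                Includes:
                  - main
              FilePaths:
                Includes:
                  - services/api/**
```

Because the filter is evaluated at the trigger layer, a push to a feature branch or to an unrelated path never starts an execution at all.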
Pipeline Execution Modes — Queued, Parallel, Superseded
AWS CodePipeline V2 supports three execution modes that control what happens when a new pipeline execution arrives while another is still in progress:
- Superseded (default) — a newer execution can replace an older one waiting between stages. Use when only the latest commit matters (mainline development).
- Queued — executions run one at a time in FIFO order. No execution can skip ahead. Use when strict ordering matters (for example, sequential database migrations).
- Parallel — executions run concurrently on independent flows. Use when each source change is isolated (for example, a monorepo with per-service branches).
Choosing the correct execution mode is a pipeline type V2 exam topic. When the DVA-C02 exam describes a team that "only cares about the latest change," the answer is Superseded. When the exam describes "every commit must be deployed in order," the answer is Queued. When the exam mentions "concurrent feature branches," the answer is Parallel.
The DVA-C02 exam tests CodePipeline execution modes as a scenario match. Memorize:
- Superseded (default) — newer executions replace older ones in the waiting area. Best for mainline "latest-wins" delivery.
- Queued — strict FIFO, one execution at a time across the pipeline. Best when order matters (migrations, sequenced rollouts).
- Parallel — fully concurrent executions on independent flows. Best for monorepos or multi-branch pipelines. Execution modes are a V2-only feature; a V1 pipeline cannot switch modes. See: https://docs.aws.amazon.com/codepipeline/latest/userguide/concepts.html
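Switching modes is a one-property change on a V2 pipeline. A hypothetical fragment, assuming CloudFormation:

```yaml
# Hypothetical fragment: force strict FIFO processing of executions.
# ExecutionMode accepts SUPERSEDED (default), QUEUED, or PARALLEL;
# a V1 pipeline does not support the property at all.
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    PipelineType: V2
    ExecutionMode: QUEUED
```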
CloudWatch Events Trigger vs Polling
Legacy CodePipeline pipelines used polling — CodePipeline would poll the source (CodeCommit, S3) on a schedule for changes. Polling is deprecated in favor of CloudWatch Events / EventBridge triggers because polling is slower (up to minutes of delay), more expensive (API calls), and cannot filter by file path. The modern AWS CI/CD pipeline tools pattern is always EventBridge-driven triggers. If a DVA-C02 question mentions "reduce pipeline start latency," the correct answer is to replace polling with EventBridge-based triggers.
AWS CodeBuild — The Builder
AWS CodeBuild is the fully managed build service in the AWS CI/CD pipeline tools family. It runs an ephemeral build container on AWS-managed compute, executes the phases defined in buildspec.yml, and publishes artifacts and reports.
buildspec.yml Phases
A buildspec.yml is a YAML file stored in the source root that defines the CodeBuild execution plan. The file has four ordered phases plus optional sections:
```yaml
version: 0.2
env:
  variables:
    BUILD_ENV: "prod"
  parameter-store:
    DB_HOST: "/prod/db/host"
  secrets-manager:
    API_KEY: "prod/api:SecretString:key"
phases:
  install:
    runtime-versions:
      nodejs: 20
    commands:
      - npm ci
  pre_build:
    commands:
      - echo "Logging in to ECR"
      - aws ecr get-login-password | docker login --username AWS --password-stdin $REPO_URI
  build:
    commands:
      - npm run build
      - docker build -t $REPO_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION .
  post_build:
    commands:
      - docker push $REPO_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION
      - printf '[{"name":"app","imageUri":"%s"}]' $REPO_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION > imagedefinitions.json
artifacts:
  files:
    - imagedefinitions.json
    - appspec.yml
    - taskdef.json
reports:
  jest-reports:
    files:
      - coverage/junit.xml
    file-format: JUNITXML
cache:
  paths:
    - node_modules/**/*
```
The four phases:
- install — runtime setup. Use runtime-versions to pin Node.js, Python, Java, Go, Ruby, .NET, PHP, or Docker. Install tool-chain dependencies that do not belong in the application cache.
- pre_build — sign in to registries, run linters, fetch secrets, or any step that must succeed before the actual build.
- build — compile, bundle, or package. Produces the primary artifact.
- post_build — push images, tag, emit imagedefinitions.json for Amazon ECS, or write Lambda zip outputs.
Phase failures propagate — if install fails, later phases are skipped. A failure in post_build marks the whole build failed even if build succeeded.
Artifacts and Reports
Artifacts are the files CodeBuild publishes to S3 (or passes back to CodePipeline as output artifacts). Declare them under the artifacts: block. For CodePipeline-driven builds, CodeBuild automatically packages the artifact into the pipeline's artifact store.
Reports are a separate first-class CodeBuild feature for test results and code coverage. A report group holds multiple runs of the same test suite. Supported formats: JUnit XML, Cucumber JSON, TestNG XML, Visual Studio TRX, NUnit, and Cobertura / JaCoCo / Clover / SimpleCov for code coverage. Reports appear in the CodeBuild console with pass/fail trends — DVA-C02 considers reports the standard answer when the scenario asks "where should test results be collected and visualized?"
Environment Variables, Parameter Store, and Secrets Manager
CodeBuild exposes three injection mechanisms for build-time configuration:
- Plain variables — non-sensitive values such as build flags and feature toggles.
- parameter-store — references to AWS Systems Manager Parameter Store parameters. Supports SecureString (KMS-encrypted) parameters.
- secrets-manager — direct references to AWS Secrets Manager secrets. Use for credentials that must be rotated (database passwords, API keys).
Never hardcode secrets in buildspec.yml or commit them to source. The DVA-C02 exam flags any answer that reads a secret from plaintext variables as incorrect.
Compute Type, Build Image, and Custom Images
A CodeBuild project specifies:
- Compute type — BUILD_GENERAL1_SMALL (3 GB / 2 vCPU), BUILD_GENERAL1_MEDIUM (7 GB / 4 vCPU), BUILD_GENERAL1_LARGE (15 GB / 8 vCPU), BUILD_GENERAL1_2XLARGE (145 GB / 72 vCPU). Larger tiers cost more per minute.
- Build image — the container image the build runs inside. AWS-managed images are named aws/codebuild/standard:7.0, aws/codebuild/amazonlinux2-x86_64-standard:5.0, and so on. They include a curated toolchain (Docker, Node, Python, Java, Go, .NET).
- Custom image — any image pulled from Amazon ECR or Docker Hub. Use when the AWS-managed images lack a required tool (a specific compiler version, an embedded build system).
Larger compute types reduce wall-clock build time but cost more per minute. The DVA-C02 answer to "builds are too slow" is usually "enable caching" first, then "increase compute type" second.
Local Build with Docker
CodeBuild provides local build support via codebuild_build.sh and the CodeBuild Agent Docker image. Developers can reproduce a full CodeBuild run on their laptop before pushing, which accelerates buildspec.yml debugging. This is the correct answer when the DVA-C02 exam asks how to debug a build that only fails inside CodeBuild without consuming AWS build minutes for every try.
Cache — S3 and Local
CodeBuild supports two cache types:
- S3 cache — declared by bucket and key. Shared across build hosts. Best for dependencies that rarely change (node_modules, Maven ~/.m2, pip wheels). Slightly slower than local cache because it is downloaded each run.
- Local cache — cached on the CodeBuild host itself. Three modes: DOCKER_LAYER (Docker layer cache for docker build), SOURCE (Git clone cache), and CUSTOM (arbitrary paths). Faster than S3 cache, but only available when the same host is reused — a cold host is a cache miss.
The DVA-C02 exam uses caching as a performance-optimization answer. When the scenario says "CodeBuild downloads the same 300 MB of dependencies every run," the fix is an S3 cache with paths: pointing at the dependency folder.
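The two cache types map to a small fragment of the CodeBuild project definition. A hypothetical sketch (the bucket name is a placeholder, and required project properties such as Source, Artifacts, Environment, and ServiceRole are omitted for brevity):

```yaml
# Hypothetical AWS::CodeBuild::Project cache fragments.
BuildProjectWithS3Cache:
  Type: AWS::CodeBuild::Project
  Properties:
    Cache:
      Type: S3
      Location: my-build-cache-bucket/cache-prefix   # shared across hosts
BuildProjectWithLocalCache:
  Type: AWS::CodeBuild::Project
  Properties:
    Cache:
      Type: LOCAL
      Modes:                      # any combination of the three modes
        - LOCAL_DOCKER_LAYER_CACHE
        - LOCAL_SOURCE_CACHE
        - LOCAL_CUSTOM_CACHE      # paths come from cache.paths in buildspec.yml
```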
Batch Builds
CodeBuild batch builds let a single project run multiple related builds in one invocation — for example, x86, Arm64, and Windows variants of the same release. Batch builds share configuration but run on independent hosts and can be parallelized. Batch builds are the right answer when the DVA-C02 exam describes "build the same application on multiple architectures in a single CodePipeline action."
Webhooks
When CodeBuild is triggered directly from a Git source (outside of CodePipeline), you configure a webhook — a callback URL that the Git provider invokes on push, pull_request, tag, or release. Webhook filter groups let you restrict which events start a build (specific branches, file path patterns, actor identities). Webhooks are the DVA-C02 answer for "run a CodeBuild on every pull request without building a pipeline." Inside CodePipeline, pipeline triggers and filters handle the same concern at the pipeline level.
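A webhook filter group for the "build every pull request" case can be sketched as follows; this is a hypothetical fragment, and the branch regex is a placeholder. Filters within one group are ANDed; separate groups are ORed.

```yaml
# Hypothetical fragment: start a standalone CodeBuild build for pull
# requests that target main. Other required project properties omitted.
BuildProject:
  Type: AWS::CodeBuild::Project
  Properties:
    Triggers:
      Webhook: true
      FilterGroups:
        - - Type: EVENT
            Pattern: PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED
          - Type: BASE_REF
            Pattern: ^refs/heads/main$
```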
When a DVA-C02 question asks how to reduce CodeBuild time, always try caching first before scaling the compute type. The order of optimization is: (1) enable S3 cache for dependencies, (2) enable Local cache DOCKER_LAYER for Docker builds, (3) split into batch builds if architectures differ, and (4) increase compute type (small → medium → large). Jumping straight to a larger compute type without caching is the wrong AWS CI/CD pipeline tools answer because it costs more and does not fix the underlying dependency re-download. See: https://docs.aws.amazon.com/codebuild/latest/userguide/build-caching.html
AWS CodeDeploy — The Deployer
AWS CodeDeploy is the deployment service in the AWS CI/CD pipeline tools family. It supports three compute platforms: EC2 / on-premises, AWS Lambda, and Amazon ECS. Each platform has its own appspec.yml shape, its own deployment configurations, and its own lifecycle-hook set.
appspec.yml Basics
The appspec.yml is the deployment manifest CodeDeploy reads to know what to deploy and how. Its shape differs per platform:
- For EC2 / on-premises, appspec.yml lists files to copy, permissions to set, and scripts to run at each lifecycle hook.
- For Lambda, appspec.yml names the function, the alias, the current version, the target version, and the lifecycle hook Lambda functions.
- For ECS, appspec.yml names the task definition, the container, the port, and the hook Lambda functions.
CodeDeploy on EC2 / On-Premises
For EC2 / on-premises deployments, a CodeDeploy agent must be installed and running on every target host. The agent polls CodeDeploy for deployment instructions. Targets are grouped into a deployment group — a named set of instances identified by Auto Scaling group membership, EC2 tags, or on-premises instance tags.
Supported deployment configurations for EC2 / on-premises:
- CodeDeployDefault.AllAtOnce — deploy to every instance simultaneously. Fastest, highest risk.
- CodeDeployDefault.HalfAtATime — half of instances at a time. Balanced.
- CodeDeployDefault.OneAtATime — one instance at a time. Slowest, lowest risk.
- Custom configurations — specify minimum healthy hosts as a count or percentage.
EC2 lifecycle hooks (in order of execution during an in-place deployment):
- ApplicationStop
- DownloadBundle
- BeforeInstall
- Install
- AfterInstall
- ApplicationStart
- ValidateService
During a blue/green EC2 deployment, additional traffic-routing hooks fire: BeforeBlockTraffic, BlockTraffic, and AfterBlockTraffic on the original fleet, then BeforeAllowTraffic, AllowTraffic, and AfterAllowTraffic on the replacement fleet. The order of hooks is a perennial DVA-C02 trap — memorize BeforeInstall → Install → AfterInstall → ApplicationStart → ValidateService.
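The EC2 hooks above are wired to scripts in the appspec.yml. A minimal sketch, in which the file paths and script names are placeholders:

```yaml
# Hypothetical EC2/on-premises appspec.yml. Each hook entry points at a
# script bundled inside the revision archive; timeout is in seconds.
version: 0.0
os: linux
files:
  - source: /build/output        # path inside the revision bundle
    destination: /var/www/app    # path on the target instance
hooks:
  BeforeInstall:
    - location: scripts/clean_previous.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/configure_app.sh
      timeout: 300
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
  ValidateService:
    - location: scripts/smoke_test.sh
      timeout: 300
```

Note that DownloadBundle and Install themselves are reserved for the CodeDeploy agent; you attach scripts only to the hooks around them.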
CodeDeploy on AWS Lambda
CodeDeploy on Lambda shifts traffic between two versions of a Lambda function behind an alias. You deploy a new version, CodeDeploy shifts traffic from the old version to the new version according to a deployment configuration, and lifecycle hook Lambda functions validate the traffic shift.
Lambda deployment configurations:
- AllAtOnce — 100% traffic shift immediately (CodeDeployDefault.LambdaAllAtOnce).
- Linear — shift a fixed percentage every N minutes. Built-in examples: LambdaLinear10PercentEvery1Minute, LambdaLinear10PercentEvery2Minutes, LambdaLinear10PercentEvery3Minutes, LambdaLinear10PercentEvery10Minutes.
- Canary — shift a small percentage immediately, then the remainder after a wait. Built-in examples: LambdaCanary10Percent5Minutes, LambdaCanary10Percent10Minutes, LambdaCanary10Percent15Minutes, LambdaCanary10Percent30Minutes.
Lambda lifecycle hooks: BeforeAllowTraffic (runs before any traffic shift — ideal for smoke tests against the new version) and AfterAllowTraffic (runs after the full shift — final validation). Each hook is a separate Lambda function.
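The Lambda appspec.yml ties the alias, the two versions, and the two hook functions together. A minimal sketch; the function name, alias, version numbers, and hook function names are placeholders:

```yaml
# Hypothetical Lambda appspec.yml. CodeDeploy shifts the alias's
# traffic from CurrentVersion to TargetVersion per the deployment
# configuration; each hook is a separate Lambda function.
version: 0.0
Resources:
  - myFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: my-function
        Alias: live
        CurrentVersion: "4"
        TargetVersion: "5"
Hooks:
  - BeforeAllowTraffic: preTrafficSmokeTestFunction
  - AfterAllowTraffic: postTrafficValidationFunction
```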
CodeDeploy on Amazon ECS (Blue/Green)
CodeDeploy on ECS always performs a blue/green deployment. It creates a new task set (green) alongside the existing task set (blue), shifts traffic between them via a load balancer, and finally terminates the blue task set.
Two load balancer listeners are used:
- Production listener — carries real user traffic. Always pointed at blue until the cutover.
- Test listener (optional) — lets the green task set be exercised by internal smoke tests before production traffic is shifted. Typically listens on a non-standard port.
Wait time configuration adds a pause between cutover steps so you can observe the green deployment before termination:
- Wait time for test traffic to start — how long after the green task set is created to keep the test listener active.
- Wait time before terminating the original task set — how long blue is kept running after production traffic has fully shifted to green. If you need to roll back, you roll back within this window by flipping traffic back to blue — the fastest form of rollback on AWS.
ECS lifecycle hooks (all are Lambda functions): BeforeInstall, AfterInstall, AfterAllowTestTraffic, BeforeAllowTraffic, AfterAllowTraffic. The AfterAllowTestTraffic hook is the DVA-C02 classic — it validates the green task set via the test listener before production traffic is shifted.
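For ECS, the appspec.yml names the green task definition and the load balancer binding. A minimal sketch; the task definition ARN, container name and port, and hook function name are placeholders:

```yaml
# Hypothetical ECS appspec.yml for a CodeDeploy blue/green deployment.
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: arn:aws:ecs:us-east-1:111111111111:task-definition/my-app:42
        LoadBalancerInfo:
          ContainerName: app
          ContainerPort: 8080
Hooks:
  # Validate the green task set via the test listener before the
  # production listener is switched over.
  - AfterAllowTestTraffic: validateGreenViaTestListenerFunction
```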
Deployment Groups
A deployment group identifies the target of a CodeDeploy deployment. For EC2 / on-premises, it is a set of tagged instances or an Auto Scaling group. For Lambda, it is a function + alias. For ECS, it is a cluster + service. A CodeDeploy application can own many deployment groups, which is how you model dev, staging, and prod with one application definition.
Automatic Rollback on Alarm
CodeDeploy supports automatic rollback on two triggers:
- Deployment failure — any hook failure or health-check failure triggers rollback to the last known-good revision.
- CloudWatch Alarm — if you associate a CloudWatch alarm with the deployment group, CodeDeploy rolls back automatically when the alarm enters the ALARM state during deployment. This is the DVA-C02 canonical answer for "how do I roll back when the new version increases error rate or latency?"
Rollback works by redeploying the previous successful revision, not by resurrecting the prior compute resources — so the compute target (instances, Lambda alias, ECS task set) receives the old code again via a brand-new deployment.
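Both rollback triggers are configured on the deployment group. A hypothetical CloudFormation fragment (the alarm name is a placeholder, and required properties such as ApplicationName and ServiceRoleArn are omitted for brevity):

```yaml
# Hypothetical fragment: roll back automatically on a failed deployment
# step OR when the named CloudWatch alarm fires during the deployment.
DeploymentGroup:
  Type: AWS::CodeDeploy::DeploymentGroup
  Properties:
    AutoRollbackConfiguration:
      Enabled: true
      Events:
        - DEPLOYMENT_FAILURE        # hook or health-check failure
        - DEPLOYMENT_STOP_ON_ALARM  # metric regression after cutover
    AlarmConfiguration:
      Enabled: true
      Alarms:
        - Name: app-5xx-error-rate-high
```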
DVA-C02 frequently asks "which CodeDeploy lifecycle hook runs after the application has been installed but before it starts?" Candidates who skim confuse AfterInstall with ApplicationStart. The correct order on EC2 is: ApplicationStop → DownloadBundle → BeforeInstall → Install → AfterInstall → ApplicationStart → ValidateService. Memorize the sequence as "Stop, Download, BeforeInstall, Install, AfterInstall, Start, Validate." On Lambda the order is BeforeAllowTraffic → traffic shift → AfterAllowTraffic. On ECS the order is BeforeInstall → AfterInstall → AfterAllowTestTraffic → BeforeAllowTraffic → AfterAllowTraffic. Mixing EC2 hooks into a Lambda or ECS answer is an instant loss of points. See: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
The DVA-C02 exam separates two rollback triggers: deployment-step failure and CloudWatch Alarm. Only the CloudWatch Alarm trigger catches post-deployment metric regressions such as elevated 5xx rate, p99 latency, or DLQ message count. When the scenario says "roll back if customer-facing error rate spikes after the new version is live," the correct answer is "associate a CloudWatch alarm with the CodeDeploy deployment group" — not "add another health check" and not "increase ValidateService script retries." See: https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments-rollback-and-redeploy.html
AWS CodeCommit — Managed Git
AWS CodeCommit is the managed Git service in the AWS CI/CD pipeline tools family. It hosts private Git repositories, supports HTTPS and SSH access, and integrates natively with IAM for authentication.
Key CodeCommit features for DVA-C02:
- IAM-based auth — no separate credentials system; the same IAM user or role that can call AWS services can clone a CodeCommit repo.
- Cross-account access — roles in account A can assume a role in account B and clone repos in B.
- Triggers — run an AWS Lambda function or publish an Amazon SNS message when events occur: commit pushed, branch created, branch deleted, tag created. Triggers are the native way to chain CodeCommit into event-driven AWS CI/CD pipeline tools workflows without polling.
- Approval rule templates — enforce pull-request approvals before merging.
- Pull requests — branch-based code review with comments.
- Encryption at rest — always on, via AWS KMS (aws/codecommit managed key or a customer-managed key).
In DVA-C02 scenarios, CodeCommit is the answer for "a fully managed Git repository that integrates with IAM and can trigger Lambda functions on commit without a third-party webhook." A CodeCommit trigger is distinct from an EventBridge rule: triggers are defined on the repository and fire synchronously from the repository into Lambda or SNS, whereas EventBridge rules observe CodeCommit events across an account and dispatch to any EventBridge target.
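A repository-level trigger can be sketched as follows; this is a hypothetical fragment in which the repository name and SNS topic ARN are placeholders.

```yaml
# Hypothetical fragment: publish to SNS whenever commits are pushed to
# main. Valid Events values are all, updateReference, createReference,
# and deleteReference.
Repository:
  Type: AWS::CodeCommit::Repository
  Properties:
    RepositoryName: my-app
    Triggers:
      - Name: notify-on-push
        DestinationArn: arn:aws:sns:us-east-1:111111111111:repo-events
        Branches:
          - main
        Events:
          - updateReference
```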
AWS CodeArtifact — Private Package Registry
AWS CodeArtifact is the AWS-managed package registry in the AWS CI/CD pipeline tools family. It provides private repositories for the following package formats:
- npm (Node.js)
- Maven (Java)
- PyPI (Python)
- NuGet (.NET)
- Generic (any file; for release bundles and binaries not tied to a language ecosystem)
- Swift (SwiftPM), Ruby (Bundler), and others are continually added; check the console for the current list.
CodeArtifact supports upstream repositories. A private repository can proxy and cache public upstream sources such as npmjs.org, Maven Central, PyPI, NuGet Gallery. The first time a developer requests a public dependency, CodeArtifact fetches it from the upstream and caches it. Subsequent fetches are served from the CodeArtifact cache — reducing external dependency risk (supply-chain attacks, upstream outages) and accelerating builds inside CodeBuild.
Authentication to CodeArtifact happens via IAM-scoped auth tokens. A developer or CodeBuild container runs aws codeartifact get-authorization-token and exports the token into npm, Maven, pip, or dotnet CLI configuration.
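Inside CodeBuild, that token exchange is usually one command in pre_build. A buildspec.yml sketch in which the domain, owner account, and repository are placeholders; the `aws codeartifact login` helper fetches the token and writes it into the npm configuration in one step.

```yaml
# Hypothetical buildspec fragment: authenticate npm against a private
# CodeArtifact repository before installing dependencies.
version: 0.2
phases:
  pre_build:
    commands:
      - aws codeartifact login --tool npm --domain my-domain --domain-owner 111111111111 --repository my-repo
  build:
    commands:
      - npm ci   # resolves packages through CodeArtifact (and its upstreams)
```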
DVA-C02 uses CodeArtifact as the answer for "an internal package registry compatible with npm / Maven / pip / NuGet, with IAM-based access and public upstream caching." It is not a container registry — for containers, the answer is Amazon ECR.
Amazon ECR — Container Image Registry
Amazon Elastic Container Registry (Amazon ECR) is the container image registry in the AWS CI/CD pipeline tools family. It supports two registry types:
- Amazon ECR private registry — per-account private repositories. Images are pulled by Amazon ECS, Amazon EKS, AWS App Runner, and AWS Lambda (for container-image Lambda). IAM governs push and pull permissions. Cross-account pull is configured via repository policies.
- Amazon ECR public registry (Amazon ECR Public / public.ecr.aws) — publicly pullable images. Use for images you want the internet to consume (open-source distributions, example apps).
Lifecycle Policies
ECR lifecycle policies delete old images automatically to control storage cost. A policy is a JSON document with one or more rules: "expire images older than 30 days" or "keep only the last 10 images tagged release-*." Lifecycle rules are evaluated daily. DVA-C02 uses lifecycle policies as the answer for "ECR storage is growing unbounded; how do I clean up old images without manual work?"
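The "keep only the last 10 release images" rule looks like this in practice. A hypothetical fragment: the repository name is a placeholder, and the lifecycle policy itself is the JSON document ECR evaluates daily, embedded here in a CloudFormation resource.

```yaml
# Hypothetical fragment: expire all but the ten most recent images
# whose tags start with release-.
Repository:
  Type: AWS::ECR::Repository
  Properties:
    RepositoryName: my-app
    LifecyclePolicy:
      LifecyclePolicyText: |
        {
          "rules": [
            {
              "rulePriority": 1,
              "description": "Keep only the last 10 release images",
              "selection": {
                "tagStatus": "tagged",
                "tagPrefixList": ["release-"],
                "countType": "imageCountMoreThan",
                "countNumber": 10
              },
              "action": { "type": "expire" }
            }
          ]
        }
```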
Image Scanning
ECR offers two scanning modes:
- Basic scanning — uses the open-source Clair CVE database. Scan on push can be enabled per repository, and scans can also be started manually.
- Enhanced scanning — powered by Amazon Inspector. Continuous scanning of both OS packages and application language dependencies (npm, Maven, PyPI, NuGet, Go, Ruby). Findings are pushed into AWS Security Hub.
For DVA-C02 "scan container images for CVEs before they reach production" the answer is ECR enhanced scanning via Amazon Inspector.
Pull-Through Cache Rules
ECR supports pull-through cache rules for public upstream registries (Docker Hub, Quay, Amazon ECR Public, GitHub Container Registry, and the Microsoft Container Registry). The first pull fetches the image from the upstream; subsequent pulls serve from the ECR cache. This mirrors CodeArtifact's upstream behavior for containers.
DVA-C02 expects you to answer "which registry for what?" without hesitation:
- Amazon ECR private — container images for ECS, EKS, Lambda, App Runner. Supports lifecycle policies and vulnerability scanning (basic Clair or enhanced via Amazon Inspector).
- Amazon ECR public (public.ecr.aws) — publicly pullable container images.
- AWS CodeArtifact — language packages for npm, Maven, PyPI, NuGet, and generic binary formats. Supports public upstream caching.
- Amazon S3 — raw artifact storage used by CodePipeline's artifact store and by CodeBuild for artifact outputs not bound to a package format. A DVA-C02 question that mentions "cache Docker Hub images inside our account" maps to ECR pull-through cache; a question that mentions "cache npmjs packages" maps to CodeArtifact upstream. See: https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html
Amazon CodeCatalyst — Unified DevOps Platform
Amazon CodeCatalyst is the unified DevOps platform in the AWS CI/CD pipeline tools family. CodeCatalyst wraps source control, issue tracking, workflows (CI/CD), development environments, and project dashboards into a single project-level experience.
Core CodeCatalyst primitives:
- Spaces — the billing and identity boundary. A space contains projects.
- Projects — the unit of work. Each project has its own source repositories, workflows, environments, and issues.
- Workflows — YAML-defined CI/CD pipelines inside a project. Triggers include pushes, pull requests, schedules, and manual runs. Actions include build, test, deploy, and integrations with AWS CodeBuild, AWS CodeDeploy, AWS Lambda, Amazon ECS, AWS CloudFormation, Terraform, and custom actions.
- Environments — named deployment targets linked to AWS accounts via account connections. Workflows deploy into environments.
- Dev Environments — cloud-hosted development machines integrated with IDEs (Visual Studio Code, JetBrains, AWS Cloud9) so the whole team develops against identical tooling.
- Blueprints — project templates that scaffold an entire project with best-practice workflows and environments.
- Issues — lightweight task tracking integrated with commits and workflows.
CodeCatalyst differs from the individual AWS CI/CD pipeline tools (CodePipeline, CodeBuild, CodeDeploy) in scope — it is a unified experience that spans source, CI/CD, environments, and tickets. The older services remain first-class and are what CodeCatalyst invokes under the hood for many actions. DVA-C02 uses CodeCatalyst as the answer when the scenario emphasizes "a single unified DevOps platform across projects, workflows, and environments" rather than assembling the individual services yourself.
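As a sketch of what a CodeCatalyst workflow definition looks like, here is a minimal push-triggered build workflow (the action name, branch, and commands are illustrative; check the CodeCatalyst workflow YAML reference for the full schema):

```yaml
# Minimal CodeCatalyst workflow: run a build on every push to main
Name: build-and-test
SchemaVersion: "1.0"
Triggers:
  - Type: PUSH
    Branches:
      - main
Actions:
  BuildAndTest:
    # Built-in build action; CodeCatalyst runs the steps in a managed environment
    Identifier: aws/build@v1
    Inputs:
      Sources:
        - WorkflowSource
    Configuration:
      Steps:
        - Run: npm ci
        - Run: npm test
```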
AWS CodeStar Notifications — The Notification Layer
AWS CodeStar Notifications is the cross-service notification rule layer for the AWS CI/CD pipeline tools family. You create notification rules that match events from CodeCommit, CodeBuild, CodeDeploy, and CodePipeline and route them to three target types:
- Amazon SNS topics — for email, SMS, or generic fan-out.
- AWS Chatbot — for posting into Slack channels or Amazon Chime chat rooms.
- (Indirect) any EventBridge target via SNS fan-out or a Lambda subscription.
Notification rules are per-resource — a CodePipeline pipeline has its own notification rule that specifies which events (pipeline started, pipeline succeeded, pipeline failed, stage failed, action failed, manual approval needed) fire into which targets. For DVA-C02, CodeStar Notifications is the answer for "notify the team on Slack when a pipeline fails" — not a custom EventBridge rule with a custom Lambda, though that alternative also exists.
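For example, a notification rule routing pipeline-failure events to Slack could be created from a JSON definition like this sketch (the ARNs are placeholders, and the event type ID shown follows the documented CodeStar Notifications naming pattern — verify against the API reference):

```json
{
  "Name": "my-pipeline-failed-to-slack",
  "Resource": "arn:aws:codepipeline:us-east-1:111122223333:my-pipeline",
  "DetailType": "FULL",
  "EventTypeIds": [
    "codepipeline-pipeline-pipeline-execution-failed"
  ],
  "Targets": [
    {
      "TargetType": "AWSChatbotSlack",
      "TargetAddress": "arn:aws:chatbot::111122223333:chat-configuration/slack-channel/build-alerts"
    }
  ]
}
```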
The AWS CodeStar service itself (the unified project dashboard that shipped in 2017) was deprecated in 2024 and should not be confused with AWS CodeStar Notifications, which remains fully supported and is part of the AWS CI/CD pipeline tools family.
Putting It Together — A Reference Architecture
A typical DVA-C02-scale production AWS CI/CD pipeline tools architecture looks like this:
- Developer pushes to AWS CodeCommit on the main branch.
- A CodePipeline trigger fires on referenceUpdated via an Amazon EventBridge rule.
- CodePipeline Source stage reads the commit into the S3 artifact store.
- CodePipeline Build stage invokes AWS CodeBuild, which runs the buildspec.yml phases: install dependencies (from AWS CodeArtifact upstream cache), build the Docker image, push to Amazon ECR (with lifecycle policies pruning old tags), and emit imagedefinitions.json, appspec.yml, and taskdef.json as artifacts.
- CodePipeline Test stage invokes a second CodeBuild project for integration tests that publish JUnit reports.
- CodePipeline Manual Approval stage pauses and emails reviewers via an Amazon SNS topic.
- CodePipeline Deploy stage invokes AWS CodeDeploy with an ECS blue/green deployment group. CodeDeploy creates the green task set, runs the AfterAllowTestTraffic hook against the test listener, shifts production traffic, and waits 30 minutes before terminating blue.
- If any CloudWatch alarm (error rate, p99 latency, DLQ depth) enters ALARM during deployment, CodeDeploy auto-rolls back.
- AWS CodeStar Notifications pushes pipeline success / failure events into a Slack channel via AWS Chatbot.
- Amazon CodeCatalyst (optional) wraps the whole thing as a single project with issues, workflows, and dev environments.
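The Deploy stage above is driven by an ECS appspec.yml along these lines (container name, port, and the hook Lambda ARN are placeholders; the <TASK_DEFINITION> token is substituted by the pipeline at deploy time):

```yaml
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        # CodePipeline substitutes the freshly registered task definition here
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: web
          ContainerPort: 8080
Hooks:
  # Validation Lambda runs against the test listener before production traffic shifts
  - AfterAllowTestTraffic: "arn:aws:lambda:us-east-1:111122223333:function:validate-green"
```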
Every named service in this architecture is part of the AWS CI/CD pipeline tools family and is in scope for DVA-C02.
Key Numbers and Must-Memorize Facts
- CodePipeline stages per pipeline: 2 minimum, first must be Source.
- CodePipeline execution modes: Superseded (default), Queued, Parallel.
- CodePipeline types: V1 (legacy, polling) and V2 (execution modes, triggers/filters).
- CodeBuild buildspec phases: install, pre_build, build, post_build.
- CodeBuild compute types: SMALL (3 GB), MEDIUM (7 GB), LARGE (15 GB), 2XLARGE (145 GB).
- CodeBuild cache types: S3 and Local (DOCKER_LAYER, SOURCE, CUSTOM).
- CodeDeploy EC2 lifecycle hooks (in order): ApplicationStop, DownloadBundle, BeforeInstall, Install, AfterInstall, ApplicationStart, ValidateService.
- CodeDeploy Lambda hooks: BeforeAllowTraffic, AfterAllowTraffic.
- CodeDeploy ECS hooks: BeforeInstall, AfterInstall, AfterAllowTestTraffic, BeforeAllowTraffic, AfterAllowTraffic.
- CodeDeploy Lambda configurations: AllAtOnce, Linear (10% every 1/2/3/10 min), Canary (10% then wait 5/10/15/30 min).
- CodeDeploy EC2 configurations: AllAtOnce, HalfAtATime, OneAtATime, Custom.
- ECR scanning modes: Basic (Clair) and Enhanced (Amazon Inspector).
- CodeArtifact formats: npm, Maven, PyPI, NuGet, generic (plus growing list).
- CodeStar Notifications targets: Amazon SNS and AWS Chatbot.
Common Exam Traps — Pitfalls to Memorize
Trap 1: "appspec.yml and buildspec.yml are interchangeable"
False. buildspec.yml is read by CodeBuild and defines build phases and artifacts. appspec.yml is read by CodeDeploy and defines deployment targets and lifecycle hooks. Putting phases into an appspec or hooks into a buildspec is an instantly wrong answer on DVA-C02.
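A minimal side-by-side sketch makes the split obvious (commands, scripts, and paths are illustrative):

```yaml
# buildspec.yml — consumed by CodeBuild: build phases + artifacts
version: 0.2
phases:
  install:
    commands:
      - npm ci
  build:
    commands:
      - npm run build
artifacts:
  files:
    - '**/*'
  base-directory: dist
---
# appspec.yml — consumed by CodeDeploy (EC2/on-premises): files + lifecycle hooks
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/app
hooks:
  BeforeInstall:
    - location: scripts/stop_server.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
```

Note the version numbers differ too: buildspec files use version 0.2, appspec files use version 0.0.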
Trap 2: "CodeDeploy ECS deployments can be in-place"
False. CodeDeploy on ECS is always blue/green. In-place deployments are only available on the EC2 / on-premises platform. If a DVA-C02 question says "in-place ECS deployment with CodeDeploy" the answer is incorrect.
Trap 3: "Lambda deployment Canary and Linear are the same thing"
False. Canary shifts a small percentage immediately, then the remainder after a single wait. Linear shifts a fixed percentage every N minutes in equal steps. Canary has two steps; Linear has many steps. Example: Canary10Percent30Minutes shifts 10% then waits 30 minutes then shifts 90%; Linear10PercentEvery3Minutes shifts 10% ten times over 30 minutes.
Trap 4: "Polling is the modern way to trigger pipelines"
False. Polling is legacy. Modern AWS CI/CD pipeline tools use Amazon EventBridge triggers for CodeCommit and S3 sources. Polling is slower, more expensive, and cannot filter by path.
Trap 5: "CodePipeline V1 supports execution modes"
False. Execution modes (Superseded, Queued, Parallel) are a V2-only feature. V1 pipelines always behave like Superseded.
Trap 6: "CodeArtifact stores container images"
False. Amazon ECR stores container images. CodeArtifact stores language packages (npm, Maven, PyPI, NuGet, generic). Cross these wires and the answer is wrong.
Trap 7: "CodeStar is the same as CodeStar Notifications"
False. The original AWS CodeStar project dashboard service was deprecated in 2024. AWS CodeStar Notifications is a separate, fully supported service that routes CI/CD events to SNS and AWS Chatbot. Amazon CodeCatalyst is the modern replacement for the CodeStar project-dashboard role.
Trap 8: "CloudWatch Alarm rollback happens only before deployment starts"
False. CodeDeploy monitors the configured CloudWatch alarm during the deployment window. If the alarm enters ALARM at any point, CodeDeploy triggers rollback — which is the whole point of alarm-based rollback.
Trap 9: "Manual approval in CodePipeline requires writing a custom Lambda"
False. Manual approval is a first-class action category with built-in SNS integration. No Lambda required.
Trap 10: "Pipeline triggers and webhooks are the same"
Partial. A webhook is a callback from a Git provider to CodeBuild (outside of CodePipeline). A pipeline trigger is a CodePipeline V2 feature that starts a pipeline on source events with branch/tag/path filters. They solve similar problems at different levels.
AWS CI/CD Pipeline Tools vs Similar Concepts
- AWS CodePipeline vs GitHub Actions — CodePipeline is AWS-native orchestration with first-class IAM, VPC, and cross-account support. GitHub Actions is GitHub-native with a larger third-party action catalog. DVA-C02 expects you to name AWS services as the answer; GitHub Actions is a distractor.
- AWS CodeBuild vs Jenkins — CodeBuild is fully managed, ephemeral, and billed per build minute. Jenkins is self-hosted or EC2-based and requires you to run the control plane. For DVA-C02, "fully managed build with no servers to manage" always maps to CodeBuild.
- AWS CodeDeploy vs Elastic Beanstalk — CodeDeploy is a deployment engine for EC2 / Lambda / ECS with fine-grained lifecycle hooks. Elastic Beanstalk is a managed platform abstraction that owns deployment, compute provisioning, and auto-scaling. A DVA-C02 question mentioning "blue/green deployment for ECS" maps to CodeDeploy, not Beanstalk.
- AWS CodeCommit vs GitHub / GitLab / Bitbucket — CodeCommit is the AWS-native managed Git. GitHub and friends integrate into CodePipeline via CodeConnections. DVA-C02 treats all as valid sources but expects CodeCommit when the scenario emphasizes IAM-based auth without external identities.
- Amazon CodeCatalyst vs individual Code services — CodeCatalyst is the unified wrapper. The individual services remain available and are often what CodeCatalyst invokes under the hood.
FAQ — AWS CI/CD Pipeline Tools Top Questions
Q1: What is the difference between buildspec.yml and appspec.yml?
A: buildspec.yml is consumed by AWS CodeBuild and defines four ordered phases (install, pre_build, build, post_build) plus artifacts, reports, environment variables, and cache. appspec.yml is consumed by AWS CodeDeploy and defines the deployment target plus lifecycle hooks whose exact set depends on the compute platform (EC2, Lambda, or ECS). Mixing the two files up is a high-frequency DVA-C02 trap.
Q2: Which CodePipeline execution mode should I pick for a team where only the latest commit matters?
A: Superseded (the default). When a newer execution arrives while an older one is still waiting between stages, the newer one replaces the older one. For strict FIFO ordering use Queued; for concurrent independent flows use Parallel.
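In a V2 pipeline definition, the execution mode is a top-level field. A truncated fragment (roleArn, artifact store, and stage definitions omitted; verify field names against the CodePipeline pipeline structure reference):

```json
{
  "pipeline": {
    "name": "my-pipeline",
    "pipelineType": "V2",
    "executionMode": "QUEUED"
  }
}
```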
Q3: How do I automatically roll back a CodeDeploy deployment when a CloudWatch alarm fires?
A: Associate the CloudWatch alarm with the CodeDeploy deployment group's rollback configuration. When the alarm enters ALARM during the deployment window, CodeDeploy triggers rollback to the last known-good revision. This is the canonical AWS CI/CD pipeline tools answer for "roll back on metric regression" — not a custom Lambda, not a Step Functions state machine.
Q4: What is the correct order of CodeDeploy EC2 lifecycle hooks?
A: ApplicationStop → DownloadBundle → BeforeInstall → Install → AfterInstall → ApplicationStart → ValidateService. For blue/green, additional hooks on the replacement and original fleets fire around traffic blocking and allowing.
Q5: What is the difference between Canary and Linear deployment configurations for Lambda?
A: A Canary configuration shifts a small percentage of traffic immediately, waits a fixed interval, then shifts the remaining traffic in one step — two total steps. A Linear configuration shifts a fixed percentage of traffic every N minutes in equal increments — many total steps. Canary is ideal when you want a quick initial validation with a long soak; Linear is ideal when you want gradual, evenly distributed ramp-up.
Q6: When should I use Amazon ECR vs AWS CodeArtifact?
A: Amazon ECR stores container images for ECS, EKS, App Runner, and Lambda container images. AWS CodeArtifact stores language packages for npm, Maven, PyPI, NuGet, and generic binary formats. A Docker image never belongs in CodeArtifact; an npm package never belongs in ECR. Both support caching public upstreams (ECR pull-through cache rules; CodeArtifact upstream repositories).
Q7: How do I trigger an AWS Lambda function whenever a commit is pushed to an AWS CodeCommit repo?
A: Two options. (1) A CodeCommit trigger defined directly on the repository that invokes the Lambda on referenceUpdated. (2) An Amazon EventBridge rule that matches CodeCommit referenceUpdated events and dispatches to the Lambda. CodeCommit triggers are simplest for a single downstream target; EventBridge rules are preferred when you need multiple targets, cross-account routing, or filter logic beyond the trigger's capability.
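The EventBridge route matches CodeCommit state-change events with a pattern like this (repository ARN is a placeholder):

```json
{
  "source": ["aws.codecommit"],
  "detail-type": ["CodeCommit Repository State Change"],
  "resources": ["arn:aws:codecommit:us-east-1:111122223333:my-repo"],
  "detail": {
    "event": ["referenceUpdated"],
    "referenceType": ["branch"],
    "referenceName": ["main"]
  }
}
```

Attach the Lambda function as the rule's target; the filter on referenceName restricts invocations to pushes on main.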
Q8: What is Amazon CodeCatalyst and when should I choose it over assembling individual Code services?
A: Amazon CodeCatalyst is a unified DevOps platform that wraps source control, workflows (CI/CD), environments, cloud development environments, and issue tracking into a single project-level experience. Choose CodeCatalyst when the scenario emphasizes "one unified DevOps platform across projects, workflows, and environments" and when the team values a pre-wired experience over assembling the individual services. Choose the individual services (CodeCommit, CodeBuild, CodeDeploy, CodePipeline, ECR) when you need maximum control, service-level IAM scoping, or integration with existing AWS CI/CD pipeline tools workflows.
Q9: How do I notify a Slack channel when a CodePipeline pipeline fails?
A: Create an AWS CodeStar Notifications rule on the pipeline resource that matches the Pipeline execution failed event and delivers to an AWS Chatbot client configured for the target Slack channel. No custom Lambda is required. AWS Chatbot also supports Amazon Chime and Microsoft Teams targets.
Q10: How should secrets be injected into a CodeBuild build?
A: Reference them in buildspec.yml under env.secrets-manager: for AWS Secrets Manager values or under env.parameter-store: for AWS Systems Manager Parameter Store SecureString values. Never hardcode secrets in buildspec.yml, never commit them to source, and never expose them via plain variables: entries. The CodeBuild service role must have secretsmanager:GetSecretValue or ssm:GetParameters permissions on the referenced resources.
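A sketch of the env block (parameter and secret names are hypothetical):

```yaml
version: 0.2
env:
  parameter-store:
    # SSM Parameter Store SecureString (hypothetical parameter name)
    DB_HOST: /myapp/prod/db-host
  secrets-manager:
    # Secrets Manager value, referenced as secret-id:json-key (hypothetical names)
    DB_PASSWORD: myapp/prod/credentials:password
phases:
  build:
    commands:
      # Both values are resolved at build start and exposed as environment variables
      - ./scripts/deploy.sh
```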
Further Reading — Official AWS Documentation
- AWS CodePipeline User Guide: https://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html
- AWS CodeBuild User Guide: https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html
- AWS CodeDeploy User Guide: https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html
- AWS CodeCommit User Guide: https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html
- AWS CodeArtifact User Guide: https://docs.aws.amazon.com/codeartifact/latest/ug/welcome.html
- Amazon ECR User Guide: https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html
- Amazon CodeCatalyst User Guide: https://docs.aws.amazon.com/codecatalyst/latest/userguide/welcome.html
- AWS CodeStar Notifications User Guide: https://docs.aws.amazon.com/codestar-notifications/latest/userguide/welcome.html
- CodeDeploy AppSpec Lifecycle Hooks Reference: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
- CodeBuild buildspec Reference: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html
- CodePipeline Execution Modes: https://docs.aws.amazon.com/codepipeline/latest/userguide/concepts.html
- AWS DVA-C02 Exam Guide v2.1: https://d1.awsstatic.com/training-and-certification/docs-dev-associate/AWS-Certified-Developer-Associate_Exam-Guide.pdf
Mastering AWS CI/CD pipeline tools is the single most efficient way to raise your DVA-C02 Domain 3 score. The exam reduces every Task 3.4 question to the seven AWS CI/CD pipeline tools covered above — CodePipeline as orchestrator, CodeBuild as builder, CodeDeploy as deployer, CodeCommit as source, CodeArtifact and Amazon ECR as registries, and Amazon CodeCatalyst as the unified wrapper — plus AWS CodeStar Notifications as the notification layer. Memorize the execution modes, the buildspec phases, the appspec.yml lifecycle hooks per platform, and the Canary versus Linear distinction, and the entire AWS CI/CD pipeline tools domain collapses into a small set of decision trees.