
Compute Services (EC2, ECS, EKS, Fargate, Lambda)

4,180 words · ≈ 21 min read

AWS compute services cover every way you can run code or workloads on AWS — from classic virtual machines on Amazon EC2, to containerized apps on Amazon ECS and Amazon EKS, to fully serverless execution on AWS Lambda and AWS Fargate. Task statement 3.3 of the CLF-C02 exam guide asks you to identify AWS compute services by use case, and this chapter prepares you to pick the right service quickly under exam pressure. Before we dive in, note that compute is the single biggest surface area in Domain 3, so expect 5–8 scenario questions on AWS compute services in a real CLF-C02 attempt.

What Are AWS Compute Services?

AWS compute services are the family of AWS offerings that execute your application logic, process data, and host workloads. On the CLF-C02 exam, AWS compute services are organized around three paradigms: virtual machines (Amazon EC2 and Amazon Lightsail), containers (Amazon ECS, Amazon EKS, plus AWS Fargate as a launch type), and serverless functions (AWS Lambda). Specialty AWS compute services like AWS Batch and AWS Outposts sit around the edges to cover batch jobs and on-premises extensions.

At the foundational level, the CLF-C02 exam guide expects you to:

  • Recognize every AWS compute service by name and by one-sentence use case.
  • Differentiate Amazon EC2 from AWS Lambda from AWS Fargate based on operational responsibility and workload duration.
  • Match Amazon ECS vs Amazon EKS to "AWS-native orchestration" vs "managed Kubernetes."
  • Know that AWS Fargate is a launch type — not a separate compute engine — that works with both Amazon ECS and Amazon EKS.
  • Understand how Auto Scaling Groups and Elastic Load Balancers combine with Amazon EC2 to achieve elasticity and high availability.

The scope of AWS compute services also overlaps with deployment and operation methods (task 3.1 — think AWS Elastic Beanstalk) and with pricing models (task 4.1 — think Amazon EC2 On-Demand vs Reserved vs Spot). This notes page focuses on the "identify and differentiate" layer, leaving deployment mechanics and pricing math to their dedicated topics.

Why AWS Compute Services Matter on CLF-C02

Compute is historically the most heavily tested Domain 3 subject on the CLF-C02 exam. Community mention counts put AWS compute services near the top of candidate confusion — especially "EC2 vs Lambda vs Fargate" scenario questions. Getting AWS compute services right secures a large block of Domain 3 points (34% of the exam) and feeds directly into shared-responsibility-model, pricing-models, and global-infrastructure questions.

Core Operating Principles of AWS Compute Services

AWS compute services share three core operating principles that show up across the CLF-C02 exam.

Principle 1 — The Management Spectrum

Every AWS compute service sits on a spectrum from "you manage everything above the hypervisor" (Amazon EC2) to "AWS manages everything" (AWS Lambda). The more abstract the service, the less operational overhead — but also the less control. Amazon EC2 gives maximum control, AWS Lambda gives minimum overhead, and AWS Fargate / Amazon ECS / Amazon EKS sit in the middle.

Principle 2 — Elasticity Through Orchestration

Compute elasticity on AWS is not a property of Amazon EC2 alone — it comes from combining Amazon EC2 (or containers) with Amazon EC2 Auto Scaling and Elastic Load Balancing. AWS Lambda is elastic by design (it scales to zero and to thousands of concurrent executions automatically). AWS Fargate scales at the task level without any host management.

Principle 3 — Pay for What You Use

Every AWS compute service follows pay-as-you-go, but the billing unit differs:

  • Amazon EC2: per second (with a 60-second minimum on most Linux instances) or per hour.
  • AWS Lambda: per millisecond of execution × allocated memory.
  • AWS Fargate: per second of vCPU and memory reserved by the task.
  • Amazon ECS / Amazon EKS control plane: per cluster-hour for Amazon EKS; free for Amazon ECS.
  • Amazon Lightsail: flat monthly price (predictable budgeting).

An AWS compute service is any AWS offering that runs customer code or workloads, regardless of abstraction level — virtual machine, container, or serverless function. Reference: https://aws.amazon.com/products/compute/

Amazon EC2 — Elastic Compute Cloud Deep Dive

Amazon EC2 (Elastic Compute Cloud) is the foundational virtual-machine service among AWS compute services. Every CLF-C02 exam attempt will include multiple Amazon EC2 scenario questions, so knowing Amazon EC2 cold is non-negotiable.

What Is Amazon EC2?

Amazon EC2 provides resizable virtual servers — called instances — running in the AWS cloud. You choose the operating system (Amazon Linux, Ubuntu, Windows, Red Hat, SUSE, macOS), the instance type (CPU + memory + storage + network shape), and the region/AZ. Amazon EC2 is billed per second after the first minute, and Amazon EC2 integrates with Amazon EBS for block storage, Amazon VPC for networking, and IAM roles for secure credentials.

Amazon EC2 Instance Families

Amazon EC2 groups instance types into families optimized for different workloads. CLF-C02 expects you to recognize family letters and their workload category.

  • General purpose (M, T, Mac) — balanced CPU / memory for web servers, small databases, development boxes. T-series is burstable, ideal for low-baseline traffic.
  • Compute optimized (C) — highest CPU-per-dollar for batch processing, gaming servers, scientific modeling, high-performance web servers.
  • Memory optimized (R, X, z) — large RAM for in-memory caches, SAP HANA, real-time big-data analytics.
  • Storage optimized (I, D, H) — high local IOPS or high sequential disk throughput for NoSQL databases, data warehouses, Hadoop clusters.
  • Accelerated computing (P, G, Inf, Trn, F) — GPUs, AWS Inferentia, AWS Trainium, or FPGAs for ML training, inference, and graphics rendering.

M = balanced (Main), C = Compute, R = RAM, I = IOPS storage, D = Dense HDD, P/G = GPU. Memorizing the first letter unlocks most Amazon EC2 scenario questions on CLF-C02. Reference: https://aws.amazon.com/ec2/instance-types/
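The family-letter mnemonic can be drilled with a tiny lookup helper. The table below maps the letters listed above to their workload category; the function name and structure are illustrative, not any official AWS API:

```python
# Map Amazon EC2 instance family letters to their workload category,
# following the family list above. Illustrative study helper only.
EC2_FAMILIES = {
    "M": "general purpose", "T": "general purpose (burstable)",
    "C": "compute optimized",
    "R": "memory optimized", "X": "memory optimized", "z": "memory optimized",
    "I": "storage optimized", "D": "storage optimized", "H": "storage optimized",
    "P": "accelerated computing", "G": "accelerated computing",
    "Inf": "accelerated computing", "Trn": "accelerated computing",
    "F": "accelerated computing",
}

def family_category(instance_type: str) -> str:
    """Return the workload category for an instance type like 'r6g.large'."""
    prefix = instance_type.split(".")[0]
    # Longest prefix first so 'Inf' / 'Trn' win over single letters.
    for letters in sorted(EC2_FAMILIES, key=len, reverse=True):
        if prefix.lower().startswith(letters.lower()):
            return EC2_FAMILIES[letters]
    return "unknown"

print(family_category("r6g.large"))   # memory optimized
print(family_category("c5.xlarge"))   # compute optimized
```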

Amazon EC2 Purchasing Options (Intro)

Amazon EC2 supports five main pricing models. Full details live in the pricing-models topic; at CLF-C02 level you only need to recognize each by name:

  1. On-Demand — pay as you go, no commitment.
  2. Reserved Instances (RIs) — 1- or 3-year commitment for up to 72% discount.
  3. Savings Plans — commit to $ / hour of compute spend for flexibility across Amazon EC2, AWS Fargate, and AWS Lambda.
  4. Spot Instances — up to 90% discount on spare capacity, AWS can reclaim with 2-minute notice.
  5. Dedicated Hosts / Dedicated Instances — physical server isolation for compliance or BYOL (bring-your-own-license) scenarios.

CLF-C02 splits Amazon EC2 knowledge across this topic (compute-services, task 3.3) and pricing-models (task 4.1). Always identify AWS compute services first, then reach for pricing decisions. Mixing them is a common exam trap. Reference: https://aws.amazon.com/ec2/pricing/

Amazon EC2 Storage Options

Amazon EC2 instances can attach different storage types:

  • Instance Store — physically attached to host, ephemeral, lost when the instance stops.
  • Amazon EBS (Elastic Block Store) — durable block volume, persists beyond instance lifecycle, snapshot-able.
  • Amazon EFS (Elastic File System) — shared NFS file system mountable across multiple Amazon EC2 instances.

Storage selection is its own CLF-C02 topic, but you must recognize that Amazon EC2 storage choice affects durability, IOPS, and cost.

Amazon EC2 Auto Scaling and Elastic Load Balancing

No CLF-C02 discussion of AWS compute services is complete without Amazon EC2 Auto Scaling and Elastic Load Balancing. Together they deliver the "elasticity" property of AWS compute services.

Amazon EC2 Auto Scaling Groups

An Auto Scaling Group (ASG) automatically adjusts the number of Amazon EC2 instances between a minimum and maximum based on scaling policies (CPU utilization, request count, scheduled events). ASGs also replace unhealthy Amazon EC2 instances automatically, providing self-healing behavior.

Typical CLF-C02 Auto Scaling signals:

  • "Automatically add or remove Amazon EC2 capacity based on demand" → Auto Scaling Group.
  • "Maintain a fleet size of N healthy Amazon EC2 instances" → ASG health checks + replacement.
  • "Scale Amazon EC2 out during business hours, scale in at night" → Scheduled scaling policy.
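The core of a target-tracking scaling policy can be sketched as "scale capacity proportionally to the metric, clamped to the group's min/max." This is a deliberate simplification of the real AWS algorithm (no cooldowns, no instance warm-up), shown only to make the mechanism concrete:

```python
import math

def target_tracking_desired(current_capacity: int, current_cpu: float,
                            target_cpu: float, min_size: int, max_size: int) -> int:
    """Approximate target-tracking sizing for an Auto Scaling Group:
    grow/shrink proportionally to metric / target, clamped to [min, max].
    Simplified sketch — ignores cooldowns and warm-up."""
    desired = math.ceil(current_capacity * current_cpu / target_cpu)
    return max(min_size, min(max_size, desired))

# A fleet of 4 at 90% CPU against a 50% target grows to 8 instances.
print(target_tracking_desired(4, 90.0, 50.0, min_size=2, max_size=10))  # 8
```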

Elastic Load Balancing (ELB) Flavors

Elastic Load Balancing distributes incoming traffic across multiple Amazon EC2 instances, AWS Fargate tasks, AWS Lambda functions, or on-premises targets. CLF-C02 expects recognition of three current flavors and the deprecated one.

  • Application Load Balancer (ALB) — layer 7 (HTTP/HTTPS). Path-based routing, host-based routing, WebSocket support. Best for microservices and containerized web apps.
  • Network Load Balancer (NLB) — layer 4 (TCP/UDP/TLS). Ultra-low latency, millions of requests per second, static IP per AZ. Best for high-performance or non-HTTP workloads.
  • Gateway Load Balancer (GWLB) — deploys and scales third-party virtual appliances (firewalls, IDS/IPS).
  • Classic Load Balancer (CLB) — legacy, deprecated for new workloads. Mentioned occasionally in older study materials; on CLF-C02, pick ALB or NLB unless the question explicitly says "Classic."

If the scenario says HTTP / HTTPS / web app / path-based → ALB. If it says TCP / UDP / millions of requests / static IP → NLB. If it says firewall appliance → GWLB. If it says Classic → red flag for outdated study material. Reference: https://aws.amazon.com/elasticloadbalancing/features/
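The keyword rules in that callout can be drilled as a toy chooser. This is purely a study heuristic for exam scenarios, not real architecture advice; the function name is invented:

```python
def pick_load_balancer(scenario: str) -> str:
    """Keyword heuristic mirroring the exam decision rules above."""
    s = scenario.lower()
    if any(k in s for k in ("firewall", "appliance", "ids", "ips")):
        return "Gateway Load Balancer (GWLB)"
    if any(k in s for k in ("http", "https", "path-based", "host-based", "web app")):
        return "Application Load Balancer (ALB)"
    if any(k in s for k in ("tcp", "udp", "static ip", "millions of requests")):
        return "Network Load Balancer (NLB)"
    return "Classic Load Balancer — only if the question literally says 'Classic'"

print(pick_load_balancer("Path-based routing of HTTPS traffic"))
# → Application Load Balancer (ALB)
```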

AWS Lambda — Serverless Compute

AWS Lambda is the headline serverless member of AWS compute services. The CLF-C02 exam loves AWS Lambda scenario questions because AWS Lambda redefines the server responsibility model.

What Is AWS Lambda?

AWS Lambda runs your code in response to events without provisioning any servers. You upload a function (Python, Node.js, Java, Go, .NET, Ruby, or custom runtime via container image), configure a trigger (Amazon S3 object upload, Amazon API Gateway request, Amazon DynamoDB stream, Amazon EventBridge event, and many more), and AWS Lambda handles everything else — scaling, patching, high availability across AZs.

AWS Lambda billing is pure pay-per-use:

  • Compute: GB-seconds of memory × duration.
  • Invocations: flat cost per million requests.
  • Free tier: 1M requests and 400,000 GB-seconds per month, forever.
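The GB-second math above is worth working through once. The sketch below applies the free tier and then prices the remainder; the rates are illustrative sample values (prices vary by region — always check current AWS Lambda pricing):

```python
# Worked example of AWS Lambda's billing units (GB-seconds + requests).
# Rates below are illustrative samples, not guaranteed current prices.
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_MILLION_REQUESTS = 0.20

def lambda_monthly_cost(invocations: int, avg_ms: float, memory_mb: int,
                        free_gb_seconds: int = 400_000,
                        free_requests: int = 1_000_000) -> float:
    """Compute + request cost after subtracting the always-free tier."""
    gb_seconds = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    billable_gb_s = max(0.0, gb_seconds - free_gb_seconds)
    billable_reqs = max(0, invocations - free_requests)
    return (billable_gb_s * PRICE_PER_GB_SECOND
            + billable_reqs / 1_000_000 * PRICE_PER_MILLION_REQUESTS)

# 2M invocations x 120 ms x 512 MB = 120,000 GB-s — inside the free tier,
# so only the 1M requests beyond the free tier are billed.
print(round(lambda_monthly_cost(2_000_000, 120, 512), 2))  # 0.2
```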

AWS Lambda Hard Limits You Must Memorize

  • Maximum execution time: 15 minutes per invocation (900 seconds).
  • Memory: 128 MB – 10,240 MB (10 GB) allocatable.
  • Ephemeral storage /tmp: up to 10 GB.
  • Deployment package: 50 MB zipped (direct) or 250 MB unzipped, 10 GB via container image.
  • Payload: 6 MB synchronous, 256 KB asynchronous event.

AWS Lambda caps at 900 seconds (15 minutes) per invocation. If a workload needs to run longer than 15 minutes, AWS Lambda is wrong — pick Amazon EC2, AWS Fargate, or AWS Batch instead. Reference: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
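A quick sanity check encodes the two limits most often tested (timeout and memory, per the limits list above). The helper name is invented for study purposes:

```python
# Hard limits from the AWS Lambda quotas listed above.
LAMBDA_MAX_SECONDS = 900        # 15 minutes per invocation
LAMBDA_MAX_MEMORY_MB = 10_240   # 10 GB allocatable, 128 MB minimum

def lambda_fits(duration_s: float, memory_mb: int) -> bool:
    """True if a workload fits inside Lambda's timeout and memory limits."""
    return (duration_s <= LAMBDA_MAX_SECONDS
            and 128 <= memory_mb <= LAMBDA_MAX_MEMORY_MB)

print(lambda_fits(600, 1024))        # True  — a 10-minute job fits
print(lambda_fits(3 * 3600, 1024))   # False — the classic 3-hour batch trap
```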

AWS Lambda Cold Starts

A "cold start" occurs when AWS Lambda must initialize a new execution environment to handle an invocation — typically adding 100 ms to several seconds of latency depending on runtime and package size. Cold starts can be mitigated with Provisioned Concurrency (pre-warmed containers), but CLF-C02 just wants you to recognize the term. Expect AWS Lambda cold start trivia as a distractor, not a main answer.

When to Use AWS Lambda

  • Event-driven processing (Amazon S3 upload → thumbnail generator).
  • Light web backends behind Amazon API Gateway.
  • Scheduled cron-style tasks (Amazon EventBridge Scheduler).
  • Glue code between AWS services.
  • Workloads that run less than 15 minutes, are bursty, and benefit from zero-idle-cost billing.

When NOT to Use AWS Lambda

  • Long-running processes (> 15 minutes): use Amazon EC2, AWS Fargate, or AWS Batch.
  • Persistent connections (game servers, WebSocket at massive scale with stateful sessions): prefer Amazon EC2 or containers.
  • Very high steady throughput where AWS Lambda pricing exceeds Amazon EC2: run the math in Pricing Calculator.

A classic CLF-C02 trap is offering AWS Lambda for a "batch job that runs for 3 hours." AWS Lambda has a hard 15-minute ceiling. Pick AWS Batch (queued long-running jobs) or Amazon EC2 / AWS Fargate instead. Community data shows this is one of the top-5 AWS compute services traps. Reference: https://aws.amazon.com/lambda/faqs/

Containers on AWS — Amazon ECS vs Amazon EKS vs AWS Fargate

Container-based AWS compute services form a three-way matrix that CLF-C02 loves to test. Understanding Amazon ECS vs Amazon EKS vs AWS Fargate clearly is worth several exam points.

Container Concepts Primer

Before the service comparisons, memorize these six foundational container terms:

  • Image — a read-only template with your application code, runtime, libraries, and dependencies. Built from a Dockerfile.
  • Registry — where images are stored. On AWS, the native choice is Amazon ECR (Elastic Container Registry).
  • Container — a running instance of an image.
  • Task / Pod — a grouping of one or more containers deployed together (a Task in Amazon ECS, a Pod in Amazon EKS).
  • Cluster — the logical group of compute capacity running your tasks or pods.
  • Service — a long-running, scalable controller that keeps N copies of a task running and integrates with a load balancer.
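To make the Image / Task / Cluster vocabulary concrete, here is a minimal task definition sketch. Field names follow the Amazon ECS task-definition schema, but every value (family name, account ID, image tag) is invented for illustration:

```python
import json

# Minimal Amazon ECS task definition: one Task grouping one container,
# built from an image stored in a registry (Amazon ECR). Values are invented.
task_definition = {
    "family": "demo-web",                     # hypothetical task family name
    "requiresCompatibilities": ["FARGATE"],   # run via the Fargate launch type
    "cpu": "256", "memory": "512",            # task-level size (Fargate units)
    "containerDefinitions": [{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-web:latest",
        "portMappings": [{"containerPort": 80}],
        "essential": True,
    }],
}
print(json.dumps(task_definition, indent=2))
```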

Amazon ECS — AWS-Native Container Orchestration

Amazon ECS (Elastic Container Service) is AWS's proprietary, deeply integrated container orchestration platform. Amazon ECS uses AWS-native primitives (Task Definitions, Services, Clusters) and plugs into every AWS service (IAM, Amazon VPC, Elastic Load Balancing, Amazon CloudWatch, AWS Fargate, Amazon EC2) with minimal glue code. The Amazon ECS control plane is free; you pay only for the underlying compute.

Amazon EKS — Managed Kubernetes

Amazon EKS (Elastic Kubernetes Service) is AWS's managed Kubernetes offering. Amazon EKS runs upstream, CNCF-conformant Kubernetes — the same Kubernetes you would find on Google GKE, Azure AKS, or a self-managed bare-metal cluster. Amazon EKS shines when you want portability across clouds, a rich ecosystem of Kubernetes tooling, or team familiarity with Kubernetes primitives (Deployments, Services, Ingresses, Helm charts). Amazon EKS control plane costs a per-cluster hourly fee.

AWS Fargate — Serverless Container Runtime

AWS Fargate is not a separate compute service — it is a launch type. AWS Fargate lets both Amazon ECS and Amazon EKS run tasks / pods without you managing any Amazon EC2 host machines. You define CPU and memory per task, and AWS Fargate provisions, patches, and scales the underlying capacity invisibly. Billing is per-second of vCPU and memory reserved.

This "launch type" distinction is heavily tested. You can run Amazon ECS on Amazon EC2 launch type, or Amazon ECS on AWS Fargate launch type. The same choice exists for Amazon EKS.

A very common CLF-C02 trap treats AWS Fargate as a peer of Amazon ECS. It is not. AWS Fargate is a serverless execution mode for Amazon ECS and Amazon EKS. The exam will reward you for saying "Amazon ECS on AWS Fargate" or "Amazon EKS on AWS Fargate" — not "AWS Fargate instead of Amazon ECS." Reference: https://aws.amazon.com/fargate/

Amazon ECS vs Amazon EKS Decision Rules

  • Pick Amazon ECS when: the team is AWS-centric, you want the fastest path from Dockerfile to production, you prefer AWS-native APIs, you want the lowest control-plane cost.
  • Pick Amazon EKS when: the team already knows Kubernetes, you need multi-cloud or hybrid portability, you rely on open-source Kubernetes tooling (Helm, Istio, Argo CD), you run vendor software that ships as Kubernetes manifests.

AWS Fargate vs Amazon EC2 Launch Type Decision Rules

  • Pick AWS Fargate when: you want zero host management, variable workloads, per-task billing, quick starts, low operational overhead.
  • Pick Amazon EC2 launch type when: you need specific instance types (GPU, bare-metal), you want maximum control, you already have Reserved Instances or Savings Plans to apply, you need daemon-style workloads per host.

Amazon ECS = AWS-native (starts with E for "Elastic" + C for "Container"). Amazon EKS = Kubernetes (the K is for Kubernetes). Candidates often mix these up under exam pressure. Memorize the K → Kubernetes mnemonic and you will never miss this distinction on CLF-C02. Reference: https://aws.amazon.com/containers/services/

AWS Batch — Managed Batch Compute

AWS Batch is the specialty member of AWS compute services for batch workloads. AWS Batch provisions Amazon EC2 or AWS Fargate capacity on demand to run queued jobs (scientific simulations, financial risk calculations, genomics pipelines, media transcoding). You submit a job with CPU and memory requirements, AWS Batch picks the right instance type, runs it, and shuts it down when the queue drains. AWS Batch integrates with Amazon EC2 Spot Instances for maximum cost efficiency.

Typical CLF-C02 AWS Batch signals:

  • "Run thousands of long-running compute jobs with automatic scheduling" → AWS Batch.
  • "Cost-optimized batch compute using Spot Instances" → AWS Batch (with Spot).
  • "Workload longer than 15 minutes, not real-time" → AWS Batch or Amazon EC2, never AWS Lambda.

Amazon Lightsail — Simplified VPS

Amazon Lightsail is the entry-level member of AWS compute services: a simplified virtual private server (VPS) with predictable monthly pricing (from around $3.50 / month), pre-configured bundles (LAMP stack, Node.js, WordPress, Joomla, Magento, static sites), and a streamlined console.

Amazon Lightsail use cases on CLF-C02:

  • Small websites, blogs, or test environments.
  • Small business applications with predictable, low traffic.
  • Learning / prototyping where Amazon EC2 feels overwhelming.

Amazon Lightsail is not for production-scale enterprise workloads or advanced networking — for those, graduate to Amazon EC2, Amazon ECS, or AWS Fargate. Lightsail instances can also be "upgraded" to Amazon EC2 if needs grow.

On CLF-C02, if a question mentions "small business," "simple," "predictable monthly price," or "beginner," Amazon Lightsail is very likely the right answer. Amazon Lightsail targets users who would otherwise choose GoDaddy, DigitalOcean, or Bluehost. Reference: https://aws.amazon.com/lightsail/features/

AWS Outposts — Compute On-Premises

AWS Outposts extends AWS compute services into your own data center. AWS ships physical racks (42U Outposts rack) or 1U / 2U Outposts servers to your premises, and those racks run a subset of AWS services (Amazon EC2, Amazon EBS, Amazon ECS, Amazon EKS, Amazon RDS, Amazon S3 on Outposts). AWS Outposts is managed by AWS remotely.

Use cases:

  • Low-latency requirements (manufacturing floor, real-time trading).
  • Data residency requirements that prevent data from leaving a specific building or country.
  • Migration bridging where some systems must stay on-premises temporarily.

On CLF-C02 you only need the one-sentence identification — "AWS Outposts runs AWS compute services in your own data center." The deep infrastructure story lives in the global-infrastructure topic.

AWS Compute Services Comparison Matrix

A side-by-side cheat sheet is the best way to lock in AWS compute services differentiation for CLF-C02.

| AWS Compute Service | Abstraction | Unit of Work | Max Duration | Management Overhead | Typical Use Case |
|---|---|---|---|---|---|
| Amazon EC2 | VM | Instance | No limit | High (OS + runtime) | Any long-running workload, custom stacks |
| Amazon EC2 on Auto Scaling Group | VM fleet | Instance fleet | No limit | High | Elastic web tiers, enterprise apps |
| Amazon ECS on Amazon EC2 | Container | Task | No limit | Medium (you run hosts) | AWS-native containers with cost control |
| Amazon ECS on AWS Fargate | Container (serverless) | Task | No limit | Low | AWS-native containers, zero hosts |
| Amazon EKS on Amazon EC2 | Kubernetes | Pod | No limit | Medium-High | K8s portability + cost control |
| Amazon EKS on AWS Fargate | Kubernetes (serverless) | Pod | No limit | Low | K8s portability, zero hosts |
| AWS Lambda | Function | Invocation | 15 minutes | Minimal | Event-driven, bursty, short workloads |
| AWS Batch | Batch jobs | Job | No limit | Low | Scientific / scheduled batch |
| Amazon Lightsail | Simple VM | Bundle | No limit | Very Low | Small websites, beginners |
| AWS Outposts | On-prem VM / container | Instance / Task | No limit | Medium | On-premises AWS extension |

Common Exam Traps Across AWS Compute Services

CLF-C02 repeats the same AWS compute services traps cycle after cycle. Knowing them cold is like getting free points.

Trap 1 — Amazon EC2 vs AWS Lambda Boundary

Questions will describe a workload and ask "which service?" Watch for:

  • Long-running (> 15 min) → not AWS Lambda.
  • Event-driven, milliseconds to minutes → AWS Lambda.
  • Steady, predictable 24/7 → Amazon EC2 with Reserved Instances or Savings Plans.
  • Spiky, bursty, short → AWS Lambda.

Trap 2 — Amazon ECS vs Amazon EKS

  • "AWS-native orchestration" → Amazon ECS.
  • "Managed Kubernetes" or "upstream Kubernetes" → Amazon EKS.
  • "Multi-cloud portability" or "Helm charts" → Amazon EKS.
  • "Lowest control-plane cost" → Amazon ECS (the control plane is free).

Trap 3 — AWS Fargate as a Separate Service

AWS Fargate is never the answer to "which orchestrator?" — the orchestrator is Amazon ECS or Amazon EKS. AWS Fargate is the answer to "which launch type removes host management?"

Trap 4 — AWS Lambda Time Limit

Any scenario mentioning "hours," "overnight," or "heavy long-running" rules AWS Lambda out.

Trap 5 — Elastic Beanstalk vs Amazon EC2

AWS Elastic Beanstalk is a deployment service (topic 3.1), not a separate compute engine. It provisions Amazon EC2 behind the scenes. On compute-services questions, Amazon EC2 is the correct underlying answer; on deployment-methods questions, AWS Elastic Beanstalk is.

The single most common CLF-C02 AWS compute services trap frames a workload so that two or three of Amazon EC2, AWS Lambda, and AWS Fargate sound plausible. Use this decision tree: does it run longer than 15 minutes? Remove AWS Lambda. Do I want to manage hosts? If yes, Amazon EC2; if no, AWS Fargate. Is it truly serverless / event-driven / short? AWS Lambda. Reference: https://docs.aws.amazon.com/whitepapers/latest/aws-overview/compute-services.html
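The decision tree in that callout can be written directly as code — a teaching sketch of the exam heuristic, not real sizing advice:

```python
def pick_compute(runs_over_15_min: bool, wants_host_management: bool,
                 event_driven_short: bool) -> str:
    """Exam decision tree: timeout first, then host-management preference."""
    if runs_over_15_min:
        # Lambda's 15-minute ceiling rules it out immediately.
        return "Amazon EC2" if wants_host_management else "AWS Fargate"
    if event_driven_short:
        return "AWS Lambda"
    return "Amazon EC2" if wants_host_management else "AWS Fargate"

print(pick_compute(True, False, False))   # AWS Fargate — long job, no hosts
print(pick_compute(False, False, True))   # AWS Lambda
```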

Key Numbers and Must-Memorize Facts

  • AWS Lambda maximum timeout: 15 minutes (900 seconds).
  • AWS Lambda maximum memory: 10,240 MB (10 GB).
  • Amazon EC2 instance family letters: M, T (general), C (compute), R, X (memory), I, D, H (storage), P, G, Inf, Trn, F (accelerated).
  • Amazon ECS control plane cost: free (pay for compute only).
  • Amazon EKS control plane cost: per-cluster hourly fee (non-zero).
  • AWS Fargate billing unit: per-second vCPU + memory reserved.
  • Amazon Lightsail price floor: around $3.50 / month for the cheapest bundle.
  • Amazon EC2 Auto Scaling: adjusts between min, desired, max capacity.
  • Elastic Load Balancer flavors: ALB (L7), NLB (L4), GWLB (appliance), CLB (deprecated).
  • AWS Batch underlying compute: Amazon EC2 or AWS Fargate.
  • AWS Outposts form factors: rack (42U) or server (1U / 2U).

Topic Boundaries — Compute vs Neighboring CLF-C02 Topics

CLF-C02 carefully separates AWS compute services (task 3.3) from neighboring topics. Know the boundary.

  • compute-services vs deployment-operation-methods (3.3 vs 3.1) — Amazon EC2 is compute; AWS Elastic Beanstalk / AWS CloudFormation / AWS CDK are deployment. The compute question asks "what runs the code?"; the deployment question asks "how did the code get there?"
  • compute-services vs pricing-models (3.3 vs 4.1) — "which service?" is compute; "how do I pay?" is pricing. The two interact (Amazon EC2 has On-Demand, RI, Spot) but the exam distinguishes clearly.
  • compute-services vs global-infrastructure (3.3 vs 3.2) — Amazon EC2 runs in an AZ within a Region; AWS Outposts runs on-premises. "Where does compute run?" is task 3.2; "what runs the workload?" is task 3.3.
  • compute-services vs database-services (3.3 vs 3.4) — You can run a database on Amazon EC2 (self-managed compute), or use a managed database (Amazon RDS, Amazon Aurora, Amazon DynamoDB). CLF-C02 prefers managed database answers unless the question explicitly says "self-managed."
  • compute-services vs network-services (3.3 vs 3.5) — Amazon EC2 lives in Amazon VPC subnets; Elastic Load Balancing is compute-adjacent but formally a networking service on the exam guide.

Practice Questions

Use the question packs in /learn/aws/clf-c02/practice?task=3.3 to drill these AWS compute services patterns:

  1. "Which AWS service runs code in response to events without provisioning servers?" → AWS Lambda.
  2. "Which service is best for a batch job that runs 3 hours?" → AWS Batch or Amazon EC2, not AWS Lambda.
  3. "A company wants managed Kubernetes with upstream API compatibility." → Amazon EKS.
  4. "A company wants the simplest managed container orchestration on AWS." → Amazon ECS.
  5. "A company wants to run containers without managing any Amazon EC2 instances." → Amazon ECS on AWS Fargate or Amazon EKS on AWS Fargate.
  6. "A small business wants a pre-configured WordPress stack at a fixed monthly price." → Amazon Lightsail.
  7. "A company wants to run AWS services on-premises for low latency." → AWS Outposts.
  8. "Which service automatically adjusts Amazon EC2 capacity based on CPU utilization?" → Amazon EC2 Auto Scaling.
  9. "Which load balancer is best for path-based routing of HTTPS traffic?" → Application Load Balancer (ALB).
  10. "Which Amazon EC2 instance family is best for an in-memory SAP HANA workload?" → Memory optimized (R family).

FAQ — AWS Compute Services Top Questions

Q1. What are the main AWS compute services on CLF-C02?

The core AWS compute services on CLF-C02 are Amazon EC2 (virtual machines), AWS Lambda (serverless functions), Amazon ECS (AWS-native containers), Amazon EKS (managed Kubernetes), AWS Fargate (serverless container launch type for Amazon ECS and Amazon EKS), AWS Batch (managed batch jobs), Amazon Lightsail (simple VPS), and AWS Outposts (AWS compute in your own data center). Memorize these eight names plus Elastic Load Balancing and Amazon EC2 Auto Scaling as compute-adjacent services.

Q2. When should I pick AWS Lambda over Amazon EC2?

Pick AWS Lambda when the workload is event-driven, bursty, short (< 15 minutes), and benefits from zero-idle-cost billing. Pick Amazon EC2 when the workload is long-running, steady, requires specific OS configurations, or depends on persistent connections. The 15-minute AWS Lambda timeout is the hard rule — if a workload needs longer, AWS Lambda is automatically wrong on CLF-C02.

Q3. What is the difference between Amazon ECS and Amazon EKS?

Both Amazon ECS and Amazon EKS orchestrate containers on AWS compute services. Amazon ECS is AWS-native with proprietary APIs and a free control plane — fastest path on AWS. Amazon EKS runs upstream Kubernetes with full CNCF compatibility, a per-cluster hourly control-plane fee, and the benefit of multi-cloud portability. Choose Amazon ECS for pure AWS simplicity; choose Amazon EKS when the team or ecosystem already speaks Kubernetes.

Q4. Is AWS Fargate a replacement for Amazon ECS or Amazon EKS?

No. AWS Fargate is a launch type used by both Amazon ECS and Amazon EKS to run containers without managing Amazon EC2 hosts. You cannot use AWS Fargate by itself — it always works through an orchestrator. On CLF-C02, saying "AWS Fargate instead of Amazon ECS" is a trap answer.

Q5. Which AWS compute services are serverless?

Two of the AWS compute services are strictly serverless: AWS Lambda (functions) and AWS Fargate (containers, used through Amazon ECS or Amazon EKS). "Serverless" on AWS means no Amazon EC2 host management, auto-scaling to zero (or near-zero for AWS Fargate), and pay-per-use billing.

Q6. What compute services cover long-running workloads that exceed 15 minutes?

For workloads longer than 15 minutes, rule out AWS Lambda and choose from: Amazon EC2, Amazon ECS (on Amazon EC2 or AWS Fargate), Amazon EKS (on Amazon EC2 or AWS Fargate), AWS Batch, or Amazon Lightsail. On CLF-C02, the most common correct answers for long jobs are Amazon EC2 (custom stacks), AWS Batch (queued compute jobs), or AWS Fargate (containers).

Q7. How do Amazon EC2 Auto Scaling and Elastic Load Balancing relate to AWS compute services?

Amazon EC2 Auto Scaling and Elastic Load Balancing are the glue that makes Amazon EC2 elastic and highly available. Auto Scaling adjusts the number of Amazon EC2 instances; Elastic Load Balancing distributes traffic across healthy instances (or AWS Fargate tasks, or AWS Lambda). Together they convert raw Amazon EC2 into an elastic, self-healing compute fleet.

Q8. Does CLF-C02 ask about AWS Elastic Beanstalk in the compute topic?

AWS Elastic Beanstalk is technically a deployment service (task 3.1), not a compute service. It provisions Amazon EC2 on your behalf. On compute-services questions, Amazon EC2 is the underlying compute; on deployment-methods questions, AWS Elastic Beanstalk is the managed platform. Keep the two topics separate and you will nail the distinction.

Further Reading — AWS Overview Whitepaper and Documentation

For deeper understanding of AWS compute services beyond CLF-C02 scope:

  • AWS Overview Whitepaper — Compute Services section.
  • AWS Well-Architected Framework — Performance Efficiency pillar, Compute section.
  • Amazon EC2 User Guide — instance type selection.
  • AWS Lambda Developer Guide — event sources and deployment packages.
  • Amazon ECS Developer Guide and Amazon EKS User Guide.
  • AWS Fargate FAQs for launch-type selection guidance.
  • AWS Compute Blog for announcements and new AWS compute services.

These resources go beyond the CLF-C02 depth but help reinforce the mental model of AWS compute services abstraction and pay-per-use billing.

Summary — AWS Compute Services at a Glance

  • AWS compute services span virtual machines (Amazon EC2, Amazon Lightsail), containers (Amazon ECS, Amazon EKS, with AWS Fargate as a launch type), serverless (AWS Lambda), specialty batch (AWS Batch), and on-premises extension (AWS Outposts).
  • The management spectrum runs from Amazon EC2 (most control) to AWS Lambda (least overhead).
  • Elasticity for Amazon EC2 comes from Amazon EC2 Auto Scaling plus Elastic Load Balancing.
  • AWS Fargate is always a launch type — never a standalone compute service.
  • AWS Lambda has a hard 15-minute ceiling; long workloads must use Amazon EC2, AWS Fargate, AWS Batch, or Amazon EKS / Amazon ECS on Amazon EC2.
  • Amazon Lightsail is the beginner-friendly entry point; AWS Outposts is the on-premises AWS extension.
  • Know the boundary between compute-services (3.3), deployment-operation-methods (3.1), pricing-models (4.1), and global-infrastructure (3.2).

Master this chapter on AWS compute services and you will handle the 5–8 Domain 3 compute questions on CLF-C02 with confidence — and the same mental model carries directly into SAA-C03 and DVA-C02 if you continue the AWS certification path.
