
AWS Global Infrastructure (Regions, AZs, Edge)

4,120 words · ≈ 21 min read

AWS Global Infrastructure is the worldwide network of physical data centers, network links, and edge points of presence that AWS operates so customers can run workloads close to their users, survive localized failures, and meet data residency rules. For the CLF-C02 exam, you must know the hierarchy — Regions, Availability Zones, and Edge Locations — plus the specialty tiers (Local Zones, Wavelength Zones, Outposts) and how each layer of AWS Global Infrastructure maps to real business requirements such as latency, compliance, resilience, and cost.

This page is part of Domain 3 (Cloud Technology and Services), Task Statement 3.2, and focuses on the structural layout of AWS Global Infrastructure. The deep-dive networking behaviour of CloudFront, Route 53, and Global Accelerator belongs to the network-services topic; deployment tooling that places workloads across the infrastructure (CloudFormation, Elastic Beanstalk) belongs to deployment-operation-methods.

What is AWS Global Infrastructure?

AWS Global Infrastructure is Amazon Web Services' globally distributed physical footprint: more than 30 Regions, more than 100 Availability Zones, and hundreds of Edge Locations and Regional Edge Caches spread across every inhabited continent. Each layer of AWS Global Infrastructure solves a different problem. Regions solve fault containment and data residency. Availability Zones solve in-Region high availability. Edge Locations solve last-mile latency through CloudFront, Route 53, and Global Accelerator. Local Zones and Wavelength Zones extend AWS Global Infrastructure into metro areas and 5G networks. AWS Outposts brings AWS Global Infrastructure inside your own data center.

The CLF-C02 exam tests AWS Global Infrastructure heavily because almost every other AWS service inherits properties from the infrastructure layer beneath it. If you understand AWS Global Infrastructure, you can reason about resilience, latency, and compliance for any service — EC2, S3, RDS, Lambda, CloudFront — without memorising each service individually.

Why AWS Global Infrastructure Matters for CLF-C02

AWS Global Infrastructure shows up in exam questions in three main shapes:

  1. Definition questions — "What is the difference between a Region and an Availability Zone?"
  2. Scenario questions — "A company must store customer data inside the EU and needs low-latency access from mobile devices. Which combination of AWS Global Infrastructure components should they choose?"
  3. Trap questions — mixing Local Zones with Wavelength Zones, or CloudFront Edge Locations with Regional Edge Caches.

Hierarchy at a Glance

At the top sits the AWS Global Infrastructure overall. Inside it are Regions. Inside each Region are Availability Zones. Around those Regions, extending out toward users, are Edge Locations and Regional Edge Caches. Beside those core layers are the specialty extensions — Local Zones, Wavelength Zones, and AWS Outposts — each pushing AWS Global Infrastructure closer to a specific type of workload.

  • Region — a geographically isolated cluster of AWS data centers (comprising multiple Availability Zones) with its own billing scope and service catalog.
  • Availability Zone (AZ) — one or more discrete data centers inside a Region with redundant power, networking, and cooling, separated by meaningful distance but linked via low-latency fibre.
  • Edge Location — a global Point of Presence (PoP) used by CloudFront, Route 53, and Global Accelerator to terminate user connections close to the viewer.
  • Regional Edge Cache — a mid-tier CloudFront cache sitting between Edge Locations and the origin.
  • Local Zone — an extension of AWS Global Infrastructure placed in a metro area for single-digit-millisecond latency.
  • Wavelength Zone — AWS Global Infrastructure embedded inside a telco's 5G network.
  • AWS Outposts — a rack of AWS hardware installed on customer premises.
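The containment relationships in this glossary can be sketched as a small data model. This is illustrative Python, not an AWS API — the names and AZ counts are example values only:

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the containment hierarchy:
# Region -> Availability Zones -> data centers.

@dataclass
class AvailabilityZone:
    zone_id: str           # e.g. "us-east-1a"
    data_centers: int = 1  # an AZ is one or MORE data centers, never "a building"

@dataclass
class Region:
    code: str                                   # e.g. "us-east-1"
    azs: list[AvailabilityZone] = field(default_factory=list)

    def is_valid(self) -> bool:
        # The CLF-C02 rule of thumb: every Region has at least three AZs.
        return len(self.azs) >= 3

virginia = Region("us-east-1",
                  [AvailabilityZone(f"us-east-1{s}") for s in "abcdef"])
print(virginia.is_valid())  # True — six AZs
```

The direction of containment matters for the exam: the Region object holds the AZs, never the reverse.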


Core Operating Principles — Regions, AZs, and Edge Locations Hierarchy

AWS Global Infrastructure follows three hard architectural principles that repeatedly surface on the CLF-C02 exam.

Principle 1 — Regions Are Isolated By Design

Each AWS Region inside AWS Global Infrastructure is an independent failure domain. AWS deliberately does not replicate data across Regions automatically. If you store objects in S3 in us-east-1, those objects stay in us-east-1 unless you explicitly configure cross-Region replication. This isolation is what lets AWS Global Infrastructure comply with data residency regulations like GDPR, HIPAA, and China's PIPL.

Principle 2 — Availability Zones Are Physically Separate But Logically Close

Inside a Region, Availability Zones are separated by meaningful distance (typically tens of kilometres) to avoid correlated failures from floods, fires, earthquakes, or power events. At the same time, the AZs are connected by dedicated low-latency, high-bandwidth AWS-owned fibre, so synchronous replication (for example Multi-AZ RDS or EFS) is feasible.
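A back-of-envelope latency calculation shows why this distance band works. The sketch assumes signals in fibre travel at roughly 200,000 km/s (about two-thirds the speed of light); the distances are illustrative:

```python
# Rough propagation-delay sanity check for Principle 2.
FIBRE_KM_PER_MS = 200.0  # ~200,000 km/s => 200 km per millisecond in fibre

def round_trip_ms(distance_km: float) -> float:
    # Round trip = there and back; ignores switching and protocol overhead.
    return 2 * distance_km / FIBRE_KM_PER_MS

print(round_trip_ms(50))    # 0.5 ms — tens of km: synchronous replication is feasible
print(round_trip_ms(8000))  # 80.0 ms — cross-ocean: synchronous replication is not
```

This is why Multi-AZ services can replicate synchronously inside a Region, while multi-Region replication (covered later) is almost always asynchronous.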

Principle 3 — Edge Pushes Content Toward the User

Edge Locations and Regional Edge Caches sit outside the Region boundary. They are used by three edge-aware services — Amazon CloudFront, Amazon Route 53, and AWS Global Accelerator. The exam treats these Edge Locations as part of AWS Global Infrastructure even though they do not host general compute workloads.

AWS Global Infrastructure = Regions → Availability Zones → Data Centers. Around it: Edge Locations and specialty zones (Local Zones, Wavelength Zones, AWS Outposts). Never say "a Region is an Availability Zone." Regions contain AZs, not the other way around.

AWS Regions — Geographic Isolation, Data Sovereignty, and Latency

An AWS Region is a physical location anywhere in the world where AWS clusters data centers. As of 2026, AWS Global Infrastructure exposes more than 30 Regions with more planned. Each Region has a code like us-east-1 (N. Virginia), ap-northeast-1 (Tokyo), or eu-west-2 (London).

Anatomy of a Region

Every Region in AWS Global Infrastructure contains:

  • At least three Availability Zones (new Regions typically launch with three; many mature Regions have six).
  • A regional service catalog — not every AWS service is available in every Region. Bedrock, for example, is only in a subset.
  • A billing boundary — data transfer within a Region is usually free or cheap, while data transfer between Regions is charged.
  • Compliance accreditations specific to that Region (for example GovCloud Regions meet U.S. government requirements).

Why Regions Exist in AWS Global Infrastructure

  1. Fault containment — a Region is the largest blast radius you can build against in AWS. If a Region goes dark (extremely rare), other Regions keep serving.
  2. Data residency / sovereignty — by placing resources in eu-central-1 you keep data within Germany and the EU legal framework.
  3. Latency — placing workloads near users cuts round-trip time.
  4. Cost — on-demand prices vary by Region; us-east-1 (N. Virginia) tends to be the cheapest.

Region Naming and Scope

A Region's full name looks like US East (N. Virginia) with the code us-east-1. A quirky exception: us-east-1 is home to the control planes of many global services, such as IAM, CloudFront, and Route 53. Many exam takers forget that IAM is a global service whose metadata lives in us-east-1.

  • AWS Global Infrastructure currently spans 30+ Regions worldwide.
  • Each Region has at least 3 Availability Zones — new Regions never launch with fewer.
  • AWS has publicly committed to launching additional Regions in Taiwan, Mexico, Chile, New Zealand, Saudi Arabia, and other countries during the CLF-C02 exam lifecycle.
  • Data transfer into a Region from the internet is free; out of a Region to the internet is metered.


How to Choose an AWS Region

Four criteria drive Region selection on the CLF-C02 exam:

  1. Compliance / data residency — GDPR may force EU Regions; HIPAA workloads need BAA-covered Regions.
  2. Latency to end users — pick the Region geographically closest to the majority of users.
  3. Service availability — verify the specific AWS service is offered in that Region.
  4. Cost — on-demand pricing varies; us-east-1 is typically cheapest for compute and storage.

Use "CLSC" — Compliance, Latency, Service availability, Cost. In scenario questions, read the question stem to find the dominant constraint; the correct Region choice usually maps to one of these four. Source: https://docs.aws.amazon.com/whitepapers/latest/aws-overview/global-infrastructure.html
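The CLSC ordering can be expressed as a filter pipeline. The sketch below is illustrative Python with made-up Region data, not a live AWS catalog — compliance filters first as a hard constraint, then service availability, with latency and cost as tiebreakers:

```python
# Illustrative Region metadata — jurisdictions, latencies, services, and
# relative costs are example values, not real AWS figures.
REGIONS = {
    "us-east-1":    {"jurisdiction": "US", "latency_ms": 90,
                     "services": {"ec2", "s3", "bedrock"}, "relative_cost": 1.00},
    "eu-central-1": {"jurisdiction": "EU", "latency_ms": 25,
                     "services": {"ec2", "s3", "bedrock"}, "relative_cost": 1.12},
    "eu-west-3":    {"jurisdiction": "EU", "latency_ms": 30,
                     "services": {"ec2", "s3"},            "relative_cost": 1.10},
}

def choose_region(required_jurisdiction: str, required_services: set) -> str:
    # 1. Compliance first — a hard filter, never a tiebreaker.
    candidates = {r: m for r, m in REGIONS.items()
                  if m["jurisdiction"] == required_jurisdiction}
    # 3. Keep only Regions that offer every required service.
    candidates = {r: m for r, m in candidates.items()
                  if required_services <= m["services"]}
    # 2 and 4. Among the survivors, prefer low latency, then low cost.
    return min(candidates,
               key=lambda r: (candidates[r]["latency_ms"],
                              candidates[r]["relative_cost"]))

print(choose_region("EU", {"ec2", "s3", "bedrock"}))  # eu-central-1
```

Note how eu-west-3 is eliminated by the service-availability filter despite acceptable latency — exactly the "specific service not available" trap described above.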

Availability Zones — Physical Separation, No Single Point of Failure

An Availability Zone (AZ) is one or more physically separate data centers inside a Region with independent power, cooling, physical security, and networking. AZs are the building block that lets you design highly available workloads on AWS Global Infrastructure without leaving a Region.

What Makes an AZ an AZ

  • Redundant power from independent grid substations and on-site generators.
  • Independent cooling so a chiller failure in one AZ does not affect another.
  • Isolated network fabric — each AZ has its own top-of-rack and spine switches, with dedicated uplinks into the Region's fibre backbone.
  • Meaningful physical distance — typically up to 100 km apart, but close enough to keep single-digit-millisecond fibre latency between AZs.

AZ Identifiers and Obfuscation

Inside your AWS account, AZs appear as us-east-1a, us-east-1b, us-east-1c, etc. AWS deliberately maps these letter suffixes randomly per account, so your us-east-1a is not necessarily the same physical AZ as another customer's us-east-1a. This prevents hotspots from customers all defaulting to the "first" AZ.
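A toy model of this per-account shuffling is below. The real mapping is AWS-internal; the hashing scheme here is invented purely to illustrate the idea. The stable identifiers AWS actually exposes for cross-account coordination are Zone IDs such as use1-az1:

```python
import hashlib

# Physical zones carry stable Zone IDs; the letter suffixes customers see
# are a per-account permutation. This derivation is hypothetical.
PHYSICAL_ZONE_IDS = ["use1-az1", "use1-az2", "use1-az3"]

def az_mapping(account_id: str, region: str = "us-east-1") -> dict:
    # Deterministic, per-account permutation of the physical zones.
    seed = hashlib.sha256(f"{account_id}:{region}".encode()).digest()
    order = sorted(range(len(PHYSICAL_ZONE_IDS)), key=lambda i: seed[i])
    return {f"{region}{chr(ord('a') + n)}": PHYSICAL_ZONE_IDS[i]
            for n, i in enumerate(order)}

# Two accounts may see different physical zones behind the same letter:
print(az_mapping("111111111111"))
print(az_mapping("222222222222"))
```

The practical takeaway: never assume "us-east-1a" means the same physical facility across accounts; compare Zone IDs instead.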

Minimum AZ Rules

Every AWS Region inside AWS Global Infrastructure has a minimum of three Availability Zones. Some services require multiple AZs — for example, Multi-AZ RDS replicates synchronously across two AZs, and Elastic Load Balancing distributes traffic across targets in multiple AZs.

The exam often tempts you with "An Availability Zone is a single data center." This is wrong. An AZ can be one or more data centers, all sharing the same AZ identity with redundant power, networking, and cooling. Also wrong: "A Region has 2 AZs." Every AWS Region in AWS Global Infrastructure has 3 or more AZs — never 2.

High Availability Using Multiple AZs

The canonical high-availability pattern on AWS Global Infrastructure is "Multi-AZ." Examples:

  • Amazon RDS Multi-AZ — primary in AZ-a, synchronous standby in AZ-b. Failover takes 60–120 seconds.
  • Auto Scaling Groups across AZs — EC2 launches spread across 2-3 AZs behind an ALB.
  • Amazon EFS — data replicated automatically across AZs.
  • Amazon S3 Standard — stores objects redundantly across at least 3 AZs within a Region.

When Multi-AZ Is Not Enough — Multi-Region

For disaster recovery that survives a full-Region outage, you need multi-Region. Typical triggers for going multi-Region on AWS Global Infrastructure:

  • Regulatory requirement for a DR site in a different jurisdiction.
  • Very low RTO/RPO for globally critical systems.
  • Users spread across continents with unacceptable cross-ocean latency.

Edge Locations — CloudFront, Route 53, Global Accelerator

Edge Locations are endpoints of AWS Global Infrastructure deployed in hundreds of cities worldwide. Unlike Regions and AZs, Edge Locations are not used to host your EC2 instances or S3 buckets. They exist to terminate end-user traffic close to the viewer and pass requests toward the origin Region.

Services That Use Edge Locations

Three CLF-C02 services live on the Edge layer of AWS Global Infrastructure:

  • Amazon CloudFront — global content delivery network (CDN). Caches static and dynamic content at Edge Locations so repeat requests do not travel back to the origin.
  • Amazon Route 53 — DNS service with an Anycast network spread across Edge Locations. DNS queries resolve at the nearest PoP.
  • AWS Global Accelerator — uses the Edge to route TCP/UDP traffic over the AWS backbone instead of the public internet, improving consistency and reducing jitter.

Hundreds of PoPs vs Tens of Regions

Regions are measured in dozens. Edge Locations are measured in hundreds. AWS has over 600 Points of Presence across 100+ cities globally. That scale is why CloudFront can dramatically reduce end-user latency compared to serving straight from a Region.

Why This Is a Structural Topic, Not a Service Topic

Although CloudFront is a specific AWS service, the CLF-C02 blueprint splits the knowledge:

  • Global Infrastructure (Task 3.2) — knowing that Edge Locations exist as a layer of AWS Global Infrastructure.
  • Network Services (Task 3.5) — knowing how to configure CloudFront distributions, invalidations, and cache behaviours.

For this page we stop at the structural boundary — CloudFront uses Edge Locations that are part of AWS Global Infrastructure. For detailed CloudFront features see the network-services topic.

Regional Edge Caches — The Tier Between Edge and Origin

Regional Edge Caches are a less-famous layer of AWS Global Infrastructure, often under-covered by older CLF-C01 study materials. They are larger CloudFront caches positioned between Edge Locations and the origin Region.

Why Regional Edge Caches Exist

A single Edge Location has limited storage and evicts rarely-requested content quickly. If a request misses the Edge cache and has to travel all the way back to the origin Region across the internet, latency spikes. The Regional Edge Cache sits in the middle: it keeps content that is not hot enough for the Edge but still too popular to fetch from origin repeatedly.

How the Flow Works

  1. Viewer requests example.com/video.mp4.
  2. Request hits the nearest Edge Location. Cache miss.
  3. Edge Location fetches from the nearest Regional Edge Cache. Hit.
  4. Edge caches the object and serves the viewer.

This multi-tier caching is a key part of CloudFront's behaviour on AWS Global Infrastructure and explains why hit ratios stay high even for long-tail content.

Edge Locations are numerous, small, and viewer-facing. Regional Edge Caches are fewer, larger, and origin-facing. Both are part of AWS Global Infrastructure's Edge tier. Questions that mention "mid-tier cache between PoPs and origin" point to Regional Edge Caches.

AWS Local Zones — Ultra-Low Latency to Metro Areas

Local Zones are an extension of AWS Global Infrastructure designed for workloads that need single-digit-millisecond latency to specific metropolitan areas. Unlike Edge Locations (which terminate and cache traffic rather than host general workloads), Local Zones run actual compute, storage, and database services.

What Runs in a Local Zone

A Local Zone typically supports:

  • Amazon EC2 instance families (selected types)
  • Amazon EBS volumes
  • Amazon FSx
  • Elastic Load Balancing
  • Amazon VPC subnets extended into the Local Zone
  • Amazon RDS (selected engines)

Local Zone Use Cases

  1. Real-time gaming — players in Los Angeles need sub-10 ms latency to the game server; a us-west-2-lax-1a Local Zone hosts the servers.
  2. Media & entertainment post-production — large video files edited remotely with workstation-class latency.
  3. Live video streaming and transcoding — content creators feeding bursty uploads.
  4. AR/VR training simulations — industrial workflows that cannot tolerate 50 ms RTT.

Parent Region Relationship

Every Local Zone is connected to a parent Region. For example, us-west-2-lax-1a is a Local Zone in Los Angeles whose parent Region is us-west-2 (Oregon). Services in the Local Zone inherit identity, networking, and global control planes from the parent Region. You extend a VPC from the parent Region into the Local Zone.
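Parent-Region inheritance is visible in the zone name itself. The hypothetical parser below illustrates the naming pattern; real code should ask the EC2 DescribeAvailabilityZones API, which reports the parent group explicitly rather than relying on name structure:

```python
import re

def parent_region(zone_name: str) -> str:
    # Local Zone names follow <parent-region>-<metro>-<number><letter>,
    # e.g. "us-west-2-lax-1a". Parsing names like this is a sketch of the
    # convention, not a supported API.
    m = re.fullmatch(r"([a-z]+-[a-z]+-\d+)-([a-z]+)-\d+[a-z]", zone_name)
    if not m:
        raise ValueError(f"not a Local Zone name: {zone_name}")
    return m.group(1)

print(parent_region("us-west-2-lax-1a"))  # us-west-2
```

The parent code embedded in the name reflects the relationship described above: identity, networking, and control planes all come from that Region.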

AWS Wavelength Zones — AWS Global Infrastructure on 5G Networks

Wavelength Zones embed AWS Global Infrastructure inside telecom operators' 5G networks. The goal: ultra-low-latency access for mobile devices without traffic ever leaving the mobile carrier's network.

How Wavelength Differs From Local Zones

  • Local Zones — AWS Global Infrastructure in a metro-area AWS-owned facility, reachable over the public internet or Direct Connect.
  • Wavelength Zones — AWS Global Infrastructure inside a specific telco's 5G network (Verizon in the U.S., KDDI in Japan, Vodafone in the UK/Germany, SK Telecom in South Korea). Accessed directly from mobile devices on that carrier.

Wavelength Use Cases

  1. 5G mobile edge compute — AR glasses, connected cars, real-time video analysis.
  2. Industrial IoT — factory robots on private 5G requiring millisecond response.
  3. Live broadcasting from mobile — streaming creators whose content never leaves the carrier backbone.

Supported Services

Fewer services run in Wavelength Zones than Local Zones — typically EC2, EBS, VPC, and load balancing. Heavy managed services (Aurora, Bedrock) stay in the parent Region.

If the scenario mentions "5G mobile operator" or a carrier name (Verizon, Vodafone, KDDI), the answer is Wavelength Zone. If the scenario mentions "metro area" or "city-specific low latency for customers in Los Angeles / Boston / Chicago", the answer is Local Zone. Both are part of AWS Global Infrastructure, but the CLF-C02 exam loves this specific distinction.

AWS Outposts — AWS Global Infrastructure on Customer Premises

AWS Outposts brings the same AWS Global Infrastructure hardware, services, APIs, and tools into the customer's own data center. It is the answer to the question "What if I need AWS consistency but my data cannot leave the building?"

Outposts Form Factors

  • Outposts Racks — 42U standard racks shipped fully configured.
  • Outposts Servers — 1U or 2U servers for smaller spaces (retail stores, branch offices, maritime).

What Outposts Supports

  • EC2, EBS, S3 on Outposts
  • ECS, EKS for containers
  • Amazon RDS on Outposts
  • Application Load Balancer
  • EMR for local big-data processing

Outposts Use Cases

  1. Low-latency local applications — a manufacturing floor control system that must not depend on a WAN link.
  2. Data residency — a country where no AWS Region exists yet but data must stay on-premises.
  3. Local data processing — pre-processing large datasets before shipping summaries to the cloud.
  4. Legacy integration — keeping mainframe or proprietary hardware adjacent to AWS services.

AWS Global Infrastructure Boundary

Outposts is the only part of AWS Global Infrastructure that you physically house. AWS still owns and operates the hardware (fully managed), patches it, and ships replacements — but it lives at your address. This is the line where AWS Global Infrastructure meets hybrid cloud.

How to Choose a Region — A CLF-C02 Decision Framework

Region selection is a recurring scenario on the CLF-C02 exam. Use this four-step decision order when AWS Global Infrastructure questions appear.

Step 1 — Compliance / Data Residency

Does a law or contract require data to stay within specific borders? If yes, filter to Regions in that jurisdiction. Examples:

  • GDPR → EU Regions (eu-west-1, eu-central-1, eu-west-3, etc.).
  • HIPAA → any commercial Region; HIPAA-eligible services are covered under the AWS Business Associate Addendum (BAA).
  • ITAR / FedRAMP High → AWS GovCloud (US) Regions.
  • China's PIPL → AWS China (Beijing / Ningxia), operated by Sinnet and NWCD.

Step 2 — Latency to Users

Where are the majority of your users? Pick the Region geographically closest. For global apps, use AWS Global Infrastructure's Edge tier (CloudFront + Global Accelerator) to reduce latency further.

Step 3 — Service Availability

Check the regional service list. Amazon Bedrock, Amazon Q, SageMaker JumpStart, and Local Zones are not in every Region.

Step 4 — Cost

On-demand prices vary by Region. us-east-1 is typically cheapest, followed by us-east-2 and us-west-2. Japan and Australia Regions tend to be pricier.

If the question says "regulatory" or "residency" → filter by jurisdiction. "Low latency for users" → closest Region or Local Zone. "Cheapest" → us-east-1. "Specific service not available" → the stem is telling you to switch Regions. This four-step ordering rarely fails on CLF-C02.

High Availability Design on AWS Global Infrastructure

AWS Global Infrastructure gives you three scales of HA:

  • Single AZ, multi-rack — protects against rack-level failures only. Rarely appropriate for production.
  • Multi-AZ within a Region — the default HA pattern. Protects against AZ failures including localized natural disasters. Uses AWS-internal low-latency fibre.
  • Multi-Region — for catastrophic Region outages, global DR, and cross-border compliance. Typically requires active-passive or active-active replication (S3 CRR, DynamoDB Global Tables, Aurora Global Database).
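A quick calculation shows why each additional AZ helps, assuming AZ failures are independent and each AZ is up a fraction a of the time. The numbers are illustrative, not AWS SLA figures:

```python
def combined_availability(a: float, n_azs: int) -> float:
    # The workload is down only if ALL n AZs are down simultaneously,
    # so availability = 1 - P(every AZ fails) under independence.
    return 1 - (1 - a) ** n_azs

# With each AZ up 99.9% of the time:
for n in (1, 2, 3):
    print(f"{n} AZ(s): {combined_availability(0.999, n):.9f}")
```

Each extra AZ multiplies the unavailability by another factor of (1 − a) — the mathematical reason Multi-AZ is the default HA pattern. Real AZ failures are not perfectly independent, which is exactly why AWS physically separates them.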

Common Multi-AZ Patterns

  • Stateless web tier — Auto Scaling Group across 3 AZs + Application Load Balancer.
  • Stateful database tier — RDS Multi-AZ, Aurora cluster with replicas across AZs, DynamoDB (natively multi-AZ inside a Region).
  • Shared filesystem — Amazon EFS (multi-AZ by default) or FSx with Multi-AZ deployment.

Multi-Region DR Patterns

  • Backup and restore — cheapest, slowest RTO. Periodic cross-Region backups.
  • Pilot light — minimal resources running in DR Region, scaled up on failover.
  • Warm standby — scaled-down full stack running; scale out on failover.
  • Active-active / multi-site — full capacity in both Regions with intelligent routing.
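The trade-off across these four tiers — faster recovery costs more to keep running — can be sketched as a simple selection function. The RTO figures below are invented for illustration, not AWS guidance:

```python
def cheapest_tier(max_rto_hours: float) -> str:
    # (name, illustrative RTO in hours, relative standing-cost rank)
    tiers = [
        ("backup-and-restore", 24,   1),
        ("pilot-light",         4,   2),
        ("warm-standby",        1,   3),
        ("active-active",       0.01, 4),
    ]
    # Keep the tiers that meet the RTO target, then take the cheapest.
    ok = [(name, cost) for name, rto, cost in tiers if rto <= max_rto_hours]
    return min(ok, key=lambda t: t[1])[0]

print(cheapest_tier(2))   # warm-standby
print(cheapest_tier(48))  # backup-and-restore
```

This mirrors how scenario questions work: the stem gives an RTO (or a cost sensitivity), and the correct DR pattern is the cheapest tier that still meets it.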

These DR tiers map back to the Reliability pillar of the Well-Architected Framework — see the well-architected-framework topic for more depth.

Data Residency, Sovereignty, and Compliance

Data residency controls where data physically lives on AWS Global Infrastructure; sovereignty controls which jurisdiction's laws apply. Both are solved first at the Region layer.

Region-Level Controls

  • Do not replicate automatically — S3 Cross-Region Replication is opt-in; by default objects stay put.
  • Region-scoped services — most services are Region-scoped; IAM, CloudFront, and Route 53 are exceptions.
  • Opt-in Regions — some Regions (Hong Kong, Bahrain, Cape Town, Milan, UAE, Tel Aviv, Jakarta, Zurich, Hyderabad, Melbourne, Malaysia) require explicit opt-in to prevent accidental data placement.

GovCloud and Sovereign-Cloud Regions

  • AWS GovCloud (US) — isolated Regions for U.S. government workloads; physically and logically separate from commercial AWS Global Infrastructure.
  • AWS Sovereign Cloud (EU) — AWS announced the European Sovereign Cloud for workloads with strict EU sovereignty requirements.
  • AWS China Regions — operated under local partnership; separate AWS account required.

Key Numbers and Must-Memorize Facts

  • 30+ Regions globally as of 2026.
  • 100+ Availability Zones total across AWS Global Infrastructure.
  • 3+ AZs per Region — the minimum guarantee for every AWS Region.
  • 600+ Edge Locations / PoPs serving CloudFront, Route 53, and Global Accelerator.
  • Regional Edge Caches — a dozen-plus mid-tier caches, positioned between Edge Locations and origin.
  • Local Zones — dozens of metro-area extensions, each tied to a parent Region.
  • Wavelength Zones — telco-integrated Zones inside 5G networks (Verizon, Vodafone, KDDI, SK Telecom, Bell, etc.).
  • Outposts — customer-premises racks and servers, fully managed by AWS.


Common Exam Traps — Boundary Tests on AWS Global Infrastructure

Trap 1 — Region vs Availability Zone

A Region is not a single data center. An AZ is not a single building. Students get this backward under exam pressure. Anchor: Region → AZs → data centers.

Trap 2 — Edge Location vs Regional Edge Cache

Edge Locations are viewer-facing and numerous. Regional Edge Caches are fewer and sit between Edge Locations and origin. If the question says "mid-tier caching to reduce origin load," answer Regional Edge Cache.

Trap 3 — Local Zone vs Wavelength Zone

Local Zone = metro area for general workloads. Wavelength Zone = inside a 5G carrier for mobile-edge workloads. The carrier name in the stem is the giveaway.

Trap 4 — Outposts as "A Region"

Outposts is not a Region. It is an extension of a parent Region that happens to live on your floor. Exam distractors sometimes treat Outposts as an independent Region — this is wrong.

Trap 5 — CloudFront as Purely Edge

CloudFront runs at the Edge, but its configuration, invalidations, and distribution metadata are controlled from a global endpoint homed in us-east-1. This is why CloudFront is classed as a global service.

Trap 6 — Assuming Every Service Exists in Every Region

Bedrock, Amazon Q, and some ML services launch in a subset of Regions first. If the exam says "A company in Mumbai wants to use Bedrock with Claude 3.5 Sonnet" and offers ap-south-1, verify whether the service is available — otherwise another Region may be correct.

AWS Global Infrastructure vs Network Services — Scope Boundary

This is a Task Statement 3.2 topic: structure. The sibling Task Statement 3.5 (Network Services) covers services that run on the infrastructure.

  • 3.2 (this page) — the existence and layout of Regions, AZs, Edge Locations, Regional Edge Caches, Local Zones, Wavelength Zones, Outposts.
  • 3.5 (network-services) — how to configure VPC, subnets, Security Groups, NACLs, Route 53 routing policies, CloudFront distributions, Direct Connect, VPN, Global Accelerator.

When a question asks "What part of the AWS Global Infrastructure caches content close to users?" → Edge Locations (3.2 answer). When a question asks "Which service caches content close to users?" → CloudFront (3.5 answer).

Use the quiz engine to drill Task 3.2 questions that map to AWS Global Infrastructure concepts:

  • "How many AZs does every AWS Region have at minimum?" — targets the min-AZ rule.
  • "Which AWS Global Infrastructure component is used by Amazon CloudFront?" — Edge Locations.
  • "A gaming company in Los Angeles needs single-digit-ms latency for players in the LA metro." — Local Zone.
  • "A telco partner needs AWS services embedded in its 5G network." — Wavelength Zone.
  • "A bank cannot move certain datasets off-premises but wants AWS APIs." — AWS Outposts.
  • "An EU customer must keep user data inside the EU." — pick an EU Region.

FAQ — AWS Global Infrastructure Top Questions

Q1. What is the difference between an AWS Region and an Availability Zone?

A Region in AWS Global Infrastructure is a geographic area containing multiple Availability Zones. An AZ is one or more physically separate data centers within that Region, each with independent power, cooling, and networking. You choose a Region for compliance and latency; you choose multiple AZs inside that Region for high availability. Every Region in AWS Global Infrastructure has at least three AZs, and they are typically tens of kilometres apart but connected by low-latency AWS-owned fibre.

Q2. Are Edge Locations the same as Availability Zones?

No. Edge Locations are separate from Availability Zones and do not host customer workloads such as EC2 instances or S3 buckets. Edge Locations are the Point-of-Presence layer of AWS Global Infrastructure used by CloudFront, Route 53, and Global Accelerator to terminate user traffic close to the viewer. AZs inside Regions are where your compute and storage live. You can think of Edge Locations as the "last mile" of AWS Global Infrastructure, while AZs are the "data center core."

Q3. When should I use a Local Zone versus a Wavelength Zone?

Use a Local Zone when you need single-digit-ms latency to users in a specific metropolitan area over the public internet (gaming, media rendering, live video). Use a Wavelength Zone when your users connect through a specific 5G mobile carrier and traffic must stay inside that carrier's network (connected vehicles, AR/VR on 5G, industrial IoT). Both are extensions of AWS Global Infrastructure tied to a parent Region, but the access path is different.

Q4. Does AWS automatically replicate data across Regions?

No. By default, AWS Global Infrastructure keeps data inside the Region where you created it. S3 Cross-Region Replication, DynamoDB Global Tables, Aurora Global Database, and AWS Backup cross-Region copy are all opt-in features. This default is intentional: it preserves data sovereignty and avoids surprising charges. Many CLF-C02 questions hinge on this — do not assume replication without it being configured.

Q5. How do I choose an AWS Region?

Evaluate four criteria in order on CLF-C02: (1) compliance / data residency — filter out Regions that do not meet legal constraints; (2) latency to users — pick the closest Region; (3) service availability — verify the exact services you need exist in that Region; (4) cost — compare on-demand pricing, recognizing that us-east-1 is typically the cheapest. Missing any one of these four can make your Region choice wrong for the scenario.

Q6. Is AWS Outposts the same as a Region?

No. AWS Outposts is AWS Global Infrastructure hardware installed on your premises and connected back to a parent Region. It is not an independent Region. When you deploy an Outposts rack, you extend a VPC from the parent Region, and IAM / control plane remain in that parent Region. Outposts is the right answer for low-latency local workloads, strict data residency where no Region exists, or tight integration with on-premises systems — but it is always a child of the parent AWS Region.

Q7. Why does AWS Global Infrastructure matter for high availability?

Because each layer of AWS Global Infrastructure provides a different failure-isolation boundary. Multi-AZ deployments inside one Region guard against data-center-level failures. Multi-Region architectures guard against full-Region outages and natural disasters that affect an entire metro area. Combining Edge Locations, Regional Edge Caches, and Global Accelerator adds resilience at the network edge. Understanding this hierarchy is the fastest way to answer scenario questions about RPO, RTO, and availability targets on the CLF-C02 exam.

Further Reading on AWS Global Infrastructure

Official sources