
Storage Services (S3, EBS, EFS, FSx, Storage Gateway, Backup)

4,120 words · ≈ 21 min read

AWS storage services span three fundamental paradigms — object, block, and file — plus hybrid and backup layers. The AWS Cloud Practitioner (CLF-C02) Task 3.6 ("Identify AWS storage services") tests whether you can pick the right AWS storage service for a given scenario, read S3 storage classes without hesitation, and untangle the classic Amazon S3 vs Amazon EBS vs Amazon EFS confusion. This study note walks every exam-scope AWS storage service, drills the most-missed traps, and gives you enough repetition on the high-frequency keywords (Amazon S3, Amazon EBS, Amazon EFS, Amazon FSx, S3 storage classes) so the answers click in under 20 seconds of reading the question.

Because AWS storage services account for roughly 34% of the CLF-C02 weight inside Domain 3, and because community data shows storage services are the highest-heat topic in Domain 3 (mention count 340, difficulty 0.75), under-preparing here is a reliable way to fail. Use this guide end to end.

What Are AWS Storage Services?

AWS storage services are managed services that let you persist, share, archive, and protect data without running the underlying disks, filers, or tape robots yourself. They cover three storage paradigms and one protection layer:

  • Object storage — Amazon S3. Data is stored as objects (file + metadata + unique key) in a flat namespace called a bucket, accessed over HTTP(S) through the S3 API.
  • Block storage — Amazon EBS. Data is stored as fixed-size blocks on a virtual volume that behaves like a raw disk attached to a single Amazon EC2 instance.
  • File storage — Amazon EFS and Amazon FSx. Data is stored in a hierarchical file system that multiple clients can mount over NFS (EFS) or SMB/Lustre/NFS/ZFS protocols (FSx).
  • Hybrid + data protection — AWS Storage Gateway bridges on-premises to the AWS Cloud, AWS Backup centralizes backups, and AWS Elastic Disaster Recovery (DRS) replicates machines for recovery.

AWS storage services differ from databases: databases (RDS, DynamoDB) expose query APIs for structured data, while AWS storage services expose byte-level, file-level, or object-level APIs for raw content. Keep this boundary crisp — CLF-C02 regularly hides it in distractor answers that mix Amazon S3 with Amazon RDS.

The Three Storage Paradigms in One Table

| Paradigm | Example AWS Service | Unit | Protocol | Best For |
|---|---|---|---|---|
| Object | Amazon S3 | Object (file + key + metadata) | HTTPS/REST API | Backups, static assets, data lakes, media |
| Block | Amazon EBS | 512-byte to 4 KiB blocks | Attached as block device to EC2 | Boot volumes, databases, transactional apps |
| File | Amazon EFS, Amazon FSx | Files and folders | NFS, SMB, Lustre | Shared file shares, Linux/Windows workloads, HPC |
Object, block, and file storage are the three paradigms of AWS storage services. Amazon S3 is object, Amazon EBS is block, Amazon EFS and Amazon FSx are file. Memorize this mapping — it is the single most asked question family in CLF-C02 storage.

Core Operating Principles of AWS Storage Services

AWS storage services share a few design principles that repeat across the portfolio:

  1. Managed — AWS owns the hardware, replication, patching, and capacity provisioning. You own the data, access control, and encryption keys.
  2. Durable by default — Most AWS storage services replicate data across multiple devices and often multiple facilities. Amazon S3 offers 99.999999999% (11 9's) durability for stored objects.
  3. Region vs AZ scope — Amazon S3 is regional (objects automatically stored across at least 3 AZs in the selected Region). Amazon EBS is AZ-scoped (one volume lives in one AZ). Amazon EFS is regional (mount targets in each AZ of the Region).
  4. Billing is pay-as-you-go — You pay for storage capacity, requests, data transfer, and sometimes retrieval (Glacier).
  5. Security — Encryption at rest (SSE-S3, SSE-KMS, SSE-C for S3; EBS encryption; EFS encryption; FSx encryption) and in-transit TLS.
Amazon S3 is regional — AWS automatically stores copies across at least three Availability Zones for you. Amazon EBS is AZ-scoped — a single volume lives in exactly one AZ and cannot be attached to an EC2 instance in a different AZ without a snapshot. Amazon EFS is regional — it exposes a mount target in each AZ of the Region. This scope question appears almost every CLF-C02 attempt.

Amazon S3 — Object Storage Deep Dive

Amazon S3 (Amazon Simple Storage Service) is AWS's flagship object storage service and the single most tested AWS storage service in CLF-C02. You put any blob of data (up to 5 TB per object) into a bucket and reference it by a key.

Amazon S3 Key Facts

  • Durability: 99.999999999% (11 9's) — all S3 storage classes are designed for 11 9's durability; One Zone-IA keeps data in a single AZ, so data is lost if that AZ is destroyed.
  • Availability: Varies by storage class — S3 Standard is 99.99%, S3 One Zone-IA is 99.5%.
  • Object size: Up to 5 TB per object; single PUT up to 5 GB; multipart upload recommended for objects larger than 100 MB.
  • Bucket names: Globally unique across all AWS accounts.
  • Access: HTTPS API, S3 console, AWS CLI, SDK — NOT mountable as a block device.
  • Namespace: Flat. "Folders" are a UI illusion; the / is just a character in the key.
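The flat-namespace point is easy to demonstrate locally. The snippet below is a pure-Python simulation of what the S3 console does when you "open a folder" — no real bucket or API call is involved, and the keys are invented for illustration:

```python
# S3 has no real folders: every object is one key in a flat namespace.
# The console fakes folders by listing keys that share a prefix.
keys = [
    "photos/2024/beach.jpg",
    "photos/2024/city.jpg",
    "photos/2025/snow.jpg",
    "logs/app.log",
]

def list_by_prefix(keys, prefix):
    """Mimic the console's 'folder view': filter keys by shared prefix."""
    return [k for k in keys if k.startswith(prefix)]

print(list_by_prefix(keys, "photos/2024/"))
# The '/' characters are ordinary bytes in the key, nothing more.
```

If you rename "photos/2024/" you are not renaming a folder — you are copying and deleting every object whose key happens to start with that string.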

Amazon S3 Features

  • Versioning — keeps every version of an object; protects against accidental delete and overwrite.
  • Lifecycle policies — automatically transition objects between S3 storage classes or expire them after N days.
  • Cross-Region Replication (CRR) and Same-Region Replication (SRR) — asynchronous object copying.
  • Encryption — SSE-S3 (S3-managed keys), SSE-KMS (KMS-managed keys), SSE-C (customer-provided keys), client-side.
  • Access control — Bucket policies, IAM policies, ACLs, S3 Block Public Access, pre-signed URLs.
  • S3 Object Lock — WORM (write-once-read-many) compliance for regulated workloads.
  • S3 Transfer Acceleration — uses CloudFront edge locations to speed long-distance uploads.
  • S3 Event Notifications — trigger Lambda, SQS, SNS on object create/delete.
For exam scenarios where someone wants to serve a static website, host images, back up on-premises data, or build a data lake — the answer is almost always Amazon S3. If you see "static content at scale" or "durable archive" the Amazon S3 flag should go up immediately.

Amazon S3 Storage Classes

S3 storage classes are a classic CLF-C02 keyword trap. There are seven S3 storage classes you should know. Memorize them as a ladder from hottest to coldest:

  1. S3 Standard — Frequent access, millisecond retrieval, 99.99% availability. Default.
  2. S3 Intelligent-Tiering — Moves objects across access tiers automatically based on access pattern. Small monitoring fee per object, but no lifecycle rules needed.
  3. S3 Standard-IA (Infrequent Access) — Lower storage cost than Standard, per-GB retrieval fee, 99.9% availability. Minimum 30 days, minimum 128 KB per object.
  4. S3 One Zone-IA — Like Standard-IA but data stored in a single AZ. 99.5% availability; 20% cheaper. Use for re-creatable data.
  5. S3 Glacier Instant Retrieval — Archive with millisecond retrieval. For data accessed once a quarter.
  6. S3 Glacier Flexible Retrieval — Archive with minutes-to-hours retrieval (Expedited 1-5 min, Standard 3-5 hrs, Bulk 5-12 hrs).
  7. S3 Glacier Deep Archive — Cheapest S3 storage class. 12-48 hour retrieval. For compliance archives accessed once or twice a year.

S3 Storage Classes Cheat Sheet

| S3 Storage Class | Retrieval Time | Min Storage Duration | Use Case |
|---|---|---|---|
| S3 Standard | ms | None | Hot data, websites |
| S3 Intelligent-Tiering | ms-hours | None | Unknown access pattern |
| S3 Standard-IA | ms | 30 days | Backups accessed monthly |
| S3 One Zone-IA | ms | 30 days | Re-creatable, non-critical |
| S3 Glacier Instant Retrieval | ms | 90 days | Quarterly access archives |
| S3 Glacier Flexible Retrieval | 1 min-12 hr | 90 days | Disaster recovery data |
| S3 Glacier Deep Archive | 12-48 hr | 180 days | 7-10 year compliance |
Amazon S3 has 11 9's durability (99.999999999%) across nearly all S3 storage classes — meaning you would lose 1 object every 10,000 years with 10 million objects stored. Availability is different and varies: 99.99% (Standard), 99.9% (IA), 99.5% (One Zone-IA). Exam distractors swap durability and availability numbers to trip you up.
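The "1 object every 10,000 years" claim follows directly from the durability figure, and checking the arithmetic is a good way to cement the durability-vs-availability distinction:

```python
# Sanity-check the "1 object per 10,000 years" claim behind 11 9's durability.
durability = 0.99999999999          # 99.999999999% annual durability per object
annual_loss_prob = 1 - durability   # ~1e-11 chance of losing a given object in a year
objects = 10_000_000                # the 10-million-object fleet from the claim

expected_losses_per_year = objects * annual_loss_prob   # ~0.0001 objects/year
years_per_single_loss = 1 / expected_losses_per_year    # ~10,000 years

print(round(expected_losses_per_year, 6))  # 0.0001
print(round(years_per_single_loss))        # 10000
```

Availability (99.99% for Standard) is a completely separate promise about whether a request succeeds right now, not about whether the data still exists.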

S3 Lifecycle Policies and Versioning

S3 lifecycle policies are automation rules that tell Amazon S3 how to transition or expire objects. Typical chain:

Day 0    → S3 Standard
Day 30   → S3 Standard-IA
Day 90   → S3 Glacier Flexible Retrieval
Day 365  → S3 Glacier Deep Archive
Day 2555 → Expire (delete)
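The chain above maps one-to-one onto a lifecycle rule. The dictionary below loosely follows the shape of S3's lifecycle configuration payload (the `STANDARD_IA`, `GLACIER`, and `DEEP_ARCHIVE` storage-class strings are the real API values), but the rule ID and prefix are invented — treat it as a sketch, not a tested policy:

```python
# Sketch of a lifecycle rule expressing the Day 0 -> Day 2555 chain above.
# Shape loosely mirrors S3's lifecycle configuration; ID/prefix are invented.
lifecycle_rule = {
    "ID": "archive-then-expire",          # hypothetical rule name
    "Status": "Enabled",
    "Filter": {"Prefix": "logs/"},        # hypothetical: apply to keys under logs/
    "Transitions": [
        {"Days": 30,  "StorageClass": "STANDARD_IA"},
        {"Days": 90,  "StorageClass": "GLACIER"},       # Flexible Retrieval
        {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
    ],
    "Expiration": {"Days": 2555},         # ~7 years, then delete
}

# Transitions must move to progressively colder tiers as the day count grows.
days = [t["Days"] for t in lifecycle_rule["Transitions"]]
assert days == sorted(days)
print(days, "-> expire at day", lifecycle_rule["Expiration"]["Days"])
```

Note how expiration (2555 days ≈ 7 years) sits after the last transition — S3 rejects rules where a colder tier would come before a hotter one.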

Versioning keeps every version of an object. Once enabled on a bucket you cannot disable it — only suspend it. Versioning plus MFA Delete is a classic answer for "protect against accidental deletion" scenarios.

S3 Intelligent-Tiering is designed to eliminate the need for manual lifecycle rules. If you see a question asking "which storage class automatically optimizes cost for unknown access patterns without lifecycle rules" — the answer is S3 Intelligent-Tiering, NOT S3 Standard-IA. Intelligent-Tiering has no retrieval charges and no minimum storage duration, at the cost of a small monthly monitoring fee per object.

S3 Replication

  • Cross-Region Replication (CRR) — replicate objects to a bucket in another AWS Region (compliance, disaster recovery, latency).
  • Same-Region Replication (SRR) — replicate objects within the same Region (log aggregation, account isolation).
  • Both require versioning enabled on source and destination buckets.
  • Replication is asynchronous and only applies to objects created after rule activation (with optional S3 Batch Replication for existing objects).

Amazon EBS — Block Storage for EC2

Amazon EBS (Amazon Elastic Block Store) provides persistent block-level storage volumes for Amazon EC2 instances. Think of Amazon EBS as a virtual disk attached to your EC2 instance.

Amazon EBS Key Facts

  • Scope: Availability Zone. An Amazon EBS volume lives in exactly one AZ and can only attach to EC2 instances in that same AZ.
  • Attachment: One volume attaches to one EC2 instance (except io1/io2 Multi-Attach for specific clustered workloads).
  • Size: 1 GiB to 64 TiB depending on volume type.
  • Persistence: Data persists beyond instance lifecycle (unless DeleteOnTermination=true).
  • Snapshots: Point-in-time backups stored in Amazon S3 (managed by AWS — you don't see the bucket). Snapshots can be copied across Regions to enable cross-Region DR.
  • Encryption: AES-256 via KMS; volumes created from encrypted snapshots are automatically encrypted.

Amazon EBS Volume Types

| Type | Category | Max IOPS | Use Case |
|---|---|---|---|
| gp3 | SSD General Purpose | 16,000 | Default choice, most workloads, boot volumes |
| gp2 | SSD General Purpose | 16,000 | Legacy default, IOPS scale with size |
| io2 Block Express | SSD Provisioned IOPS | 256,000 | Mission-critical databases, SAP HANA |
| io1 | SSD Provisioned IOPS | 64,000 | High-performance databases |
| st1 | HDD Throughput Optimized | 500 | Big data, log processing, data warehouses |
| sc1 | HDD Cold | 250 | Lowest-cost infrequently accessed data |
Amazon EBS snapshots are stored in Amazon S3 even though you never see the bucket. This is why you can copy an EBS snapshot to another Region — it goes through the AWS-managed S3 layer. Snapshots are incremental: only changed blocks are copied after the first snapshot, which makes them cost-efficient.
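gp2's "IOPS scale with size" behavior in the table above is concrete enough to model: the published gp2 baseline is 3 IOPS per GiB, floored at 100 IOPS and capped at 16,000. The helper name below is ours, but the formula reflects AWS's documented gp2 baseline:

```python
def gp2_baseline_iops(size_gib: int) -> int:
    """gp2 baseline performance: 3 IOPS per GiB, minimum 100, capped at 16,000."""
    return min(max(3 * size_gib, 100), 16_000)

print(gp2_baseline_iops(10))     # 100  -- small volumes get the 100-IOPS floor
print(gp2_baseline_iops(1000))   # 3000 -- a 1 TiB volume earns 3,000 IOPS
print(gp2_baseline_iops(6000))   # 16000 -- capped past ~5,334 GiB
```

This is also why gp3 replaced gp2 as the default: gp3 gives 3,000 IOPS baseline regardless of size, so you no longer over-provision capacity just to buy IOPS.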

EC2 Instance Store vs Amazon EBS

  • Instance store — physically attached to the host machine, ephemeral (lost on stop/terminate), zero network latency, free with the instance.
  • Amazon EBS — network-attached block storage, persistent, paid separately.

If an exam question asks "highest IOPS for temporary data" — instance store can win. For persistence or boot volumes — Amazon EBS.

Amazon EFS — Shared NFS File Storage

Amazon EFS (Amazon Elastic File System) is a fully managed NFSv4 file system for Linux workloads. Many EC2 instances and even on-premises servers (via VPN or Direct Connect) can mount the same Amazon EFS file system at once.

Amazon EFS Key Facts

  • Protocol: NFSv4.1 (Linux). Not for Windows.
  • Scope: Regional. Mount targets are created in each AZ of your VPC.
  • Scaling: Automatically scales from MB to PB without provisioning.
  • Performance modes: General Purpose (default, low latency) and Max I/O (higher throughput, slightly higher latency).
  • Throughput modes: Bursting (default), Provisioned, Elastic.
  • Storage classes: EFS Standard, EFS Standard-IA, EFS One Zone, EFS One Zone-IA (with lifecycle management moving inactive files to IA automatically).
  • Pricing: Pay for what you use — no pre-provisioning.

Typical Amazon EFS Use Cases

  • Content management systems where multiple web servers share the same asset library.
  • Home directories for developers.
  • Shared application config or secrets across an Auto Scaling group.
  • Container persistent storage (ECS/EKS).
  • Analytics and big-data workloads needing shared POSIX file system.
Amazon EFS is Linux only (NFS protocol). If the scenario says "Windows File Server shared drive" or "SMB protocol," the correct AWS storage service is Amazon FSx for Windows File Server — NOT Amazon EFS. Candidates lose points by defaulting to EFS because it sounds generic.

Amazon FSx — Managed Specialized File Systems

Amazon FSx is a family of managed third-party file systems. There are four flavors, all in CLF-C02 scope:

| Amazon FSx Variant | Protocol | Target Use Case |
|---|---|---|
| Amazon FSx for Windows File Server | SMB / NTFS | Windows applications, Active Directory integration, SharePoint |
| Amazon FSx for Lustre | Lustre | HPC, ML training, genomics, financial modeling |
| Amazon FSx for NetApp ONTAP | NFS, SMB, iSCSI | Enterprise NetApp customers lifting and shifting to AWS |
| Amazon FSx for OpenZFS | NFS | Linux workloads needing ZFS snapshots/clones |

Amazon FSx Exam Hooks

  • Windows + SMB + Active Directory → Amazon FSx for Windows File Server.
  • HPC + Lustre + sub-millisecond latency + 100s of GB/s throughput → Amazon FSx for Lustre.
  • NetApp SnapMirror / SnapVault / FlexClone requirements → Amazon FSx for NetApp ONTAP.
  • Linux ZFS clones / snapshots → Amazon FSx for OpenZFS.
For "deep-learning training that streams data from Amazon S3 with low latency" — the answer is Amazon FSx for Lustre. FSx for Lustre can link directly to an S3 bucket and process its objects as files. This is a repeat scenario question.
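The four hooks above collapse into a keyword-to-variant lookup, which makes a handy drill. This is a study aid of our own construction — the keyword triggers are a simplification of the hooks, not anything AWS publishes:

```python
# Map the protocol/keyword in a question to the FSx variant (study aid only).
FSX_BY_KEYWORD = {
    "smb": "Amazon FSx for Windows File Server",
    "active directory": "Amazon FSx for Windows File Server",
    "lustre": "Amazon FSx for Lustre",
    "hpc": "Amazon FSx for Lustre",
    "ontap": "Amazon FSx for NetApp ONTAP",
    "snapmirror": "Amazon FSx for NetApp ONTAP",
    "zfs": "Amazon FSx for OpenZFS",
}

def pick_fsx(question: str) -> str:
    """Return the FSx variant triggered by the first matching keyword."""
    q = question.lower()
    for keyword, variant in FSX_BY_KEYWORD.items():
        if keyword in q:
            return variant
    return "re-read the question"

print(pick_fsx("Windows file share over SMB with Active Directory"))
# -> Amazon FSx for Windows File Server
```

The point of the drill is speed: the protocol word in the question should resolve to a variant before you finish reading the answer options.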

AWS Storage Gateway — Hybrid Storage

AWS Storage Gateway is a hybrid cloud storage service that connects on-premises applications to AWS storage services. You deploy a virtual or hardware appliance on-premises; it caches hot data locally and asynchronously moves data to AWS.

AWS Storage Gateway Modes

  • S3 File Gateway — exposes files to on-prem via NFS or SMB; stores them as objects in Amazon S3. Use for file-based workloads that need cloud durability.
  • FSx File Gateway — on-premises cache for Amazon FSx for Windows File Server.
  • Volume Gateway — presents iSCSI block volumes to on-prem; two modes (Cached and Stored). Backed by Amazon S3, with point-in-time snapshots as Amazon EBS snapshots.
  • Tape Gateway — virtual tape library (VTL) that replaces physical tape. Backups go to S3 and archive to S3 Glacier / Glacier Deep Archive. Drop-in for NetBackup, Veeam, Veritas, etc.

Typical exam scenario: "A company has on-premises backup software that writes to physical tape. They want to eliminate tape robots and move archives to the cloud." → AWS Storage Gateway Tape Gateway.

AWS Backup — Centralized Backup

AWS Backup is a fully managed, policy-driven backup service that centralizes and automates backup across 15+ AWS storage services and compute services, including:

  • Amazon EBS volumes
  • Amazon EC2 instances (application-consistent)
  • Amazon RDS / Aurora / DynamoDB
  • Amazon EFS
  • Amazon FSx
  • Storage Gateway volumes
  • Amazon S3 (object-level)
  • VMware workloads via AWS Storage Gateway

AWS Backup Features

  • Backup plans — schedule, retention, lifecycle tier to cold storage.
  • Cross-Region copy and cross-account copy for DR.
  • AWS Backup Vault Lock — WORM protection for backups (compliance).
  • AWS Backup Audit Manager — monitor backup compliance against policies.
If a question describes "a single service to manage backups across Amazon EBS, Amazon RDS, Amazon EFS, Amazon DynamoDB, and AWS Storage Gateway with one retention policy," the answer is AWS Backup, not Amazon S3 lifecycle rules or individual service backups. AWS Backup is the centralized backup plane across AWS storage services and beyond.
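A backup plan is essentially one schedule plus one retention policy applied across services. The dictionary below loosely follows the shape of an AWS Backup plan (cron schedule, vault, lifecycle), but every name and value is an illustrative assumption, not a tested plan:

```python
# Sketch of a centralized AWS Backup plan: one policy for many services.
# Loosely mirrors a backup-plan payload; all names/values are invented.
backup_plan = {
    "BackupPlanName": "org-nightly",                # hypothetical plan name
    "Rules": [{
        "RuleName": "nightly-1y",
        "TargetBackupVaultName": "central-vault",   # hypothetical vault
        "ScheduleExpression": "cron(0 5 * * ? *)",  # 05:00 UTC daily
        "Lifecycle": {
            "MoveToColdStorageAfterDays": 30,       # tier recovery points to cold
            "DeleteAfterDays": 365,                 # retention: one year
        },
    }],
}

rule = backup_plan["Rules"][0]
# Cold-storage tiering must happen before deletion, never after.
assert rule["Lifecycle"]["MoveToColdStorageAfterDays"] < rule["Lifecycle"]["DeleteAfterDays"]
print(rule["RuleName"], "->", rule["TargetBackupVaultName"])
```

One plan, one vault, one retention clock — that "single policy" shape is exactly the tell that distinguishes AWS Backup answers from per-service snapshot answers.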

AWS Elastic Disaster Recovery (AWS DRS)

AWS Elastic Disaster Recovery (previously CloudEndure Disaster Recovery) lets you recover servers (physical, virtual, or cloud) into AWS after a disaster. It continuously replicates source machines at the block level into a staging area in your AWS account — typically cheap Amazon EBS staging volumes — and then launches full-capacity replicas on demand during a failover event.

Exam positioning:

  • AWS Backup → scheduled backups (RPO minutes-hours, RTO hours-days).
  • AWS DRS → real-time disaster recovery (RPO seconds, RTO minutes).
  • AWS Storage Gateway → hybrid day-to-day storage, not purely DR.

Comparison: Amazon S3 vs Amazon EBS vs Amazon EFS vs Amazon FSx

This is the highest-frequency AWS storage services question pattern in CLF-C02. Keep this table in your head:

| Attribute | Amazon S3 | Amazon EBS | Amazon EFS | Amazon FSx |
|---|---|---|---|---|
| Paradigm | Object | Block | File | File |
| Access | HTTPS API | Block device on EC2 | NFS mount | SMB / NFS / Lustre |
| Scope | Region | AZ | Region | AZ or Multi-AZ (by variant) |
| Concurrent clients | Thousands | 1 EC2 (usually) | Many EC2 | Many EC2 / on-prem |
| Durability | 11 9's | Replicated in AZ | 11 9's (Multi-AZ) | Replicated |
| OS support | Any (API) | Any | Linux only | Windows / Linux (varies) |
| Typical size | Unlimited | 1 GiB to 64 TiB | PB | PB |
| Price scale | Cheapest for cold data | Mid | Higher than EBS per GB | Premium |
| Main exam hook | "static content / backup / archive" | "attach disk to EC2" | "share files across EC2 (Linux)" | "Windows file share / HPC Lustre" |

Object vs Block vs File — 30-Second Decision Rule

  • Object → "Store a file that I'll fetch through an HTTP API" → Amazon S3.
  • Block → "Attach a hard drive to one EC2 instance" → Amazon EBS.
  • File → "Multiple machines need to see the same file system" → Amazon EFS (Linux) or Amazon FSx (Windows / HPC).
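The 30-second rule reads naturally as a three-way branch. A sketch for self-testing — the keyword triggers are our own simplification of the rule above, not an exhaustive classifier:

```python
def pick_storage(need: str) -> str:
    """Apply the object/block/file decision rule to a scenario phrase."""
    n = need.lower()
    if "boot" in n or "attach" in n or "disk" in n:
        return "Amazon EBS"                   # block: one disk, one instance
    if "share" in n or "multiple" in n or "mount" in n:
        return "Amazon EFS or Amazon FSx"     # file: many clients, one tree
    return "Amazon S3"                        # object: default for blobs over HTTP

print(pick_storage("attach a hard drive to one EC2 instance"))
print(pick_storage("multiple machines need the same file system"))
print(pick_storage("store backups fetched through an HTTP API"))
```

Note the branch order: "boot" and "attach" are checked first, because the classic trap (covered below) is a Linux boot volume that tempts you toward EFS.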

Common Exam Traps for AWS Storage Services

Every CLF-C02 attempt features at least 4-6 AWS storage services questions. Recognize the traps:

  1. Amazon S3 cannot boot an EC2 instance — you need Amazon EBS (or instance store). A question saying "EC2 boot volume" always means Amazon EBS.
  2. Amazon EBS can't be shared across AZs. If two EC2 instances in different AZs need the same disk, the answer is Amazon EFS (or FSx), not Amazon EBS.
  3. Amazon EFS is Linux-only, Amazon FSx for Windows File Server is Windows.
  4. Durability ≠ availability. 11 9's is durability (data loss probability). Availability is a different percentage and is lower.
  5. S3 Intelligent-Tiering is the answer when the question says "unknown or changing access patterns and you don't want to set up lifecycle rules."
  6. S3 One Zone-IA loses data if the AZ is destroyed — only use for re-creatable data.
  7. Amazon EBS snapshots are stored in S3 but you don't see the bucket; they are incremental.
  8. AWS Storage Gateway Tape Gateway replaces physical tape for backup software.
  9. AWS Backup is the centralized plane — don't confuse with individual service-level snapshots.
  10. Amazon S3 Glacier Deep Archive is cheapest but slowest (12-48 hours to restore).
A question asks: "Which AWS storage service should you choose for the EC2 boot volume of a Linux web server?" The trick is that candidates read "Linux" and pick Amazon EFS. Wrong. Boot volumes require block storage → Amazon EBS. Amazon EFS is a data share, not a boot disk. Read the question for "boot" first, then decide.

AWS Storage Services vs Database Services — Scope Boundary

This is the 3.6 vs 3.4 boundary exam trap. Remember:

  • Amazon S3 is an AWS storage service, not a database. It has no SQL, no query engine, and no transactions.
  • Amazon DynamoDB is a database, not an AWS storage service — even though it stores data.
  • Amazon EBS is AWS storage for EC2; Amazon RDS uses Amazon EBS internally but is classified as a database service.

If the question mentions "store the backup file" → Amazon S3. If the question mentions "query structured records with SQL" → database service.

Key Numbers and Must-Memorize Facts for AWS Storage Services

  • Amazon S3 durability: 11 9's (99.999999999%).
  • Amazon S3 max object size: 5 TB; max single PUT: 5 GB.
  • Amazon S3 storage classes: 7 (Standard, Intelligent-Tiering, Standard-IA, One Zone-IA, Glacier Instant, Glacier Flexible, Glacier Deep Archive).
  • S3 Glacier Deep Archive retrieval: 12-48 hours.
  • S3 Standard availability: 99.99%.
  • Amazon EBS scope: one AZ.
  • Amazon EBS volume types: gp3, gp2, io2, io1, st1, sc1.
  • Amazon EFS protocol: NFSv4.1, Linux only.
  • Amazon FSx variants: Windows File Server, Lustre, NetApp ONTAP, OpenZFS.
  • AWS Storage Gateway modes: S3 File, FSx File, Volume (Cached/Stored), Tape.
AWS storage services numbers to burn into memory before exam day: S3 durability 11 9's, S3 max object 5 TB, S3 storage classes = 7, EBS is AZ-scoped, EFS is Linux-NFS, FSx has 4 variants (Windows/Lustre/ONTAP/OpenZFS), Storage Gateway has 4 modes (S3 File / FSx File / Volume / Tape). These stats show up word-for-word in exam options.

Pricing Signals for AWS Storage Services

You don't need to calculate prices, but you need to know relative ordering:

  • Cheapest per GB-month — Amazon S3 Glacier Deep Archive.
  • Most expensive per GB-month — Amazon FSx for Windows/Lustre (premium managed file).
  • Amazon EBS costs more per GB-month than Amazon S3 Standard for persistent storage.
  • Amazon S3 Standard-IA / One Zone-IA cheaper per GB than Standard but with retrieval fees.
  • Data transfer IN to AWS storage services = free. Data transfer OUT to the Internet = paid.
  • Cross-AZ and cross-Region replication costs extra.

How AWS Storage Services Connect to Other CLF-C02 Topics

  • Compute (3.3) — EC2 uses EBS for block, EFS/FSx for shared file; Lambda reads S3 events.
  • Database (3.4) — RDS and DynamoDB are backed by internal storage but are classified separately; AWS Backup spans databases and storage.
  • Network (3.5) — S3 endpoints (Gateway VPC Endpoint) for private access; Storage Gateway uses VPN or Direct Connect.
  • Security (2.4) — Amazon Macie scans S3 for PII; KMS encrypts S3, EBS, EFS, FSx.
  • Migration (1.3) — AWS Snow Family (Snowcone, Snowball, Snowmobile) moves PB-scale data into Amazon S3; DataSync migrates file systems; Storage Gateway is for ongoing hybrid.

Practice Scenario Patterns for AWS Storage Services

Pattern 1: "A company wants to archive compliance logs that are accessed at most once a year and need 7-year retention." → S3 Glacier Deep Archive with a 7-year lifecycle expiration rule.

Pattern 2: "A team of 20 EC2 Linux web servers in 3 AZs all need to read and write the same content directory." → Amazon EFS.

Pattern 3: "A Windows application server needs a high-performance SMB share with Active Directory integration." → Amazon FSx for Windows File Server.

Pattern 4: "A database runs on EC2 and needs 20,000 consistent IOPS on a 1 TB volume." → Amazon EBS io2 (or io1).

Pattern 5: "A company wants to replace an on-premises tape library used by Veeam." → AWS Storage Gateway Tape Gateway.

Pattern 6: "A company wants a single backup policy across Amazon EBS, Amazon RDS, and Amazon DynamoDB." → AWS Backup.

Pattern 7: "Minimize cost for objects whose access pattern is unknown; no lifecycle rules." → S3 Intelligent-Tiering.

Pattern 8: "Cheapest Amazon S3 storage for easily re-creatable data." → S3 One Zone-IA.
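The eight patterns above can be turned into a self-check flash quiz. The mapping below is a study aid built from those patterns, not an AWS artifact:

```python
# Flash quiz built from the eight scenario patterns above.
PATTERNS = {
    "compliance logs, yearly access, 7-year retention": "S3 Glacier Deep Archive",
    "many Linux EC2 servers share one content directory": "Amazon EFS",
    "Windows SMB share with Active Directory": "Amazon FSx for Windows File Server",
    "20,000 consistent IOPS on one EC2 volume": "Amazon EBS io2 (or io1)",
    "replace on-premises tape library": "AWS Storage Gateway Tape Gateway",
    "one backup policy across EBS, RDS, DynamoDB": "AWS Backup",
    "unknown access pattern, no lifecycle rules": "S3 Intelligent-Tiering",
    "cheapest S3 for re-creatable data": "S3 One Zone-IA",
}

def quiz():
    """Print each scenario, then its answer — cover the right column to drill."""
    for scenario, answer in PATTERNS.items():
        print(f"{scenario} -> {answer}")

quiz()
```

Drill until each right-hand answer surfaces before you finish reading the left-hand scenario — that is the sub-20-second reflex the exam rewards.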

FAQ — AWS Storage Services Top Questions

Q1: What is the difference between Amazon S3 and Amazon EBS?

Amazon S3 is object storage accessed over HTTPS API; you can't mount it as a disk. Amazon EBS is block storage attached to a single EC2 instance like a hard drive. Use Amazon S3 for files, backups, static content, and data lakes. Use Amazon EBS for EC2 boot volumes and databases running on EC2.

Q2: Can I attach one Amazon EBS volume to multiple EC2 instances?

Normally no — Amazon EBS is AZ-scoped and attaches to one EC2 instance. The exception is Amazon EBS Multi-Attach with io1/io2 volumes for clustered workloads in the same AZ, but this is outside CLF-C02 depth. For multi-instance sharing, use Amazon EFS (Linux NFS) or Amazon FSx.

Q3: What's the cheapest S3 storage class and what's the trade-off?

S3 Glacier Deep Archive is the cheapest S3 storage class. The trade-off is retrieval time: 12 to 48 hours, minimum 180-day storage duration, and a per-GB retrieval fee. Use it only for regulatory archives that you rarely or never touch.

Q4: Is Amazon S3 regional or global?

Amazon S3 is a regional AWS storage service: objects are stored in the Region you choose and replicated across a minimum of three Availability Zones. Bucket names, however, are globally unique across all AWS accounts. CloudFront can cache S3 content globally at edge locations, but the data itself lives in a Region.

Q5: When would I pick Amazon FSx for Lustre over Amazon S3?

Use Amazon FSx for Lustre when your HPC or ML workload needs POSIX file system semantics with sub-millisecond latency and hundreds of GB/s throughput. FSx for Lustre can pull data directly from Amazon S3 and expose it as a file system — that's the best-of-both-worlds pattern for deep-learning training pipelines.

Q6: What's the difference between AWS Backup and Amazon S3?

AWS Backup is a policy-driven backup orchestration service across 15+ AWS storage services and data services — it stores recovery points and enforces retention. Amazon S3 is an underlying object storage service. AWS Backup may use Amazon S3 internally, but from an exam standpoint: if the question asks "how do I manage backups across Amazon EBS + Amazon RDS + Amazon EFS," the answer is AWS Backup.

Q7: Does Amazon EFS work on Windows?

No. Amazon EFS only supports the NFS protocol and is for Linux workloads. For Windows SMB file shares, use Amazon FSx for Windows File Server. This is one of the most repeated CLF-C02 traps.

Final Study Tips for AWS Storage Services

  1. Drill the Amazon S3 vs Amazon EBS vs Amazon EFS triangle until it's reflex — this alone is worth 3-5 exam points.
  2. Memorize the seven S3 storage classes in order and tie each to a one-line use case.
  3. Learn the four Amazon FSx flavors by protocol (SMB / Lustre / ONTAP / ZFS).
  4. Know AWS Storage Gateway's four modes; "tape library replacement" = Tape Gateway.
  5. AWS Backup is the centralized AWS storage services backup plane — never confuse with individual service snapshots.
  6. For every AWS storage services scenario, first classify the paradigm (object / block / file / archive / hybrid / backup); the right AWS service falls out almost automatically.

Master these AWS storage services patterns and the Domain 3 "3.6 Identify AWS storage services" task statement becomes a reliable source of points on the CLF-C02 exam. Good luck on exam day.
