examhub.cc · The most efficient path to the most valuable certifications.
Vol. I

Data Lifecycle, Retention, and Object Lock

4,820 words · ≈ 25 min read

Why Data Lifecycle Matters in SCS-C02

Data lifecycle is the spine of Domain 5 Task 5.3 in SCS-C02. The exam guide explicitly tells you that you must "design S3 Lifecycle mechanisms to retain data for required retention periods (for example, S3 Object Lock, S3 Glacier Vault Lock, S3 Lifecycle policy)" and "design automatic lifecycle management for AWS services and resources (for example, Amazon S3, EBS volume snapshots, RDS volume snapshots, AMIs, container images, CloudWatch log groups, Amazon Data Lifecycle Manager [Amazon DLM])." A modern security engineer is not just thinking about how data is encrypted at rest; you also need to decide how long data lives, when it transitions to colder storage, and whether it can ever be modified or deleted before its retention term expires. That is the entire scope of data lifecycle on AWS.

The reason data lifecycle gets its own task statement is that retention is where security, cost, and compliance collide. A 7-year retention requirement for SOX or SEC 17a-4 financial records cannot be solved with encryption alone — you need WORM (Write Once Read Many) protection, immutable archives, and an audit trail. A GDPR right-to-erasure request cannot be solved with Glacier Vault Lock — you need a flexible lifecycle that still allows authorized deletion. The tension between "must keep" and "must be able to delete" is what makes data lifecycle questions hard, and what makes the SCS-C02 distractors so similar at first glance.

This deep dive walks through every service you can be tested on: S3 Lifecycle to Glacier Deep Archive, S3 Object Lock governance vs compliance, Glacier Vault Lock for vault-level immutability, EBS snapshot lifecycle through DLM, RDS snapshot retention, AMI deprecation and EC2 Image Builder lifecycle policies, CloudWatch Logs retention, and AWS Backup with cross-region/cross-account copy and Vault Lock. By the end of the data lifecycle journey, you should be able to draw the correct architecture for any retention scenario the exam throws at you.

The Five Pillars of AWS Data Lifecycle

Before diving into individual services, frame the data lifecycle landscape as five pillars. Each pillar answers one question, and exam scenarios usually combine two or three of them at once.

Pillar 1: Storage Class Transition

The first data lifecycle pillar is moving data from hotter (expensive, low-latency) tiers to colder (cheap, high-latency) tiers. S3 Lifecycle rules transition objects from S3 Standard → Standard-IA → Glacier Instant Retrieval → Glacier Flexible Retrieval → Glacier Deep Archive. There is also the Intelligent-Tiering class which automates this without lifecycle rules, but Intelligent-Tiering does not solve compliance retention — it only solves cost.

Pillar 2: Expiration / Deletion

The second data lifecycle pillar is removing data when it is no longer needed. S3 expiration rules, EBS snapshot retention counts in DLM, RDS automated snapshot retention windows, CloudWatch Logs retention settings, and AWS Backup retention rules all answer the question "when does this data get deleted?"

Pillar 3: Immutability (WORM)

The third data lifecycle pillar is making sure data cannot be deleted or modified before its retention term ends. This is where Object Lock, Glacier Vault Lock, and Backup Vault Lock live. Immutability is a regulatory requirement, not a cost optimisation.

Pillar 4: Cross-Region / Cross-Account Copy

The fourth data lifecycle pillar protects against blast-radius incidents. If an attacker compromises the production account, on-account immutability is not enough — the attacker can disable retention before it activates or simply deny themselves access. Cross-account and cross-region copies in AWS Backup, S3 Replication with Replica Lock, and DLM cross-region copy ensure your retained data survives even an account-level breach.

Pillar 5: Compliance Frameworks

The fifth data lifecycle pillar is the regulatory layer that drives the previous four. SEC 17a-4(f), FINRA 4511, HIPAA, PCI-DSS 10.7, GDPR, and SOX all impose specific data lifecycle requirements. AWS publishes a "SEC 17a-4 Cohasset Assessment" demonstrating that S3 Object Lock in compliance mode satisfies WORM rules; this assessment is a frequent reference in SCS-C02 question stems.

Whenever a data lifecycle question appears, ask which pillar is being tested: transition, expiration, immutability, cross-region copy, or compliance framework. Many wrong answers solve the wrong pillar — for example, picking S3 Intelligent-Tiering when the requirement is 7-year WORM (immutability pillar, not transition pillar). S3 storage classes

S3 Lifecycle Policies in Depth

S3 Lifecycle is the single most-tested data lifecycle feature on SCS-C02. An S3 Lifecycle configuration is an XML/JSON document attached to a bucket. Each rule has a filter (prefix, tag, object size), and one or more actions: Transition, Expiration, NoncurrentVersionTransition, NoncurrentVersionExpiration, AbortIncompleteMultipartUpload.

Transition Rules

A transition rule looks like "after 30 days, move objects with prefix logs/ from Standard to Standard-IA; after 90 days move to Glacier Flexible Retrieval; after 365 days move to Glacier Deep Archive." Standard-IA and One Zone-IA have a minimum 30-day storage duration; transitioning into them earlier is not allowed. Glacier Flexible Retrieval and Deep Archive have minimum storage durations of 90 and 180 days respectively, billed even if you delete the object early. There is also a per-object transition fee, so transitioning very small objects to Glacier can cost more than just leaving them in Standard. By default, S3 Lifecycle also skips transitions to the IA classes for objects smaller than 128 KB.
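The transition rule described above can be sketched as a boto3-style `put_bucket_lifecycle_configuration` payload. The rule ID and bucket name are hypothetical placeholders; this is a shape sketch, not a production policy.

```python
# Sketch: the S3 Lifecycle transition rule from the text, as a boto3-style
# LifecycleConfiguration payload. Rule ID and bucket name are hypothetical.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-logs",                   # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},          # only objects under logs/
            "Transitions": [
                {"Days": 30,  "StorageClass": "STANDARD_IA"},
                {"Days": 90,  "StorageClass": "GLACIER"},       # Glacier Flexible Retrieval
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}

# Applying it would look like this (requires boto3 and credentials; not run here):
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-log-bucket", LifecycleConfiguration=lifecycle_config)
```

Note the storage-class enum values (`STANDARD_IA`, `GLACIER`, `DEEP_ARCHIVE`) differ from the marketing names used in prose.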

Expiration Rules

Expiration rules permanently delete objects after N days. For versioned buckets, expiration adds a delete marker; the previously current version becomes a noncurrent version rather than being removed. To actually purge old versions you also need a NoncurrentVersionExpiration rule. This two-rule pattern is a classic SCS-C02 trap: candidates set Expiration but forget NoncurrentVersionExpiration, leaving "deleted" objects living forever as noncurrent versions and burning storage cost.

Abort Multipart Uploads

The AbortIncompleteMultipartUpload action cleans up parts from multipart uploads that never completed. Without this rule, partial uploads from failed jobs accumulate forever and you pay for them silently. AWS Trusted Advisor flags this; many security audits also flag it because partial uploads are an unmonitored data surface.
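The expiration trap and the multipart cleanup can be combined in one rule. A minimal sketch, with a hypothetical prefix and rule name:

```python
# Sketch: the two-rule expiration pattern for a versioned bucket, plus
# multipart-upload cleanup. Without NoncurrentVersionExpiration, "expired"
# objects linger forever as noncurrent versions.
expiration_rules = {
    "Rules": [
        {
            "ID": "expire-temp-data",               # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": "tmp/"},
            "Expiration": {"Days": 365},            # adds a delete marker at day 365
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},  # purges old versions
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}
```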

Lifecycle Filters

Filters can match by prefix (logs/2026/), by tag (tag:DataClassification=public), by object size (greater-than/less-than), or by combinations using And. Filtering by tag is the most useful pattern for security: combine Macie classification tags with lifecycle rules so that data automatically tagged "PII" gets a 7-year retention and "ephemeral" tags get 30-day expiration.

S3 Storage Lens shows you predicted lifecycle savings before you apply rules. Always preview transitions on a non-production bucket first. Cross-Region Replication metrics also indicate whether replication or lifecycle is dominating your storage growth. S3 Storage Lens

Lifecycle Interaction With Replication

Lifecycle rules apply per-bucket. If you replicate a bucket cross-region, each bucket has its own lifecycle. A common architecture: source bucket transitions to Glacier Deep Archive after 90 days; replica bucket in another region also transitions to Deep Archive on the same schedule. Without this, you get an expensive cross-region replica sitting in Standard while your primary lives in Deep Archive.

S3 Object Lock — Governance vs Compliance

S3 Object Lock is the WORM feature for S3 and the most exam-critical piece of data lifecycle immutability. Object Lock is enabled at bucket creation (you cannot enable it on a pre-existing bucket without contacting AWS Support, with limited exceptions) and requires versioning. Object Lock has two retention modes plus a separate legal hold control.

Governance Mode

Governance mode prevents most users from deleting or overwriting protected objects, but principals with the s3:BypassGovernanceRetention IAM permission can override the lock. Governance mode is appropriate for internal data lifecycle policy enforcement: "engineers cannot delete production logs, but the security team with bypass permission still can if needed." Governance mode does not satisfy strict regulatory WORM. If an exam scenario says "auditors require that no one, including the AWS account root, can delete the data before the retention period expires," governance mode is wrong.

Compliance Mode

Compliance mode is true WORM. Once a retention period is set in compliance mode, no principal — not the bucket owner, not a power user, not even the AWS account root user — can shorten the retention period or delete the object before retention expires. The only way to delete is to wait for the retention period to expire or to delete the entire AWS account (which AWS does not let you do quickly while compliance-mode objects exist). Compliance mode is the answer for SEC 17a-4(f), FINRA 4511, CFTC 1.31, and similar broker-dealer record retention rules.

Retention Periods and Modes of Application

Object Lock retention can be applied per-object (PutObject with a Retention header) or via a bucket-default retention configuration. The bucket-default applies to new objects only; existing objects keep whatever retention they had at upload time. Retention can be specified in days or years; you cannot mix.

Legal hold is independent of retention modes. It is an on/off flag (s3:PutObjectLegalHold permission) that prevents deletion regardless of retention. Legal holds have no expiry — they remain until explicitly removed. Use legal hold for "freeze evidence pending litigation" scenarios where you do not yet know how long retention needs to be.
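The three Object Lock controls — bucket-default retention, per-object retention, and legal hold — map to three distinct API payloads. A sketch with hypothetical dates (the calls are shown as comments and not executed):

```python
from datetime import datetime, timezone

# Sketch: Object Lock settings as boto3-style request payloads.

# Bucket-default retention (applies to NEW objects only):
default_lock = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
}

# Per-object retention, set at or after upload:
retention = {
    "Mode": "COMPLIANCE",
    "RetainUntilDate": datetime(2033, 1, 1, tzinfo=timezone.utc),
}

# Legal hold: an on/off flag with no expiry, independent of retention mode.
legal_hold = {"Status": "ON"}

# e.g. s3.put_object_lock_configuration(Bucket=..., ObjectLockConfiguration=default_lock)
#      s3.put_object_retention(Bucket=..., Key=..., Retention=retention)
#      s3.put_object_legal_hold(Bucket=..., Key=..., LegalHold=legal_hold)
```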

You cannot reduce a compliance-mode retention period after it is set; you can only extend it. You cannot disable Object Lock on a bucket that has ever had compliance-mode objects with active retention. Test in a sandbox first. The Cohasset SEC 17a-4 assessment specifically validates compliance mode for broker-dealer record retention. S3 Object Lock overview

A frequent SCS-C02 distractor: a question describes broker-dealer records and offers governance mode as one option. Governance mode allows authorized bypass; SEC 17a-4(f) explicitly prohibits any administrative bypass. The correct answer is Object Lock in compliance mode plus an Object Lock retention period equal to the regulatory term. Cohasset SEC 17a-4 assessment

Object Lock and Replication

If you enable Cross-Region Replication on an Object Lock bucket, the replica bucket must also have Object Lock enabled. Replicated objects keep their retention attributes. This means a 7-year compliance-mode object in us-east-1 is still WORM-protected in us-west-2 after replication. Combine this with Block Public Access and SSE-KMS for a defense-in-depth retained data architecture.

Glacier Vault Lock — Vault-Level Immutability

S3 Glacier Vault Lock is older than Object Lock but still appears on SCS-C02. Vault Lock applies a vault access policy that, once locked, becomes immutable. The data lifecycle workflow is two-step: first, you initiate a vault lock with a policy (status InProgress); within 24 hours you must CompleteVaultLock to make the policy permanent. After completion, the policy can never be edited or removed — you can only abort within the 24-hour window.

Vault Lock vs Object Lock

The differences matter on the exam. Vault Lock is at the vault level (one policy applies to all archives in the vault), while Object Lock is per-object (each object has its own retention). Vault Lock is for direct Glacier API users (rare in modern AWS); Object Lock is for S3 buckets (which transition to the Glacier storage class via lifecycle, not to a Glacier vault). If a question mentions "Glacier vault" it means the legacy Glacier service. If it mentions "S3 Glacier storage class" it means S3 with the Glacier tier.

Vault Lock Policy Patterns

A typical Vault Lock policy denies glacier:DeleteArchive unless the archive is older than 7 years. Once locked, no IAM administrator can ever change this. Vault Lock satisfies the same SEC 17a-4 requirements as S3 Object Lock compliance mode.
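A sketch of such a policy, using the `glacier:ArchiveAgeInDays` condition key. The account ID and vault name are hypothetical placeholders:

```python
# Sketch: a Glacier Vault Lock policy denying archive deletion until archives
# are 7 years (2,555 days) old. Account ID and vault name are hypothetical.
vault_lock_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "deny-delete-before-7-years",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "glacier:DeleteArchive",
            "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/records",
            "Condition": {
                "NumericLessThan": {"glacier:ArchiveAgeInDays": "2555"}
            },
        }
    ],
}
# Two-step workflow: initiate-vault-lock with this policy (status InProgress),
# then complete-vault-lock within 24 hours to make it permanent.
```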

EBS Snapshot Lifecycle with DLM

Amazon Data Lifecycle Manager (DLM) is the managed service for automating EBS volume snapshots, EBS-backed AMIs, and cross-account copy. DLM eliminates the need to write custom Lambda functions that take snapshots on a schedule. For SCS-C02, you should know that DLM is the AWS-recommended solution for "schedule + retention" data lifecycle on block storage.

DLM Policy Anatomy

A DLM policy has: target tags (which resources to snapshot), schedules (how often), retention rule (count or age), cross-region copy rule (optional), cross-account share rule (optional), and tags to apply to the new snapshot. Because targeting is tag-based, you can implement a tag-driven data lifecycle: any volume tagged Backup=Daily automatically gets a daily snapshot retained for 30 days.
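The tag-driven pattern above can be sketched as a boto3-style `create_lifecycle_policy` `PolicyDetails` payload. Tag keys, schedule times, and retention counts are illustrative assumptions:

```python
# Sketch: a DLM policy snapshotting every volume tagged Backup=Daily,
# keeping 30 daily snapshots. Tag values and times are assumptions.
policy_details = {
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "Backup", "Value": "Daily"}],  # tag-based targeting
    "Schedules": [
        {
            "Name": "DailySnapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 30},   # keep the 30 most recent snapshots
            "CopyTags": True,              # propagate volume tags to snapshots
        }
    ],
}
# e.g. dlm.create_lifecycle_policy(ExecutionRoleArn=..., Description=...,
#                                  State="ENABLED", PolicyDetails=policy_details)
```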

Cross-Region and Cross-Account Copy

DLM supports cross-region copy of snapshots and AMIs, with optional re-encryption using a destination-region KMS key. Cross-account copy requires the destination account to opt-in; this is the recommended pattern for an isolated backup account in a multi-account organization. The combination of DLM cross-account copy + AWS Backup Vault Lock in the destination account is the canonical answer for "ransomware-resistant EBS snapshots."

Fast Snapshot Restore

DLM can also activate Fast Snapshot Restore (FSR) on retained snapshots, paying a per-AZ-hour fee in exchange for instant restoration. FSR is a recovery-time consideration, not a security one, but you may see it in disaster recovery questions adjacent to data lifecycle.

Use a single tag like BackupClass=Tier1 on EBS volumes. One DLM policy targets the tag and applies the right schedule + retention + cross-region copy. As volumes are created, just apply the tag — the data lifecycle is automatic. DLM tag-based targeting

RDS Snapshot Retention

Amazon RDS has two snapshot types and they behave differently for data lifecycle. Automated backups are taken daily during the backup window and retained for a configurable 1–35 day window. When the retention window expires, automated backups are permanently deleted. Manual snapshots are user-initiated and never expire automatically; they live until you delete them or until the underlying RDS instance is deleted (manual snapshots survive instance deletion).
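Setting the automated-backup retention window is a one-call change. A sketch as boto3-style `modify_db_instance` parameters (the instance identifier is hypothetical):

```python
# Sketch: configuring RDS automated backup retention and the backup window.
modify_params = {
    "DBInstanceIdentifier": "prod-db",        # hypothetical instance
    "BackupRetentionPeriod": 35,              # maximum automated retention: 35 days
    "PreferredBackupWindow": "03:00-04:00",   # must not overlap the maintenance window
    "ApplyImmediately": False,                # defer until the next maintenance window
}
# rds.modify_db_instance(**modify_params)
```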

Backup Window vs Maintenance Window

The backup window is when automated snapshots are taken; the maintenance window is when patches and minor version upgrades happen. They are independent. Both should be in low-traffic hours.

RDS Export to S3

RDS can export snapshots to S3 in Apache Parquet format. Once in S3 you can apply Object Lock, lifecycle rules, and cross-region replication. This is the recommended pattern for keeping RDS data beyond the 35-day automated retention limit while still benefiting from columnar query via Athena.

Cross-Region Automated Backups

RDS supports cross-region automated backups for some engines (PostgreSQL, MySQL, MariaDB, Oracle, SQL Server). This data lifecycle feature replicates both the daily snapshot and transaction logs to a secondary region with a separate retention setting. It is the simplest way to satisfy "RPO ≤ 5 minutes, cross-region" for relational data.

Automated backup retention: 1–35 days. Manual snapshots: unlimited until deleted. Backup window: minimum 30 minutes. PITR (point-in-time-recovery): up to retention boundary, granularity of 5 minutes. These exact numbers appear in SCS-C02 distractors. RDS backup overview

AMI Lifecycle and EC2 Image Builder

AMIs do not delete themselves. Without a lifecycle policy, every AMI you ever created sits in your account, each one anchoring its own EBS snapshot. Three mechanisms manage AMI data lifecycle: AMI deprecation, AMI disable, and EC2 Image Builder lifecycle policies.

AMI Deprecation

EnableImageDeprecation sets a deprecation date on an AMI. Deprecated AMIs no longer appear in default DescribeImages results but can still launch instances if referenced by ID. Deprecation is a soft signal: "stop using this AMI." It does not delete data.

AMI Disable

DisableImage (introduced 2023) makes an AMI un-launchable while keeping the underlying snapshot. Disabling is reversible. Disabled AMIs are useful when you need to investigate whether an AMI is still referenced before deleting it.
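The two soft lifecycle signals map to two EC2 calls. A sketch with a hypothetical AMI ID (the calls are shown as comments and not executed):

```python
from datetime import datetime, timezone

# Sketch: AMI deprecation (hide from default DescribeImages) vs disable
# (un-launchable but reversible). The AMI ID is a hypothetical placeholder.
deprecate_params = {
    "ImageId": "ami-0123456789abcdef0",
    "DeprecateAt": datetime(2027, 1, 1, tzinfo=timezone.utc),
}
disable_params = {"ImageId": "ami-0123456789abcdef0"}

# ec2.enable_image_deprecation(**deprecate_params)
# ec2.disable_image(**disable_params)
# Note: deregistering the AMI later does NOT delete its EBS snapshots.
```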

Image Builder Lifecycle Policies

EC2 Image Builder supports retention rules that automatically deprecate, disable, or delete AMIs based on age, count, or tags. Policies can also delete the underlying EBS snapshots, completing the data lifecycle. Use Image Builder lifecycle policies in any environment with frequent AMI rebuilds (golden image pipelines).

An AMI is metadata pointing to one or more EBS snapshots plus launch parameters. Deleting (deregistering) an AMI does not delete the underlying snapshots — you have to delete those separately, or use Image Builder lifecycle policies that delete both. This dual-resource data lifecycle is a frequent forgotten cost on AWS bills. AMI deregistration

CloudWatch Logs Retention

CloudWatch Logs retention is the simplest data lifecycle feature in AWS, and the most overlooked. Every log group has a retention setting. The default is "Never expire" — which means costs grow forever unless you change it. SCS-C02 expects you to set explicit retention on every log group; the allowed values include 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1096, 1827, 2192, 2557, 2922, 3288, and 3653 days, mapping to common compliance windows (90 days, 1 year, 5 years, 7 years, 10 years).
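Setting explicit retention is a single call per log group. A sketch with a hypothetical log group name:

```python
# Sketch: a boto3-style put_retention_policy call setting 5-year retention.
retention_params = {
    "logGroupName": "/app/prod/api",   # hypothetical log group
    "retentionInDays": 1827,           # 5 years; must be one of the allowed values
}
# logs = boto3.client("logs")
# logs.put_retention_policy(**retention_params)
```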

Log Group Encryption

Log groups can be encrypted with a KMS customer-managed key. KMS encryption is mandatory for HIPAA workloads and for any log group containing PII. Encryption applies to data ingested after encryption is enabled — it does not retroactively encrypt existing log events.

S3 Export for Long-Term Retention

For retention beyond what CloudWatch Logs supports cost-effectively, export to S3. Once in S3, you get the full S3 data lifecycle toolkit: Lifecycle rules to Glacier Deep Archive, Object Lock for WORM, Cross-Region Replication for DR. Export is one-time, on-demand; for continuous archival use a subscription filter to Kinesis Data Firehose → S3.

Subscription Filters and Real-Time Pipelines

A CloudWatch Logs subscription filter pushes log events to Lambda, Kinesis Data Streams, Firehose, or another log group. Pair Firehose with S3 + Glue + Athena for a long-term searchable log archive that complies with the data lifecycle pillar of expiration plus cross-region copy. This is the canonical SCS-C02 pattern for "centralized 7-year log retention across an organization."
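The continuous-archival hookup described above can be sketched as a boto3-style `put_subscription_filter` payload. All ARNs and names are hypothetical placeholders:

```python
# Sketch: subscription filter streaming every log event from a log group
# to a Firehose delivery stream that lands in S3. ARNs are hypothetical.
subscription_params = {
    "logGroupName": "/app/prod/api",
    "filterName": "archive-to-s3",
    "filterPattern": "",   # empty pattern matches every log event
    "destinationArn": "arn:aws:firehose:us-east-1:123456789012:deliverystream/log-archive",
    "roleArn": "arn:aws:iam::123456789012:role/CWLtoFirehoseRole",
}
# logs.put_subscription_filter(**subscription_params)
```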

AWS Backup — The Cross-Service Data Lifecycle Hub

AWS Backup is a managed service that orchestrates data lifecycle across S3, EBS, EFS, FSx, RDS, DynamoDB, DocumentDB, Neptune, Storage Gateway, VMware, and more. AWS Backup is the SCS-C02 answer when a question requires consistent data lifecycle across multiple services, especially with cross-account or cross-region copy.

Backup Plan Anatomy

A backup plan has rules and selections. A rule defines: schedule (cron), backup window, lifecycle (transition to cold storage at N days, expire at M days), copy destinations (other regions/accounts), and recovery point tags. A selection defines which resources are backed up — by tag, by ARN, or by resource type. Tag-based selection is the AWS Well-Architected pattern.
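A rule with lifecycle and cross-account copy can be sketched as a boto3-style `create_backup_plan` payload. Vault names, the destination ARN, and retention numbers are illustrative assumptions:

```python
# Sketch: a backup plan rule — daily backups, cold storage at 30 days,
# ~7-year retention, copied to a vault in a dedicated backup account.
backup_plan = {
    "BackupPlanName": "prod-daily",
    "Rules": [
        {
            "RuleName": "daily-7y",
            "TargetBackupVaultName": "prod-vault",
            "ScheduleExpression": "cron(0 5 * * ? *)",   # daily at 05:00 UTC
            "Lifecycle": {
                "MoveToColdStorageAfterDays": 30,
                "DeleteAfterDays": 2555,  # must exceed cold-storage days + 90
            },
            "CopyActions": [
                {
                    "DestinationBackupVaultArn":
                        "arn:aws:backup:us-west-2:999999999999:backup-vault:backup-acct-vault",
                    "Lifecycle": {"DeleteAfterDays": 2555},
                }
            ],
        }
    ],
}
# backup.create_backup_plan(BackupPlan=backup_plan)
```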

Backup Vaults

A backup vault is a logical container for recovery points. Each vault has a KMS key for encryption (recovery points inherit). Separating production backups into a dedicated vault in a separate account lets you apply blast-radius isolation: even if production is compromised, the backup account remains intact.

AWS Backup Vault Lock

Backup Vault Lock is the WORM equivalent for AWS Backup. Like Object Lock, it has two modes: governance mode (admins can disable) and compliance mode (no one, including root, can disable). Compliance mode requires a 3-day cooling-off period after enabling, during which it can be aborted; after the cooling-off period the lock is permanent. This is the canonical answer for "ransomware-resistant cross-service backups." Vault Lock prevents shortening retention, deleting recovery points, and changing the lifecycle.
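Enabling Vault Lock in compliance mode can be sketched as a boto3-style `put_backup_vault_lock_configuration` payload (the vault name is hypothetical). Supplying ChangeableForDays is what selects compliance mode; omitting it leaves the lock in governance mode:

```python
# Sketch: Backup Vault Lock in compliance mode with a 3-day cooling-off.
vault_lock_params = {
    "BackupVaultName": "backup-acct-vault",  # hypothetical vault
    "MinRetentionDays": 2555,    # recovery points cannot be deleted earlier
    "ChangeableForDays": 3,      # compliance mode: abortable for 3 days, then permanent
}
# backup.put_backup_vault_lock_configuration(**vault_lock_params)
```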

Cross-Region and Cross-Account Copy

Backup plans can replicate recovery points to other regions and other accounts. Cross-account copy requires AWS Organizations and the backup:CopyIntoBackupVault permission. The destination vault can have its own Vault Lock, providing two layers of immutability.

Backup Audit Manager and Frameworks

AWS Backup Framework lets you define controls (e.g., "all production resources must have a daily backup with 7-day retention") and continuously audits compliance. Reports go to S3 and integrate with Audit Manager for SOC2/PCI evidence collection. This is your data lifecycle compliance evidence layer.

The exam-grade answer for "ransomware can encrypt our snapshots" is: AWS Backup → cross-account copy to a dedicated backup account → that account's vault has Vault Lock in compliance mode. The production account's IAM cannot reach the backup vault, and Vault Lock means even the backup account's admin cannot delete recovery points before retention expires. AWS Backup Vault Lock

Plain-Language Explanation

If the previous sections felt like reading a tax code, this section translates the data lifecycle landscape into plain language with three different analogies.

Analogy 1: The Library

Imagine an enormous library. New books arrive every day and go to the front desk shelves (S3 Standard) where they are easy to grab. After 30 days the librarian moves them to the main reading room (Standard-IA) — still accessible but you have to walk further. After 90 days they go to basement storage (Glacier Flexible Retrieval) and you must request them with a 3-12 hour notice. After a year they go to the off-site warehouse (Glacier Deep Archive) and you wait up to 12 hours. That entire shelf-shuffling routine is S3 Lifecycle policies. Now imagine some books are legal contracts — the librarian must put them in a steel cabinet that even she cannot open until the 7-year retention term expires. That steel cabinet is S3 Object Lock in compliance mode. The librarian's manager owning a master key that can open the cabinet (but only the manager) is governance mode. The Glacier Vault Lock is the same steel cabinet, but for the basement storage room — once locked, the cabinet rules are permanent for the entire room.

Analogy 2: The Kitchen Refrigerator

Your refrigerator has shelves (Standard) for groceries you'll cook this week, a freezer (Standard-IA) for next month's meals, and a deep freezer in the garage (Glacier) for bulk meat. Data lifecycle is the household rule: "after 7 days move it to the freezer; after 60 days move it to the garage; after 6 months throw it out." That rule lives on the fridge as a sticky note — that's your S3 Lifecycle policy. Now imagine your spouse runs a home daycare and is required by law to keep allergen records for 5 years — those records go in a sealed box that even you cannot open. That box is Object Lock compliance mode. The freezer in the garage having a padlock that, once locked, the manufacturer cannot pick — that's Glacier Vault Lock. AWS Backup is the meal-prep routine that not only manages your fridge but also packs duplicates for your in-laws (cross-account copy) in another city (cross-region copy).

Analogy 3: The Hospital Records Room

A hospital has active patient charts at the nurses' station (S3 Standard), recently discharged patients in the medical records office (Standard-IA), and historical records in a basement archive (Glacier Deep Archive). HIPAA requires keeping those records 6 years; some states require longer. The hospital sets a rule: "records auto-transition every quarter, and final destruction happens at year 7." That rule is data lifecycle in S3. The basement archive door has a time-lock that only opens after 7 years — that's Object Lock in compliance mode. The hospital also makes a duplicate of every record and sends it to a sister hospital across the state (cross-region) under a different administrator (cross-account) — that's the AWS Backup cross-account/cross-region copy pattern. The sister hospital's archive also has a time-lock — that's Backup Vault Lock. If a disgruntled IT admin tried to delete records, neither hospital's archive would let them.

Compliance Scenarios on the Exam

SCS-C02 questions are written as compliance scenarios. Translating the regulatory ask into the right data lifecycle features is the skill being tested. Here are the canonical patterns.

Scenario 1: 7-Year Financial Records (SEC 17a-4 / FINRA 4511)

Requirement: WORM, no administrative bypass, 7 years, cost-optimized. Answer: S3 bucket with Object Lock enabled at creation, default retention = 7 years in compliance mode, lifecycle rule transitions objects to Glacier Deep Archive after 90 days. Add Block Public Access, SSE-KMS with a customer-managed key, and CloudTrail S3 data events. Cross-region replication to a second region with Object Lock also enabled (Replica Lock).

Scenario 2: HIPAA Patient Records (6+ Years, KMS Encryption)

Requirement: PHI must be encrypted, retained 6+ years, deletion permitted only after retention. Answer: S3 with Object Lock in compliance mode (retention = 6 years), SSE-KMS with a customer-managed key, KMS key policy that permits CloudTrail event recording, S3 Lifecycle to Glacier Deep Archive after 90 days, AWS Backup for RDS PHI databases with Vault Lock in compliance mode for the same 6-year term.

Scenario 3: GDPR Right-to-Erasure (Flexible Retention)

Requirement: data must be deletable on demand. Answer: governance mode is acceptable (or no Object Lock at all), with retention as a guideline rather than enforcement. Compliance mode is wrong here because GDPR right-to-erasure cannot be satisfied if the data is locked. This is the explicit anti-pattern: many candidates pick compliance mode reflexively because "regulatory" — but GDPR pushes the opposite way.

Scenario 4: Forensic Evidence Preservation

Requirement: capture compromised EC2 EBS state and prevent tampering until investigation completes. Answer: take an EBS snapshot, copy it to a forensic account, store the snapshot ID in an S3 bucket with Object Lock (or apply legal hold to logs related to the incident in S3). The data lifecycle for forensics is "preserve indefinitely until legally released," so legal hold (no expiry) is the right tool, not retention.

Scenario 5: Centralized 7-Year Log Archive

Requirement: organization-wide log retention for 7 years across CloudTrail, VPC Flow Logs, and CloudWatch Logs. Answer: org trail → centralized S3 bucket with Object Lock compliance mode (7 years) + Glacier Deep Archive lifecycle. CloudWatch log groups → Firehose → same bucket. The S3 bucket lives in a dedicated logging account isolated from production.

A subtle SCS-C02 pattern: the question describes EU customer data, retention requirements, and "must comply with GDPR." Picking Object Lock compliance mode looks defensible but is actively wrong because compliance mode prevents the right-to-erasure deletion that GDPR mandates. Use governance mode with lifecycle expiration rules instead. GDPR right-to-erasure on AWS

Cost-Aware Data Lifecycle Design

Data lifecycle is not just about compliance; it is also about cost. The exam may include cost-aware variants of lifecycle questions, and the right architecture depends on object size and access frequency.

Small Object Penalty in Glacier

Objects archived to the Glacier classes carry roughly 40 KB of per-object metadata overhead (about 8 KB billed at Standard rates and 32 KB at the archive rate), plus a per-object transition fee. A bucket of 10 KB telemetry events transitioned to Deep Archive can cost more than leaving them in Standard. The data lifecycle fix is to aggregate small objects (CloudWatch Logs export to S3, Kinesis Firehose with buffering) before applying Glacier transitions.
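A back-of-envelope sketch makes the penalty concrete. All prices here are illustrative assumptions (us-east-1 ballpark: Standard $0.023/GB-month, Deep Archive $0.00099/GB-month, $0.05 per 1,000 transitions), as is the ~40 KB per-object overhead model and the 84-month amortization window:

```python
# Back-of-envelope: small-object penalty for Glacier Deep Archive.
# Prices, overhead split, and amortization period are all ASSUMPTIONS.
STD, DA = 0.023, 0.00099      # $/GB-month (assumed)
TRANSITION = 0.05 / 1000      # $ per object transitioned (assumed)
GB = 1024 * 1024              # KB per GB

def monthly_cost_standard(size_kb):
    return size_kb / GB * STD

def monthly_cost_deep_archive(size_kb, months=84):
    # ~8 KB of overhead billed at Standard, ~32 KB at the archive rate,
    # plus the one-time transition fee amortized over the retention term.
    storage = (size_kb + 32) / GB * DA + 8 / GB * STD
    return storage + TRANSITION / months

# A 10 KB object is cheaper left in Standard; a 1 MB object is cheaper archived.
small_std, small_da = monthly_cost_standard(10), monthly_cost_deep_archive(10)
big_std, big_da = monthly_cost_standard(1024), monthly_cost_deep_archive(1024)
```

Under these assumed prices the crossover sits somewhere in the low hundreds of KB, which is why aggregation before transition matters.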

Retrieval Cost Tiers

Glacier Flexible Retrieval has Expedited (1–5 min, premium price), Standard (3–5 hours), and Bulk (5–12 hours, cheapest) retrieval tiers. Glacier Deep Archive has only Standard (12 hours) and Bulk (48 hours). If your incident response runbook needs forensic access faster than 12 hours, do not put forensic copies in Deep Archive — keep them in Glacier Flexible Retrieval or Standard-IA. Data lifecycle is about more than just cost; it interacts with RTO.

Lifecycle Cost Forecasting

Use AWS Cost Explorer's "S3 Storage Class" dimension and S3 Storage Lens recommendations to forecast lifecycle savings. The typical break-even for Glacier Deep Archive is data accessed less than once a year and stored for at least 6 months. For data accessed quarterly, Standard-IA is usually cheaper net of retrieval fees.

If you have millions of small objects (less than 128 KB each), use S3 Inventory to identify them and Lambda or AWS Batch to aggregate them into Parquet or tar archives before applying Glacier transitions. The aggregated objects then enjoy Glacier pricing without the per-object overhead. S3 Inventory

Multi-Account Data Lifecycle Architecture

In a multi-account AWS Organizations setup, data lifecycle becomes an organizational concern, not a per-account one. The Security Reference Architecture pattern is to separate concerns across accounts.

The Logging Account

A dedicated logging account holds the centralized S3 bucket receiving CloudTrail org trails, Config aggregator data, VPC Flow Logs, and exported CloudWatch Logs. The bucket has Object Lock in compliance mode with retention matching organizational policy (often 7 years). SCPs prevent member accounts from disabling logging at source.

The Backup Account

A dedicated backup account holds AWS Backup vaults receiving cross-account copies from production accounts. Each vault has Vault Lock in compliance mode. Production account IAM cannot reach this account; only break-glass principals can. This is the ransomware-resistant data lifecycle pattern.

The Forensics Account

A dedicated forensics account receives EBS snapshots, memory dumps, and S3 object copies for incident response. S3 Object Lock with legal hold preserves evidence without expiry until the legal team releases it.

Cross-Account SCP Guardrails

Service Control Policies at the OU level can deny s3:DeleteObject, glacier:DeleteArchive, backup:DeleteRecoveryPoint, and dlm:DeleteLifecyclePolicy for any role except a designated retention administrator. SCPs alone are not WORM (the SCP can be modified by an Organizations admin), but combined with Object Lock and Vault Lock they harden the data lifecycle perimeter.
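An SCP implementing that guardrail can be sketched as follows. The role name `RetentionAdmin` is a hypothetical placeholder for your designated retention administrator:

```python
# Sketch: an SCP denying lifecycle-destructive actions to every principal
# except a designated retention-admin role. Role name is hypothetical.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectRetention",
            "Effect": "Deny",
            "Action": [
                "s3:DeleteObject",
                "glacier:DeleteArchive",
                "backup:DeleteRecoveryPoint",
                "dlm:DeleteLifecyclePolicy",
            ],
            "Resource": "*",
            "Condition": {
                "ArnNotLike": {
                    "aws:PrincipalArn": "arn:aws:iam::*:role/RetentionAdmin"
                }
            },
        }
    ],
}
```

Remember the caveat from the text: an Organizations admin can edit the SCP, so this is hardening, not WORM.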

Common Architecture Diagram

A reference architecture for end-to-end data lifecycle on a 7-year financial records workload looks like this:

  1. Application writes data to S3 bucket in production account, SSE-KMS with CMK
  2. Bucket has Object Lock enabled at creation, default retention = 7 years compliance mode
  3. Lifecycle rule: 30 days → Standard-IA, 90 days → Glacier Deep Archive, no expiration (Object Lock takes precedence)
  4. Cross-Region Replication to a second-region bucket in the same account (Replica Lock = same Object Lock)
  5. Production account also has AWS Backup plan covering RDS, EBS, DynamoDB
  6. Backup plan copies recovery points to a dedicated backup account in a different region
  7. Backup account vault has Vault Lock in compliance mode, 7 years
  8. SCPs at the OU deny disabling lifecycle, Object Lock, or Vault Lock from any production role
  9. CloudTrail org trail sends events to a logging account bucket with the same Object Lock + Glacier Deep Archive lifecycle
  10. AWS Backup Audit Framework runs daily and reports compliance evidence to Audit Manager
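Steps 2 and 3 above can be sketched as the corresponding S3 API request bodies (the boto3 parameter shapes for put_object_lock_configuration and put_bucket_lifecycle_configuration); the rule ID is an assumption:

```python
# Step 2: bucket-level Object Lock with a 7-year compliance-mode default.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}
    },
}

# Step 3: tiering rule with no Expiration action -- Object Lock retention,
# not lifecycle expiration, governs when objects may be deleted.
lifecycle_config = {
    "Rules": [
        {
            "ID": "financial-records-tiering",  # assumed rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}
```

Passing these bodies to the respective S3 API calls on the production bucket implements the tiering and retention halves of the reference architecture.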

This architecture satisfies SEC 17a-4, FINRA, HIPAA, PCI-DSS 10.7, and ransomware-resilience patterns simultaneously.

Monitoring and Alerting on Data Lifecycle Health

A data lifecycle is only as good as your ability to detect when it breaks. Configure these monitors.

CloudWatch Metrics for S3 Lifecycle

S3 publishes daily storage metrics (BucketSizeBytes broken down by storage class) and, with request metrics enabled, BytesDownloaded and BytesUploaded. Watch for unexpected drops in Glacier-class storage (someone deleting objects) or spikes in Standard storage (someone disabling lifecycle transitions).

AWS Config Rules

Config managed rules s3-bucket-versioning-enabled, s3-bucket-replication-enabled, s3-bucket-default-lock-enabled, backup-plan-min-frequency-and-min-retention-check, cloudwatch-log-group-encrypted, and dynamodb-pitr-enabled continuously verify the data lifecycle state. Wire findings into Security Hub for organization-wide visibility.
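As one example, deploying the first of those managed rules via the boto3 put_config_rule call takes a parameter shape like this (the managed-rule SourceIdentifier is the uppercase form of the rule name, per AWS Config convention):

```python
# Sketch of the put_config_rule parameters for the managed rule
# s3-bucket-versioning-enabled. "Owner": "AWS" marks it as a managed rule.
versioning_rule = {
    "ConfigRuleName": "s3-bucket-versioning-enabled",
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "S3_BUCKET_VERSIONING_ENABLED",
    },
}
```

The other rules listed above follow the same pattern with their own SourceIdentifier values.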

Alerting on Object Lock Events

Calls that set or extend retention (s3:PutObjectRetention) are object-level operations, recorded as CloudTrail S3 data events. Note that EventBridge "AWS API Call via CloudTrail" rules match management events only, so they will not see these calls; instead, deliver the trail to CloudWatch Logs and alarm on a metric filter, with SNS notification to the security team. A sudden flurry of s3:PutObjectRetention calls in compliance mode is normal during a regulatory rollout but suspicious otherwise.
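One way to flag retention-change calls recorded by CloudTrail is a CloudWatch Logs metric filter on the trail's log group; a sketch of the boto3 put_metric_filter parameters, with the log group name and namespace as assumptions:

```python
# Sketch: count PutObjectRetention calls appearing in a CloudTrail trail
# that delivers to CloudWatch Logs. Log group name and metric namespace
# are illustrative assumptions; an alarm on the metric can notify SNS.
metric_filter_params = {
    "logGroupName": "CloudTrail/org-trail",  # assumed log group
    "filterName": "ObjectRetentionChanges",
    "filterPattern": '{ $.eventName = "PutObjectRetention" }',
    "metricTransformations": [
        {
            "metricName": "PutObjectRetentionCount",
            "metricNamespace": "DataLifecycle",  # assumed namespace
            "metricValue": "1",
        }
    ],
}
```

A CloudWatch alarm on PutObjectRetentionCount with an SNS action completes the alerting path.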

AWS Backup Notifications

Backup plan failures should page on-call. Subscribe an SNS topic to the backup vault's notifications (its BackupVaultEvents). A failed daily backup is a data lifecycle silent killer: you don't notice until you need the recovery point and it isn't there.
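A sketch of the boto3 put_backup_vault_notifications parameter shape; the vault name and topic ARN are assumptions, and the event names should be checked against the current BackupVaultEvents enumeration:

```python
# Hedged sketch: wire a backup vault's events to an SNS topic so that
# job failures page on-call. Vault name, account ID, and topic name are
# assumptions; event names follow the BackupVaultEvents enumeration.
vault_notification_params = {
    "BackupVaultName": "central-vault",
    "SNSTopicArn": "arn:aws:sns:us-east-1:111122223333:backup-alerts",
    "BackupVaultEvents": [
        "BACKUP_JOB_FAILED",
        "COPY_JOB_FAILED",
        "RESTORE_JOB_COMPLETED",
    ],
}
```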

Schedule quarterly restore drills via AWS Backup's restore testing feature. Without restore testing, your data lifecycle confidence is theoretical: a 7-year-old Glacier Deep Archive object is meaningless if no one has practiced retrieving it.

Anti-Patterns to Avoid

These are the wrong-answer traps SCS-C02 likes to test.

Anti-Pattern 1: Object Lock without Versioning

Object Lock requires versioning; enabling Object Lock when you create the bucket turns versioning on automatically. The exam may offer "enable Object Lock on an existing bucket" without versioning enabled first; this fails, and retrofitting Object Lock onto an existing bucket has traditionally required going through AWS Support.

Anti-Pattern 2: Lifecycle Expiration with Compliance Mode

Compliance mode prevents deletion before retention expires. Adding a lifecycle expiration rule that fires before retention does nothing — the deletion is blocked. Expiration after retention is fine. The trap is candidates expecting expiration to "win" over Object Lock.

Anti-Pattern 3: Glacier for Frequent Access

Putting hot data in Glacier saves storage cost but inflates retrieval cost so much that net cost increases. The data lifecycle decision must consider access pattern, not just storage volume.

Anti-Pattern 4: Single-Account Backup

Storing backups in the same account as production means a compromised root credential destroys both. AWS Backup cross-account copy + Vault Lock is the only defensible architecture for blast-radius-resistant data lifecycle.

Anti-Pattern 5: Default CloudWatch Logs "Never Expire"

Leaving log groups at "Never expire" is both a cost and a compliance failure. SCS-C02 will ask "what is the most cost-effective way to retain logs for 90 days" and the answer is to set retention on the log group, not to export to S3 with lifecycle (which is more expensive for short retention).

Candidates often pick "export CloudWatch Logs to S3 with lifecycle to Glacier" as the cost-optimized answer. For retention windows under 90 days, simply setting the log group retention is cheaper. Export-to-S3 only wins for retention beyond what CloudWatch Logs stores cost-effectively, typically a year or more.

FAQ

Q1: What is the difference between S3 Object Lock governance and compliance mode?

Governance mode lets principals with the s3:BypassGovernanceRetention permission (passing the x-amz-bypass-governance-retention request header) delete or modify protected objects; compliance mode blocks all principals, including the AWS account root user, from deletion or modification before retention expires. Governance is suitable for internal data lifecycle policy enforcement; compliance is required for regulatory WORM such as SEC 17a-4(f), FINRA 4511, and CFTC 1.31. The Cohasset Associates assessment specifically validates compliance mode for broker-dealer record retention.
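The governance-mode bypass is an explicit IAM grant. A minimal sketch, with the bucket name as an assumption:

```python
# Hedged sketch: IAM policy granting the governance-mode bypass for one
# bucket. The caller must ALSO send the x-amz-bypass-governance-retention
# header on the request; the permission alone is not enough. Bucket name
# "records-bucket" is an assumption.
bypass_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:BypassGovernanceRetention",
                "s3:DeleteObjectVersion",
            ],
            "Resource": "arn:aws:s3:::records-bucket/*",
        }
    ],
}
```

No equivalent policy exists for compliance mode; that is the whole point of the distinction.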

Q2: How do I implement a 7-year retention requirement on AWS most cost-effectively?

Use S3 Object Lock in compliance mode with a 7-year default retention, plus an S3 Lifecycle rule that transitions objects to Glacier Deep Archive after 90 days. Add Cross-Region Replication with Replica Lock for blast-radius protection, SSE-KMS encryption, Block Public Access, and CloudTrail S3 data events. This is the canonical SCS-C02 answer for 7-year financial records data lifecycle.

Q3: Can I delete an Object Lock compliance-mode object before retention expires?

No. Compliance mode is true WORM. No principal — including the AWS account root user — can shorten retention or delete the object before retention expires. If you create a 7-year compliance-mode object today, it is non-deletable for the full seven years (other than closing the entire account, which AWS does not let you do quickly while compliance-mode objects exist). Test in a sandbox first.

Q4: What is the difference between S3 Object Lock and Glacier Vault Lock?

S3 Object Lock applies per-object inside an S3 bucket and supports both governance and compliance modes. Glacier Vault Lock applies a vault-level access policy to a legacy S3 Glacier vault and is always immutable once completed. They are different services. S3 Lifecycle to "Glacier storage class" uses Object Lock for retention, not Vault Lock. Vault Lock is only relevant if you use the legacy direct Glacier API.

Q5: How does AWS Backup Vault Lock provide ransomware resistance?

AWS Backup Vault Lock in compliance mode prevents anyone — including the backup account's root user — from shortening retention or deleting recovery points before retention expires. Combine this with cross-account copy: production accounts copy backups into a dedicated backup account, and that account's vault has Vault Lock. A ransomware attacker who compromises production cannot reach the backup vault, and even a compromised backup account cannot disable Vault Lock. This is the SCS-C02 canonical answer for ransomware-resistant data lifecycle. After you enable Vault Lock in compliance mode there is a 3-day cooling-off period during which it can be aborted; afterward it is permanent.
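The compliance-mode lock described above maps to the boto3 put_backup_vault_lock_configuration parameters; a sketch, with the vault name as an assumption:

```python
# Sketch of AWS Backup Vault Lock parameters. Supplying ChangeableForDays
# makes this a compliance-mode lock: after that cooling-off window
# (minimum 3 days) the configuration becomes immutable. Omitting it
# yields a governance-mode lock that privileged users can still change.
vault_lock_params = {
    "BackupVaultName": "central-vault",  # assumed vault name
    "MinRetentionDays": 2557,            # roughly 7 years
    "ChangeableForDays": 3,              # cooling-off before the lock is permanent
}
```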

Q6: Do I need DLM if I am using AWS Backup?

AWS Backup supersedes DLM for most use cases — it covers EBS, EFS, RDS, DynamoDB, and more in one place. DLM remains useful for two scenarios: when you need cross-account EBS snapshot sharing with a non-Backup-managed account, and when you need EBS Fast Snapshot Restore activated via lifecycle. For new deployments, prefer AWS Backup; for legacy DLM policies, migrate to Backup when feasible to centralize the data lifecycle.

Q7: How long does CloudWatch Logs retain data by default?

By default, CloudWatch log groups retain data forever ("Never expire"). This is the most common data lifecycle misconfiguration on AWS. You must explicitly set retention to one of the supported values (1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, 3653 days). Use AWS Config rule cw-loggroup-retention-period-check to enforce this organization-wide. For retention beyond 10 years, export to an S3 bucket with Object Lock.
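Because retention must be one of those discrete values, a small helper that rounds a desired window up to the nearest accepted value can be useful (a sketch using the value list quoted above):

```python
# Discrete retention values (days) that CloudWatch Logs accepts,
# per the list in the answer above.
ALLOWED_RETENTION_DAYS = {1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180,
                          365, 400, 545, 731, 1827, 3653}

def nearest_allowed_retention(desired_days: int) -> int:
    """Round a desired retention up to the nearest value the API accepts."""
    candidates = sorted(d for d in ALLOWED_RETENTION_DAYS if d >= desired_days)
    if not candidates:
        raise ValueError("beyond 10 years: export to S3 with Object Lock instead")
    return candidates[0]

print(nearest_allowed_retention(100))  # → 120
```

A 100-day requirement therefore becomes a 120-day log group retention, which still beats exporting to S3 for such a short window.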

Q8: Can I change S3 Lifecycle rules after they apply?

Yes — lifecycle rules are mutable and apply prospectively. Adding a new rule does not retroactively transition objects that have already aged past the threshold; instead, S3 evaluates objects at the next lifecycle batch run (within 24-48 hours). Removing a rule stops future transitions but does not undo past ones. Object Lock retention, in contrast, cannot be reduced for compliance-mode objects.

Q9: What is the minimum storage duration for Glacier classes in S3 Lifecycle?

Standard-IA and One Zone-IA require 30 days minimum; Glacier Instant Retrieval requires 90 days; Glacier Flexible Retrieval requires 90 days; Glacier Deep Archive requires 180 days. If you delete an object before the minimum, S3 still bills you for the full minimum duration. Plan your data lifecycle transitions accordingly: small short-lived objects belong in Standard, not Deep Archive.
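The billing consequence can be expressed as a one-line rule: you pay for the greater of the object's actual lifetime and the class minimum. A sketch using the durations listed above:

```python
# Minimum storage durations (days) per class, as listed above.
MIN_DURATION = {
    "STANDARD_IA": 30,
    "ONEZONE_IA": 30,
    "GLACIER_IR": 90,    # Glacier Instant Retrieval
    "GLACIER": 90,       # Glacier Flexible Retrieval
    "DEEP_ARCHIVE": 180,
}

def billed_days(storage_class: str, actual_days: int) -> int:
    """Days charged for: at least the class minimum, even if the object
    is deleted or transitioned earlier."""
    return max(actual_days, MIN_DURATION.get(storage_class, 0))

print(billed_days("DEEP_ARCHIVE", 10))  # → 180
```

An object deleted from Deep Archive after 10 days is still billed for 180, which is why small short-lived objects belong in Standard.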

Q10: Does Object Lock protect against accidental bucket deletion?

No. Object Lock prevents deletion of objects, but a bucket with Object Lock objects can still be deleted by the account owner if the bucket is empty. To prevent bucket deletion entirely, combine Object Lock with s3:DeleteBucket deny in an SCP and with MFA Delete on the bucket. Compliance-mode objects cannot be deleted, so the bucket cannot be emptied either, which is an indirect protection — but the right defense-in-depth is the SCP layer.

Summary Cheat Sheet

For exam-day recall, memorize this table.

Service | Purpose | Mutability | Cross-Region | Cross-Account
S3 Lifecycle | Transition / expire | Mutable rules | Per bucket | N/A
S3 Object Lock (governance) | WORM with bypass | Bypass-able | Via CRR + Replica Lock | N/A
S3 Object Lock (compliance) | True WORM | Immutable | Via CRR + Replica Lock | N/A
Glacier Vault Lock | Vault-level WORM | Immutable after CompleteVaultLock | No | No
DLM | EBS snapshot lifecycle | Mutable policy | Yes | Yes
RDS automated backup | 1-35 day retention | Mutable window | Yes (some engines) | No
RDS manual snapshot | Indefinite retention | Mutable | Yes | Yes
AMI deprecation | Soft deprecation | Mutable | N/A | N/A
CloudWatch Logs retention | 1-3653 days | Mutable | No (export to S3) | No (export)
AWS Backup plan | Cross-service lifecycle | Mutable | Yes | Yes
AWS Backup Vault Lock (governance) | Vault WORM with bypass | Bypass-able | Inherits | Inherits
AWS Backup Vault Lock (compliance) | Vault WORM | Immutable after 3-day cooling | Inherits | Inherits

Closing Thoughts

Data lifecycle on AWS is the intersection of three concerns: cost (transition to cheaper tiers), compliance (WORM, retention), and resilience (cross-region, cross-account copy). The SCS-C02 exam tests all three at once. Master the five-pillar mental model, memorize the difference between governance and compliance modes for both Object Lock and Vault Lock, know the minimum storage durations and retention windows by heart, and you will recognize the right answer in any data lifecycle scenario the exam presents. The data lifecycle pattern that wins on the exam is almost always: tag-driven automation, dedicated logging and backup accounts, cross-region copy, Vault Lock or Object Lock in compliance mode for regulated data, and continuous Config + Audit Manager evidence. Build that data lifecycle architecture once and the questions answer themselves.

For deeper reading, the official Data Protection in AWS chapter of the security best practices whitepaper, the AWS KMS Best Practices whitepaper, the AWS Backup Developer Guide, the S3 Object Lock overview, and the Cohasset SEC 17a-4 assessment are essential references for both the exam and real-world data lifecycle work.

Official sources