
Centralized Logging, Observability, and Encryption Architecture

7,850 words · ≈ 40 min read

Centralized security logging is the architectural discipline of collecting every security-relevant event — API calls, network flows, DNS queries, web-ACL decisions, threat findings, compliance evaluations — from every account and every region of an AWS Organization into a single auditable, queryable, retention-governed store, so that a small SOC team can investigate, detect, and prove compliance across a sprawling multi-account estate. On the SAP-C02 exam, centralized security logging appears across Task 3.1 (improve operational excellence) and Task 3.2 (improve security) of Domain 3 whenever a scenario describes "15 accounts today, each logging locally," an auditor demanding 10-year immutable retention, or SOC analysts needing one pane of glass with Splunk. The answer is always a combination of CloudTrail Lake, Amazon Security Lake, cross-account S3 delivery, subscription filters, and delegated-admin aggregation, never a single-service shortcut.

This Pro-depth guide walks through every component you will encounter on SAP-C02: organization-wide CloudTrail trails writing to a Log Archive account, CloudTrail Lake's SQL engine and 10-year retention, Amazon Security Lake's OCSF normalization and subscriber model, VPC Flow Logs centralized to S3 for Athena vs streamed via Firehose for real-time SIEM, CloudWatch Logs cross-account destinations with Kinesis Firehose, GuardDuty / Security Hub / Inspector aggregation to a delegated admin, Route 53 Resolver query logging, WAF full-traffic logs, log retention and Object Lock for tamper-evidence, third-party SIEM integrations, and AWS Audit Manager for evidence packaging. Centralized security logging also carries a canonical Pro scenario — the 30-day retrofit for an existing 15-account organization — which we dissect step by step.

What Is Centralized Security Logging?

Centralized security logging in AWS is the aggregation pattern where every account's security-relevant telemetry is delivered to dedicated logging and security accounts rather than being retained only in the account that produced it. In a landing zone, centralized security logging lives in two tightly-scoped accounts: the Log Archive account (raw log retention, immutable storage, compliance evidence) and the Security/Audit account (detective tooling delegated admins, SIEM pipelines, analyst tooling). Workload accounts produce logs but do not own them — a fundamental separation that prevents a compromised workload administrator from tampering with the evidence trail.

Why Centralized Security Logging Matters for SAP-C02

Single-account logging knowledge from SAA-C03 is inadequate. SAP-C02 question stems for Tasks 3.1 and 3.2 describe scenarios with 15, 50, or 200 accounts and ask which centralized security logging architecture solves a specific retrofit problem in a constrained window (commonly 30, 60, or 90 days). The exam will punish answers that enable logging only in a single account, that route logs to the workload account's own S3 bucket, or that depend on per-account manual CloudTrail creation. The correct pattern is always organization-scoped delivery to a Log Archive bucket in a separate account, with detective-tooling delegated admins operating from a separate Security account.

The Six Log Streams You Must Centralize

A complete centralized security logging architecture aggregates six distinct streams: management-plane events (CloudTrail management events, control-plane API calls), data-plane events (CloudTrail data events for S3 objects and Lambda invocations), network flows (VPC Flow Logs at VPC, subnet, or ENI granularity), DNS queries (Route 53 Resolver query logging), web-layer traffic (WAF full logs and ALB access logs), and security findings (GuardDuty, Security Hub, Inspector, Macie, Access Analyzer, Config, Firewall Manager). Each stream has its own delivery mechanism, retention consideration, and query workflow — centralized security logging is the umbrella pattern that ties them together.

Plain-Language Explanation: Centralized Security Logging

Centralized security logging sounds like another acronym soup, but three everyday analogies make its structure intuitive. Read all three — each exposes a different property of centralized security logging.

Analogy 1 — The Postal Sorting Facility (Postal System Analogy)

Picture a nationwide postal system with 15 regional post offices (your AWS member accounts) and one central sorting facility (your Log Archive account) plus one investigations bureau (your Security account). Centralized security logging is the design of mail flow so every letter and every parcel — no matter where it was posted — ends up sorted, indexed, and retained at the central facility, with any suspicious parcel flagged to the bureau.

In this system, the registered-mail ledger is the CloudTrail organization trail — every posting (API call) is written to a master book regardless of which regional office accepted it. The ledger is cryptographically signed so you can prove nobody tampered after the fact. Sorting machines are Amazon Security Lake — they take in every mail piece in its original format (raw CloudTrail JSON, raw VPC Flow Log text, raw DNS queries) and re-label them all using one common address schema (OCSF) so an investigator searching for "all parcels from sender X" does not need to know whether the parcel came in as a letter, postcard, or package. The tamper-evident vault for long-term retention of sensitive certified mail is the S3 Object Lock configuration on the Log Archive bucket — once written, nothing can delete or modify it for ten years. The central CCTV recording capturing every handoff is VPC Flow Logs centralized from every branch's loading dock. The forensic analysts at the bureau are the SOC team using CloudTrail Lake (for SQL queries on the ledger) and Athena (for queries on the flow log archive). The alarm bell network notifying the bureau of suspicious packages is GuardDuty findings flowing to the delegated admin. Finally, the external audit courier periodically picking up evidence packets mapped to ISO, SOC 2, and PCI is Audit Manager. Everything converges to one facility; no regional office gets to say "we lost that receipt."

Analogy 2 — The Hospital Medical Records System (Hospital / Medical Analogy)

Imagine a hospital system with 15 clinics (member accounts), one central medical records department (Log Archive account), and one infection control unit (Security account). Centralized security logging is how every patient encounter across every clinic becomes one unified, tamper-evident chart accessible to the right clinicians without risking local loss.

The electronic health record tracking every prescription, procedure, and referral is CloudTrail Lake — a single event data store where any clinician (with permission) can run SQL queries like "show me all insulin prescriptions for patient X across all clinics in the last 12 months." Because CloudTrail Lake is queryable for up to 10 years with immutable retention, it fulfills the multi-year record-keeping mandate. The standardized chart notation that translates every clinic's local shorthand into a common vocabulary is OCSF in Amazon Security Lake — one clinic might log "BP 120/80" while another logs "blood pressure normal," but OCSF normalizes everything so cross-clinic analytics work. The vital-signs monitors streaming readings in real time are VPC Flow Logs and CloudWatch Logs subscription filters delivering to Firehose with sub-minute latency. The infection-control screening spotting disease clusters across clinics is GuardDuty plus Security Hub, with findings flowing to the central infection-control unit's dashboard via Security Hub's cross-region aggregation. The sealed-envelope physical backup for regulatory inspections is the S3 Object Lock vault on the Log Archive bucket — inspector-grade evidence that cannot be altered. The external auditor's package for Joint Commission or CMS review is Audit Manager's evidence collection. The same event travels multiple pipelines (real-time to the SOC SIEM; archived to immutable S3; queryable via SQL in CloudTrail Lake; normalized into OCSF for Security Lake subscribers), because different consumers have different latency and retention needs. Centralized security logging is the hospital-wide information backbone that makes a 15-clinic system feel like one hospital.

Analogy 3 — The Highway Toll and Surveillance Network (Transportation / Highway Analogy)

Picture a national highway authority with 15 regional toll operators (member accounts) and one central traffic operations center (Security account) that depends on a central recordings archive (Log Archive account). Centralized security logging is how every camera, every toll transaction, every radar reading from every regional operator ends up in one archive and one live operations center.

The toll transaction ledger is the CloudTrail organization trail — every license plate scanned at every booth is recorded to the central ledger with the regional operator's identifier attached. The highway-surveillance raw footage repository stored on immutable discs for years is the S3 Log Archive bucket with Object Lock enabled — gigabytes of VPC Flow Logs, WAF logs, and Firehose output land here in their raw form. The unified dashboard in the operations center showing live traffic, accidents, and alerts is CloudWatch Unified Cross-Account Observability plus Security Hub's cross-region aggregation region — one operator can see all 15 regions without flipping between screens. The structured incident database that lets the lead investigator write SQL like "give me every lane closure, radar alert, and toll anomaly from region 7 between 2 AM and 4 AM last Tuesday" is CloudTrail Lake plus Athena over partitioned flow logs. The real-time radar and speed-camera feeds stream via Firehose subscription filters to the operations-center SIEM (Splunk, Datadog, or a regional OpenSearch domain). The provincial-government inspection reports are Audit Manager evidence packages mapping stored toll data to regulatory controls. When a new regional operator (new AWS account) is onboarded, it is automatically wired into all of this via the delegated-admin auto-enable setting — no manual cabling. Centralized security logging is the nervous system turning 15 independent highway operators into one coherent authority.

The Centralized Security Logging Reference Architecture

Every SAP-C02 centralized security logging question implicitly assumes the Security Reference Architecture (SRA) layout. Memorise this canonical diagram — the exam answers slot directly into its boxes.

Three Account Roles

Centralized security logging involves three account roles regardless of organization size:

  • Log Archive account — holds the S3 bucket receiving CloudTrail, Config, VPC Flow Logs, ALB access logs, WAF logs, Route 53 Resolver query logs from every member account. Bucket policies grant cross-account write from known log-producing services and principals. S3 Object Lock in Compliance mode provides WORM (write-once-read-many) retention.
  • Security/Audit account — hosts the delegated admins for GuardDuty, Security Hub, Config, Inspector v2, Macie, IAM Access Analyzer, Detective, and Audit Manager. Also hosts CloudTrail Lake event data stores (organization-level), CloudWatch Cross-Account Observability monitoring account, and the central OpenSearch domain or Firehose-to-SIEM pipelines.
  • Member workload accounts — produce logs but do not retain them centrally. Local CloudWatch Logs groups may exist for application logs and short-term debugging; security-relevant logs always flow out.

Log Flow From Producer to Archive

Every log stream follows the same pattern: produce in a member account → deliver via service-specific mechanism → land in Log Archive S3 or Security account CloudWatch → index and query from the Security account. The delivery mechanism differs per stream (direct S3 writes for CloudTrail and Flow Logs; subscription filter plus Firehose for CloudWatch Logs content; event bus for findings) but the destination is always cross-account.

Why Two Accounts, Not One

Log Archive and Security account are separate on purpose. The Log Archive account holds the raw evidence — write-only for producers, read-only for auditors, protected by Object Lock. Nobody, not even SOC analysts, can delete from it in Compliance mode. The Security account holds active tooling and analyst access — GuardDuty consoles, Security Hub dashboards, CloudTrail Lake query consoles, Firehose-to-SIEM delivery streams. Separating the roles means a compromised Security account analyst credential still cannot destroy the evidence archive, and a compromised Log Archive credential still cannot operate the detective tooling. This separation is a hard exam signal — any answer putting both roles in one account is usually wrong.

Centralized security logging requires two dedicated accounts — Log Archive and Security — that are separate from the management account and from any workload. The management account is exempt from SCPs and must be kept minimally used. Putting log buckets in the management account violates least-privilege principles and exposes evidence to management-account compromise. Putting detective tooling in the Log Archive account exposes analyst workflows to the same bucket-policy blast radius. The canonical SRA split — dedicated Log Archive (immutable storage), dedicated Security account (active tooling), management account exempt from both — is the answer pattern for every SAP-C02 centralized security logging scenario.

CloudTrail Organization Trail — The Foundation of Centralized Security Logging

CloudTrail is the first and most fundamental component of any centralized security logging architecture. Without an organization-wide CloudTrail trail, no other centralized security logging component works correctly.

Organization Trail Mechanics

An organization trail is a single CloudTrail trail created in the management account (or from a CloudTrail delegated admin account) that applies automatically to every current and future member account in every region. The trail delivers to one S3 bucket in the Log Archive account — cross-account delivery is authorised by the Log Archive bucket's resource policy. Member accounts cannot disable or modify the organization trail from their own consoles; only the management account or delegated admin can.
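The creation call can be sketched as a parameter set. This is a hedged illustration, assuming boto3 from the management or delegated-admin account; the trail name, bucket name, and KMS alias are placeholders, not values from the source.

```python
# Sketch: parameters for an organization trail delivering to the Log Archive
# bucket. Names and the KMS alias are illustrative assumptions.

def org_trail_params(trail_name: str, log_archive_bucket: str) -> dict:
    """Build kwargs for cloudtrail.create_trail as an organization trail."""
    return {
        "Name": trail_name,
        "S3BucketName": log_archive_bucket,   # bucket lives in the Log Archive account
        "IsOrganizationTrail": True,          # applies to every current and future member account
        "IsMultiRegionTrail": True,           # capture events from every region
        "EnableLogFileValidation": True,      # hourly signed digest files
        "KmsKeyId": "alias/org-cloudtrail",   # organization-shared CMK (assumed alias)
    }

params = org_trail_params("org-trail", "org-log-archive-111111111111")
# boto3.client("cloudtrail").create_trail(**params)  # management or delegated-admin account only
```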

Management Events vs Data Events

CloudTrail logs two event classes with different cost and coverage implications. Management events (default on, free for the first copy) record control-plane API calls — RunInstances, CreateRole, PutBucketPolicy. Data events (off by default, priced per event) record data-plane activity — every S3 GetObject, every Lambda Invoke, every DynamoDB GetItem on selected tables. Centralized security logging almost always enables data events on S3 buckets holding sensitive data, on all Lambda functions, and on DynamoDB tables holding regulated data. The exam will test whether candidates remember that data events must be explicitly enabled and cost additional per-event fees.
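Because data events are opt-in, they are enabled via event selectors on the trail. The sketch below uses advanced event selectors; the bucket ARN is a hypothetical placeholder, and the selector shapes follow the documented put_event_selectors structure.

```python
# Sketch: advanced event selectors enabling S3 object-level events on one
# sensitive bucket plus data events on all Lambda functions. ARN is assumed.

def data_event_selectors(sensitive_bucket_arn: str) -> list:
    """Advanced event selectors for cloudtrail.put_event_selectors."""
    return [
        {
            "Name": "S3 objects in sensitive bucket",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::S3::Object"]},
                {"Field": "resources.ARN", "StartsWith": [sensitive_bucket_arn + "/"]},
            ],
        },
        {
            "Name": "All Lambda invocations",
            "FieldSelectors": [
                {"Field": "eventCategory", "Equals": ["Data"]},
                {"Field": "resources.type", "Equals": ["AWS::Lambda::Function"]},
            ],
        },
    ]

selectors = data_event_selectors("arn:aws:s3:::example-hipaa-data")
# boto3.client("cloudtrail").put_event_selectors(
#     TrailName="org-trail", AdvancedEventSelectors=selectors)
```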

CloudTrail Insights — Anomalous API Detection

CloudTrail Insights (a paid add-on) identifies anomalous patterns in management events — a sudden spike in DeleteObject calls, an unusual concentration of AssumeRole calls, or activity in an unexpected region. Insights findings integrate with EventBridge for automated response. For centralized security logging, enable Insights on the organization trail so the aggregate pattern across all accounts is analyzed, rather than per-account patterns that might miss cross-account lateral movement.

S3 Delivery and Bucket Hardening

The Log Archive S3 bucket receiving CloudTrail needs a bucket policy granting s3:PutObject to the CloudTrail service principal cloudtrail.amazonaws.com with aws:SourceArn pinned to the trail ARN — this is the confused-deputy protection documented in the CloudTrail guide. Harden the bucket further for centralized security logging: block all public access, enable default encryption with an organization-shared KMS CMK, enable versioning, enable MFA delete (which only the root user can configure), and apply S3 Object Lock in Compliance mode with a retention period matching regulatory requirements (often 7 or 10 years).
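The confused-deputy-safe policy can be sketched as follows, with the bucket name, trail ARN, and account ID as placeholders. The two-statement shape (ACL check plus pinned write) mirrors the documented CloudTrail bucket policy.

```python
# Sketch: Log Archive bucket policy for CloudTrail delivery. All identifiers
# below are illustrative placeholders.
import json

def cloudtrail_bucket_policy(bucket: str, trail_arn: str) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # CloudTrail checks the bucket ACL before writing
                "Sid": "AWSCloudTrailAclCheck",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:GetBucketAcl",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringEquals": {"aws:SourceArn": trail_arn}},
            },
            {   # Writes pinned to this trail only (confused-deputy protection)
                "Sid": "AWSCloudTrailWrite",
                "Effect": "Allow",
                "Principal": {"Service": "cloudtrail.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/*",
                "Condition": {
                    "StringEquals": {
                        "s3:x-amz-acl": "bucket-owner-full-control",
                        "aws:SourceArn": trail_arn,
                    }
                },
            },
        ],
    }

policy = cloudtrail_bucket_policy(
    "org-log-archive",
    "arn:aws:cloudtrail:us-east-1:111111111111:trail/org-trail")
print(json.dumps(policy, indent=2))
```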

Integrity Validation

CloudTrail log-file integrity validation produces hourly digest files signed with SHA-256 and RSA, allowing cryptographic verification that log files were not altered after delivery. Enable integrity validation on every centralized security logging trail — auditors expect it, and courts consider signed log files as admissible evidence.

CloudTrail Lake — SQL on Top of CloudTrail for Long-Term Investigation

CloudTrail Lake is the newer SQL-queryable event data store that sits alongside (or replaces) traditional S3 delivery for many centralized security logging use cases.

What CloudTrail Lake Does

CloudTrail Lake ingests CloudTrail events (management, data, Insights) plus non-AWS and custom events into an event data store with schema-less storage and Apache Presto-style SQL querying. Retention is configurable up to 10 years. Queries run against years of events without provisioning OpenSearch or running Glue jobs — CloudTrail Lake handles indexing and partitioning internally.

Organization Event Data Store

Create an organization-level event data store from the management account or CloudTrail delegated admin. The store automatically collects events from every member account in every region where CloudTrail is enabled. Queries from the Security account (with appropriate IAM permissions) scan across the entire Organization's event history.

SQL Query Patterns for Centralized Security Logging

Typical CloudTrail Lake queries that appear in SAP-C02 scenarios: "find every AssumeRole from a specific source IP across all accounts in the last 30 days," "list every DeleteObject on any bucket tagged Compliance=HIPAA over the past year," "show every ConsoleLogin using root credentials across the organization," "identify all CreateAccessKey events where the principal had not previously created keys in that account." These queries would be painful on raw S3+Athena without partition tuning, but CloudTrail Lake returns answers in seconds on years of data.
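One of these investigations can be sketched as a CloudTrail Lake SQL statement. The event data store ID is a placeholder; the column names follow CloudTrail Lake's published event schema, but treat the exact query as an illustrative assumption.

```python
# Sketch: CloudTrail Lake query for root-credential console logins across
# the organization. The event data store ID is a placeholder.

EDS_ID = "EXAMPLE-event-data-store-id"  # placeholder, not a real store ID

root_console_logins = f"""
SELECT eventTime, recipientAccountId, sourceIPAddress, awsRegion
FROM {EDS_ID}
WHERE eventName = 'ConsoleLogin'
  AND userIdentity.type = 'Root'
  AND eventTime > '2026-01-01 00:00:00'
ORDER BY eventTime DESC
"""
# boto3.client("cloudtrail").start_query(QueryStatement=root_console_logins)
```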

CloudTrail Lake vs S3 + Athena

The two approaches are complementary. CloudTrail Lake: managed query engine, 10-year retention, SQL without infrastructure, simple one-click setup, priced per GB ingested and per GB scanned. S3 + Athena: requires partition design, Glue Data Catalog, Athena workgroups, but offers cheaper long-term storage for the raw objects and access to the raw format for non-CloudTrail consumers. Most organizations run both — S3 archive for raw compliance evidence, CloudTrail Lake for active investigations.

CloudTrail Lake Immutability

Events in CloudTrail Lake are immutable for the configured retention period. Unlike CloudWatch Logs (which permit delete-after-write), CloudTrail Lake cannot be purged before retention expiry. This is the specific feature that makes CloudTrail Lake suitable for regulatory-compliance query layers.

CloudTrail Lake mental model. One organization-level event data store in the Security account. Ingests every management and data event from every member account in every enabled region. SQL queryable. Up to 10-year retention. Immutable. Priced per GB ingested plus per GB scanned. Replaces the need to stand up OpenSearch for long-term CloudTrail forensics. On the SAP-C02 exam, any scenario demanding "multi-year SQL-queryable audit trail across all accounts" with minimal operational overhead is CloudTrail Lake; any scenario demanding "raw CloudTrail evidence preserved for 10 years immutably with Object Lock" is S3 + Object Lock; a mature centralized security logging design uses both.

Amazon Security Lake — OCSF-Normalized Centralized Security Logging

Amazon Security Lake is AWS's purpose-built service for aggregating, normalizing, and serving security data from AWS and non-AWS sources into a customer-owned S3 data lake.

What Security Lake Does

Security Lake provisions an S3 bucket (in the customer's Log Archive account by convention), creates Glue Data Catalog tables, and ingests security-relevant data from supported AWS sources — CloudTrail management and data events, VPC Flow Logs, Route 53 Resolver query logs, Security Hub findings, AWS WAF logs — plus custom sources via the OCSF schema. All ingested data is normalized into the Open Cybersecurity Schema Framework (OCSF) JSON format and partitioned by source, account, region, and date for efficient querying.

OCSF Normalization

OCSF is an open standard (led by AWS, Splunk, IBM, and others) for unifying security event data across vendors. Without OCSF, a SOC analyst querying "show me every authentication failure from the network, from the application, from the cloud control plane" has to write three different queries across three different schemas. OCSF gives every authentication failure the same field names (actor.user.name, src_endpoint.ip, activity_id, disposition) regardless of whether it came from CloudTrail, VPC Flow Logs, or a third-party EDR tool.
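The normalization idea can be sketched with a toy mapper from a raw CloudTrail login record onto a handful of the OCSF Authentication-class fields named above. This is a simplified illustration of the concept, not the full OCSF schema or Security Lake's actual transform.

```python
# Simplified sketch of OCSF normalization: project a raw CloudTrail
# ConsoleLogin event onto a few OCSF-style Authentication fields.

def to_ocsf_auth(cloudtrail_event: dict) -> dict:
    return {
        "class_name": "Authentication",
        "activity_id": 1,  # Logon activity in the OCSF Authentication class
        "actor": {"user": {"name": cloudtrail_event["userIdentity"]["userName"]}},
        "src_endpoint": {"ip": cloudtrail_event["sourceIPAddress"]},
        "disposition": "Failure" if cloudtrail_event.get("errorMessage") else "Success",
    }

raw = {
    "eventName": "ConsoleLogin",
    "userIdentity": {"userName": "alice"},
    "sourceIPAddress": "203.0.113.7",
    "errorMessage": "Failed authentication",
}
normalized = to_ocsf_auth(raw)
```

Whatever the source format, every authentication failure ends up carrying the same field names, which is the property that makes one cross-source query possible.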

Delegated Admin and Multi-Account Enablement

Security Lake follows the same delegated-admin pattern as every other organization-wide AWS security service. The Security account is designated Security Lake delegated admin. From there, enable Security Lake across every member account in every required region, with auto-enable for new accounts. All member-account data flows to the central Security Lake bucket.

Subscriber Model — Query and Data Access

Security Lake supports two subscriber types: query subscribers (Athena / Lake Formation grant to a data-analyst role, either in-account or cross-account) and data subscribers (a third-party SIEM or SOAR consumes S3 events via SQS notifications, pulling new OCSF objects as they land). Splunk Cloud, Datadog, IBM QRadar, Palo Alto Cortex XSOAR, and SentinelOne all support Security Lake data subscription natively — no bespoke data pipeline required.

Custom Sources

Security Lake accepts custom sources (CrowdStrike Falcon, Okta logs, on-prem firewall logs) as long as they are transformed into OCSF JSON and written to the custom-source prefix the lake provisions. The OCSF schema has categories for authentication, network activity, file activity, process activity, system activity, findings, and discovery — the custom source picks the relevant category and maps fields.

Security Lake vs Building Your Own Data Lake

Before Security Lake, the equivalent required CloudTrail → S3 → Glue crawler → Athena partition projection → custom ETL for VPC Flow Logs → custom ETL for Resolver query logs → manual OCSF transform if you wanted normalization. Security Lake collapses this to an enable-service-in-delegated-admin-account operation. Cost is per GB ingested per month. For SAP-C02 scenarios asking "central security data lake with OCSF normalization and Splunk/Datadog integration in 30 days," Security Lake is the canonical answer — the custom-build alternative cannot hit that timeline.

Amazon Security Lake is the managed, OCSF-normalized centralized security logging data lake. It ingests CloudTrail, VPC Flow Logs, Route 53 Resolver logs, Security Hub findings, WAF logs, and custom sources; normalizes to OCSF; stores in S3 under the customer's control; serves query subscribers via Lake Formation and data subscribers via SQS-notified S3 pulls. Delegated-admin pattern identical to GuardDuty. Replaces months of custom data-lake engineering with minutes of configuration. On the SAP-C02 exam, Security Lake is the answer when the scenario mentions "OCSF," "normalized security data," "SIEM integration via subscriber," or "30-day SOC retrofit across multiple accounts."

VPC Flow Logs Centralization Patterns

VPC Flow Logs capture IP-layer traffic metadata for every packet traversing a VPC, subnet, or ENI. Centralizing them is one of the highest-volume challenges in centralized security logging.

Where to Enable Flow Logs

Flow logs can be enabled at three granularities: VPC-level (covers all ENIs in the VPC, easiest to manage organization-wide), subnet-level (for focused monitoring of specific tiers), ENI-level (for targeted troubleshooting). For centralized security logging baseline coverage, enable flow logs at VPC-level in every VPC in every account in every enabled region. Automate the enablement via AWS Config with a managed rule (vpc-flow-logs-enabled) plus SSM auto-remediation, or via CloudFormation StackSets from the management account.

Destination Choice — S3 vs CloudWatch Logs vs Firehose

Flow logs support three destinations with different trade-offs:

  • S3 (recommended for centralized security logging archive) — direct delivery to the Log Archive bucket, partitioned by year/month/day. Cheap long-term storage, Athena-queryable with partition projection, integrates with Security Lake.
  • CloudWatch Logs — log group in the producing account, then subscription filter to Firehose for cross-account delivery. More expensive per GB than S3 but enables real-time CloudWatch Logs Insights queries and alarm-based triggers.
  • Amazon Data Firehose — direct delivery to Firehose, transform via Lambda, deliver to S3 / OpenSearch / Splunk / third-party HTTP endpoints. Best pattern for real-time SIEM ingestion.
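The recommended S3 destination can be sketched as a create_flow_logs parameter set. The VPC ID and bucket name are placeholders; the DestinationOptions keys follow the EC2 API.

```python
# Sketch: VPC-level flow logs with direct-to-S3 delivery into the Log
# Archive bucket. VPC ID and bucket name are illustrative placeholders.

def flow_log_params(vpc_id: str, log_archive_bucket: str) -> dict:
    """Build kwargs for ec2.create_flow_logs with an S3 destination."""
    return {
        "ResourceIds": [vpc_id],
        "ResourceType": "VPC",
        "TrafficType": "ALL",                  # ACCEPT, REJECT, or ALL
        "LogDestinationType": "s3",            # direct-to-S3, bypassing CloudWatch
        "LogDestination": f"arn:aws:s3:::{log_archive_bucket}/flow-logs/",
        "DestinationOptions": {
            "FileFormat": "parquet",           # columnar files, cheaper Athena scans
            "HiveCompatiblePartitions": True,  # /year=/month=/day= prefixes
            "PerHourPartition": True,
        },
    }

params = flow_log_params("vpc-0123456789abcdef0", "org-log-archive")
# boto3.client("ec2").create_flow_logs(**params)
```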

Custom Flow Log Format

The default flow log format includes fourteen fields (version, account-id, interface-id, srcaddr, dstaddr, srcport, dstport, protocol, packets, bytes, start, end, action, log-status). The custom format adds further fields (vpc-id, subnet-id, instance-id, tcp-flags, type, pkt-srcaddr, pkt-dstaddr, region, az-id, sublocation-type, sublocation-id, pkt-src-aws-service, pkt-dst-aws-service, flow-direction, traffic-path). For centralized security logging, always use the custom format — the default format lacks critical fields for forensic analysis (TCP flags for connection-establishment detection, packet-level source/destination addresses for NAT-rewritten traffic).
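The LogFormat string passed at flow-log creation is just the chosen fields in `${field}` syntax. A sketch of assembling one, with a field selection that is a reasonable assumption rather than a prescribed list:

```python
# Sketch: building a custom flow-log format string that includes the
# forensic fields discussed above. Field selection is an assumption.

CUSTOM_FORMAT = " ".join(
    "${" + field + "}"
    for field in [
        "version", "account-id", "vpc-id", "subnet-id", "instance-id",
        "interface-id", "srcaddr", "dstaddr", "srcport", "dstport",
        "protocol", "packets", "bytes", "start", "end", "action",
        "tcp-flags", "pkt-srcaddr", "pkt-dstaddr", "flow-direction",
        "traffic-path", "log-status",
    ]
)
# Pass as LogFormat=CUSTOM_FORMAT to ec2.create_flow_logs
```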

Partitioning for Athena Query Cost

Athena charges per GB scanned. A year of VPC Flow Logs from 100 VPCs is terabytes. Without partitioning, every query scans everything. With Hive-style partitioning (/year=2026/month=04/day=20/hour=14/) and Athena partition projection configured on the Glue table, queries touch only the relevant partitions. On long time-range queries, proper partitioning routinely cuts the data scanned — and therefore Athena cost — by one to two orders of magnitude.
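The partition layout itself is simple to generate. A minimal sketch of the Hive-style prefix described above:

```python
# Sketch: the Hive-style S3 prefix that lets Athena prune partitions.
from datetime import datetime, timezone

def partition_prefix(ts: datetime) -> str:
    """S3 key prefix for a flow-log object, partitioned to the hour."""
    return f"year={ts.year}/month={ts.month:02d}/day={ts.day:02d}/hour={ts.hour:02d}/"

prefix = partition_prefix(datetime(2026, 4, 20, 14, tzinfo=timezone.utc))
# → year=2026/month=04/day=20/hour=14/
```

A query constrained to one day then scans 24 hourly prefixes instead of the whole archive.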

Centralized Flow Log Architecture

Canonical pattern: VPC Flow Logs in every member VPC → direct delivery to Log Archive S3 bucket (prefix per account per region) → Glue crawler builds partitioned table → Athena workgroup in Security account queries the table with result location in a Security-account bucket. For real-time, a Firehose stream in each member account subscribes to the VPC Flow Log CloudWatch log group and delivers to a central Firehose-to-Splunk endpoint. Most mature organizations run both pipelines — S3 archive for forensic queries, Firehose for real-time SOC alerting.

Cost Optimization

VPC Flow Logs at VPC level on busy production VPCs generate gigabytes per hour. Flow logs have no sampling-rate setting, so volume control comes from the traffic filter and from where you enable them. Cost-optimize by: capturing only rejected traffic (TrafficType REJECT) for low-criticality sandbox VPCs, using S3 Intelligent-Tiering on the Log Archive bucket with transition to Glacier Deep Archive after 180 days, and disabling flow logs in the Sandbox OU via SCP exemption if regulatory scope does not require them.
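The Deep Archive transition is a standard S3 lifecycle rule. A sketch, with the prefix as a placeholder:

```python
# Sketch: lifecycle rule moving archived flow logs to Glacier Deep Archive
# after 180 days. Prefix and bucket name are illustrative.

def deep_archive_lifecycle(prefix: str = "flow-logs/") -> dict:
    """LifecycleConfiguration for s3.put_bucket_lifecycle_configuration."""
    return {
        "Rules": [
            {
                "ID": "flow-logs-deep-archive",
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [
                    {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    }

config = deep_archive_lifecycle()
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="org-log-archive", LifecycleConfiguration=config)
```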

CloudWatch Logs Cross-Account Destinations and Subscription Filters

CloudWatch Logs is the destination for all operating-system logs (CloudWatch agent), application logs, Lambda logs, VPC Flow Logs when delivered to CloudWatch, and many other log sources. Centralized security logging for CloudWatch Logs uses two core features: cross-account destinations and subscription filters.

Subscription Filters — Real-Time Forwarding

A subscription filter on a CloudWatch log group sends matching log events in near real time to one of: Amazon Data Firehose delivery stream, Kinesis Data Stream, Lambda function, or a CloudWatch Logs destination (a cross-account receiver). Filter patterns let you forward only the events you care about — for example, { ($.eventName = "ConsoleLogin") && ($.responseElements.ConsoleLogin = "Failure") } forwards only failed console logins.
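Wiring that filter pattern to a cross-account destination can be sketched as a put_subscription_filter parameter set. The log group name, filter name, and destination ARN are placeholders.

```python
# Sketch: subscription filter forwarding only failed console logins to a
# cross-account CloudWatch Logs destination. Names/ARNs are placeholders.

FAILED_LOGIN_PATTERN = (
    '{ ($.eventName = "ConsoleLogin") && '
    '($.responseElements.ConsoleLogin = "Failure") }'
)

def subscription_filter_params(log_group: str, destination_arn: str) -> dict:
    """Build kwargs for logs.put_subscription_filter."""
    return {
        "logGroupName": log_group,
        "filterName": "failed-console-logins",
        "filterPattern": FAILED_LOGIN_PATTERN,
        "destinationArn": destination_arn,  # destination lives in the Security account
    }

params = subscription_filter_params(
    "cloudtrail-logs",
    "arn:aws:logs:us-east-1:222222222222:destination:soc-central")
# boto3.client("logs").put_subscription_filter(**params)
```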

Cross-Account Destinations

A CloudWatch Logs destination in the Security account is a named endpoint wrapping a Kinesis Data Stream or Firehose delivery stream. Member accounts create subscription filters pointing at the destination's ARN. The destination's access policy grants logs:PutSubscriptionFilter to specific member account IDs. This is the canonical centralized security logging pattern for real-time cross-account log forwarding — one destination in Security account, N subscription filters across all member accounts.
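The destination's access policy is what authorizes the member accounts. A sketch, with account IDs as placeholders:

```python
# Sketch: destination access policy allowing specific member accounts to
# attach subscription filters. Account IDs are placeholders.
import json

def destination_policy(destination_arn: str, member_account_ids: list) -> str:
    """Policy document for logs.put_destination_policy in the Security account."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": member_account_ids},
            "Action": "logs:PutSubscriptionFilter",
            "Resource": destination_arn,
        }],
    })

policy_json = destination_policy(
    "arn:aws:logs:us-east-1:222222222222:destination:soc-central",
    ["333333333333", "444444444444"])
```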

Subscription Filter → Firehose → S3 Pattern

The complete real-time centralized security logging pipeline: member account CloudWatch log group → subscription filter (pattern match for security-relevant events) → CloudWatch Logs destination in Security account → Kinesis Firehose → S3 Log Archive bucket in OCSF-normalized format via Lambda transform → Security Lake ingestion → queryable by SIEM subscribers. This chain turns per-account local logs into a centralized, normalized, queryable, SIEM-ready stream.

CloudWatch Unified Cross-Account Observability

CloudWatch Cross-Account Observability (introduced 2022) is the newer feature letting a monitoring account see logs, metrics, and traces from linked source accounts in unified dashboards. Set up: designate a monitoring account in the Security account, send an invitation to each source account, accept from each, and all of the source account's metrics / logs / X-Ray traces become queryable from the monitoring account's CloudWatch console. This is a separate (complementary) pattern to subscription filters — Cross-Account Observability is for interactive dashboards, subscription filters are for automated pipelines.

When to Use CloudWatch vs Direct S3

For pure archival (VPC Flow Logs, CloudTrail, ALB access logs), bypass CloudWatch and deliver directly to the Log Archive S3 bucket — it is cheaper per GB and simpler. For real-time alerting or SIEM integration, use CloudWatch Logs as the intermediate store with subscription filters. For interactive operational troubleshooting, use CloudWatch Cross-Account Observability in addition.

For centralized security logging, prefer direct-to-S3 delivery for archival streams and CloudWatch subscription filters only for real-time streams. CloudWatch Logs charges per GB ingested plus per GB stored — at scale, archiving all VPC Flow Logs through CloudWatch is 5x to 10x more expensive than direct S3 delivery. Use CloudWatch Logs as the transport only when a downstream subscriber (Firehose, Lambda, destination) needs real-time access. The canonical SAP-C02 answer for "centralized archival of 100 VPCs' flow logs" is direct-to-S3; the answer for "real-time stream of failed login events to Splunk" is CloudWatch subscription filter → Firehose → Splunk endpoint.

GuardDuty, Security Hub, Inspector — Finding Aggregation to Delegated Admin

Security findings are a separate centralized security logging stream from raw event logs. Findings are already enriched (threat detection, CVE analysis, compliance evaluation) and the centralization pattern is finding aggregation rather than raw log forwarding.

GuardDuty Delegated Admin and Findings Flow

As described in the cross-account-security-controls topic, GuardDuty delegated admin in the Security account provides a single pane of glass for threat findings across all member accounts and regions. Findings export to the Log Archive S3 bucket (new findings within about five minutes; updates at a configurable frequency of 15 minutes, 1 hour, or 6 hours) and automatically flow to Security Hub for cross-region aggregation. Exported findings in S3 are raw GuardDuty JSON — usable for long-term archive and custom analytics.

Security Hub Cross-Region Aggregation

Security Hub in the Security account with cross-region aggregation (aggregation region chosen once, typically us-east-1) receives findings from every other region across every member account. Findings normalize to AWS Security Finding Format (ASFF). A Security Hub Automation Rule can auto-enrich, auto-suppress, or auto-escalate findings based on attributes (suppress Environment=Sandbox findings, escalate Severity=CRITICAL from Environment=Production to PagerDuty via EventBridge).

Inspector v2 for Vulnerability Findings

Inspector v2 delegated admin in the Security account enables continuous vulnerability scanning across EC2 instances, ECR container images, and Lambda functions in every member account. Findings include CVE IDs, CVSS scores, and fix-version information. All findings flow to Security Hub; a copy can be exported to S3 for long-term vulnerability trend analysis.

Integration With Centralized Security Logging

Security Hub findings export to Amazon Security Lake automatically when Security Lake is enabled. This is the cleanest path: GuardDuty detects → Security Hub aggregates → Security Lake normalizes to OCSF → SIEM subscriber consumes. Bypassing Security Lake and sending Security Hub findings directly to a third-party SIEM via EventBridge is also supported but requires schema mapping on the SIEM side.

EventBridge for Real-Time Finding Response

Every Security Hub finding update publishes to the default EventBridge event bus. Rules on the bus can trigger Lambda for auto-remediation, SNS for paging, Systems Manager runbook for patching, or route to a custom event bus in the SOC tooling account. Centralized security logging responsiveness is measured in minutes-to-acknowledge; EventBridge is how you get below that threshold for critical findings.
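
The Security Hub integration publishes findings with source aws.securityhub and detail-type "Security Hub Findings - Imported"; a rule pattern for critical, active findings looks like the following sketch (the exact attributes you match on, such as RecordState, should be tuned to your triage workflow):

```python
import json

# EventBridge event pattern matching CRITICAL, still-active Security Hub
# findings. Attach this pattern to a rule whose target is SNS, Lambda,
# or a custom bus in the SOC tooling account.
critical_finding_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "Severity": {"Label": ["CRITICAL"]},
            "RecordState": ["ACTIVE"],
        }
    },
}

print(json.dumps(critical_finding_pattern, indent=2))
```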

Route 53 Resolver Query Logging

DNS query logs are a critical but often-overlooked centralized security logging stream. They capture every DNS resolution request from VPC workloads — invaluable for detecting malware C2 domains, data-exfiltration to unauthorized endpoints, and configuration drift.

Enabling Resolver Query Logging

Route 53 Resolver query logging is enabled per VPC with a configuration that specifies destination (CloudWatch Logs, S3, or Firehose), and the query log configuration can be shared across accounts using AWS RAM. The recommended centralized pattern: create the query log configuration in the Security or shared Networking account, share via RAM with all member accounts, associate the configuration with every VPC at VPC creation time via CloudFormation / CDK / Terraform.

What Gets Logged

Every DNS query made by any resource in the associated VPC: the query name, type, response code, answer records, and the source VPC / ENI. Queries to private hosted zones, forwarded queries to on-prem resolvers, and queries to public authoritative nameservers are all captured.

Security Use Cases

  • Malware C2 detection — queries to known-malicious domains captured in query logs, GuardDuty generates Backdoor:EC2/C&CActivity.B!DNS findings from this stream.
  • Data exfiltration detection — queries to unauthorized external DNS, especially large volumes of TXT queries which can be a DNS tunnel indicator.
  • Compliance evidence — proof that internal services only resolved approved domains.

Athena Query Patterns

Query logs delivered to S3 are JSON with one record per query. Athena over the partitioned table answers "which instances resolved malicious.example.com last Tuesday?" or "which VPC had the most external DNS queries at 3 AM?" Pair with VPC Flow Logs to correlate a DNS resolution with the subsequent IP connection — a core DFIR workflow.
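
A sketch of the first question as an Athena query builder — the table name and date partition column are assumptions to adjust to your Glue Data Catalog, while query_name and srcids match the Resolver query log JSON schema (query names are logged fully qualified, with a trailing dot):

```python
# Build an Athena query over centralized Resolver query logs:
# "which instances resolved <domain> on <day>?"
def who_resolved(domain: str, day: str) -> str:
    return f"""
SELECT srcids.instance AS instance_id, srcaddr, vpc_id,
       count(*) AS queries
FROM resolver_query_logs
WHERE query_name = '{domain}.'
  AND date = '{day}'
GROUP BY srcids.instance, srcaddr, vpc_id
ORDER BY queries DESC
""".strip()

sql = who_resolved("malicious.example.com", "2026-04-14")
print(sql)
```

Joining the resulting instance IDs against the flow-log table on ENI or source address is the DNS-to-connection correlation described above.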

WAF Logs → Firehose → Centralized Security Logging

AWS WAF full-traffic logs record every request evaluated by a Web ACL — blocked and allowed — along with the rule that matched, the source IP, country, user agent, headers, and body snippet.

Enabling WAF Logging

Each Web ACL can log to Amazon Data Firehose, CloudWatch Logs, or S3. For centralized security logging, Firehose is the recommended destination because it can transform records (drop redacted fields, enrich with geo-IP), buffer them, and deliver to S3 in the Log Archive account, with an optional OpenSearch mirror.

Firewall Manager for Multi-Account WAF Logging

Using AWS Firewall Manager policies from the Security account, you can deploy the same Web ACL with logging configuration to every member account's ALB / CloudFront / API Gateway. Policy changes propagate automatically. This is the cross-account WAF logging pattern — one policy, N accounts covered.

Correlation With CloudTrail and Flow Logs

A classic attack reconstruction: WAF log shows SQL injection attempt from IP X blocked on rule Y. Correlate the same IP in VPC Flow Logs to see if the attacker scanned other ports. Correlate CloudTrail for any successful AssumeRole calls from that IP. This three-way correlation is why centralized security logging must bring all three streams into the same query layer (Athena or Security Lake).

Log Retention, Lifecycle, and Tamper-Evident Storage

Retention and immutability are as important to centralized security logging as collection itself. An auditor asking for two-year-old logs does not care that they were collected if they were deleted six months ago.

S3 Object Lock — Compliance vs Governance Mode

S3 Object Lock places objects in a WORM state for a defined retention period. Two modes:

  • Governance mode — authorized IAM principals can bypass the lock with s3:BypassGovernanceRetention permission. Suitable for internal tamper-resistance without absolute immutability.
  • Compliance mode — even the root account cannot delete during retention. Suitable for regulated data (SEC 17a-4, FINRA, HIPAA audit trails). Irreversible — setting compliance mode is a one-way door.

For centralized security logging under regulatory scope, use Compliance mode on the Log Archive bucket with a retention matching the longest regulatory requirement (typically 7 years for SOX, 10 years for some financial services regulators).
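
That default-retention setting has a specific API shape. Below is the payload for s3:PutObjectLockConfiguration on the Log Archive bucket — a sketch; note the bucket itself must have been created with Object Lock enabled:

```python
import json

# Default retention for a Log Archive bucket: Compliance mode, 7 years.
# Applied via s3:PutObjectLockConfiguration; every new object inherits
# this retention unless a longer per-object retention is set.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",   # GOVERNANCE would allow bypass
            "Years": 7,
        }
    },
}

print(json.dumps(object_lock_config, indent=2))
```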

AWS Backup Vault Lock

Separately, AWS Backup supports Vault Lock on backup vaults — the same WORM concept applied to AWS Backup recovery points rather than S3 objects. For centralized security logging, Backup Vault Lock protects the backups of critical resources — EC2 AMIs, RDS snapshots, EFS backups — from ransomware-style deletion.

Lifecycle Policies for Cost Optimization

Compliance mode locks the object; it does not prevent lifecycle transitions. A canonical lifecycle for centralized security logging:

  • Days 0-30 — S3 Standard (active investigation window, hot query).
  • Days 31-90 — S3 Standard-IA (routine queries, 40 percent cheaper).
  • Days 91-365 — S3 Glacier Instant Retrieval (compliance queries, 68 percent cheaper than Standard).
  • Days 366-3650 — S3 Glacier Deep Archive (tombstone retention, 95 percent cheaper than Standard, 12-hour retrieval).
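
The tiering schedule above, expressed as the Rules payload for s3:PutBucketLifecycleConfiguration — the day thresholds mirror the list and should be tuned to your own compliance window:

```python
import json

# Lifecycle rules for the Log Archive bucket: Standard -> Standard-IA
# -> Glacier Instant Retrieval -> Glacier Deep Archive, then expire.
lifecycle_rules = {
    "Rules": [
        {
            "ID": "security-log-tiering",
            "Status": "Enabled",
            "Filter": {},  # apply to the whole Log Archive bucket
            "Transitions": [
                {"Days": 31, "StorageClass": "STANDARD_IA"},
                {"Days": 91, "StorageClass": "GLACIER_IR"},
                {"Days": 366, "StorageClass": "DEEP_ARCHIVE"},
            ],
            # With Compliance-mode Object Lock, expiration only takes
            # effect once each object's retention period has elapsed.
            "Expiration": {"Days": 3650},
        }
    ]
}

print(json.dumps(lifecycle_rules, indent=2))
```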

Combined with Object Lock Compliance mode at the retention period, this delivers tamper-evident long-term retention at minimum cost.

CloudWatch Logs Retention

Every CloudWatch log group has a retention setting (default: never expire). For centralized security logging hygiene, set explicit retention on every log group — typically 30 days for production application logs (after which they transition to S3 archive via Firehose), 7 days for sandbox debugging logs. Use AWS Config rule cw-loggroup-retention-period-check to audit compliance.

S3 Object Lock Compliance mode is irreversible. Once enabled on a bucket with a retention period, even the AWS root account cannot shorten the retention or delete locked objects before expiry. This is the feature that makes S3 a qualifying storage medium for SEC 17a-4(f), FINRA 4511, and CFTC 1.31(c)-(d) — the regulators specifically require that the storage prevent any form of alteration or deletion during the retention period. For centralized security logging that must survive a ransomware attack, insider threat, or rogue administrator, Compliance mode is the only correct choice. Governance mode does not satisfy these regulations because privileged principals can still bypass.

Third-Party SIEM Integration — Splunk, Datadog, and Beyond

Most mature organizations run a third-party SIEM — Splunk, Datadog, Sumo Logic, Sentinel, QRadar. Centralized security logging on AWS must feed this SIEM without replacing it.

Path 1 — Security Lake Subscriber

The recommended path for new SIEM integrations: enable Security Lake, then subscribe the SIEM as a data subscriber. Splunk Cloud, Datadog, IBM QRadar, Palo Alto Cortex, and others have native Security Lake subscribers — configured in the SIEM's console, not in AWS. The SIEM receives an SNS notification per new OCSF object and pulls via S3 GetObject. Benefits: OCSF normalization happens before SIEM ingestion (saves SIEM license costs based on unstructured volume), one subscriber covers all log streams Security Lake ingests, and adding a new stream to Security Lake automatically flows to the SIEM.

Path 2 — Direct Firehose to SIEM HTTP Endpoint

For SIEMs without Security Lake support, or for real-time latency-sensitive streams, send CloudWatch Logs subscription filter → Kinesis Firehose → HTTP endpoint delivery (Splunk HEC, Datadog intake, generic HTTPS). Firehose handles buffering, retries, and backup-to-S3 on delivery failure. This is the pre-Security-Lake pattern and remains valid for streams Security Lake does not cover (custom application logs, third-party EDR).

Path 3 — Managed OpenSearch as SIEM

Amazon OpenSearch Service with the Security Analytics plugin can act as a first-party SIEM alternative. Logs flow via Firehose into OpenSearch indices; Security Analytics provides prebuilt detection rules, anomaly detection, and correlation. Cost-effective for organizations wanting to avoid per-GB SIEM licensing. On the SAP-C02 exam, OpenSearch appears when the scenario says "cost-optimized SIEM" or "avoid third-party licensing."

Data Transfer Cost Considerations

A large SIEM ingestion pipeline can move petabytes per month across VPC or cross-region. Centralized security logging cost is dominated by: GB ingested by SIEM license (Splunk charges per daily GB), GB egress if SIEM is outside AWS (Splunk Cloud egress from AWS), and Firehose per-GB processing plus retries. Mitigation: use Security Lake's OCSF normalization to drop noisy fields before SIEM ingest, sample low-criticality streams (sandbox VPC Flow Logs at 10 percent), and use VPC endpoints for Firehose if the SIEM is on AWS to avoid NAT Gateway charges.
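
Sampling should be deterministic so that all records for one flow sample consistently — a hash-based sketch suitable for a Firehose transform Lambda (the flow-key format here is an assumption; use whatever stable key your records carry):

```python
import hashlib

# Keep roughly `sample_percent` of records, deterministically per key,
# so repeated records for the same flow are all kept or all dropped.
def keep_record(flow_key: str, sample_percent: int = 10) -> bool:
    digest = hashlib.sha256(flow_key.encode()).digest()
    bucket = digest[0] * 256 + digest[1]   # 0..65535
    return bucket % 100 < sample_percent

# Illustrative flow keys (hypothetical format: "<eni>:<dst-port>"):
records = [f"eni-0abc{i:04d}:443" for i in range(10_000)]
kept = sum(keep_record(r) for r in records)
print(f"kept {kept} of {len(records)} (~{kept / len(records):.0%})")
```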

AWS Audit Manager for Compliance Evidence Collection

Audit Manager packages centralized security logging data into auditor-ready evidence aligned to compliance frameworks.

How Audit Manager Uses Centralized Security Logging

Audit Manager continuously collects evidence from CloudTrail (for "who did what when" attestation), AWS Config (for configuration-compliance attestation), and Security Hub (for "was this control in a passing state") across every account in scope. It maps each piece of evidence to specific control IDs in frameworks like SOC 2 Type 2, PCI DSS 4.0, HIPAA Security Rule, NIST 800-53, ISO 27001, GDPR.

Multi-Account Assessments

With AWS Organizations delegated admin, a single Audit Manager assessment in the Security account scopes across every member account automatically. At audit time, you export the assessment report as PDF plus structured evidence archive — exactly what the external auditor requests.

Why Audit Manager Belongs in Centralized Security Logging

Without centralized CloudTrail, centralized Config, and Security Hub aggregation already in place, Audit Manager has nothing to collect. Audit Manager is the top-of-stack consumer of the centralized security logging foundation — it is meaningless in isolation. When an SAP-C02 scenario says "we need to prepare for SOC 2 Type 2 audit," the answer chain is: ensure CloudTrail organization trail → ensure Config aggregator → enable Security Hub with FSBP + SOC 2 controls → enable Audit Manager with SOC 2 framework assessment scoped to the organization. Audit Manager without centralized security logging produces empty evidence.

The Pro Audit Workflow — Investigating a Cross-Account Incident

A canonical SAP-C02 scenario: "A GuardDuty finding fires for cryptocurrency mining in account 42 at 03:17 UTC. Walk through the investigation workflow using centralized security logging." Here is the reference answer.

Step 1 — Triage in Security Hub

Open Security Hub in the aggregation region. The finding has full context: affected account 42, affected EC2 instance ID, severity, associated IAM role if any. Click through to GuardDuty for the full finding details including observed domain or IP destinations.

Step 2 — Verify in VPC Flow Logs

Query the Log Archive Athena table for flow logs from the affected ENI over the last 24 hours. Confirm external IP destinations match the mining-pool IPs in the GuardDuty finding. Identify any other ENIs that contacted the same destinations (lateral movement detection).

Step 3 — Correlate With DNS Queries

Query Route 53 Resolver query log table for DNS resolutions from the affected instance. Identify domains like pool.supportxmr.com or similar mining-pool DNS. Cross-reference across all VPCs — if another instance resolved the same, investigate it too.

Step 4 — Reconstruct Access in CloudTrail Lake

SQL query in CloudTrail Lake: SELECT * FROM <event-data-store-ID> WHERE userIdentity.sessionContext.sessionIssuer.arn LIKE '%AffectedInstanceRole%' AND eventTime > '2026-04-19 03:00:00'. This identifies the session that established persistence — did it run aws configure to add credentials, did it call ec2:RunInstances to spread, did it call s3:GetObject on sensitive buckets?
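
Expanded, the Step 4 query looks like the following sketch — CloudTrail Lake SQL selects FROM the event data store ID; the ID and role-name fragment below are placeholders:

```python
# CloudTrail Lake forensic query for the compromised role's sessions.
EVENT_DATA_STORE_ID = "a1b2c3d4-example"   # placeholder
ROLE_FRAGMENT = "AffectedInstanceRole"     # placeholder

query = f"""
SELECT eventTime, eventName, eventSource, sourceIPAddress
FROM {EVENT_DATA_STORE_ID}
WHERE userIdentity.sessionContext.sessionIssuer.arn LIKE '%{ROLE_FRAGMENT}%'
  AND eventTime > '2026-04-19 03:00:00'
ORDER BY eventTime
"""
print(query)
```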

Step 5 — Scope Affected Data

If CloudTrail data events are enabled on sensitive S3 buckets, query the data-event table for GetObject calls by the compromised role. Determine what data was exfiltrated.

Step 6 — Remediate via Systems Manager

From the Security account, invoke SSM Automation runbook AWS-IsolateInstance to apply an isolation security group. Invoke key-rotation on any credentials the instance used. Disable the IAM role while forensic imaging is captured.

Step 7 — Document in Audit Manager

Attach finding, investigation evidence, and remediation actions to the Audit Manager assessment for the affected framework (SOC 2 CC7.3 incident response, PCI DSS 12.10 incident response plan). This step produces the auditor-ready evidence.

Total investigation time with centralized security logging: 30-90 minutes. Without centralized security logging — hopping between 15 account consoles — the same investigation takes days and typically misses cross-account indicators.

The Pro Retrofit Pattern — Centralizing 15 Accounts in 30 Days

The most-tested SAP-C02 scenario pattern: an existing AWS Organization with 15 member accounts, each logging locally (or not at all), needs a central SOC view and centralized security logging architecture deployed in 30 days.

Week 1 — Foundation

  1. Dedicate a Log Archive account and Security/Audit account via Control Tower Account Factory if using Control Tower; otherwise provision manually.
  2. Create the Log Archive S3 bucket with default encryption via org-shared KMS CMK, versioning, Object Lock Compliance mode at 7-year retention, lifecycle rules (Standard → Standard-IA → Glacier Instant → Glacier Deep Archive).
  3. Create the CloudTrail organization trail from the management account, delivering to the Log Archive bucket. Enable management events (free tier) plus data events on sensitive S3 and all Lambda. Enable CloudTrail Insights. Enable log-file integrity validation.
  4. Designate Config delegated admin to Security account, create organization-wide Config with delivery to Log Archive bucket, enable Config aggregator.
  5. Designate GuardDuty, Security Hub, Inspector v2, Macie, Access Analyzer delegated admins to Security account. Enable each org-wide with auto-enable for new accounts.
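
Step 3 requires a bucket policy on the Log Archive bucket that lets the organization trail deliver. A sketch of the standard two-statement policy follows — bucket name, account ID, organization ID, and trail ARN are all placeholders:

```python
import json

BUCKET = "example-log-archive"
MGMT_ACCOUNT = "111111111111"
ORG_ID = "o-exampleorgid"
TRAIL_ARN = f"arn:aws:cloudtrail:us-east-1:{MGMT_ACCOUNT}:trail/org-trail"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # CloudTrail checks bucket ownership before delivering
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
            "Condition": {"StringEquals": {"aws:SourceArn": TRAIL_ARN}},
        },
        {   # Org trails write under both the management-account and org-ID prefixes
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}/AWSLogs/{MGMT_ACCOUNT}/*",
                f"arn:aws:s3:::{BUCKET}/AWSLogs/{ORG_ID}/*",
            ],
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control",
                    "aws:SourceArn": TRAIL_ARN,
                }
            },
        },
    ],
}
print(json.dumps(bucket_policy, indent=2))
```

The aws:SourceArn condition confines delivery to the one organization trail, so a rogue account cannot write fake log files into the archive.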

Week 2 — Network and DNS Logs

  1. Enable VPC Flow Logs at VPC level in every VPC using CloudFormation StackSets from the Security account. Custom format with all 34 fields. Direct delivery to Log Archive S3 bucket.
  2. Create Route 53 Resolver query log configuration in the Security or Networking account. Share via RAM. Associate with every VPC. Deliver to Log Archive bucket.
  3. Deploy AWS WAF via Firewall Manager to cover every ALB / CloudFront / API Gateway in every member account. Configure WAF logging to Firehose delivering to Log Archive S3.
  4. Set up Athena workgroup and Glue Data Catalog tables in the Security account for flow logs, DNS logs, WAF logs, CloudTrail. Partition projection enabled to reduce query cost.

Week 3 — Security Lake and Finding Aggregation

  1. Enable Amazon Security Lake in the Security account as delegated admin. Ingest CloudTrail, VPC Flow Logs, Route 53 Resolver logs, Security Hub findings, and WAF logs. Choose the Log Archive bucket region as the primary, plus any additional regions where workloads run.
  2. Configure Security Hub cross-region aggregation with the same region as primary. Enable AWS Foundational Security Best Practices and any required compliance standards (CIS, PCI, NIST).
  3. Add a Splunk or Datadog subscriber to Security Lake using the SIEM's native Security Lake integration. Validate OCSF records flowing into the SIEM.
  4. Enable CloudTrail Lake with an organization-scope event data store, 10-year retention. Validate SQL queries return events across all accounts.

Week 4 — Response Automation and Evidence

  1. Build EventBridge rules in the Security account for high-severity findings → SNS → PagerDuty for the SOC on-call. Automation rules in Security Hub for common false-positives.
  2. Deploy Systems Manager Automation runbooks for AWS-IsolateInstance, AWS-DisableIAMAccessKey, and custom runbooks for your stack. Delegate execution from Security account into member accounts via cross-account roles.
  3. Enable AWS Audit Manager with the required framework assessments (SOC 2, PCI, or sector-specific). Scope to the entire Organization.
  4. Run a tabletop exercise simulating a compromised EC2 instance finding and walk the seven-step investigation workflow documented above. Refine the runbooks based on gaps discovered.

Outcome After 30 Days

Every log stream from every account flows to a central immutable store. SOC analysts query from one console (CloudTrail Lake, Athena, Security Hub). Third-party SIEM receives normalized OCSF events. Findings route to on-call within five minutes. Audit Manager continuously collects compliance evidence. New accounts added via Account Factory inherit all of this automatically.

CloudTrail data events are OFF by default, even with an organization trail. The most expensive centralized security logging mistake is assuming "organization trail covers everything" — it only covers management events unless data events are explicitly enabled per resource. A compromised S3 bucket with no data events enabled produces zero GetObject records — the investigator cannot prove or disprove exfiltration. Always enable data events on: all Lambda functions (cheap, high value), S3 buckets holding regulated data, DynamoDB tables holding PII. Data events are priced per event — for very high-throughput S3 buckets, use selectors to narrow scope (specific prefixes, specific event types). On the SAP-C02 exam, any scenario asking "how do we prove what S3 objects were accessed" requires CloudTrail data events, not just management events.
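
Enabling data events with narrowed scope uses CloudTrail advanced event selectors. A sketch covering one sensitive S3 prefix plus all Lambda functions — the bucket ARN is a placeholder, while the field names follow the AdvancedEventSelectors schema:

```python
import json

# Advanced event selectors: S3 data events on one regulated prefix,
# plus all Lambda invocations (cheap, high value).
selectors = [
    {
        "Name": "Sensitive S3 objects only",
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Data"]},
            {"Field": "resources.type", "Equals": ["AWS::S3::Object"]},
            {"Field": "resources.ARN",
             "StartsWith": ["arn:aws:s3:::example-regulated-data/pii/"]},
        ],
    },
    {
        "Name": "All Lambda invocations",
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Data"]},
            {"Field": "resources.type", "Equals": ["AWS::Lambda::Function"]},
        ],
    },
]

print(json.dumps(selectors, indent=2))
```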

Centralized Security Logging — Cost Architecture

Centralized security logging has recurring costs that scale with organization size and log volume. The SAP-C02 exam will test cost-aware centralized security logging design.

CloudTrail Cost Model

Management events: first copy free (the organization trail), additional trails $2 per 100,000 events. Data events: $0.10 per 100,000 events. CloudTrail Insights: $0.35 per 100,000 events analyzed. CloudTrail Lake: $2.50 per GB ingested + $0.005 per GB scanned. For a 100-account organization with data events enabled on sensitive resources and CloudTrail Lake with 10-year retention, budget $5K-20K per month depending on activity volume.

Security Lake Cost Model

Security Lake is priced per GB ingested per month, with tiered discounts at higher volumes. A typical multi-account deployment ingests 1-10 TB per month across all streams. At the standard rate, $1-5K per month for the aggregation service alone — plus S3 storage, Lake Formation, and downstream Athena or subscriber costs.

S3 Storage Cost With Lifecycle

One year of centralized logs from a 50-account organization is typically 10-50 TB. With lifecycle to Glacier Deep Archive after day 180, long-term cost drops to $0.00099 per GB per month for archived data. Ten-year Object-Lock compliance retention remains under $10K per year for mid-size organizations.

VPC Flow Logs Cost

Flow log ingestion to S3 is $0.50 per GB for the data ingestion fee. A high-throughput production VPC can generate 100 GB per day. Organization-wide flow logs can reach $10-50K per month at enterprise scale — the largest single line item in centralized security logging.

Firehose Cost

Firehose charges per GB ingested (under 500 TB/month: $0.029 per GB). For 10 TB per month of centralized security logging through Firehose, $290/month — cheap compared to downstream SIEM costs.
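
The figure above can be reproduced directly (decimal units, 1 TB = 1,000 GB, matching AWS pricing pages):

```python
# Firehose cost for 10 TB/month at the sub-500 TB/month rate.
GB_PER_TB = 1_000
monthly_gb = 10 * GB_PER_TB
cost = monthly_gb * 0.029        # $0.029 per GB ingested
print(f"${cost:.0f}/month")      # $290/month
```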

Cost Optimization Patterns

  • Sample non-critical streams — 10 percent sampling for sandbox VPC Flow Logs.
  • Aggressive lifecycle — move to Glacier Deep Archive as soon as compliance allows.
  • OCSF normalization first — Security Lake drops redundant fields, reducing downstream SIEM ingest volume.
  • SCP-enforced region restriction — centralized security logging in regions you do not use wastes money; SCP deny non-approved regions.

Common Exam Traps for Centralized Security Logging

Trap 1 — A Single Account Can Hold Both Logs and Tooling

Wrong. The canonical architecture separates Log Archive (immutable evidence) from Security/Audit (active tooling). Answers combining them are a signal of a distractor.

Trap 2 — CloudWatch Cross-Account Observability Replaces Subscription Filters

Wrong. Cross-Account Observability is for interactive dashboards in the monitoring account console. Subscription filters are for automated pipelines (to Firehose, Lambda, destinations). They solve different problems — mature centralized security logging uses both.

Trap 3 — CloudTrail Lake Eliminates the Need for S3 Archive

Wrong. CloudTrail Lake is queryable storage; S3 with Object Lock is tamper-evident regulatory storage. CloudTrail Lake is not currently a certified WORM store for SEC 17a-4 purposes — S3 Object Lock Compliance mode is. Most mature centralized security logging runs both.

Trap 4 — Security Lake Replaces CloudTrail Lake

Partial. Security Lake ingests CloudTrail events, normalizes them to OCSF, and serves subscribers. CloudTrail Lake is a purpose-built SQL query engine for CloudTrail-specific forensics with richer CloudTrail semantics. They are complementary: Security Lake for cross-source OCSF analytics, CloudTrail Lake for deep CloudTrail queries.

Trap 5 — VPC Flow Logs Cover All Network Traffic

Wrong. VPC Flow Logs capture IP-layer metadata for traffic traversing ENIs. They do not capture: traffic inside the same ENI (localhost), link-local Amazon DNS queries to 169.254.169.253 (use Resolver query logging instead), traffic mirrored at line rate (use VPC Traffic Mirroring), encrypted-payload content (flow logs are metadata only).

Trap 6 — S3 Object Lock Governance Mode Is Tamper-Proof

Wrong. Governance mode permits authorized principals with s3:BypassGovernanceRetention to delete. Only Compliance mode prevents all deletion including by root. Regulated workloads require Compliance mode.

Trap 7 — Organization Trail Covers Data Events Automatically

Wrong. Data events are opt-in per resource, even on organization trails. Forgetting to enable data events on sensitive S3 buckets is the single most common centralized security logging gap uncovered in audits.

Trap 8 — GuardDuty Alone Is Centralized Security Logging

Wrong. GuardDuty is one finding source. Centralized security logging is the umbrella architecture collecting CloudTrail, Flow Logs, Resolver logs, WAF logs, Security Hub findings, Inspector findings, and Config snapshots into one normalized, retention-governed store.

Trap 9 — Security Lake Requires Splunk or Datadog

Wrong. Security Lake works with no subscriber at all — it normalizes data into the customer's S3 bucket and provides Lake Formation access. Third-party SIEMs are optional downstream consumers. Many organizations use Security Lake purely for in-house Athena analytics.

Trap 10 — CloudWatch Logs Retention Is Automatic

Wrong. Default retention is "never expire." Explicit retention settings are required or logs accumulate indefinitely and drive unnecessary cost. Use Config rule cw-loggroup-retention-period-check to audit.

Key Numbers and Must-Memorize Centralized Security Logging Facts

CloudTrail

  • Organization trail from management account or CloudTrail delegated admin.
  • Management events: free first copy. Data events: opt-in, priced per event.
  • CloudTrail Lake: up to 10-year retention, SQL queryable, immutable.
  • Integrity validation produces hourly SHA-256 RSA signed digest files.

Security Lake

  • Delegated admin in Security account.
  • OCSF JSON normalization, partitioned S3 storage in customer-owned bucket.
  • Query subscribers via Lake Formation grant; data subscribers via SQS + S3 pull.
  • Native subscribers: Splunk, Datadog, IBM QRadar, Palo Alto, SentinelOne.

VPC Flow Logs

  • VPC / subnet / ENI granularity.
  • Destinations: S3 (cheapest), CloudWatch Logs (expensive), Firehose (real-time).
  • Custom format with 34 fields vs default 14 — always use custom.
  • Partition projection on Athena reduces query cost 50x-100x.

CloudWatch Logs Cross-Account

  • Destination is a named endpoint wrapping a Kinesis Data Stream or Firehose delivery stream.
  • Subscription filter from member account → destination in Security account.
  • Cross-Account Observability: monitoring account + source accounts for unified dashboards.

S3 Object Lock

  • Governance mode — bypass-able by authorized principals.
  • Compliance mode — no bypass, including root account.
  • Required for SEC 17a-4, FINRA 4511, CFTC 1.31(c)-(d).

Route 53 Resolver Query Logging

  • Per-VPC configuration, shareable via RAM.
  • Destinations: CloudWatch Logs, S3, Firehose.
  • Captures every DNS query including to private hosted zones and forwarded on-prem.

Finding Aggregation

  • GuardDuty, Security Hub, Inspector v2, Macie, Access Analyzer — all delegated-admin pattern.
  • Security Hub cross-region aggregation: one aggregation region per org.
  • ASFF is the finding format; OCSF is the log format (Security Lake).

Audit Manager

  • Consumes CloudTrail, Config, Security Hub.
  • Frameworks: SOC 2, PCI DSS, HIPAA, NIST 800-53, ISO 27001, GDPR, FedRAMP, AWS Well-Architected.
  • Organization-scope assessments from delegated admin.

Retrofit Timing

  • Foundation (CloudTrail + Config + findings delegated admins): Week 1.
  • Network + DNS + WAF logs: Week 2.
  • Security Lake + SIEM subscriber + CloudTrail Lake: Week 3.
  • Automation + Audit Manager + tabletop: Week 4.

FAQ — Centralized Security Logging Top Questions

Q1 — What is the difference between CloudTrail Lake and Amazon Security Lake, and which should I use?

CloudTrail Lake is a purpose-built SQL query engine for CloudTrail events (management, data, Insights, plus custom CloudTrail-schema events). It retains up to 10 years, is immutable, and excels at deep CloudTrail forensics with CloudTrail-specific semantics. Amazon Security Lake is a broader centralized security logging data lake ingesting CloudTrail, VPC Flow Logs, Route 53 Resolver logs, Security Hub findings, and WAF logs (plus custom sources), normalizing everything to OCSF for cross-source analytics and SIEM subscribers. Use both: CloudTrail Lake when the analyst is specifically querying CloudTrail with the richest semantics; Security Lake when the analyst needs cross-source queries (a CloudTrail API call correlated with a VPC Flow Log and a WAF decision) or when a third-party SIEM consumes data. They are complementary, not competitive.

Q2 — For a 50-account organization, should VPC Flow Logs go to S3 or CloudWatch Logs?

S3 for centralized security logging archival — it is roughly 5x cheaper per GB than CloudWatch Logs and integrates natively with Security Lake. Use CloudWatch Logs only when you need real-time subscription-filter forwarding to a Lambda or Firehose for specific security events (a subset). The canonical production pattern for 50 accounts is direct-to-S3 delivery with a Glue table and Athena partition projection for queries, plus optional Firehose for real-time streams to a SIEM. CloudWatch Logs in the middle of this path wastes money without adding capability.

Q3 — How do I centralize CloudWatch Logs from 15 accounts to a single Splunk instance?

Three components. (1) In the Security account, create a Kinesis Firehose delivery stream targeting Splunk HTTP Event Collector with appropriate buffering and retry. (2) In the Security account, create a CloudWatch Logs destination wrapping that Firehose. Its access policy grants logs:PutSubscriptionFilter to each of the 15 member account IDs. (3) In each member account, create subscription filters on the log groups you want forwarded, pointing at the destination's ARN. Filter patterns select only security-relevant events. Result: every matching log event in every account flows through one central Firehose into Splunk, with buffering, retries, and backup-to-S3 on failure handled by AWS. Use CloudFormation StackSets to deploy the subscription filters at scale.
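
Component (2) is the piece candidates most often get wrong. A sketch of the destination access policy — account IDs, region, and destination name are placeholders:

```python
import json

# CloudWatch Logs destination access policy: each member account may
# attach subscription filters pointing at this destination.
MEMBER_ACCOUNTS = [f"2222222222{i:02d}" for i in range(15)]  # placeholders
DEST_ARN = "arn:aws:logs:us-east-1:111111111111:destination:to-splunk"

access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": MEMBER_ACCOUNTS},
            "Action": "logs:PutSubscriptionFilter",
            "Resource": DEST_ARN,
        }
    ],
}

print(json.dumps(access_policy, indent=2))
```

In practice, grant by organization ID condition rather than enumerating account IDs if you want new accounts to inherit access automatically.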

Q4 — What is OCSF and why does Amazon Security Lake use it?

Open Cybersecurity Schema Framework (OCSF) is an open vendor-neutral standard for security event data, co-developed by AWS, Splunk, IBM, and others. Before OCSF, every security tool had its own schema — CloudTrail fields, VPC Flow Log fields, Splunk Common Information Model, Elastic Common Schema, Microsoft Sentinel schema — forcing analysts to learn each one and write tool-specific queries. OCSF unifies field names (actor.user.name, src_endpoint.ip, activity_id, disposition) across categories (authentication, network activity, file activity, findings) so one query works regardless of source. Security Lake normalizes all ingested data to OCSF before storing, which means a downstream SIEM consumer receives uniformly-structured events regardless of whether they originated in CloudTrail, VPC Flow Logs, or a custom EDR source. This is a major centralized security logging accelerator.

Q5 — How do I enable S3 Object Lock on the centralized security logging Log Archive bucket without breaking existing CloudTrail delivery?

S3 Object Lock must be enabled at bucket creation time — it cannot be enabled on an existing bucket. The retrofit path: create a new bucket with Object Lock enabled and default retention configured, create a new CloudTrail trail delivering to the new bucket (or update the existing trail's destination), validate delivery, then migrate historical data from the old bucket to the new bucket using S3 Batch Operations (Batch Operations can apply retention at copy time for Object Lock). After validation, remove the old trail configuration. Choose Compliance mode rather than Governance mode for regulated workloads because Governance is bypass-able. Object Lock is per-object; Batch Operations sets per-object retention during copy.

Q6 — Does AWS Security Hub require CloudTrail to be enabled?

Yes, effectively. Security Hub's Foundational Security Best Practices and CIS controls include rules that require CloudTrail to be enabled and logging to an S3 bucket with integrity validation. Without CloudTrail, Security Hub compliance scores plummet and several controls fail outright. More broadly, Security Hub aggregates findings from GuardDuty (which itself consumes CloudTrail event stream as a primary data source), so even ignoring the specific FSBP controls, the detective value of Security Hub depends on CloudTrail. Always enable the organization CloudTrail trail before enabling Security Hub delegated admin — it is the foundational stream.

Q7 — Can I use AWS Firewall Manager to deploy WAF logging across 100 accounts?

Yes. Firewall Manager policies include a logging configuration that is applied to every Web ACL the policy deploys. Setup: in the Security account (the Firewall Manager delegated admin), create a Firewall Manager policy of type WAFv2 with a specified rule group and a logging configuration pointing at a Firehose delivery stream ARN. The policy targets the specified OUs or accounts, and Firewall Manager deploys the Web ACL with logging enabled to every target account's ALBs, CloudFront distributions, and API Gateway stages. A single Firewall Manager policy replaces 100 manual per-account WAF configurations and guarantees logging consistency.

Q8 — What log retention is required for SOC 2 Type 2, PCI DSS, and HIPAA?

SOC 2 Type 2 does not mandate a specific retention period; auditors typically expect at least 12 months of logs covering the audit period, and some request two full audit cycles. PCI DSS 4.0 Requirement 10.5.1 requires at least one year of log retention, with the most recent three months immediately available for analysis. HIPAA Security Rule 164.316(b)(2) requires documentation retention for six years from the later of creation or last effective date. Financial services regulations add accessibility requirements: SEC Rule 17a-4 requires six years of retention, with the first two years kept readily accessible. The centralized security logging lifecycle pattern for a regulated organization is: S3 Standard for 90 days (readily accessible), S3 Standard-IA for 6-12 months, S3 Glacier Instant Retrieval for 1-3 years, then S3 Glacier Deep Archive out to the 7-year or 10-year mark, with Compliance-mode Object Lock throughout.
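The tiering pattern above maps directly onto an S3 lifecycle configuration. A minimal sketch follows; the rule ID and the exact day counts are assumptions chosen to match the prose, and the expiration must land at or beyond the Object Lock retain-until dates.

```python
import json

# Sketch of the lifecycle configuration implementing the tiering above.
lifecycle = {
    "Rules": [
        {
            "ID": "security-log-retention",  # assumed rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},        # apply to every log object
            "Transitions": [
                {"Days": 90, "StorageClass": "STANDARD_IA"},    # after hot window
                {"Days": 365, "StorageClass": "GLACIER_IR"},    # ~1 year
                {"Days": 1095, "StorageClass": "DEEP_ARCHIVE"}, # ~3 years
            ],
            # ~7 years; adjust to whichever mandate applies, and never
            # shorter than the bucket's Object Lock retention.
            "Expiration": {"Days": 2555},
        }
    ]
}
print(json.dumps(lifecycle))
```

With Compliance-mode Object Lock in place, the expiration only takes effect once each object's retain-until date has passed, so the two mechanisms compose safely.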

Q9 — How do I centralize security logs from accounts that are not yet in my AWS Organization?

Two paths, depending on whether the external accounts can join your Organization. If they can (the standard enterprise-acquisition pattern), invite them via Organizations; once they accept, the organization CloudTrail trail and delegated-admin services pick them up automatically. If they cannot join (joint venture, vendor account, partner), use a cross-account S3 bucket policy on your Log Archive bucket granting the cloudtrail.amazonaws.com service principal s3:PutObject, with aws:SourceArn and aws:SourceAccount conditions scoped to the external account; the external account then creates its own trail delivering to your bucket. For VPC Flow Logs and other streams, a similar resource-policy approach works. Note: external-account logs lack the automatic delegated-admin aggregation in GuardDuty / Security Hub / Inspector — those services require organization membership for their centralized features.
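The cross-account bucket-policy statement can be sketched as follows. The account ID and bucket name are placeholders; the condition keys are the ones named above.

```python
import json

# Placeholders -- substitute the real external account and bucket.
EXTERNAL_ACCOUNT = "111122223333"
BUCKET = "example-log-archive"

# Sketch of the Log Archive bucket-policy statement letting an external
# (non-Organization) account's CloudTrail write into the bucket.
statement = {
    "Sid": "ExternalAccountCloudTrailWrite",
    "Effect": "Allow",
    "Principal": {"Service": "cloudtrail.amazonaws.com"},
    "Action": "s3:PutObject",
    # CloudTrail writes under AWSLogs/<account-id>/ by convention.
    "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{EXTERNAL_ACCOUNT}/*",
    "Condition": {
        "StringEquals": {
            "s3:x-amz-acl": "bucket-owner-full-control",
            "aws:SourceAccount": EXTERNAL_ACCOUNT,
        },
        "ArnLike": {
            # Only trails owned by the named external account qualify.
            "aws:SourceArn": f"arn:aws:cloudtrail:*:{EXTERNAL_ACCOUNT}:trail/*"
        },
    },
}
print(json.dumps({"Version": "2012-10-17", "Statement": [statement]}))
```

The aws:SourceAccount and aws:SourceArn conditions are the confused-deputy guard: without them, any account could point a trail at your bucket through the shared CloudTrail service principal.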

Q10 — What is the single highest-ROI centralized security logging improvement for an organization that currently has nothing?

Enabling an organization-wide CloudTrail trail delivering to a Log Archive account S3 bucket with Object Lock in Compliance mode. This single action produces a tamper-evident record of every control-plane API call across every account and every region, the foundation for GuardDuty detection, Security Hub compliance scoring, and Audit Manager evidence collection, and the minimum acceptable audit trail for most regulators. Cost: near zero (the first copy of management events is delivered free; S3 storage with lifecycle tiering is cents per GB per month). Time to implement: one day. Everything else in this guide builds on top of this foundation — if you only do one thing, do this.
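That one action reduces to a single CreateTrail call. A sketch follows, written as a parameter dict so it can be inspected without AWS credentials; the trail and bucket names are placeholders.

```python
# Sketch: parameters for the organization trail that anchors everything
# else in this guide. Names are placeholders.
trail_params = {
    "Name": "org-trail",
    "S3BucketName": "example-log-archive",  # the Object Lock bucket
    "IsOrganizationTrail": True,            # covers every member account
    "IsMultiRegionTrail": True,             # covers every region
    "EnableLogFileValidation": True,        # digest files for tamper-evidence
}

# With credentials in the management account, this would be:
#   cloudtrail = boto3.client("cloudtrail")
#   cloudtrail.create_trail(**trail_params)
#   cloudtrail.start_logging(Name=trail_params["Name"])
print("organization trail parameters:", trail_params["Name"])
```

Note that CreateTrail does not start delivery by itself; the follow-up StartLogging call (shown in the comment) is what turns the stream on.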

Further Reading — Official AWS Documentation for Centralized Security Logging

For depth beyond SAP-C02 scope, the authoritative AWS sources are: AWS CloudTrail User Guide (especially the organization-trail and CloudTrail Lake sections), Amazon Security Lake User Guide (OCSF schema, subscriber management, custom sources), VPC User Guide (Flow Logs section), Amazon CloudWatch Logs User Guide (cross-account subscriptions, subscription filters, Cross-Account Observability), Amazon Data Firehose Developer Guide, Route 53 Developer Guide (Resolver query logging), AWS WAF Developer Guide (logging section), Amazon S3 User Guide (Object Lock), GuardDuty / Security Hub / Inspector user guides (organization integration sections), and AWS Audit Manager User Guide.

The AWS Security Reference Architecture (SRA) whitepaper is the canonical centralized security logging reference — it codifies the Log Archive / Security account / delegated admin pattern and is mandatory reading for the SAP-C02 exam. The AWS Well-Architected Security Pillar whitepaper provides the conceptual anchors. AWS re:Inforce recorded sessions from the last two years include multiple deep dives on Security Lake and OCSF in production use. Finally, the OCSF Schema Browser (schema.ocsf.io) is essential for custom-source work.

Official sources