
Centralized Logging: CloudTrail and VPC Flow Logs

5,400 words · ≈ 27 min read

Centralized logging is the foundation that every other Domain 2 control depends on — without a tamper-evident, complete, queryable record of what happened across every account and every region, threat detection becomes guesswork, incident response becomes archaeology, and compliance evidence becomes fiction. On the SCS-C02 exam, centralized logging anchors Task 2.3 (Design and implement a logging solution) and Task 2.4 (Troubleshoot logging solutions) of Domain 2 (Security Logging and Monitoring, 18 percent weight). The Specialty exam pushes harder than SAP-C02 on troubleshooting — you will be handed a scenario where CloudTrail says "trail enabled" but no objects appear in S3, or a VPC Flow Log configuration shows "Active" but Athena returns zero rows, and you must trace the failure to an exact line in a bucket policy, KMS key policy, IAM role, or log group resource policy.

This guide is built for the security-engineer perspective. It covers the centralized-logging architecture (CloudTrail organization trail, VPC Flow Logs, Route 53 Resolver query logs, CloudWatch Logs cross-account delivery to a Log Archive account), then drills deep into the troubleshooting decision trees the exam loves: missing logs, AccessDenied at the destination, KMS encryption failures, log file validation mismatches, and cross-account delivery bugs. It also covers log integrity guarantees — CloudTrail digest files, S3 Object Lock for WORM immutability, and chain-of-custody patterns that hold up to audit and to legal discovery.

What Is Centralized Logging in the SCS-C02 Context?

Centralized logging in AWS is the discipline of delivering every security-relevant log stream — CloudTrail management and data events, VPC Flow Logs, Route 53 Resolver query logs, CloudWatch Logs, ALB access logs, WAF logs — from every member account into a separate, hardened Log Archive account so that a security engineer can investigate, detect, and prove integrity across the entire organization from one place. Centralized logging on SCS-C02 is judged on three dimensions: completeness (every account, every region, every relevant stream is captured), integrity (logs cannot be altered or deleted by an attacker who compromises a workload), and operability (when something stops working, the security engineer can diagnose and restore in minutes, not days).

Why SCS-C02 Tests Centralized Logging Differently From SAP-C02

The Solutions Architect Professional exam tests architecture choice — given a 30-day retrofit deadline, pick the right combination of CloudTrail Lake, Security Lake, and SIEM subscribers. The Security Specialty exam tests mechanics and failure modes — given that CloudTrail logs stopped appearing in S3 three days ago, identify the exact misconfiguration. Expect SCS-C02 stems that paste a bucket policy, a KMS key policy, or an IAM role trust policy and ask "why is delivery failing?" The right answer requires reading the policy line by line.

The Six Centralized Logging Streams a Security Engineer Owns

A complete centralized logging setup feeds six streams into the Log Archive bucket: CloudTrail management events (control-plane API calls), CloudTrail data events (S3 GetObject, Lambda Invoke, DynamoDB GetItem on selected resources), VPC Flow Logs (IP-layer network metadata), Route 53 Resolver query logs (every DNS resolution from VPC workloads), CloudWatch Logs (OS-level and application logs forwarded via subscription filters), and service-specific logs (ALB access, WAF full-traffic, S3 server access). Each has a different delivery mechanism, a different IAM permission surface, and a different failure mode — and all six must converge to one immutable archive.

Plain-Language Explanation: Centralized Logging

Centralized logging is the kind of topic where six AWS services fight for the same paragraph. Three concrete analogies make the structure stick.

Analogy 1 — The Building Security CCTV and Logbook System

Imagine a corporate campus with fifteen buildings (your AWS member accounts), one central security office (your Log Archive account), and one investigations desk (your Security/Audit account). Centralized logging is how every keycard swipe, every camera frame, every visitor signature from every building ends up at the central office on tamper-proof storage with a forensic chain of custody.

The building access ledger that records every keycard swipe and every door event is CloudTrail — every API call across every account is written to a master logbook regardless of which building issued the swipe. The CCTV recorder vault with write-once optical discs in a sealed cabinet is the S3 Log Archive bucket with Object Lock Compliance mode — once footage lands, nobody can edit or delete it for the regulatory retention window. The wide-angle camera in every parking lot is VPC Flow Logs — capturing every car (packet) entering or leaving without recording what was inside. The visitor radio dispatcher logging every phone call to reception is Route 53 Resolver query logs — every DNS query (lookup) from every workload. The investigator's notepad flipping through ten years of swipe records by SQL is CloudTrail Lake. When the central security office discovers a missing day of footage, the troubleshooting walk-through is: did the camera have power (IAM role attached), did the cable reach the recorder (S3 bucket policy allows the service principal), did the recorder have storage (KMS key policy allows encryption), and did anybody trip a circuit breaker (SCP at the OU level). That walk-through is exactly the SCS-C02 Task 2.4 troubleshooting tree.

Analogy 2 — The Bank Branch Vault Network

Picture a national bank with fifteen branches (member accounts), one central archive vault (Log Archive account), and one fraud-investigations team (Security account). Centralized logging is how every transaction slip, every wire instruction, every audit trail from every branch reaches the central vault with bank-grade tamper evidence.

The transaction journal stamped with cryptographic hashes is CloudTrail with log file integrity validation — every hour, the bank publishes a digest file containing SHA-256 hashes of every log file, and that digest is signed with an RSA key only AWS holds. If anyone alters a log after delivery, validation fails. The bonded-courier service that picks up sealed bags from each branch and delivers them to the central vault is the CloudTrail-to-S3 cross-account delivery flow, with the bucket policy acting as the security guard at the vault door — bag accepted only if the courier wears the right uniform (cloudtrail.amazonaws.com service principal) and presents the right SourceArn ID. The vault safety-deposit box for ten-year retention is S3 Object Lock Compliance mode. The fraud team's microscope examining every transaction is CloudTrail data events — without it, the team can prove a manager opened the vault but not which boxes they touched. When a branch reports "we sent the courier but the bag never arrived," the fraud team works the troubleshooting tree: did the bag get sealed (trail status enabled), did the courier accept it (KMS key policy allows encryption), did the vault door open (bucket policy permits PutObject), and was the address correct (the S3 bucket ARN matches the trail config). Every miss in the chain leaves AccessDenied breadcrumbs in the bank's CloudTrail-on-CloudTrail self-monitoring.

Analogy 3 — The Hospital Emergency Department Record-Keeping

Picture a regional hospital network with fifteen clinics (member accounts), a central medical records department (Log Archive account), and an infection-control unit (Security account). Centralized logging is how every patient encounter — every prescription, every lab result, every vital-sign reading, every doctor's note — is duplicated to the central archive with provable chain of custody for malpractice and regulatory inspection.

The electronic health record system is CloudTrail — a single signed source of truth recording every clinical action across every clinic. The vital-signs telemetry stream is VPC Flow Logs — high-volume continuous data showing what flows in and out of each clinic. The infection-screening dispatch log is Route 53 Resolver query logs — every external lookup that might indicate a patient contacted an unsafe environment. The chain-of-custody envelope is the CloudTrail digest file — a tamper-evident wrapper proving the records were not altered. The regulatory storage vault with sealed envelopes in a bonded warehouse is S3 Object Lock Compliance mode. When the records department reports "clinic 7 has stopped sending records for 48 hours," the chief medical officer runs the troubleshooting checklist: is the local terminal logged in (CloudTrail trail status), does the doctor have prescribing rights (IAM role for delivery), is the printer connected (S3 bucket policy), is the toner cartridge installed (KMS key policy), and was there a fire-alarm test (SCP exemption) blocking the network? Each missing record points to one specific failure surface, and the security engineer must know which surface produces which symptom.

Reference Architecture for Centralized Logging on SCS-C02

The exam assumes the Security Reference Architecture (SRA) layout. Memorize the boxes — exam answers slot directly into them.

The Three Account Roles

  • Log Archive account — owns the S3 bucket receiving CloudTrail, Config, VPC Flow Logs, Resolver query logs, ALB access logs, and WAF logs. Bucket policy grants cross-account write to the relevant service principals. KMS key policy grants encrypt and decrypt to those principals. S3 Object Lock in Compliance mode enforces WORM retention.
  • Security/Audit account — hosts detective-tooling delegated admins (GuardDuty, Security Hub, Inspector v2, Macie, Detective, Access Analyzer), CloudTrail Lake event data stores for SQL forensics, CloudWatch Cross-Account Observability monitoring account, and Firehose pipelines to SIEM.
  • Member workload accounts — produce logs but do not retain them locally for security purposes. Local CloudWatch Logs may exist for application debugging; security-relevant streams always flow out to the Log Archive bucket.

Why Two Accounts, Not One

The Log Archive account holds raw evidence — write-only for producers, read-only for auditors, protected by Object Lock. Even SOC analysts cannot delete from it. The Security account holds active tooling — the GuardDuty console, Security Hub dashboards, CloudTrail Lake query consoles. Splitting the roles means a compromised SOC analyst credential cannot destroy the evidence archive, and a compromised Log Archive credential cannot turn off detective tooling. This separation is a hard exam signal.

Centralized logging always uses dedicated Log Archive and Security accounts that are separate from the management account and from any workload account. Putting log buckets in the management account violates least privilege and exposes evidence to management-account compromise. Putting detective tooling in the Log Archive account exposes analyst credentials to the same blast radius as the evidence archive. The SRA split — Log Archive (immutable storage), Security account (active tooling), management account exempt from both — is the answer pattern for every SCS-C02 centralized logging architecture question.

CloudTrail Organization Trail — The Foundation

CloudTrail is the first centralized logging stream. Without an organization trail, every other stream is incomplete because attackers can disable per-account trails.

Organization Trail Mechanics

An organization trail is created in the management account or from a delegated admin and applies to every current and future member account in every region. The trail delivers to one S3 bucket in the Log Archive account. Member accounts cannot disable, modify, or read the organization trail config — only the management account or delegated admin can. This is the structural defence against an attacker who compromises a workload account and tries to turn off logging.

Management Events vs Data Events

Management events (default on, free for the first copy) record control-plane API calls — RunInstances, CreateRole, PutBucketPolicy. Data events (off by default, priced per event) record data-plane activity — every S3 GetObject, every Lambda Invoke, every DynamoDB GetItem. SCS-C02 will test whether you know that data events are opt-in. A forensic question of the form "prove what objects were exfiltrated from this bucket" requires data events; if they were not enabled, you cannot answer.

CloudTrail Insights — Anomalous API Detection

CloudTrail Insights (paid add-on) flags anomalous patterns in management events — a sudden spike in DeleteObject, an unusual concentration of AssumeRole, a region used for the first time. Insights findings publish to EventBridge for automated response. For centralized logging, Insights runs at the organization-trail level so cross-account lateral movement is caught.

S3 Delivery — The Bucket Policy Anatomy

The Log Archive bucket policy needs three statements for CloudTrail to deliver successfully: an s3:GetBucketAcl on the bucket itself, an s3:PutObject on the per-account log prefix path with the bucket-owner-full-control ACL condition, and a Condition block pinning aws:SourceArn to the trail ARN to defeat confused-deputy attacks. Missing any of these produces a specific failure signature in the trail status: "delivery failed — AccessDenied" with the principal cloudtrail.amazonaws.com.
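The three requirements can be sketched as a policy document. This is a minimal illustration, not a drop-in artifact: the account ID, bucket name, and trail ARN are placeholders.

```python
import json

ACCOUNT_ID = "111111111111"          # placeholder management account ID
BUCKET = "example-log-archive"       # placeholder Log Archive bucket name
TRAIL_ARN = f"arn:aws:cloudtrail:us-east-1:{ACCOUNT_ID}:trail/org-trail"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # CloudTrail checks the bucket ACL before delivering
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
            "Condition": {"StringEquals": {"aws:SourceArn": TRAIL_ARN}},
        },
        {
            # Delivery itself: pinned to the per-account prefix, with the
            # required ACL and the confused-deputy SourceArn condition
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{ACCOUNT_ID}/*",
            "Condition": {"StringEquals": {
                "s3:x-amz-acl": "bucket-owner-full-control",
                "aws:SourceArn": TRAIL_ARN,
            }},
        },
    ],
}

policy_json = json.dumps(bucket_policy, indent=2)  # paste into put-bucket-policy
```

Note that the "three statements" the trail needs resolve to two policy statements plus the Condition blocks: drop any one of the three and delivery fails with the AccessDenied signature described above.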

KMS Encryption Of CloudTrail Logs

If the Log Archive bucket uses SSE-KMS (recommended), the KMS key policy must allow kms:GenerateDataKey* to the CloudTrail service principal with the same aws:SourceArn condition. Forgetting this is one of the most common centralized logging breakages — the bucket policy looks fine, but every PutObject fails because CloudTrail cannot generate a data key to encrypt the object before upload.

A CloudTrail trail writing to a SSE-KMS-encrypted bucket needs three policy permissions, not one. The S3 bucket policy must allow the CloudTrail service principal to PutObject. The KMS key policy must allow the CloudTrail service principal to GenerateDataKey. And both policies must include the matching aws:SourceArn condition pointing at the trail ARN. The S3 console does not surface the KMS dependency clearly, so a trail can be configured, encryption can be enabled, the trail status will say "Logging," and zero objects will land. The trail's most recent delivery error message is the diagnostic surface — read it before reading any policy.
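The KMS side of the dependency is one statement merged into the key policy. A sketch, with the trail ARN as a placeholder:

```python
TRAIL_ARN = "arn:aws:cloudtrail:us-east-1:111111111111:trail/org-trail"  # placeholder

# Statement to merge into the KMS key policy so CloudTrail can generate
# a data key to encrypt each log file before upload. Without it, the
# bucket policy can be perfect and every PutObject still fails.
cloudtrail_kms_statement = {
    "Sid": "AllowCloudTrailGenerateDataKey",
    "Effect": "Allow",
    "Principal": {"Service": "cloudtrail.amazonaws.com"},
    "Action": "kms:GenerateDataKey*",
    "Resource": "*",   # in a key policy, "*" means "this key"
    "Condition": {"StringEquals": {"aws:SourceArn": TRAIL_ARN}},
}
```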

Log File Integrity Validation — How CloudTrail Proves Tamper-Evidence

CloudTrail log file integrity validation produces an hourly digest file in a parallel S3 prefix (AWSLogs/<account>/CloudTrail-Digest/) containing the SHA-256 hash of each log file delivered in the previous hour, plus a digital signature signed with an RSA private key held by AWS. Each digest file also contains the SHA-256 hash of the previous digest, forming a hash chain. To validate, the security engineer runs aws cloudtrail validate-logs --trail-arn <arn> --start-time <ts>; the CLI fetches the digest, recomputes hashes on the actual log files, verifies the RSA signature on the digest, and walks the chain backward to the trail's first delivery. Any modification to any log file or any digest breaks validation and prints the exact file that failed.
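The hash-recomputation half of validation can be sketched in a few lines. This mirrors the documented digest schema (logFiles entries with s3Object and hashValue) but omits the RSA signature check on the digest itself, which validate-logs also performs; the file contents here are toy data.

```python
import hashlib

def verify_digest(digest: dict, fetch) -> list:
    """Recompute SHA-256 over each log file named in a digest and return
    the S3 keys whose hashes no longer match. fetch(key) returns the file
    bytes. Signature verification of the digest is omitted in this sketch."""
    mismatches = []
    for entry in digest["logFiles"]:
        actual = hashlib.sha256(fetch(entry["s3Object"])).hexdigest()
        if actual != entry["hashValue"]:
            mismatches.append(entry["s3Object"])
    return mismatches

# Toy example: one intact log file, then the same file after tampering.
files = {"AWSLogs/111111111111/CloudTrail/log1.json.gz": b'{"Records": []}'}
digest = {"logFiles": [{"s3Object": k,
                        "hashValue": hashlib.sha256(v).hexdigest()}
                       for k, v in files.items()]}
intact = verify_digest(digest, files.__getitem__)          # []
files["AWSLogs/111111111111/CloudTrail/log1.json.gz"] = b'{"Records": [1]}'
tampered = verify_digest(digest, files.__getitem__)        # names the altered file
```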

Why Log File Validation Matters For Chain Of Custody

In legal discovery and regulatory audit, the question is not just "what did the logs say" but "can you prove they were not altered after delivery." The CloudTrail digest hash chain provides this proof — even if an attacker compromised the bucket and modified a log file, the digest mismatch is detectable and the chain reveals exactly when. Always enable integrity validation on every centralized logging trail; auditors expect it and courts treat the signed digest as admissible cryptographic evidence.

VPC Flow Logs — Network Layer Centralized Logging

VPC Flow Logs capture IP-layer metadata for every packet traversing a VPC, subnet, or ENI. They are the second pillar of centralized logging.

Where to Enable Flow Logs

Three granularities: VPC-level (every ENI in the VPC, easiest org-wide rollout), subnet-level (focused on specific tiers like a DMZ), or ENI-level (targeted troubleshooting). For SCS-C02 baseline coverage, enable VPC-level in every VPC in every account and every region — automate via Config managed rule vpc-flow-logs-enabled plus SSM auto-remediation, or via CloudFormation StackSets from the management account.
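For per-VPC enablement inside such an automation, the API arguments look roughly like this. A sketch: the VPC ID and bucket ARN are placeholders, and the boto3 call is shown but commented out.

```python
# Arguments for ec2.create_flow_logs, one VPC at a time; loop over the
# output of describe_vpcs in every region for an org-wide rollout.
flow_log_args = {
    "ResourceType": "VPC",
    "ResourceIds": ["vpc-0123456789abcdef0"],       # placeholder VPC ID
    "TrafficType": "ALL",                           # ACCEPT + REJECT traffic
    "LogDestinationType": "s3",
    "LogDestination": "arn:aws:s3:::example-log-archive/vpc-flow-logs/",
    "MaxAggregationInterval": 60,   # 1-minute windows for faster detection
}
# import boto3
# boto3.client("ec2").create_flow_logs(**flow_log_args)
```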

The IAM Role For Flow Log Delivery

Flow Logs require an IAM role that the VPC Flow Logs service assumes to write to the destination. For S3 destinations, the service uses a service-linked construct — no role needed. For CloudWatch Logs destinations, a customer-managed IAM role with logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents, and logs:DescribeLogStreams is required, with a trust policy allowing vpc-flow-logs.amazonaws.com to assume it. Forgetting one permission produces partial delivery — log groups are created but events fail to land, or events land but no streams are created.
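The two halves of that role can be sketched as policy documents. Assumptions: logs:DescribeLogGroups is included alongside the four permissions named above, matching the role AWS's examples typically attach; resource scoping is left wide for brevity.

```python
# Trust policy: only the Flow Logs service may assume the delivery role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "vpc-flow-logs.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: dropping any one action produces the partial-delivery
# symptoms described above (empty log groups, missing streams).
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogGroup",
            "logs:CreateLogStream",
            "logs:PutLogEvents",
            "logs:DescribeLogGroups",
            "logs:DescribeLogStreams",
        ],
        "Resource": "*",
    }],
}
```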

Custom Flow Log Format

The default flow-log format has fourteen fields. The custom format adds twenty more — including vpc-id, subnet-id, instance-id, tcp-flags, pkt-srcaddr, pkt-dstaddr, flow-direction, and traffic-path. For security forensics, always use the custom format. The default lacks tcp-flags (needed to distinguish a SYN-only port scan from an established connection) and lacks pkt-srcaddr/pkt-dstaddr (needed to identify the original source behind a NAT-rewritten packet).
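Parsing a custom-format record is just zipping the configured field list against a whitespace-delimited line. The format string below is a hypothetical subset chosen for the SYN-scan example; the field identifiers themselves are real flow-log fields.

```python
# Hypothetical custom format for illustration; field names are real
# flow-log field identifiers.
CUSTOM_FORMAT = ("${vpc-id} ${srcaddr} ${dstaddr} ${dstport} "
                 "${tcp-flags} ${flow-direction} ${action}")
FIELDS = [f[2:-1] for f in CUSTOM_FORMAT.split()]   # strip "${" and "}"

def parse_record(line: str) -> dict:
    """Zip a whitespace-delimited record against the configured fields."""
    return dict(zip(FIELDS, line.split()))

def looks_like_syn_scan(rec: dict) -> bool:
    # tcp-flags value 2 means only SYN was seen in the aggregation
    # window: a connection attempt that never completed the handshake.
    return rec["tcp-flags"] == "2" and rec["action"] == "REJECT"

rec = parse_record("vpc-0abc 198.51.100.7 10.0.1.5 22 2 ingress REJECT")
```

This is exactly the distinction the default format cannot make: without tcp-flags, the rejected SYN above is indistinguishable from a torn-down established session.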

Destination Choice And Trade-Offs

  • S3 (recommended for archival) — direct delivery, partitioned by year/month/day, cheap long-term, Athena-queryable.
  • CloudWatch Logs — log group in the producing account, then subscription filter to Firehose for cross-account. Expensive per GB but enables real-time CW Logs Insights queries and metric filter alarms.
  • Amazon Data Firehose — direct delivery for real-time SIEM, with Lambda transform.

What Flow Logs Do Not Capture

Common SCS-C02 trap. Flow Logs do not capture: traffic to and from 169.254.169.254 (instance metadata service), DHCP traffic, traffic to the Amazon DNS resolver at 169.254.169.253 (use Resolver query logging instead), traffic to/from the reserved IP for the default VPC router, payload content (only metadata), and traffic that never reaches an ENI (intra-host loopback). If the question is "how do I see what DNS names a workload resolved," Flow Logs cannot answer — Route 53 Resolver query logging is required.

VPC Flow Logs do not capture DNS queries to the Amazon DNS resolver, instance metadata service requests, or packet payloads. Security engineers regularly assume "I have flow logs, so I can see DNS exfiltration" — they cannot. DNS lookups to the AWS-managed resolver at 169.254.169.253 and IMDS calls to 169.254.169.254 are explicitly excluded from Flow Logs, and payloads are never captured. Detect DNS exfiltration via Route 53 Resolver query logging. Detect IMDS abuse via CloudTrail (for STS calls using the role) or via IMDSv2 enforcement plus GuardDuty UnauthorizedAccess:EC2/MetadataDNSRebind. The exam will plant a stem like "we have full Flow Logs but cannot find the malicious domain" — the answer is to enable Resolver query logging.

Route 53 Resolver Query Logging — The DNS Pillar

DNS query logging is the third high-value stream and the one most often missed in centralized logging builds.

Enabling Resolver Query Logging

Per-VPC configuration with destination of CloudWatch Logs, S3, or Firehose. The recommended centralized pattern: create the query log configuration in the Networking or Security account, share via AWS RAM with all member accounts, associate with every VPC at creation time. For the Log Archive bucket destination, the bucket policy needs the delivery.logs.amazonaws.com service principal (S3 delivery of Resolver query logs rides the same vended-log delivery path as other services) with the appropriate aws:SourceAccount and aws:SourceArn conditions.

What Gets Logged

Every DNS query made by any resource in the associated VPC: query name, type, response code, answer records, source VPC and ENI. Queries to private hosted zones, forwarded queries to on-prem resolvers, and queries to public authoritative nameservers are all captured.

Detection Use Cases

  • Malware C2 — queries to known-malicious domains; GuardDuty's Backdoor:EC2/C&CActivity.B!DNS finding is generated from this stream.
  • DNS exfiltration — large volumes of TXT queries to a single suspicious domain.
  • Unauthorized DNS — workloads bypassing internal resolvers to query 8.8.8.8 directly (combine with Flow Logs for full reconstruction).

CloudWatch Logs Cross-Account Delivery

CloudWatch Logs collects OS, application, and Lambda logs. Cross-account delivery to the Security account is the centralized logging pattern for these streams.

Subscription Filters

A subscription filter on a log group sends matching events to one of: Firehose, Kinesis Data Stream, Lambda, or a CloudWatch Logs cross-account destination. Filter patterns use the CloudWatch Logs JSON pattern syntax — { ($.eventName = "ConsoleLogin") && ($.responseElements.ConsoleLogin = "Failure") } forwards only failed console logins.
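As a sketch, the API arguments for attaching that filter look like this. The log group name, filter name, and destination ARN are placeholders, and the boto3 call is commented out.

```python
# Arguments for logs.put_subscription_filter.
filter_args = {
    "logGroupName": "/org/cloudtrail",               # placeholder
    "filterName": "failed-console-logins",           # placeholder
    # JSON filter-pattern syntax: forward only failed console logins.
    "filterPattern": '{ ($.eventName = "ConsoleLogin") && '
                     '($.responseElements.ConsoleLogin = "Failure") }',
    # Cross-account destination in the Security account (placeholder ARN).
    "destinationArn": "arn:aws:logs:us-east-1:222222222222:destination:central",
}
# import boto3
# boto3.client("logs").put_subscription_filter(**filter_args)
```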

Cross-Account Destinations

A CloudWatch Logs destination in the Security account is a named endpoint wrapping a Kinesis stream or Firehose. Member accounts create subscription filters pointing at the destination ARN. The destination's access policy grants logs:PutSubscriptionFilter to specific member account IDs. Forgetting to add a new member account ID to the destination access policy is the canonical "logs from the new account never appear in central pipeline" troubleshooting story.

Log Group Resource Policies — The Forgotten Permission Surface

A log group can have its own resource policy granting other AWS services or accounts the right to write to it. Common example: a Route 53 Resolver query log configuration delivering to a log group in the Networking account — the Resolver service principal needs a log group resource policy permitting logs:CreateLogStream and logs:PutLogEvents. Forgetting to attach the resource policy is a classic SCS-C02 troubleshooting trap — the configuration looks correct, the IAM role looks correct, but the missing resource-policy grant blocks the write.

For services that write to CloudWatch Logs without an explicit IAM role (Route 53 Resolver, EventBridge logging, AWS Network Firewall logging), the log group itself needs a resource policy granting the service principal permission to call CreateLogStream and PutLogEvents. This is separate from any IAM role — many AWS services use a service-principal-direct write pattern. The symptom of a missing resource policy is "feature configured, but log group is empty, no errors visible in console." The fix is either to use the AWS console wizard (which creates the resource policy automatically when you select an existing log group) or to write the resource policy manually with aws logs put-resource-policy. On the SCS-C02 exam, this surface is tested in stems where one specific log stream (often DNS or firewall) is missing while others work.
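A sketch of such a resource policy for the Resolver case, with the account ID and log group name as placeholders:

```python
import json

# Resource policy allowing Route 53 Resolver query logging to write into
# an existing log group. Account ID and log group name are placeholders.
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowResolverQueryLogging",
        "Effect": "Allow",
        "Principal": {"Service": "route53.amazonaws.com"},
        "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": "arn:aws:logs:us-east-1:333333333333:log-group:/dns/queries:*",
    }],
}

policy_document = json.dumps(resource_policy)
# CLI equivalent:
# aws logs put-resource-policy --policy-name resolver-query-logging \
#     --policy-document file://policy.json
```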

Centralized Logging Troubleshooting — The Decision Tree For Missing Logs

This section is the heart of SCS-C02 Task 2.4. Memorize the decision trees.

CloudTrail Not Delivering To S3

Symptoms: trail status shows "Logging" but no objects appear in S3, or the trail status panel shows "Last delivery error: AccessDenied" or "KMSAccessDenied" or similar.

Decision tree:

  1. Read the trail's "Last delivery error" message first. AWS prints the exact reason there. Most engineers skip this and start guessing.
  2. If error is AccessDenied and bucket exists — check the bucket policy. Required statements: s3:GetBucketAcl on the bucket and s3:PutObject on arn:aws:s3:::<bucket>/AWSLogs/<account-id>/* with the cloudtrail.amazonaws.com service principal. Check the aws:SourceArn condition matches the trail ARN exactly.
  3. If bucket policy looks correct — check for an explicit deny in the bucket policy or in any organization SCP applied to the management account. SCPs deny without leaving a friendly message; if you suspect SCP, test with the IAM policy simulator.
  4. If error is KMSAccessDenied — the bucket uses SSE-KMS and the KMS key policy does not allow CloudTrail. Add kms:GenerateDataKey* to the CloudTrail service principal with aws:SourceArn condition pinned to the trail ARN.
  5. If error is throttling-related — usually transient, but a sustained throttle suggests too many trails delivering to the same bucket. Mitigation: use one organization trail rather than many per-account trails.
  6. If trail says Logging but bucket has zero objects and no error — verify the trail is logging to the right bucket (one common bug: a manually edited trail config pointing at a deleted bucket name; CloudTrail does not validate bucket existence at config time).
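The decision tree above can be condensed into a small triage helper. A sketch, not an official tool; the error-string matching reflects the failure signatures described in this section.

```python
def cloudtrail_triage(last_delivery_error, objects_in_bucket):
    """Map the trail's last delivery error to the first surface to
    inspect, following the decision tree above."""
    if last_delivery_error is None:
        if objects_in_bucket:
            return "healthy"
        # Step 6: no error, no objects -> config points at the wrong bucket
        return "verify trail points at the intended, existing bucket"
    if "KMS" in last_delivery_error:
        # Step 4: key policy missing the CloudTrail grant
        return "KMS key policy: allow kms:GenerateDataKey* to cloudtrail.amazonaws.com"
    if "AccessDenied" in last_delivery_error:
        # Steps 2-3: bucket policy statements, then SCPs
        return "bucket policy statements and aws:SourceArn condition; then SCPs"
    if "throttl" in last_delivery_error.lower():
        # Step 5: consolidate delivery
        return "consolidate per-account trails into one organization trail"
    return "unrecognized: read the raw error message"
```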

VPC Flow Logs Not Capturing

Symptoms: Flow Log configuration shows "Active" but Athena returns zero rows, or partial coverage where some ENIs appear and others do not.

Decision tree:

  1. Verify the flow log status — Active vs Failed. Console: VPC → Flow Logs tab. CLI: aws ec2 describe-flow-logs --filter Name=resource-id,Values=<vpc-id>. If status is Failed, read the error message — usually missing IAM role permission or destination not found.
  2. For S3 destination — check the bucket policy. Required: s3:PutObject to delivery.logs.amazonaws.com (the VPC Flow Logs service principal) with aws:SourceAccount condition. Check the bucket KMS policy if SSE-KMS is enabled.
  3. For CloudWatch Logs destination — check the IAM role. The role must have logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents, logs:DescribeLogStreams and trust vpc-flow-logs.amazonaws.com. Forgetting CreateLogStream is the most common partial failure — the log group exists and is empty.
  4. Check the log format alignment. If you switched from default to custom format mid-life, Athena queries against the original Glue table will silently return wrong columns. Re-crawl the Glue table and update the Athena schema.
  5. Check sampling configuration. Flow Logs do not natively sample, but if you placed a Lambda transform or Firehose with random sampling for cost reasons, that explains low row counts.
  6. Check for ENIs not covered. Flow Logs at VPC level cover ENIs that exist when the configuration is created and ENIs created afterward — but ENIs in transit gateway attachments, AWS Lambda Hyperplane ENIs in some configurations, and ENIs in service-linked VPCs may not appear. The ENI inventory in describe-network-interfaces is the ground truth.
  7. Check AWS Health for service issues. Rare but real — VPC Flow Logs has had regional delivery delays; AWS Health dashboard surfaces these.
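Steps 1 through 3 can likewise be sketched as an interpreter over one entry of the describe-flow-logs output. The field names follow the EC2 API response shape; treat the messages as illustrative, not exhaustive.

```python
def flow_log_health(fl: dict) -> str:
    """Interpret one entry from `aws ec2 describe-flow-logs` per the
    decision tree above. A sketch under assumed response field names."""
    if fl.get("FlowLogStatus") != "ACTIVE":
        # Step 1: config-level failure, the error message names the cause
        return "config failed: " + fl.get("DeliverLogsErrorMessage", "no message")
    if (fl.get("LogDestinationType") == "cloud-watch-logs"
            and not fl.get("DeliverLogsPermissionArn")):
        # Step 3: CloudWatch Logs destinations need a delivery role
        return "missing IAM delivery role for CloudWatch Logs destination"
    # Steps 2, 4-6: config is fine, look at the destination side
    return "configuration active: check destination policies and query schema"
```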

CloudWatch Logs Cross-Account Subscription Not Delivering

Symptoms: Member account creates a subscription filter pointing at a Security-account destination, but no log events arrive at the destination.

Decision tree:

  1. Check the destination's access policy. It must list every member account ID that is allowed to call logs:PutSubscriptionFilter against the destination ARN. New accounts must be added explicitly.
  2. Check the IAM role attached to the destination. Destinations wrap a Kinesis Stream or Firehose; the destination's role must allow kinesis:PutRecord or firehose:PutRecordBatch to the wrapped resource.
  3. Check the Kinesis stream or Firehose itself. If Kinesis, check the shard count — at high event rates, ProvisionedThroughputExceededException causes silent drops. If Firehose, check the destination's CloudWatch error logs for delivery errors to S3, OpenSearch, or Splunk.
  4. Check the log group resource policy in the member account. Some setups also require the member account's log group to grant access — usually unnecessary for subscription filters, but required for some advanced patterns.
  5. Validate with a test event. aws logs put-log-events with a synthetic event matching the filter pattern; watch the destination's monitoring metrics for IncomingRecords increasing.

Common AccessDenied Patterns In Cross-Account Log Delivery

The most frustrating SCS-C02 troubleshooting flavor is the cross-account AccessDenied that does not say which permission is missing. Top patterns:

  1. aws:SourceAccount mismatch. A bucket policy condition aws:SourceAccount = 111111111111 rejects deliveries from any other account, even if the bucket policy grants the service principal. Multi-account delivery requires either a list of source accounts in the condition or the aws:SourceOrgID condition key scoped to the organization.
  2. aws:SourceArn mismatch. Tightening confused-deputy protection too aggressively — a bucket policy locking SourceArn to a single trail ARN refuses delivery from any other trail. For multi-trail delivery, list ARNs or use a wildcard pattern.
  3. KMS key policy missing the service principal. Bucket policy looks fine, KMS does not allow encrypt for the writer. The error message in the trail status panel says KMSAccessDenied explicitly.
  4. SCP at the OU level denying s3:PutObject even for service principals. Some compliance SCPs block all S3 writes outside an approved bucket list; if the Log Archive bucket is not on the list, even the service-linked write fails.
  5. VPC endpoint policy denying the call. If S3 access goes through a VPC gateway endpoint with a restrictive policy, the service-linked write may be blocked. Less common but appears in stems with explicit VPC-endpoint mentions.

Always include aws:SourceArn and aws:SourceAccount conditions in cross-account log destination policies. These are the confused-deputy protection conditions — they prevent another customer's CloudTrail trail (or other AWS service) from being weaponized to write to your bucket. Without them, the bucket policy says "any CloudTrail can write here," which is a cross-account vulnerability AWS calls confused-deputy. With them, only your specific trail and your specific account can write. The exam will plant a bucket policy missing these conditions and ask "what is the security risk?" — the answer is confused deputy attack.
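A quick way to audit for this gap is to check each service-principal statement for the two condition keys. A sketch; the example statement is a deliberately risky placeholder.

```python
def missing_confused_deputy_guards(statement: dict) -> list:
    """Return which confused-deputy condition keys a bucket-policy
    statement granting a service principal lacks."""
    conditions = statement.get("Condition", {})
    # Flatten keys across all condition operators (StringEquals, ArnLike...)
    present = {key for clause in conditions.values() for key in clause}
    return [key for key in ("aws:SourceArn", "aws:SourceAccount")
            if key not in present]

# "Any CloudTrail can write here" -- the vulnerable shape from the stem.
risky = {
    "Effect": "Allow",
    "Principal": {"Service": "cloudtrail.amazonaws.com"},
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::example-log-archive/AWSLogs/*",
}
gaps = missing_confused_deputy_guards(risky)   # both keys missing
```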

Log Integrity Verification — Proving Logs Were Not Altered

Centralized logging without integrity guarantees is worth less than no logging — an attacker who can edit logs can rewrite history.

CloudTrail Digest Files

As covered above, CloudTrail's hourly digest files contain SHA-256 hashes of every delivered log file plus the previous digest's hash, all signed with an RSA private key held by AWS. The chain links every log file from trail creation forward; tampering at any point breaks the chain and is detectable by aws cloudtrail validate-logs.

S3 Object Lock For Log Immutability

S3 Object Lock places objects in a WORM (write-once-read-many) state for a defined retention period. Two modes:

  • Governance mode — IAM principals with s3:BypassGovernanceRetention can override the lock. Suitable for internal tamper-resistance, not for regulated workloads.
  • Compliance mode — even the AWS root account cannot delete or shorten retention. Required for SEC 17a-4(f), FINRA 4511, and CFTC 1.31(c)-(d). Irreversible — once an object version is locked in Compliance mode, the mode cannot be downgraded and the retention date cannot be shortened.

For SCS-C02 centralized logging, use Compliance mode on the Log Archive bucket with retention matching the longest regulatory mandate (typically 7 to 10 years).
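The two configuration pieces, sketched as API argument shapes: Object Lock requested at bucket creation, then a Compliance-mode default retention. The bucket name is a placeholder and the boto3 calls are commented out.

```python
# Object Lock must be requested when the bucket is created.
create_bucket_args = {
    "Bucket": "example-log-archive",        # placeholder bucket name
    "ObjectLockEnabledForBucket": True,
}

# Default retention applied to every new object version.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
}

# import boto3; s3 = boto3.client("s3")
# s3.create_bucket(**create_bucket_args)
# s3.put_object_lock_configuration(
#     Bucket=create_bucket_args["Bucket"],
#     ObjectLockConfiguration=object_lock_config)
```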

Object Lock Must Be Enabled At Bucket Creation

Object Lock is a property set when the bucket is created — it cannot be enabled retroactively. The retrofit path: create a new bucket with Object Lock, redirect CloudTrail and other deliveries, then use S3 Batch Operations to copy historical objects with retention applied at copy time, then decommission the old bucket. This is a multi-week migration on busy logging buckets and a known SCS-C02 test point.

Lifecycle Policies Coexist With Object Lock

Compliance mode prevents deletion but does not prevent storage-tier transitions. Lifecycle policies can move locked objects from S3 Standard to Standard-IA to Glacier Instant Retrieval to Glacier Deep Archive on schedule, dropping cost by 95 percent for the long retention tail without breaking the WORM guarantee.
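A sketch of such a lifecycle rule, with illustrative transition days; note the deliberate absence of an Expiration action, since Compliance-mode retention governs deletion:

```python
# Lifecycle rule that tiers locked log objects down over time.
# Transitions coexist with Object Lock: the storage class changes,
# the WORM retention holds.
lifecycle_rule = {
    "ID": "tier-locked-logs",
    "Status": "Enabled",
    "Filter": {"Prefix": "AWSLogs/"},
    "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER_IR"},
        {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
    ],
}
days = [t["Days"] for t in lifecycle_rule["Transitions"]]
assert days == sorted(days)  # transitions must be in ascending order
print(days)
```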

CloudTrail Lake Immutability As An Alternative

CloudTrail Lake (the SQL-queryable event data store) provides its own immutability — events ingested cannot be modified or deleted before retention expiry (up to 10 years). For SQL-forensics use cases, Lake's immutability is enough and simpler than running Object Lock validation. For raw-evidence regulatory requirements like SEC 17a-4, Object Lock on S3 is still required because Lake is not currently a certified WORM store for those regulations. Mature centralized logging runs both — Lake for queries, S3 Object Lock for evidence.

S3 Object Lock Compliance mode is the only AWS storage configuration that survives a compromised root account. Compliance mode means even the AWS root account cannot shorten retention or delete locked objects before expiry — by design, the configuration is irreversible. This is what makes it qualifying storage for SEC 17a-4(f), FINRA 4511, and CFTC regulations. Governance mode is bypass-able by privileged IAM principals and does not satisfy these regulations. On the SCS-C02 exam, any scenario asking "log storage that survives ransomware, insider threat, or root account compromise" is Compliance mode; any answer offering Governance mode for regulated logs is wrong.

Cross-Account Log Delivery Patterns

Centralized logging is fundamentally a cross-account delivery problem. Master the patterns and you master the troubleshooting.

Pattern 1 — Direct Service-Principal Cross-Account Write

The most common: a member account's CloudTrail or VPC Flow Logs writes directly to a Log Archive bucket in another account. The bucket policy grants the service principal (cloudtrail.amazonaws.com, delivery.logs.amazonaws.com) permission to s3:PutObject, with aws:SourceAccount and aws:SourceArn conditions for confused-deputy protection. No IAM role assumption — the service principal writes directly with policy-granted permission.

Pattern 2 — Cross-Account Subscription Filter Via Destination

The CloudWatch Logs cross-account flow above. Member account subscription filter → Security account CloudWatch Logs destination → Security account Kinesis Firehose → S3 or SIEM. The destination's access policy is the cross-account permission surface.
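The destination access policy, the cross-account permission surface, can be sketched as follows; account IDs, destination name, and region are illustrative:

```python
import json

# Member accounts permitted to subscribe to the central destination.
member_accounts = ["111122223333", "444455556666"]
destination_arn = "arn:aws:logs:us-east-1:999999999999:destination:central-logs"

access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": member_accounts},
        "Action": "logs:PutSubscriptionFilter",
        "Resource": destination_arn,
    }],
}
# Applied in the Security account with:
# logs_client.put_destination_policy(
#     destinationName="central-logs",
#     accessPolicy=json.dumps(access_policy))
print(json.dumps(access_policy, indent=2))
```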

Pattern 3 — Cross-Account Role For Pull-Based Aggregation

Some architectures invert the flow — the Security account assumes a role in each member account and reads logs locally. This is used by SIEM connectors that prefer a pull architecture. The trust policy on the cross-account role allows the Security account principal; the role's permission policy allows logs:GetLogEvents, logs:FilterLogEvents, and similar read actions.

Pattern 4 — EventBridge Cross-Account Event Bus

Security findings (Security Hub, GuardDuty) flow via EventBridge. Member account default event bus → cross-account rule targeting the Security account's event bus → SOC tooling consumes from there. EventBridge resource policies on the Security event bus grant events:PutEvents to the member account.
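A sketch of the corresponding event bus policy statement; the IDs and bus name are illustrative:

```python
# Resource policy statement on the Security account's event bus that
# lets one member account forward findings.
bus_policy_statement = {
    "Sid": "AllowMemberPutEvents",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": "events:PutEvents",
    "Resource": "arn:aws:events:us-east-1:999999999999:event-bus/security-bus",
}
print(bus_policy_statement["Action"])
```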

The AccessDenied Bug Inventory

For every pattern, the same AccessDenied bug catalog applies:

  • missing service principal in the resource policy
  • missing aws:SourceAccount or aws:SourceArn condition — absence causes a wide-open allow (security risk), overly tight values cause rejection (delivery failure)
  • KMS key policy missing the service principal
  • log group resource policy missing entirely
  • IAM role trust policy missing vpc-flow-logs.amazonaws.com or the relevant service principal
  • an SCP-level deny at the OU
  • a VPC endpoint policy denying the call

Cost Architecture Of Centralized Logging

Cost matters because excessive logging cost forces engineers to disable streams and create blind spots.

CloudTrail Cost Model

Management events: first copy free on the organization trail; additional copies are $2 per 100,000 events. Data events: $0.10 per 100,000 events. Insights: $0.35 per 100,000 events analyzed. CloudTrail Lake: $2.50/GB ingested plus $0.005/GB scanned. Budget for a security-mature 100-account organization: $5K-20K per month depending on activity volume.
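A quick worked example of the data-event line item, using the $0.10 per 100,000 rate quoted above and an assumed event volume:

```python
# Illustrative: a bucket receiving 50 million S3 object-level events/month.
events_per_month = 50_000_000
rate_per_100k = 0.10

cost = events_per_month / 100_000 * rate_per_100k
print(round(cost, 2))  # → 50.0 (dollars per month for this one bucket)
```

Multiplied across hundreds of busy buckets, this is why data events are opt-in rather than default.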

VPC Flow Logs Cost

Flow log delivery is billed per GB ingested: roughly $0.25/GB to S3 at the first pricing tier and $0.50/GB to CloudWatch Logs. A high-throughput production VPC can produce 100 GB/day. Organization-wide flow logs at enterprise scale can reach $10K-50K/month — typically the largest centralized logging line item.

S3 Storage With Lifecycle

A year of centralized logs from a 50-account organization is 10-50 TB. With lifecycle to Glacier Deep Archive after 180 days, long-term cost drops to about $0.00099/GB/month for archived data. Ten-year Object Lock retention stays under $10K/year for mid-size organizations.
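A back-of-envelope check of the archived-tier figure, assuming 30 TB sitting in Glacier Deep Archive at the quoted rate:

```python
# Illustrative: 30 TB archived at $0.00099/GB/month.
archived_gb = 30 * 1024
rate_per_gb_month = 0.00099

monthly = archived_gb * rate_per_gb_month
yearly = monthly * 12
print(round(monthly, 2), round(yearly, 2))  # ~ $30/month, ~ $365/year
```

Even with Standard-tier storage for the most recent 180 days layered on top, the total stays comfortably under the $10K/year figure for mid-size organizations.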

CloudWatch Logs Cost

CloudWatch Logs: $0.50/GB ingested plus $0.03/GB stored per month. At scale, this is 5-10x more expensive per GB than direct-to-S3. For pure archival, always bypass CloudWatch.

Cost-Driven Blind Spots

A common organizational mistake is disabling VPC Flow Logs in sandbox accounts to save cost. Sandbox is exactly where attackers test lateral movement before pivoting to production. Flow Logs have no native sampling rate, but a reduced-scope configuration — logging only critical subnets, using the 10-minute aggregation interval, and trimming the custom field set — preserves detection capability at a fraction of the cost. Never disable entirely.

SCS-C02 Common Exam Traps For Centralized Logging

Trap 1 — A Trail Without Data Events Is Sufficient For S3 Forensics

Wrong. Management events do not record GetObject. To prove what objects were exfiltrated, data events on the bucket must have been enabled before the incident.

Trap 2 — Log File Validation Detects Real-Time Tampering

Partially wrong. Validation is retroactive — it runs only when an investigator executes aws cloudtrail validate-logs; it does not alert in real time. For real-time tamper alerting, enable CloudTrail data events on the Log Archive bucket and alarm (via a metric filter or EventBridge rule) on DeleteObject and PutObject calls against existing log objects.

Trap 3 — VPC Flow Logs Capture DNS Queries

Wrong. Flow Logs record IP-layer metadata only. DNS queries to the Amazon resolver are explicitly excluded. Use Route 53 Resolver query logs.

Trap 4 — Object Lock Governance Mode Is Tamper-Proof

Wrong. Governance mode is bypass-able by IAM principals with s3:BypassGovernanceRetention. Only Compliance mode prevents all deletion.

Trap 5 — CloudTrail Insights Logs Every API Call

Wrong. Insights only logs anomalous patterns flagged by the model. Standard CloudTrail captures the full event log; Insights is an analytics layer on top.

Trap 6 — Subscription Filter Pattern Syntax Matches CloudWatch Logs Insights Syntax

Wrong. Subscription filter patterns use a specific filter-pattern grammar (terms, JSON paths, metric filter expressions). CloudWatch Logs Insights uses a separate query language. They are not interchangeable.

Trap 7 — Enabling Object Lock On An Existing Bucket

Not possible through configuration — Object Lock must be enabled at bucket creation. The retrofit path is migration to a new Object Lock-enabled bucket.

Trap 8 — Log Group Resource Policy Is Optional For Service Writes

Wrong. Several AWS services (Route 53 Resolver query logging, Network Firewall logging, EventBridge log destinations) write directly via service principal and require a log group resource policy. The IAM-role-only mental model does not apply.

Trap 9 — CloudTrail Captures Console Login Failures

Partial. Console login failures are captured as ConsoleLogin events with responseElements.ConsoleLogin = "Failure". Sign-in and IAM activity are global service events delivered through us-east-1; a single-region trail in any other region will miss them. Use a multi-region trail, which includes global service events.

Trap 10 — Centralized Logging Is Complete With CloudTrail Alone

Wrong. CloudTrail covers control-plane API. VPC Flow Logs, Resolver query logs, WAF logs, ALB access logs, and CloudWatch Logs are separate streams that require separate centralization configuration.

Key Numbers And Must-Memorize Centralized Logging Facts

CloudTrail

  • Organization trail from management account or delegated admin
  • Management events: free first copy. Data events: opt-in, priced
  • Integrity validation produces hourly SHA-256 RSA signed digest files
  • Multi-region trail required to capture IAM events (which flow only in us-east-1)

VPC Flow Logs

  • Three granularities: VPC, subnet, ENI
  • Default format has 14 fields; the custom format exposes 30+ fields — always use custom for security
  • Excludes IMDS, AWS DNS resolver, payload content
  • Status field Failed indicates IAM role or destination misconfiguration

CloudWatch Logs Cross-Account

  • Destination wraps Kinesis or Firehose, lives in Security account
  • Destination access policy lists permitted member account IDs
  • Log group resource policy required for service-principal-direct writes

Route 53 Resolver Query Logging

  • Per-VPC configuration, shareable via RAM
  • Captures every DNS query including private hosted zones
  • Bucket policy uses logs.amazonaws.com service principal even for S3 destination

S3 Object Lock

  • Must be enabled at bucket creation, cannot retrofit
  • Compliance mode irreversible, even root cannot bypass
  • Required for SEC 17a-4(f), FINRA 4511, CFTC 1.31(c)-(d)
  • Coexists with lifecycle transitions to Glacier Deep Archive

Cross-Account Bucket Policy Requirements

  • Service principal allowed
  • aws:SourceAccount and aws:SourceArn conditions for confused-deputy protection
  • KMS key policy granting matching kms:GenerateDataKey* to the same service principal

FAQ — Centralized Logging Top Troubleshooting Questions

Q1 — CloudTrail says "Logging" but no objects appear in S3 for two hours. What do I check first?

Open the trail in the console and read the "Last delivery error" field at the top of the configuration page. AWS prints the exact reason there — AccessDenied, KMSAccessDenied, BucketNotFound, or throttling-related. Most engineers skip this and start guessing at policies; reading the error message saves an hour. If the error is AccessDenied, inspect the bucket policy for the CloudTrail service principal allow with the matching aws:SourceArn. If KMSAccessDenied, inspect the KMS key policy for kms:GenerateDataKey* on the CloudTrail service principal. If BucketNotFound, the bucket was renamed or deleted — recreate or update the trail destination. Only after the error message points at the failing surface should you start reading policies.

Q2 — VPC Flow Logs configuration shows Active but Athena returns zero rows. Where is the breakdown?

Three diagnostic steps. First, verify in the EC2 console that the flow log status is genuinely Active and not Failed (Failed prints the error inline). Second, check the destination bucket directly with aws s3 ls against the expected prefix path — if objects exist in S3 but Athena is empty, the Glue table partition projection or Glue crawler is misconfigured (most common: partitions for the current month not yet crawled, or Hive-style partition path mismatches what AWS writes). Third, if S3 is genuinely empty, check the bucket policy for the delivery.logs.amazonaws.com service principal and the bucket's KMS key policy. If the destination is CloudWatch Logs, check the IAM role for logs:CreateLogStream — partial permission produces empty log streams.
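For step two, it helps to compute the exact prefix flow logs write to before running aws s3 ls. The helper below follows the documented AWSLogs/<account>/vpcflowlogs/<region>/<year>/<month>/<day>/ layout; the account ID is illustrative:

```python
from datetime import date

def flow_log_prefix(account_id: str, region: str, day: date) -> str:
    # S3 key prefix VPC Flow Logs uses when no custom prefix is configured.
    return (f"AWSLogs/{account_id}/vpcflowlogs/{region}/"
            f"{day.year}/{day.month:02d}/{day.day:02d}/")

print(flow_log_prefix("111122223333", "us-east-1", date(2024, 3, 7)))
# → AWSLogs/111122223333/vpcflowlogs/us-east-1/2024/03/07/
```

If your Glue table's partition projection expects a different path shape (for example, non-zero-padded months), Athena will silently return zero rows even though the objects exist.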

Q3 — How do I prove a CloudTrail log file was not altered after delivery?

Run aws cloudtrail validate-logs --trail-arn <trail-arn> --start-time <ISO-timestamp> --end-time <ISO-timestamp>. The CLI fetches the digest files for the time range from the digest prefix in S3, recomputes SHA-256 hashes against the actual log file objects, verifies the RSA signature on each digest using the AWS-published public key, and walks the digest hash chain backward to confirm every digest links to the previous one. Output is per-file pass or fail. For chain-of-custody documentation, capture the CLI output as evidence and include it in the audit package. If validation fails on any file, capture the specific file name printed in the output, snapshot it for forensic investigation, and treat it as a confirmed integrity breach.

Q4 — A Route 53 Resolver query log configuration says it is associated with the VPC but nothing appears in CloudWatch Logs. Why?

The most common cause is the log group resource policy missing the logs.amazonaws.com service principal — Route 53 Resolver writes to CloudWatch Logs via the logs.amazonaws.com service principal, not via an IAM role, and the destination log group must have a resource policy granting logs:CreateLogStream and logs:PutLogEvents. Run aws logs describe-resource-policies to inspect existing policies; if the policy is missing or does not cover the target log group ARN, attach one. The console wizard creates the resource policy automatically when you select an existing log group, so this gap usually appears only when the configuration is automated via Terraform, CloudFormation, or CDK and the resource policy resource was not included.
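A sketch of the resource policy the wizard creates, using the service principal named above; the log group ARN and policy name are illustrative:

```python
import json

log_group_arn = "arn:aws:logs:us-east-1:999999999999:log-group:resolver-queries:*"

resource_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "logs.amazonaws.com"},
        "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": log_group_arn,
    }],
}
# Attached (account-wide, not per log group) with:
# logs_client.put_resource_policy(
#     policyName="resolver-query-logging",
#     policyDocument=json.dumps(resource_policy))
print(json.dumps(resource_policy, indent=2))
```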

Q5 — How do I migrate an existing centralized logging bucket to enable Object Lock without losing chain of custody?

Object Lock cannot be enabled on an existing bucket — it must be set at bucket creation. The migration path: (1) create a new bucket with Object Lock enabled and the desired default retention in Compliance mode, (2) update CloudTrail trail destination, VPC Flow Log destinations, Resolver query log destinations, and any other producers to write to the new bucket, (3) validate new deliveries succeeding to the new bucket for at least 24 hours, (4) use S3 Batch Operations with a manifest of every object in the old bucket to copy to the new bucket — Batch Operations supports applying retention metadata at copy time, preserving the WORM property for migrated history, (5) run CloudTrail log file validation against both old and new locations to confirm no objects were lost or corrupted in transfer, (6) keep the old bucket for at least the legal-hold period plus a buffer, then delete. The full migration takes weeks for busy buckets and is one of the most heavily-tested troubleshooting scenarios on SCS-C02.

Q6 — A new member account joined the organization and its CloudWatch Logs subscription filter is not delivering to the central destination. What did we forget?

The destination's access policy in the Security account. Cross-account CloudWatch Logs destinations have an access policy listing every account ID permitted to call logs:PutSubscriptionFilter against the destination ARN. New accounts must be explicitly added — joining the organization does not auto-grant. Run aws logs describe-destinations --destination-name-prefix <name> to see the current access policy, then aws logs put-destination-policy to add the new account ID. Automate this by sourcing the account list from AWS Organizations and updating the destination policy on every account-creation event, typically via an EventBridge rule on the CreateAccountResult event triggering a Lambda.
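The Lambda's core policy-update logic can be sketched as plain JSON manipulation; account IDs and the destination name are illustrative:

```python
import json

def add_account(policy_json: str, new_account_id: str) -> str:
    # Idempotently add one account ID to the destination access policy.
    policy = json.loads(policy_json)
    stmt = policy["Statement"][0]
    principals = stmt["Principal"]["AWS"]
    if isinstance(principals, str):
        principals = [principals]
    if new_account_id not in principals:
        principals.append(new_account_id)
    stmt["Principal"]["AWS"] = principals
    return json.dumps(policy)

current = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "111122223333"},
        "Action": "logs:PutSubscriptionFilter",
        "Resource": "arn:aws:logs:us-east-1:999999999999:destination:central-logs",
    }],
})
updated = add_account(current, "444455556666")
print(json.loads(updated)["Statement"][0]["Principal"]["AWS"])
# → ['111122223333', '444455556666']
```

The real Lambda would read the current policy via describe-destinations and write it back via put-destination-policy.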

Q7 — How do I detect when CloudTrail logging is disabled or paused on the organization trail?

Three layers of defense. First, AWS Config managed rule cloudtrail-enabled evaluates whether at least one trail is logging in every account; pair with cloudtrail-s3-dataevents-enabled to require data events on sensitive buckets. Second, a CloudWatch metric filter on the trail's own log group looking for StopLogging, DeleteTrail, or UpdateTrail events, with an alarm via SNS to the SOC. Third, a synthetic canary: an EventBridge scheduled rule running every 15 minutes that calls a benign API (sts:GetCallerIdentity) from a known role, and a Lambda checking that the corresponding event appears in CloudTrail Lake within 30 minutes; missing events trigger an alarm. The combination catches deletion, pause, throttling, and bucket-policy breaks within minutes.
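The second layer's metric filter can be sketched as the parameters for a put_metric_filter call; the log group name and metric namespace are illustrative:

```python
# Metric filter matching trail-tampering API calls in CloudTrail's own
# log group; an alarm on the resulting metric pages the SOC via SNS.
metric_filter_params = {
    "logGroupName": "org-cloudtrail",
    "filterName": "trail-tampering",
    "filterPattern": (
        '{ ($.eventName = "StopLogging") || ($.eventName = "DeleteTrail")'
        ' || ($.eventName = "UpdateTrail") }'
    ),
    "metricTransformations": [{
        "metricName": "CloudTrailTampering",
        "metricNamespace": "SecurityOps",
        "metricValue": "1",
    }],
}
# Applied with: logs_client.put_metric_filter(**metric_filter_params)
print(metric_filter_params["filterPattern"])
```

Note this is the subscription/metric filter-pattern grammar, not CloudWatch Logs Insights syntax — exactly the distinction Trap 6 tests.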

Further Reading — Official AWS Documentation For Centralized Logging Troubleshooting

For depth beyond SCS-C02 scope, the authoritative AWS sources are: AWS CloudTrail User Guide (especially the organization trail, log file integrity validation, and bucket-policy sections), VPC User Guide (Flow Logs section including the troubleshooting page), Amazon CloudWatch Logs User Guide (cross-account subscriptions, resource policies, IAM access control), Route 53 Developer Guide (Resolver query logging), Amazon S3 User Guide (Object Lock), and the IAM User Guide (confused deputy problem).

The AWS Security Reference Architecture (SRA) whitepaper codifies the Log Archive / Security account / delegated admin pattern. The AWS Well-Architected Security Pillar provides the conceptual anchors. AWS re:Inforce sessions on detection and response include multiple deep dives on troubleshooting log delivery failures. The AWS Security Blog has a long history of CloudTrail and VPC Flow Logs troubleshooting articles that mirror SCS-C02 stem patterns. Finally, the AWS Knowledge Center is the single best source for "I configured X and it stopped working" answers, indexed by symptom.

Official sources