Why CloudTrail Sits at the Center of SCS-C02
If you skim the SCS-C02 exam guide, AWS CloudTrail shows up explicitly under tasks 2.3, 2.4, and 2.5, and it implicitly powers tasks 1.2 (threat detection), 1.3 (incident response), 4.1 and 4.2 (auth troubleshooting), and 6.1 (multi-account governance). In other words, CloudTrail is the single most cross-cutting service on the entire SCS-C02 blueprint, and the exam writers know it. Almost every "who did what, when, from where" question on SCS-C02 collapses to CloudTrail, and almost every "we have no logs of that activity" trap on SCS-C02 also collapses to a misunderstood CloudTrail configuration.
CloudTrail records API activity across every AWS account. CloudTrail logs are immutable when you protect the destination correctly, CloudTrail logs are queryable through Athena or CloudTrail Lake, and CloudTrail logs are forwardable to EventBridge for real-time response. But CloudTrail also has a long list of footguns that the SCS-C02 exam absolutely loves: data events default OFF, Insights watches only management events, single-region trails miss global-service events, and Organization Trails behave differently from member-account trails. This deep dive walks through every one of those traps, with the same mental model the exam expects.
By the end of this note you should be able to answer, in under thirty seconds, three SCS-C02-style questions: "Why didn't CloudTrail capture this S3 GetObject?", "Why didn't CloudTrail Insights flag this anomaly?", and "Why does the security account see CloudTrail logs from some member accounts but not others?". Each has a sharp, repeatable answer rooted in how CloudTrail is actually wired.
Domain 2 (Logging and Monitoring) is 18% of SCS-C02, and CloudTrail is the dominant service inside it. Add the indirect appearances in Domains 1, 4, and 6 and you will see CloudTrail-shaped questions on roughly a third of the exam. (AWS Certified Security – Specialty Exam Guide)
CloudTrail at a Glance — What It Is and What It Is Not
CloudTrail is the AWS-native API audit log. Every time an IAM principal — a human user, a federated identity, an EC2 role, a Lambda execution role, or even an internal AWS service — calls a public AWS API, CloudTrail records the event. CloudTrail captures who, what, when, where, and how: principal ARN, action name, resource ARN, source IP, user-agent, timestamp, request parameters, response elements, and error codes when the call fails.
CloudTrail is not a network log. CloudTrail does not see data plane traffic on a TCP socket; that is what VPC Flow Logs are for. CloudTrail does not see application logs from inside an EC2 instance; that is what CloudWatch Logs is for. CloudTrail does not perform behavioral analytics on its own; CloudTrail Insights does part of that, and GuardDuty consumes CloudTrail to do the rest. Keeping these boundaries clear avoids most SCS-C02 distractor answers.
Three Things CloudTrail Always Records
CloudTrail always records these for every captured event: the identity that made the call (userIdentity block), the API name (eventName), and the service (eventSource). CloudTrail almost always records the source IP, the user-agent, the AWS region, and the request and response payloads. CloudTrail does not record the body of an S3 object (CloudTrail only sees that GetObject happened, never the bytes), and CloudTrail does not record Lambda function arguments by default unless you opt in to data events.
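As a minimal sketch of that mental model, here is a trimmed, hypothetical CloudTrail record (the field names match the real record schema; every value is invented) reduced to the "who did what, when, from where" summary the exam keeps asking for:

```python
# A trimmed CloudTrail record with hypothetical values. Field names match
# the real schema; the always-present fields the text lists are
# userIdentity, eventName, and eventSource.
record = {
    "eventVersion": "1.09",
    "userIdentity": {
        "type": "AssumedRole",
        "arn": "arn:aws:sts::111122223333:assumed-role/AdminRole/alice",
        "accountId": "111122223333",
    },
    "eventTime": "2024-05-01T12:34:56Z",
    "eventSource": "iam.amazonaws.com",
    "eventName": "CreateUser",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "203.0.113.10",
    "requestParameters": {"userName": "new-analyst"},
    "errorCode": None,  # populated only when the call fails
}

def who_what_when(rec: dict) -> str:
    """Collapse a CloudTrail record to a 'who did what, when' summary."""
    service = rec["eventSource"].split(".")[0]
    return (f"{rec['userIdentity']['arn']} called "
            f"{service}:{rec['eventName']} "
            f"at {rec['eventTime']} from {rec['sourceIPAddress']}")

print(who_what_when(record))
```

Note what is absent: there is no field for the S3 object body or any data payload, which is exactly the boundary the previous paragraph draws.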
Three Things CloudTrail Does Not Record by Default
By default, CloudTrail does not record S3 object-level events, Lambda function-invoke events, or DynamoDB item-level events. These three categories are the canonical CloudTrail data events, and they all share one thing: you must explicitly opt in. This is the single most common SCS-C02 trap, so we will hit it three more times in this note.
CloudTrail Event Types — Management, Data, and Insights
CloudTrail organizes everything it records into three event types, and each has a different cost, default state, and visibility scope. Memorizing the differences is the highest-leverage thirty minutes you can spend on CloudTrail for SCS-C02.
Management Events
Management events are control-plane API calls. Creating an IAM user, attaching a policy, launching an EC2 instance, modifying a security group, putting an S3 bucket policy, rotating a KMS key — all management events. CloudTrail captures management events by default in every AWS account, retains the last 90 days in CloudTrail Event History at no charge, and the first copy of management events delivered to S3 by a trail is also free.
Management events split into two sub-categories that occasionally appear on SCS-C02: Write management events (CreateUser, DeleteRole, PutBucketPolicy) and Read management events (ListBuckets, DescribeInstances, GetBucketAcl). When you create a CloudTrail trail, you can choose Write only, Read only, or both. The default is both.
Data Events
Data events are data-plane API calls against high-volume resources: S3 GetObject and PutObject, Lambda Invoke, DynamoDB GetItem and PutItem, EBS direct APIs, AppConfig configuration retrieval, Step Functions activity polling, and a growing list of others. CloudTrail data events are OFF by default, you pay for them at roughly $0.10 per 100,000 events delivered, and you opt in per resource — per S3 bucket, per Lambda function, or per DynamoDB table — or per resource selector at the trail level.
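The per-resource opt-in is expressed through event selectors. A hedged sketch of an advanced event selector that enables S3 data events for one bucket only (the bucket name is invented; in practice you would pass this list to CloudTrail's PutEventSelectors API, e.g. boto3's `put_event_selectors`, as `AdvancedEventSelectors`):

```python
# Sketch: advanced event selector opting a single bucket's objects into
# S3 data events. Bucket name is a hypothetical placeholder.
advanced_event_selectors = [
    {
        "Name": "S3 data events for the sensitive bucket only",
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Data"]},
            {"Field": "resources.type", "Equals": ["AWS::S3::Object"]},
            {
                "Field": "resources.ARN",
                "StartsWith": ["arn:aws:s3:::example-sensitive-bucket/"],
            },
        ],
    }
]
```

The `eventCategory = Data` selector is what separates this from the default-on management events; without a selector like it, no trail records GetObject at all.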
A typical SCS-C02 question describes an S3 bucket with sensitive data, an attacker who exfiltrated objects, and a security team that "checked CloudTrail and found nothing". The correct answer is almost always that the bucket did not have S3 data events enabled on any CloudTrail trail. Default-on management events would not have captured the GetObject calls. (CloudTrail data events)
Insights Events
CloudTrail Insights events are machine-learning anomaly detections. Insights baselines the normal rate of management API calls in your account over a rolling seven-day window, then fires an Insights event when the call rate deviates significantly. Classic example: an attacker who lands a key and runs a RunInstances storm — Insights flags the spike against your baseline.
Insights only operates on Write management events by default; you can also enable Insights on API error rates. Insights does not analyze data events. This is the second most-tested CloudTrail fact on SCS-C02, and it is paired with the previous one in the most evil distractor pattern: "We turned on CloudTrail Insights, why didn't it catch the S3 exfiltration?". The answer: because Insights does not look at data events, and S3 GetObject is a data event.
- Management events: control plane, default ON, first copy free, 90-day Event History, includes Read or Write or both
- Data events: data plane (S3 / Lambda / DynamoDB and others), default OFF, paid, opt-in per resource
- Insights events: ML anomaly detection on management Write events and error rates only — NEVER on data events
If a question asks about S3 object access, Lambda invokes, or DynamoDB item access, the answer involves data events. If a question asks about a sudden spike in API rate, the answer involves Insights (but only for management events). (CloudTrail concepts)
CloudTrail Trails vs Event History vs CloudTrail Lake
CloudTrail surfaces data through three different stores, and SCS-C02 will probe whether you know which to use when.
Event History — Always-On, 90 Days, Free
Every AWS account gets a free CloudTrail Event History view in the console, automatically populated with the last 90 days of management events. Event History is great for ad-hoc lookup ("who deleted that role yesterday?") but it is not durable beyond 90 days, not exportable in bulk, not queryable with SQL, and not aggregatable across accounts. For anything compliance-grade, you need a CloudTrail trail.
Trails — Durable Delivery to S3 (and optionally CloudWatch Logs)
A CloudTrail trail is a configuration that says "deliver these CloudTrail events continuously to this S3 bucket, optionally also to this CloudWatch Logs group, and optionally encrypt with this KMS key". Trails are how you achieve durable, long-retention CloudTrail logs that you can hand to auditors years later.
A trail belongs to either a single region or all regions, and a trail can be either an account-level trail or an Organization Trail. Each trail you create can capture management events, data events, Insights events, or any mix. The first copy of management events sent through a trail is free; subsequent copies (a second trail in the same account capturing the same events) are billed.
CloudTrail Lake — SQL-Queryable Event Store
CloudTrail Lake is a managed, immutable event data store that you query with SQL. CloudTrail Lake stores events for up to seven years, supports federated queries across multiple accounts when configured at the Organization level, and accepts non-CloudTrail event sources too — Audit Manager evidence, AWS Config configuration items, and arbitrary custom events you push via the PutAuditEvents API.
CloudTrail Lake replaces the older "CloudTrail to S3, then Glue, then Athena" pattern for many use cases. CloudTrail Lake is more expensive per ingested event than plain trail-to-S3 delivery, but you skip building the Athena schema, you skip partition management, and you get out-of-the-box SQL queries. SCS-C02 typically asks "which option requires the least operational overhead for ad-hoc, federated security queries across accounts?" — that is CloudTrail Lake.
- Immutable event data store — once written, events cannot be modified or deleted before retention expires
- SQL queries — Trino-style SQL directly against the store, no Glue catalog, no Athena workgroup
- Up to 7-year retention — configurable per data store, much longer than Event History's 90 days
- Multi-source ingestion — CloudTrail, AWS Config, Audit Manager, and custom events via PutAuditEvents (CloudTrail Lake user guide)
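A Lake query is ordinary Trino-style SQL with the event data store ID standing in for the table name. A sketch (the event data store ID below is a placeholder you would substitute with your own; in practice the statement is submitted via the CloudTrail Lake query console or the StartQuery API):

```python
# Hypothetical event data store ID; substitute your own at query time.
event_data_store_id = "EXAMPLE1-2345-6789-abcd-ef0123456789"

# Trino-style SQL against the event data store, per the text above.
lake_query = f"""
SELECT eventTime, userIdentity.arn, eventName, awsRegion
FROM {event_data_store_id}
WHERE eventName = 'ConsoleLogin'
  AND eventTime > '2024-01-01 00:00:00'
ORDER BY eventTime DESC
"""
```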
Single-Region vs Multi-Region Trails — The Global Services Trap
When you create a CloudTrail trail in the console today, the default checkbox says "apply trail to all regions" — that is, multi-region. You can override this and create a single-region trail via the API or CLI, but you usually shouldn't.
A multi-region CloudTrail trail captures events from every AWS region, including future regions you have not opted into yet, with a single trail definition. A multi-region CloudTrail trail also captures global-service events — IAM, STS (excluding regional STS endpoints), CloudFront, Route 53 control plane, AWS Organizations, AWS Support, and AWS Account-level APIs. Global-service events are emitted in us-east-1 regardless of where the principal called from, and only multi-region trails (or single-region trails specifically in us-east-1 with the IncludeGlobalServiceEvents flag) record them.
SCS-C02 question pattern: "An auditor cannot find any IAM CreateUser events in your CloudTrail logs even though you confirmed the activity occurred." The trail is single-region, set to a region other than us-east-1, and global-service events are not being delivered. Fix: create a multi-region trail or move the trail to us-east-1 with IncludeGlobalServiceEvents=true.
Receiving CloudTrail log files from multiple regions
There is one more multi-region nuance worth memorizing: a multi-region trail captures events from a region the moment that region is enabled in your account. Single-region trails do not auto-extend. For SCS-C02, anytime you see "future-proof" or "any region the team enables" in the question stem, the answer involves a multi-region CloudTrail trail.
Organization Trail — One Trail to Rule Them All
AWS Organizations lets you create a CloudTrail Organization Trail from the management account (or a delegated administrator account for CloudTrail). An Organization Trail captures events from every member account in the organization, including accounts created after the trail was configured, and delivers them to a single S3 bucket — typically in a dedicated log-archive security account.
Why Organization Trails Are the Default Recommendation
Organization Trails solve four real-world problems that a fleet of per-account trails do not:
First, member accounts cannot turn off, modify, or delete the Organization Trail from inside their own account. Only the management account or the delegated CloudTrail administrator can. This blocks a compromised IAM principal in a member account from disabling logging.
Second, future member accounts inherit the Organization Trail automatically. You do not need to remember to onboard logging when a new account is provisioned through Control Tower or CreateAccount.
Third, all CloudTrail logs land in a single bucket with a uniform prefix structure (AWSLogs/<org-id>/<account-id>/CloudTrail/<region>/...), making cross-account analysis with Athena or CloudTrail Lake trivial.
Fourth, Organization Trails are eligible for the "first copy free" management-event pricing tier across the organization, not just per account.
Whenever the question stem mentions "AWS Organizations", "multi-account", "centralize", "tamper-resistant", or "future accounts must be covered", the answer almost always includes an Organization Trail delivering to a dedicated log-archive account, not a per-account trail in each account. (Creating a trail for an organization)
The Delegated Administrator Pattern
For SCS-C02, you should know that CloudTrail supports delegated administration through AWS Organizations. The management account can register a member account (typically the dedicated security tooling account) as the delegated CloudTrail administrator. The delegated admin can then create, modify, and manage Organization Trails and CloudTrail Lake event data stores on behalf of the entire organization, without granting that account general management-account privileges. This pattern aligns directly with the AWS Security Reference Architecture (SRA) recommendation of separating the management account from day-to-day security operations.
Cross-Account Log Delivery — The Bucket Policy Pattern
When CloudTrail in account A delivers logs to a bucket in account B, the bucket in account B must have a policy that allows the CloudTrail service principal to write objects. AWS auto-generates this policy when you create the trail through the console, but you should know the structure for SCS-C02 troubleshooting questions.
The bucket policy must allow cloudtrail.amazonaws.com to perform s3:PutObject and s3:GetBucketAcl against the bucket and prefix. The policy must include a Condition block with aws:SourceArn matching the trail ARN and aws:SourceAccount matching the account ID owning the trail. This Condition block is the confused-deputy mitigation — without it, any other AWS account could trick the CloudTrail service to write to your bucket on their behalf, exfiltrating their CloudTrail logs into your storage and consuming your budget.
On the exam, look for these four pieces in the policy: (1) Service: cloudtrail.amazonaws.com as principal, (2) s3:PutObject and s3:GetBucketAcl as actions, (3) aws:SourceArn matching the trail ARN, (4) aws:SourceAccount matching the trail-owning account. If any one of those is missing or wrong, log delivery silently fails.
CloudTrail cross-account S3 bucket policy
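A sketch of those four pieces assembled into a bucket policy (the bucket name, account ID, and trail ARN are hypothetical placeholders; console-generated policies also require the bucket-owner-full-control ACL on delivery, shown in the second statement):

```python
import json

# Hypothetical placeholders for the delivery bucket and trail owner.
BUCKET = "example-org-cloudtrail-logs"
TRAIL_OWNER = "111122223333"
TRAIL_ARN = f"arn:aws:cloudtrail:us-east-1:{TRAIL_OWNER}:trail/org-trail"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # CloudTrail checks the bucket ACL before delivering logs
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
            "Condition": {"StringEquals": {"aws:SourceArn": TRAIL_ARN}},
        },
        {   # The actual log delivery, locked to this trail and account
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{TRAIL_OWNER}/*",
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": "bucket-owner-full-control",
                    "aws:SourceArn": TRAIL_ARN,       # confused-deputy guard
                    "aws:SourceAccount": TRAIL_OWNER, # confused-deputy guard
                }
            },
        },
    ],
}
print(json.dumps(bucket_policy, indent=2))
```

Delete either condition key from the second statement and delivery may still work, but any account could then point its trail at your bucket — the silent regression described below.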
For Organization Trails, the bucket policy must additionally allow the organization's CloudTrail service-linked role and reference aws:SourceOrgID (or aws:PrincipalOrgID) for tighter scoping. AWS again auto-generates a correct policy through the console; the most common SCS-C02 failure mode is someone editing the policy by hand and removing the aws:SourceArn condition, breaking confused-deputy protection without breaking log delivery — a silent regression.
CloudTrail KMS Encryption — At-Rest Defense
CloudTrail logs are encrypted at rest by default with SSE-S3. For most SCS-C02 questions about "encrypt CloudTrail with a customer-managed key" or "ensure only authorized users can read CloudTrail logs", the answer is SSE-KMS with a customer-managed key (CMK).
Enabling KMS encryption on a CloudTrail trail requires three policy edits:
First, the KMS key policy must allow the CloudTrail service principal (cloudtrail.amazonaws.com) to call GenerateDataKey* and DescribeKey. Without this, CloudTrail cannot encrypt new log files and delivery silently fails.
Second, the KMS key policy must allow the IAM principals or roles that need to read CloudTrail logs to call Decrypt. Without this, your security analysts can list the objects in the S3 bucket but cannot decrypt them.
Third, the S3 bucket policy on the destination bucket must permit CloudTrail to write KMS-encrypted objects (typically via s3:x-amz-server-side-encryption and s3:x-amz-server-side-encryption-aws-kms-key-id conditions).
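The first two edits can be sketched as KMS key-policy statements (the account ID and role name are hypothetical placeholders; real console-generated key policies also scope these statements with encryption-context conditions, omitted here for brevity):

```python
# Statement 1: let CloudTrail encrypt new log files.
cloudtrail_encrypt = {
    "Sid": "AllowCloudTrailToEncryptLogs",
    "Effect": "Allow",
    "Principal": {"Service": "cloudtrail.amazonaws.com"},
    "Action": ["kms:GenerateDataKey*", "kms:DescribeKey"],
    "Resource": "*",  # in a key policy, "*" means this key itself
}

# Statement 2: let the analysts actually read what lands in S3.
analyst_decrypt = {
    "Sid": "AllowSecurityAnalystsToDecryptLogs",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/SecurityAnalyst"},
    "Action": "kms:Decrypt",
    "Resource": "*",
}
```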
SCS-C02 questions about "regulatory requirement to encrypt all audit logs with a customer-managed key" or "the data protection officer wants to revoke CloudTrail log access by deleting a key" both point at SSE-KMS on the CloudTrail destination bucket, with a KMS CMK whose policy you control. Disable the key and the logs become unreadable until it is re-enabled; delete the key and they become permanently unreadable. (Encrypting CloudTrail log files with AWS KMS managed keys)
For multi-region CloudTrail trails encrypted with KMS, a single customer-managed key is enough: CloudTrail encrypts log files from every region against that one key, so there is no need to provision a separate key per region, and KMS multi-region keys are not required. The key policy simply has to allow the CloudTrail service principal to use the key.
Log File Integrity Validation — Tamper Detection
Even with KMS encryption, an attacker with write access to the destination bucket could still delete or replace CloudTrail log files. CloudTrail's log file integrity validation feature defends against this by writing digest files, which are cryptographically signed manifests of the CloudTrail log files delivered each hour.
How Digest Files Chain
Every hour, CloudTrail writes a digest file to the destination bucket alongside the regular log files. Each digest file lists the SHA-256 hash of every CloudTrail log file delivered in that hour, plus the hash of the previous hour's digest file. This chain means tampering with any single hour's logs invalidates every subsequent digest. Digest files are signed by AWS using an asymmetric key, so even if an attacker has full write access to the bucket, the attacker cannot forge a valid digest without AWS's private key.
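The chaining idea can be modeled in a few lines. This is a toy sketch only: real digest files carry much more metadata and an AWS asymmetric signature, both omitted here.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_digest(log_files: list[bytes], prev_digest_hash: str) -> dict:
    """Toy digest: hashes of this hour's log files plus the previous
    digest's hash. (Real digests are also signed by AWS.)"""
    return {
        "logFileHashes": [sha256(f) for f in log_files],
        "previousDigestHash": prev_digest_hash,
    }

hour1 = build_digest([b"log-a", b"log-b"], prev_digest_hash="")
hour2 = build_digest([b"log-c"],
                     prev_digest_hash=sha256(repr(hour1).encode()))

# Tampering with an hour-1 log file breaks hour-1's recorded hashes,
# which in turn invalidates the previousDigestHash recorded in hour-2 —
# the property the text describes.
assert sha256(b"log-a-tampered") != hour1["logFileHashes"][0]
```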
Validating Logs in Practice
The AWS CLI ships with aws cloudtrail validate-logs --trail-arn <arn> --start-time <iso-8601>. The command walks the digest chain backwards from the most recent digest, verifies the AWS signature on each digest, and verifies that every log file referenced in each digest still has the expected SHA-256 hash. Mismatches mean tampering, missing files, or — most often on SCS-C02 — a misconfigured S3 lifecycle policy that expired logs while their corresponding digests still expected them.
Log file integrity validation is detective. S3 Object Lock in compliance mode is preventive. The SCS-C02 best-practice answer for "tamper-evident, tamper-proof CloudTrail logs" pairs both: validate the digest chain to detect any deviation, and Object Lock to prevent deletion or overwrite in the first place. Glacier Vault Lock can play a similar preventive role for archival CloudTrail logs. CloudTrail log file integrity validation
CloudTrail Insights — Anomaly Detection on Management Events
CloudTrail Insights is the built-in machine-learning anomaly detector that ships with CloudTrail. Once enabled on a trail, Insights baselines your normal management-event call rate over a seven-day rolling window per API per region per account, then writes an Insights event to the destination when the observed rate deviates significantly from the baseline.
What Insights Watches
Two metric types: API call rate (RunInstances per minute, AssumeRole per minute, etc.) and API error rate (the proportion of calls returning an error). The first catches credential-stuffing-style automation that hammers an API. The second catches reconnaissance scanning that probes many APIs the principal does not have permission for, generating an error-rate spike.
What Insights Does Not Watch
Insights does not analyze data events, ever. Insights does not analyze Read management events for call-rate anomalies (only Write); it does analyze errors across both Read and Write. Insights also does not work cross-account natively — each trail is its own baseline. For organization-wide anomaly detection on data events, you need Amazon GuardDuty with the S3 Protection or Lambda Protection feature enabled.
A pattern SCS-C02 uses: "An attacker stole credentials and exfiltrated data quietly over weeks." CloudTrail Insights does not catch this — there is no spike in API rate and no error spike. The right detection is GuardDuty UnauthorizedAccess:IAMUser/AnomalousBehavior paired with S3 data events for forensic detail, not Insights.
CloudTrail Insights events
CloudTrail to EventBridge — Real-Time Response
CloudTrail integrates with Amazon EventBridge so that every CloudTrail event becomes an EventBridge event you can match with rules. This is the foundation of every "auto-remediate when X happens" pattern on SCS-C02.
The integration has a few surprises:
Management events appear in EventBridge in near real time, noticeably faster than the roughly fifteen-minute delivery to S3. Read-only management events (Describe, List, Get) are generally not delivered to EventBridge at all. Data events are likewise not delivered to EventBridge by CloudTrail; for real-time reaction to S3 object-level activity, use S3 Event Notifications to EventBridge instead. Insights events arrive under their own detail-type, AWS Insight via CloudTrail, rather than as ordinary AWS API Call via CloudTrail events.
Common EventBridge rule patterns for CloudTrail include matching on eventSource: iam.amazonaws.com and eventName: CreateUser to alert on new IAM users, matching on errorCode: AccessDenied for access-denied storms, and matching on userIdentity.type: Root to alert on any root-account activity. Each rule can target Lambda for remediation, SNS for paging, Step Functions for orchestrated runbooks, or Security Hub for finding aggregation.
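The root-activity rule, for example, can be sketched as an EventBridge event pattern (a sketch only: the detail-types shown are the standard CloudTrail ones, and the rule name and targets are omitted; you would pass this as the event pattern of a `put_rule` call):

```python
import json

# Event pattern matching any root-account activity arriving via
# CloudTrail, whether an API call or a console sign-in.
root_activity_pattern = {
    "detail-type": [
        "AWS API Call via CloudTrail",
        "AWS Console Sign In via CloudTrail",
    ],
    "detail": {
        "userIdentity": {"type": ["Root"]},
    },
}
print(json.dumps(root_activity_pattern, indent=2))
```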
CloudTrail to CloudWatch Logs — Live Stream and Metric Filters
A CloudTrail trail can also deliver events to a CloudWatch Logs log group as they happen, in addition to the S3 destination. This is how you power CloudWatch metric filters and CloudWatch alarms on CloudTrail content.
Classic SCS-C02 patterns built on CloudTrail-to-CloudWatch-Logs:
- A metric filter for `{ ($.eventName = "ConsoleLogin") && ($.errorMessage = "Failed authentication") }` feeding a CloudWatch alarm that pages on three failures in five minutes.
- A metric filter for `{ $.userIdentity.type = "Root" }` feeding an alarm that fires on any root use.
- A metric filter for `{ ($.eventName = "StopLogging") || ($.eventName = "DeleteTrail") || ($.eventName = "UpdateTrail") }` to detect anyone trying to disable CloudTrail.
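The third filter above can be sketched as the keyword arguments you might hand to CloudWatch Logs' PutMetricFilter API (e.g. boto3's `put_metric_filter`); the log group and metric namespace names are hypothetical:

```python
# Sketch of a metric filter detecting CloudTrail tampering attempts.
# Log group and namespace are invented placeholders.
tamper_filter = {
    "logGroupName": "/cloudtrail/org-trail",
    "filterName": "CloudTrailTampering",
    "filterPattern": (
        '{ ($.eventName = "StopLogging") || ($.eventName = "DeleteTrail")'
        ' || ($.eventName = "UpdateTrail") }'
    ),
    "metricTransformations": [{
        "metricName": "CloudTrailTamperingCount",
        "metricNamespace": "SecurityBaseline",
        "metricValue": "1",  # each matching log line counts as 1
    }],
}
```

A CloudWatch alarm on `CloudTrailTamperingCount` with a threshold of 1 completes the pattern.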
The trade-off versus EventBridge: CloudWatch Logs costs include ingestion plus retention storage, and metric filters operate on log lines after ingestion (slightly slower than EventBridge), but metric filters give you alarms that integrate with the CloudWatch dashboarding model and historical metric-based reporting. EventBridge gives you faster, lower-overhead reactive automation but no historical metric.
CloudTrail and Athena — Long-Tail Forensics
CloudTrail logs in S3 are JSON, gzip-compressed, partitioned by region and date. Pointing Amazon Athena at the bucket gives you SQL across years of CloudTrail history. The console even has a one-click "Create table for CloudTrail logs in Athena" wizard that builds the right Glue table with partition projection.
CloudTrail-to-Athena is the answer when SCS-C02 asks "which is the most cost-effective way to query several years of CloudTrail logs across many accounts?". CloudTrail Lake is the answer when "least operational overhead" is the qualifier; Athena wins when "lowest cost" is the qualifier, especially if you are already storing logs in S3 anyway. Athena bills only when you query (per terabyte scanned), while CloudTrail Lake bills per ingested event.
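For a concrete feel, here is the kind of Athena query involved, assuming the wizard-created table is named `cloudtrail_logs` (table and column names follow the wizard's lower-cased schema; adjust to whatever your Glue table actually defines):

```python
# Forensic query: all AccessDenied errors since 2023, newest first.
# "cloudtrail_logs" is whatever table the console wizard created.
athena_query = """
SELECT eventtime, useridentity.arn, eventname, errorcode
FROM cloudtrail_logs
WHERE errorcode = 'AccessDenied'
  AND eventtime > '2023-01-01T00:00:00Z'
ORDER BY eventtime DESC
LIMIT 100
"""
```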
Common SCS-C02 Trap Patterns — Speed-Run
Here is the rapid-fire list of CloudTrail traps SCS-C02 has been observed using, distilled to one paragraph each.
Trap 1: Data events default OFF. "Why didn't CloudTrail catch the S3 exfiltration?" Because data events are off by default; the team must enable S3 data events on the trail per bucket or with a resource selector.
Trap 2: Insights only watches management events. "Why didn't Insights flag the data-exfiltration anomaly?" Because Insights ignores data events; for data-plane anomaly detection use GuardDuty S3 Protection.
Trap 3: Single-region trail in non-us-east-1 misses global services. "Where are my IAM CreateUser events?" In us-east-1. Fix: multi-region trail.
Trap 4: Future regions are not covered by single-region trails. "Why are events from the new ap-southeast-4 region missing?" Because a single-region trail does not extend; multi-region trails do.
Trap 5: Member accounts can disable per-account trails but cannot disable Organization Trails. "How do we make sure a compromised member account cannot disable logging?" Use an Organization Trail managed from the delegated CloudTrail admin account.
Trap 6: Confused-deputy on cross-account bucket delivery. "Why are account X's CloudTrail logs landing in our bucket?" Because the bucket policy is missing aws:SourceArn and aws:SourceAccount conditions; CloudTrail's service principal is too broadly trusted without them.
Trap 7: KMS key deletion silently breaks CloudTrail. "Why did all our recent CloudTrail logs become unreadable?" Because the KMS CMK used for SSE-KMS on the destination bucket was scheduled for deletion. CloudTrail logs encrypted with that key are now permanently lost.
Trap 8: Lifecycle policy expired logs but digest file expects them. "Why does validate-logs fail with 'log file missing'?" Because S3 lifecycle expired the log file but not the digest. Match retention between digest and log files, or extend retention.
Trap 9: CloudTrail Lake event data store has its own retention. "Why are events older than 90 days missing from CloudTrail Lake?" Because the event data store retention was left at the default. Extend retention up to seven years.
Trap 10: First copy free, second copy paid. "Why did our CloudTrail bill double?" Likely because you have two trails capturing the same management events. Only the first delivery copy is free.
SCS-C02 questions reuse these patterns with surface-level changes. If you can recognize the underlying mechanism in under thirty seconds, you will save real exam time. (CloudTrail troubleshooting)
Production Architecture Pattern — The Reference Layout
The widely-recommended SCS-C02-grade CloudTrail architecture, drawn from AWS Security Reference Architecture and the Logging in AWS whitepaper, looks like this:
- AWS Organizations is enabled with a dedicated `log-archive` account and a dedicated `security-tooling` account, both under a `Security` OU.
- CloudTrail is enabled at the Organization level via the management account, with the `security-tooling` account registered as the delegated CloudTrail administrator.
- A single multi-region Organization Trail captures all management events and selected data events (typically S3 and Lambda for the buckets and functions tagged `Sensitivity=High`).
- The Organization Trail delivers to an S3 bucket in the `log-archive` account, with SSE-KMS using a CMK in the `log-archive` account, S3 Object Lock in compliance mode for one-year retention, and S3 Versioning enabled.
- Log file integrity validation is on; digest files chain through the same bucket.
- A second copy of the trail also flows to a CloudWatch Logs group in `security-tooling` for metric filters on root login, ConsoleLogin failures, and CloudTrail tampering attempts.
- CloudTrail Lake is configured at the organization level for federated SQL queries, with a seven-year retention event data store.
- EventBridge rules in `security-tooling` match high-severity CloudTrail events (root use, IAM CreateUser, security-group wide-open changes) and forward to Lambda runbooks and Security Hub.
- SCPs in the management account `Deny` `cloudtrail:StopLogging`, `cloudtrail:DeleteTrail`, `cloudtrail:UpdateTrail`, `cloudtrail:PutEventSelectors`, and similar APIs across the entire organization, even for member-account admins.
This layered design is the canonical answer to any "which architecture provides the best assurance of complete, tamper-evident, tamper-resistant audit logging across the organization?" question on SCS-C02.
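The SCP guardrail from the layout above might be sketched like this (the break-glass role ARN is a hypothetical placeholder; real SCPs often carry additional exemptions):

```python
import json

# Organization-wide deny on CloudTrail tampering APIs, with a single
# hypothetical break-glass role exempted via aws:PrincipalArn.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ProtectCloudTrail",
        "Effect": "Deny",
        "Action": [
            "cloudtrail:StopLogging",
            "cloudtrail:DeleteTrail",
            "cloudtrail:UpdateTrail",
            "cloudtrail:PutEventSelectors",
        ],
        "Resource": "*",
        "Condition": {
            "ArnNotLike": {
                "aws:PrincipalArn": "arn:aws:iam::*:role/BreakGlassAdmin"
            }
        },
    }],
}
print(json.dumps(scp, indent=2))
```

Because SCPs apply even to member-account administrators, this deny holds regardless of what IAM policies a compromised principal attaches to itself.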
Frequently Asked Questions
Why does CloudTrail show the API call but not the data inside the request?
CloudTrail records request parameters (the metadata of the call) but not the data payload of services like S3 object bodies, DynamoDB item attributes for data events, or Lambda invoke event objects (those are recorded only as data events when explicitly enabled, and even then CloudTrail captures the API metadata rather than the full payload). For object bytes you need server access logs from S3 or application-level logging. For SCS-C02 purposes, "CloudTrail tells you the call happened, not what data moved through it" is the right mental model. See the CloudTrail record contents reference for the exact fields captured.
Can CloudTrail capture cross-region API calls in a single-region trail?
A single-region trail captures only events for the region you specified, plus optionally global-service events if (and only if) the trail is in us-east-1. If a principal in your account calls an API in a region your single-region trail does not cover, that call is invisible to that trail. Multi-region trails are the safe default. The multi-region trail documentation goes deeper.
What happens when a member account leaves the AWS Organization while an Organization Trail is active?
CloudTrail stops capturing events for that account at the moment it leaves the organization. The historical CloudTrail logs already delivered remain in the log-archive bucket, but new activity from the now-standalone account is not recorded by the (now ex-)Organization Trail. The standalone account would need to set up its own per-account CloudTrail trail to keep logging. See updating an Organization Trail.
Are CloudTrail Lake event data stores billed for storage even after I stop ingesting?
Yes. CloudTrail Lake bills per event ingested at write time, and additionally for retained storage on a monthly basis until the configured retention expires. Switching off ingestion does not stop storage charges; only deleting the event data store, or letting retention expire, ends the storage bill. See CloudTrail pricing for the current numbers.
Should I send CloudTrail to S3, to CloudWatch Logs, or to both?
Both, almost always, when budget allows. S3 gives you durable, cheap, long-retention storage that Athena and CloudTrail Lake can query. CloudWatch Logs gives you live metric filters, alarms, and Logs Insights queries on a shorter horizon. The dual-destination pattern is exactly what the Logging in AWS whitepaper recommends, and it is the answer SCS-C02 expects when you see "real-time alerting" and "long-term forensic search" in the same question.
How do I detect someone trying to disable CloudTrail itself?
Three layers: (1) an SCP at the Organization root that Denys cloudtrail:StopLogging, cloudtrail:DeleteTrail, and related APIs to everyone except a narrow break-glass role; (2) a CloudWatch metric filter on eventName = StopLogging || DeleteTrail || UpdateTrail feeding an alarm that pages immediately; (3) an EventBridge rule on the same patterns triggering Lambda auto-remediation that re-enables logging. SCS-C02 frequently expects the SCP-plus-detection layered answer, not just one of them. See CloudTrail security best practices.
Does CloudTrail capture failed API calls or only successful ones?
CloudTrail captures both. Failed calls are recorded with the errorCode and errorMessage fields populated. This is critical for SCS-C02 questions about reconnaissance detection, IAM denied calls, and least-privilege troubleshooting — errorCode = AccessDenied is one of the highest-signal fields in CloudTrail forensics. The CloudTrail event record reference lists every field.