
Compute Workload Security with Inspector and SSM

4,900 words · ≈ 25 min read

Why Compute Workload Security Matters for SCS-C02

Compute workload security is the heart of AWS Certified Security – Specialty Domain 3 Task 3.3, and it is where the SCS-C02 exam expects you to combine vulnerability management, patching, hardening, and credential delivery into a single coherent story. Compute workload security on AWS is not a single service — it is a discipline that pulls together Amazon Inspector, AWS Systems Manager, EC2 Image Builder, IAM roles, IMDSv2, AWS Secrets Manager, and AWS Systems Manager Parameter Store. The exam will rarely ask "what does Inspector do?"; it will ask "given a fleet of EC2 instances and ECR images, how do you continuously scan, prioritise, and remediate?" That is compute workload security in action. This topic builds the muscle memory you need so that any compute workload security scenario on the exam triggers the right service combination automatically.

To pass questions on compute workload security you need to internalise four mental models: continuous scanning (Inspector v2), declarative patching (Patch Manager), immutable hardening (Image Builder golden AMIs), and least-privilege credential delivery (instance roles, IMDSv2, Secrets Manager). Compute workload security on the SCS-C02 also bleeds into Domain 1 (incident response automation when Inspector raises a critical finding) and Domain 2 (Patch Compliance shipped to Security Hub). Treat compute workload security as the connective tissue between detection, governance, and remediation, not as a checkbox in a single service console. Every section that follows reinforces compute workload security from a different angle.

Compute workload security is tested heavily on SCS-C02 because Domain 3 is 20% of the scored items, and Task 3.3 alone covers patching, scanning, hardening, IAM roles, host-based firewalls, and secret delivery — six skill bullets in one task. Expect at least 4 to 6 scenario questions on compute workload security in any single exam attempt.

Amazon Inspector v2 Continuous Scanning Architecture

Amazon Inspector v2 is the cornerstone of compute workload security on AWS. Inspector v2 is a wholesale rewrite of the original Inspector — the v2 engine moves away from agent-based assessment runs and toward continuous, account-wide scanning of three workload types: Amazon EC2 instances, container images in Amazon ECR, and AWS Lambda functions (including Lambda layers). Compute workload security questions on the SCS-C02 almost always assume Inspector v2 unless explicitly stated otherwise; if a question references "Inspector classic" it is usually a distractor.

Inspector v2 continuously evaluates workloads against a CVE database curated from upstream sources such as the National Vulnerability Database, vendor security advisories, and AWS-internal feeds. For EC2, Inspector v2 uses the AWS Systems Manager (SSM) agent to gather a software bill of materials (SBOM); the SSM agent must be installed, running, and have network reachability to the SSM endpoints (typically via VPC endpoints in private subnets). Inspector v2 then correlates installed packages against known CVEs and produces findings with severity scores. For ECR, Inspector v2 scans on push and continuously rescans for newly disclosed CVEs. For Lambda, Inspector scans the deployed package and any associated layers.

Software bill of materials (SBOM): A formal inventory of every software component, library, and dependency that ships inside a workload. Inspector v2 builds an SBOM per EC2 instance via the SSM agent and per container image via ECR's manifest, then matches that SBOM against CVE feeds. Without an accurate SBOM, compute workload security is impossible — you cannot patch what you cannot see.

Network Reachability Findings

In addition to package vulnerabilities, Inspector v2 produces network reachability findings for EC2 instances. These findings combine VPC routing, security groups, NACLs, internet gateways, and VPC peering to compute the actual reachability of an instance from the internet or from other VPCs — independent of installed packages. Network reachability findings are the bridge between Task 3.3 (compute workload security) and Task 3.4 (network troubleshooting). On the exam, if a scenario describes "an instance has an open SSH port reachable from 0.0.0.0/0", expect Inspector network reachability to be the right answer, not raw VPC Flow Logs.

Inspector Finding Severity, Exploit Availability, and Prioritisation

Inspector v2 assigns each finding a severity (Critical, High, Medium, Low, Informational) computed from a base CVSS v3 score, plus an Inspector-specific score that incorporates exploit availability, network reachability, and the age of the CVE. Compute workload security teams rarely have the bandwidth to remediate every finding; the exam tests whether you can prioritise. The standard prioritisation hierarchy is: Critical findings with known exploits and network reachability come first; Critical findings without exploits next; High findings on internet-facing instances next; everything else last.
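That hierarchy maps naturally onto a sort key. A minimal sketch for exam-style reasoning practice; the field names are illustrative, not the ASFF schema:

```python
# Hypothetical sketch: ranking findings per the prioritisation hierarchy above.
SEVERITY_RANK = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3, "INFORMATIONAL": 4}

def priority_key(finding):
    # Lower tuples sort first: Critical + exploit + reachable beats everything.
    return (
        SEVERITY_RANK[finding["severity"]],
        not finding["exploit_available"],   # exploitable findings first
        not finding["network_reachable"],   # internet-reachable findings next
    )

findings = [
    {"id": "A", "severity": "HIGH", "exploit_available": True, "network_reachable": True},
    {"id": "B", "severity": "CRITICAL", "exploit_available": False, "network_reachable": False},
    {"id": "C", "severity": "CRITICAL", "exploit_available": True, "network_reachable": True},
]
queue = sorted(findings, key=priority_key)
print([f["id"] for f in queue])  # C first, then B, then A
```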

Inspector findings flow automatically into AWS Security Hub using the AWS Security Finding Format (ASFF). From Security Hub, findings can be routed to Amazon EventBridge for automated remediation. This is the canonical SCS-C02 pattern for compute workload security automation: Inspector → Security Hub → EventBridge → Lambda or Systems Manager Automation. Memorise this chain; it appears repeatedly in scenario questions.
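The first hop of that chain is an EventBridge rule matching imported findings. A hedged sketch of the event pattern, assuming the standard "Security Hub Findings - Imported" detail type and the ASFF field layout:

```python
import json

# Hedged sketch: match Security Hub findings imported from Inspector
# with CRITICAL severity. Field paths follow the ASFF layout; verify
# against your own sample events before relying on them.
event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "ProductName": ["Inspector"],
            "Severity": {"Label": ["CRITICAL"]},
        }
    },
}
print(json.dumps(event_pattern, indent=2))
```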

Many candidates assume that activating Inspector v2 automatically scans every existing EC2 instance. It does not. Inspector v2's agent-based EC2 scanning only covers instances that have the SSM agent installed, running, and registered with Systems Manager as a managed instance. If the SSM agent cannot reach the SSM endpoints, for example from a private subnet without VPC endpoints, the instance shows up as an unmanaged EC2 instance in Inspector and is silently skipped. This is a classic compute workload security gotcha on the SCS-C02.

Inspector Finding Remediation Workflow

The exam loves automation patterns. The canonical compute workload security remediation workflow is:

  1. Inspector v2 raises a Critical CVE on an EC2 fleet.
  2. Inspector forwards the finding to Security Hub via ASFF.
  3. An EventBridge rule on the "Security Hub Findings - Imported" detail type, filtered on Severity.Label = CRITICAL, fires.
  4. EventBridge target invokes either an AWS Step Functions state machine or directly a Systems Manager Automation runbook.
  5. Step Functions orchestrates: tag the instance for "patching pending", invoke Patch Manager AWS-RunPatchBaseline against the instance, wait for compliance, verify via Inspector rescan, notify via SNS.
  6. If patching fails, Step Functions invokes a fallback path: snapshot the instance, terminate, and let the Auto Scaling group launch a fresh instance from a freshly baked golden AMI.

This workflow is compute workload security at production scale. The exam will not ask you to write the JSON, but it will ask you which services orchestrate which step. Step Functions is the right answer when the workflow has more than two steps with branching; Lambda alone is the right answer for one-shot remediations.

For exam-time pattern matching: if a scenario mentions "multi-step" or "long-running" remediation, pick Step Functions. If it says "one event triggers one action", pick Lambda directly from EventBridge. If it says "patch a fleet", pick Systems Manager Automation with the AWS-RunPatchBaseline document. Compute workload security automation rarely needs a custom EC2 instance — let managed services do the work.

AWS Systems Manager Patch Manager Deep Dive

AWS Systems Manager Patch Manager is the AWS-native answer to fleet patching across Linux, Windows, and macOS instances — including hybrid (on-premises) servers registered as Managed Instances. Patch Manager is the second pillar of compute workload security after Inspector. Inspector tells you what is broken; Patch Manager fixes it.

Patch Manager has four conceptual building blocks:

  • Patch baseline: a rule set that defines which patches are approved (by classification, severity, vendor, product) and which are rejected. AWS publishes default baselines (e.g., AWS-AmazonLinux2DefaultPatchBaseline); you create custom baselines to enforce your organisation's compute workload security policy, such as "auto-approve Critical and Important security patches after 7 days, reject all driver updates."
  • Patch group: a tag-based grouping of instances (Patch Group tag value) that maps to a baseline. Patch groups let you apply different baselines to dev vs prod compute workload security tiers.
  • Maintenance window: a scheduled time slice during which Run Command executes the AWS-RunPatchBaseline document. Maintenance windows enforce change windows, concurrency, and error thresholds.
  • Patch compliance: post-execution evaluation that reports per-instance compliance to Systems Manager Compliance and onward to Security Hub.
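As a rough sketch, the baseline described above ("auto-approve Critical and Important security patches after 7 days") looks like this when shaped as CreatePatchBaseline parameters; the baseline name is illustrative:

```python
# Hedged sketch of a custom patch baseline, shaped like the parameters
# to the SSM CreatePatchBaseline API. Name is illustrative.
baseline = {
    "Name": "prod-windows-baseline",
    "OperatingSystem": "WINDOWS",
    "ApprovalRules": {
        "PatchRules": [
            {
                "PatchFilterGroup": {
                    "PatchFilters": [
                        {"Key": "CLASSIFICATION", "Values": ["SecurityUpdates"]},
                        {"Key": "MSRC_SEVERITY", "Values": ["Critical", "Important"]},
                    ]
                },
                # Auto-approve matching patches 7 days after release
                "ApproveAfterDays": 7,
            }
        ]
    },
    "RejectedPatches": [],  # e.g. pin out driver updates by name/KB here
}
```

Instances pick up this baseline through their Patch Group tag, not through the maintenance window.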

::memorize[Patch Manager mental model:

  • Baseline = which patches count as "approved"
  • Group = which instances get which baseline (via tag Patch Group)
  • Window = when patching runs
  • Compliance = did patching succeed?

If a scenario asks "how do I apply different patch policies to dev vs prod?", the answer is patch groups with different baselines, not separate maintenance windows. Memorise this distinction — it is one of the most tested compute workload security details on SCS-C02.]{href="https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-patch-baselines.html"}

Hybrid Patch Management for On-Premises Servers

Patch Manager extends to on-premises Linux and Windows servers via Systems Manager hybrid activations. You generate a hybrid activation, install the SSM agent on the on-prem server, and the server registers as a Managed Instance with an mi- prefix instead of i-. From that moment on, Patch Manager treats it identically to a native EC2 instance. This is the only AWS-native answer for "we have on-prem Windows servers and we want unified compute workload security patching"; third-party tools like SCCM are out of scope on SCS-C02.

EC2 Image Builder and Hardened Golden AMIs

Patching running instances is reactive. The proactive compute workload security pattern is the golden AMI: a hardened, pre-patched, pre-configured base image that every new instance launches from. EC2 Image Builder is AWS's managed service for building, testing, and distributing golden AMIs (and container images) on a schedule.

Image Builder has five primitives:

  • Image recipe: declares a parent image (e.g., latest Amazon Linux 2023 AMI) plus an ordered list of components.
  • Component: a YAML document that runs build-phase and test-phase steps. AWS publishes managed components (CIS Level 1 hardening, STIG hardening, CloudWatch agent install); you author custom components for app-specific compute workload security baselines.
  • Image pipeline: schedules recipe execution (cron or on-demand) and applies a distribution configuration.
  • Distribution configuration: defines target Regions, target accounts, AMI launch permissions, KMS encryption of the AMI, and launch templates updated automatically.
  • Infrastructure configuration: declares the build-time EC2 instance type, subnet, IAM instance profile, and SNS topic for build notifications.
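For concreteness, a minimal custom component might look like the following; the component name and commands are hypothetical, but the schemaVersion/phases/steps shape follows the Image Builder component document format:

```yaml
# Hedged sketch of a custom Image Builder component (schemaVersion 1.0).
name: disable-unused-services        # illustrative component name
description: Minimal OS hardening step for a golden AMI
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: StopAndMaskTelnet
        action: ExecuteBash
        inputs:
          commands:
            - systemctl disable --now telnet.socket || true
  - name: test
    steps:
      - name: VerifyTelnetDisabled
        action: ExecuteBash
        inputs:
          commands:
            - '! systemctl is-enabled telnet.socket'
```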

Image Builder pipelines should be invoked via EventBridge whenever a new CVE is published in Inspector for the parent image. This closes the loop on compute workload security: a critical CVE in your golden AMI triggers a fresh build, which triggers a launch template update, which triggers Auto Scaling group instance refresh, which retires the vulnerable instances. End-to-end automated, no humans required.

The CIS Level 1 and STIG components are particularly important for compute workload security on SCS-C02. CIS = Center for Internet Security; STIG = Security Technical Implementation Guide (US DoD). Both impose host-level configuration baselines: disable unused services, set kernel parameters, configure auditd, harden sshd_config. Memorise that "we need a CIS-hardened AMI" is best answered by Image Builder with the AWS-managed CIS component, not by hand-rolled scripts.

IAM Instance Roles vs Service Roles

A core compute workload security skill is knowing which IAM role does what. The SCS-C02 distinguishes:

  • Instance role (a.k.a. EC2 instance profile): an IAM role assumed by code running on an EC2 instance. The credentials are surfaced via the Instance Metadata Service (IMDS). Use cases: an app on EC2 reads from S3, writes CloudWatch metrics, fetches a secret from Secrets Manager.
  • Service role: an IAM role assumed by an AWS service on your behalf to act against your resources. Use cases: Image Builder needs a service role to create AMIs; Systems Manager Automation needs a service role to invoke other APIs; Lambda needs an execution role.

Both are IAM roles, but the trust policy differs. An instance role's trust policy lists ec2.amazonaws.com as the principal; a service role's trust policy lists the relevant service principal (imagebuilder.amazonaws.com, ssm.amazonaws.com, lambda.amazonaws.com).
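A sketch of the two trust policies side by side, showing that only the service principal differs:

```python
import json

# Trust policy for an instance role: EC2 is the principal.
instance_role_trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# A service role for Image Builder is identical except for the principal:
service_role_trust = json.loads(
    json.dumps(instance_role_trust)
        .replace("ec2.amazonaws.com", "imagebuilder.amazonaws.com")
)
```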

The exam-critical permission is iam:PassRole. To attach an instance role to an EC2 instance, the IAM user or role doing RunInstances must have iam:PassRole for the role being passed. To delegate a service role to Image Builder, the same iam:PassRole is required. Compute workload security misconfigurations frequently come from over-broad iam:PassRole grants — a developer with iam:PassRole on * can hand any role to any EC2 instance, including overprivileged roles.

iam:PassRole is one of the top three IAM-related compute workload security pitfalls on SCS-C02. If a scenario describes "developers can launch EC2 instances but should not be able to attach the production-admin role", the correct fix is to scope iam:PassRole to a specific list of allowed roles via a Resource element on the policy — NOT to remove RunInstances permission. The exam will offer "remove RunInstances" as a tempting distractor.
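A hedged sketch of that scoped policy; the account ID and role-name pattern are illustrative:

```python
# Illustrative developer policy: launching is allowed, but iam:PassRole is
# scoped via Resource to an allow-listed role-name prefix. The account ID
# and "app-*" pattern are hypothetical.
scoped_passrole_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowLaunch",
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            "Resource": "*",
        },
        {
            "Sid": "PassOnlyAppRoles",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            # Scope via Resource instead of removing RunInstances:
            "Resource": "arn:aws:iam::123456789012:role/app-*",
            # Optional tightening: role may only be passed to EC2
            "Condition": {"StringEquals": {"iam:PassedToService": "ec2.amazonaws.com"}},
        },
    ],
}
```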

Instance Metadata Service v2 (IMDSv2) Enforcement

IMDSv2 is the single most important compute workload security control on EC2. The legacy IMDSv1 was a plain HTTP GET against http://169.254.169.254/latest/meta-data/iam/security-credentials/.... Server-side request forgery (SSRF) attacks abused this: a vulnerable web app, tricked into proxying a request, could exfiltrate the instance role's temporary credentials.

IMDSv2 fixes SSRF with three controls:

  1. Token-based session: the client must first PUT /latest/api/token with a header X-aws-ec2-metadata-token-ttl-seconds, then include the returned token on every GET. SSRF that only forwards GETs is blocked.
  2. Hop limit: the IP TTL (hop limit) on the token response defaults to 1, so the token cannot cross an additional network hop such as a Docker bridge network, blocking containerised SSRF. Raise the hop limit to 2 only when containers on the instance legitimately need IMDS access.
  3. Required mode: set the instance metadata option to HttpTokens=required to refuse all IMDSv1 requests outright. This is the production compute workload security baseline.
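The token handshake in step 1 can be sketched with the standard library; the helper names are hypothetical, and the requests are only built here, never sent:

```python
import urllib.request

IMDS = "http://169.254.169.254"

def build_token_request(ttl_seconds=21600):
    # Step 1 of IMDSv2: PUT to the token endpoint with a TTL header.
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def build_metadata_request(token, path="meta-data/iam/security-credentials/"):
    # Step 2: every subsequent GET must carry the session token header.
    # A GET-only SSRF payload cannot perform the PUT, so it never gets a token.
    return urllib.request.Request(
        f"{IMDS}/latest/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
```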

Enforce IMDSv2 organisation-wide via three layers:

  • Account-level default in EC2 settings: new instances default to IMDSv2 required.
  • AMI metadata: register the golden AMI with ImdsSupport = v2.0 so every instance launched from it defaults to IMDSv2-only, and derived launch templates inherit the setting.
  • SCP: deny ec2:RunInstances when ec2:MetadataHttpTokens != required. This is the bulletproof compute workload security guardrail at the AWS Organizations level.
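A sketch of that SCP; the Sid is illustrative, while ec2:MetadataHttpTokens is the real condition key:

```python
# Hedged sketch of the Organizations guardrail described above.
scp_require_imdsv2 = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyIMDSv1Launches",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        # Deny any launch that does not enforce token-based IMDSv2
        "Condition": {"StringNotEquals": {"ec2:MetadataHttpTokens": "required"}},
    }],
}
```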

SSRF (Server-Side Request Forgery): A vulnerability where an attacker tricks the server into making HTTP requests to internal endpoints. On EC2, the canonical SSRF target is the instance metadata endpoint at 169.254.169.254. IMDSv2 token-based authentication mitigates SSRF by requiring a session token that SSRF GET-only payloads cannot obtain. Modern compute workload security baselines REQUIRE IMDSv2.

Host-Based Firewalls and OS-Level Hardening

Security groups and network ACLs are perimeter controls. Compute workload security defence-in-depth requires a second firewall layer on the host itself: iptables / nftables on Linux, Windows Firewall on Windows. Host-based firewalls cap the blast radius if a security group is misconfigured, and they let you express policies the security group cannot — for example, "this user account can only connect to this local port".

The SCS-C02 expects you to know that:

  • Host-based firewalls are managed by the customer; AWS does not configure iptables for you.
  • Configurations should be baked into the golden AMI via Image Builder components, not configured on every running instance.
  • Host-based firewall logs (auditd, Windows Event Log) ship to CloudWatch Logs via the unified CloudWatch agent.
  • The CloudWatch agent is itself installed via Systems Manager State Manager (AWS-ConfigureAWSPackage) for fleet consistency.

For SCS-C02 exam scenarios, "host-based firewall" almost always points to either Windows Firewall (for Windows fleets) or iptables / firewalld (for Linux fleets) PLUS the CloudWatch agent shipping logs centrally. AWS does NOT have a managed host-based firewall product — do not look for one. Compute workload security at the OS level is shared-responsibility customer-side.

Secret and Credential Delivery to Compute

The SCS-C02 dedicates a full skill bullet to "passing secrets and credentials securely to compute workloads". The compute workload security antipatterns to avoid are:

  • Hardcoding access keys in user data scripts.
  • Hardcoding access keys in environment variables.
  • Baking credentials into AMIs.
  • Storing credentials in plaintext config files.

The AWS-native compute workload security patterns are:

  • For IAM credentials: never deliver them. Use the instance role; the SDK auto-discovers credentials via IMDSv2.
  • For database passwords, third-party API keys, OAuth tokens: store in AWS Secrets Manager. Apps fetch via SDK at startup. Rotate via Lambda rotation function on a schedule.
  • For non-sensitive config (feature flags, region names): use Systems Manager Parameter Store standard parameters.
  • For sensitive config that is too low-volume to justify Secrets Manager pricing: use Parameter Store SecureString parameters, encrypted with a KMS customer-managed key.

The Secrets Manager Lambda extension is a compute workload security upgrade for serverless: a Lambda layer that runs alongside the function, fetches the secret once, caches it locally, and refreshes on a TTL. Without caching, a naive implementation that fetches the secret on every invocation incurs a Secrets Manager API call each time, which is expensive at scale.

For Amazon ECS and Amazon EKS workloads, the canonical compute workload security pattern is to inject secrets into the container at runtime via the task definition's secrets block, sourced from Secrets Manager or Parameter Store. The secret never lands on disk and never appears in CloudFormation templates or task definition JSON. Grant access with the right role: the task execution role is what ECS uses to pull the secret at container start, and the task IAM role covers the application's own runtime API calls; neither should be the instance role of the underlying EC2 host. This separation is heavily tested on SCS-C02.
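A minimal sketch of that secrets block inside a task definition; every ARN, name, and image here is illustrative:

```json
{
  "family": "payments-api",
  "executionRoleArn": "arn:aws:iam::123456789012:role/payments-api-execution-role",
  "taskRoleArn": "arn:aws:iam::123456789012:role/payments-api-task-role",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/payments-api:latest",
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/db-AbCdEf"
        }
      ]
    }
  ]
}
```

ECS resolves valueFrom with the execution role at container start and exposes the value only as the DB_PASSWORD environment variable inside the container.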

Container Image Scanning with ECR Enhanced Scanning

Amazon ECR offers two scan modes that you must distinguish on SCS-C02:

  • Basic scanning (free): historically powered by the open-source Clair engine. Scans on push only. Limited CVE feed coverage (mostly OS packages).
  • Enhanced scanning (paid, powered by Inspector v2): continuous scanning, broader CVE feeds, application-layer language packages (Python pip, Node npm, Java Maven), Lambda support.

For compute workload security at production scale, enhanced scanning is the right answer. The trade-off is cost: Inspector v2 charges per image scanned per month. Lab environments may stick with basic scanning; production should not.

ECR image scan results flow into Security Hub identically to EC2 findings. The same compute workload security remediation pattern applies: enhanced scan → ASFF → Security Hub → EventBridge → CodeBuild rebuild trigger → push fresh image → ECS / EKS service refresh.

Patch Compliance Dashboards and Reporting

Systems Manager Compliance aggregates patch state across the fleet. Each instance reports per-patch compliance after every Run Command execution; Compliance presents this as a fleet-wide percentage in the SSM console. Compliance data ships to:

  • Security Hub: as Patch Compliance ASFF findings, mixing with Inspector and GuardDuty findings on a single pane of glass.
  • AWS Config: via the managed rule ec2-managedinstance-patch-compliance-status-check, which marks non-compliant instances NON_COMPLIANT and triggers Config remediation actions.
  • Amazon QuickSight or Athena: by exporting Compliance data to S3 for custom dashboards.

For SCS-C02 questions that ask "how do I prove to an auditor that 95% of our fleet has applied the September Microsoft patches?", the answer chain is: Patch Manager runs the baseline → Compliance records per-instance status → Config rule evaluates → Security Hub aggregates → Audit Manager builds the evidence package. Compute workload security is also compute workload provability.

::memorize[Compute workload security service map for SCS-C02:

  • Vulnerability detection: Inspector v2 (EC2, ECR, Lambda)
  • Patching: Systems Manager Patch Manager (+ Run Command, Maintenance Windows)
  • Hardening / immutable images: EC2 Image Builder
  • Identity for workloads: IAM instance roles (with iam:PassRole controls)
  • Metadata protection: IMDSv2 (HttpTokens=required, hop limit 1)
  • Host firewall: customer-managed iptables / Windows Firewall, baked via Image Builder
  • Secrets: Secrets Manager (with rotation Lambda) for sensitive; Parameter Store SecureString for low-volume
  • Aggregation: Systems Manager Compliance → Security Hub → EventBridge

Memorise this seven-row table. It directly maps to every compute workload security skill bullet in Task 3.3.]{href="https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/welcome.html"}


Frequently Asked Questions

Q1: What is the difference between Amazon Inspector v2 and Inspector classic, and which one does the SCS-C02 test?

Inspector classic was an agent-based, on-demand assessment service that ran scheduled scans against EC2 instances. Inspector v2 (rebranded simply as "Amazon Inspector" when v2 launched in late 2021) is a continuous, account-wide scanner covering EC2, ECR, and Lambda using the SSM agent for data collection, with no separate Inspector agent. SCS-C02 tests Inspector v2 exclusively. If you see "assessment template" or "rules package" in an answer choice, those are Inspector classic vocabulary and almost always distractors. The compute workload security default today is v2.

Q2: How does Inspector v2 detect vulnerabilities without an Inspector-specific agent?

Inspector v2 piggybacks on the AWS Systems Manager (SSM) agent that is already installed on Amazon Linux, Ubuntu (recent), and Windows Server AMIs by default. The SSM agent gathers a software bill of materials (SBOM) on a schedule and forwards it to Inspector via an internal control plane. For ECR, Inspector reads image manifests directly from the registry. For Lambda, Inspector reads deployment package metadata. The compute workload security implication: ensure the SSM agent has IAM permission (via the instance profile and the AmazonSSMManagedInstanceCore policy) and network reachability to SSM endpoints, otherwise Inspector silently skips the instance.

Q3: What is the minimal IAM policy to let an EC2 instance be scanned by Inspector and patched by Patch Manager?

Attach the AWS-managed policy AmazonSSMManagedInstanceCore to the instance role. This policy grants the SSM agent the API permissions needed to register, send heartbeats, retrieve SSM documents, and report inventory. Patch Manager and Inspector v2 both rely on this same baseline. Optionally, add AmazonInspector2ManagedCisPolicy for CIS benchmark scanning. Keep policies tight: never grant * on ssm:* to an instance role. This is a frequent compute workload security misconfiguration on SCS-C02 scenario questions.

Q4: When should I use Secrets Manager versus Parameter Store SecureString for compute workload security?

Use Secrets Manager when: (1) the secret needs automatic rotation (database passwords, OAuth tokens), (2) the secret is shared cross-account via resource policy, (3) you need built-in integration with RDS, Redshift, or DocumentDB. Use Parameter Store SecureString when: (1) the secret is low-volume (< 10,000 reads / month), (2) rotation is manual or via your own Lambda, (3) cost matters — Parameter Store standard tier is free, Secrets Manager is $0.40 per secret per month plus per-API-call charges. For pure config (non-sensitive), use Parameter Store standard (String) parameters. The compute workload security exam answer typically picks Secrets Manager when "automatic rotation" appears in the requirements.

Q5: How do I enforce IMDSv2 across an entire AWS Organization?

Three layers, in order of strength: (1) Account-level default: in the EC2 console under "Data protection and security", set "Instance metadata defaults" to HttpTokens = required. New instances inherit this. (2) Launch template: bake MetadataOptions: { HttpTokens: required, HttpPutResponseHopLimit: 1 } into every launch template. (3) Service Control Policy: at the Organizations OU level, deny ec2:RunInstances when ec2:MetadataHttpTokens != required — this is the bulletproof guardrail because it stops human and automated bypasses alike. Compute workload security at scale always uses SCPs as the outer ring.

Q6: What does the EC2 Image Builder pipeline produce, and how does it deliver to a fleet?

An Image Builder pipeline runs the recipe, executes build-phase components (install agents, apply CIS hardening), executes test-phase components (verify hardening), produces an AMI (or container image), and applies a distribution configuration that copies the AMI to target Regions, shares it with target accounts, and optionally updates a parameter in Parameter Store with the new AMI ID. To deliver to a fleet, point your Auto Scaling group's launch template at the Parameter Store reference (e.g., resolve:ssm:/golden-ami/amzn2-latest); when Image Builder updates the parameter, ASG instance refresh rolls the fleet to the new AMI. This is the canonical compute workload security pipeline pattern on SCS-C02.

Q7: How does compute workload security tie into AWS Security Hub for centralised visibility?

Security Hub aggregates findings from Inspector v2 (CVEs, network reachability), Systems Manager Compliance (patch compliance), AWS Config (resource compliance rules including ec2-imdsv2-check and ec2-managedinstance-patch-compliance-status-check), GuardDuty (runtime threats), and IAM Access Analyzer. Every finding is normalised into ASFF and visible on a single pane of glass with severity, resource, and remediation guidance. From Security Hub, EventBridge rules fan out automated remediation (Lambda, Step Functions, SSM Automation). For SCS-C02, remember: Security Hub is the aggregator, not the detector — Inspector detects, Patch Manager remediates, Security Hub displays.

Exam-Ready Summary

Compute workload security on the SCS-C02 boils down to a seven-service pattern: Amazon Inspector v2 (continuously scan EC2, ECR, Lambda), AWS Systems Manager Patch Manager (apply patches by group, baseline, window), EC2 Image Builder (bake hardened golden AMIs), IAM (instance roles + iam:PassRole controls), IMDSv2 (token-based metadata, hop limit 1, SCP-enforced), AWS Secrets Manager / Parameter Store (zero-hardcoded credentials), and AWS Security Hub (aggregate, route, automate).

Internalise the compute workload security automation chain: Inspector finding → Security Hub ASFF → EventBridge rule → Step Functions or SSM Automation runbook → patch / rebuild / refresh → Inspector rescan confirms. Internalise the immutable-infrastructure mindset: Image Builder is preferred over in-place patching whenever possible. Internalise the credential delivery rule: never hardcode, always use roles for IAM, Secrets Manager for sensitive, Parameter Store for config. Compute workload security is the connective tissue across all three Security Hub pillars (detection, response, governance) and Domain 3's largest task — give it the study time it deserves.

Continue with Edge Security: CloudFront, WAF, Shield for Task 3.1, Network Security: VPC Controls for Task 3.2, and Threat Detection: GuardDuty + Security Hub for Domain 1 to complete the infrastructure security and detection chain.
