examhub.cc · The most efficient path to the most valuable certifications.

Application Modernization: Serverless, Containers, and Decoupling

8,200 words · ≈ 41 min read

Why Application Modernization Is a Separate Discipline from Migration

The SAP-C02 exam carefully separates task statement 4.3 (migration target architecture) from task statement 4.4 (application modernization). The distinction matters because application modernization is the act of changing the shape of a workload — its runtime, its data model, its deployment unit, its coupling — not merely its location. A rehost migration with AWS MGN delivers operational value in weeks; an application modernization initiative compounds that value over years by unlocking elasticity, deployment velocity, and the ability to evolve the system without rebuilding it. When you read a question that says "the team wants to reduce deployment lead time from 6 weeks to 2 days and cut licensing cost by 70%", that is an application modernization question, not a migration question, and the application modernization toolbox on AWS is distinct. Application modernization is the only AWS task statement where the answer "leave it alone" (Retain from the 7Rs) is sometimes the correct answer — because application modernization has real cost, real risk, and real failure modes that rehost does not.

The canonical SAP-C02 scenario for application modernization is the one this topic focuses on end-to-end: a 15-year-old Java monolith running on Oracle WebLogic, backed by an on-prem Oracle Database, with a 9-month deadline to land on AWS containers plus three independently deployed services. That scenario compresses into a single prompt every lever in the application modernization toolbox: containerization (App2Container), incremental decomposition (Refactor Spaces + Microservices Extractor), platform engineering for the target runtime (Proton), data layer replatforming (Oracle to Aurora PostgreSQL, monolithic schema to domain-driven schema), event-driven decoupling (EventBridge, Kinesis, MSK), stateless migration (externalized session to ElastiCache or DynamoDB), and pragmatic ML/AI infusion (Textract, Bedrock, SageMaker). Every exam prompt on application modernization pulls from this same toolbox; mastering the decision boundaries between the tools is the single highest-value preparation you can do for task 4.4.

Migration vs Modernization: Where the Line Falls

A migration is about where the workload runs; a modernization is about how the workload is structured. The SAP-C02 exam uses the 7Rs to describe migration categories, but application modernization on AWS lives inside Replatform and Refactor — the two Rs that change the workload's shape. If the question frames success as "reduce data-center footprint" or "decommission the colocation facility", it is a migration question. If it frames success as "independent deployments per service" or "eliminate monthly Oracle licensing", it is an application modernization question.

Why SAP-C02 Task 4.4 Has Grown in Weight

Application modernization question volume on SAP-C02 has grown year over year because AWS has released more modernization-specific services (Refactor Spaces, App Runner, Microservices Extractor, Proton) and because real-world customer conversations have moved from "how do we migrate" to "how do we modernize after migration". Expect roughly 10–12% of exam questions to probe modernization-specific AWS tools and patterns, with most of those questions coming from task statement 4.4 directly.

The Modernization Progression: Rehost to Replatform to Refactor

The AWS Cloud Adoption Framework and the 7Rs define a spectrum, but in practice application modernization lives in a three-rung ladder: Rehost, Replatform, and Refactor. Rehost is lift-and-shift with no code change, using MGN to block-replicate the VM into EC2. Replatform is lift-tinker-shift — you swap a component (typically the database from self-managed Oracle to managed Aurora, or the runtime from a VM to a container) but the application source code is untouched or nearly so. Refactor is the act of decomposing a monolith into services, rewriting modules to be stateless, replacing synchronous calls with events, and redesigning the data model around domain boundaries. Each rung up the ladder increases both potential value and execution risk; the application modernization question is rarely "which rung is best" in the abstract, but rather "which rung is justified by the specific constraints in this scenario".

The Cost-Value Curve Across the Three Rungs

The cost asymmetry is what the exam tests most often. Rehost delivers 10–15% infrastructure savings and zero operational change. Replatform (containerize + managed database) unlocks 30–50% operational savings, patching offload, elastic scaling, and license reduction. Refactor can deliver 10x deployment velocity and unlimited horizontal scale but requires dedicated engineering capacity for 6–18 months. A common exam trap is presenting a refactor as the "best practice" answer when the business constraint is a 4-month deadline; in that prompt, Replatform to containers plus managed Aurora is the correct modernization choice and a full refactor is the distractor.

Rehost = 10–15% cost reduction, weeks of effort, zero application change. Replatform = 30–50% cost reduction plus managed service benefits, months of effort, minimal code change. Refactor = unlimited scale and deployment velocity, 6–18 months of effort, significant code change. Pick the rung that matches the business constraint, not the one with the highest theoretical ceiling. https://docs.aws.amazon.com/prescriptive-guidance/latest/strategy-migration/apg-gloss.html

Iterative Modernization Is the Winning Pattern

A second exam-relevant rule is that these rungs are not a one-shot decision. Application modernization on AWS is an iterative program: many successful modernization engagements rehost first (to eliminate data center dependency), replatform within 6 months (to swap the database and containerize), then refactor module-by-module over the following 12–24 months (using Refactor Spaces as the traffic-shifting facade). The exam will reward answers that propose a phased progression and penalize answers that jump straight to a big-bang refactor for a workload that has not even been moved into AWS yet.

The Canonical Scenario: WebLogic Java Monolith to AWS in 9 Months

Before we dive into tools, let us anchor on the scenario so every later section maps back to a concrete decision. The system is a 15-year-old Java EE application deployed on Oracle WebLogic 12c. It runs on four on-prem VMware hosts, backed by an Oracle 12c database with 4 TB of data and 800 GB of stored procedure logic. The application has three implicit domains glued together by shared database tables: catalog, orders, and customer notifications. Session state is stored in WebLogic's in-memory clustered session replication, which means the application is not stateless. Deployments happen via WAR file upload during quarterly maintenance windows. The team has 9 months and wants to land on AWS containers with at least three of the implicit domains decoupled into independently deployable services.

The Three-Phase Plan in Outline

The winning application modernization roadmap for this scenario follows a strict three-phase sequence, and every SAP-C02 prompt that mirrors this shape wants you to recognize the sequence. Phase 1 (months 1–3): containerize the existing monolith with App2Container, land it on ECS Fargate, replace WebLogic session clustering with externalized session in ElastiCache for Redis, migrate Oracle to Aurora PostgreSQL via DMS + SCT. The monolith still runs as one deployable unit, but it now runs on containers, on a managed database, with an externalized session store. Phase 2 (months 4–7): set up Refactor Spaces as the application-level facade, deploy a new "notifications" service built on Lambda + SNS behind an API Gateway route that is shifted incrementally from the monolith. Phase 3 (months 7–9): extract the "catalog" and "orders" services by manual domain slicing (the call-graph partitioning technique that Microservices Extractor automates for .NET solutions, but which must be done by hand for this Java codebase), each with its own Aurora schema and EventBridge-based event contract. At month 9, three services run alongside a shrunken monolith, Refactor Spaces routes traffic, and the team has a repeatable pattern to continue decomposing the monolith over the next 12 months.

An incremental monolith decomposition strategy where new functionality is built as separate services behind a facade (in AWS, typically an Application Load Balancer or the Refactor Spaces proxy) that routes selected paths to the new service and all other paths to the legacy monolith. Over time the monolith is "strangled" as more routes migrate, until it can be decommissioned. Named after the strangler fig tree that grows around a host tree and eventually replaces it. https://docs.aws.amazon.com/prescriptive-guidance/latest/modernization-decomposing-monoliths/strangler-fig.html

Plain-Language Explanations

Application modernization is full of jargon that obscures simple mechanics. Here are three independent analogies that each illuminate a different facet of the modernization toolbox, so when you hit an exam question you can translate the scenario into familiar terms before picking the answer.

Analogy 1: The Kitchen Renovation

Picture a 15-year-old restaurant kitchen where the single head chef personally does prep, cooking, plating, and dishwashing, and the walk-in fridge is shared across every station. That is the monolith. A rehost is picking up the entire kitchen and dropping it into a new building — same chef, same fridge, new address. A replatform is keeping the chef but replacing the walk-in fridge with a managed commissary that restocks automatically and replacing the gas stove with a more efficient induction range; same chef, same recipes, but the expensive infrastructure is now somebody else's problem. A refactor is splitting the single head chef into a prep cook, a line cook, a plater, and a dishwasher, each with their own station and their own tools; they communicate through order tickets and hand-offs rather than by elbowing each other in the same walk-in. Refactor Spaces is the waiter at the pass who decides whether to shout "order up" at the old head chef or at the new line cook based on the type of dish ordered. App2Container is the shrink-wrap machine that takes the chef's entire mise-en-place and seals it into a portable container so the chef can work in any kitchen.

The kitchen analogy nails why refactor is expensive: you are not just adding staff, you are redesigning every workflow, renegotiating who owns the fridge, and retraining everyone on a ticket system. It also explains why replatform often wins the exam question with a short deadline: you can swap the fridge next weekend, but turning one chef into four specialized cooks takes six months of hiring and training.

Analogy 2: The Library and the Strangler Fig

Imagine a public library with a single, ancient card catalog system covering every section — history, children's books, periodicals, audiobooks. The librarian wants to modernize to a digital catalog without closing the library for a year. She puts a helpful front-desk assistant (the facade) at the entrance. When a patron asks about audiobooks, the assistant directs them to the new digital kiosk in the audiobooks corner. Everything else still goes to the old card catalog. Next month the children's section gets digitized and the assistant updates her instructions. Over a year, each section migrates, and eventually the card catalog is wheeled out to storage. That is the strangler fig pattern, and that front-desk assistant is exactly what Refactor Spaces provides: an application-level proxy with an API Gateway in front that routes path-by-path between the legacy monolith and the new services.

The library analogy also clarifies the transactional outbox pattern. If the digital audiobooks kiosk needs to tell the card catalog when a checkout happens, writing to both systems from the patron's transaction is brittle — what if the kiosk succeeds but the message to the card catalog fails? Instead, the kiosk writes the checkout and an outbox message in the same local database transaction. A separate worker reads the outbox table and publishes events. The patron never sees a half-completed checkout, and the card catalog always eventually learns about it. Change Data Capture via DMS or Aurora logical replication is the mechanism that makes this outbox worker reliable on AWS.

Analogy 3: The Swiss Army Knife vs the Toolbox

A Swiss Army knife has a blade, a screwdriver, scissors, and a corkscrew all folded into one handle. It is convenient for a picnic but useless for a renovation — you cannot loan the screwdriver to your partner while you use the scissors, and when the corkscrew breaks, you have to replace the whole knife. That is the monolith: one deployable, one language runtime, one database, one release cadence. A toolbox has independent tools on a shared pegboard; the screwdriver gets loaned out, the scissors get sharpened independently, and when the corkscrew breaks, you replace just the corkscrew. That is a microservices architecture on ECS, EKS, or Lambda, coordinated by EventBridge and SQS. The pegboard is the shared event bus and the labels on the pegboard hooks are the event schemas registered in the EventBridge schema registry. Proton is the pegboard manufacturer — it defines the standardized shape of every new hook you add so every tool hangs the same way and nobody invents their own weird mounting bracket.

The Swiss Army knife analogy explains why a premature split hurts: if you pull the scissors off the Swiss Army knife but leave them sharing the same blade pivot with the screwdriver, you have two tools with a hidden shared dependency and you have made your life worse, not better. That is the distributed monolith anti-pattern, and recognizing it is a high-frequency exam skill.

App2Container: Containerizing Java and .NET from Existing VMs

AWS App2Container (A2C) is a command-line tool that inspects a running Java or .NET application on a Linux or Windows VM, captures its runtime dependencies, and produces a Docker container image plus the deployment artifacts for ECS, EKS, or App Runner. It is the single most exam-relevant tool for the containerization phase of application modernization because it turns the "containerize our monolith" story from a six-month engineering project into a multi-week operational task.

A2C Analyze and Containerize Phases

A2C runs in two phases. The analyze phase is executed on the source VM (or on a worker machine with access to the VM) and produces an inventory of applications with their ports, dependencies, configuration files, and WebLogic/Tomcat/IIS/WebSphere-specific metadata. The containerize phase packages the detected application into a Dockerfile, builds the image, pushes it to Amazon ECR, and generates CloudFormation templates for ECS or EKS deployment targets. A2C handles the Java complexity you would rather not handle manually: it preserves JVM flags, embeds required JARs, captures the WebLogic domain configuration, and generates a container entrypoint that reproduces the WebLogic bootstrap. For .NET, A2C supports both .NET Framework on Windows (producing Windows containers) and .NET Core/6/8 on Linux.

A2C turns a VM into a container in days, not months, but the result is a containerized monolith — not microservices. The right use of A2C is phase 1 of the modernization: get the monolith into containers so it runs on Fargate and shares infrastructure with new services, then decompose incrementally with Refactor Spaces. Treating A2C as "the modernization" is a common exam trap. https://docs.aws.amazon.com/app2container/latest/UserGuide/what-is-a2c.html

A2C Limitations That Shape Exam Decisions

A2C's important limitations shape the exam decision: it does not modify the application code, it does not convert session state from in-memory clustering to externalized, it does not migrate the database, and it does not split the monolith. If the exam prompt says "the Java EE application uses WebLogic session replication and the team wants stateless containers", A2C alone is not the answer — A2C plus an externalized session store (ElastiCache for Redis in cluster mode, or DynamoDB with TTL) is. A2C is necessary but not sufficient for true modernization; it is the shrink-wrap, not the redesign.

AWS Migration Hub Refactor Spaces: The Strangler Fig Facade as a Service

AWS Migration Hub Refactor Spaces is a managed implementation of the strangler fig pattern. It creates an application-level proxy consisting of an API Gateway in front of a Network Load Balancer, with routing rules that can send specific URL path patterns to either the legacy monolith (typically running on EC2 or ECS) or to new microservices (Lambda, ECS service, or HTTP endpoint). Refactor Spaces manages the cross-account networking plumbing — VPC peering, Transit Gateway attachment, or PrivateLink — so a new service in a greenfield "modernization" account can receive traffic from the legacy monolith's account without the team wiring up networking manually.

Incremental Path-Based Routing Mechanics

The exam-critical concept is incremental path-based routing. A Refactor Spaces application owns a set of services and routes. A service is either the legacy monolith (type URL) or a new microservice (type LAMBDA or URL). A route binds a URL path pattern to a service, and you can change the binding without redeploying clients because the API Gateway endpoint is the stable customer-facing URL. When the team extracts the "notifications" domain into a Lambda function, they add a route for /notifications/* pointing at the new Lambda, and the monolith never knows anything changed. Customer traffic for /orders/*, /catalog/*, and everything else still hits the monolith. Over six months, as more domains are extracted, more routes point at new services, and eventually the default route (the monolith) handles fewer than 10% of requests and can be scheduled for decommission.

Refactor Spaces creates an environment (a shared network), applications (the strangler fig facade per business application), services (legacy or new), and routes (path-based traffic rules). The environment orchestrates Transit Gateway attachments across accounts so the new modernization account can route to the legacy monolith account without the team hand-wiring the networking. This is the reason to prefer Refactor Spaces over a hand-rolled API Gateway plus ALB setup. https://docs.aws.amazon.com/migrationhub-refactor-spaces/latest/userguide/what-is-mhub-refactor-spaces.html
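The path-based routing mechanics above can be sketched in a few lines. This is an illustrative stand-in for what the Refactor Spaces API Gateway front door does, not AWS API code; the URLs and function names are invented for the example. Longest matching prefix wins, and any unmatched path falls through to the monolith as the default route.

```python
# Sketch of strangler-fig path routing: illustrative names, not AWS APIs.
MONOLITH = "http://legacy-monolith.internal"

# Route table: path prefix -> backend. Extracted domains get entries here;
# everything else falls through to the monolith (the default route).
routes = {
    "/notifications": "https://new-notifications.example",
}

def resolve_backend(path: str) -> str:
    """Longest matching prefix wins; unmatched paths go to the monolith."""
    best = ""
    for prefix in routes:
        if path.startswith(prefix) and len(prefix) > len(best):
            best = prefix
    return routes[best] if best else MONOLITH
```

When the "orders" domain is extracted six months later, the team adds one entry to the route table; clients keep calling the same stable endpoint and never learn that the backend changed.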

Refactor Spaces Is a Migration-Era Facade, Not a Service Mesh

Refactor Spaces is not a microservice framework, a service mesh, or a service discovery system. It is specifically a migration-era facade for the period when both the monolith and the new services coexist. Once the monolith is decommissioned, you typically replace Refactor Spaces with a direct API Gateway or ALB because you no longer need the legacy-compatible fallback routing. The exam will test whether you understand this migration-era framing: if the scenario says "the monolith is already decommissioned and we need a service mesh", Refactor Spaces is the wrong answer and App Mesh or VPC Lattice is correct.

Refactor Spaces vs Hand-Rolled Strangler Fig

A hand-rolled strangler fig on AWS is feasible: an Application Load Balancer with path-based listener rules, a default target group pointing at the monolith, and additional target groups per new service. So why pay for Refactor Spaces? The answer is cross-account isolation and governance. In a large enterprise migration, the legacy monolith lives in a "migration" account and the new services are built in "modernization" accounts by different product teams. Refactor Spaces handles the Transit Gateway attachments, the IAM permissions for cross-account route updates, and the API Gateway lifecycle — a hand-rolled ALB approach requires the migration team to wire up all of that themselves, typically consuming a multi-week engineering effort per new service onboarded.

Microservices Extractor for .NET: Monolith Decomposition from Static Analysis

AWS Microservices Extractor for .NET is a standalone desktop tool that ingests a .NET or .NET Framework solution, performs static code analysis to build a call graph of classes and methods, and uses visual graph partitioning to help an architect identify candidate microservice boundaries. The tool does not automatically refactor the code; it produces an extracted project skeleton, a list of required dependencies, and a report of the references to the extracted module that need to be replaced with cross-service calls.

Call Graph Partitioning at Domain Boundaries

The mental model is that a monolith's call graph has dense intra-domain edges (classes in the same domain call each other frequently) and sparse inter-domain edges (classes across domains call each other less often). Domain boundaries are where the graph is naturally sparse, and a good microservice extraction cuts at those sparse points. Microservices Extractor visualizes this graph and lets the architect draw a boundary; the tool reports how many outbound calls and shared objects would need to become cross-service calls, which is the implementation cost of that boundary choice.

The tool's value is in quantifying the cost of a boundary — how many cross-service calls, how many shared data types, how many shared database tables — before the team commits engineering effort. It is a decision-support tool, not a code generator. Treat the output as a prioritization input, not a finished architecture. https://docs.aws.amazon.com/microservice-extractor/latest/userguide/what-is-microservice-extractor.html
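The "cost of a boundary" idea reduces to a simple computation over the call graph. The toy graph and domain assignments below are invented for illustration; the point is that every edge crossing the proposed partition becomes a network call after extraction, so a good cut minimizes this count.

```python
# Toy call graph: (caller_class, callee_class) edges, with a proposed
# assignment of classes to domains. Illustrative data, not tool output.
edges = [
    ("OrderService", "OrderRepo"),
    ("OrderService", "CatalogService"),   # would become a cross-service call
    ("CatalogService", "CatalogRepo"),
    ("NotifyService", "OrderService"),    # would become a cross-service call
]

domains = {
    "OrderService": "orders", "OrderRepo": "orders",
    "CatalogService": "catalog", "CatalogRepo": "catalog",
    "NotifyService": "notifications",
}

def boundary_cost(edges, domains):
    """Count edges whose endpoints land in different domains."""
    return sum(1 for a, b in edges if domains[a] != domains[b])
```

A cut through a dense region of the graph yields a high boundary cost, which is exactly the chatty distributed monolith the text warns against.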

Applying the Technique to Java and Non-.NET Monoliths

For the Java monolith in our scenario, Microservices Extractor does not apply directly because it is .NET-specific. However, the analytical technique (call graph partitioning at domain boundaries) generalizes. For Java, equivalent static analysis can be done with tools like Structure101 or with manual dependency graphing from Maven plus Sonargraph. The exam will not test specific Java tooling, but it will test the concept: identify seams before you cut, because cutting at a dense graph edge produces the distributed monolith anti-pattern where two services are chatty over the network and share a database.

AWS Proton: Platform Engineering for Modernized Targets

AWS Proton is the service that most SAP-C02 candidates underestimate. It is a platform-engineering service that lets a central platform team publish environment templates (shared infrastructure like VPCs, clusters, shared databases) and service templates (standardized deployment patterns like "Fargate service behind ALB with CloudWatch alarms and a CI/CD pipeline"). Product teams then provision services by selecting a template and providing a small set of parameters, without ever writing raw CloudFormation or Terraform. The result is that a 50-microservice modernization does not produce 50 snowflake deployments; it produces 50 instances of three or four vetted service templates.

The Microservices-Explosion Failure Mode

Proton matters for application modernization specifically because the microservices-explosion failure mode (every team invents its own logging, its own IAM structure, its own deployment pipeline) is the operational disaster that kills modernization programs in year two. Proton is AWS's answer to that failure mode, and the exam will frame it as "how does the platform team enable self-service without losing governance". The correct answer is Proton environment and service templates, with versioning and a promotion workflow that moves new template versions from dev to prod environments in a controlled way.

A pre-built, opinionated deployment template that captures the organization's best practices for a common architecture pattern (for example, "Fargate service behind ALB with a CI/CD pipeline and CloudWatch alarms"). Product teams consume the golden path instead of hand-rolling, which ensures every service gets the same security baseline, logging configuration, and deployment pipeline. AWS Proton is the AWS-native mechanism for publishing and versioning golden paths. https://docs.aws.amazon.com/proton/latest/userguide/Welcome.html
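The golden-path contract can be sketched as follows. This is a conceptual illustration, not the Proton API: real Proton templates are versioned CloudFormation or Terraform bundles, and the template contents here are invented. The key property is that the platform team owns the baseline (alarms, logging, runtime) and product teams supply only a few parameters.

```python
# Conceptual sketch of a golden-path service template; illustrative only.
# In Proton, this would be a versioned environment/service template bundle.
SERVICE_TEMPLATE_V1 = {
    "runtime": "fargate",
    "alarms": ["5xx-rate", "p99-latency"],
    "logging": {"driver": "awslogs", "retention_days": 30},
}

def instantiate_service(name: str, cpu: int, memory: int) -> dict:
    """Merge the team's few parameters into the vetted template.
    The security/observability baseline is fixed by the platform team."""
    svc = dict(SERVICE_TEMPLATE_V1)
    svc.update({"name": name, "cpu": cpu, "memory": memory})
    return svc
```

Fifty services instantiated this way share one logging convention and one alarm baseline, which is precisely what the "18 teams, 18 logging conventions" scenario lacks.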

Proton Is Optional but the Discipline Is Not

Proton is not required for modernization — teams can succeed with hand-written CloudFormation or CDK per service — but the lack of Proton-equivalent discipline is the single most common root cause of second-year modernization failure. The exam tests this by presenting a scenario where "18 teams each built their own deployment pipeline and debugging production incidents now requires knowing 18 different logging conventions"; the correct remediation is to consolidate on Proton templates.

Data Layer Modernization: From Monolithic Schema to Domain-Driven Data

Containerizing the application without modernizing the data layer leaves most of the modernization value on the table. A monolith backed by a single Oracle schema with hundreds of tightly joined tables cannot become microservices because the database is the hidden coupling point. Every microservices architecture ultimately faces the same question: how do we split the data?

Oracle to Aurora PostgreSQL: The Most-Tested Replatform

The replatform move, and the one the exam asks about most often, is Oracle to Aurora PostgreSQL. AWS Database Migration Service (DMS) performs the full-load-plus-CDC replication while AWS Schema Conversion Tool (SCT) converts the Oracle DDL, stored procedures, and PL/SQL to PostgreSQL PL/pgSQL. SCT reports the percentage of objects that convert automatically (typically 70–85% for Oracle-to-Aurora-PostgreSQL) and flags the rest as action items for manual rewrite. The cutover pattern is DMS full load, DMS CDC to keep Aurora synchronized during dual-run, validation via DMS data validation tasks, then DNS cutover with the old database kept in replication-source mode for rollback.
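The validation step before cutover is conceptually a per-table comparison between source and target. The sketch below uses two in-memory SQLite databases as stand-ins for Oracle and Aurora PostgreSQL; in the real migration, DMS data validation tasks perform this comparison (and row-level checksums) for you.

```python
import sqlite3

# Stand-ins for the Oracle source and Aurora PostgreSQL target; in the real
# cutover, DMS data validation tasks do this comparison for you.
src, dst = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
for db in (src, dst):
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
src.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])
dst.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])

def rowcounts_match(src, dst, table):
    """A minimal sanity check: equal row counts per table before cutover."""
    q = f"SELECT COUNT(*) FROM {table}"
    return src.execute(q).fetchone()[0] == dst.execute(q).fetchone()[0]
```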

Cassandra to Amazon Keyspaces

For Cassandra workloads, the modernization target is Amazon Keyspaces, which is a serverless Cassandra-compatible database with per-request pricing. The migration path is similar conceptually (data copy, CDC, cutover) but the tooling differs: Keyspaces supports CQL-native migration via cqlsh COPY or via AWS Glue for larger datasets. The exam trap is assuming Keyspaces is a drop-in replacement; it is not — Keyspaces has different secondary index semantics, different counter semantics, and different consistency-performance trade-offs that require application-level validation before cutover.

Splitting a monolith's single Oracle schema into three Aurora PostgreSQL schemas (one per extracted service) does not work if the application still issues joins across the original tables. The application must first be refactored to stop cross-domain joins and start calling the other services' APIs. Splitting the schema without splitting the application produces distributed join queries that are 100x slower than the original, and the team rolls back. Always modernize the application access pattern before splitting the physical schema. https://docs.aws.amazon.com/prescriptive-guidance/latest/modernization-data-persistence/database-per-service.html

Change Data Capture and the Transactional Outbox

Once services are split, they need to communicate state changes without the tight coupling of synchronous calls. The transactional outbox pattern is the exam-correct solution: when a service mutates its own database, it writes an outbox row in the same local transaction capturing the event that should be published. A separate process (typically AWS DMS in CDC mode, or MSK Connect running the Debezium connector against the database's logical replication slot) reads the outbox table and publishes events to EventBridge, Kinesis, or MSK. This guarantees that every state change that commits also produces an event, and conversely that no event is published for a state change that rolled back — which is the at-least-once semantic that downstream services can deduplicate against.

The transactional outbox replaces the failed pattern of "write to database then call SNS" because that pattern has no atomicity: the database write can succeed and the SNS call can fail, producing silent state divergence. It also replaces the failed pattern of "publish to SNS then write to database" because that pattern can publish events for state that never commits. Outbox is the only at-least-once pattern that survives all failure modes, and the exam will reward answers that identify it.
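The outbox mechanics fit in a short sketch. SQLite stands in for the service's local database, and a plain list stands in for the event bus; in production the relay is a CDC process (DMS or Debezium), not an in-process loop. The essential property is that the business row and the outbox row commit in one local transaction.

```python
import sqlite3, json

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         event TEXT NOT NULL, published INTEGER DEFAULT 0);
""")

def place_order(order_id):
    # Business write and outbox write commit atomically together: either the
    # order exists AND its event will be published, or neither happened.
    with db:
        db.execute("INSERT INTO orders VALUES (?, 'PLACED')", (order_id,))
        db.execute("INSERT INTO outbox (event) VALUES (?)",
                   (json.dumps({"type": "OrderPlaced", "id": order_id}),))

published = []  # stand-in for EventBridge/Kinesis/MSK

def relay_outbox():
    # In production this relay is DMS CDC or a Debezium connector.
    rows = db.execute("SELECT id, event FROM outbox WHERE published = 0").fetchall()
    for row_id, event in rows:
        published.append(json.loads(event))  # at-least-once publish
        with db:
            db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
```

If the relay crashes between publishing and marking the row, the event is published again on the next pass — that duplicate is exactly why downstream consumers must deduplicate.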

Picking the Right Purpose-Built Database During Modernization

Modernization is also the moment to question whether the relational model is still the right fit. If the access pattern is key-value with a known primary key, DynamoDB eliminates schema management and scales to millions of requests per second. If the data is hierarchical or graph-shaped (recommendation engines, fraud rings), Neptune is the purpose-built fit. If the data is time-series (telemetry, metrics), Timestream is the fit. Application modernization is the only window when adopting a purpose-built database is cheap; retrofitting later is expensive.

Event-Driven Decoupling: EventBridge, Kinesis, and MSK

Once the monolith is split into services, the glue between them is events. The exam tests three AWS event-delivery services and the decision boundary between them: EventBridge, Kinesis Data Streams, and MSK (Managed Streaming for Apache Kafka).

EventBridge for Discrete Application Events

EventBridge is the right default for application integration events: discrete business events like "order placed", "user registered", "inventory adjusted". EventBridge provides content-based routing rules, schema registry for event contracts, cross-account event buses, and pay-per-event pricing. It shines when you have heterogeneous consumers (some Lambda, some SQS, some HTTP endpoints) and you want content-based filtering so each consumer only sees the events it cares about. EventBridge's limitations are throughput (a default PutEvents quota on the order of 10,000 events/second per account, increasable) and the absence of ordering guarantees — EventBridge offers no per-partition-key ordering the way Kinesis does.
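The content-based filtering idea can be sketched in its simplest form: an EventBridge rule pattern maps fields to lists of accepted values, and an event matches when every patterned field carries one of those values. The sketch below handles only flat top-level fields for brevity; real EventBridge patterns also support nested fields, prefix matching, and numeric operators.

```python
# Minimal sketch of EventBridge-style content-based matching (flat fields
# only). A pattern maps each field to a list of accepted values.

def matches(pattern: dict, event: dict) -> bool:
    """Event matches when every patterned field holds an accepted value."""
    return all(event.get(field) in accepted
               for field, accepted in pattern.items())

# A rule so the billing consumer only sees order placements:
order_rule = {"source": ["app.orders"], "detail-type": ["OrderPlaced"]}
```

The point for the exam is that filtering lives in the bus, not in the consumers: the notifications Lambda never receives catalog events it would otherwise have to discard.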

Kinesis Data Streams for Ordered High-Throughput Streams

Kinesis Data Streams is the right choice when you need ordered delivery per partition key and high throughput (millions of records per second). Kinesis shines for clickstream, IoT telemetry, and CDC pipelines where downstream consumers need to process records in strict order per entity. Kinesis also offers replay (up to 365 days of retention) which EventBridge does not, making it valuable for event sourcing patterns where a new service needs to rebuild state from historical events.
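The per-partition-key ordering guarantee follows from how Kinesis assigns records to shards: the partition key is MD5-hashed into a 128-bit space divided among shards, so records with the same key always land on the same shard and are read in order. The sketch below uses modulo instead of the contiguous hash ranges real shards own, which is a simplification.

```python
import hashlib

def shard_for(partition_key: str, shard_count: int) -> int:
    """Deterministic shard assignment: same key -> same shard -> in order.
    Real Kinesis shards own contiguous hash ranges; modulo is a
    simplification for illustration."""
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return h % shard_count
```

Using the entity identifier (customer ID, device ID) as the partition key is what gives "strict order per entity" while still spreading aggregate throughput across shards.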

MSK for Kafka-Native Workloads

MSK is the right choice when the team has existing Kafka expertise, requires Kafka Connect ecosystem compatibility (Debezium, existing Kafka sinks), or is migrating a self-managed Kafka deployment. MSK offers the full Kafka semantics including exactly-once producers, transactional messages, and tiered storage. MSK Serverless removes the cluster sizing decision for variable workloads. MSK's cost and operational complexity are higher than EventBridge; the exam will not award MSK for a greenfield application integration scenario unless Kafka-specific requirements are explicit.

EventBridge for discrete application events with heterogeneous consumers and content-based routing. Kinesis Data Streams for ordered high-throughput streams with replay (clickstream, telemetry, CDC). MSK when the team has Kafka expertise, needs Kafka Connect, or is migrating self-managed Kafka. If the prompt says "migrate existing Kafka workload", the answer is MSK; if it says "new application integration", the answer is usually EventBridge. https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html

EventBridge Pipes for Point-to-Point Integration

EventBridge Pipes is a newer capability that creates a point-to-point connection from a source (SQS, Kinesis, DynamoDB Streams, MSK, or self-managed Kafka) to a target (Lambda, Step Functions, an EventBridge bus, Kinesis) with optional filtering and enrichment. Pipes is the exam-correct answer when the scenario is "forward every CDC change from DynamoDB Streams to EventBridge with filtering": a connection that previously required a hand-rolled Lambda function and is now pure configuration.

Stateless Migration and Externalized Session

A modernization question that surfaces repeatedly is the session-state question. The WebLogic monolith in our scenario uses in-memory clustered session replication: when a user logs in, their session is stored in the JVM heap of one WebLogic server and replicated to the other servers in the cluster via WebLogic's replication protocol. This pattern is fundamentally incompatible with modern elastic container deployment. A container can be terminated at any moment (Fargate Spot interruption, ECS rolling deployment, pod eviction), which means session state stored in the container's memory is lost. Auto-scaling cannot add capacity without session loss. Blue-green deployments drop every user's login.

ElastiCache for Redis vs DynamoDB for Session Storage

The modernization target is externalized session storage in a managed service. Two AWS options dominate: ElastiCache for Redis in cluster mode (the default choice for fast session lookup, typically single-digit millisecond latency, supports TTL-based expiration) and DynamoDB with TTL (the choice when the team wants zero operational overhead and per-request billing for a low-traffic application). The application code change is small — replace httpSession.setAttribute with a Redis SET command — but it is a real code change and not something A2C can do automatically. This is a phase-1 modernization task that must precede stateless container deployment.
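The shape of the code change can be sketched with a Redis-style SET/GET-with-TTL interface; a dict stands in for the store so the example runs locally, and the session ID and payload are illustrative. In production the same two calls go to ElastiCache for Redis (SET with EX) or DynamoDB (an item with a TTL attribute):

```python
# Sketch of externalized session semantics, assuming a Redis-like
# SET key value EX ttl / GET interface. A dict stands in for the
# external store so the example is self-contained.
import time

class SessionStore:
    def __init__(self):
        self._data = {}  # session_id -> (payload, expires_at)

    def set(self, session_id: str, payload: dict, ttl_seconds: int) -> None:
        self._data[session_id] = (payload, time.time() + ttl_seconds)

    def get(self, session_id: str):
        entry = self._data.get(session_id)
        if entry is None:
            return None
        payload, expires_at = entry
        if time.time() >= expires_at:   # lazy TTL expiration, Redis-style
            del self._data[session_id]
            return None
        return payload

store = SessionStore()
# What was httpSession.setAttribute("user", ...) in the monolith becomes:
store.set("sess-42", {"user": "alice", "cart": ["sku-1"]}, ttl_seconds=1800)

# Any container, including a freshly scaled-out one, can now read it:
print(store.get("sess-42")["user"])   # alice
```

Because the session lives outside every container, terminating, replacing, or scaling containers no longer loses logins, which is the precondition for the elastic deployment patterns the rest of this section assumes.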

Why ALB Sticky Sessions Are Not Enough

The exam trap is assuming sticky sessions at the load balancer solve the problem. ALB sticky sessions do preserve affinity for a single user to a single container, but they do not survive container termination, auto-scaling events, or deployment rollouts. Sticky sessions are a partial workaround, not a stateless-session solution. Any prompt that mentions zero-downtime deployment or elastic auto-scaling requires externalized session, not stickiness.

Stateless containers require session state in an external store: ElastiCache for Redis (cluster mode for HA, TTL for automatic expiration) or DynamoDB (TTL, on-demand billing). ALB sticky sessions are not an acceptable substitute — they break on container termination, auto-scaling, and deployment rollouts. This is a phase-1 modernization task that the application team must complete before containerization delivers its value. https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html

Anti-Patterns: Premature Splitting and Distributed Transaction Pain

Modernization failures follow a small number of repeating patterns, and the exam tests these anti-patterns because they are the real-world failure modes.

Premature Splitting and the Distributed Monolith

Premature splitting is the decision to break a monolith into services before the team has identified the true domain boundaries. The symptom is services that are chatty with each other — service A calls service B three times to complete a single request, and service B cannot complete without calling service C. The root cause is that the "split" did not follow a true domain seam; it followed a convenient module boundary that shared data with the rest of the system. The remediation, when premature splitting is diagnosed in production, is usually to merge the chatty services back into a single service with a shared codebase and redo the decomposition with better seam identification.

Distributed Transactions and the Saga Pattern

Distributed transaction pain is the realization that a business operation that was atomic in the monolith (one database transaction across five tables) becomes non-atomic across services (five separate database writes in five services). The naive fix is two-phase commit, which does not work at microservice scale because it requires every participating service to be available simultaneously. The exam-correct fix is the saga pattern: model the business operation as a sequence of local transactions where each service commits its own state, and if a later step fails, run compensating actions that undo the already-committed steps in reverse order. AWS Step Functions is the managed orchestration service for the orchestrated saga flavor; EventBridge with choreographed handlers is the alternative for the choreographed saga flavor. Either way, the mental model shift is from "one transaction" to "a workflow of local transactions with compensations".
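The orchestrated flavor can be sketched as a tiny local orchestrator; the step names and the simulated payment failure are illustrative, and in the managed version Step Functions plays the orchestrator role with a Catch branch per state:

```python
# Minimal orchestrated-saga sketch: each step is a local transaction
# paired with a compensation. On failure, compensations run in reverse
# order for the steps that already committed.

class SagaFailed(Exception):
    pass

def run_saga(steps, ctx):
    """steps: list of (name, action, compensation) triples."""
    completed = []
    for name, action, compensate in steps:
        try:
            action(ctx)
            completed.append((name, compensate))
        except Exception:
            # A later step failed: unwind the committed steps in reverse.
            for _done, comp in reversed(completed):
                comp(ctx)
            raise SagaFailed(f"saga aborted at step '{name}'")

log = []

def reserve(ctx):
    log.append("inventory-reserved")

def release(ctx):
    log.append("inventory-released")

def charge(ctx):
    raise RuntimeError("card declined")   # simulated payment failure

def refund(ctx):
    log.append("payment-refunded")

steps = [
    ("reserve-inventory", reserve, release),
    ("charge-payment", charge, refund),
    ("ship-order", lambda c: log.append("shipped"),
                   lambda c: log.append("shipment-cancelled")),
]

try:
    run_saga(steps, {"order": "o-1"})
except SagaFailed as err:
    print(err)    # saga aborted at step 'charge-payment'

print(log)        # ['inventory-reserved', 'inventory-released']
```

Note what the compensation model gives up: the inventory reservation was briefly visible before being released, so sagas trade isolation for availability. That is the trade-off the exam expects you to accept over two-phase commit.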

A microservices split that follows arbitrary module boundaries instead of true domain seams produces a distributed monolith: services that cannot deploy independently because they share data or are chatty over the network. The symptom is that your p99 latency gets worse after the split. The fix is almost always merge-and-redo with better seam identification, not more debugging. Use call-graph analysis (Microservices Extractor for .NET, manual dependency analysis for Java) to quantify coupling before committing to a split. https://docs.aws.amazon.com/prescriptive-guidance/latest/modernization-decomposing-monoliths/decomposition-patterns.html

Shared Database Is the Silent Coupling

The shared-database anti-pattern is the silent coupling that remains when the team splits the application layer but leaves both services reading and writing the same tables in the same database. Every database schema change now requires coordinating deployments across services, which eliminates the deployment-independence value that motivated the split. The correct pattern is database per service: each extracted service owns its schema, and cross-service data access goes through the owning service's API or through asynchronous events. This is the hardest part of modernization and the most commonly rolled-back decision.

ML/AI Infusion: Pragmatic Modernization with Managed AI Services

A subtle but increasingly common exam theme is ML/AI infusion into modernized applications. The prompt typically describes a manual workflow (customer-service agents reading PDF forms, support staff answering repetitive product questions, a planning team forecasting demand in a spreadsheet) and asks how to modernize it using AWS managed AI services. The exam-correct answers do not require building custom ML models; they use the managed services that deliver the capability as an API.

Amazon Textract for Document Processing

Amazon Textract extracts structured text, tables, and form fields from PDFs, scanned documents, and images. It is the correct modernization target when the scenario mentions "manual keying of invoice data" or "processing paper forms". Textract returns key-value pairs from forms and structured table data; the modernized architecture is typically S3 upload → EventBridge trigger → Textract async job → results to DynamoDB or SNS notification.
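Downstream of the Textract job, the work is turning the Blocks response into usable key-value pairs: KEY blocks carry a VALUE relationship pointing at the value block, and both assemble their text from CHILD word blocks. The sketch below parses a heavily trimmed-down fixture in that shape (a real AnalyzeDocument response adds geometry, confidence scores, and many more block types):

```python
# Sketch of extracting form fields from a Textract-style Blocks
# response. The fixture is an illustrative, simplified example of the
# AnalyzeDocument (FORMS) output shape.

response = {"Blocks": [
    {"Id": "k1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["KEY"],
     "Relationships": [{"Type": "VALUE", "Ids": ["v1"]},
                       {"Type": "CHILD", "Ids": ["w1"]}]},
    {"Id": "v1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["VALUE"],
     "Relationships": [{"Type": "CHILD", "Ids": ["w2", "w3"]}]},
    {"Id": "w1", "BlockType": "WORD", "Text": "Invoice#:"},
    {"Id": "w2", "BlockType": "WORD", "Text": "INV"},
    {"Id": "w3", "BlockType": "WORD", "Text": "1042"},
]}

def form_fields(resp):
    by_id = {b["Id"]: b for b in resp["Blocks"]}

    def text_of(block):
        # Assemble a block's text from its CHILD word blocks.
        words = []
        for rel in block.get("Relationships", []):
            if rel["Type"] == "CHILD":
                words += [by_id[i]["Text"] for i in rel["Ids"]
                          if by_id[i]["BlockType"] == "WORD"]
        return " ".join(words)

    fields = {}
    for block in resp["Blocks"]:
        if block["BlockType"] == "KEY_VALUE_SET" and \
                "KEY" in block.get("EntityTypes", []):
            value_ids = [i for rel in block.get("Relationships", [])
                         if rel["Type"] == "VALUE" for i in rel["Ids"]]
            value_text = " ".join(text_of(by_id[i]) for i in value_ids)
            fields[text_of(block)] = value_text
    return fields

print(form_fields(response))   # {'Invoice#:': 'INV 1042'}
```

This parsing step is typically the Lambda (or Step Functions task) that sits between the async Textract job's completion notification and the DynamoDB write in the architecture described above.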

Amazon Bedrock for Generative AI Infusion

Amazon Bedrock provides managed foundation models (Anthropic Claude, Amazon Titan, Cohere, Meta Llama, Mistral) via a single API with built-in RAG (Retrieval-Augmented Generation) capabilities through Knowledge Bases. It is the correct modernization target when the scenario mentions "support agents answering repetitive product questions" or "customers want a natural language interface to documentation". The modernized architecture is typically user query → Bedrock Agent or Knowledge Base → LLM response grounded in the team's documentation.

Amazon SageMaker for Custom Models

Amazon SageMaker is the right choice when the team has existing data scientists and custom model requirements (demand forecasting, churn prediction, recommendation). SageMaker provides training infrastructure, model hosting endpoints, and MLOps capabilities (Pipelines, Model Registry). The modernized architecture replaces a manual spreadsheet-based forecast with a SageMaker batch transform job that consumes historical data from the data lake and publishes predictions to a downstream service.

The exam rewards answers that use Textract, Comprehend, Rekognition, or Bedrock as API calls — no model training required — over answers that build custom SageMaker models. Only choose SageMaker when the prompt explicitly mentions custom features, proprietary data that cannot be sent to managed APIs, or an existing data science team with ML pipelines. https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html

Scenario Walkthrough: Java Monolith to AWS in 9 Months

Returning to the canonical scenario — 15-year-old Java monolith on WebLogic, 4 TB Oracle database, 9-month deadline, three decoupled services expected at landing — here is the phased application modernization plan mapped to the tools above.

Phase 1 (Months 1–3): Replatform Foundation

Use App2Container to containerize the WebLogic monolith into a Fargate-compatible image. In parallel, use SCT to assess Oracle-to-Aurora PostgreSQL conversion (expect 75% automatic, 25% manual effort on stored procedures). Use DMS full-load-plus-CDC to replicate Oracle to Aurora PostgreSQL. Replace WebLogic session replication with ElastiCache for Redis in cluster mode and update the application's session-access code to use Redis. Land on ECS Fargate with a fronting Application Load Balancer. At month 3, the monolith runs on Fargate, uses Aurora, and has externalized session; the team has not yet split any services.

Phase 2 (Months 4–7): Strangler Fig and First Service Extraction

Provision Migration Hub Refactor Spaces, pointing the default route at the Fargate monolith. Build the new "notifications" service on Lambda + SES + SNS, with its own event contract. Add a Refactor Spaces route for /notifications/* pointing at the Lambda. Wire the monolith to publish "order-placed" events to EventBridge via the transactional outbox pattern (outbox table in Aurora, DMS CDC task replicating to EventBridge via a Lambda proxy). The notifications service consumes EventBridge events and sends customer emails. At month 7, one service is extracted, the monolith still owns catalog and orders, and the team has a proven pattern for the next extractions.
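The transactional outbox deserves a concrete sketch, because it is the load-bearing pattern in this phase: the business write and the outbox row commit in one local transaction, so an event is recorded if and only if the order exists, and a separate relay (the DMS CDC task in the scenario, or a simple poller) publishes the rows afterward. Here sqlite3 stands in for Aurora and a callback stands in for EventBridge PutEvents; table and event names are illustrative:

```python
# Transactional outbox sketch: order row and outbox row commit
# atomically; a relay publishes outbox rows at-least-once afterwards.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, total REAL)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT, "
           "event_type TEXT, payload TEXT, published INTEGER DEFAULT 0)")

def place_order(order_id: str, total: float) -> None:
    with db:  # one atomic transaction covering BOTH writes
        db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))
        db.execute("INSERT INTO outbox (event_type, payload) VALUES (?, ?)",
                   ("order-placed", json.dumps({"orderId": order_id})))

def relay_once(publish) -> int:
    """Publish unpublished outbox rows in order, then mark them."""
    rows = db.execute("SELECT id, event_type, payload FROM outbox "
                      "WHERE published = 0 ORDER BY id").fetchall()
    for row_id, event_type, payload in rows:
        publish(event_type, json.loads(payload))  # e.g. EventBridge PutEvents
        db.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    db.commit()
    return len(rows)

published = []
place_order("o-1", 99.5)
print(relay_once(lambda t, p: published.append((t, p))))   # 1
print(published)   # [('order-placed', {'orderId': 'o-1'})]
```

If the relay crashes between publishing and marking a row, the row is re-published on the next pass, so consumers of the outbox must be idempotent: the pattern gives at-least-once delivery, never exactly-once.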

Phase 3 (Months 7–9): Catalog and Orders Extraction

Identify domain seams by analyzing the Aurora schema for low-coupling cut points (tables with few foreign keys across domains). Extract "catalog" as a new Fargate service with its own Aurora cluster (Aurora Serverless v2 for variable load); migrate the catalog tables via DMS schema-filtered replication; add a Refactor Spaces route for /catalog/*; update the monolith to call the catalog service via internal API. Repeat for "orders". Use AWS Proton templates to ensure each new service inherits the same Fargate-plus-pipeline baseline (CloudWatch alarms, IAM roles, X-Ray tracing). Use the transactional outbox pattern in each new service for cross-service events. At month 9, three services run alongside the shrunken monolith; Refactor Spaces routes traffic; the team has a repeatable template for continued decomposition over the next 12 months.
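The seam-identification step above can be quantified with a simple coupling count: given the schema's foreign-key edges and a proposed table-to-domain mapping, count the FKs that cross each domain boundary and cut where the count is lowest. A sketch with illustrative table names and domains:

```python
# Sketch of seam analysis: count foreign keys that cross a proposed
# domain boundary. Low counts mark candidate cut points for extraction.
from collections import Counter

table_domain = {
    "products": "catalog", "categories": "catalog",
    "orders": "orders", "order_items": "orders",
    "users": "identity",
}

# (child_table, parent_table) foreign-key edges read from the schema
foreign_keys = [
    ("products", "categories"),    # catalog -> catalog  (internal)
    ("order_items", "orders"),     # orders  -> orders   (internal)
    ("order_items", "products"),   # orders  -> catalog  (crosses seam)
    ("orders", "users"),           # orders  -> identity (crosses seam)
]

cross = Counter()
for child, parent in foreign_keys:
    a, b = table_domain[child], table_domain[parent]
    if a != b:
        cross[frozenset((a, b))] += 1

for pair, count in sorted(cross.items(), key=lambda kv: kv[1]):
    print(sorted(pair), count)
# ['catalog', 'orders'] 1
# ['identity', 'orders'] 1
```

Only one FK ties catalog to orders here, so catalog is a cheap first extraction: that single relationship becomes an API call or an event, while the internal catalog FKs move wholesale into the new service's Aurora cluster.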

Why the Sequencing Matters on the Exam

The deliberate sequencing matters: containerize before splitting, externalize session before scaling elastically, migrate the database before introducing services, and introduce the strangler fig facade before extracting the first service. An exam prompt that offers a path that reorders these steps — for example "extract the notifications service as Lambda first, then containerize the monolith" — is the distractor, because extracting a service before the monolith itself is on elastic infrastructure creates a hybrid architecture of synchronous cross-network calls into a non-elastic monolith, which is worse than either the starting state or the target state.

Measuring Modernization: The ROI Conversation

The SAP-C02 exam occasionally frames application modernization as a business decision requiring measurable outcomes. The metrics that matter, and that the exam rewards in answers, are:

  • Deployment frequency: quarterly (monolith) to daily (microservices) — a 90x improvement is typical
  • Change lead time: weeks to hours — the time from code commit to production
  • Mean time to recovery (MTTR): hours to minutes, because a single service failure does not take down the whole monolith
  • Change failure rate: the percentage of deployments that cause production incidents — typically drops from 15–20% in the monolith era to 5–8% in the microservices era due to smaller blast radius per deployment
  • Infrastructure cost: typically 30–50% reduction from replatform (managed services, elastic scale, license elimination); typically neutral-to-higher in early microservices phase (more moving parts) and 20–40% lower at steady state
  • License cost: often the single largest modernization ROI driver when moving off Oracle Database, WebLogic, WebSphere, or SQL Server Enterprise

The common exam framing is "the CFO wants to see 12-month ROI on the modernization program; which metrics demonstrate value?". The correct answer is always the DORA metrics (deployment frequency, lead time, MTTR, change failure rate) plus direct cost reduction, and never purely technical metrics like "number of services extracted" or "percentage of code refactored".

Common Exam Traps

Several specific traps recur in SAP-C02 modernization questions, and identifying them before reading the distractors is worth 5–10 points on exam day.

The Big-Bang Refactor Trap

The big-bang refactor trap presents a 4-month deadline and asks for the best modernization. The distractor is "refactor the monolith into 12 microservices"; the correct answer is "replatform to containers and managed database, defer refactor to later phase".

The A2C-Is-Enough Trap

The A2C-is-enough trap presents a scenario where A2C successfully containerizes the app, and asks what is next. The distractor is "the modernization is complete"; the correct answer involves externalized session, managed database, and a plan for incremental decomposition.

The Shared-Database Trap

The shared-database trap presents a team that has "split" the monolith into three services but is complaining about deployment coupling. The distractor is "add more database connections"; the correct answer is "split the schema into database-per-service and introduce async events for cross-domain state".

The Two-Phase-Commit Trap

The two-phase-commit trap presents a distributed transaction scenario and offers XA transactions or a distributed lock as a solution. The correct answer is always the saga pattern with Step Functions or choreographed EventBridge.

The Synchronous-Call Chain Trap

The synchronous-call trap presents a chain of three microservices where the user request blocks on all three. The distractor is "add more capacity"; the correct answer is "decouple with SQS or EventBridge so only the first hop is synchronous".

The Premature-Kafka Trap

The premature-Kafka trap presents a greenfield event-driven design and offers MSK as the integration service. Unless the prompt explicitly mentions Kafka migration or Kafka Connect requirements, the correct answer is EventBridge for application events or Kinesis for high-throughput ordered streams.

Frequently Asked Questions

When should I choose Refactor Spaces over a hand-rolled ALB-based strangler fig?

Choose Refactor Spaces when the legacy monolith and the new services will live in different AWS accounts, when the cross-account networking (Transit Gateway, VPC peering) is not already in place, or when you need centralized governance of the migration-era facade. A hand-rolled ALB is acceptable when everything lives in one account and the team has existing CI/CD automation for listener rules. The exam favors Refactor Spaces for multi-account enterprise migrations and favors ALB for single-account greenfield scenarios.

What is the right order: containerize first or migrate the database first?

Do them in parallel when possible because they have independent teams and independent risk profiles. If forced to sequence, containerize the application first because it can still connect to the on-prem Oracle database over Direct Connect or VPN during a transition period. Migrating the database first without containerizing the app forces the on-prem application to route across the internet or Direct Connect for every query, which is unacceptable latency for most production workloads. The winning pattern is: containerize in month 1–2, start DMS CDC in month 2, cut over the database in month 3.

Can AWS App2Container modernize a WebLogic application without any code change?

App2Container containerizes without code change, but the containerized application is not fully modernized. Specifically, WebLogic clustered session replication does not work inside containers: a VPC does not support the multicast that classic WebLogic clustering relies on, and Fargate tasks are too ephemeral to serve as stable cluster peers. The team must modify the session-management code to use externalized storage (ElastiCache or DynamoDB) before the containerized application can scale elastically. A2C will produce a running container, but without the session change, that container cannot be safely auto-scaled or rolling-deployed.

Is AWS Proton required for a successful modernization, or is it optional?

Proton is optional but the failure mode it prevents is very real. A modernization that produces 20 microservices, each with its own hand-written CloudFormation, its own logging convention, its own CI/CD pipeline, and its own CloudWatch alarm configuration, is operationally unmaintainable by year two. Proton (or a Terraform-based equivalent golden-path discipline) is the cure. The exam favors Proton answers when the scenario explicitly describes multiple teams deploying services; Proton is not the right answer for a single-team, single-service modernization.

How do I decide between Aurora PostgreSQL, DynamoDB, and Keyspaces as a modernization database target?

The rule of thumb: if the legacy application uses relational semantics (joins, transactions across multiple tables, complex queries), the target is Aurora PostgreSQL or Aurora MySQL. If the access pattern is key-value (lookup by primary key, simple range queries), the target is DynamoDB — but this requires redesigning the data model around access patterns, not just migrating data. If the legacy application uses Cassandra, the target is Keyspaces, with validation that secondary index and counter semantics match. Aurora Serverless v2 is the right subchoice when the relational workload has variable load; provisioned Aurora is the right subchoice for steady-state workloads with Reserved Instance savings.

What is the fastest way to add AI capabilities to a modernized application?

Start with managed AI services that expose capabilities as APIs. Amazon Textract for document extraction, Amazon Comprehend for text analysis, Amazon Rekognition for image and video analysis, and Amazon Bedrock for generative AI. These require no model training, no data science team, and no ML infrastructure. Only escalate to SageMaker when the team has custom features, proprietary training data that cannot be sent to managed services, or regulatory requirements that forbid shared-model APIs. The exam heavily favors managed-API answers for AI infusion scenarios; SageMaker is the distractor unless custom model training is explicitly required.

How do I know when a monolith is ready to be split?

The monolith is ready when it has a stable CI/CD pipeline, observability is in place (CloudWatch, X-Ray, structured logs), the database is in a managed service with room for multiple schemas, and the team has identified at least one domain with clearly bounded data access. If any of these is missing, splitting will fail — usually with the distributed-monolith symptom where the new service is chatty with the monolith and deploys cannot happen independently. Spend phase 1 of modernization making the monolith operationally excellent; only in phase 2 should you extract the first service.

Summary

Application modernization is a distinct discipline from migration, and the SAP-C02 exam rewards candidates who can identify the right rung on the Rehost-Replatform-Refactor ladder for a given scenario. The AWS-native modernization toolbox — App2Container for containerization, Refactor Spaces for incremental strangler fig routing, Microservices Extractor for .NET decomposition analysis, Proton for platform engineering, DMS plus SCT for database modernization, EventBridge plus Kinesis plus MSK for event-driven decoupling, ElastiCache plus DynamoDB for externalized session, and the managed AI services for pragmatic AI infusion — covers every modernization question the exam asks. The canonical 15-year-old Java monolith scenario consolidates these tools into a three-phase 9-month roadmap: replatform foundation in months 1–3, strangler fig plus first service extraction in months 4–7, additional service extractions in months 7–9. Mastering the sequencing, the anti-patterns, and the trap answers — premature splitting, shared database, two-phase commit, big-bang refactor — is the difference between a passing and a distinguished score on task statement 4.4.

When you read an application modernization prompt on exam day, run through the ladder check first (what rung does the business constraint demand), then the toolbox check (which AWS services match the target architecture), then the anti-pattern check (does any distractor advocate a known failure mode). With those three passes, the modernization question becomes a structured decision rather than a memorization exercise, and the path through task 4.4 becomes consistently repeatable.

Official sources