AI use cases on AWS form the practical backbone of the AIF-C01 exam. Task statement 1.2 asks candidates to "Identify practical use cases for AI," and task statement 1.3 asks you to describe the ML development lifecycle; both require you to match a business story to the correct managed AWS AI service. Mastering the catalogue of AI use cases is therefore the highest-leverage preparation you can do for the certification.
This guide walks through every category of AI use case AWS expects you to recognize — computer vision, natural language processing, speech, recommendation systems, forecasting, anomaly detection, personalization, content generation, and code generation — and pins each to the out-of-the-box managed service that solves it. For every AI use case you will learn when to pick a managed AI service, when to lean on Amazon Bedrock foundation models, and when to build a custom model on SageMaker. By the end you will read any AIF-C01 scenario and map it to the correct AWS service within seconds.
What Are AI Use Cases on AWS?
An AI use case on AWS is a concrete business problem that can be solved by applying machine-learning or generative-AI techniques to data using AWS services. The AIF-C01 exam blueprint classifies AI use cases into roughly ten recurring categories, each backed by one or more managed services that remove infrastructure and model-training burden from the customer.
The AWS philosophy for AI use cases is a three-layer stack:
- AI services layer — fully managed APIs (Rekognition, Textract, Comprehend, Polly, Transcribe, Translate, Forecast, Personalize) that solve a single AI use case with one API call and no model training required.
- ML services layer — Amazon SageMaker for building, training, tuning, and deploying custom models when managed AI services do not fit the AI use case.
- Generative AI layer — Amazon Bedrock (foundation-model access), Amazon Q (business and developer assistants), and SageMaker JumpStart for AI use cases that demand text generation, summarization, retrieval-augmented generation, or copilot-style productivity.
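The three-layer routing habit can be sketched as a small helper function. This is an illustrative study aid, not an AWS API; the keyword lists are invented and deliberately incomplete.

```python
def choose_layer(use_case: str) -> str:
    """Route a use-case description to the AWS AI stack layer to try first.
    Keyword lists are illustrative, not an official AWS taxonomy."""
    text = use_case.lower()
    generative = ("generate", "summarize", "chat", "copilot", "rag")
    managed = ("ocr", "sentiment", "translate", "transcribe",
               "text-to-speech", "recommend", "forecast", "moderate")
    if any(k in text for k in generative):
        return "Generative AI layer (Bedrock / Amazon Q)"
    if any(k in text for k in managed):
        return "AI services layer (managed APIs)"
    # Nothing matched a mainstream category: fall through to custom ML.
    return "ML services layer (SageMaker custom model)"
```

The ordering mirrors the exam's decision habit: check for a generative cue, then a managed-service cue, and only then fall back to SageMaker.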
The AIF-C01 exam rarely asks abstract definitions. Instead it presents a scenario — "A call centre wants to analyse customer sentiment from 10,000 recorded calls per day" — and asks which AWS service best solves the AI use case. Recognising the AI use case category instantly narrows the answer to one or two services.
An AI use case is a specific business problem solved by applying artificial-intelligence or machine-learning techniques — computer vision, NLP, speech, forecasting, recommendation, anomaly detection, or content generation — to organizational data. On AWS, most AI use cases are first attempted with a managed AI service; customisation happens in SageMaker; generative AI use cases land in Bedrock or Amazon Q.
Why the AIF-C01 Exam Obsesses Over AI Use Cases
AIF-C01 is the foundational AWS AI certification. The exam blueprint allocates 20% of scored content to Domain 1, "Fundamentals of AI and ML," and another 24% to Domain 2, "Fundamentals of Generative AI." Across both domains, the single most common question shape is: "Given a business scenario describing an AI use case, which AWS service is the most appropriate choice?" Expect 15 to 25 of the 65 questions (50 scored, 15 unscored) to depend directly on AI use case recognition.
How This Topic Connects to the Rest of AIF-C01
The AI use cases conversation is the hub from which most other AIF-C01 topics branch:
- Each managed AI service (Rekognition, Textract, Comprehend, etc.) has its own deep-dive topic covering inputs, outputs, pricing, and limits.
- Amazon Bedrock is the generative AI use case hub for text, image, and multimodal foundation models.
- Amazon Q is the productivity-assistant hub for enterprise search and developer copilots.
- Amazon SageMaker is the custom-model escape hatch when managed services fall short of the AI use case requirements.
- The ML development lifecycle topic explains how custom AI use cases move from data preparation to deployment.
Plain-Language Explanation: AI Use Cases on AWS
The full catalogue of AI use cases can feel overwhelming on first pass. Three analogies make the managed-service mapping intuitive and memorable.
The Swiss Army Knife Analogy
Imagine a Swiss Army knife with one blade per AI use case. The scissors cut paper, the bottle opener opens bottles, the screwdriver turns screws — each blade is purpose-built for one job and does it extremely well. You do not forge your own blade; you flip open the one that matches the task.
AWS AI services are the Swiss Army knife of AI use cases. Rekognition is the computer-vision blade. Textract is the document-extraction blade. Comprehend is the NLP blade. Polly is the text-to-speech blade. Transcribe is the speech-to-text blade. Translate is the translation blade. Forecast is the time-series blade. Personalize is the recommendation blade. When an AIF-C01 scenario describes an AI use case, your first move is to flip open the right blade. Only when no blade fits — because the use case is novel, highly custom, or needs a foundation model — do you step up to Bedrock or SageMaker.
The Kitchen Appliance Analogy
A modern kitchen has a coffee machine, a blender, a toaster, a rice cooker, and a microwave. Each appliance solves one job: coffee, smoothies, toast, rice, reheating. If you want coffee you do not start by buying raw beans, a roaster, and a grinder — you push the button on the coffee machine.
The AI use cases on AWS map identically. If you want document OCR you do not train a custom CNN — you call Textract. If you want sentiment analysis you do not label a million reviews — you call Comprehend. If you want a chatbot you do not fine-tune a transformer from scratch — you call Amazon Bedrock or Amazon Lex. The managed-service-first philosophy is the kitchen-appliance philosophy: pick the specialised appliance before reaching for the raw ingredients. Only when no appliance exists for the dish — say, sous-vide Kobe beef — do you build a custom workflow on SageMaker.
The Toolbox Analogy
A carpenter's toolbox contains a hammer, a saw, a drill, a plane, and a tape measure. Each tool solves one class of task; some jobs need only one tool, while complex furniture requires several in sequence. A call-centre analytics pipeline, for example, chains Transcribe (speech-to-text) → Comprehend (sentiment) → Translate (multilingual) → Polly (voice response): four tools wired together for one business AI use case.
The toolbox analogy clarifies that AI use cases on AWS are composable. The managed services are designed to feed each other: Rekognition's JSON output flows into a Lambda trigger that calls Bedrock for a natural-language summary; Textract's extracted fields become input to a SageMaker custom classifier. Recognising the composition pattern is a frequent AIF-C01 exam cue.
Which Analogy to Use on Exam Day
All three analogies describe the same catalogue of AI use cases from different angles. Pick the one that matches the question wording:
- Scenario about picking a single managed service for a single use case → Swiss Army knife analogy
- Scenario about managed-service-first vs custom model → kitchen appliance analogy
- Scenario about chaining multiple services together → toolbox analogy
Computer Vision AI Use Cases
Computer vision is the largest single family of AI use cases on AWS, covering everything from image classification to video moderation. The dominant managed service is Amazon Rekognition, with Amazon Textract specialised for documents.
Image Classification
AI use case: assign a label to an image — "dog," "beach," "invoice," "X-ray." AWS managed service: Amazon Rekognition (DetectLabels API) returns thousands of pre-trained labels out of the box. For custom labels that Rekognition does not know (a specific brand of machine part, a rare species of plant), use Rekognition Custom Labels, which fine-tunes on as few as 10 labelled images. When neither option fits — for example, medical-grade radiology classification — train a custom convolutional model on Amazon SageMaker.
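To make the DetectLabels pattern concrete, here is a minimal sketch of filtering a response by confidence. The dictionary mimics the documented shape of a DetectLabels result (a `Labels` list with `Name` and `Confidence`); the label names and scores are invented sample data, and no AWS call is made.

```python
# A response shaped like Rekognition DetectLabels output (sample values invented).
sample_response = {
    "Labels": [
        {"Name": "Dog", "Confidence": 98.7},
        {"Name": "Beach", "Confidence": 91.2},
        {"Name": "Umbrella", "Confidence": 62.5},
    ]
}

def labels_above(response: dict, min_confidence: float) -> list[str]:
    """Keep only labels at or above the confidence threshold."""
    return [lbl["Name"] for lbl in response["Labels"]
            if lbl["Confidence"] >= min_confidence]

high_confidence = labels_above(sample_response, 90.0)  # ["Dog", "Beach"]
```

Thresholding on `Confidence` is the standard post-processing step: exam scenarios that mention "only act on high-confidence labels" are pointing at exactly this filter.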
Object Detection
AI use case: locate and box specific objects inside an image or video frame ("count every car in the parking lot"). Amazon Rekognition returns bounding boxes alongside labels in the same API response. This AI use case is heavily tested when scenarios mention "counting" or "locate within the image."
Facial Analysis and Face Comparison
AI use case: identify whether two photos depict the same person, detect age range, or search a library of faces for a match. Amazon Rekognition offers CompareFaces, SearchFacesByImage, and IndexFaces APIs. Common business scenarios include employee badge verification, photo-library deduplication, and missing-person searches.
Content Moderation
AI use case: flag images or videos containing violence, nudity, weapons, or hate symbols before they reach end users. Amazon Rekognition's DetectModerationLabels and video equivalent return confidence scores across a standard taxonomy. Social media, dating apps, and marketplaces are typical customers.
Optical Character Recognition (OCR) and Document Extraction
AI use case: pull structured data — names, dates, line items — out of scanned documents, PDFs, and photographs. The managed service here is Amazon Textract, which goes beyond raw OCR by understanding forms, tables, and key-value pairs. Combined with Amazon Comprehend for downstream text understanding, Textract covers nearly every invoice, tax form, and mortgage-application AI use case on the exam.
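A rough sketch of the downstream step: Textract's real response is a graph of `Block` objects (including KEY_VALUE_SET blocks), which this example assumes has already been resolved into plain (key, value) string pairs. The field names and values below are invented.

```python
# Simplified: assume Textract's KEY_VALUE_SET blocks have already been
# resolved into (key, value) string pairs (the real API returns a Block graph).
extracted_pairs = [
    ("Applicant Name", "Jane Doe"),
    ("Invoice Date", "2024-03-01"),
    ("Total", "$1,250.00"),
]

def to_record(pairs):
    """Normalize key names and build a flat record for downstream systems."""
    return {k.strip().lower().replace(" ", "_"): v for k, v in pairs}

record = to_record(extracted_pairs)
```

Normalizing keys this way is what lets Textract output feed cleanly into Comprehend, a database, or a SageMaker model in the pipelines discussed later.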
Video Analysis
AI use case: detect scene changes, identify celebrities, moderate content, or extract text across a long video. Amazon Rekognition Video operates asynchronously on S3-hosted videos and publishes results through Amazon SNS. Sports highlight extraction and media archive search are canonical examples.
When an AIF-C01 scenario describes extracting printed or handwritten text from a document, the answer is Amazon Textract, not Amazon Rekognition. Rekognition can detect text in images (DetectText) but Textract is the purpose-built service for forms, tables, invoices, and multi-page PDFs. Picking the wrong one is among the top-cited AI use case traps.
Natural Language Processing AI Use Cases
NLP AI use cases are the second-largest family on AIF-C01, spanning sentiment, entity extraction, translation, summarization, and chatbots. The managed-service story splits between Amazon Comprehend (analysis), Amazon Translate (translation), Amazon Lex (conversational bots), and increasingly Amazon Bedrock (generative text).
Sentiment Analysis
AI use case: determine whether a piece of text is positive, negative, neutral, or mixed. Amazon Comprehend's DetectSentiment API returns the dominant label plus confidence scores. Typical scenarios: customer-review triage, call-centre call-wrap summaries, social-media brand monitoring. For highly domain-specific sentiment (medical, financial), fine-tune a custom model on SageMaker or use a Bedrock foundation model with few-shot prompts.
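The review-triage scenario can be sketched against the documented shape of a DetectSentiment response (a `Sentiment` label plus a `SentimentScore` map); the scores below are invented, and no Comprehend call is made.

```python
# Shaped like a Comprehend DetectSentiment response (scores invented).
sample = {
    "Sentiment": "NEGATIVE",
    "SentimentScore": {"Positive": 0.02, "Negative": 0.91,
                       "Neutral": 0.05, "Mixed": 0.02},
}

def needs_escalation(response: dict, threshold: float = 0.8) -> bool:
    """Flag text whose dominant sentiment is NEGATIVE with high confidence."""
    return (response["Sentiment"] == "NEGATIVE"
            and response["SentimentScore"]["Negative"] >= threshold)
```

Checking both the label and its confidence score is the usual pattern, since a "NEGATIVE" label with a 0.4 score should not trigger the same workflow as one at 0.9.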
Entity Recognition and Key Phrase Extraction
AI use case: pull out people, organizations, locations, dates, and quantities from unstructured text — "who" and "what" questions. Amazon Comprehend covers standard entities out of the box; Amazon Comprehend Custom Entity Recognition handles domain-specific entities like drug names or contract clauses.
Topic Modelling
AI use case: discover the themes present across a large corpus — "what topics do customers write to support about?" Amazon Comprehend's StartTopicsDetectionJob performs unsupervised topic modelling across documents in S3.
Language Detection
AI use case: identify which language a piece of text is written in. Amazon Comprehend's DetectDominantLanguage covers 100+ languages and is frequently the first step in a multilingual pipeline before routing to Translate.
Machine Translation
AI use case: translate text between languages at scale. Amazon Translate supports 75+ languages with a single API call and offers real-time and batch modes. Custom terminology lists let brands enforce consistent translation of product names. For document translation preserving formatting, Translate integrates with Textract and S3.
Text Summarization
AI use case: reduce a long article, report, or meeting transcript to a short abstract. For years this required a custom SageMaker model; today the first-choice AWS service for summarization is Amazon Bedrock with a foundation model like Anthropic Claude, Amazon Nova, or Mistral. A single prompt produces extractive or abstractive summaries in any supported language. Summarization is one of the most-tested generative AI use cases on AIF-C01.
Chatbots and Conversational Agents
AI use case: answer user questions in natural language through a messaging interface. The classical AWS service is Amazon Lex — intent-based, slot-filled, tightly integrated with AWS Lambda and Amazon Connect for contact-centre automation. The generative alternative is Agents for Amazon Bedrock, or Amazon Q Business for enterprise-knowledge chat. The exam tests both: rule-based intent bots (Lex) vs free-form conversational agents with retrieval-augmented generation (Bedrock / Amazon Q).
Question Answering and Document Search
AI use case: answer free-form questions over a corpus of enterprise documents. Amazon Kendra is the managed intelligent-search service that returns ranked passages from indexed documents. Amazon Q Business layers a generative AI assistant on top of Kendra-style connectors to give conversational answers with citations. Choose Kendra for search-box experiences, Amazon Q for chat-style productivity.
Free-form summarization, open-ended chat, and "explain this in plain English" AI use cases belong to Amazon Bedrock, not Amazon Comprehend. Comprehend is for structured NLP analysis (sentiment, entities, key phrases). If the scenario says "generate" or "summarize," think Bedrock.
Speech AI Use Cases
Speech AI use cases split cleanly into two directions: speech-to-text (Amazon Transcribe) and text-to-speech (Amazon Polly). The exam tests both plus the call-analytics extension.
Automatic Speech Recognition (Speech-to-Text)
AI use case: convert recorded or streaming audio into text. Amazon Transcribe supports 100+ languages, real-time streaming, speaker diarization, custom vocabulary, and automatic redaction of personally identifiable information. Typical scenarios: meeting transcription, podcast captions, voicemail indexing, compliance archiving.
Call Analytics
AI use case: derive full call-centre insights — transcript, sentiment, talk-time ratio, silent-time, non-talk time, categories — from customer-agent conversations. Amazon Transcribe Call Analytics is the purpose-built extension, delivering a single JSON bundle per call. Chaining Transcribe Call Analytics → Comprehend → a Bedrock summary is a canonical call-centre AI use case pipeline.
Medical Transcription
AI use case: produce a clinician-grade transcript of a patient encounter with correct medical vocabulary. Amazon Transcribe Medical is HIPAA-eligible and recognises pharmacological and procedural terms out of the box.
Text-to-Speech (TTS)
AI use case: generate natural-sounding spoken audio from written text for IVR prompts, audiobooks, accessibility readers, and virtual assistants. Amazon Polly offers dozens of voices across 40+ languages, including Neural TTS (NTTS) and newer Generative voices. Speech Synthesis Markup Language (SSML) lets developers fine-tune pronunciation, pauses, and emphasis.
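A small sketch of building an SSML prompt string. The `<speak>`, `<break>`, and `<emphasis>` tags are standard SSML elements that Polly accepts (when the request sets the SSML text type); the greeting text and helper name are invented.

```python
def ivr_prompt(name: str, wait_ms: int = 400) -> str:
    """Build an SSML IVR prompt with a deliberate pause and emphasis.
    Hypothetical helper: constructs the markup only, makes no Polly call."""
    return (f"<speak>Hello {name}.<break time='{wait_ms}ms'/>"
            "Press <emphasis>one</emphasis> for billing.</speak>")

prompt = ivr_prompt("Ada")
```

SSML control of pauses and emphasis is the exam cue that separates "just read this text aloud" scenarios from "fine-tune how it sounds" scenarios.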
Voice Cloning and Brand Voices
AI use case: produce a custom voice that matches a brand personality. Amazon Polly Brand Voice is an engagement in which AWS builds an exclusive NTTS voice for a customer, recorded with a professional voice actor. On the exam, Brand Voice is the correct answer when the scenario emphasizes "unique brand" and "not a generic voice."
Recommendation and Personalization AI Use Cases
Personalized experiences are a flagship AI use case for modern digital products. AWS offers Amazon Personalize, a fully managed service that builds recommendation models without requiring the customer to understand collaborative filtering.
Product Recommendations
AI use case: recommend the next product a customer is most likely to buy based on browsing and purchase history. Amazon Personalize ingests user-item interaction data, trains a model using recipes like User-Personalization, and serves real-time recommendations through a campaign endpoint.
Personalized Ranking
AI use case: reorder an arbitrary list — search results, category pages, email newsletters — so items most likely to resonate with the specific user appear first. The Personalize Personalized-Ranking recipe is purpose-built for this.
Related Items (Similar Items)
AI use case: show customers "people who viewed this also viewed." The Personalize Similar-Items recipe (alongside the legacy SIMS recipe) handles it without requiring a user history.
Content Personalization for Media
AI use case: personalize a streaming homepage, article feed, or video-on-demand catalogue. Personalize handles implicit feedback (plays, skips) as well as explicit feedback (ratings). Many media customers chain it with Amazon Kinesis Data Streams for real-time interaction ingestion.
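Personalize hides the algorithm entirely, but a toy co-occurrence counter builds intuition for what a "similar items" recipe learns from interaction data. The session histories below are invented sample data; this is a teaching sketch, not how Personalize actually trains.

```python
from collections import Counter

# Toy interaction histories (one list of viewed items per session; invented data).
histories = [
    ["tent", "stove", "lantern"],
    ["tent", "lantern"],
    ["stove", "pan"],
]

def similar_items(item: str, histories) -> list[str]:
    """Rank items by how often they co-occur with `item` in a session."""
    counts = Counter()
    for session in histories:
        if item in session:
            counts.update(i for i in session if i != item)
    return [i for i, _ in counts.most_common()]
```

For `"tent"`, lantern co-occurs twice and stove once, so lantern ranks first. Real recipes generalise far beyond raw co-occurrence, which is exactly why the managed service is the exam answer.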
AI use case → AWS service quick map:
- Image classification / object detection / moderation → Amazon Rekognition
- Document OCR and form extraction → Amazon Textract
- Sentiment / entities / key phrases → Amazon Comprehend
- Language translation → Amazon Translate
- Speech-to-text / call transcripts → Amazon Transcribe
- Text-to-speech / IVR voices → Amazon Polly
- Recommendations / personalized ranking → Amazon Personalize
Forecasting AI Use Cases
Time-series forecasting is a distinct AI use case family with its own managed service and specific exam cues.
Demand Forecasting
AI use case: predict retail demand for SKUs, travel-booking volume, or energy consumption over future time windows. Amazon Forecast accepts a historical target time series plus optional related time series and item metadata, then trains with a chosen algorithm (DeepAR+, CNN-QR, Prophet, NPTS, ARIMA, ETS) or uses AutoML to select among them. Forecast returns probabilistic forecasts (P10, P50, P90) so planners can reason about risk.
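The P10/P50/P90 quantiles are worth a worked example, since "which quantile do I stock to?" is a common exam angle. The forecast values below are invented; the arithmetic is the point.

```python
# Probabilistic forecast for one SKU over three days (values invented).
forecast = [
    {"p10": 80, "p50": 100, "p90": 130},
    {"p10": 70, "p50": 95,  "p90": 125},
    {"p10": 90, "p50": 110, "p90": 145},
]

def stock_for_quantile(forecast, quantile: str = "p90") -> int:
    """Sum the chosen quantile across the horizon: stocking at p90 means
    actual demand is expected to exceed stock only ~10% of the time."""
    return sum(day[quantile] for day in forecast)

conservative = stock_for_quantile(forecast, "p90")  # 400 units
median = stock_for_quantile(forecast, "p50")        # 305 units
```

A retailer who cannot afford stock-outs plans against P90; one minimising inventory cost plans nearer P50 or P10.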
Financial Forecasting
AI use case: predict revenue, expenses, or headcount. Amazon Forecast is appropriate when the input is structured time-series data; for free-form financial analysis, Bedrock foundation models with tool use become relevant.
Capacity and Resource Forecasting
AI use case: predict cloud infrastructure demand, call-centre staffing needs, or delivery-fleet volume. Forecast is the first-choice managed service; SageMaker custom models apply only when specialised algorithms or exotic seasonalities are required.
Note: AWS has signalled deprecation of some Forecast features in favour of SageMaker Canvas forecasting and Bedrock-based approaches. For AIF-C01, Amazon Forecast remains the canonical named service for time-series AI use cases unless the exam question explicitly scopes to SageMaker Canvas.
Anomaly Detection AI Use Cases
Anomaly detection is a specialised AI use case for spotting outliers in business metrics, logs, infrastructure, and industrial equipment.
Business Metric Anomalies
AI use case: automatically flag unusual drops in revenue, conversion rate, or traffic before a human notices. Amazon Lookout for Metrics (on a maintenance trajectory; check current AWS announcements) was designed for this; modern implementations often use Amazon SageMaker Canvas forecasting with prediction intervals or Amazon CloudWatch Anomaly Detection for operational metrics.
Industrial Equipment Anomalies
AI use case: detect failing motors, pumps, or compressors from sensor data. Amazon Lookout for Equipment consumes multi-sensor time-series data and learns normal operating patterns, alerting when deviations occur. The industrial IoT pipeline pairs it with AWS IoT SiteWise.
Fraud Detection
AI use case: flag suspicious transactions or account takeovers in real time. Amazon Fraud Detector is the managed service that trains a custom model from historical fraud examples and offers purpose-built event types (online fraud, new account fraud, account takeover). For ultra-custom fraud logic, SageMaker with XGBoost or a custom deep-learning model remains the escape hatch.
Vision-Based Anomalies
AI use case: detect defective parts on a production line from images. Amazon Lookout for Vision learns the appearance of good units from as few as 30 images and flags anomalies in new photos. Pair with Rekognition Custom Labels when the defect classes are known ahead of time.
Do not confuse anomaly detection with classification. Anomaly detection learns what "normal" looks like and flags deviations without pre-labelled anomaly examples. Classification requires labelled examples of every class. If the AIF-C01 scenario says "we have no historical examples of failures" or "we want to find rare, unknown events," the answer is an anomaly-detection service, not a classifier. Picking Rekognition Custom Labels for a "detect never-before-seen defects" scenario is a classic trap.
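The "learn normal, flag deviations" idea can be shown with the simplest possible detector: a z-score test fitted on unlabeled healthy readings. The sensor values are invented, and the Lookout services use far more sophisticated models; the sketch only illustrates why no labelled failure examples are needed.

```python
import statistics

def fit_normal(values):
    """Learn 'normal' from unlabeled healthy readings: mean and stdev."""
    return statistics.mean(values), statistics.stdev(values)

def is_anomaly(x: float, mean: float, stdev: float, z: float = 3.0) -> bool:
    """Flag any reading more than z standard deviations from normal."""
    return abs(x - mean) > z * stdev

# Fit on healthy-only data: note there is not a single failure example here.
mean, stdev = fit_normal([10.0, 10.2, 9.9, 10.1, 10.0])
```

Contrast with classification, which would need labelled examples of every failure mode before it could flag anything.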
Generative AI Use Cases
Generative AI use cases — producing new text, images, code, or other content — are the fastest-growing exam area on AIF-C01. The three flagship services are Amazon Bedrock (foundation-model platform), Amazon Q (ready-made assistants), and SageMaker JumpStart (model hub for custom deployment).
Text Generation
AI use case: draft emails, marketing copy, product descriptions, or narrative reports. Amazon Bedrock provides on-demand access to foundation models from Anthropic (Claude), Amazon (Nova and Titan), Meta (Llama), Mistral, Cohere, and AI21. Customers invoke models via a unified InvokeModel API without managing GPUs.
Summarization and Abstractive Compression
AI use case: compress long documents, meeting transcripts, or research papers into short summaries. Bedrock foundation models handle both extractive and abstractive summarization; prompt templates plus temperature settings tune style and creativity.
Retrieval-Augmented Generation (RAG)
AI use case: answer questions grounded in enterprise documents without fine-tuning a model. Knowledge Bases for Amazon Bedrock automates the RAG workflow: connect to S3, let Bedrock chunk and embed documents into a vector store (Amazon OpenSearch Serverless, Aurora, Pinecone, Redis), and expose a retrieval API. The model generates answers citing source passages. RAG is the single most-tested generative AI use case pattern on AIF-C01.
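The retrieval half of RAG can be demonstrated with a toy "embedding" of word counts and cosine similarity. Real Knowledge Bases use learned vector embeddings and a managed vector store; the two policy snippets below are invented stand-ins for enterprise documents.

```python
from collections import Counter
import math

# Invented document chunks standing in for an enterprise corpus.
docs = {
    "leave-policy": "employees accrue paid leave monthly",
    "expense-policy": "submit expense reports within thirty days",
}

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(question: str) -> str:
    """Return the chunk most similar to the question (toy embedding =
    word counts; real RAG uses learned embeddings)."""
    q = Counter(question.lower().split())
    return max(docs, key=lambda name: cosine(q, Counter(docs[name].split())))
```

In full RAG, the retrieved chunk is then placed into the foundation model's prompt so the generated answer can cite it.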
Conversational Assistants
AI use case: deploy a multi-turn chatbot that can call tools, look up data, and complete tasks. Agents for Amazon Bedrock orchestrate foundation models plus action groups (Lambda functions) and knowledge bases. Amazon Q Business is the turnkey option for internal enterprise chat — connect to SharePoint, Salesforce, Jira, Confluence, and dozens of other connectors.
Image Generation
AI use case: produce images from text prompts for marketing, product design, or creative tools. Amazon Bedrock offers image-generation foundation models including Amazon Titan Image Generator, Stability AI Stable Diffusion, and Amazon Nova Canvas. Common use cases: e-commerce product visualization, advertising creative, localized marketing assets.
Code Generation and Developer Productivity
AI use case: write, explain, debug, and refactor software code; generate unit tests; answer documentation questions; suggest infrastructure-as-code templates. Amazon Q Developer (successor to CodeWhisperer) is the purpose-built managed service, integrating into IDEs (VS Code, JetBrains, Visual Studio), the AWS Management Console, and CLI. It offers inline code suggestions, security scans, and an agentic mode for multi-file feature implementation. Exam scenarios mentioning "developer assistant in the IDE" map to Amazon Q Developer.
Enterprise Knowledge Assistants
AI use case: let employees ask questions about internal company documents, policies, and data. Amazon Q Business aggregates data from connectors, applies user-level permissions (ACL-aware retrieval), and answers via a conversational interface with citations. Amazon Q Business is the correct answer when the scenario emphasizes "internal employees," "respect document-level permissions," and "citations to source documents."
Custom Fine-Tuning and Continued Pre-Training
AI use case: adapt a foundation model to a specific domain style, vocabulary, or task when prompt engineering alone is insufficient. Amazon Bedrock supports custom-model fine-tuning for selected base models; SageMaker JumpStart offers deeper control for teams with their own training data. The decision criterion is usually latency-sensitive, proprietary-vocabulary, or cost-optimized workloads.
For generative AI use cases on AIF-C01, memorize this decision order:
- Amazon Q Business for internal enterprise chat over company data.
- Amazon Q Developer for IDE coding assistance.
- Knowledge Bases + Agents for Amazon Bedrock for custom generative applications.
- Bedrock Fine-Tuning when prompt engineering plus RAG do not meet the quality bar.
- SageMaker JumpStart when you need full control over model weights and hosting.
Skipping directly to SageMaker when an Amazon Q service would suffice is a frequent exam trap.
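The decision order above can be expressed as a keyword-routing sketch. The trigger keywords are invented study cues, not an official rubric; the point is that SageMaker is the default only after every earlier branch fails.

```python
def generative_service(scenario: str) -> str:
    """Walk the AIF-C01 generative decision order top-down.
    Keyword triggers are illustrative exam cues, not an AWS rubric."""
    s = scenario.lower()
    if "employee" in s or "company data" in s:
        return "Amazon Q Business"
    if "ide" in s or "coding" in s:
        return "Amazon Q Developer"
    if "rag" in s or "knowledge base" in s or "agent" in s:
        return "Knowledge Bases + Agents for Amazon Bedrock"
    if "fine-tun" in s:
        return "Amazon Bedrock fine-tuning"
    # Only reached when no higher layer fits the scenario.
    return "SageMaker JumpStart"
```

Reading a scenario in this order, rather than jumping straight to SageMaker, is the habit the exam rewards.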
When to Use Managed AI Services vs Custom Models
One of the most consequential AI use case decisions is whether to use a managed AI service, a foundation model on Bedrock, or a custom model on SageMaker. AIF-C01 tests this decision repeatedly.
Prefer a Managed AI Service When…
- The AI use case matches a mainstream category (vision, NLP, speech, translation, recommendation, forecasting).
- You do not have a labelled dataset — or you have one but don't want to curate it.
- Time-to-market is days or weeks, not months.
- Your team lacks dedicated ML engineers.
- The accuracy offered by the managed service is "good enough" for the business requirement.
Prefer Amazon Bedrock When…
- The AI use case needs text generation, summarization, chat, or image generation.
- You want to experiment with multiple foundation models without managing GPUs.
- You need RAG over enterprise documents.
- Prompt engineering plus optional fine-tuning will meet quality targets.
Prefer Amazon Q When…
- The AI use case is an end-user productivity assistant (business chat, IDE coding).
- You value a turnkey UI and connectors more than customization depth.
Prefer Amazon SageMaker When…
- No managed AI service covers the AI use case (medical imaging grading, industrial quality control beyond Lookout's scope, proprietary research).
- You have labelled training data and ML expertise in-house.
- You need tight control over model architecture, hyperparameters, and deployment environment.
- Latency, cost, or regulatory constraints require you to host and own the model.
Decision Summary Table
| AI Use Case | Managed AI Service | Generative Option | Custom Fallback |
|---|---|---|---|
| Image labelling | Rekognition | Bedrock multimodal | SageMaker |
| Document extraction | Textract | Bedrock + Textract | SageMaker |
| Sentiment analysis | Comprehend | Bedrock few-shot | SageMaker |
| Translation | Translate | Bedrock | SageMaker |
| Speech-to-text | Transcribe | — | SageMaker |
| Text-to-speech | Polly | Bedrock (some voices) | SageMaker |
| Recommendations | Personalize | — | SageMaker |
| Forecasting | Forecast / SageMaker Canvas | — | SageMaker |
| Anomaly detection | Lookout family, Fraud Detector | — | SageMaker |
| Summarization / chat | — | Bedrock, Amazon Q | SageMaker JumpStart |
| Coding assistance | — | Amazon Q Developer | — |
Key Numbers and Limits to Memorize
AIF-C01 rewards candidates who know a handful of canonical numbers tied to AI use cases.
AIF-C01 cheat numbers for AI use cases:
- 3 — high-level layers of the AWS AI/ML stack: AI services, ML services (SageMaker), generative AI (Bedrock / Amazon Q).
- 10 — mainstream AI use case categories tested: image, document, text, translation, speech-to-text, text-to-speech, recommendation, forecasting, anomaly detection, generative.
- 75+ — languages supported by Amazon Translate.
- 100+ — languages supported by Amazon Transcribe.
- 40+ — languages supported by Amazon Polly.
- 30+ — pre-trained label categories used by Rekognition content moderation.
- AIF-C01 exam — 65 questions (50 scored, 15 unscored), 90 minutes, passing score 700/1000, fee USD 100, validity 3 years.
Common Exam Traps: AI Use Case Service Selection
Misidentifying the correct service is the single biggest mistake AIF-C01 candidates make on AI use case questions. The traps below recur in community exam reports.
Trap 1: Rekognition vs Textract for Documents
Candidates see "extract text from image" and instinctively pick Rekognition. For documents — invoices, forms, contracts, IDs — the correct answer is almost always Amazon Textract. Rekognition's DetectText is appropriate only for short snippets in natural scenes (street signs, product labels in photos).
Trap 2: Comprehend vs Bedrock for Text Tasks
Both services operate on text, but they serve different AI use cases. Comprehend is structured analysis (sentiment label, entity list, topic clusters). Bedrock is generative output (free-form paragraphs, summaries, chat responses). Scenario keywords: "classify / detect / extract" → Comprehend. "Generate / summarize / rewrite" → Bedrock.
Trap 3: Lex vs Bedrock Agents for Chatbots
Amazon Lex is intent-based, slot-filled, and deterministic — ideal for contact-centre IVR flows with clearly defined intents. Agents for Amazon Bedrock are LLM-powered, multi-turn, and tool-using — ideal for open-ended assistants. Scenario keywords: "press 1 for billing / fixed call flow" → Lex. "Answer any question about our policy documents" → Bedrock with Knowledge Bases.
Trap 4: Personalize vs Custom Collaborative Filtering
Candidates with ML backgrounds sometimes default to "train a collaborative filter on SageMaker." For mainstream recommendation AI use cases the correct AIF-C01 answer is Amazon Personalize — the managed service abstracts the entire training and serving pipeline. Pick SageMaker only when the scenario explicitly demands a novel algorithm that Personalize's recipes do not cover.
Trap 5: Forecast vs SageMaker Canvas vs Bedrock for Prediction
Time-series prediction with historical numerical data → Amazon Forecast or SageMaker Canvas. Free-form "predict next quarter's revenue narrative" → Bedrock foundation models. Image-based quality predictions → Lookout for Vision or SageMaker. Picking Bedrock for a structured time-series AI use case is a trap in the opposite direction.
Trap 6: Amazon Q Business vs Amazon Q Developer
Both are "Amazon Q" but target distinct AI use cases. Q Business is the enterprise productivity assistant for all employees over company documents. Q Developer is the coding copilot in the IDE. The exam often offers both as distractors in the same question; read the scenario carefully for "employees" vs "developers."
The "sounds AI-ish therefore SageMaker" trap. Candidates with little AWS exposure see the word "AI" in a scenario and reach for SageMaker. SageMaker is rarely the correct first-choice answer for mainstream AI use cases on AIF-C01 — it is the escape hatch when managed services or Bedrock cannot meet the requirement. For every scenario, walk down the decision order: managed AI service → Amazon Q → Bedrock → SageMaker. Only if each prior layer fails do you step down.
Composing AI Use Cases into End-to-End Pipelines
Real AWS workloads often chain multiple AI use cases together. AIF-C01 tests pipeline composition through scenarios that describe a full business workflow.
Call-Centre Analytics Pipeline
A contact-centre scenario typically combines: Amazon Connect (voice capture) → Amazon Transcribe Call Analytics (speech-to-text plus call insights) → Amazon Comprehend (sentiment, entities, PII redaction) → Amazon Translate (multilingual normalization) → Amazon Bedrock (call summary and next-best-action suggestion) → Amazon Polly (optional spoken response). Each hop adds a specific AI use case capability.
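The hops in such a pipeline can be modelled as composed stages. Each stage below is a mocked pure function standing in for the named AWS service; the transcript, sentiment, and summary values are invented.

```python
# Each stage mocks one AWS service in the call-centre pipeline; in production
# each would be a real API call (Transcribe, Comprehend, Bedrock, ...).
def transcribe(audio: bytes) -> dict:
    return {"transcript": "my order never arrived"}   # mocked Transcribe output

def analyze(payload: dict) -> dict:
    payload["sentiment"] = "NEGATIVE"                  # mocked Comprehend output
    return payload

def summarize(payload: dict) -> dict:                  # mocked Bedrock summary
    payload["summary"] = f"Caller ({payload['sentiment']}): {payload['transcript']}"
    return payload

def run_pipeline(audio: bytes, stages) -> dict:
    """Thread one call's data through each stage in order."""
    result = audio
    for stage in stages:
        result = stage(result)
    return result

case = run_pipeline(b"fake-audio", [transcribe, analyze, summarize])
```

The composition itself, each service enriching the previous service's output, is what multi-service exam questions are testing.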
Document Processing Pipeline
A loan-application scenario typically chains: Amazon S3 (document intake) → Amazon Textract (OCR and form extraction) → Amazon Comprehend (entity recognition for name, address, income) → Amazon Bedrock (summarization, missing-field detection) → SageMaker (custom credit-scoring model, if required) → Amazon Q Business (analyst chat over the processed case file).
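The Textract step in this chain returns key-value pairs as linked blocks rather than a flat dictionary. The example below parses a hand-built mock that is simplified but shaped like the real AnalyzeDocument response (KEY_VALUE_SET blocks linked to WORD blocks through Relationships); the block contents are invented.

```python
# Minimal parser for the key-value structure Textract's AnalyzeDocument
# returns. MOCK_RESPONSE is a hand-built, simplified mock of that shape.

MOCK_RESPONSE = {
    "Blocks": [
        {"Id": "k1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["KEY"],
         "Relationships": [{"Type": "VALUE", "Ids": ["v1"]},
                           {"Type": "CHILD", "Ids": ["w1"]}]},
        {"Id": "v1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["VALUE"],
         "Relationships": [{"Type": "CHILD", "Ids": ["w2", "w3"]}]},
        {"Id": "w1", "BlockType": "WORD", "Text": "Name:"},
        {"Id": "w2", "BlockType": "WORD", "Text": "Jane"},
        {"Id": "w3", "BlockType": "WORD", "Text": "Doe"},
    ]
}

def extract_key_values(response):
    blocks = {b["Id"]: b for b in response["Blocks"]}

    def words(block):
        # Join the WORD children of a KEY_VALUE_SET block into one string.
        ids = [i for r in block.get("Relationships", [])
               if r["Type"] == "CHILD" for i in r["Ids"]]
        return " ".join(blocks[i]["Text"] for i in ids)

    pairs = {}
    for b in response["Blocks"]:
        if b["BlockType"] == "KEY_VALUE_SET" and "KEY" in b.get("EntityTypes", []):
            value_ids = [i for r in b["Relationships"]
                         if r["Type"] == "VALUE" for i in r["Ids"]]
            pairs[words(b)] = " ".join(words(blocks[i]) for i in value_ids)
    return pairs

print(extract_key_values(MOCK_RESPONSE))  # {'Name:': 'Jane Doe'}
```

The extracted pairs are what the downstream Comprehend and Bedrock steps consume.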
Content Moderation Pipeline
A user-generated-content platform typically chains: Amazon S3 (upload) → Amazon Rekognition (image moderation) / Amazon Transcribe (audio transcription) → Amazon Comprehend (text toxicity) → Amazon Bedrock (context-aware review) → human-in-the-loop via Amazon Augmented AI (A2I) for edge cases.
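The human-in-the-loop hand-off is usually a confidence-threshold decision. The sketch below triages a mock shaped like Rekognition's DetectModerationLabels response; the thresholds are illustrative policy choices, not AWS defaults.

```python
# Triage logic over a mock shaped like Rekognition's
# DetectModerationLabels response. Thresholds are illustrative only.

MOCK_LABELS = {"ModerationLabels": [
    {"Name": "Suggestive", "Confidence": 62.0, "ParentName": ""},
    {"Name": "Weapons", "Confidence": 91.5, "ParentName": "Violence"},
]}

def triage(response, block_at=90.0, review_at=50.0):
    """Auto-block high-confidence hits, route mid-confidence hits to a
    human reviewer (the A2I step in the pipeline), pass the rest."""
    worst = max((label["Confidence"] for label in response["ModerationLabels"]),
                default=0.0)
    if worst >= block_at:
        return "block"
    if worst >= review_at:
        return "human_review"
    return "allow"

print(triage(MOCK_LABELS))  # block
```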
E-Commerce Personalization Pipeline
A retailer typically chains: Amazon Kinesis Data Streams (click-stream ingestion) → Amazon Personalize (real-time recommendations) → Amazon Bedrock (personalized product description generation) → Amazon Translate (localized output) → Amazon Polly (audio catalogue).
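The retail chain can be sketched the same way: a mock Personalize GetRecommendations response (an itemList of itemIds) feeds a stubbed Bedrock description generator and a stubbed Translate localization step. The catalogue lookup and stub outputs are hypothetical.

```python
# Sketch of the e-commerce chain with stand-ins; only the shape of the
# Personalize response ({"itemList": [{"itemId": ...}]}) mirrors the API.

MOCK_RECS = {"itemList": [{"itemId": "sku-42"}, {"itemId": "sku-7"}]}

CATALOG = {"sku-42": "trail shoe", "sku-7": "rain jacket"}  # hypothetical lookup

def describe(item_name):             # stand-in: Amazon Bedrock prompt
    return f"Our {item_name} is back in stock."

def localize(text, lang):            # stand-in: Amazon Translate
    return text if lang == "en" else f"[{lang}] {text}"

for rec in MOCK_RECS["itemList"]:
    print(localize(describe(CATALOG[rec["itemId"]]), "fr"))
```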
Why Composition Matters on the Exam
AIF-C01 questions often describe a multi-step business workflow and ask which two or three AWS services together deliver the AI use case. Recognising the composition pattern — what each managed service contributes to the chain — is the fastest path to the correct answer.
When an AIF-C01 scenario describes a multi-step workflow, do not look for a single "one-service" answer. Read each step, map each step to one AI service, and pick the answer option whose services collectively cover every step. The composition itself is often the AI use case being tested.
AI Use Cases Frequently Asked Questions (FAQ)
What are the main AI use cases on AWS for AIF-C01?
AIF-C01 focuses on ten recurring AI use case categories: image classification and detection, document OCR and form extraction, sentiment and entity analysis, translation, speech-to-text and text-to-speech, recommendation and personalization, forecasting, anomaly and fraud detection, generative text and chat, and code generation. Each category maps to one or two managed AWS AI services plus escape hatches in Bedrock and SageMaker.
When should I choose a managed AI service over Amazon SageMaker?
Choose a managed AI service whenever your AI use case fits a mainstream category and the accuracy of the managed API meets your business requirement. You gain zero infrastructure setup, no training data curation, and per-request pricing. Reach for SageMaker only when no managed service covers the use case, when you need a custom algorithm, or when regulatory / latency / cost constraints demand full model ownership.
When should I use Amazon Bedrock instead of Amazon SageMaker?
Choose Amazon Bedrock when the AI use case is generative — text creation, summarization, chat, RAG over documents, image generation — and you want on-demand access to foundation models without managing GPUs. Choose SageMaker when you need deep customization, proprietary training data, or full control over model hosting. Bedrock fine-tuning covers many middle-ground cases without requiring SageMaker expertise.
What is the difference between Amazon Q Business and Amazon Q Developer?
Amazon Q Business is an enterprise productivity assistant that answers employee questions over company documents, respecting user-level permissions via connectors to SharePoint, Jira, Salesforce, and dozens of SaaS systems. Amazon Q Developer is a coding copilot in the IDE that suggests code, explains functions, writes tests, and helps with AWS console navigation. Both are "Amazon Q" but target distinct AI use cases.
Which AWS service solves the recommendation AI use case?
Amazon Personalize is the purpose-built managed service for recommendation AI use cases. It offers recipes for user personalization, personalized ranking, similar items, and real-time campaigns. Choose SageMaker with a custom collaborative-filtering or deep-learning model only when Personalize's recipes do not fit the business logic.
Which AWS service solves the OCR AI use case — Rekognition or Textract?
Amazon Textract is the correct answer for OCR AI use cases involving documents — invoices, forms, tax filings, IDs, contracts. Textract understands forms, tables, and key-value pairs, not just raw text. Rekognition's DetectText is appropriate only for short text snippets in natural scenes (signs, product labels in photos), not multi-page documents.
Can I combine multiple AWS AI services in one application?
Yes — and composition is a frequent exam pattern. Typical chains include Transcribe → Comprehend → Bedrock for call-centre analytics, and Textract → Comprehend → Bedrock for document processing. Each service contributes one step; the combined pipeline delivers a complex AI use case that no single service solves alone.
Do managed AI services require training data?
Most do not for the default use case — Rekognition, Textract, Comprehend, Translate, Transcribe, and Polly are fully pre-trained. Customization variants (Rekognition Custom Labels, Comprehend Custom Classification, Transcribe Custom Vocabulary) let you add domain-specific labels or terms with small labelled datasets. Personalize and Forecast require your historical interaction or time-series data because the AI use case is inherently customer-specific.
What is retrieval-augmented generation (RAG) and which AI use case does it solve?
RAG is a pattern for grounding generative model responses in specific documents rather than the model's general training data. The AI use case is enterprise question-answering with citations — employees ask free-form questions and the system answers using up-to-date company documents. Knowledge Bases for Amazon Bedrock automates RAG; Amazon Q Business provides a turnkey end-user experience. RAG is heavily tested on AIF-C01 because it is the default pattern for enterprise generative AI.
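The RAG control flow itself is simple enough to sketch in a few lines: retrieve the most relevant snippets, then build a prompt grounded in them. The retriever below uses naive word overlap purely for illustration; real deployments delegate chunking, vector search, and citations to Knowledge Bases for Amazon Bedrock, but the retrieve-then-generate flow is the same.

```python
# Minimal RAG sketch: retrieve by word overlap, then build a grounded
# prompt. The documents and question are invented examples.

DOCS = [
    "Expense reports are due by the 5th of each month.",
    "Remote workers may claim home-office equipment up to $300.",
    "The VPN client must be updated quarterly.",
]

def retrieve(question, docs, k=1):
    # Score each document by shared words with the question (toy retriever;
    # production systems use embeddings and a vector store).
    q = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, docs):
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When are expense reports due?", DOCS))
```

The generated prompt — context plus question — is what gets sent to the foundation model, which is why RAG answers stay grounded in current documents rather than the model's training data.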
How does AWS expect me to decide between Bedrock and Amazon Q for chat use cases?
Amazon Q is the turnkey option when the AI use case matches a pre-built assistant (employee chat, developer copilot, QuickSight analytics chat). Amazon Bedrock is the build-your-own option when you need a custom UI, custom tool integrations, custom data sources beyond Amazon Q connectors, or access to specific foundation models. Picking Amazon Q for "internal knowledge chat" and Bedrock for "custom customer-facing assistant" is the reliable exam heuristic.
Further Reading
- AWS AIF-C01 Exam Guide: https://d1.awsstatic.com/training-and-certification/docs-ai-practitioner/AWS-Certified-AI-Practitioner_Exam-Guide.pdf
- AWS AI Services Overview: https://aws.amazon.com/machine-learning/ai-services/
- Amazon Bedrock: https://aws.amazon.com/bedrock/
- Amazon Q: https://aws.amazon.com/q/
- Amazon SageMaker: https://aws.amazon.com/sagemaker/
- Amazon Rekognition: https://aws.amazon.com/rekognition/
- Amazon Textract: https://aws.amazon.com/textract/
- Amazon Comprehend: https://aws.amazon.com/comprehend/
- Amazon Translate: https://aws.amazon.com/translate/
- Amazon Transcribe: https://aws.amazon.com/transcribe/
- Amazon Polly: https://aws.amazon.com/polly/
- Amazon Personalize: https://aws.amazon.com/personalize/
- Amazon Forecast: https://aws.amazon.com/forecast/
Related ExamHub topics: What is AI and ML, Amazon Bedrock, Amazon SageMaker, Amazon Q Business, Amazon Q Developer, Foundation Models, Generative AI Basics, ML Development Lifecycle.