Amazon Q and AWS AI services are the two layers of the AWS AI/ML stack that let customers solve real business problems without training a model from scratch. Amazon Q is the purpose-built generative AI assistant family — Amazon Q Business, Amazon Q Developer, Amazon Q in QuickSight, and Amazon Q in Connect — each wired to a specific user role and data surface. The AWS AI services portfolio (Amazon Rekognition, Amazon Comprehend, Amazon Translate, Amazon Polly, Amazon Transcribe, Amazon Textract, Amazon Kendra, Amazon Personalize, Amazon Forecast, Amazon Fraud Detector, plus the Amazon Augmented AI human-in-the-loop workflow) handles single-task inference behind one API call. On the AWS Certified AI Practitioner (AIF-C01) exam, Task Statement 3.1 asks you to describe the design considerations for applications that use foundation models — and that includes choosing Amazon Q and AWS AI services over Amazon Bedrock plus a custom model when the pre-built option already covers the use case. This is one of the most scenario-heavy question areas on the entire exam, because AIF-C01 repeatedly asks "which AWS AI service matches this use case."
This study guide walks through every Amazon Q product in detail, every AWS AI service in the AIF-C01 blueprint, the decision rubric between Amazon Q and AWS AI services versus Amazon Bedrock plus a custom foundation model, and the human-review pattern with Amazon Augmented AI. You will finish with six FAQ entries, five callouts, and a compact service-to-use-case map you can recall under exam pressure.
What Are Amazon Q and AWS AI Services?
Amazon Q and AWS AI services are the top layer of the three-tier AWS AI/ML stack.
- Tier 1 — AWS AI services are pre-built, task-specific APIs. Send an image, get labels (Amazon Rekognition). Send text, get sentiment (Amazon Comprehend). Send audio, get a transcript (Amazon Transcribe). No model training, no hyperparameter tuning, no GPU quota.
- Tier 2 — Amazon Q and generative AI includes Amazon Q (the assistant family) and Amazon Bedrock (the foundation-model API marketplace). Amazon Q is the end-user product; Amazon Bedrock is the developer platform. Amazon Q and AWS AI services frequently overlap here because Amazon Q is itself built on top of Amazon Bedrock and AWS AI services.
- Tier 3 — Amazon SageMaker is the custom ML platform where you train and deploy your own model when nothing in Tier 1 or Tier 2 fits.
Amazon Q and AWS AI services let a team ship an AI feature in hours, not quarters. They are the correct answer on AIF-C01 whenever the exam says "minimal ML expertise," "no data science team," "fastest path to production," or "pre-trained model." This sentence — that Amazon Q and AWS AI services are the fastest path when a pre-built capability exists — appears in some form on nearly every scenario question.
Why Amazon Q and AWS AI Services Matter for AIF-C01
AIF-C01 Domain 3 (Applications of Foundation Models) carries 28 percent of the exam weight, and Task Statement 3.1 covers design considerations for FM-based applications, including service selection. The exam explicitly tests whether you can differentiate Amazon Q Business from Amazon Q Developer, Amazon Q from Amazon Bedrock, Amazon Kendra from a vector database, and Amazon Comprehend from a custom SageMaker model. Amazon Q and AWS AI services also map to Task Statement 1.2 (practical use cases) because every noun-to-service mapping lives here. Expect three to five scenario questions on this topic on the real test.
Plain-English Explanation of Amazon Q and AWS AI Services
Amazon Q and AWS AI services sound like a zoo of overlapping products. Three analogies collapse the zoo into a simple map.
Analogy 1 — The Swiss Army Knife
Think of Amazon Q and AWS AI services as a Swiss Army knife with labeled blades. Each blade solves exactly one problem and you pick the blade by the job, not the other way around.
- Amazon Q Business is the big assistant blade — "answer any question about company documents."
- Amazon Q Developer is the coding blade — "write, review, and explain code inside my IDE."
- Amazon Q in QuickSight is the BI narration blade — "tell me what this dashboard says in English."
- Amazon Q in Connect is the contact-center blade — "suggest the next best response to the caller."
- Amazon Rekognition is the camera blade — "what is in this image or video?"
- Amazon Comprehend is the reading blade — "what does this text mean?"
- Amazon Translate is the language blade — "say this in Spanish."
- Amazon Polly is the speaker blade — "read this aloud."
- Amazon Transcribe is the microphone blade — "write down what was said."
- Amazon Textract is the scanner blade — "pull the forms and tables out of this PDF."
- Amazon Kendra is the search blade — "find the answer inside my corpus."
- Amazon Personalize is the recommendation blade — "what should I show this user next?"
- Amazon Forecast is the crystal-ball blade — "what will demand look like in six weeks?"
- Amazon Fraud Detector is the alarm blade — "is this transaction fake?"
- Amazon Augmented AI is the supervisor blade — "send low-confidence predictions to a human."
On AIF-C01, the question will hand you a single problem. Do not reach for the full custom workshop (Amazon SageMaker) or the raw foundation model (Amazon Bedrock) when the dedicated blade for that exact job already exists. The whole Amazon Q and AWS AI services trick is noun-to-blade mapping.
Analogy 2 — The Kitchen Brigade
Picture a restaurant kitchen where every station hands one finished dish through the window.
- Amazon Q Business is the headwaiter who memorized every internal menu, wine pairing, and regular customer. Ask a question, get an answer with a citation.
- Amazon Q Developer is the apprentice cook peeking at the head chef's notes, writing prep steps that the head chef reviews.
- Amazon Q in QuickSight is the sommelier explaining, in plain English, why tonight's wine goes with the dashboard.
- Amazon Q in Connect is the expediter whispering the next response into the agent's earpiece.
- Amazon Rekognition is the quality inspector at the plating station — does this dish look right, is there anything unsafe?
- Amazon Comprehend is the diner feedback reader who categorizes every review.
- Amazon Translate is the menu translator who rewrites the card in fifteen languages.
- Amazon Polly is the announcer at the drive-thru reading specials aloud.
- Amazon Transcribe is the stenographer capturing every customer order verbatim.
- Amazon Textract is the clerk who scans paper receipts and turns them into structured rows.
- Amazon Kendra is the library inside the kitchen — ask "do we have a gluten-free cassoulet recipe?"
- Amazon Personalize is the host who remembers every diner's favorite dish and suggests it on arrival.
- Amazon Forecast is the purchasing manager predicting how many kilos of flour to order next week.
- Amazon Fraud Detector is the cashier spotting counterfeit bills.
- Amazon Augmented AI is the shift supervisor who tastes dishes the robot is unsure about.
If the exam says "classify customer reviews," you call the diner-feedback reader (Amazon Comprehend). You do not open the molecular-gastronomy lab (Amazon SageMaker) to invent a sentiment analyzer from scratch.
Analogy 3 — The Toolbox
Amazon Q and AWS AI services are a labeled toolbox. Reach in and the right tool comes out: a hammer when the job is a nail, a saw when the job is wood.
- Q family tools handle conversation — Amazon Q Business for employees, Amazon Q Developer for engineers, Amazon Q in QuickSight for analysts, Amazon Q in Connect for agents.
- Vision tools handle pixels — Amazon Rekognition for images and videos.
- Language tools handle text — Amazon Comprehend, Amazon Translate, Amazon Kendra.
- Voice tools handle audio — Amazon Polly for synthesis, Amazon Transcribe for recognition.
- Document tools handle paper — Amazon Textract.
- Prediction tools handle numbers and rankings — Amazon Personalize, Amazon Forecast, Amazon Fraud Detector.
- Quality tools handle review — Amazon Augmented AI.
When AIF-C01 describes a business scenario, do the toolbox walk: pixels → Rekognition, paper → Textract, audio → Transcribe or Polly, text meaning → Comprehend, text search → Kendra, text conversation → Amazon Q, recommendations → Personalize, time series → Forecast, fraud → Fraud Detector, human review → Augmented AI. Ten walks, ten correct answers.
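The toolbox walk above can be sketched as a plain lookup table. This is an illustrative study aid, not an AWS API: the cue keywords, the `SERVICE_MAP` name, and the fallback value are all my own.

```python
# Illustrative "toolbox walk": map an exam-scenario cue to the AWS AI
# service that answers it. Cue keys and helper name are invented for
# this sketch; the service mapping follows the guide's decision walk.

SERVICE_MAP = {
    "pixels": "Amazon Rekognition",
    "paper": "Amazon Textract",
    "audio in": "Amazon Transcribe",
    "audio out": "Amazon Polly",
    "text meaning": "Amazon Comprehend",
    "text search": "Amazon Kendra",
    "text conversation": "Amazon Q",
    "recommendations": "Amazon Personalize",
    "time series": "Amazon Forecast",
    "fraud": "Amazon Fraud Detector",
    "human review": "Amazon Augmented AI",
}

def toolbox_walk(cue: str) -> str:
    """Return the pre-built service for a cue; fall back to custom ML."""
    return SERVICE_MAP.get(cue, "Amazon SageMaker (custom model)")
```

The fallback mirrors the exam logic: only when no blade matches do you consider a custom model.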
Amazon Q Family Overview
Amazon Q is an AWS-branded assistant family, not a single product. Every Amazon Q edition shares three traits: it is a generative-AI assistant, it is built on top of Amazon Bedrock foundation models, and it is wired to a specific data surface. The four editions tested on AIF-C01 are Amazon Q Business, Amazon Q Developer, Amazon Q in QuickSight, and Amazon Q in Connect.
Amazon Q Business — Enterprise knowledge assistant
Amazon Q Business is the enterprise-grade generative AI assistant that answers questions about internal company data. It comes with more than forty built-in connectors — Amazon S3, Microsoft SharePoint, Microsoft OneDrive, Confluence, Google Drive, Salesforce, ServiceNow, Jira, Slack, Gmail, Microsoft Exchange, Zendesk, Dropbox, Box, and others — so administrators wire Amazon Q Business to the corpus once and employees ask questions in plain English. Answers come back with citations pointing to the source documents, and access control is enforced per document using the same identities the source systems already use (AWS IAM Identity Center or any SAML-compatible identity provider).
Amazon Q Business also supports Amazon Q Apps, small no-code generative apps that a business user can build by describing the app in natural language. Amazon Q Business keeps customer data private by default — your conversations and documents are not used to train the underlying foundation models.
Amazon Q Developer — Coding and AWS console assistant
Amazon Q Developer (previously Amazon CodeWhisperer) is the generative AI coding assistant that lives inside Visual Studio Code, JetBrains IDEs, AWS Cloud9, the AWS Management Console, the AWS Command Line Interface, and the Amazon EC2 Linux command line. Amazon Q Developer generates functions, explains code, writes unit tests, suggests performance improvements, identifies security vulnerabilities, upgrades Java applications, and troubleshoots AWS issues directly inside the AWS Management Console.
Amazon Q Developer distinguishes itself from Amazon Q Business by target user: Amazon Q Developer is aimed at engineers writing code or operating AWS, while Amazon Q Business is aimed at office workers asking questions about documents.
Amazon Q in QuickSight — BI narratives and dashboards
Amazon Q in QuickSight adds natural-language question answering and narrative generation to Amazon QuickSight dashboards. Instead of building a dashboard by dragging fields, an analyst types "show me revenue by region for the last quarter" and Amazon Q in QuickSight generates the chart. Amazon Q in QuickSight also writes one-paragraph narrative summaries of dashboards for executives who do not want to read charts.
Amazon Q in QuickSight uses author and reader roles. Authors build data stories; readers consume them.
Amazon Q in Connect — Contact-center agent assistant
Amazon Q in Connect (the successor to Amazon Connect Wisdom) listens to live customer conversations in Amazon Connect, detects the customer's intent, and recommends responses, articles, and step-by-step actions in real time. It pulls knowledge from connected repositories — Salesforce, ServiceNow, and internal knowledge bases — so a contact-center agent always has the right answer one click away.
Amazon Q in Connect is the only Amazon Q edition scoped specifically to Amazon Connect contact centers; it does not run outside that surface.
Amazon Q is the family of purpose-built generative AI assistants from AWS. It includes Amazon Q Business (enterprise knowledge), Amazon Q Developer (coding and AWS), Amazon Q in QuickSight (BI narratives), and Amazon Q in Connect (contact-center agent assistance). Each edition is a finished product with a user interface; Amazon Bedrock, by contrast, is a developer API for foundation models.
Amazon Q vs Amazon Bedrock plus a Custom Foundation Model
This comparison is the highest-yield exam trap on the Amazon Q and AWS AI services topic. AIF-C01 Task 3.1 repeatedly tests whether you understand that Amazon Q is a finished product and Amazon Bedrock is a platform on which you build your own product.
Pick Amazon Q when:
- The use case is a standard assistant pattern (enterprise Q&A, coding help, BI narratives, contact-center agent assist).
- Business users want a chat UI on day one with no code.
- The organization lacks generative AI engineering capacity.
- Data-source connectors and identity-aware permissions need to work out of the box.
Pick Amazon Bedrock plus a custom application when:
- The product needs a bespoke UX that Amazon Q does not provide.
- The workflow needs custom orchestration, tool use, multi-step agents, or specific model selection (Claude 3.5 Sonnet vs Llama 3 vs Amazon Nova).
- The application must embed generative AI inside an existing product (chatbot in a mobile app, backend LLM worker).
- Fine-grained prompt engineering, Retrieval-Augmented Generation (RAG) tuning, or Bedrock Guardrails configuration is required beyond Amazon Q defaults.
On AIF-C01, if the scenario says "non-technical users, chat UI, answers about company documents with citations," pick Amazon Q Business. If it says "build a custom generative AI feature into our SaaS application using Claude 3 Haiku at the API level," pick Amazon Bedrock. If it says "help developers inside VS Code," pick Amazon Q Developer. Amazon Q and Amazon Bedrock are not competitors — Amazon Q is built on top of Amazon Bedrock — but on the exam they are distinct correct answers for distinct scenarios.
Amazon Rekognition — Computer Vision for Images and Video
Amazon Rekognition is the AWS AI services entry for computer vision. It analyzes images and video streams without requiring you to train a model. Amazon Rekognition groups its capabilities into labels, moderation, text-in-image detection, face analysis, and custom labels.
Image and video moderation
Amazon Rekognition Content Moderation detects unsafe content in images and videos — violence, explicit content, weapons, alcohol, tobacco, gambling, hate symbols, rude gestures, and more — returning a hierarchical taxonomy of moderation labels with confidence scores. The same API works for stored images, stored videos, and live video streams coming from Amazon Kinesis Video Streams.
Face analysis and face search
Amazon Rekognition DetectFaces returns face-level attributes like age range, gender, emotions (happy, sad, angry, confused, disgusted, surprised, calm), eye state, smile, and facial landmarks. CompareFaces measures face similarity across two images. Face collections store millions of face vectors, and SearchFacesByImage matches a probe face against a collection — the pattern used for face-based login or identity verification.
Label and text detection
DetectLabels returns thousands of scene and object labels ("Dog," "Beach," "Car") with bounding boxes and a hierarchical taxonomy. DetectText extracts text that appears inside images and video frames — street signs, product labels, license plates.
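A minimal sketch of working with a DetectLabels-style result. The response dict below is hand-written sample data in the documented shape, not real API output; a production caller would get it from `rekognition.detect_labels(...)` via boto3 (or pass `MinConfidence` to the API and skip the client-side filter entirely).

```python
# Filter an Amazon Rekognition DetectLabels-style response by
# confidence. sample_response mimics the documented shape; real output
# comes from boto3's rekognition.detect_labels(Image=..., ...).

sample_response = {
    "Labels": [
        {"Name": "Dog", "Confidence": 98.1},
        {"Name": "Beach", "Confidence": 88.4},
        {"Name": "Car", "Confidence": 54.2},
    ]
}

def labels_above(response: dict, min_confidence: float) -> list[str]:
    """Keep only label names at or above the confidence threshold."""
    return [
        label["Name"]
        for label in response["Labels"]
        if label["Confidence"] >= min_confidence
    ]
```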
Amazon Rekognition Custom Labels
Amazon Rekognition Custom Labels lets you train a custom image classifier or object detector with as few as ten labeled images per class. You do not write model code; Amazon Rekognition handles training, hosting, and inference. Use Custom Labels when your labels (a specific company logo, a defective product on your assembly line) are not in the general Amazon Rekognition taxonomy.
Amazon Comprehend — Natural Language Processing
Amazon Comprehend is the pre-built NLP entry in the AWS AI services portfolio. It extracts structured information from unstructured text in dozens of languages.
Sentiment, entities, and key phrases
Amazon Comprehend returns sentiment (Positive, Negative, Neutral, Mixed) for every input document and targeted sentiment (sentiment toward specific entities inside the text — positive toward one product, negative toward another, in the same review). Entity detection labels people, locations, organizations, dates, quantities, commercial items, and events. Key phrase extraction lifts the noun phrases that matter.
PII detection and redaction
Amazon Comprehend PII detection identifies personally identifiable information — names, addresses, credit card numbers, Social Security numbers, bank account numbers, phone numbers, email addresses, dates of birth — and returns their location spans so you can redact them before logging or training.
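The location spans make redaction a mechanical string operation. A minimal sketch, assuming entity dicts in the documented `DetectPiiEntities` shape (`Type`, `BeginOffset`, `EndOffset`); in production the spans would come from `comprehend.detect_pii_entities(...)`, and the sample message and offsets here are invented.

```python
# Redact text using Amazon Comprehend-style PII spans. Working right
# to left keeps earlier character offsets valid as spans are replaced.

def redact(text: str, entities: list[dict]) -> str:
    """Replace each PII span with a [TYPE] placeholder."""
    for ent in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        text = (
            text[: ent["BeginOffset"]]
            + f"[{ent['Type']}]"
            + text[ent["EndOffset"]:]
        )
    return text

message = "Call Jane at 555-0100 or jane@example.com"
spans = [
    {"Type": "NAME", "BeginOffset": 5, "EndOffset": 9},
    {"Type": "PHONE", "BeginOffset": 13, "EndOffset": 21},
    {"Type": "EMAIL", "BeginOffset": 25, "EndOffset": 41},
]
```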
Topic modeling
Amazon Comprehend topic modeling applies Latent Dirichlet Allocation (LDA) to a corpus and returns a set of topics, each described by its top terms. Use topic modeling on product reviews, support tickets, or research papers to discover themes without labels.
Custom Classifier and Custom Entity Recognition
Amazon Comprehend Custom Classifier trains a text classifier on labels you define (for example, "invoice," "purchase order," "contract"). Custom Entity Recognition trains a named-entity recognizer for entities unique to your domain (policy numbers, part numbers, drug names). Both options require a labeled training set but no model code.
AIF-C01 hides a trap between Amazon Comprehend and Amazon Macie. Amazon Comprehend PII detection runs on text you pass to the API — it is for text analytics pipelines. Amazon Macie PII detection scans objects in Amazon S3 buckets at rest — it is for data-governance discovery. Same word ("PII"), different surfaces. If the question describes scanning a data lake, the answer is Amazon Macie. If the question describes processing a stream of customer chat messages, the answer is Amazon Comprehend.
Amazon Translate — Neural Machine Translation
Amazon Translate is the neural machine-translation entry in the AWS AI services catalog. It translates text in real time or in asynchronous batch jobs across 75+ languages. Key features tested on AIF-C01:
- Custom Terminology — upload a CSV of brand-specific translations (product names, trademarks) so the engine never rewrites them.
- Active Custom Translation — supply parallel data (source and target segments in a specific domain) and Amazon Translate adapts output for that domain without a full training job.
- Batch translation — translate large document collections stored in Amazon S3 in one asynchronous job.
- Automatic source-language detection — pass "auto" as the source language code and the service detects the language from the input.
Typical AIF-C01 cue words: "localize product catalog into multiple languages," "real-time chat translation," "multilingual customer support."
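A Custom Terminology file is, at its simplest, a small CSV: a header row of language codes (source language first) followed by one row per locked term. The sketch below builds that CSV in memory; the helper name and example term are my own, and the upload step (`translate.import_terminology(...)`) is deliberately left out.

```python
# Build an Amazon Translate Custom Terminology file as a CSV string.
# Header row = language codes; each data row pins a source term to the
# exact translation the engine must use.

import csv
import io

def terminology_csv(source: str, target: str, terms: dict[str, str]) -> str:
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([source, target])  # e.g. "en,es"
    for src_term, locked_translation in terms.items():
        writer.writerow([src_term, locked_translation])
    return buf.getvalue()

doc = terminology_csv("en", "es", {"Amazon Polly": "Amazon Polly"})
```

Pinning a brand name to itself, as here, is the classic use: the engine then never rewrites it.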
Amazon Polly — Text-to-Speech
Amazon Polly turns text into lifelike speech. AIF-C01 tests three Amazon Polly concepts: Neural TTS voices, Speech Synthesis Markup Language (SSML), and lexicons.
Neural TTS voices and long-form voices
Amazon Polly offers several voice engine classes: standard (legacy concatenative voices), neural TTS (NTTS, deep-learning voices that sound dramatically more human), long-form voices (optimized for long passages like audiobooks and articles), and generative voices (the newest class, which pushes realism further and is available in selected languages). On AIF-C01, recognize that neural and long-form voices are the correct picks for customer-facing audio; standard voices are legacy.
SSML control
Speech Synthesis Markup Language is an XML-style language that gives you fine-grained control over synthesized speech — insert pauses (<break>), change speaking rate (<prosody rate="slow">), emphasize words (<emphasis>), spell out acronyms (<say-as interpret-as="characters">), pronounce numbers as digits or cardinal values, and more. SSML is the answer whenever the question mentions "fine-grained control over pronunciation, pitch, or pausing."
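Composing those controls into one request body is just string assembly. A minimal sketch: the SSML tags (`<break>`, `<prosody>`, `<say-as>`) are real SSML, but the helper function and its defaults are illustrative; the result would be passed with `TextType="ssml"` to a synthesize-speech call.

```python
# Assemble an SSML document: slow the speaking rate, insert a pause,
# and force "AWS" to be spelled out letter by letter.

def ssml(text: str, pause_ms: int = 500, rate: str = "slow") -> str:
    return (
        "<speak>"
        f'<prosody rate="{rate}">{text}</prosody>'
        f'<break time="{pause_ms}ms"/>'
        '<say-as interpret-as="characters">AWS</say-as>'
        "</speak>"
    )

body = ssml("Welcome to the exam review.")
```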
Lexicons
Amazon Polly lexicons let you override the default pronunciation of specific words — for example, a unique brand name or a domain-specific acronym. Upload a Pronunciation Lexicon Specification (PLS) file and Amazon Polly applies it to all subsequent synthesis requests. Lexicons are the answer when the scenario says "the brand name is mispronounced; how do we fix it for every future request?"
Amazon Transcribe — Speech-to-Text
Amazon Transcribe converts audio to text in batch and real-time streaming modes. AIF-C01 tests the general Amazon Transcribe surface plus two domain variants: Amazon Transcribe Medical and Amazon Transcribe Call Analytics.
Real-time and batch transcription
Real-time (streaming) transcription accepts an HTTP/2 or WebSocket stream and returns partial transcripts with sub-second latency. Batch transcription runs on audio files in Amazon S3 and returns a completed transcript minutes later. Both modes support automatic language identification, custom vocabulary (for domain terminology), custom language models (trained on domain text), speaker diarization (who spoke when), channel identification (left vs right audio channel), and automatic punctuation.
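Speaker diarization output is a list of timed segments per speaker, which makes talk-time ratios a simple tally. The segments below are a hand-made sample in the documented `speaker_labels` shape; real ones come from the transcript JSON a batch job writes to Amazon S3.

```python
# Tally talk time per speaker from Amazon Transcribe diarization
# segments (speaker_label plus string start/end timestamps).

from collections import defaultdict

def talk_time(segments: list[dict]) -> dict[str, float]:
    totals: dict[str, float] = defaultdict(float)
    for seg in segments:
        totals[seg["speaker_label"]] += (
            float(seg["end_time"]) - float(seg["start_time"])
        )
    return dict(totals)

sample_segments = [
    {"speaker_label": "spk_0", "start_time": "0.0", "end_time": "4.5"},
    {"speaker_label": "spk_1", "start_time": "4.5", "end_time": "6.0"},
    {"speaker_label": "spk_0", "start_time": "6.0", "end_time": "7.0"},
]
```

This is the same kind of signal Amazon Transcribe Call Analytics computes for you as a managed feature.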
Amazon Transcribe Medical
Amazon Transcribe Medical is tuned for healthcare conversations — clinician-to-patient dictation and clinician-to-clinician dialogue. It recognizes medical terminology (drug names, anatomy, procedures) and supports HIPAA-eligible workloads.
Amazon Transcribe Call Analytics
Amazon Transcribe Call Analytics combines transcription with sentiment analysis, talk-time ratios, issue detection, and post-call summarization for contact-center calls. Real-time Call Analytics streams these signals during the call; post-call Call Analytics processes recordings after the fact. Call Analytics is distinct from Amazon Q in Connect — Call Analytics produces structured analytics, Amazon Q in Connect whispers suggestions to the agent.
Amazon Textract — Document OCR, Forms, Tables, Queries
Amazon Textract goes far beyond flat OCR. It understands the structure of documents — forms, tables, checkboxes, key/value pairs, layout elements — and returns structured JSON you can drop into downstream logic.
Forms, tables, and layout
AnalyzeDocument with the FORMS feature returns key/value pairs exactly as they appear on the page ("Employee Name: Jane Doe"). TABLES returns table cells with their row and column indices. LAYOUT returns document structure elements — titles, headers, paragraphs, footers, page numbers — so you can chunk a document intelligently for downstream RAG pipelines on Amazon Bedrock Knowledge Bases.
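To see why FORMS beats flat OCR, it helps to walk the response once: key and value are separate `KEY_VALUE_SET` blocks, linked by `VALUE` relationships, each with `CHILD` links to `WORD` blocks. The sketch below pairs them up; the sample blocks are a minimal hand-written fragment in that documented shape, not real `textract.analyze_document(...)` output.

```python
# Pair keys with values from Amazon Textract FORMS-style blocks.

def form_pairs(blocks: list[dict]) -> dict[str, str]:
    by_id = {b["Id"]: b for b in blocks}

    def child_text(block: dict) -> str:
        """Join the WORD children of a block into one string."""
        words = []
        for rel in block.get("Relationships", []):
            if rel["Type"] == "CHILD":
                words += [by_id[i]["Text"] for i in rel["Ids"]]
        return " ".join(words)

    pairs = {}
    for block in blocks:
        if block["BlockType"] == "KEY_VALUE_SET" and "KEY" in block.get("EntityTypes", []):
            value_ids = [
                i
                for rel in block.get("Relationships", [])
                if rel["Type"] == "VALUE"
                for i in rel["Ids"]
            ]
            pairs[child_text(block)] = " ".join(
                child_text(by_id[i]) for i in value_ids
            )
    return pairs

sample_blocks = [
    {"Id": "k1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["KEY"],
     "Relationships": [{"Type": "VALUE", "Ids": ["v1"]},
                       {"Type": "CHILD", "Ids": ["w1"]}]},
    {"Id": "v1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["VALUE"],
     "Relationships": [{"Type": "CHILD", "Ids": ["w2", "w3"]}]},
    {"Id": "w1", "BlockType": "WORD", "Text": "Name:"},
    {"Id": "w2", "BlockType": "WORD", "Text": "Jane"},
    {"Id": "w3", "BlockType": "WORD", "Text": "Doe"},
]
```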
Queries and custom queries
The QUERIES feature lets you ask natural-language questions of a document — "What is the total amount due?" "What is the policy number?" — and Amazon Textract extracts the answer directly, skipping the "parse every field then find the right one" step. Custom Queries lets you adapt Queries to document types that the general model struggles with; you provide a small number of labeled examples and Amazon Textract refines its extraction for that template.
AnalyzeExpense, AnalyzeID, and AnalyzeLending
Amazon Textract also ships purpose-built document types: AnalyzeExpense for receipts and invoices (it knows line items, vendor, totals), AnalyzeID for government-issued identity documents (it knows first name, last name, date of birth, document number), and AnalyzeLending for mortgage document packages (it classifies pages and extracts field groups).
These two AWS AI services both return text but they solve different jobs. Amazon Rekognition DetectText extracts text that appears inside images — street signs, product labels, license plates, text overlaid on a scene. Amazon Textract extracts text from document images and PDFs — invoices, forms, IDs, contracts — and preserves structure (tables, key/value pairs). If the input is a scene with incidental text, the answer is Amazon Rekognition. If the input is a document, the answer is Amazon Textract.
Amazon Kendra — Intelligent Enterprise Search
Amazon Kendra is the enterprise search AWS AI service. It indexes unstructured documents from your company's data sources and answers natural-language questions with precise snippets and document-level answers.
How Amazon Kendra differs from a vector database
This is a classic AIF-C01 trap. Amazon Kendra uses a hybrid of semantic retrieval and keyword matching, with its own proprietary relevance model. It is not a raw vector database that returns approximate-nearest-neighbor results over embeddings — that would be Amazon OpenSearch Service k-NN, Amazon Aurora PostgreSQL with pgvector, or Amazon Neptune Analytics. Amazon Kendra is a finished search product; a vector database is a component of a retrieval pipeline.
Connectors and FAQs
Amazon Kendra ships connectors for Amazon S3, Microsoft SharePoint, Salesforce, ServiceNow, Confluence, Google Drive, Box, Dropbox, Jira, GitHub, and more. Administrators also upload Frequently Asked Questions (FAQ) lists so Amazon Kendra returns a canonical answer verbatim for common queries.
Amazon Kendra as RAG retriever
Amazon Kendra integrates with Amazon Bedrock Knowledge Bases as a retriever, which means you can build a Retrieval-Augmented Generation pipeline where Amazon Kendra finds relevant passages and Amazon Bedrock foundation models generate the final answer. On AIF-C01, remember that Amazon Kendra is both a standalone search product and a first-class RAG retriever.
Amazon Personalize — Real-Time Recommendations
Amazon Personalize packages the same recommendation technology that Amazon.com uses. Feed it user-item interaction events, an item catalog, and optional user metadata; it trains and hosts a recommendation model for you. You then call a real-time or batch inference endpoint to get recommendations, related items, or personalized rankings.
Recipes and domain datasets
Amazon Personalize ships algorithmic "recipes" — User-Personalization, Similar-Items, Personalized-Ranking, Trending-Now, Next-Best-Action, and others — each tuned for a specific recommendation pattern. Domain datasets add pre-configured schemas for E-commerce and Video-on-Demand so you do not start from scratch.
When Amazon Personalize wins vs custom
Amazon Personalize wins whenever the business needs recommendations within weeks, the data science team is small or nonexistent, and the ranking problem is a classic recommendation pattern. Amazon Personalize loses to a custom SageMaker model only when the ranking problem is unusual (multi-objective optimization across competing business KPIs) or when extreme-scale cost optimization matters. For AIF-C01, the default correct answer for a recommendation use case is Amazon Personalize.
Amazon Forecast — Time-Series Forecasting
Amazon Forecast produces time-series forecasts using AutoML across multiple algorithms (ARIMA, Prophet, ETS, DeepAR+, CNN-QR, NPTS). You provide historical time-series data plus optional related time series and item metadata; Amazon Forecast picks the best algorithm and outputs forecasts with confidence intervals.
Use cases: retail inventory forecasting, workforce planning, financial metric projection, IoT capacity planning. Amazon Forecast is the AIF-C01 answer whenever the scenario mentions "time series," "demand," "capacity planning," or "weekly / monthly / daily prediction" and the team does not want to write forecasting code.
Note: Amazon Forecast is in maintenance mode as of 2024 (new customers are directed to Amazon SageMaker Canvas and purpose-built forecasting recipes), but the service remains in scope on AIF-C01 and AWS continues to support existing customers. Treat it as a valid correct answer whenever the exam lists it.
Amazon Fraud Detector — Online Fraud Detection
Amazon Fraud Detector is the managed AWS AI service for detecting fraudulent online transactions, fake accounts, and promotion abuse. You upload historical events (labeled "fraud" vs "legit"), pick a model template — Online Fraud Insights, Transaction Fraud Insights, or Account Takeover Insights — and Amazon Fraud Detector trains and hosts the model for you. You then evaluate new events through rules you author plus the trained model, and the service returns a fraud score and an outcome.
Amazon Fraud Detector is the exam answer whenever the scenario says "detect fraud" and the team does not want to build an end-to-end ML system. A custom model on Amazon SageMaker is only justified when the fraud pattern is unusual enough to exceed what Amazon Fraud Detector's templates support, or when the team has an established anti-fraud ML practice.
Amazon Augmented AI (A2I) — Human-in-the-Loop
Amazon Augmented AI (Amazon A2I) is the human-in-the-loop workflow service that routes low-confidence ML predictions to human reviewers. A2I is not a model; it is a workflow coordinator.
How A2I works
You define a "flow definition" that specifies:
- The trigger condition — for example, confidence below a threshold on an Amazon Rekognition moderation result.
- The human work team — your private workforce, a vendor workforce, or Amazon Mechanical Turk.
- The UI template — the task page the human reviewer sees.
- The downstream action — write the reviewer's decision back to Amazon S3.
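The trigger condition in that flow is just a confidence gate. A minimal sketch under stated assumptions: the threshold, function name, and return markers are mine, and in production the low-confidence branch would call the real `StartHumanLoop` API instead of returning a marker string.

```python
# Gate a prediction: auto-accept confident results, flag the rest for
# an A2I human-review loop. Stand-in for the real StartHumanLoop call.

def route_prediction(label: str, confidence: float,
                     threshold: float = 0.80) -> str:
    """Return 'auto:<label>' or 'human_review:<label>'."""
    if confidence >= threshold:
        return f"auto:{label}"
    return f"human_review:{label}"
```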
Built-in integrations and custom loops
A2I ships built-in task types for Amazon Rekognition (image moderation review) and Amazon Textract (form key/value review). For any other inference — a SageMaker endpoint, a Bedrock foundation model output — you build a custom A2I loop by calling the StartHumanLoop API when confidence is below your threshold.
When A2I is the correct exam answer
A2I is the correct answer on AIF-C01 whenever the scenario requires:
- Human validation of low-confidence ML predictions before they hit production.
- Ground-truth labeling of edge cases for periodic model retraining.
- Regulatory compliance patterns that require "a human in the loop" for high-impact decisions.
Amazon Augmented AI (A2I) does not predict, classify, or generate. It orchestrates the handoff between ML inference and human reviewers. If the scenario says "route low-confidence predictions to people for review," the answer is Amazon A2I. If the scenario says "predict with higher confidence," the answer is training a better model, not A2I.
When a Pre-built AWS AI Service Wins vs a Custom Model
AIF-C01 Task 3.1 asks you to pick between an AWS AI service and a custom model built on Amazon SageMaker or Amazon Bedrock. The decision rubric below holds for most scenarios on the exam.
Pre-built AWS AI services win when:
- The task maps cleanly to an existing service (image labels, sentiment, translation, OCR, transcription, recommendations, forecasting, fraud detection, enterprise search, contact-center assistance).
- The team has no data science or ML engineering resources.
- Time-to-value is measured in days or weeks, not months.
- The labels and output schemas returned by the service are sufficient.
- Data volume is modest to large but unremarkable (no billion-scale training requirement).
- Cost per inference is acceptable at pay-per-use pricing.
Custom models win when:
- The output labels are not in the pre-built taxonomy and Amazon Rekognition Custom Labels or Amazon Comprehend Custom Classifier cannot close the gap with a small labeled set.
- The business KPI is unique (multi-objective recommendation ranking, novel fraud pattern, proprietary signal).
- Extreme scale pushes per-inference cost below what a managed service offers.
- Strict data residency or on-prem requirements exceed what AWS AI services provide.
- The team has mature MLOps practices and explicit model ownership.
The exam default is "prefer the pre-built service." When a question says "the team has no machine learning experience," the answer is almost always an AWS AI service or Amazon Q, not Amazon SageMaker. When a question says "build a custom model on proprietary data," the answer is Amazon SageMaker or Amazon Bedrock plus fine-tuning — but even then, check first whether Custom Labels or Custom Classifier on an AWS AI service already solves the problem.
Service-to-Use-Case Decision Tree
Memorize this noun-to-service map. On AIF-C01 it covers more than eighty percent of Amazon Q and AWS AI services scenario questions.
- "Chat over our company documents with citations" → Amazon Q Business.
- "Code assistance inside VS Code" → Amazon Q Developer.
- "Plain-English questions on a BI dashboard" → Amazon Q in QuickSight.
- "Real-time next-best-response suggestions for contact-center agents" → Amazon Q in Connect.
- "Detect inappropriate content in user-uploaded images and videos" → Amazon Rekognition content moderation.
- "Face-based login" → Amazon Rekognition face collections and SearchFacesByImage.
- "Object and scene labels on photos" → Amazon Rekognition DetectLabels.
- "Custom image classifier for our defect categories" → Amazon Rekognition Custom Labels.
- "Customer review sentiment" → Amazon Comprehend sentiment analysis.
- "Detect PII in chat logs before archival" → Amazon Comprehend PII detection.
- "Find themes across ten thousand support tickets" → Amazon Comprehend topic modeling.
- "Classify incoming emails as invoice, purchase order, or contract" → Amazon Comprehend Custom Classifier.
- "Real-time chat translation" → Amazon Translate.
- "Localize product catalog into fifteen languages with brand names preserved" → Amazon Translate with Custom Terminology.
- "Generate lifelike audiobook narration" → Amazon Polly long-form voice.
- "Fine control over pauses, pitch, and pronunciation" → Amazon Polly with SSML.
- "Override pronunciation of a brand name" → Amazon Polly lexicon.
- "Transcribe a live customer call with sub-second latency" → Amazon Transcribe streaming.
- "Clinician dictation" → Amazon Transcribe Medical.
- "Post-call analytics with sentiment and issue detection" → Amazon Transcribe Call Analytics.
- "Extract tables and key/value pairs from scanned invoices" → Amazon Textract AnalyzeDocument with FORMS + TABLES.
- "Ask natural-language questions of a document directly" → Amazon Textract Queries.
- "Parse driver's licenses" → Amazon Textract AnalyzeID.
- "Intelligent internal search across SharePoint, Confluence, and S3" → Amazon Kendra.
- "E-commerce product recommendations" → Amazon Personalize User-Personalization recipe.
- "Forecast weekly demand for every SKU" → Amazon Forecast.
- "Detect fake transactions during checkout" → Amazon Fraud Detector Transaction Fraud Insights.
- "Route low-confidence Rekognition moderation results to human reviewers" → Amazon Augmented AI (A2I) built-in Rekognition task type.
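The map above boils down to a lookup table. As a study aid only, here is that table as code; the cue strings are this sketch's own shorthand, not official AWS terminology.

```python
# Scenario cue -> default AIF-C01 answer, condensing the map above.
# The cue strings are shorthand invented for this sketch.
SERVICE_MAP = {
    "image": "Amazon Rekognition",
    "document": "Amazon Textract",
    "audio in": "Amazon Transcribe",
    "audio out": "Amazon Polly",
    "text meaning": "Amazon Comprehend",
    "translation": "Amazon Translate",
    "search my corpus": "Amazon Kendra",
    "recommendation": "Amazon Personalize",
    "time series": "Amazon Forecast",
    "fraud": "Amazon Fraud Detector",
    "human review": "Amazon Augmented AI",
    "chat assistant": "Amazon Q",
}

def pick_service(cue: str) -> str:
    """Return the default service for a scenario cue."""
    return SERVICE_MAP.get(cue.lower(), "re-read the question")
```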
AIF-C01 does not reward deep feature recall. It rewards fast, correct service selection. Memorize the noun-to-service map above. When you see "image or video" think Amazon Rekognition. When you see "document with fields" think Amazon Textract. When you see "audio in" think Amazon Transcribe. When you see "audio out" think Amazon Polly. When you see "text meaning" think Amazon Comprehend. When you see "search my corpus" think Amazon Kendra. When you see "recommendation" think Amazon Personalize. When you see "time series" think Amazon Forecast. When you see "fraud" think Amazon Fraud Detector. When you see "human review" think Amazon Augmented AI. When you see "chat assistant" think Amazon Q.
Common Exam Traps — Amazon Q and AWS AI Services
Every trap below has appeared in community post-exam reports for AIF-C01.
Trap 1 — Amazon Q Business vs Amazon Bedrock
Amazon Q Business is a finished product with a chat UI and built-in connectors. Amazon Bedrock is a developer API. If the question says "non-technical employees ask questions about HR documents with a chat UI and no code," the answer is Amazon Q Business. If the question says "developers build a custom generative AI feature using Claude 3 foundation models," the answer is Amazon Bedrock.
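The "developer API" half of this trap looks like the sketch below: a request for the Amazon Bedrock Converse API. The model ID is an example (use any model enabled in your account), and the boto3 call sits in a comment because it needs credentials and model access.

```python
# Example model ID -- substitute any model enabled in your account.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_converse_request(user_text: str) -> dict:
    """Request parameters for the Amazon Bedrock Converse API."""
    return {
        "modelId": MODEL_ID,
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
        ],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

# With boto3:
#   bedrock = boto3.client("bedrock-runtime")
#   resp = bedrock.converse(**build_converse_request("Summarize our leave policy."))
#   print(resp["output"]["message"]["content"][0]["text"])
```

If the scenario expects anyone to write code like this, the answer is Amazon Bedrock; if it expects zero code and a ready-made chat UI, the answer is Amazon Q Business.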
Trap 2 — Amazon Q Developer vs Amazon CodeGuru
Amazon Q Developer is the generative AI coding assistant that writes, explains, reviews, and upgrades code inside IDEs. Amazon CodeGuru Reviewer and CodeGuru Profiler analyze code quality and runtime performance but do not generate code. If the question says "suggest code completions and write unit tests," the answer is Amazon Q Developer. If the question says "identify CPU hotspots in production Java," the answer is Amazon CodeGuru Profiler.
Trap 3 — Amazon Kendra vs a vector database
Amazon Kendra is a finished search product. A vector database (Amazon OpenSearch k-NN, Amazon Aurora pgvector, Amazon Neptune Analytics) is an infrastructure component for storing embeddings. If the question says "managed intelligent search over our documents with connectors," the answer is Amazon Kendra. If the question says "store and query embeddings for a custom RAG application," the answer is a vector database service.
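The contrast is easiest to see in what you send. Calling Kendra is one request with a question; there are no embeddings anywhere in the caller's code. This sketch builds parameters for Kendra's Retrieve API (passage retrieval); the index ID is a placeholder created beforehand in the Kendra console.

```python
# Placeholder -- a real index ID comes from the Kendra console or API.
INDEX_ID = "11111111-2222-3333-4444-555555555555"

def build_retrieve_request(question: str, top_k: int = 5) -> dict:
    """Parameters for Amazon Kendra's Retrieve API."""
    return {
        "IndexId": INDEX_ID,
        "QueryText": question,
        "PageSize": top_k,
    }

# With boto3:
#   kendra = boto3.client("kendra")
#   resp = kendra.retrieve(**build_retrieve_request("What is our travel policy?"))
#   passages = [item["Content"] for item in resp["ResultItems"]]
```

A vector database, by contrast, expects you to have already computed and stored embedding vectors; that is the tell for the custom-RAG answer.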
Trap 4 — Amazon Lex vs Amazon Q vs Amazon Bedrock
Amazon Lex builds intent-and-slot chatbots (the Alexa engine — not an LLM). Amazon Q is a generative AI assistant. Amazon Bedrock is a foundation-model API. If the question says "a chatbot with deterministic intent-and-slot flows for booking appointments," the answer is Amazon Lex. If the question says "a free-form conversational assistant over company documents," the answer is Amazon Q Business. If the question says "embed a Claude-powered chatbot into our mobile app," the answer is Amazon Bedrock.
Trap 5 — Amazon Comprehend PII vs Amazon Macie PII
Amazon Comprehend PII detects personally identifiable information in text you pass to the API (streaming pipeline). Amazon Macie discovers PII in Amazon S3 objects at rest (data-governance scan). Same term ("PII"), different surfaces. Stream → Comprehend. Bucket → Macie.
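The "text in transit" side of this trap looks like the sketch below: a DetectPiiEntities request plus a redaction helper that consumes the offsets Comprehend returns. The helper is this sketch's own illustration, not an AWS API; the boto3 call is shown in a comment.

```python
def build_pii_request(text: str) -> dict:
    """Parameters for Comprehend's DetectPiiEntities API."""
    return {"Text": text, "LanguageCode": "en"}

def redact(text: str, entities: list) -> str:
    """Replace each detected PII span with its entity type, e.g. [EMAIL].

    `entities` uses the shape Comprehend returns: dicts with Type,
    BeginOffset, and EndOffset. Spans are replaced right-to-left so
    earlier offsets stay valid.
    """
    for ent in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        text = text[: ent["BeginOffset"]] + f"[{ent['Type']}]" + text[ent["EndOffset"]:]
    return text

# With boto3:
#   comprehend = boto3.client("comprehend")
#   resp = comprehend.detect_pii_entities(**build_pii_request(line))
#   safe = redact(line, resp["Entities"])
```

Nothing here touches Amazon S3; if the scenario scans buckets at rest instead of text in flight, the answer flips to Amazon Macie.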
Trap 6 — Amazon Textract Queries vs Amazon Bedrock + RAG
The Amazon Textract Queries feature answers factual questions about a single document using built-in extraction. Amazon Bedrock plus RAG answers free-form questions using retrieval plus generation across a corpus. If the question says "extract the invoice total from this single document with one API call," the answer is Amazon Textract Queries. If the question says "answer arbitrary questions across ten thousand documents in plain English," the answer is Amazon Bedrock Knowledge Bases (RAG) or Amazon Q Business.
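"One API call" on the Textract side looks like the sketch below: an AnalyzeDocument request with the QUERIES feature and the questions inlined. The boto3 call is commented out because it needs credentials and a real document.

```python
def build_queries_request(doc_bytes: bytes, questions: list) -> dict:
    """Parameters for Textract AnalyzeDocument with the QUERIES feature."""
    return {
        "Document": {"Bytes": doc_bytes},
        "FeatureTypes": ["QUERIES"],
        "QueriesConfig": {"Queries": [{"Text": q} for q in questions]},
    }

# With boto3:
#   textract = boto3.client("textract")
#   resp = textract.analyze_document(
#       **build_queries_request(open("invoice.pdf", "rb").read(),
#                               ["What is the invoice total?"]))
```

There is no corpus, no retrieval step, and no foundation model in the caller's hands; when those appear, the answer shifts to Bedrock RAG or Amazon Q Business.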
Trap 7 — Amazon Transcribe Call Analytics vs Amazon Q in Connect
Amazon Transcribe Call Analytics produces structured analytics (sentiment, issue detection, talk time) about calls — batch or real-time. Amazon Q in Connect whispers next-best-response suggestions to agents during live calls. Same domain (contact center), different jobs. Analytics → Call Analytics. Agent assist → Q in Connect.
Trap 8 — Amazon Augmented AI (A2I) is a workflow, not a model
A2I does not predict anything. It routes low-confidence predictions to human reviewers. If the question frames A2I as an inference engine, that's the distractor.
Pricing Model Overview for Amazon Q and AWS AI Services
AIF-C01 expects you to recognize pricing categories, not memorize rate cards.
- Amazon Q Business — per-user per-month subscription (Business Lite and Business Pro tiers).
- Amazon Q Developer — free tier plus Pro per-user per-month.
- Amazon Q in QuickSight — Amazon QuickSight author and reader pricing plus Amazon Q add-on.
- Amazon Q in Connect — per-user per-month inside Amazon Connect.
- Amazon Rekognition — per-image and per-minute-of-video; Custom Labels adds training and inference hours.
- Amazon Comprehend — per-unit (100 characters) for standard APIs, per-hour for training jobs on custom models.
- Amazon Translate — per-character translated.
- Amazon Polly — per-character synthesized; neural and long-form voices cost more than standard voices.
- Amazon Transcribe — per-second of audio; Medical and Call Analytics priced separately.
- Amazon Textract — per-page; AnalyzeDocument with FORMS/TABLES costs more than plain text detection; Queries priced per page per query.
- Amazon Kendra — per-index-per-hour (Developer and Enterprise editions).
- Amazon Personalize — training hours plus inference TPS-hours.
- Amazon Forecast — per-forecast-point generated plus training hours.
- Amazon Fraud Detector — per-prediction plus training and storage costs.
- Amazon Augmented AI (A2I) — per-object reviewed plus the underlying human workforce cost (Mechanical Turk or private workforce).
The AIF-C01 exam will not ask for exact rates, but it will ask "which pricing model applies to Amazon Q Business?" (per-user per-month) or "what drives the cost of Amazon Rekognition Custom Labels?" (training hours plus hosting hours plus inference volume).
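To see how a per-unit category composes into a bill, here is the Amazon Comprehend arithmetic as code. The rate is a made-up placeholder, not a real AWS price; only the metering shape (units of 100 characters, rounded up) matters for the exam.

```python
import math

def comprehend_units(chars: int) -> int:
    """Comprehend meters text in units of 100 characters, rounded up."""
    return math.ceil(chars / 100)

def monthly_cost(chars_per_doc: int, docs: int, rate_per_unit: float) -> float:
    """Total cost at a given per-unit rate. The rate is NOT a real AWS price."""
    return comprehend_units(chars_per_doc) * docs * rate_per_unit

# 10,000 documents of 550 characters each, at a placeholder $0.0001/unit:
# 550 chars -> 6 units, so 6 * 10,000 * 0.0001 = $6.00
```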
Security and Data Privacy Posture
Amazon Q and AWS AI services share a common security posture you should recognize:
- Data you send is encrypted in transit (TLS) and at rest (AWS-managed KMS keys by default, customer-managed KMS keys where supported).
- Content sent to Amazon Q Business, Amazon Q Developer, and Amazon Bedrock is not used to train the underlying foundation models. For several AWS AI services, AWS may use content to develop and improve the service unless you opt out, which can be enforced organization-wide with an AWS Organizations AI services opt-out policy.
- IAM policies control which identities may call which actions on which resources; Amazon Q Business further supports AWS IAM Identity Center for human identity, preserving per-document access control from the source systems.
- Many AWS AI services support VPC endpoints (AWS PrivateLink) so traffic never crosses the public internet.
- HIPAA eligibility is documented per service — Amazon Transcribe Medical, Amazon Comprehend Medical, and many general AWS AI services are HIPAA-eligible under a Business Associate Agreement.
On AIF-C01, if the question asks "what keeps customer data private when using Amazon Q Business?" the right frame combines "data is not used to train the models," "encryption in transit and at rest," "IAM Identity Center enforces per-document access," and "VPC endpoints are available."
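The customer-managed-key knob shows up as an ordinary request parameter. As one concrete example, the sketch below builds parameters for a Transcribe job whose output is encrypted with a customer-managed KMS key; the bucket name and key alias are placeholders.

```python
def build_transcribe_job(job_name: str, media_uri: str, kms_key_id: str) -> dict:
    """Parameters for Transcribe's StartTranscriptionJob, with a
    customer-managed KMS key encrypting the transcript output."""
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "LanguageCode": "en-US",
        "OutputBucketName": "my-transcripts-bucket",  # placeholder bucket
        "OutputEncryptionKMSKeyId": kms_key_id,       # customer-managed key
    }

# With boto3:
#   transcribe = boto3.client("transcribe")
#   transcribe.start_transcription_job(**build_transcribe_job(
#       "call-0001", "s3://my-audio-bucket/call-0001.wav", "alias/transcripts-key"))
```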
Integration Patterns with Amazon Bedrock and Amazon SageMaker
Amazon Q and AWS AI services do not live in a silo — they compose.
- Amazon Q Business + Amazon Bedrock Knowledge Bases — Amazon Q Business is itself a managed RAG pattern, but teams sometimes supplement it with Amazon Bedrock Knowledge Bases when they need a developer-controlled RAG surface alongside the Amazon Q Business chat UI.
- Amazon Textract + Amazon Bedrock — Amazon Textract extracts structured content (layout, tables, key/value pairs) that feeds an Amazon Bedrock foundation model for summarization, classification, or question answering. This is the canonical "document AI" pattern.
- Amazon Transcribe + Amazon Comprehend + Amazon Bedrock — transcribe the call, extract entities and sentiment, then use a foundation model to generate a call summary. This is the canonical "voice of customer" pattern.
- Amazon Rekognition + Amazon Augmented AI — Amazon Rekognition moderation flags unsafe content; A2I routes low-confidence detections to human reviewers.
- Amazon Kendra + Amazon Bedrock — Amazon Kendra retrieves passages; Amazon Bedrock generates the final answer. This is a first-class RAG pattern supported by Amazon Bedrock Knowledge Bases.
- Amazon Personalize + Amazon SageMaker — start with Amazon Personalize for speed; migrate to a custom SageMaker model only when the ranking problem exceeds what Personalize's recipes can express.
AIF-C01 will occasionally describe a composed pattern. When you see "extract structured data from contracts, then answer arbitrary questions," think Amazon Textract + Amazon Bedrock. When you see "search enterprise documents and generate answers with citations," think Amazon Kendra + Amazon Bedrock, or Amazon Q Business (which packages both).
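The "extract, then answer" composition is small enough to sketch end to end. Here the two service calls are injected as callables so the orchestration itself stays dependency-free and testable; in production they would wrap Textract and Bedrock, as the comments suggest.

```python
from typing import Callable

def document_qa(doc_bytes: bytes,
                question: str,
                extract: Callable[[bytes], str],
                generate: Callable[[str], str]) -> str:
    """Canonical document-AI pattern: extraction feeds generation.

    `extract` stands in for the Textract call, `generate` for the
    Bedrock call; both are injected so this sketch runs without AWS.
    """
    text = extract(doc_bytes)
    prompt = (
        "Answer using only this document:\n"
        f"{text}\n\n"
        f"Question: {question}"
    )
    return generate(prompt)

# In production the two callables wrap the service calls, e.g.:
#   extract  -> textract.detect_document_text(...), joined line text
#   generate -> bedrock.converse(...), response message text
```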
FAQ — Amazon Q and AWS AI Services Top Questions
What is the difference between Amazon Q Business and Amazon Q Developer on AIF-C01?
Amazon Q Business is the enterprise knowledge assistant aimed at non-technical employees; it connects to document repositories and answers questions with citations. Amazon Q Developer is the coding assistant aimed at software engineers; it lives inside IDEs, the AWS Management Console, and the AWS CLI and writes, explains, reviews, and upgrades code. Both are Amazon Q editions; both are built on Amazon Bedrock; they differ in user persona and surface.
When should I pick Amazon Q over Amazon Bedrock plus a custom foundation model?
Pick Amazon Q when the use case matches a standard assistant pattern (enterprise Q&A, coding help, BI narratives, contact-center assist) and you want a finished product with a UI on day one. Pick Amazon Bedrock when you need a bespoke UX, custom orchestration, fine-grained model selection, or embedded generative AI inside your own application. Amazon Q is the product; Amazon Bedrock is the platform that Amazon Q is built on.
Is Amazon Kendra a vector database?
No. Amazon Kendra is an enterprise search product that combines semantic retrieval with keyword matching and an AWS proprietary relevance model. A vector database (Amazon OpenSearch k-NN, Amazon Aurora pgvector, Amazon Neptune Analytics) stores raw embedding vectors and returns approximate nearest neighbors. Amazon Kendra is a finished service; a vector database is a component you assemble into a custom RAG pipeline. On AIF-C01, do not answer "Amazon Kendra" when the question asks for a vector store.
What is the difference between Amazon Comprehend PII detection and Amazon Macie PII detection?
Amazon Comprehend PII detection runs on text you pass to the API — it is an NLP service for pipelines that process streaming text. Amazon Macie runs on Amazon S3 objects — it is a data-governance service that scans buckets at rest to discover and classify sensitive data. Same noun, different surfaces. The exam trap hinges on whether the scenario describes text-in-transit or data-at-rest.
When does Amazon Rekognition Custom Labels beat a general Amazon Rekognition API?
Use general Amazon Rekognition DetectLabels when your labels are common (dog, car, beach, person). Use Amazon Rekognition Custom Labels when your labels are specific to your domain (a particular defect pattern on an assembly line, a particular company logo) and are not in the general taxonomy. Custom Labels requires as few as ten labeled images per class and still does not require you to write model code. A fully custom Amazon SageMaker vision model is only justified when Custom Labels cannot achieve acceptable accuracy.
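Once a Custom Labels model is trained and running, inference is one call against the project version. The sketch below builds that request; the ARN is a placeholder for a project version you would train in the Rekognition console, and the boto3 call is commented out.

```python
# Placeholder -- a real ARN comes from training a Custom Labels project.
PROJECT_VERSION_ARN = (
    "arn:aws:rekognition:us-east-1:123456789012:"
    "project/defects/version/defects.2024-01-01/1"
)

def build_custom_labels_request(image_bytes: bytes,
                                min_confidence: float = 70.0) -> dict:
    """Parameters for Rekognition's DetectCustomLabels API."""
    return {
        "ProjectVersionArn": PROJECT_VERSION_ARN,
        "Image": {"Bytes": image_bytes},
        "MinConfidence": min_confidence,
    }

# With boto3:
#   rek = boto3.client("rekognition")
#   resp = rek.detect_custom_labels(**build_custom_labels_request(img))
#   labels = [l["Name"] for l in resp["CustomLabels"]]
```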
What is Amazon Augmented AI (A2I) and when is it the right answer on AIF-C01?
Amazon Augmented AI (A2I) is a human-in-the-loop workflow service that routes low-confidence ML predictions to human reviewers. It is not a model; it is an orchestrator. A2I is the correct AIF-C01 answer when the scenario requires human validation of low-confidence predictions, ground-truth labeling of edge cases, or compliance-mandated human review of high-impact decisions. A2I ships built-in task types for Amazon Rekognition and Amazon Textract and supports custom loops on any inference (SageMaker, Bedrock, or external).
Is Amazon Forecast still a valid exam answer now that it is in maintenance mode?
Yes on AIF-C01. Amazon Forecast remains in the exam blueprint and AWS continues to support existing Amazon Forecast customers. When the exam offers Amazon Forecast as an option for a time-series forecasting scenario, it is a valid correct answer. New AWS customers are directed toward Amazon SageMaker Canvas for forecasting, but you should still recognize Amazon Forecast on the test.
Does Amazon Q Business use my data to train its foundation models?
No. Amazon Q Business does not use customer data (documents, conversations, or any data you connect via connectors) to train the underlying foundation models. Your data stays within your AWS account and is used only to answer your questions. The same "your data is not used for training" posture applies to Amazon Bedrock, and AWS AI services honor an organization-wide AI services opt-out policy; this is a frequently tested AIF-C01 fact.