
# Amazon Chatbot Experience — Interview Pack
{: .fs-9 }

Senior ML Platform / Applied AI Engineer interview preparation, built around a single end-to-end case study (MangaAssist) plus 30+ topic deep-dives that cover the full system-design and operations surface area.
{: .fs-5 .fw-300 }

Open MangaAssist case study{: .btn .btn-primary .fs-5 .mb-4 .mb-md-0 .mr-2 }
See the running web app{: .btn .fs-5 .mb-4 .mb-md-0 }


## How to use this pack

| Reading goal | Start here |
|---|---|
| Get the case study cold in 20 min | `01-problem-statement.md` → `16-final-summary.md` |
| Stress-test on system design | HLD-Questions and LLD-Questions |
| Prep for ML platform behavioural | Leadership-Narrative and POC-to-Production-War-Story |
| Brush up evaluation methodology | Evaluation-Systems-GenAI |
| Cost / latency tradeoffs | Cost-Optimization-Offline-Testing |

## Topic map

The same content, sliced two ways.

### By role expectation

- Tier 1 — Core technical depth (read these first)
- Tier 2 — Infra / engineering
- Tier 3 — Applied + operational nuance
- Tier 4 — Foundations
- Tier 5 — Domain stories + interview format

### By interview competency

| Competency | Folders that help |
|---|---|
| System design (LLM-backed) | MangaAssist-Architecture-DeepDive, HLD-Questions, RAG-MCP-Integration |
| System design (data plane) | Database-Tradeoffs, DynamoDB, ECS-Fargate-Lambda |
| Evaluation / quality | Evaluation-Systems-GenAI, Offline-Testing-Quality-Strategies, Ground-Truth-Evolution |
| Cost & performance | Cost-Optimization-*, Performance-Optimization-User-Stories, Operational-Efficiency-Optimization |
| Reliability / SRE | Monitoring-GenAI-Systems, Troubleshoot-GenAI-Applications, Debugging |
| Safety / compliance | AI-Safety-Security-Governance, Security-Privacy-Guardrails, Domain1-FM-Integration-Data-Compliance |
| MLOps / lifecycle | LLMOps, MLflow, Fine-Tuning-Foundational-Models, Model-Inference, CI-CD-Pipeline-User-Stories |
| Behavioural / leadership | Leadership-Narrative, POC-to-Production-War-Story, Challenges |

## The MangaAssist case study

The numbered files at the repo root walk through the entire case end-to-end, the way a real ML platform interview unfolds:

| # | File | Reading time |
|---|---|---|
| 01 | Problem statement | 3 min |
| 02 | User description | 3 min |
| 03 | Use cases | 4 min |
| 04 | Architecture HLD | 8 min |
| 04b | Architecture LLD | 12 min |
| 04c | WebSocket prototype design space | 6 min |
| 05 | Website integration | 5 min |
| 06 | Detailed workflow | 6 min |
| 07 | Team size | 3 min |
| 08 | Senior developer role | 4 min |
| 09 | Data integrations | 5 min |
| 10 | AI / LLM design | 8 min |
| 11 | Scalability & reliability | 6 min |
| 12 | Security & privacy | 6 min |
| 13 | Metrics | 5 min |
| 14 | MVP vs future | 4 min |
| 15 | Tradeoffs & challenges | 6 min |
| 16 | Final summary | 3 min |

## Sibling code in this repo (not part of the docs site)

The interview-prep notes live alongside three working prototypes:

- `mangaassist_web/` — Next.js 14 + FastAPI app that implements the case study's customer + admin surfaces. Source of truth for the production-shaped story.
- `streamlit_app/` — Streamlit prototype with 21 pages covering every experiment (RAG bake-off, model arena, prompt studio, voice console, guardrails lab, etc.).
- `mangaassist_3d/` — WebGL atlas exploration.

These folders are excluded from this docs site by config; reference them on GitHub when an interviewer asks "show me the code that does this."
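For reference, this kind of exclusion is usually a one-line config change. A minimal sketch, assuming the docs site is built with Jekyll (the kramdown `{: ... }` attribute lists on this page suggest a Jekyll theme) — the actual config file in this repo may look different:

```yaml
# _config.yml — hypothetical sketch, not the repo's actual config.
# Jekyll skips everything listed under `exclude` when building the site,
# so the prototype folders never appear in the rendered docs.
exclude:
  - mangaassist_web/
  - streamlit_app/
  - mangaassist_3d/
```

Anything excluded this way still lives in the repository, which is why the prototypes remain browsable on GitHub even though the docs site never renders them.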


{: .tip }
Every folder page on this site has the same shape: a one-paragraph summary, "interview talking points" (the questions this folder helps you answer), and an auto-generated table of contents over every `.md` file in the folder. Use the search box at the top of every page when you have a specific term in mind.