
MangaAssist Architecture Deep Dive

Full high-level and low-level design (HLD + LLD) for the MangaAssist case study. It walks one request from anime-finisher query → manga reading-order recommendation, through the orchestrator/RAG/tool-use stack, all the way to the cost envelope and the scale-out plan.

Interview talking points

  • Walk me through the architecture. Use these notes as your spine — they trace user → API gateway → orchestrator → tools/retrieval → LLM → trace store, with tradeoffs called out at each hop.
  • Why SSE over WebSocket for chat? The transport-choice notes here cite latency, CDN compatibility, and reconnect semantics.
  • One brain, many surfaces. The orchestrator-as-seam pattern is the load-bearing design choice; rehearse why it matters when you swap Streamlit → React without rewriting the brain.
  • Where do you store traces? SQLite for the prototype, swap-pattern to DynamoDB / Postgres outlined here.
  • Cost envelope. Per-request token math + cache-hit math sit in this folder; rehearse the back-of-envelope before the Cost Optimization section.
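For the cost-envelope talking point, a minimal back-of-envelope sketch can help rehearse the arithmetic. Every number below is an illustrative assumption (token counts, per-million-token prices, and cache-hit rate are placeholders, not figures from these notes):

```python
# Back-of-envelope cost model for one MangaAssist request.
# All defaults are hypothetical; plug in real token counts and pricing.

def per_request_cost(
    input_tokens: int = 3_000,        # assumed: system prompt + retrieved context + user turn
    output_tokens: int = 400,         # assumed: typical recommendation response
    usd_per_1m_input: float = 3.00,   # assumed model input price
    usd_per_1m_output: float = 15.00, # assumed model output price
    cache_hit_rate: float = 0.30,     # assumed fraction of requests served from cache (~free)
) -> float:
    """Expected LLM cost per request, discounting cache hits to ~zero."""
    raw = (input_tokens / 1e6) * usd_per_1m_input + (output_tokens / 1e6) * usd_per_1m_output
    return (1 - cache_hit_rate) * raw

cost = per_request_cost()
print(f"${cost:.4f} per request")
print(f"${cost * 100_000:.2f} per 100k requests")
```

With these placeholder numbers the raw cost is $0.015 per request, and a 30% cache-hit rate brings the expected cost down to about a penny; the point of rehearsing it is being able to redo the multiplication live with whatever numbers the interviewer hands you.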

Files in this folder

| File | Title |
| --- | --- |
| 00-the-story.md | The Deep Dive: One Query Through MangaAssist |
| 01-orchestrator-agent.md | 01 — The Orchestrator Agent |
| 02-product-search-agent.md | 02 — ProductSearchAgent |
| 03-order-status-agent.md | 03 — OrderStatusAgent |
| 04-recommendation-agent.md | 04 — RecommendationAgent |
| 05-manga-qa-agent.md | 05 — MangaQAAgent |
| 06-tool-dispatch-and-routing.md | 06 — Tool Dispatch & Routing |
| 07-failure-handling.md | 07 — Failure Handling |
| 08-memory-architecture.md | 08 — Memory Architecture |
| 09-escalation-workflow.md | 09 — Escalation Workflow |
| README.md | MangaAssist Architecture — Deep Dive Series |

Back to the home page.