
Prompt Engineering for MangaAssist

This folder is a standalone deep dive into prompt engineering for the MangaAssist chatbot used in Amazon's JP Manga storefront scenario.

The goal is not to repeat the architecture documents. The goal is to explain how foundational models are actually used, constrained, optimized, and tested, and how the system recovers when prompt tuning alone is not enough.

What This Folder Covers

  • Prompt design principles for a hybrid chatbot that uses templates, APIs, RAG, and FM generation together
  • Intent-specific prompt patterns across every major user journey
  • Optimization techniques for foundational models such as Claude-class response models and Titan-class embedding models
  • RAG prompt assembly, grounding, contradiction handling, and token-budget control
  • Guardrails-aware prompting and prompt-injection resistance
  • Failure scenarios where prompt optimization failed and a workaround still achieved the business goal
  • Evaluation, versioning, regression testing, and rollout strategy
  • A dedicated interview-prep pack focused only on prompt engineering in this project

Reading Order

  1. 01-prompt-design-principles.md
  2. 02-intent-specific-prompt-patterns.md
  3. 03-foundational-model-optimization-techniques.md
  4. 04-rag-prompt-integration.md
  5. 05-guardrails-and-prompt-hardening.md
  6. 06-failure-scenarios-and-workarounds.md
  7. 07-prompt-evaluation-versioning-and-regression.md
  8. Interview-Prep/README.md

Design Position

MangaAssist is not a pure chat app. It is a commerce workflow with an LLM inside it.

That changes the prompt-engineering strategy:

  • The model should explain and format, not invent truth
  • Live systems own dynamic facts such as price, availability, order state, and return eligibility
  • RAG owns knowledge grounding for policy and editorial content
  • Guardrails own enforcement, but prompts still need to reduce the chance of unsafe or invalid output
  • The orchestrator owns routing so not every user message becomes a large-model problem
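The ownership rules above can be sketched as a routing decision. This is a minimal illustrative sketch, not the real MangaAssist orchestrator: every intent name, threshold, and handler label here is a hypothetical stand-in.

```python
# Hypothetical routing sketch: the orchestrator decides which subsystem
# answers, so not every user message becomes a large-model problem.
# Intent names, the 0.5 threshold, and handler labels are illustrative.

def route(intent: str, confidence: float) -> str:
    """Pick a handler for a classified user intent."""
    TEMPLATE_INTENTS = {"chitchat", "greeting"}       # cheap canned replies
    API_INTENTS = {"order_tracking", "price_check"}   # live systems own these facts
    RAG_INTENTS = {"faq_policy", "product_qa"}        # grounded FM generation

    if confidence < 0.5:
        return "clarify_with_user"  # low confidence: ask, don't guess
    if intent in TEMPLATE_INTENTS:
        return "template"
    if intent in API_INTENTS:
        return "api_lookup"         # the model only formats the API result
    if intent in RAG_INTENTS:
        return "rag_plus_fm"
    return "fm_fallback"
```

For example, `route("order_tracking", 0.9)` returns `"api_lookup"`: the order-state fact comes from a live system, and the model never invents it.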

Scenario Coverage

This folder explicitly covers:

  • Recommendation and product discovery
  • Product Q&A
  • FAQ and policy
  • Promotion and checkout help
  • Order tracking and return requests
  • Escalation and negative sentiment
  • Chitchat and lightweight template-friendly interactions
  • Ambiguous, multilingual, and low-confidence requests
  • Structured-output scenarios
  • Latency and cost optimization scenarios
  • Failure cases where prompt tuning alone did not solve the problem
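One recurring mechanism behind several of these scenarios is token-budget control during RAG prompt assembly. The sketch below is a hedged illustration of the idea, assuming chunks arrive pre-sorted by retrieval score; the function names and the word-count tokenizer in the example are stand-ins, not the actual implementation.

```python
# Illustrative token-budget control for RAG prompt assembly: greedily pack
# retrieved chunks into the prompt until the budget is spent. Names are
# hypothetical; a real system would use the model's own tokenizer.

def assemble_context(chunks, budget_tokens, count_tokens):
    """Join as many chunks as fit within budget_tokens, in order."""
    selected, used = [], 0
    for chunk in chunks:  # assumed sorted by descending retrieval score
        cost = count_tokens(chunk)
        if used + cost > budget_tokens:
            break  # stop at the first chunk that would blow the budget
        selected.append(chunk)
        used += cost
    return "\n\n".join(selected)
```

With a toy word-count tokenizer, a 5-token budget keeps the first two of `["a b c", "d e", "f g h i"]` and drops the third.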

How to Use This Folder

  • Use the numbered docs if you are designing or reviewing prompt behavior
  • Use the failure/workaround document if you want realistic production tradeoffs
  • Use the evaluation document if you want to build a prompt release process
  • Use the interview pack if you want mock questions that stay grounded in MangaAssist instead of generic LLM trivia

Folder Map

  • 01-prompt-design-principles.md: Prompt architecture, decision rules, and shared prompt skeleton
  • 02-intent-specific-prompt-patterns.md: Scenario-by-scenario prompting patterns
  • 03-foundational-model-optimization-techniques.md: Optimization methods for quality, cost, latency, and consistency
  • 04-rag-prompt-integration.md: Retrieval-aware prompt assembly and grounding strategy
  • 05-guardrails-and-prompt-hardening.md: Safety-aware prompting and injection resistance
  • 06-failure-scenarios-and-workarounds.md: Cases where optimization failed and workarounds succeeded
  • 07-prompt-evaluation-versioning-and-regression.md: Testing, metrics, versioning, and rollout
  • Interview-Prep/: Prompt-engineering interview practice at six difficulty levels