# Prompt Engineering Interview Pack
This folder contains project-specific interview preparation for prompt engineering in MangaAssist.
The focus is not generic LLM trivia. Every question is grounded in the realities of this repo's design: hybrid routing, RAG grounding, structured outputs, guardrails, prompt hardening, latency, cost, and failure recovery.
## How To Use This Pack
- Start with `basic` if you want fluency in prompt architecture and scenario selection
- Use `medium` to rehearse intent-specific prompting, token budgets, and retrieval-aware prompting
- Use `hard` and above for failure analysis, guardrail interactions, evaluation design, and production tradeoffs
- Use `questions-only` for timed practice
- Use `with-hints` when you want answer-shaping guidance
## File Map
| Level | Questions Only | With Hints |
|---|---|---|
| Basic | 01-basic-questions-only.md | 01-basic-with-hints.md |
| Medium | 02-medium-questions-only.md | 02-medium-with-hints.md |
| Hard | 03-hard-questions-only.md | 03-hard-with-hints.md |
| Very Hard | 04-very-hard-questions-only.md | 04-very-hard-with-hints.md |
| Super Hard | 05-super-hard-questions-only.md | 05-super-hard-with-hints.md |
| Architect Level | 06-architect-level-questions-only.md | 06-architect-level-with-hints.md |
## Coverage Areas
- template vs API vs RAG vs FM prompt selection
- system prompt design and output contracts
- recommendation prompting and retrieval-aware prompting
- structured-output reliability
- prompt injection resistance and guardrail coordination
- token, latency, and cost optimization
- cases where prompt tuning failed but a system-level workaround succeeded
- evaluation, regression, and rollout strategy
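To make the "structured-output reliability" area concrete, here is a minimal sketch of the pattern those questions probe: validating a model reply against an expected output contract and degrading to a safe fallback instead of crashing. Everything here is hypothetical — `REQUIRED_KEYS` and `parse_recommendation` are illustrative names, not identifiers from the MangaAssist codebase.

```python
import json

# Assumed output contract for a recommendation reply (hypothetical fields).
REQUIRED_KEYS = {"title", "reason", "confidence"}

def parse_recommendation(raw: str) -> dict:
    """Parse a model reply; return a safe fallback if the contract is broken."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        # Model produced prose or truncated JSON instead of the contract.
        return {"title": None, "reason": "unparseable reply", "confidence": 0.0}
    if not REQUIRED_KEYS.issubset(data):
        # Valid JSON, but one or more contracted fields are missing.
        return {"title": None, "reason": "missing fields", "confidence": 0.0}
    return data

# A well-formed reply passes through unchanged:
ok = parse_recommendation('{"title": "One Piece", "reason": "long-running", "confidence": 0.9}')
# A conversational reply degrades to a fallback instead of raising:
bad = parse_recommendation("Sure! Here is a recommendation...")
```

Interview answers in this area typically extend the same idea with a repair-and-retry loop or schema validation before falling back.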
## Grounding Rule
All questions assume the MangaAssist design documented in this repository. Where a question pushes beyond what is explicitly implemented, it does so as a design challenge and not as a hidden assumption.