
Prompt Engineering Interview Pack - Basic With Hints

Level: Basic
How to use: Answer out loud first, then use the hints to check whether you covered the project-specific points.

Memory Map

graph TD
    A[Route First] --> B[Choose Truth Source]
    B --> C[Pick Prompt Shape]
    C --> D[Constrain Output]
    D --> E[Validate and Recover]
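The memory map above can be sketched as a minimal router. This is a toy illustration, not MangaAssist code: the intent names, keyword rules, and truth-source labels are all hypothetical stand-ins.

```python
def route(message: str) -> str:
    """Route First: classify the message into an intent (toy keyword rules)."""
    text = message.lower()
    if "order" in text:
        return "order_tracking"
    if "recommend" in text:
        return "recommendation"
    return "general"

# Choose Truth Source: each intent maps to the system that owns its facts.
TRUTH_SOURCE = {
    "order_tracking": "orders_api",    # live system owns order state
    "recommendation": "catalog_rag",   # retrieval grounds taste-based answers
    "general": "fm",                   # richer FM path for open questions
}

def handle_message(message: str) -> dict:
    """Return the routing decision; prompt shaping, output constraints,
    and validation would hang off this (intent, truth_source) pair."""
    intent = route(message)
    return {"intent": intent, "truth_source": TRUTH_SOURCE[intent]}
```

A real router would use a classifier rather than keyword matching, but the shape is the same: the routing decision, not the FM, picks the truth source.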

Interview Questions With Hints

Hiring Manager

  1. What is prompt engineering trying to achieve in MangaAssist beyond "make the answer sound good"?

Hint: Cover trust, grounding, response consistency, structured output, and cost-aware use of the FM path.

  2. Why should MangaAssist not send every user message to a foundation model?

Hint: Mention latency, cost, hallucination risk, and the fact that many flows are better served by templates or APIs.

  3. What is the difference between a prompt that owns tone and a system that owns truth?

Hint: The FM can explain and summarize, but live systems and retrieved sources must own prices, order state, and policy facts.
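One way to make the tone/truth split concrete: the fact comes from a live lookup, and generation (here a template stand-in) only decides phrasing. The function and SKU names are hypothetical.

```python
def price_answer(sku: str, price_lookup) -> str:
    """The system owns the fact; the prompt owns the tone."""
    price = price_lookup(sku)   # truth: live pricing system, never the FM
    # The FM (a fixed template here, for illustration) only shapes the wording.
    return f"Good news! {sku} is currently ${price:.2f}."

# Usage: back the lookup with a live API client in production.
prices = {"MANGA-001": 9.99}
reply = price_answer("MANGA-001", prices.get)
```

If the FM did generate the sentence, it would receive the price as grounding data and be forbidden from inventing one, which is the same ownership split.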

Product Manager

  1. Why is recommendation prompting different from order-tracking prompting?

Hint: Recommendation needs explanation and taste alignment; order tracking needs strict factual formatting from API data.

  2. Why is grounding important for customer trust in a commerce chatbot?

Hint: Wrong prices, wrong policies, or wrong delivery information break trust quickly.

Senior Engineer

  1. Walk me through the main layers of a MangaAssist prompt from system rules to user message.

Hint: System rules, workflow context, grounding data, conversation state, current message, output contract.
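The layers in the hint can be assembled in order, most stable first. This is a minimal sketch; the section labels and field names are assumptions, not the actual MangaAssist prompt format.

```python
def build_prompt(system_rules: str, workflow_context: str, grounding: str,
                 history: str, message: str, contract: str) -> str:
    """Assemble prompt layers from most stable (system rules) to most
    volatile (current message), ending with the output contract."""
    layers = [
        ("SYSTEM", system_rules),
        ("WORKFLOW", workflow_context),
        ("GROUNDING", grounding),
        ("HISTORY", history),
        ("USER", message),
        ("OUTPUT CONTRACT", contract),
    ]
    return "\n\n".join(f"[{name}]\n{body}" for name, body in layers)
```

Keeping the stable layers first also helps with prompt caching, since the prefix is identical across requests within a workflow.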

  2. When would you use template, API plus template, RAG plus generation, and a richer FM path?

Hint: Match the answer path to the truth source and error tolerance.
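The hint's matching rule can be written as a small dispatch. The path names and the three boolean signals are hypothetical simplifications of what would really be a per-intent policy.

```python
def answer_path(has_live_api: bool, needs_retrieval: bool, open_ended: bool) -> str:
    """Match the answer path to the truth source and error tolerance."""
    if has_live_api:
        return "api_plus_template"    # facts from a live system, fixed wording
    if needs_retrieval:
        return "rag_plus_generation"  # grounded in retrieved documents
    if open_ended:
        return "fm"                   # richer FM path: higher cost, higher risk
    return "template"                 # cheapest, fully deterministic
```

Note the ordering encodes a preference: when a cheaper, more reliable path can answer, the FM never sees the request.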

  3. Why should business-critical facts like price or delivery status not be left to free-form generation?

Hint: Even strong prompts cannot make free-form generation a reliable source of truth for live commerce facts.
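This is also the "Validate and Recover" stage of the memory map: if generation is used at all near a business-critical fact, check the output against the live value and fall back to a template on mismatch. A minimal sketch, with an assumed dollar-amount format:

```python
import re

def validate_price_reply(reply: str, true_price: float) -> str:
    """Never ship a generated price that disagrees with the live system;
    recover with a safe template instead of the FM's wording."""
    m = re.search(r"\$(\d+(?:\.\d{2})?)", reply)
    if m and float(m.group(1)) == true_price:
        return reply
    return f"The current price is ${true_price:.2f}."  # template fallback
```

The validator is deliberately strict: a reply that omits the price entirely also fails and gets the template.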

Applied Scientist

  1. What is the benefit of using intent-specific prompt patterns instead of one giant universal prompt?

Hint: Smaller prompts, less instruction conflict, lower token cost, and easier evaluation.

  2. Why can few-shot prompting improve behavior but still be a bad production choice in some flows?

Hint: Talk about token bloat, latency, context pressure, and diminishing returns.
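The token-bloat point is easy to demonstrate: few-shot examples are paid for on every request. The 4-characters-per-token heuristic below is a crude, model-dependent assumption used only for illustration.

```python
def rough_tokens(text: str) -> int:
    # Crude heuristic: ~4 characters per token (assumption, varies by model).
    return max(1, len(text) // 4)

base_prompt = "Classify the intent of the user message."
examples = ["User: where is my order?\nIntent: order_tracking"] * 8

with_shots = base_prompt + "\n" + "\n".join(examples)
# Every request pays this overhead again; it also consumes context window
# that grounding data and conversation history may need more.
overhead = rough_tokens(with_shots) - rough_tokens(base_prompt)
```

In high-volume flows the same behavior is often cheaper to get from an intent-specific instruction, a fine-tuned classifier, or a template.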