
MangaAssist Interview Pack - Super Hard With Hints

Level: Super Hard
How to use: These hints are intentionally compact. Expand them into platform-level decisions with explicit tradeoffs.

Platform Evolution Map

```mermaid
graph LR
    A[Single Storefront Assistant] --> B[Reusable Orchestration Core]
    B --> C[Shared Safety Layer]
    B --> D[Store-Specific Knowledge]
    B --> E[Store-Specific Integrations]
    C --> F[Central Policy Engine]
    D --> G[Per-domain RAG Indexes]
    E --> H[Capability Registry]
    H --> I[Multi-tenant Retail Assistant Platform]
```

Interview Questions With Hints

Distinguished Engineer

  1. If this project becomes the template for all retail assistants, what would you extract into a platform layer and what would remain domain-specific to manga?

Hint: Shared orchestration, safety, memory, observability, and rollout logic are platform candidates; taxonomy, KBs, integrations, and UX nuance are domain-specific.

  2. What is the strongest argument against using a pure agentic framework here, and what is the strongest argument in favor of it?

Hint: Against: determinism, cost, and trust for core commerce flows. In favor: faster evolution for multi-step workflows and tool orchestration.

  3. How would you design a capability registry so the orchestrator knows which intents, APIs, and safety rules exist per storefront?

Hint: Think declarative config, versioning, ownership metadata, and runtime lookup rather than hardcoded branches.
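The hint above can be sketched as a small registry that resolves capabilities at runtime instead of branching on storefront names in code. This is a minimal illustration, not a prescribed design; every name here (the `Capability` fields, the `manga` store key, the `orders.get_status` API string) is an assumption for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    intent: str               # e.g. "order_status"
    api: str                  # tool or endpoint the orchestrator may call
    safety_rules: tuple       # policy IDs enforced around the call
    version: str              # supports staged rollout per storefront
    owner: str                # team accountable for this capability

class CapabilityRegistry:
    """Declarative, per-store capability lookup with a safe default."""

    def __init__(self):
        self._by_store: dict[str, dict[str, Capability]] = {}

    def register(self, store: str, cap: Capability) -> None:
        self._by_store.setdefault(store, {})[cap.intent] = cap

    def lookup(self, store: str, intent: str):
        # Unknown store or intent returns None so the orchestrator
        # can fall back to a safe default flow instead of crashing.
        return self._by_store.get(store, {}).get(intent)

registry = CapabilityRegistry()
registry.register("manga", Capability(
    intent="order_status",
    api="orders.get_status",
    safety_rules=("auth_required", "pii_redaction"),
    version="1.2.0",
    owner="commerce-core",
))

cap = registry.lookup("manga", "order_status")
```

In practice the registry entries would come from versioned config files with ownership metadata, so adding a storefront means adding declarations, not code branches.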

Applied Scientist

  1. The project documents intent classification, RAG, and recommendation separately. Where do you expect the highest interaction effects between these systems, and how would you evaluate them jointly?

Hint: The interfaces matter most: misrouting degrades retrieval, bad retrieval weakens explanations, and recommendation quality affects perceived AI usefulness.

  2. How would you detect when the chatbot is technically accurate but still not useful enough to change user behavior?

Hint: Look beyond correctness toward click-through, add-to-cart, repeat usage, abandonment, and qualitative feedback.
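One way to make the accuracy-versus-usefulness gap concrete: count a response as useful only when it was correct *and* the user took a downstream action. This is a hedged sketch; the session field names (`correct`, `clicked`, `added_to_cart`) are illustrative assumptions, not the project's schema.

```python
def usefulness_gap(sessions):
    """Return (accuracy_rate, usefulness_rate); a large gap flags
    responses that are right but do not change behavior."""
    correct = [s for s in sessions if s["correct"]]
    useful = [s for s in correct if s["clicked"] or s["added_to_cart"]]
    return len(correct) / len(sessions), len(useful) / len(sessions)

sessions = [
    {"correct": True,  "clicked": True,  "added_to_cart": False},
    {"correct": True,  "clicked": False, "added_to_cart": False},
    {"correct": True,  "clicked": False, "added_to_cart": True},
    {"correct": False, "clicked": False, "added_to_cart": False},
]
acc, use = usefulness_gap(sessions)  # 0.75 accuracy vs 0.5 usefulness
```

Tracking both rates over time, plus repeat usage and abandonment, shows whether accuracy improvements actually translate into behavior change.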

Security and Privacy Lead

  1. How would multi-region expansion change your privacy, retention, and deletion design, especially for conversation data and analytics?

Hint: Data residency, deletion workflows, cross-region replication, and regional retention controls become first-class design points.

  2. What is your strategy for proving to leadership that personalization remains privacy-first rather than becoming scope creep?

Hint: Define explicit allowed signals, disallowed data classes, audits, metrics, and approval gates.
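The "explicit allowed signals" idea can be expressed as a default-deny allowlist, so that expanding personalization requires a visible, reviewable change. A minimal sketch; the signal names and data classes are illustrative assumptions.

```python
# Default-deny admission check for personalization signals.
ALLOWED_SIGNALS = {"genre_preferences", "purchase_history", "wishlist"}
DISALLOWED_CLASSES = {"precise_location", "contacts", "off_site_browsing"}

def admit_signal(name: str) -> bool:
    """Reject disallowed data classes outright; admit only
    explicitly approved signals. Anything unlisted is denied."""
    if name in DISALLOWED_CLASSES:
        raise ValueError(f"disallowed data class: {name}")
    return name in ALLOWED_SIGNALS
```

Because the lists are data, they can be audited, versioned, and put behind approval gates, which is exactly the evidence leadership needs.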

Data Platform Lead

  1. How would you attribute revenue impact fairly when the user interacts with search, recommendations, and the chatbot in the same session?

Hint: Use experimental design and multi-touch attribution, not simplistic last-click logic.
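To contrast with last-click logic, here is a sketch of one common multi-touch scheme, position-based (U-shaped) attribution, which splits credit 40/40/20 between first touch, last touch, and the middle. The weights are a convention for illustration, not values from the project.

```python
def attribute_revenue(touchpoints, revenue):
    """Position-based attribution: 40% to the first touch, 40% to the
    last, and 20% shared evenly across middle touches. Assumes the
    session's touchpoints are distinct and in order."""
    n = len(touchpoints)
    if n == 1:
        return {touchpoints[0]: revenue}
    if n == 2:
        return {t: revenue * 0.5 for t in touchpoints}
    credit = {t: 0.0 for t in touchpoints}
    credit[touchpoints[0]] += revenue * 0.4
    credit[touchpoints[-1]] += revenue * 0.4
    middle = touchpoints[1:-1]
    for t in middle:
        credit[t] += revenue * 0.2 / len(middle)
    return credit

# A session that touches search, the chatbot, then recommendations:
credits = attribute_revenue(["search", "chatbot", "recommendations"], 30.0)
```

In a real system these weights would be validated against holdout experiments rather than assumed, but even this simple model avoids crediting the chatbot with zero just because the recommendation was the last click.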

  2. How would you separate signal from noise in thumbs-up, thumbs-down, add-to-cart, and escalation data when using them for product improvement?

Hint: Normalize by intent and context, combine explicit and implicit signals, and watch for selection bias.
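"Normalize by intent" can be made concrete by comparing feedback rates within each intent and discarding intents below a minimum sample size, a simple guard against selection bias. The field names and the threshold are illustrative assumptions.

```python
from collections import defaultdict

def per_intent_rates(events, min_samples=50):
    """Thumbs-up rate per intent, ignoring intents with too few
    samples to be meaningful."""
    counts = defaultdict(lambda: {"up": 0, "total": 0})
    for e in events:
        c = counts[e["intent"]]
        c["total"] += 1
        if e["feedback"] == "up":
            c["up"] += 1
    return {
        intent: c["up"] / c["total"]
        for intent, c in counts.items()
        if c["total"] >= min_samples
    }

events = (
    [{"intent": "recommendation", "feedback": "up"}] * 40
    + [{"intent": "recommendation", "feedback": "down"}] * 20
    + [{"intent": "order_status", "feedback": "up"}] * 10  # under threshold
)
rates = per_intent_rates(events)
```

Implicit signals (add-to-cart, escalation) would feed the same per-intent breakdown, so a noisy, low-volume intent never drowns out or inflates the aggregate.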

VP of Engineering

  1. If you had to choose between improving conversion, reducing support cost, or improving customer trust first, which would you optimize for in year one and why?

Hint: Strong answers usually treat trust as the constraint that enables the other two, even if conversion is the headline KPI.

Interaction Surface Recall

```mermaid
graph TD
    A[Customer Intent] --> B[Search / Browse]
    A --> C[Chatbot]
    A --> D[Support Flows]
    B --> E[Shared Catalog]
    C --> E
    D --> F[Order / Returns Systems]
    C --> F
    C --> G[Reco + RAG + LLM]
    G --> H[Metrics + Feedback]
    H --> I[Model / Prompt / UX Improvement Loop]
```