
Support & Policy MCP — FAQ, Returns, and Billing Help

Purpose

This MCP server answers customer support questions using RAG over an authoritative library of policy documents, FAQs, and help articles. The design goal is zero hallucination on policy details: every answer must be grounded in the indexed document corpus and carry a source citation.


Exposed Tools

| Tool | Input | Output | Use Case |
|------|-------|--------|----------|
| answer_faq | question | FAQAnswer | General support questions |
| get_return_policy | item_type, days_since_purchase? | PolicyResult | Return eligibility |
| get_subscription_info | plan_name? | SubscriptionDetails | Plan features, pricing, limits |
| get_cancellation_steps | service_type | StepByStepGuide | How to cancel |
| check_refund_eligibility | order_id, reason | RefundEligibility | Refund decision |
| escalate_to_agent | issue_summary, user_id | TicketCreated | Human handoff |

RAG Pipeline

flowchart TD
    TC([Tool Call: answer_faq\nquestion='Can I get a refund for a\ndigitally purchased manga?']) --> QR[Query Rewriter\nExpand abbreviations · Normalise intent]
    QR --> EB[Embed Rewritten Query\nTitan Embed v2]
    QR --> BM[BM25 Query\nKeyword terms]

    EB --> OS[(OpenSearch\nPolicy Doc Index)]
    BM --> OS
    OS --> HY[Hybrid Retrieval\ntop-20 chunks]
    HY --> VF[Version Filter\nexclude deprecated docs]
    VF --> RK[Cross-Encoder Rerank]
    RK --> CH[Top-3 Chunks\nwith source URL + version]
    CH --> CI[Citation Injector\nAdd [Source: doc_id] tags]
    CI --> TR([Tool Result → Claude])

    style TC fill:#4A90D9,color:#fff
    style TR fill:#27AE60,color:#fff
    style VF fill:#C0392B,color:#fff
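The hybrid step in the diagram can be sketched as two OpenSearch query bodies plus client-side reciprocal-rank fusion (RRF), one common way to merge dense and BM25 rankings. The index field names ("embedding", "body") are illustrative assumptions, and a production setup might instead use OpenSearch's server-side hybrid query support.

```python
def knn_query(vector: list[float], k: int = 20) -> dict:
    """Dense retrieval body: nearest neighbours of the query embedding."""
    return {"size": k, "query": {"knn": {"embedding": {"vector": vector, "k": k}}}}

def bm25_query(text: str, k: int = 20) -> dict:
    """Keyword retrieval body over the chunk text."""
    return {"size": k, "query": {"match": {"body": text}}}

def rrf_fuse(rankings: list[list[str]], c: int = 60) -> list[str]:
    """Fuse ranked doc_id lists: each hit scores 1/(c + rank)."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (c + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```

RRF is attractive here because it needs no score normalisation between the BM25 and cosine-similarity scales.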

Document Corpus Structure

Policy Documents (S3 + OpenSearch)
├── returns/
│   ├── physical-manga-returns-policy-v4.md
│   ├── digital-manga-returns-policy-v2.md
│   └── pre-order-cancellation-policy-v3.md
├── subscriptions/
│   ├── manga-unlimited-plan-v5.md
│   ├── premium-plan-v3.md
│   └── family-plan-v2.md
├── billing/
│   ├── payment-methods-faq-v6.md
│   └── invoice-download-guide-v2.md
├── shipping/
│   ├── domestic-shipping-policy-v8.md
│   └── international-shipping-policy-v4.md
└── account/
    ├── account-deletion-guide-v3.md
    └── data-privacy-faq-v2.md

Each document carries metadata:

{
  "doc_id": "digital-manga-returns-policy-v2",
  "version": 2,
  "effective_date": "2025-01-15",
  "supersedes": "digital-manga-returns-policy-v1",
  "status": "active",
  "locale": "en-JP"
}
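Given that metadata, the ingestion pipeline's "deprecate old version" step can be sketched as following the supersedes pointer. This is a minimal in-memory sketch, not the actual Ingest Lambda:

```python
def deprecate_superseded(docs: dict[str, dict]) -> None:
    """Whenever a doc names a predecessor in `supersedes`,
    mark that predecessor superseded so the version filter drops it."""
    for doc in list(docs.values()):
        old_id = doc.get("supersedes")
        if old_id and old_id in docs:
            docs[old_id]["status"] = "superseded"
```

Because retrieval filters on status == "active", flipping the predecessor's status is enough to keep deprecated policy text out of answers without deleting it from the index.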


Document Ingestion Pipeline

flowchart LR
    S3([Policy doc uploaded\nto S3]) --> EB2[EventBridge Rule\ns3:PutObject trigger]
    EB2 --> LA[Ingest Lambda]
    LA --> CH2[Chunker\n512 tokens · 64 overlap]
    CH2 --> TEB[Titan Embed v2\nper chunk]
    TEB --> OS2[(OpenSearch\nPolicy Index)]
    LA --> DI[Deprecate old version\nmark status=superseded]
    DI --> OS2

    style S3 fill:#4A90D9,color:#fff
    style OS2 fill:#E67E22,color:#fff

Chunking strategy: Policy documents are chunked to a target of 512 tokens with 64-token overlap, but split points are snapped to paragraph boundaries rather than falling at fixed token counts. This preserves legal meaning: no sentence is ever split across chunks.
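A sketch of such a paragraph-aware chunker, assuming the 512-token target and 64-token overlap from the pipeline above, with "tokens" approximated by whitespace-separated words for simplicity:

```python
def chunk_paragraphs(text: str, max_tokens: int = 512, overlap: int = 64) -> list[str]:
    """Pack whole paragraphs into chunks of ~max_tokens, carrying
    trailing paragraphs forward to provide ~overlap tokens of context."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[list[str]] = []
    current: list[str] = []
    count = 0
    for para in paragraphs:
        n = len(para.split())
        if current and count + n > max_tokens:
            chunks.append(current)
            # Seed the next chunk with trailing paragraphs until ~overlap tokens.
            carried: list[str] = []
            carried_count = 0
            for prev in reversed(current):
                carried_count += len(prev.split())
                carried.insert(0, prev)
                if carried_count >= overlap:
                    break
            current, count = carried, carried_count
        current.append(para)
        count += n
    if current:
        chunks.append(current)
    return ["\n\n".join(c) for c in chunks]
```

Because paragraphs are the atomic unit, a chunk may exceed max_tokens when a single paragraph is very long; a real chunker would need a fallback for that case.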


Citation and Grounding

Every answer_faq result includes citations that Claude must surface to the user:

@dataclass
class FAQAnswer:
    answer_text: str                # Grounded answer constructed from chunks
    confidence: float               # 0.0-1.0, based on top chunk rerank score
    sources: list[PolicySource]     # [{doc_id, section, url, version, effective_date}]
    caveat: str | None              # e.g. "Policy may vary for international orders"
    escalation_suggested: bool      # True if confidence < 0.6

The tool description explicitly says:

"Always include source citations in your response. Never answer policy questions without citing this tool's sources."
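The Citation Injector step from the pipeline can be sketched as a small formatting pass that appends a [Source: doc_id] tag to each retrieved chunk before the tool result goes back to Claude; the chunk dict keys here are illustrative assumptions.

```python
def inject_citations(chunks: list[dict]) -> str:
    """Append a [Source: doc_id] tag to each chunk's text,
    joining chunks into one tool-result string."""
    return "\n\n".join(f"{c['text']} [Source: {c['doc_id']}]" for c in chunks)
```

Keeping the tag format rigid makes it trivial to verify downstream that every citation Claude surfaces actually appeared in a tool result.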


Confidence Threshold & Escalation

flowchart TD
    RK2([Reranked chunks]) --> CS[Confidence Score\ntop-chunk rerank score]
    CS --> TH{Score threshold}
    TH -->|≥ 0.75| HA[High confidence\nAnswer with citation]
    TH -->|0.5 – 0.74| MA[Medium confidence\nAnswer + disclaimer + suggest escalate]
    TH -->|< 0.5| ES[Low confidence\nDo not answer · Trigger escalate_to_agent]

    HA --> TR2([Tool Result])
    MA --> TR2
    ES --> ET[escalate_to_agent tool\nCreates Zendesk ticket]
    ET --> TR2

    style TH fill:#8E44AD,color:#fff
    style ES fill:#C0392B,color:#fff
    style HA fill:#27AE60,color:#fff
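The threshold routing above reduces to a few comparisons; the cutoffs come straight from the diagram.

```python
def route_by_confidence(score: float) -> str:
    """Map the top-chunk rerank score to a handling path."""
    if score >= 0.75:
        return "answer"                  # high: answer with citation
    if score >= 0.5:
        return "answer_with_disclaimer"  # medium: add disclaimer, suggest escalation
    return "escalate"                    # low: do not answer, create a ticket
```

Note the bands are half-open, so a score of exactly 0.75 routes high and exactly 0.5 routes medium, matching the diagram's "≥ 0.75" and "0.5 – 0.74" labels.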

Refund Eligibility Logic

flowchart TD
    RF([check_refund_eligibility\norder_id · reason]) --> OL[Fetch Order\nfrom Order MCP]
    OL --> IT{Item type?}
    IT -->|Physical| PH[Physical Rules\nReturn window: 30 days\nCondition: unopened]
    IT -->|Digital| DG[Digital Rules\nReturn window: 14 days\nFirst purchase only]
    IT -->|Subscription| SB[Subscription Rules\nPro-rata refund\nNo partial months]

    PH --> EL{Eligible?}
    DG --> EL
    SB --> EL

    EL -->|Yes| AP[Return: eligible\nSteps + pre-paid label URL]
    EL -->|No, close to window| OL2[Offer goodwill credit\ninstead of refund]
    EL -->|No| DN[Return: ineligible\nReason + policy citation]

    style RF fill:#4A90D9,color:#fff
    style AP fill:#27AE60,color:#fff
    style DN fill:#C0392B,color:#fff

Note: Refund eligibility always calls Order MCP for authoritative order data — it does not trust the user's stated order details.
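The per-type rules in the diagram can be sketched as below. The 7-day "close to window" band for the goodwill-credit branch is an assumption (the diagram does not define it), and the subscription pro-rata branch is omitted for brevity.

```python
from datetime import date

def check_refund(item_type: str, purchased: date, today: date,
                 unopened: bool = True, first_purchase: bool = True) -> str:
    """Return 'eligible', 'goodwill_credit', or 'ineligible'
    per the physical/digital rules in the flowchart."""
    age = (today - purchased).days
    if item_type == "physical":
        if age <= 30 and unopened:            # 30-day window, unopened condition
            return "eligible"
        if 30 < age <= 37:                    # assumed 7-day goodwill band
            return "goodwill_credit"
        return "ineligible"
    if item_type == "digital":
        if age <= 14 and first_purchase:      # 14-day window, first purchase only
            return "eligible"
        if 14 < age <= 21 and first_purchase: # assumed 7-day goodwill band
            return "goodwill_credit"
        return "ineligible"
    raise ValueError(f"unknown item_type: {item_type}")
```

In the real flow, purchased and item_type would come from the Order MCP lookup, never from user input, per the note above.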


Human Escalation Handoff

sequenceDiagram
    participant User
    participant Claude
    participant SupportMCP
    participant Zendesk

    User->>Claude: "I've been charged twice for the same order"
    Claude->>SupportMCP: answer_faq(question="charged twice for order")
    SupportMCP-->>Claude: confidence=0.3, escalation_suggested=true
    Claude->>SupportMCP: escalate_to_agent(issue_summary="Duplicate charge reported", user_id="U123")
    SupportMCP->>Zendesk: Create ticket via Zendesk API
    Zendesk-->>SupportMCP: ticket_id="ZD-88421", eta="2-4 hours"
    SupportMCP-->>Claude: {ticket_id, eta, agent_name?}
    Claude->>User: "I've created support ticket ZD-88421.\nA specialist will contact you within 2–4 hours."
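The "Create ticket via Zendesk API" step might build a request like the following. The /api/v2/tickets.json path and {"ticket": {...}} envelope follow Zendesk's Tickets API; the subdomain, tag, and field choices are illustrative assumptions, and auth is omitted.

```python
def build_ticket_request(issue_summary: str, user_id: str) -> dict:
    """Assemble the HTTP request payload for creating a Zendesk ticket."""
    return {
        "method": "POST",
        "url": "https://example.zendesk.com/api/v2/tickets.json",
        "json": {
            "ticket": {
                "subject": issue_summary[:80],  # keep subjects short
                "comment": {"body": f"Escalated by Support MCP for user {user_id}."},
                "tags": ["mcp_escalation"],
            }
        },
    }
```

Returning a plain payload (rather than performing the call inline) keeps the handler easy to unit-test and to retry.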

Security: Preventing Policy Manipulation

| Risk | Mitigation |
|------|------------|
| User asks Claude to "ignore the return policy and issue a refund" | Tool descriptions use the imperative: "Report only what this tool returns. Never issue refunds." Claude cannot take financial actions; it can only read eligibility. |
| Prompt injection in policy docs | Docs are stored in S3 with a strict bucket policy (no public write). The Ingest Lambda sanitises HTML/JS before indexing. |
| Outdated policy shown | The version filter excludes docs with status != "active". EventBridge triggers re-indexing on every S3 upload. |
| Fabricated policy citation | Tool results include doc_id + version + effective_date. Claude is instructed: "Only cite sources that appear in the tool result." |

Interview Grill

Q: How do you handle policy updates that are retroactively applied vs. grandfathered?

A: Each policy doc carries an applies_to_orders_after date. The check_refund_eligibility tool passes the order's created_at date to the OpenSearch filter, ensuring the version of the policy in force on that order's date is retrieved, not the current active policy.

Q: What prevents Claude from making up a policy answer if the FAQ tool returns nothing?

A: Two safeguards: (1) low confidence triggers escalate_to_agent, and the tool result explicitly says "do_not_answer": true; (2) the system prompt includes: "For policy questions, you MUST use the support MCP tools. Never answer policy questions from your training knowledge."

Q: How do you keep the policy corpus up to date across locales?

A: Japanese and English policies are separate documents in OpenSearch, distinguished by a locale field. The query carries the Accept-Language value from the API Gateway header, and retrieval pre-filters on locale, falling back to en-JP if the locale-specific document is missing.
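A minimal sketch of that locale pre-filter with fallback, operating on the document metadata shown earlier:

```python
def pick_locale_docs(docs: list[dict], locale: str, fallback: str = "en-JP") -> list[dict]:
    """Prefer docs matching the requested locale; otherwise fall back."""
    matched = [d for d in docs if d.get("locale") == locale]
    return matched or [d for d in docs if d.get("locale") == fallback]
```

In OpenSearch terms this would be a term filter on locale applied before retrieval, with a second query on the fallback locale only when the first returns nothing.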

Q: Why use OpenSearch for policy docs rather than just S3 Select?

A: S3 Select is keyword-only and can't do semantic search. A user asking "can I get my money back on a manga I disliked?" won't match the phrase "refund eligibility". OpenSearch enables the semantic retrieval that makes the tool feel intelligent rather than robotic.