
BMS - AI Platform Core

Resume / Intro

Tell me about yourself

Why this company?

Why this role?

Any questions before starting?

🧠 Project Deep Dive – RAG Chatbot

Walk me through your RAG-based chatbot

What does the low-level RAG pipeline look like?

How are documents processed and indexed into OpenSearch?

What triggers the ingestion worker?

What kind of retrieval strategy did you use?

What was the scale (documents vs chunks)?
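When prepping the documents-vs-chunks answer, it helps to be concrete about how one becomes the other. A minimal sketch of fixed-size chunking with overlap (the function name, sizes, and overlap value here are illustrative assumptions, not the project's actual parameters):

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split a document into overlapping fixed-size character chunks.

    Overlap keeps sentences that straddle a boundary retrievable from
    both neighbouring chunks.
    """
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]
```

With settings like these, a corpus of N documents typically expands to an order of magnitude more chunks, which is the number that actually drives index size and retrieval latency.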

🔍 Retrieval & Evaluation

How did you validate retrieval techniques?

How did you validate metadata design vs query domain?

What is Hit@K / retrieval hit rate?

What metrics did you use for evaluation?

Did you experiment with non-embedding methods (BM25 etc.)?
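Since Hit@K comes up explicitly above, it is worth being able to define it precisely: the fraction of labelled queries for which at least one relevant document appears in the top-k retrieved results. A minimal sketch (function names and the labelled-pair input format are assumptions for illustration):

```python
def hit_at_k(retrieved_ids, relevant_ids, k):
    """Return 1 if any relevant document appears in the top-k results, else 0."""
    return int(any(doc_id in relevant_ids for doc_id in retrieved_ids[:k]))

def hit_rate_at_k(labelled_queries, k):
    """Average Hit@K over a labelled query set.

    `labelled_queries` is a list of (retrieved_ids, relevant_ids) pairs,
    with retrieved_ids already ranked by score.
    """
    hits = [hit_at_k(retrieved, relevant, k)
            for retrieved, relevant in labelled_queries]
    return sum(hits) / len(hits)
```

The same labelled set also supports MRR or recall@k if the interviewer pushes beyond hit rate.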

⚠️ Failure Cases & Improvements

What are scenarios where retrieval fails?

How do you handle multi-intent queries?

🧠 Query Orchestration / Intelligence

How do you break complex queries into sub-queries?

How do you decide intent and decomposition logic?

How do you orchestrate parallel retrieval + merge?
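For the parallel-retrieval-plus-merge question, a useful talking point is fan-out with deduplication by document id, keeping each document's best score. A minimal sketch using `asyncio` (the stub retriever and dict-backed index stand in for real OpenSearch calls; all names here are illustrative assumptions):

```python
import asyncio

async def retrieve(sub_query, index):
    """Stub retriever: in a real system this would query OpenSearch."""
    return index.get(sub_query, [])  # list of (doc_id, score) pairs

async def fan_out_and_merge(sub_queries, index, top_k=5):
    """Run retrieval for all sub-queries concurrently, then merge.

    Duplicates across sub-queries are collapsed to a single entry,
    keeping the highest score seen for each document.
    """
    results = await asyncio.gather(*(retrieve(q, index) for q in sub_queries))
    best = {}
    for hits in results:
        for doc_id, score in hits:
            if score > best.get(doc_id, float("-inf")):
                best[doc_id] = score
    ranked = sorted(best.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]
```

Score-based merging like this assumes scores are comparable across sub-queries; rank-based fusion (e.g. reciprocal rank fusion) is the usual alternative when they are not.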

🔄 Conversational AI / Context Handling

Does your system handle multi-turn conversations?

How do you make queries context-aware?

How does context → query rewrite → retrieval work?

Do you use a sliding window for context?

How do you use LLM for query understanding?
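The sliding-window and query-rewrite questions above fit together: keep only the last few turns, then ask an LLM to rewrite the new query so it stands alone before retrieval. A minimal sketch of the prompt-assembly side (function names, the turn format, and the prompt wording are all assumptions; the actual LLM call is omitted):

```python
def sliding_window(history, max_turns=6):
    """Keep only the most recent turns as rewrite context."""
    return history[-max_turns:]

def build_rewrite_prompt(history, query, max_turns=6):
    """Assemble an LLM prompt asking for a standalone rewrite of `query`.

    `history` is a list of (role, text) tuples; only the sliding window
    of recent turns is included, bounding prompt size per request.
    """
    window = sliding_window(history, max_turns)
    lines = [f"{role}: {text}" for role, text in window]
    return (
        "Rewrite the final user question so it is self-contained.\n\n"
        + "\n".join(lines)
        + f"\nuser: {query}\nStandalone question:"
    )
```

The rewritten, self-contained query is what gets embedded and sent to retrieval, so earlier pronouns ("it", "that one") resolve correctly.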

💾 State & Persistence

What persistence layer did you use for conversation?

Did you hit DynamoDB size limits? How did you handle it?
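DynamoDB enforces a hard 400 KB per-item limit, so long conversations eventually stop fitting in a single item. One common pattern worth describing is checking the serialized size before writing and spilling oversized payloads to a secondary store (S3 in practice), leaving a pointer in the table. A minimal sketch with plain dicts standing in for the table and overflow store (all names and the safety margin are illustrative assumptions, not the project's actual implementation):

```python
import json

DYNAMO_ITEM_LIMIT = 400 * 1024  # DynamoDB's documented per-item limit (400 KB)

def fits_in_dynamo(item, safety_margin=4096):
    """Approximate item size via its JSON encoding, with headroom to spare."""
    size = len(json.dumps(item).encode("utf-8"))
    return size + safety_margin <= DYNAMO_ITEM_LIMIT

def store_conversation(item, table, overflow):
    """Write small items directly; spill large ones and keep a pointer."""
    if fits_in_dynamo(item):
        table[item["id"]] = item
    else:
        overflow[item["id"]] = item  # e.g. an S3 object in a real system
        table[item["id"]] = {"id": item["id"], "overflow": True}
```

Alternatives worth mentioning in the same breath: splitting the conversation across multiple items keyed by turn range, or truncating/summarizing old turns instead of spilling.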

📊 Monitoring, Evaluation & Feedback

Do you have monitoring pipelines?

How do you collect user feedback?

What does feedback look like (👍 / 👎 system)?

Do users provide the correct answer for 👎?

How do you build an automated feedback → evaluation pipeline?

👥 Leadership / Mentoring

Tell me about a project with junior team members

How did you mentor and guide them?

🚀 End-to-End Ownership

Tell me about a project that went from idea → production

What were your decisions and trade-offs?

🌍 Tech Awareness / Growth

How do you keep up with GenAI trends?

🧑‍💻 Frontend / Full Stack

Are you familiar with React and TypeScript?

🙋 Questions for the Team

How do you define success for Accelerator → Scale?

What are current gaps in GenAI platform components?

How are pods structured?

How do you handle responsible AI and evaluation?

What does success look like in the first 90 days?

What this set actually tests (important insight)

These questions are not random. They are testing:

End-to-end ownership (idea → production)

RAG depth (not surface-level)

Platform thinking (reusable systems)

Failure handling + debugging mindset

Context + agentic reasoning

Evaluation + responsible AI

Leadership + mentoring

Real production experience (not just theory)