
Interview Scenarios - Multi-Turn Context Poisoning in Long Sessions Follow-Up Answers

Question document: README.md
Source document: 09-interview-scenarios.md
Reference scenario: 01-prompt-injection-defense.md -> Scenario 4: Multi-Turn Context Poisoning in Long Sessions

Scenario lens: Gradual scope drift across long sessions, where no single turn is clearly malicious but the accumulated context becomes unsafe.
Document lens: Interview Scenarios.

Use this file as the answer key for the follow-up questions in README.md.

Easy

Q1

Question: How would you explain multi-turn context poisoning to someone who thinks every attack should be visible in a single prompt?
Answer: Explain that each turn can look harmless while the whole conversation becomes dangerous. That framing makes session-level monitoring feel intuitive.

Q2

Question: Which signal or threshold would you mention first to make session-level drift monitoring feel concrete?
Answer: Mention one concrete signal such as a drift score, a reset threshold, or a restricted-topic ratio. A measurable trigger makes the answer stronger.
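As a concrete illustration of that kind of trigger, here is a minimal Python sketch of a restricted-topic-ratio monitor. The topic labels, window size, and threshold are all hypothetical choices for the interview answer, not a reference implementation.

```python
from collections import deque

# Hypothetical labels a per-turn topic classifier might emit.
RESTRICTED = {"credentials", "exfiltration", "jailbreak"}

class DriftMonitor:
    """Track the restricted-topic ratio over a rolling window of turns."""

    def __init__(self, window=10, reset_threshold=0.3):
        self.turn_topics = deque(maxlen=window)  # oldest turns fall off
        self.reset_threshold = reset_threshold

    def record_turn(self, topic: str) -> bool:
        """Record one classified turn; return True if the session should reset."""
        self.turn_topics.append(topic)
        restricted = sum(t in RESTRICTED for t in self.turn_topics)
        return restricted / len(self.turn_topics) >= self.reset_threshold
```

The point of the sketch is the shape of the signal: a single restricted turn in a long benign window stays below the threshold, while a run of them trips the reset.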

Medium

Q1

Question: What follow-up questions would you expect from an SRE, a security engineer, and a PM after describing scope drift in long sessions?
Answer: An SRE asks about alerts and containment, a security engineer about detection quality, and a PM about user friction. Prepare one sentence for each.

Q2

Question: How would you keep the answer concise while still showing that session management is both a product and security problem?
Answer: Use a three-sentence frame: one sentence on the personalization value of long-lived context, one on the safety risk it accumulates, and one on the tradeoff between them. That structure signals you see both sides of session management while staying concise.

Hard

Q1

Question: How would you defend summarization or periodic resets if the interviewer argues they degrade personalization and user trust?
Answer: Defend resets or summarization by saying they trade a little continuity for much better control over accumulated risk. Interviewers usually accept that if you name what context you keep and what you discard.
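One way to make "name what you keep and what you discard" concrete is a summarize-and-reset sketch like the following; the field names and allowlist are hypothetical.

```python
# Fields of the session profile considered safe to carry across a reset.
KEEP_FIELDS = {"user_language", "preferred_format", "task_goal"}

def summarize_and_reset(session: dict) -> dict:
    """Return a fresh session carrying forward only allowlisted context.

    The raw turn history, where drift accumulates, is discarded entirely;
    only named, low-risk profile fields survive the reset.
    """
    summary = {k: v for k, v in session.get("profile", {}).items()
               if k in KEEP_FIELDS}
    return {"profile": summary, "turns": []}
```

Naming the allowlist explicitly is what wins the argument: personalization keeps its named inputs, while everything capable of carrying poisoned context is dropped.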

Q2

Question: What evidence from logging, dashboards, or adversarial test conversations would you cite to show the threat is real and measurable?
Answer: Use logs, dashboards, or scripted red-team conversations as proof that the threat is measurable. Without that evidence, the story sounds hypothetical.
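A scripted red-team conversation can be reduced to a toy check like this one, which demonstrates the core claim: no single turn crosses the per-turn alarm, yet the accumulated session does. The scores and thresholds are invented for illustration.

```python
# Hypothetical alarm levels for a per-turn and a session-level detector.
TURN_ALARM = 0.5
SESSION_ALARM = 1.5

# Hypothetical per-turn risk scores from a scripted, slowly escalating
# conversation: each turn stays under the per-turn alarm.
scripted_scores = [0.2, 0.3, 0.35, 0.4, 0.45]

per_turn_flags = [s >= TURN_ALARM for s in scripted_scores]  # all False
session_score = sum(scripted_scores)                         # accumulates
session_flag = session_score >= SESSION_ALARM                # fires
```

Replayed against a real detector and logged to a dashboard, a script with this shape is exactly the evidence the answer calls for.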

Very Hard

Q1

Question: How would you respond if the interviewer asks you to compare session-level defenses with cross-session campaign detection in real time?
Answer: Compare session-level defenses and campaign detection as complementary layers, not substitutes. One protects the active conversation; the other tells you whether the system is being systematically probed.
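The cross-session layer can be sketched as a simple aggregation of per-session drift flags by source; the grouping key and minimum-session count here are hypothetical.

```python
from collections import Counter

def campaign_sources(session_flags, min_sessions=3):
    """Flag sources whose sessions repeatedly drifted.

    session_flags: iterable of (source_id, drifted) pairs, one per
    completed session. A single drifted session is noise; several from
    the same source suggests systematic probing.
    """
    counts = Counter(src for src, drifted in session_flags if drifted)
    return {src for src, n in counts.items() if n >= min_sessions}
```

The complementarity is visible in the inputs: the session layer produces the per-session verdicts, and the campaign layer only exists because those verdicts are aggregated across sessions.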

Q2

Question: If you had to turn this into a design interview follow-up, what architecture tradeoff would you put at the center of the discussion?
Answer: Center the design discussion on how much memory to keep, where to summarize, and when to escalate. That tradeoff reveals engineering maturity.
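Those three dials (retention, summarization, escalation) can be captured in a toy policy sketch; every threshold below is a hypothetical placeholder for the tradeoff under discussion.

```python
# The three dials of the design discussion: retention, summarization,
# escalation. All values are hypothetical placeholders.
SESSION_POLICY = {
    "max_raw_turns": 50,          # how much memory to keep verbatim
    "summarize_after_turns": 20,  # where to start summarizing old context
    "escalate_drift_score": 0.6,  # when to lock the session / page a human
}

def next_action(turn_count: int, drift_score: float) -> str:
    """Pick the session action implied by the policy, most severe first."""
    if drift_score >= SESSION_POLICY["escalate_drift_score"]:
        return "escalate"
    if turn_count >= SESSION_POLICY["max_raw_turns"]:
        return "reset"
    if turn_count >= SESSION_POLICY["summarize_after_turns"]:
        return "summarize"
    return "continue"
```

Walking an interviewer through why each threshold sits where it does, and what moving it costs in personalization versus risk, is the maturity signal the answer describes.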