
Scenario 2 — SageMaker Inference Containers And Cold-Start Elimination

User Story

As a MangaAssist user hitting the system right after a scale-out event — say, right when a new One Piece chapter dropped and thousands of users flooded in — I wanted my first message to stay within the normal SLA instead of waiting 45 to 120 seconds for a cold model container to load my manga recommendation engine.


The Production Problem

The docs show the real container startup chain for MangaAssist's custom fine-tuned inference model:

  • Docker container start: ~45 seconds
  • Model weight download and initialization: additional time after that
  • First live request: potentially ~120 seconds after provisioning

That is not a small regression. For a user asking "what should I read next?" that is a broken experience. The issue was compounded during chapter-drop traffic spikes when many containers cold-started simultaneously.


What We Actually Did

  • Added a warmup script that sent synthetic manga-style requests during container startup.
  • Kept /ping unhealthy (HTTP 503) until warmup completed, so the load balancer never routed real users to a cold instance (a minimal server sketch follows this list).
  • Set min_instance_count=2 so the endpoint never scaled to zero.
  • Reduced load time with smaller/faster model artifacts: quantization, ONNX/TorchScript where applicable, safetensors format for large-model startup.
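
A minimal sketch of the warmup-plus-readiness-gating pattern. This is illustrative, not the production MangaAssist server: the Flask app, the stub `load_model`, and the `WARMUP_PROMPTS` list are stand-ins.

```python
import threading
import time

from flask import Flask, request

app = Flask(__name__)
warmup_complete = threading.Event()
model = None  # populated by the warmup thread

# Stand-ins for the traffic shapes MangaAssist actually sees.
WARMUP_PROMPTS = [
    "recommend a manga",                            # short intent query
    "recommend three titles similar to One Piece",  # medium recommendation
    "multi-turn genre exploration: seinen, mystery, long-running plots",
]

def load_model():
    """Stand-in for weight download + framework initialization."""
    time.sleep(5)
    return lambda prompt: f"recommendation for: {prompt}"

def load_and_warm():
    global model
    model = load_model()
    for prompt in WARMUP_PROMPTS:
        model(prompt)          # primes caches and kernel paths
    warmup_complete.set()      # only now is the container traffic-ready

@app.route("/ping", methods=["GET"])
def ping():
    # SageMaker's load balancer polls /ping. Returning 503 until warmup
    # finishes means no real user ever lands on a cold instance.
    return ("", 200) if warmup_complete.is_set() else ("", 503)

@app.route("/invocations", methods=["POST"])
def invocations():
    # By the time the load balancer routes traffic here, warmup has
    # completed and `model` is set.
    return model(request.get_data(as_text=True))

if __name__ == "__main__":
    threading.Thread(target=load_and_warm, daemon=True).start()
    app.run(host="0.0.0.0", port=8080)
```

The key property is that the warmup thread, not web-server startup, decides when /ping flips from 503 to 200.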

Deep-Dive Questions And Answers

Q1. Why was the problem a Docker container problem and not just a model problem? Because the delay included container startup, dependency initialization, model loading, and first-request framework warmup, including CUDA graph capture. The model was part of the cost, but the user-facing symptom came from the full container lifecycle. Fixing only the model artifact without addressing container readiness would still have exposed users to part of the startup delay.

Q2. Why did you fail health checks until warmup completed? Because a container can be "process alive" but still not be "traffic ready." Returning 503 until warmup finished prevented the load balancer from routing real MangaAssist users to an instance that hadn't yet completed model initialization and kernel warmup. Process-alive is not the same as traffic-ready — that distinction is the core lesson.

Q3. Why did warmup use multiple synthetic prompts instead of a single dummy request? One request shape isn't enough. We warmed the actual traffic shapes that MangaAssist sees: short intent queries ("recommend a manga"), medium recommendation responses, and longer multi-turn genre exploration prompts. That primes caches and CUDA kernel paths much closer to real production traffic.
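
To make the shape argument concrete, here is a hedged sketch of shape-diverse warmup. It assumes a PyTorch model and a Hugging Face-style tokenizer; `warm_kernel_paths` and the (batch, seq_len) grid are illustrative, not our exact values.

```python
import itertools

import torch

def warm_kernel_paths(model, tokenizer, device="cuda"):
    # Sweep the (batch, seq_len) shapes that real traffic produces:
    # short intent queries, medium recommendations, long multi-turn.
    shapes = itertools.product([1, 4], [16, 128, 512])
    for batch, seq_len in shapes:
        dummy = torch.randint(
            0, tokenizer.vocab_size, (batch, seq_len), device=device
        )
        with torch.no_grad():
            model(dummy)  # triggers per-shape kernel selection/compilation
    torch.cuda.synchronize()  # ensure all warmup work actually finished
```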

Q4. Why was min=2 a better choice than scale-to-zero? Scale-to-zero is cheaper only if your SLA tolerates cold starts. Ours did not — users arriving during chapter drops expect immediate responses. Keeping two hot instances meant one could absorb traffic while the other restarted or warmed up after any maintenance or failure.
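
For concreteness, this is roughly how a min=2 floor is set on a real-time SageMaker variant via Application Auto Scaling; the endpoint name, variant name, and thresholds are placeholders, not our production values.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "endpoint/mangaassist-recs/variant/AllTraffic"

# Floor of two warm instances; autoscaling handles spikes above it.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=2,   # never scale below two warm instances
    MaxCapacity=12,  # headroom for chapter-drop spikes
)

# Scale above the floor on per-instance load; threshold is illustrative.
autoscaling.put_scaling_policy(
    PolicyName="mangaassist-invocations-tracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance",
        },
    },
)
```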

Q5. What metrics proved the fix worked? Cold-start exposure dropped from ~118–120 seconds in first-hit scenarios to ~22 seconds in the remaining edge case. SLA violations fell to a very small fraction of traffic once warmup plus minimum-capacity controls were in place. That remaining 22 seconds is the absolute floor imposed by container start time; it cannot be eliminated, only made rare.

Q6. What is the best short answer if an interviewer asks for the core lesson? Treat readiness and liveness differently. A container should not be considered healthy just because the process is up. It should be considered healthy only after the model and runtime are actually ready for production traffic.


Optimizations We Can Credibly Claim

  • Startup warmup requests matching real MangaAssist traffic shapes
  • Readiness gating via health check — 503 until fully warm
  • Minimum 2 hot instances as floor capacity
  • Smaller/faster-loading model artifacts (quantization, safetensors; see the loading sketch after this list)
  • Faster storage and serialization choices for large model startup
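
A sketch of the artifact-loading optimization referenced in the list above: safetensors memory-maps weights and can materialize tensors directly on the target device, skipping pickle overhead. The paths and device are placeholders, and this assumes the checkpoint was exported in safetensors format.

```python
import torch
from safetensors.torch import load_file

def load_weights_fast(path="model.safetensors", device="cuda"):
    # load_file memory-maps the file and materializes tensors on the
    # requested device, skipping pickle and an extra CPU round-trip.
    return load_file(path, device=device)

# Contrast with the slower path: torch.load must unpickle the whole
# checkpoint into CPU memory before any tensor can move to the GPU.
def load_weights_slow(path="model.pt", device="cuda"):
    state_dict = torch.load(path, map_location="cpu")
    return {k: v.to(device) for k, v in state_dict.items()}
```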

Better-Than-Naive Explanation

The naive answer is "autoscaling fixed it." It did not. Autoscaling only provisions capacity — it doesn't make that capacity ready for traffic. We fixed the user experience by making container readiness depend on successful warmup, then by keeping a minimum hot pool so the platform rarely exposed a cold path at all. Autoscaling was still there for capacity, but warmup + readiness gating was what actually protected the user.


Decision Table

| Dimension | Details |
| --- | --- |
| Why scale-to-zero was rejected | MangaAssist's SLA doesn't tolerate 120s cold starts; chapter-drop traffic arrives in bursts that need immediate capacity |
| Why min=2, not min=1 | One instance absorbs traffic while the other restarts/warms; a single-instance floor leaves no redundancy |
| Why multiple warmup prompts | CUDA kernel paths are prompt-shape dependent; a single dummy request leaves most real traffic paths cold |
| Liveness vs readiness | Liveness: is the process alive? Readiness: is the container ready for production traffic? Both needed, both different |
| Quantization rationale | Smaller model artifacts = faster weight download = shorter startup path = smaller exposure window |
| Scale mechanism | min=2 warm floor + autoscaling above it for spike traffic |
| Key metric | Cold-start exposure: 118–120s → ~22s (the remaining floor is container start time and cannot be eliminated) |

Tradeoffs Discussed

| Option considered | Why rejected or scoped |
| --- | --- |
| Scale-to-zero | Cheapest option but an SLA violation during chapter drops; rejected for the serving path |
| min=1 floor | Not enough — single-instance failure leaves zero warm capacity |
| Single dummy warmup request | Doesn't prime all CUDA kernel paths for real manga query shapes |
| Larger GPU instance only | More VRAM and faster init, but doesn't solve the readiness-gating gap; adds cost |
| Purely synchronous startup without health-check gating | Process starts fast but traffic lands on an unready container — the wrong failure mode |

Scale Planned

| Condition | Behavior |
| --- | --- |
| Normal traffic | 2 warm instances serving all requests |
| Chapter-drop spike | Autoscaling provisions new instances; warmup script runs; readiness gate holds until ready |
| Instance failure/restart | Second warm instance absorbs traffic during recovery |
| First-ever cold boot | Readiness gate blocks traffic; ~22s minimum startup before any user hit |

Intuition From This Scenario

"Healthy" has two definitions and most systems only implement one. Process-alive is trivial to check. Traffic-ready requires knowing that the model loaded, CUDA graphs compiled, and framework warmed the common inference paths. Every inference container you deploy should have both checks, and the load balancer should only see the traffic-ready check. The cost of min=2 and a warmup script is negligible compared to the cost of every user's first request during a chapter drop hitting a 2-minute cold container.