
# MLflow in MangaAssist

Deep dives on how MLflow supports tracing, experiments, registry workflows, feedback analysis, and Bedrock integrations in MangaAssist.

This folder expands the shorter MLflow observability note in Tech-Stack/03-mlflow-llm-observability.md into scenario-level walkthroughs and implementation details that map directly to the MangaAssist architecture.

## Document Index

| # | Document | Focus |
|---|----------|-------|
| 1 | 01-mlflow-deep-dive-scenarios.md | Production scenarios where MLflow materially improved debugging, quality, rollout safety, and cost control |
| 2 | 02-mlflow-bedrock-and-service-integration.md | How MLflow integrates with Bedrock, SageMaker, OpenSearch, AppConfig, CloudWatch, and storage systems |
| 3 | 03-mlflow-low-level-implementation-guide.md | Low-level implementation guide with components, span contracts, schemas, rollout steps, and code patterns |
Suggested reading order:

1. Start with the scenarios document to understand why MLflow mattered in this chatbot.
2. Read the service integration document next to see how Bedrock and the rest of the AWS stack fit into the tracing and evaluation design.
3. Finish with the low-level implementation guide when you are ready to build the same setup in code.

## What MLflow Covers in This Project

- End-to-end request tracing from user message to final response.
- Offline and online evaluation runs for prompts, model bundles, and retriever changes.
- Model and prompt lineage across SageMaker-hosted models and Bedrock-backed generation.
- Feedback correlation so thumbs-down events, escalations, and guardrail blocks can be tied back to exact traces.
- Cost, latency, and quality analysis using shared IDs and consistent metadata.
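The feedback-correlation and shared-ID points above hinge on one convention: every downstream record carries the trace ID of the request that produced it. The sketch below illustrates that data model in plain Python with hypothetical names (`TraceRecord`, `FeedbackEvent`, `correlate`); it is not the project's actual schema or the MLflow API, just the joining logic under that assumption.

```python
from dataclasses import dataclass
from uuid import uuid4

# Hypothetical data model: each request gets one trace_id, and every
# later record (response metadata, feedback, guardrail event) carries it.
@dataclass
class TraceRecord:
    trace_id: str
    user_message: str
    latency_ms: float = 0.0
    cost_usd: float = 0.0

@dataclass
class FeedbackEvent:
    trace_id: str  # same ID as the trace it refers to
    kind: str      # e.g. "thumbs_down", "escalation", "guardrail_block"

def correlate(traces: list[TraceRecord],
              feedback: list[FeedbackEvent]) -> dict[str, list[str]]:
    """Group feedback kinds by trace_id so a thumbs-down maps to its exact trace."""
    by_id: dict[str, list[str]] = {t.trace_id: [] for t in traces}
    for f in feedback:
        by_id.setdefault(f.trace_id, []).append(f.kind)
    return by_id

# Usage: one traced request that later receives a thumbs-down.
tid = str(uuid4())
traces = [TraceRecord(trace_id=tid, user_message="Where is chapter 12?", latency_ms=840.0)]
feedback = [FeedbackEvent(trace_id=tid, kind="thumbs_down")]
print(correlate(traces, feedback)[tid])  # ['thumbs_down']
```

With this join in place, cost and latency fields on the trace side can be aggregated per feedback kind, which is the basis of the cost/quality analysis described above.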