# Fine-Tuning Techniques
Notes on fine-tuning techniques for ML platform / Applied AI interview preparation. The file index below shows what is in scope; follow the links to the individual notes for depth.
## Interview talking points
- This is a sub-topic under Fine-Tuning-Foundational-Models. See the parent for the broader interview framing.
## Files in this folder
| File | Title |
|---|---|
| 04-lora-qlora-llm-customization.md | 04. LoRA/QLoRA LLM Customization — Adapting Claude 3.5 Sonnet via Parameter-Efficient Methods |
| 04-lora_qlora_scenarios_mangaassist.md | LoRA and QLoRA Scenarios - MangaAssist |
| 11-prompt-tuning-prefix-tuning.md | 11. Prompt Tuning and Prefix Tuning — Lightweight Alternatives to LoRA |
| 11-prompt_prefix_tuning_scenarios_mangaassist.md | Prompt Tuning and Prefix Tuning Scenarios - MangaAssist |
| 12-quantization-aware-training.md | 12. Quantization-Aware Training — INT8/INT4 Without Quality Loss |
| 12-quantization_aware_training_scenarios_mangaassist.md | Quantization-Aware Training Scenarios - MangaAssist |
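The LoRA/QLoRA notes above cover the details; as a quick refresher on the core idea, here is a minimal, dependency-free sketch of a LoRA forward pass. The frozen weight `W` is left untouched and only the low-rank factors `A` (d×r) and `B` (r×d) would be trained; the effective weight is `W + (alpha / r) · A·B`. All names and shapes here are illustrative, not tied to any particular library.

```python
def matmul(X, Y):
    # Naive matrix multiply for small illustrative matrices.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, A, B, alpha=1.0):
    """LoRA-adapted linear layer: x @ W + (alpha / r) * (x @ A @ B)."""
    r = len(B)                           # LoRA rank = number of rows in B
    base = matmul([x], W)[0]             # frozen path: x @ W
    low = matmul(matmul([x], A), B)[0]   # trainable low-rank path: x @ A @ B
    scale = alpha / r                    # standard LoRA scaling factor
    return [b + scale * l for b, l in zip(base, low)]

# Toy example: hidden size d = 2, rank r = 1.
W = [[1.0, 0.0], [0.0, 1.0]]  # frozen identity weight
A = [[1.0], [0.0]]            # d x r
B = [[0.0, 2.0]]              # r x d
print(lora_forward([3.0, 4.0], W, A, B, alpha=1.0))  # -> [3.0, 10.0]
```

The point worth making in an interview is that `A·B` adds only `2·d·r` trainable parameters per adapted matrix instead of `d²`, which is why LoRA fine-tuning fits on much smaller hardware than full fine-tuning.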
Back to the parent.