
Alignment RLHF

Notes on Alignment RLHF for ML platform / Applied AI interview preparation. The file index below shows what is in scope; click through to the individual notes for depth.

Interview talking points

  • This is a sub-topic under Fine-Tuning-Foundational-Models. See the parent for the broader interview framing.

Files in this folder

| File | Title |
| --- | --- |
| 08-sentiment-classifier-fine-tuning.md | 08. Sentiment Classifier Fine-Tuning — Frustration Detection for Escalation |
| 08-sentiment_classifier_scenarios_mangaassist.md | Sentiment Classifier Scenarios — MangaAssist |
| 10-rlhf-dpo-alignment.md | 10. RLHF and DPO Alignment — Fine-Tuning LLM Response Quality |
| 10-rlhf_dpo_alignment_scenarios_mangaassist.md | RLHF and DPO Alignment Scenarios — MangaAssist |

Back to the parent: Fine-Tuning-Foundational-Models.