Principled Fine-Tuning of LLMs from User-Edits: A Medley of Preference, Supervision, and Reward