Computer Science > Machine Learning

Title: Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint

Abstract: This paper studies the alignment of generative models with Reinforcement Learning from Human Feedback (RLHF). We first identify the primary challenge of existing popular methods such as offline PPO and offline DPO: the lack of strategic exploration of the environment. To understand the mathematical principles behind RLHF, we then consider a standard formulation, the reverse-KL regularized contextual bandit. Despite its widespread practical use, a rigorous theoretical analysis of this formulation remains open. We investigate its behavior in three distinct settings -- offline, online, and hybrid -- and propose efficient algorithms with finite-sample theoretical guarantees.
Moving toward practical applications, our framework, combined with a robust approximation of the information-theoretic policy improvement oracle, naturally gives rise to several novel RLHF algorithms: an iterative version of the Direct Preference Optimization (DPO) algorithm for online settings, and a multi-step rejection sampling strategy for offline scenarios. Our empirical evaluations on real-world large language model alignment tasks demonstrate that the proposed methods significantly surpass strong existing baselines such as DPO and Rejection Sampling Optimization (RSO), illustrating the connection between solid theoretical foundations and potent practical implementations.
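For orientation (this is not the paper's own code), the per-pair objective that standard DPO optimizes -- and that an iterative variant would re-optimize on freshly collected preference pairs each round -- can be sketched as follows. The function names and the choice of `beta` (the strength of the reverse-KL regularizer toward the reference policy) are illustrative assumptions:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair (illustrative sketch).

    logp_w, logp_l         : policy log-probs of the chosen / rejected response
    ref_logp_w, ref_logp_l : the same log-probs under the frozen reference policy
    beta                   : assumed KL-regularization strength (hyperparameter)
    """
    # Implicit reward margin: difference of log-ratios against the reference.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # Negative log-sigmoid of the margin; minimized when the chosen
    # response is assigned a higher implicit reward than the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy equals the reference, the margin is zero and the loss is log 2; training decreases the loss by widening the margin in favor of the preferred response, while `beta` controls how far the policy may drift from the reference.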
Comments: 53 pages; theoretical study and algorithmic design of iterative RLHF and DPO
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
Cite as: arXiv:2312.11456 [cs.LG]
  (or arXiv:2312.11456v4 [cs.LG] for this version)

Submission history

From: Wei Xiong [view email]
[v1] Mon, 18 Dec 2023 18:58:42 GMT (3186kb,D)
[v2] Sun, 28 Jan 2024 22:32:48 GMT (3099kb,D)
[v3] Tue, 20 Feb 2024 06:14:42 GMT (3100kb,D)
[v4] Wed, 1 May 2024 14:50:56 GMT (2905kb,D)
