
Title: Conditions on Preference Relations that Guarantee the Existence of Optimal Policies

Abstract: Learning from Preferential Feedback (LfPF) plays an essential role in training Large Language Models, as well as certain types of interactive learning agents. However, a substantial gap exists between the theory and application of LfPF algorithms. Current results guaranteeing the existence of optimal policies in LfPF problems assume that both the preferences and transition dynamics are determined by a Markov Decision Process. We introduce the Direct Preference Process, a new framework for analyzing LfPF problems in partially observable, non-Markovian environments. Within this framework, we establish conditions that guarantee the existence of optimal policies by considering the ordinal structure of the preferences. We show that a decision-making problem can have optimal policies, characterized by recursive optimality equations, even when no reward function can express the learning goal. These findings underscore the need to explore preference-based learning strategies that do not assume that preferences are generated by reward.
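
Illustrative note (not from the paper): the claim that a learning goal may have no reward representation can be made concrete with a lexicographic preference over trajectories. The short Python sketch below is a hypothetical illustration; the Trajectory summary, its safety_violations and task_return fields, and the prefer comparator are assumptions made for this example, not constructs defined by the Direct Preference Process.

# Hypothetical sketch: an ordinal preference over trajectories that a single
# additive scalar reward struggles to express. Fewer safety violations always
# win; task return only breaks ties. With unbounded task returns, no fixed
# linear weighting w1*violations + w2*return reproduces this strict priority
# (the standard lexicographic-preference argument).
from dataclasses import dataclass

@dataclass
class Trajectory:
    safety_violations: int   # how many times a safety constraint was broken
    task_return: float       # cumulative task performance along the trajectory

def prefer(a: Trajectory, b: Trajectory) -> bool:
    """True iff trajectory a is strictly preferred to trajectory b."""
    if a.safety_violations != b.safety_violations:
        return a.safety_violations < b.safety_violations
    return a.task_return > b.task_return

# A safe, low-return trajectory beats an unsafe one regardless of the return gap.
assert prefer(Trajectory(0, 1.0), Trajectory(1, 1_000_000.0))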
Comments: v2: replaced with accepted AISTATS 2024 version, containing a new summary figure and one extra example. Results and conclusions are unchanged
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2311.01990 [cs.LG]
  (or arXiv:2311.01990v2 [cs.LG] for this version)

Submission history

From: Jonathan Colaço Carr
[v1] Fri, 3 Nov 2023 15:42:12 GMT (26kb)
[v2] Wed, 27 Mar 2024 22:03:46 GMT (55kb,D)
