
Title: Learning Reward for Robot Skills Using Large Language Models via Self-Alignment

Abstract: Learning reward functions remains the bottleneck to equipping a robot with a broad repertoire of skills. Large Language Models (LLMs) contain valuable task-related knowledge that can potentially aid in the learning of reward functions. However, the proposed reward functions can be imprecise and thus ineffective, requiring further grounding with environment information. We propose a method to learn rewards more efficiently in the absence of humans. Our approach consists of two components: we first use the LLM to propose features and a parameterization of the reward, then update the parameters through an iterative self-alignment process. In particular, the process minimizes the ranking inconsistency between the LLM and the learned reward functions based on execution feedback. The method was validated on 9 tasks across 2 simulation environments. It demonstrates a consistent improvement in training efficacy and efficiency, while consuming significantly fewer GPT tokens than the alternative mutation-based method.
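
The abstract describes the self-alignment loop only at a high level. Below is a minimal, hypothetical sketch of how such a loop might be organized, assuming a linear reward over LLM-proposed features, a placeholder `llm_rank_rollouts` function standing in for the LLM ranking query, and a simple pairwise update that reduces ranking inconsistency. The function names, the linear parameterization, and the update rule are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of an iterative self-alignment loop (not the paper's code).
# Assumptions: linear reward over LLM-proposed features; `llm_rank_rollouts`
# stands in for querying the LLM to rank executed rollouts; a pairwise
# hinge-style update reduces disagreement between the learned reward's
# ordering and the LLM's ordering.

import numpy as np


def reward(features: np.ndarray, theta: np.ndarray) -> float:
    """Hypothetical linear reward over LLM-proposed features."""
    return float(features @ theta)


def llm_rank_rollouts(rollouts):
    """Placeholder for the LLM ranking query on execution feedback.

    Faked here with a hidden scoring vector so the sketch runs end to end.
    """
    secret = np.array([1.0, -0.5, 0.2])  # stand-in for LLM task knowledge
    scores = [r["features"] @ secret for r in rollouts]
    return list(np.argsort(scores)[::-1])  # best-first ordering


def ranking_inconsistency(rollouts, llm_order, theta):
    """Count rollout pairs ordered differently by the learned reward and the LLM."""
    inconsistent = 0
    for i in range(len(llm_order)):
        for j in range(i + 1, len(llm_order)):
            hi, lo = llm_order[i], llm_order[j]
            if reward(rollouts[hi]["features"], theta) < reward(rollouts[lo]["features"], theta):
                inconsistent += 1
    return inconsistent


def self_align(rollouts, theta, iters=200, lr=0.05):
    """Update reward parameters to better agree with the LLM's ranking."""
    llm_order = llm_rank_rollouts(rollouts)
    for _ in range(iters):
        for i in range(len(llm_order)):
            for j in range(i + 1, len(llm_order)):
                hi, lo = llm_order[i], llm_order[j]
                f_hi, f_lo = rollouts[hi]["features"], rollouts[lo]["features"]
                if reward(f_hi, theta) <= reward(f_lo, theta):
                    theta += lr * (f_hi - f_lo)  # push reward toward the LLM's ordering
    return theta, ranking_inconsistency(rollouts, llm_order, theta)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake execution feedback: feature vectors from rolled-out trajectories.
    rollouts = [{"features": rng.normal(size=3)} for _ in range(8)]
    theta0 = rng.normal(size=3)
    theta, residual = self_align(rollouts, theta0)
    print("learned parameters:", theta, "remaining inconsistent pairs:", residual)
```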
Comments: ICML 2024
Subjects: Robotics (cs.RO); Artificial Intelligence (cs.AI)
Cite as: arXiv:2405.07162 [cs.RO]
  (or arXiv:2405.07162v3 [cs.RO] for this version)

Submission history

From: Yuwei Zeng
[v1] Sun, 12 May 2024 04:57:43 GMT (6829kb,D)
[v2] Wed, 15 May 2024 13:59:19 GMT (6829kb,D)
[v3] Thu, 16 May 2024 02:37:29 GMT (6829kb,D)
