Computer Science > Machine Learning

Title: Offline Imitation Learning from Multiple Baselines with Applications to Compiler Optimization

Abstract: This work studies a Reinforcement Learning (RL) problem in which we are given a set of trajectories collected with K baseline policies. Each of these policies can be quite suboptimal in isolation, yet have strong performance in complementary parts of the state space. The goal is to learn a policy that performs as well as the best combination of baselines on the entire state space. We propose a simple imitation-learning-based algorithm, show a sample complexity bound on its accuracy, and prove that the algorithm is minimax optimal by showing a matching lower bound. Further, we apply the algorithm in the setting of machine-learning-guided compiler optimization to learn policies for inlining programs with the objective of creating a small binary. We demonstrate that, through a few iterations of our approach, we can learn a policy that outperforms an initial policy learned via standard RL.
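
The abstract does not spell out the algorithm, so the following is only a minimal, hypothetical sketch of the general idea of offline imitation learning from multiple baselines, not the paper's method. It pools toy trajectories from K baseline policies, keeps transitions with above-median trajectory return as a crude global proxy for "data from the better baseline in this part of the state space", and then behaviorally clones the surviving actions with a softmax classifier. The data generation, selection rule, and all names are assumptions for illustration only.

```python
# Hypothetical sketch (NOT the paper's algorithm): behavioral cloning on
# baseline trajectories filtered by trajectory return.
import numpy as np

rng = np.random.default_rng(0)

# --- Toy offline dataset from K baseline policies (assumed setup) ----------
# Each baseline k contributes (state, action, trajectory_return) tuples;
# states are 2-D features, actions are discrete {0, 1, 2}.
K, N_PER_BASELINE, STATE_DIM, N_ACTIONS = 3, 200, 2, 3

def fake_trajectories(k):
    # Baseline k "covers" its own region of the state space.
    states = rng.normal(size=(N_PER_BASELINE, STATE_DIM)) + k
    actions = rng.integers(N_ACTIONS, size=N_PER_BASELINE)
    returns = rng.normal(loc=float(k), size=N_PER_BASELINE)  # placeholder returns
    return states, actions, returns

data = [fake_trajectories(k) for k in range(K)]
states = np.concatenate([d[0] for d in data])
actions = np.concatenate([d[1] for d in data])
returns = np.concatenate([d[2] for d in data])

# --- Select which transitions to imitate -----------------------------------
# Keep transitions whose trajectory return is above the overall median: a
# crude stand-in for preferring the better-performing baseline's data. The
# paper's actual selection/aggregation rule is not given in the abstract.
keep = returns >= np.median(returns)
X, y = states[keep], actions[keep]

# --- Behavioral cloning: softmax regression via gradient descent -----------
W = np.zeros((STATE_DIM, N_ACTIONS))
for _ in range(500):
    logits = X @ W
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    grad = X.T @ (probs - np.eye(N_ACTIONS)[y]) / len(X)
    W -= 0.5 * grad

def learned_policy(state):
    """Greedy action of the cloned policy."""
    return int(np.argmax(state @ W))

print(learned_policy(np.array([1.0, 1.0])))
```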
Subjects: Machine Learning (cs.LG); Programming Languages (cs.PL)
Cite as: arXiv:2403.19462 [cs.LG]
  (or arXiv:2403.19462v1 [cs.LG] for this version)

Submission history

From: Teodor Vanislavov Marinov [view email]
[v1] Thu, 28 Mar 2024 14:34:02 GMT (225kb,D)
