Electrical Engineering and Systems Science > Systems and Control
Title: Sample Complexity of the Linear Quadratic Regulator: A Reinforcement Learning Lens
(Submitted on 16 Apr 2024 (v1), last revised 18 Apr 2024 (this version, v2))
Abstract: We provide the first known algorithm that provably achieves $\varepsilon$-optimality within $\widetilde{\mathcal{O}}(1/\varepsilon)$ function evaluations for the discounted discrete-time LQR problem with unknown parameters, without relying on two-point gradient estimates. Such estimates are known to be unrealistic in many settings, since they require evaluating two different policies from the exact same randomly drawn initialization. Our results substantially improve upon the existing literature outside the realm of two-point gradient estimates, which either yields $\widetilde{\mathcal{O}}(1/\varepsilon^2)$ rates or relies heavily on stability assumptions.
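The distinction the abstract draws can be illustrated with a standard zeroth-order gradient-estimation sketch (not the paper's algorithm; the dynamics, smoothing radius, and cost below are illustrative assumptions). A two-point estimator must roll out the two perturbed policies $K + \delta U$ and $K - \delta U$ from the *same* random initial state, whereas a one-point estimator draws a fresh initialization per evaluation, which is what makes it realistic but higher-variance:

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(K, x0, A, B, Q, R, gamma=0.95, horizon=50):
    """Discounted finite-horizon LQR cost of the policy u = -K x from x0."""
    x, cost = x0.copy(), 0.0
    for t in range(horizon):
        u = -K @ x
        cost += gamma**t * (x @ Q @ x + u @ R @ u)
        x = A @ x + B @ u
    return cost

def two_point_estimate(K, delta, A, B, Q, R):
    """Two-point zeroth-order estimate: both perturbed policies are
    evaluated from the SAME random x0 -- the assumption the abstract
    calls unrealistic in many settings."""
    U = rng.standard_normal(K.shape)
    U /= np.linalg.norm(U)
    x0 = rng.standard_normal(K.shape[1])      # one shared initialization
    d = K.size
    diff = rollout_cost(K + delta * U, x0, A, B, Q, R) \
         - rollout_cost(K - delta * U, x0, A, B, Q, R)
    return (d / (2.0 * delta)) * diff * U

def one_point_estimate(K, delta, A, B, Q, R):
    """One-point estimate: each evaluation may use its own x0,
    at the price of much higher variance."""
    U = rng.standard_normal(K.shape)
    U /= np.linalg.norm(U)
    x0 = rng.standard_normal(K.shape[1])      # fresh initialization
    d = K.size
    return (d / delta) * rollout_cost(K + delta * U, x0, A, B, Q, R) * U
```

In a model-free policy-gradient loop, one would average many such estimates per update; the variance gap between the two estimators is what drives the $\widetilde{\mathcal{O}}(1/\varepsilon^2)$ vs. $\widetilde{\mathcal{O}}(1/\varepsilon)$ rate separation the abstract refers to.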
Submission history
From: Amirreza Neshaei Moghaddam [view email][v1] Tue, 16 Apr 2024 18:54:57 GMT (26kb)
[v2] Thu, 18 Apr 2024 23:38:49 GMT (26kb)