
Title: Adapting to Mixing Time in Stochastic Optimization with Markovian Data

Abstract: We consider stochastic optimization problems where data is drawn from a Markov chain. Existing methods for this setting crucially rely on knowing the mixing time of the chain, which in real-world applications is usually unknown. We propose the first optimization method that does not require knowledge of the mixing time, yet obtains the optimal asymptotic convergence rate when applied to convex problems. We further show that our approach can be extended to: (i) finding stationary points in non-convex optimization with Markovian data, and (ii) obtaining better dependence on the mixing time in temporal difference (TD) learning; in both cases, our method is completely oblivious to the mixing time. Our method relies on a novel combination of multi-level Monte Carlo (MLMC) gradient estimation together with an adaptive learning method.
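The MLMC construction described in the abstract can be illustrated with a short sketch. This is not the authors' implementation: the estimator below follows the standard MLMC recipe of drawing a random level J ~ Geometric(1/2), averaging gradients over 2^J consecutive chain samples, and forming g^0 + 2^J (g^J - g^{J-1}), which keeps the estimator nearly unbiased without knowing the mixing time. The functions `grad` and `chain_step`, and the cap `j_max`, are illustrative assumptions.

```python
import numpy as np

def mlmc_gradient(grad, chain_step, x, state, j_max=10):
    """Sketch of a multi-level Monte Carlo (MLMC) gradient estimator.

    grad(x, s)    : gradient of the per-sample loss at parameters x for sample s.
    chain_step(s) : draws the next sample of the Markov chain from state s.
    Returns the MLMC estimate g^0 + 2^J (g^J - g^{J-1}) and the final chain state,
    where g^j is the average gradient over the first 2^j chain samples and
    J ~ Geometric(1/2), truncated at j_max.
    """
    # Random level: P(J = j) = 2^{-j}, so deep levels are rare but
    # the telescoping correction keeps the estimator nearly unbiased.
    J = min(np.random.geometric(0.5), j_max)
    n = 2 ** J

    # Run the chain for 2^J steps, collecting per-sample gradients.
    samples, s = [], state
    for _ in range(n):
        s = chain_step(s)
        samples.append(s)
    grads = np.array([grad(x, si) for si in samples])

    g_J = grads.mean(axis=0)              # average over all 2^J samples
    g_Jm1 = grads[: n // 2].mean(axis=0)  # average over the first 2^{J-1}
    g_0 = grads[0]                        # single-sample (level-0) gradient
    return g_0 + n * (g_J - g_Jm1), s
```

In expectation the correction terms telescope across levels, so the estimate behaves like an average over a long chain segment while its expected cost per call stays constant; the resulting estimate would then be fed to an adaptive method such as AdaGrad, whose step sizes need no mixing-time input.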
Comments: ICML 2022
Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)
Cite as: arXiv:2202.04428 [cs.LG]
  (or arXiv:2202.04428v3 [cs.LG] for this version)

Submission history

From: Ron Dorfman [view email]
[v1] Wed, 9 Feb 2022 12:43:11 GMT (1140kb,D)
[v2] Wed, 19 Oct 2022 16:05:15 GMT (2399kb,D)
[v3] Thu, 13 Jul 2023 16:05:28 GMT (2399kb,D)
