Computer Science > Machine Learning
Title: High-probability Convergence Bounds for Nonlinear Stochastic Gradient Descent Under Heavy-tailed Noise
(Submitted on 28 Oct 2023 (v1), revised 13 Nov 2023 (this version, v2), latest version 1 May 2024 (v7))
Abstract: Several recent works have studied the convergence in high probability of stochastic gradient descent (SGD) and its clipped variant. Compared to vanilla SGD, clipped SGD is practically more stable and has the additional theoretical benefit of logarithmic dependence on the failure probability. However, the convergence of other practical nonlinear variants of SGD, e.g., sign SGD, quantized SGD and normalized SGD, that achieve improved communication efficiency or accelerated convergence is much less understood. In this work, we study the convergence bounds in high probability of a broad class of nonlinear SGD methods. For strongly convex loss functions with Lipschitz continuous gradients, we prove a logarithmic dependence on the failure probability, even when the noise is heavy-tailed. Strictly more general than the results for clipped SGD, our results hold for any nonlinearity with bounded (component-wise or joint) outputs, such as clipping, normalization, and quantization. Further, existing results with heavy-tailed noise assume bounded $\eta$-th central moments, with $\eta \in (1,2]$. In contrast, our refined analysis works even for $\eta=1$, strictly relaxing the noise moment assumptions in the literature.
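The bounded nonlinearities the abstract names (clipping, sign, normalization) can be sketched as simple maps applied to the stochastic gradient before the SGD step. The toy loop below is an illustrative example, not the paper's analysis: the quadratic objective, Student-t noise (heavy-tailed, with finite first moment but infinite variance), clipping radius, and step-size schedule are all assumptions chosen for the demo.

```python
import numpy as np

def clip(g, tau=1.0):
    # joint (norm-based) clipping: output norm is bounded by tau
    n = np.linalg.norm(g)
    return g if n <= tau else (tau / n) * g

def sign(g):
    # component-wise sign: each output component is bounded by 1
    return np.sign(g)

def normalize(g, eps=1e-12):
    # normalized gradient: output has (approximately) unit norm
    return g / (np.linalg.norm(g) + eps)

rng = np.random.default_rng(0)
A = np.diag([1.0, 10.0])           # strongly convex quadratic f(x) = 0.5 * x^T A x
x = np.array([5.0, 5.0])
for t in range(1, 2001):
    # stochastic gradient with heavy-tailed (Student-t, df=2) noise
    g = A @ x + rng.standard_t(df=2, size=2)
    # nonlinear SGD step with a decaying step size; swap in sign() or
    # normalize() to try the other bounded nonlinearities
    x = x - (1.0 / t) * clip(g, tau=5.0)
```

Because every nonlinearity here has bounded output, a single heavy-tailed noise sample can move the iterate by at most (step size) × (bound), which is the stability property the abstract contrasts with vanilla SGD.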
Submission history
From: Aleksandar Armacki
[v1] Sat, 28 Oct 2023 18:53:41 GMT (33kb)
[v2] Mon, 13 Nov 2023 20:07:45 GMT (293kb,D)
[v3] Mon, 4 Dec 2023 20:45:39 GMT (406kb,D)
[v4] Thu, 18 Apr 2024 03:53:19 GMT (412kb,D)
[v5] Fri, 19 Apr 2024 04:39:46 GMT (411kb,D)
[v6] Tue, 23 Apr 2024 20:22:30 GMT (412kb,D)
[v7] Wed, 1 May 2024 02:30:23 GMT (412kb,D)