Statistics > Machine Learning

Title: Training-Conditional Coverage Bounds for Uniformly Stable Learning Algorithms

Abstract: The training-conditional coverage performance of conformal prediction is known to be empirically sound. Recently, there have been efforts to support this observation with theoretical guarantees. Training-conditional coverage bounds for the jackknife+ and full-conformal prediction regions were established by Liang and Barber [2023] via the notion of $(m,n)$-stability. Although this notion is weaker than uniform stability, it is not clear how to evaluate it for practical models. In this paper, we study training-conditional coverage bounds for the full-conformal, jackknife+, and CV+ prediction regions from a uniform-stability perspective, a property known to hold for empirical risk minimization over reproducing kernel Hilbert spaces with convex regularization. We derive coverage bounds for finite-dimensional models via a concentration argument for the (estimated) predictor function, and compare the resulting bounds with existing ones under ridge regression.
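
For readers unfamiliar with the terminology, the following is a standard formulation of training-conditional coverage (stated here as a generic definition, not taken verbatim from the paper): given a miscoverage level $\alpha \in (0,1)$ and a prediction region $\widehat{C}_n$ fitted on the training sample $D_n = \{(X_i, Y_i)\}_{i=1}^{n}$, a training-conditional coverage bound asserts that, with probability at least $1-\delta$ over the draw of $D_n$,
$$ \mathbb{P}\big( Y_{n+1} \in \widehat{C}_n(X_{n+1}) \,\big|\, D_n \big) \;\ge\; 1 - \alpha - \varepsilon_n, $$
where $\varepsilon_n$ is a slack term vanishing with the sample size $n$; in this paper such bounds are obtained by controlling $\varepsilon_n$ through the uniform stability of the underlying learning algorithm.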
Comments: Accepted to the ISIT 2024 workshop on Information-Theoretic Methods for Trustworthy Machine Learning (IT-TML)
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG)
Cite as: arXiv:2404.13731 [stat.ML]
  (or arXiv:2404.13731v1 [stat.ML] for this version)

Submission history

From: Mehrdad Pournaderi
[v1] Sun, 21 Apr 2024 18:18:34 GMT (29kb)
