Computer Science > Machine Learning (cs.LG)
Title: Quantifying Aleatoric and Epistemic Uncertainty with Proper Scoring Rules
(Submitted on 18 Apr 2024 (v1), last revised 19 Apr 2024 (this version, v2))
Abstract: Uncertainty representation and quantification are paramount in machine learning and constitute an important prerequisite for safety-critical applications. In this paper, we propose novel measures for the quantification of aleatoric and epistemic uncertainty based on proper scoring rules, which are loss functions with the appealing property that they incentivize the learner to predict ground-truth (conditional) probabilities. We assume two common representations of (epistemic) uncertainty, namely, in terms of a credal set, i.e., a set of probability distributions, or a second-order distribution, i.e., a distribution over probability distributions. Our framework establishes a natural bridge between these representations. We provide a formal justification of our approach and introduce new measures of epistemic and aleatoric uncertainty as concrete instantiations.
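To make the second-order setting concrete, the following is a minimal sketch (not the paper's proposed measures) of the classical entropy-based decomposition that scoring-rule approaches generalize: given a sample of first-order distributions, e.g. from an ensemble, total uncertainty under the log score splits into an aleatoric part (expected entropy of the members) and an epistemic part (mutual information). The function name and array shapes are illustrative assumptions.

```python
import numpy as np

def entropy(p):
    """Shannon entropy along the last axis, guarded against log(0)."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def uncertainty_decomposition(ensemble_probs):
    """ensemble_probs: shape (M, K), M first-order distributions over
    K classes, i.e. a finite sample from a second-order distribution.
    Returns (total, aleatoric, epistemic) under the log score:
      total     = entropy of the averaged prediction,
      aleatoric = average entropy of the members,
      epistemic = total - aleatoric (the mutual information)."""
    mean_p = ensemble_probs.mean(axis=0)
    total = entropy(mean_p)
    aleatoric = entropy(ensemble_probs).mean()
    epistemic = total - aleatoric
    return total, aleatoric, epistemic
```

For instance, two members that are individually certain but disagree maximally, [1, 0] and [0, 1], yield zero aleatoric uncertainty and epistemic uncertainty log 2, whereas identical members yield zero epistemic uncertainty.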
Submission history
From: Paul Hofman
[v1] Thu, 18 Apr 2024 14:20:19 GMT (684kb,D)
[v2] Fri, 19 Apr 2024 09:14:28 GMT (684kb,D)