
Title: Mixture Data for Training Cannot Ensure Out-of-distribution Generalization

Abstract: Deep neural networks often struggle to generalize to out-of-distribution (OOD) data, and there remains a notable theoretical gap between the contributing factors and their respective impacts. Evidence from the in-distribution literature suggests that the generalization error shrinks as the size of the mixed training data increases. However, for OOD samples this conventional understanding no longer holds: increasing the size of the training data does not always reduce the test generalization error. In fact, diverse error trends have been observed across various shift scenarios, including power-law decreases, initial declines followed by increases, and consistently flat curves. Previous work has treated OOD data only qualitatively, as samples unseen during training, a view that cannot explain these complicated non-monotonic trends. In this work, we quantitatively redefine OOD data as data situated outside the convex hull of the mixed training data and establish novel generalization error bounds to better explain these counterintuitive observations. Our proof of the new risk bound confirms that the efficacy of well-trained models is guaranteed for unseen data within the convex hull; more interestingly, for OOD data beyond this coverage, generalization cannot be ensured, which aligns with our observations. Furthermore, we evaluate various OOD techniques to show that our results not only explain key observations in recent OOD generalization work, such as the significance of diverse data and the sensitivity of existing algorithms to unseen shifts, but also inspire a novel and effective data selection strategy.
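To make the convex-hull criterion in the abstract concrete, the following is a minimal sketch (not the paper's actual construction) of how one might test whether a target environment lies in the convex hull spanned by the training sources. It assumes each source and the target are summarized by a finite feature vector (here, 2-D feature means); the function name `in_convex_hull` and the toy data are illustrative assumptions. Membership is checked by solving a small feasibility linear program for mixture weights.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(test_point, source_points):
    """Check whether `test_point` is a convex combination of the rows of
    `source_points` (one row per training source): find lambda >= 0 with
    sum(lambda) = 1 and lambda @ source_points = test_point."""
    k, d = source_points.shape
    # Equality constraints: the mixture reproduces the test point and the weights sum to 1.
    A_eq = np.vstack([source_points.T, np.ones((1, k))])
    b_eq = np.concatenate([test_point, [1.0]])
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * k, method="highs")
    return bool(res.success)

# Toy usage: three training "sources" summarized by 2-D feature means.
sources = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(in_convex_hull(np.array([0.2, 0.3]), sources))  # True: inside the hull (in-coverage)
print(in_convex_hull(np.array([1.0, 1.0]), sources))  # False: outside the hull (OOD by this criterion)
```

Under this reading, the paper's bound says performance can be guaranteed for targets like the first query but not for targets like the second, which fall outside the coverage of any mixture of the training sources.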
Comments: 13 pages, 9 figures
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2312.16243 [cs.LG]
  (or arXiv:2312.16243v4 [cs.LG] for this version)

Submission history

From: Songming Zhang [view email]
[v1] Mon, 25 Dec 2023 11:00:38 GMT (4305kb,D)
[v2] Tue, 2 Jan 2024 11:50:38 GMT (4306kb,D)
[v3] Fri, 2 Feb 2024 04:45:45 GMT (4146kb,D)
[v4] Tue, 23 Apr 2024 07:43:10 GMT (4955kb,D)
