Title: Decoupling Feature Extraction and Classification Layers for Calibrated Neural Networks

Abstract: Deep Neural Networks (DNN) have shown great promise in many classification applications, yet are widely known to have poorly calibrated predictions when they are over-parametrized. Improving DNN calibration without compromising model accuracy is of great importance and interest for safety-critical applications such as the health-care sector. In this work, we show that decoupling the training of feature extraction layers and classification layers in over-parametrized DNN architectures such as Wide Residual Networks (WRN) and Vision Transformers (ViT) significantly improves model calibration whilst retaining accuracy, at a low training cost. In addition, we show that placing a Gaussian prior on the outputs of the last hidden layer of a DNN, and training the model variationally in the classification training stage, further improves calibration. We illustrate that these methods improve calibration across ViT and WRN architectures on several image classification benchmark datasets.
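The two-stage training the abstract describes can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration (SmallNet, train_stage, the loader name, and the epoch counts are placeholders, not the paper's setup): first train the full network end to end, then freeze the feature extraction layers and train only the classification head.

# Hypothetical two-stage training sketch in PyTorch. SmallNet and
# train_stage are illustrative placeholders, not code from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    # Toy model split into feature extraction layers and a classification head.
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x))

def train_stage(model, params, loader, epochs, lr=1e-3):
    # Optimize only the given parameter subset with cross-entropy loss.
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()

model = SmallNet()
# Stage 1: train the whole network end to end.
# train_stage(model, model.parameters(), train_loader, epochs=100)

# Stage 2: freeze the feature extractor, then train the classification
# head on its own (optionally after reinitializing it).
for p in model.features.parameters():
    p.requires_grad = False
# train_stage(model, model.classifier.parameters(), train_loader, epochs=10)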
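One plausible reading of the variational variant (a Gaussian prior on the last hidden layer outputs, trained variationally in the classification stage) is to model the feature vector z with a reparameterized Gaussian q(z|x), place a standard-normal prior p(z) = N(0, I) on it, and add the KL term to the cross-entropy loss. The sketch below follows that assumption and is not necessarily the paper's exact formulation.

# Illustrative variational classification head under the assumption above:
# q(z|x) = N(mu(h), diag(exp(log_var(h)))), prior p(z) = N(0, I).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalHead(nn.Module):
    def __init__(self, feat_dim=32, num_classes=10):
        super().__init__()
        self.mu = nn.Linear(feat_dim, feat_dim)
        self.log_var = nn.Linear(feat_dim, feat_dim)
        self.out = nn.Linear(feat_dim, num_classes)

    def forward(self, h):
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        # Closed-form KL(N(mu, sigma^2) || N(0, I)), averaged over the batch.
        kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(dim=1).mean()
        return self.out(z), kl

h = torch.randn(4, 32)  # stand-in for last hidden layer outputs
logits, kl = VariationalHead()(h)
# Training loss (sketch): F.cross_entropy(logits, y) + beta * kl,
# where beta weights the KL term against the classification objective.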
Comments: Proceedings of the 41st International Conference on Machine Learning (ICML), 2024
Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML)
Cite as: arXiv:2405.01196 [cs.LG]
  (or arXiv:2405.01196v3 [cs.LG] for this version)

Submission history

From: Mikkel Jordahn [view email]
[v1] Thu, 2 May 2024 11:36:17 GMT (1846kb,D)
[v2] Fri, 3 May 2024 07:36:26 GMT (1846kb,D)
[v3] Mon, 6 May 2024 08:19:20 GMT (1846kb,D)