Adaptive, Distribution-Free Prediction Intervals for Deep Networks

Danijel Kivaranovic, Kory Johnson, Hannes Leeb

Publication: Contribution to book/conference proceedings › Contribution to conference proceedings


Abstract

The machine learning literature contains several constructions for prediction intervals that are intuitively reasonable but ultimately ad hoc in that they do not come with provable performance guarantees. We present methods from the statistics literature that can be used efficiently with neural networks under minimal assumptions with guaranteed performance. We propose a neural network that outputs three values instead of a single point estimate and optimizes a loss function motivated by the standard quantile regression loss. We provide two prediction interval methods with finite-sample coverage guarantees solely under the assumption that the observations are independent and identically distributed. The first method leverages the conformal inference framework and provides average coverage. The second method provides a new, stronger guarantee by conditioning on the observed data. Lastly, our loss function does not compromise the predictive accuracy of the network like other prediction interval methods. We demonstrate the ease of use of our procedures as well as their improvements over other methods on both simulated and real data. As most deep networks can easily be modified by our method to output predictions with valid prediction intervals, its use should become standard practice, much like reporting standard errors along with mean estimates.
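
To illustrate the ideas in the abstract, the following is a minimal PyTorch sketch, not the authors' exact construction: a quantile-regression (pinball) loss for a network with three outputs, plus the standard split-conformal adjustment that widens the raw interval to obtain finite-sample average coverage under i.i.d. sampling. The function names (pinball_loss, three_output_loss, split_conformal_adjust) are hypothetical, and the paper's actual loss function differs in its details.

```python
import torch

def pinball_loss(pred, y, tau):
    # Standard quantile-regression (pinball) loss at level tau.
    err = y - pred
    return torch.mean(torch.maximum(tau * err, (tau - 1) * err))

def three_output_loss(out, y, alpha=0.1):
    # Illustrative composite loss for a network whose three outputs are
    # interpreted as lower quantile (alpha/2), median point estimate,
    # and upper quantile (1 - alpha/2). Assumes out has shape (N, 3).
    lo, mid, hi = out[:, 0], out[:, 1], out[:, 2]
    return (pinball_loss(lo, y, alpha / 2)
            + pinball_loss(mid, y, 0.5)
            + pinball_loss(hi, y, 1 - alpha / 2))

def split_conformal_adjust(lo_cal, hi_cal, y_cal, alpha=0.1):
    # Split-conformal correction on a held-out calibration set: compute
    # conformity scores measuring how far each y falls outside [lo, hi],
    # then take the (ceil((n+1)(1-alpha))/n) empirical quantile so that
    # the widened interval covers a fresh i.i.d. point with probability
    # at least 1 - alpha on average.
    scores = torch.maximum(lo_cal - y_cal, y_cal - hi_cal)
    n = scores.numel()
    q_level = min(1.0, (n + 1) * (1 - alpha) / n)
    # 'higher' interpolation keeps the finite-sample guarantee conservative.
    return torch.quantile(scores, q_level, interpolation="higher")
```

At test time, the calibrated interval for an input x would be [lo(x) - q, hi(x) + q], where q is the adjustment returned by split_conformal_adjust.
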
Original language: English
Title of host publication: Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics
Editors: Silvia Chiappa, Roberto Calandra
Place of publication: Online
Pages: 4346-4356
Publication status: Published - 2020

Austrian Fields of Science and Technology Classification (ÖFOS)

  • 101018 Statistics
