Model Comparison and Calibration Assessment: User Guide for Consistent Scoring Functions in Machine Learning and Actuarial Practice

Tobias Fissler, Christian Lorentzen, Michael Mayer

Publication: Working Paper/Preprint

Abstract

One of the main tasks of actuaries and data scientists is to build good predictive models for certain phenomena such as the claim size or the number of claims in insurance. These models ideally exploit given feature information to enhance the accuracy of prediction. This user guide revisits and clarifies statistical techniques to assess the calibration or adequacy of a model on the one hand, and to compare and rank different models on the other hand. In doing so, it emphasises the importance of specifying the prediction target at hand a priori and of choosing the scoring function in model comparison in line with this target. Guidance for the practical choice of the scoring function is provided. Striving to bridge the gap between science and daily practice in application, it focuses mainly on the pedagogical presentation of existing results and of best practice. The results are accompanied and illustrated by two real data case studies on workers' compensation and customer churn.
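The abstract's central point, that model rankings must use a scoring function consistent with the prediction target, can be illustrated with a minimal sketch. The example below is not from the paper; it is a hypothetical illustration using NumPy, comparing two constant predictors (sample mean vs. sample median) on skewed outcomes such as claim sizes. Squared error, which is strictly consistent for the mean, and absolute error, which is strictly consistent for the median, rank the two predictors in opposite order.

```python
import numpy as np

rng = np.random.default_rng(0)

# Skewed outcomes (e.g. claim sizes): mean and median differ markedly.
y = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# Two constant "models": one predicts the sample mean, one the sample median.
pred_mean = y.mean()
pred_median = np.median(y)

def squared_error(y, z):
    # Strictly consistent scoring function for the mean functional.
    return np.mean((y - z) ** 2)

def absolute_error(y, z):
    # Strictly consistent scoring function for the median functional.
    return np.mean(np.abs(y - z))

# Squared error ranks the mean-predictor best ...
assert squared_error(y, pred_mean) < squared_error(y, pred_median)
# ... while absolute error ranks the median-predictor best.
assert absolute_error(y, pred_median) < absolute_error(y, pred_mean)
```

Neither model is "better" in absolute terms: the ranking flips with the scoring function, which is why the target functional must be fixed a priori and the score chosen consistently with it.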
Original language: English
DOIs
Publication status: Published - 2022

Austrian Classification of Fields of Science and Technology (ÖFOS)

  • 101018 Statistics
