Measuring the Stability of Supervised Statistical Learning Results

Michel Philipp, Thomas Rusch, Carolin Strobl, Kurt Hornik

Publication: Scientific journal / Original article in journal / Peer-reviewed

Abstract

Stability is a major requirement for drawing reliable conclusions when interpreting results from supervised statistical learning. In this article, we present a general framework for assessing and comparing the stability of results, which can be used in real-world statistical learning applications as well as in simulation and benchmark studies. We use the framework to show that stability is a property of both the algorithm and the data-generating process. In particular, we demonstrate that unstable algorithms (such as recursive partitioning) can produce stable results when the functional form of the relationship between the predictors and the response matches the algorithm. Typical uses of the framework in practical data analysis would be to compare the stability of results generated by different candidate algorithms for a dataset at hand or to assess the stability of algorithms in a benchmark study. Code to perform the stability analyses is provided in the form of an R package. Supplementary material for this article is available online.
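
The abstract describes the framework only at a high level, and the accompanying R package is not named in this record. As a minimal illustrative sketch of the underlying idea, the following code estimates the stability of a recursive-partitioning learner as the average pairwise agreement of predictions from models refitted on bootstrap resamples; the names agreement and tree_learner are hypothetical, chosen for this illustration, and are not the API of the authors' package.

library(rpart)

set.seed(1)

## Illustrative sketch (not the authors' package): quantify stability as the
## average agreement of predicted classes between models fitted on pairs of
## bootstrap resamples, evaluated on the original observations.
agreement <- function(data, learner, B = 20) {
  n <- nrow(data)
  ## Refit the learner on B bootstrap resamples of the data.
  fits <- replicate(B, learner(data[sample(n, replace = TRUE), ]),
                    simplify = FALSE)
  ## Predicted classes for the original observations under each refit.
  preds <- sapply(fits, function(f)
    as.character(predict(f, newdata = data, type = "class")))
  ## Average pairwise agreement across all pairs of refits.
  pairs <- combn(B, 2)
  mean(apply(pairs, 2, function(p) mean(preds[, p[1]] == preds[, p[2]])))
}

## Example: a recursive-partitioning learner, as discussed in the abstract.
tree_learner <- function(d) rpart(Species ~ ., data = d)
agreement(iris, tree_learner)  ## values near 1 indicate stable results

Agreement close to 1 across resamples corresponds to what the abstract calls a stable result; computing this quantity for several candidate learners on the same dataset mirrors the practical use of the framework described above.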
Original language: English
Pages (from-to): 685-700
Journal: Journal of Computational and Graphical Statistics
Volume: 27
Issue number: 4
Publication status: Published - 2018

Austrian Fields of Science and Technology Classification (ÖFOS)

  • 101018 Statistics
  • 501 not use (legacy)
  • 509013 Social statistics
  • 509 not use (legacy)
