Non-Standard Errors

Fincap Team, Albert J. Menkveld, Anna Dreber, Felix Holzmeister, Juergen Huber, Magnus Johanneson, Michael Kirchler, Michael Razen, Utz Weitzel, David Abad, Menachem (Meni) Abudy, Tobias Adrian, Yacine Ait-Sahalia, Olivier Akmansoy, Jamie Alcock, Vitali Alexeev, Arash Aloosh, Livia Amato, Diego Amaya, James J. Angel, Amadeus Bach, Edwin Baidoo, Gaetan Bakalli, Andrea Barbon, Oksana Bashchenko, Parampreet Christopher Bindra, Geir Hoidal Bjonnes, Jeffrey R. Black, Bernard S. Black, Santiago Bohorquez, Oleg Bondarenko, Charles S. Bos, Ciril Bosch-Rosa, Elie Bouri, Christian T. Brownlees, Anna Calamia, Viet Nga Cao, Gunther Capelle-Blancard, Laura Capera, Massimiliano Caporin, Allen Carrion, Tolga Caskurlu, Oleg Deev, Thomas Gehrig, Simon Hartmann, Nikolaus Hautsch, Gabriel Kaiser, Thomas Lindner, Giorgia Simion, Stefan Voigt, Patrick Weiss

Publication: Scientific journal › Original article in journal › Peer-reviewed

Abstract

In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that participants underestimate this type of uncertainty.
Original language: English
Pages (from - to): 2339-2390
Journal: Journal of Finance
Volume: 79
Issue number: 3
Early online date: 2024
DOIs
Publication status: Published - June 2024