The identification of high-quality journals often serves as a basis for the assessment of research contributions. In this context, rankings have become an increasingly popular vehicle for deciding on incentives for researchers, promotions, tenure, or even library budgets. These rankings are typically based on the judgments of peers or domain experts, or on scientometric methods (e.g., citation frequencies, acceptance rates). Depending on which of these ranking approaches (or which combination) is followed, the outcomes diverge to a greater or lesser degree. This paper addresses the issue of how to construct suitable aggregates of (subsets of) these rankings. We present an optimization-based consensus ranking approach and apply the proposed method to a subset of marketing-related journals from the Harzing Journal Quality List. Our results show that even though journals are not uniformly ranked, it is possible to derive a consensus ranking with a considerably high level of agreement among the individual rankings. In addition, we explore regional differences in consensus rankings.
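The abstract does not specify the exact optimization model used, but one standard formulation of the consensus ranking problem it describes is the Kemeny approach: find the ordering that minimizes the total number of pairwise disagreements (Kendall tau distance) with the individual rankings. The sketch below is a hypothetical illustration of that formulation using exhaustive search, not the authors' actual method, which would need to scale to larger journal sets.

```python
from itertools import permutations


def kendall_tau_distance(r1, r2):
    """Count pairwise disagreements between two rankings.

    Rankings are orderings of item labels, best item first.
    """
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    items = list(pos1)
    d = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = items[i], items[j]
            # A disagreement occurs when the two rankings order
            # the pair (a, b) in opposite directions.
            if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0:
                d += 1
    return d


def kemeny_consensus(rankings):
    """Brute-force Kemeny consensus: the ordering minimizing the
    summed Kendall tau distance to all input rankings.

    Feasible only for small item sets (n! candidate orderings);
    a practical approach would use integer programming instead.
    """
    items = rankings[0]
    best, best_cost = None, float("inf")
    for perm in permutations(items):
        cost = sum(kendall_tau_distance(perm, r) for r in rankings)
        if cost < best_cost:
            best, best_cost = list(perm), cost
    return best, best_cost
```

For example, given three expert rankings of journals A, B, C in which two experts rank them A, B, C and one ranks them A, C, B, the consensus is A, B, C with a total disagreement of 1 (the single swapped pair in the dissenting ranking).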
|Publication status||Published - 1 Oct 2010|
|Series||Research Report Series / Department of Statistics and Mathematics|