Despite the increasing popularity of journal rankings for evaluating the quality of research contributions, individual rankings typically show only modest agreement for journals below the top tier of publications. Attempts to merge rankings into meta-rankings suffer from methodological issues such as mixed measurement scales and incomplete data. This paper addresses how to construct suitable aggregates of individual journal rankings using an optimization-based consensus ranking approach. The authors apply the proposed method to a subset of marketing-related journals drawn from a collection of journal rankings. The paper then studies the stability of the derived consensus solution and the degeneration effects that occur when journals and/or rankings are excluded. Finally, the authors investigate the similarities and dissimilarities of the consensus with a naive meta-ranking and with the individual rankings. The results show that, even though journals are not uniformly ranked, one may derive a consensus ranking that agrees with the individual rankings to a considerably high degree.
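
The abstract does not spell out the optimization formulation, but a common instance of optimization-based consensus ranking is the Kemeny approach: find the ordering that minimizes the summed Kendall tau distance (number of pairwise disagreements) to all input rankings. The following sketch illustrates that idea on hypothetical data; the journal names and rankings are invented for illustration, and the exhaustive search is only feasible for very small journal sets.

```python
from itertools import permutations

def kendall_tau_distance(r1, r2):
    """Count pairwise disagreements between two rankings.

    Each ranking is a dict mapping item -> rank position (1 = best).
    """
    items = list(r1)
    dist = 0
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = items[i], items[j]
            # The pair (a, b) disagrees if the two rankings order it oppositely.
            if (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0:
                dist += 1
    return dist

def kemeny_consensus(rankings):
    """Brute-force Kemeny consensus: the ordering minimizing the total
    Kendall tau distance to all input rankings.

    Exponential in the number of items -- a sketch, not the paper's method.
    """
    items = sorted(rankings[0])
    best_order, best_cost = None, float("inf")
    for order in permutations(items):
        candidate = {item: pos for pos, item in enumerate(order, start=1)}
        cost = sum(kendall_tau_distance(candidate, r) for r in rankings)
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order, best_cost

# Three hypothetical rankings of four journals (A-D), rank 1 = best.
rankings = [
    {"A": 1, "B": 2, "C": 3, "D": 4},
    {"A": 2, "B": 1, "C": 3, "D": 4},
    {"A": 1, "B": 3, "C": 2, "D": 4},
]
order, cost = kemeny_consensus(rankings)
# Here the consensus A > B > C > D disagrees with the inputs on only 2 pairs,
# mirroring the abstract's point: imperfect agreement among individual
# rankings can still yield a consensus close to all of them.
```

In practice, consensus problems of realistic size are solved with integer programming or heuristics rather than enumeration, and real formulations must also handle ties and missing journals, which this toy example omits.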