Using Statistics for Computing Joins with MapReduce

Ingo Feinerer, Reinhard Pichler, Emanuel Sallinger, Vadim Savenkov

Publication: Chapter in book/conference proceeding › Contribution to conference proceedings

Abstract

To cope with ever-growing amounts of data, the MapReduce model was designed, and it has been successfully applied to various computational problems. In recent years, several MapReduce algorithms have also been developed for computing joins, one of the fundamental problems in managing and querying data. Current MapReduce algorithms for computing joins (in particular those based on the HyperCube algorithm of Afrati and Ullman) take into account the sizes of the tables and so-called "heavy hitters" (i.e., attribute values that occur particularly often) when determining an optimal distribution of computation tasks to the available reducers. However, in contrast to most state-of-the-art database management systems, more elaborate statistics on the distribution of data values are not used for optimization purposes. In this short paper, we initiate the study of the following questions: How do known MapReduce algorithms for join computation perform, in particular on skewed data? Can more fine-grained statistics help to improve these methods? Our initial study shows that such enhancements can indeed improve existing methods.
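
For illustration, here is a minimal Python sketch of the HyperCube (shares) technique referenced above, applied to the triangle join R(a,b) ⋈ S(b,c) ⋈ T(c,a). All identifiers (KA, KB, KC, h, map_tuples, reduce_join) are hypothetical choices for this sketch, not the paper's implementation; the actual algorithm runs on a MapReduce cluster and, as the abstract notes, picks the share sizes from table sizes and heavy hitters, which is exactly the step that finer-grained statistics could improve.

    import itertools
    from collections import defaultdict

    # Illustrative share sizes: the KA * KB * KC grid cells play the role
    # of reducers. In the real algorithm these shares are optimized.
    KA, KB, KC = 2, 2, 2

    def h(value, buckets):
        # Hash a join-attribute value into one of `buckets` shares.
        return hash(value) % buckets

    def map_tuples(R, S, T):
        """Map phase: send each tuple to every reducer cell where it
        could participate in a join result."""
        reducers = defaultdict(lambda: ([], [], []))
        for a, b in R:                  # R fixes the a- and b-coordinates,
            for c in range(KC):         # so it is replicated along the c-axis
                reducers[(h(a, KA), h(b, KB), c)][0].append((a, b))
        for b, c in S:                  # S is replicated along the a-axis
            for a in range(KA):
                reducers[(a, h(b, KB), h(c, KC))][1].append((b, c))
        for c, a in T:                  # T is replicated along the b-axis
            for b in range(KB):
                reducers[(h(a, KA), b, h(c, KC))][2].append((c, a))
        return reducers

    def reduce_join(reducers):
        """Reduce phase: each cell joins its local fragments; the union
        over all cells is the full join (each result triple is produced
        at exactly one cell)."""
        out = set()
        for Rs, Ss, Ts in reducers.values():
            for (a, b), (b2, c), (c2, a2) in itertools.product(Rs, Ss, Ts):
                if b == b2 and c == c2 and a == a2:
                    out.add((a, b, c))
        return out

    # Tiny usage example: the triangle (1, 2, 3).
    print(reduce_join(map_tuples(R=[(1, 2)], S=[(2, 3)], T=[(3, 1)])))

The replication cost visible in this sketch (each R tuple is copied KC times, and so on) is what the choice of shares trades off against load balance; skewed values concentrate work in a few cells, which is why the paper asks whether statistics beyond sizes and heavy hitters can help.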
Original language: English
Title of host publication: Proceedings of the 9th Alberto Mendelzon International Workshop on Foundations of Data Management (AMW 2015)
Editors: Andrea Calì, Maria-Esther Vidal
Place of publication: Lima, Peru
Publisher: CEUR Workshop Proceedings
Pages: 69-74
Publication status: Published - 2015

Austrian Classification of Fields of Science and Technology (ÖFOS)

  • 102 Computer Sciences
