Using Statistics for Computing Joins with MapReduce

Ingo Feinerer, Reinhard Pichler, Emanuel Sallinger, Vadim Savenkov

Publication: Contribution in conference proceedings

Abstract

To cope with ever-growing amounts of data, the MapReduce model was designed. It has been successfully applied to various computational problems. In recent years, multiple MapReduce algorithms have also been developed for computing joins -- one of the fundamental problems in managing and querying data. Current MapReduce algorithms for computing joins (in particular the ones based on the HyperCube algorithm of Afrati and Ullman) take into account the size of the tables and so-called "heavy hitters" (i.e., attribute values that occur particularly often) when determining an optimal distribution of computation tasks to the available reducers. However, in contrast to most state-of-the-art database management systems, more elaborate statistics on the distribution of data values are not used for optimization purposes. In this short paper, we initiate the study of the following questions: How do known MapReduce algorithms for join computation perform, in particular on skewed data? Can more fine-grained statistics help to improve these methods? Our initial study shows that such enhancements can indeed improve existing methods.
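The skew-handling idea the abstract alludes to (detect heavy-hitter join values and spread their work over several reducers instead of hashing them all to one) can be sketched in a few lines of plain Python. This is a simplified illustration only, not the HyperCube algorithm or the authors' method: all function names are invented here, and the heavy-hitter threshold is a toy stand-in for the kind of statistics the paper discusses.

```python
from collections import Counter, defaultdict

def detect_heavy_hitters(tuples, key_idx, num_reducers):
    # Toy criterion: a join value is "heavy" if it occurs more often
    # than an even per-reducer share of the input.
    counts = Counter(t[key_idx] for t in tuples)
    threshold = max(1, len(tuples) // num_reducers)
    return {k for k, c in counts.items() if c > threshold}

def partition_join(r, s, num_reducers):
    """Join R(a, b) with S(b, c) on b across num_reducers buckets.

    Ordinary values are hash-partitioned; R-tuples with a heavy value
    are split round-robin over all reducers, and the matching S-tuples
    are broadcast, so no single reducer receives the whole skewed key.
    """
    heavy = (detect_heavy_hitters(r, 1, num_reducers)
             | detect_heavy_hitters(s, 0, num_reducers))
    buckets = [([], []) for _ in range(num_reducers)]
    for i, (a, b) in enumerate(r):
        if b in heavy:
            buckets[i % num_reducers][0].append((a, b))      # split R side
        else:
            buckets[hash(b) % num_reducers][0].append((a, b))
    for (b, c) in s:
        if b in heavy:
            for _, s_part in buckets:                        # broadcast S side
                s_part.append((b, c))
        else:
            buckets[hash(b) % num_reducers][1].append((b, c))
    # Each "reducer" computes its local hash join independently.
    result = []
    for r_part, s_part in buckets:
        index = defaultdict(list)
        for (b, c) in s_part:
            index[b].append(c)
        for (a, b) in r_part:
            for c in index[b]:
                result.append((a, b, c))
    return result
```

Because every R-tuple lands in exactly one bucket and the broadcast S-tuples are only joined there, each output tuple is produced exactly once; the point of the split/broadcast choice is that the reducer load for a skewed value shrinks from all of its occurrences to roughly an equal share.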
Original language: English
Title of host publication: Using Statistics for Computing Joins with MapReduce
Editors: Andrea Cali, Maria-Esther Vidal
Place of publication: Lima, Peru
Publisher: CEUR Workshop Proceedings
Pages: 69 - 74
Publication status: Published - 2015

Austrian Classification of Fields of Science (ÖFOS)

  • 102
