Abstract
To cope with ever-growing amounts of data, the MapReduce model has been designed. It has been successfully applied to various computational problems. In recent years, multiple MapReduce algorithms have also been developed for computing joins -- one of the fundamental problems in managing and querying data. Current MapReduce algorithms for computing joins (in particular the ones based on the HyperCube algorithm of Afrati and Ullman) take into account the size of the tables and so-called "heavy hitters" (i.e., attribute values that occur particularly often) when determining an optimal distribution of computation tasks to the available reducers. However, in contrast to most state-of-the-art database management systems, more elaborate statistics on the distribution of data values are not used for optimization purposes. In this short paper, we initiate the study of the following questions: How do known MapReduce algorithms for join computation perform, in particular on skewed data? Can more fine-grained statistics help to improve these methods? Our initial study shows that such enhancements can indeed be used to improve existing methods.
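The HyperCube algorithm mentioned in the abstract arranges the reducers in a grid with one dimension per join attribute, hashes each tuple on the attributes it contains, and replicates it along the dimensions it lacks. A minimal sketch of this routing step, with assumed function and parameter names (`hypercube_targets`, `shares`, `hashes` are illustrative, not from the paper):

```python
import itertools

def hypercube_targets(tuple_attrs, shares, hashes):
    """Compute all reducer grid coordinates a tuple is sent to.

    tuple_attrs: dict attr -> value (only the attrs this tuple has)
    shares: dict attr -> number of buckets along that grid dimension
    hashes: dict attr -> hash function for that attribute's values
    """
    coords = []
    for attr, p in shares.items():
        if attr in tuple_attrs:
            # Known attribute: a single bucket along this dimension.
            coords.append([hashes[attr](tuple_attrs[attr]) % p])
        else:
            # Missing attribute: replicate along the whole dimension.
            coords.append(list(range(p)))
    return list(itertools.product(*coords))

# Example: triangle join R(A,B) ⋈ S(B,C) ⋈ T(C,A) on a 2x2x2 grid.
shares = {'A': 2, 'B': 2, 'C': 2}
hashes = {a: (lambda v: v) for a in shares}  # identity hash for ints
# An R-tuple has A and B but no C, so it is replicated along C:
targets = hypercube_targets({'A': 1, 'B': 0}, shares, hashes)
```

For this R-tuple the result contains two coordinates, one per C-bucket; the choice of the per-dimension shares (and their sensitivity to skew and heavy hitters) is exactly what the statistics studied in the paper aim to optimize.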
| Original language | English |
| --- | --- |
| Title of host publication | Using Statistics for Computing Joins with MapReduce |
| Editors | Andrea Cali, Maria-Esther Vidal |
| Place of Publication | Lima, Peru |
| Publisher | CEUR Workshop Proceedings |
| Pages | 69-74 |
| Publication status | Published - 2015 |
Austrian Classification of Fields of Science and Technology (ÖFOS)
- 102
Projects
- 1 Finished
- SPARQL Evaluations and Extensions (SEE for short): Subproject XSPARQL, SPARQL Update & Linked Data
Polleres, A. (PI - Project head), Bischof, S. (Researcher) & Steyskal, S. (Researcher)
1/06/14 → 31/08/15
Project: Research funding