MapReduce and its applications in heterogeneous environment.
Tan, Yu Shyang.
Date of Issue: 2011
School of Computer Engineering
Parallel and Distributed Computing Centre
As the data growth rate outpaces the processing capabilities of CPUs, with datasets reaching petascale, technologies and tools that can effectively process such huge datasets become increasingly important. Two major approaches are currently adopted to address this issue: using specialized hardware accelerators such as GPGPUs, and developing new data-intensive processing tools. In the case of the former, the trend shows an increasing number of GPGPU clusters being used in high performance computing. In the latter, Google introduced MapReduce, a programming model and accompanying framework for massive distributed parallel processing. In this thesis, I investigated the possibility of leveraging these two technologies to create an environment where users can harness the potential of hardware accelerators to process huge datasets in a distributed and parallel manner. Hadoop, an open-source implementation of MapReduce, is first analysed. This initial study examines the performance of Hadoop when processing small datasets, something Hadoop is not designed for. The study uses several metrics, such as input file size, dataset size and data locality, and examines some of the parameters that can affect the performance of the MapReduce flow with respect to the dataset. The study provided insight into MapReduce and into how data can be decomposed into sub-partitions so that the data can be managed by the accelerators with minimal negative impact on performance.
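The MapReduce flow described in the abstract can be sketched with the canonical word-count example. The following is a minimal, single-process illustration of the map, shuffle, and reduce phases in plain Python; it is not from the thesis, and the function names and sample records are purely illustrative.

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit an intermediate (word, 1) pair for each word in each record
    for record in records:
        for word in record.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all intermediate values by their key,
    # mirroring the partition/sort step a real MapReduce framework performs
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values per key (here, sum the counts)
    return {key: sum(values) for key, values in groups.items()}

# Illustrative input records standing in for lines of a distributed dataset
records = ["map reduce map", "reduce map"]
counts = reduce_phase(shuffle(map_phase(records)))
print(counts)  # {'map': 3, 'reduce': 2}
```

In a real Hadoop deployment each phase runs in parallel across many nodes, and the input would be split into partitions placed near the workers that process them, which is why the abstract's metrics (input file size, dataset size, data locality) matter for performance.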
DRNTU::Engineering::Computer science and engineering::Computer systems organization::Computer system implementation