dc.contributor.author: Zhong, Jianlong
dc.date.accessioned: 2014-05-21T05:59:11Z
dc.date.accessioned: 2017-07-23T08:30:04Z
dc.date.available: 2014-05-21T05:59:11Z
dc.date.available: 2017-07-23T08:30:04Z
dc.date.copyright: 2014
dc.date.issued: 2014
dc.identifier.citation: Zhong, J. (2014). Parallel graph processing on graphics processing units. Doctoral thesis, Nanyang Technological University, Singapore.
dc.identifier.uri: http://hdl.handle.net/10356/59975
dc.description.abstract: Graphs are the de facto data structure for many applications, and efficient graph processing is essential to application performance. Recently, the GPU (Graphics Processing Unit) has been adopted to accelerate specific graph algorithms such as BFS and shortest path, because GPUs offer an order of magnitude higher computational power and memory bandwidth than CPUs. However, the superb hardware power of the GPU does not automatically translate into superb graph processing performance; building parallel graph processing systems on GPUs raises challenges in both efficiency and programmability. Despite the recently improved programmability of GPUs, it is difficult to write correct and efficient GPU programs, and even more difficult for graph processing due to the irregularity of graph structures. Worse, there is no efficient method or runtime system on the GPU to support concurrent graph processing tasks from multiple applications and/or users. To address these challenges, we develop the Medusa system to simplify parallel graph processing on the GPU and to support high-throughput execution of concurrent GPU tasks. This thesis presents the design, implementation and experimental evaluation of Medusa, followed by detailed case studies of Medusa in real-world graph applications. To democratize GPU-accelerated graph processing, Medusa provides a programming framework that enables users to harness the power of GPUs by writing sequential C code. In particular, Medusa offers a small set of APIs with which developers define their application logic, and a runtime system that automatically executes the user-defined functions in parallel on GPUs. We further develop a series of graph-centric optimizations based on GPU architectural features to improve the efficiency of Medusa.
Building on the optimized single-GPU system, we further extend Medusa to execute on multiple GPUs within a machine and develop optimizations for computation and data transfer. With the improved programmability of Medusa, users can implement their graph processing algorithms and submit multiple tasks to the GPU for concurrent execution, which calls for an optimized runtime system to manage concurrent GPU tasks. In practice, GPUs are usually deployed in shared environments such as clusters and clouds, where many tasks are submitted to GPUs by different users and throughput is an important metric for both performance and total cost of ownership. We therefore propose Kernelet as the runtime back end to optimize the throughput of concurrent kernel executions on the GPU. Kernelet embraces transparent memory and PCIe data transfer management, as well as dynamic slicing and scheduling techniques, to support efficient concurrent kernel execution. With slicing, Kernelet divides a GPU kernel into multiple sub-kernels (called slices); each slice has tunable occupancy, allowing it to be co-scheduled with other slices for high GPU utilization. We develop a novel Markov chain-based performance model to guide the scheduling decision. Our experimental results demonstrate up to 31% and 23% performance improvement for our benchmark kernels on NVIDIA Tesla C2050 and GTX 680 GPUs, respectively, and Kernelet improves the throughput of concurrent Medusa applications by 29% on the C2050. As a case study, we collaborate with our colleagues to adopt Medusa in a real research project: GPU-accelerated simulation of information propagation over large-scale, complex social networks. For a number of applications, Medusa serves as the base infrastructure, on top of which our colleagues develop additional application-specific optimizations for further performance improvement.
This case study demonstrates that the programmability of Medusa improves coding productivity for domain-specific applications, and that Medusa brings significant performance speedups by leveraging GPUs.
dc.format.extent: 142 p.
dc.language.iso: en
dc.subject: DRNTU::Engineering::Computer science and engineering
dc.title: Parallel graph processing on graphics processing units
dc.type: Thesis
dc.contributor.school: School of Computer Engineering
dc.contributor.supervisor: He Bingsheng
dc.description.degree: DOCTOR OF PHILOSOPHY (SCE)


Files in this item

main.pdf (2.989 MB, application/pdf)
