Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/72092
Title: GPU-based commonsense reasoning for real-time query answering and multimodal analysis
Authors: Tran, Ha Nguyen
Keywords: DRNTU::Engineering::Computer science and engineering::Hardware::Performance and reliability
DRNTU::Engineering::Computer science and engineering::Information systems::Information systems applications
Issue Date: 2017
Source: Tran, H. N. (2017). GPU-based commonsense reasoning for real-time query answering and multimodal analysis. Doctoral thesis, Nanyang Technological University, Singapore.
Abstract: A commonsense knowledge base is a set of facts capturing the information possessed by an ordinary person. It is also called a fundamental ontology, as it consists of very general concepts spanning all domains. To represent such knowledge bases in practice, several approaches have been proposed in recent years, most of which fall into either graph-based or rule-based knowledge representations. Reasoning and querying over such representations raise two major implementation issues, performance and scalability, because many new concepts (mined from the Web or learned through crowd-sourcing) are continuously integrated into the knowledge base. Distributed-computing methods have recently been introduced to handle these very large networks through parallelism, yet the high communication cost between the participating machines remains an open problem. In recent years, Graphics Processing Units (GPUs) have become popular computing devices owing to their massive parallel execution power: a typical GPU consists of hundreds of cores running simultaneously. Modern general-purpose GPUs have been successfully adopted to accelerate heavy workloads such as relational join operations, fundamental large-scale graph algorithms, and big data analytics. Encouraged by these promising results, this dissertation investigates whether and how GPUs can be leveraged to accelerate commonsense reasoning and query answering systems on large-scale networks. First, to address reasoning and querying on large-scale graph-based commonsense knowledge bases, the thesis presents a GPU-friendly method, called GpSense, for the subgraph matching problem, which is the core operation of commonsense reasoning and query answering systems.
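The abstract does not give GpSense's kernels, but the general filter-then-join idea behind subgraph matching can be sketched on the CPU. In this illustrative (non-GPU) sketch, the filtering phase prunes candidate data nodes per query node by label, and the joining phase extends partial matches while checking edge consistency; all names and data structures here are assumptions, not the thesis's actual implementation.

```python
# Minimal CPU sketch of a filtering-and-joining approach to subgraph
# matching (illustrative only). Graphs are adjacency dicts mapping a
# node to the set of its neighbours; labels are per-node dicts.

def filter_candidates(query_labels, data_labels, data_adj):
    """Filtering phase: for each query node, keep data nodes whose
    label matches (a degree check could prune candidates further)."""
    return {q: {v for v, lbl in data_labels.items() if lbl == qlbl}
            for q, qlbl in query_labels.items()}

def join_matches(query_adj, candidates, data_adj):
    """Joining phase: extend partial matches one query node at a time,
    checking edge consistency against the data graph and backtracking
    on failure."""
    order = list(query_adj)
    results = []

    def extend(partial):
        if len(partial) == len(order):
            results.append(dict(partial))
            return
        q = order[len(partial)]
        for v in candidates[q]:
            if v in partial.values():
                continue  # enforce an injective mapping
            # every already-matched query neighbour of q must map to a
            # data neighbour of v
            if all(partial[p] in data_adj[v]
                   for p in query_adj[q] if p in partial):
                partial[q] = v
                extend(partial)
                del partial[q]

    extend({})
    return results
```

For example, matching a triangle query against a data graph that contains one triangle plus a pendant node yields the six ordered embeddings of that triangle. On a GPU, the filtering phase is naturally data-parallel (one thread per candidate), which is what makes this decomposition attractive for massively parallel hardware.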
Our approach is based on a novel filtering-and-joining strategy well suited to massively parallel architectures. To optimize performance further, we apply a series of techniques that increase GPU occupancy, reduce workload imbalance, and, in particular, speed up subgraph matching on commonsense graphs. To handle large graphs that cannot fit into GPU memory, we propose a multi-level graph compression technique that reduces graph size while preserving all subgraph matching results: the method converts the data graph into a weighted graph small enough to be kept in GPU memory. To demonstrate the efficiency of our solution, we perform an extensive evaluation of GpSense against state-of-the-art subgraph matching algorithms. Experiments on both real and synthetic data show that our implementation scales linearly and outperforms optimized CPU-based competitors. Second, to reason over and retrieve information from rule-based knowledge bases, the thesis introduces gSparql, a fast and scalable inference and querying method for mass-storage RDF data with rule-based entailment regimes. Our approach accepts different rulesets and performs reasoning at query time, when the inferred triples are determined by the set of triple patterns in the query. To answer SPARQL queries in parallel, we first present a query rewriting algorithm that expands the queries and eliminates redundant triple patterns based on the rulesets. We then convert the execution plan into a series of primitives, such as sort, merge, prefix scan, and compaction, that can be executed efficiently on GPU devices. To overcome the problem of triple duplication, we combine a Bloom filter with sort-merge algorithms on the GPU.
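The Bloom-filter-plus-sort-merge combination for eliminating duplicate triples can be illustrated with a small CPU sketch. The idea, as the abstract describes it, is that the Bloom filter cheaply rules out definite non-duplicates, and only probable duplicates fall back to an exact check against the sorted triple store; the class and function names below are hypothetical, not gSparql's API.

```python
import bisect
import hashlib

class BloomFilter:
    """Tiny Bloom filter: k hash probes into a fixed bit array.
    False positives are possible; false negatives are not."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _probes(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p] = 1

    def maybe_contains(self, item):
        return all(self.bits[p] for p in self._probes(item))

def dedup_new_triples(existing, inferred):
    """Drop inferred triples already present in `existing`. The Bloom
    filter discards definite non-duplicates without touching the triple
    store; only probable duplicates are checked exactly against the
    sorted triples (a stand-in for the GPU sort-merge phase)."""
    bf = BloomFilter()
    for t in existing:
        bf.add(t)
    existing_sorted = sorted(existing)

    def present(t):
        i = bisect.bisect_left(existing_sorted, t)
        return i < len(existing_sorted) and existing_sorted[i] == t

    return [t for t in inferred
            if not (bf.maybe_contains(t) and present(t))]
```

Because a Bloom filter never yields false negatives, a triple that fails `maybe_contains` is guaranteed new and can skip the exact check entirely; that is what makes the filter an effective pre-pass before the more expensive sort-merge.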
Experimental results on the LUBM dataset show that our solution outperforms the state-of-the-art Jena method on large datasets. Finally, we utilize commonsense knowledge bases to address the problem of real-time multimodal analysis. In particular, we focus on multimodal sentiment analysis: the simultaneous analysis of different modalities, e.g., speech and video, for emotion and polarity detection. Our approach takes advantage of the massively parallel processing power of modern GPUs to speed up feature extraction from the different modalities. In addition, to extract salient textual features from multimodal sources, we generate domain-specific graphs based on commonsense knowledge and apply GPU-based graph traversal for fast feature detection. Powerful ELM classifiers are then applied to build the sentiment analysis model from the extracted features. We conduct our experiments on the YouTube dataset and achieve an accuracy of 78%, outperforming all previous systems. In terms of processing speed, our method achieves improvements of several orders of magnitude for feature extraction over CPU-based counterparts.
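The abstract's "GPU-based graph traversal for fast feature detection" over domain-specific commonsense graphs can be pictured as a bounded breadth-first search from seed concepts. The sketch below is a CPU stand-in under assumed data structures (the graph shape, hop bound, and all names are illustrative, not the thesis's actual feature extractor).

```python
from collections import deque

def concepts_within_hops(graph, seeds, max_hops):
    """Bounded BFS from seed concepts: return every concept reachable
    within `max_hops` edges of a seed. A GPU version would expand each
    BFS frontier level in parallel, one thread per frontier node."""
    seen = {s: 0 for s in seeds}      # concept -> hop distance
    frontier = deque(seeds)
    while frontier:
        c = frontier.popleft()
        if seen[c] == max_hops:
            continue                  # do not expand past the bound
        for nxt in graph.get(c, ()):
            if nxt not in seen:
                seen[nxt] = seen[c] + 1
                frontier.append(nxt)
    return set(seen)
```

Given a small chain of related concepts, a two-hop traversal from a detected word collects its nearby commonsense neighbours, which could then serve as textual features for a downstream classifier.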
URI: http://hdl.handle.net/10356/72092
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Theses

Files in This Item:
File: Thesis.pdf
Description: Main article
Size: 1.72 MB
Format: Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.