Title: Deep learning for graph structured data
Authors: Dwivedi Vijay Prakash
Keywords: Computer and Information Science
Issue Date: 2024
Publisher: Nanyang Technological University
Source: Dwivedi Vijay Prakash (2024). Deep learning for graph structured data. Doctoral thesis, Nanyang Technological University, Singapore.
Abstract: Graph-structured data is ubiquitous across diverse domains, representing valuable relational information between entities. However, most deep learning techniques, such as convolutional and recurrent neural networks, are tailored to grid-structured data and struggle to handle graphs. This has led to growing interest in graph representation learning using graph neural networks (GNNs), which integrate graph structure into neural network layers, typically through message passing. However, several challenges remain, including the lack of rigorous benchmarks, limitations in model expressiveness, and poor scalability. This thesis aims to advance graph representation learning by tackling these key challenges. First, it develops comprehensive benchmarks for the standardized assessment of GNNs, including medium-scale supervised and semi-supervised node-, edge-, and graph-level classification tasks across domains such as social networks, computer vision, and combinatorial optimization. The thesis also introduces a novel benchmark specifically designed to test the modeling of long-range interactions in larger graphs. Second, the thesis develops new GNN architectures with higher expressivity and better generalization. It extends Transformer networks to the graph domain by introducing graph-based inductive biases, such as leveraging sparsity and designing Laplacian positional encodings. Another technique learns separate structural and positional representations in GNNs using informative graph-diffusion features, which significantly boosts model capacity. Finally, the thesis addresses scaling graph models, in particular Graph Transformers, to massive graphs. It investigates design principles such as incorporating efficient local and global graph representations, and then proposes a scalable Graph Transformer framework that uses novel neighborhood-sampling and global-attention schemes to capture both local structure and global dependencies in very large graphs. Overall, through rigorous benchmarks, expressive architectures, and scalable models, this thesis makes significant contributions toward advancing deep learning on graph-structured data across multiple fronts. These techniques pave the way for the adoption of GNNs in real-world applications involving complex relational data.
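As context for the message-passing formulation the abstract refers to, the following is a minimal illustrative sketch of a single GNN message-passing layer. The mean-neighbor aggregation, ReLU nonlinearity, weight shapes, and toy path graph are assumptions chosen for clarity; they do not represent the specific architectures developed in the thesis.

```python
import numpy as np

def message_passing_layer(H, A, W_self, W_neigh):
    """One generic message-passing layer: each node aggregates (here,
    averages) its neighbors' features, combines them with its own
    features via learnable weights, and applies a ReLU.

    H:       (n, d) node feature matrix
    A:       (n, n) binary adjacency matrix
    W_self:  (d, d_out) weights for a node's own features
    W_neigh: (d, d_out) weights for aggregated neighbor features
    """
    deg = A.sum(axis=1, keepdims=True)   # node degrees, shape (n, 1)
    deg = np.maximum(deg, 1.0)           # guard isolated nodes
    neigh_mean = (A @ H) / deg           # mean over each node's neighbors
    return np.maximum(0.0, H @ W_self + neigh_mean @ W_neigh)

# Toy 3-node path graph: 0 - 1 - 2, with one-hot input features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)
rng = np.random.default_rng(0)
W_self = rng.normal(size=(3, 4))
W_neigh = rng.normal(size=(3, 4))
H1 = message_passing_layer(H, A, W_self, W_neigh)
print(H1.shape)  # (3, 4): each node now has a 4-dimensional representation
```

Stacking k such layers lets information propagate across k hops, which is also why capturing long-range interactions, as benchmarked in the thesis, is hard for shallow message-passing models.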
DOI: 10.32657/10356/175787
Schools: School of Computer Science and Engineering 
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Theses

Files in This Item:
File: mythesis.pdf | Size: 11.34 MB | Format: Adobe PDF

Page view(s): Updated on Jul 20, 2024

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.