Title: Learning multi-modal scale-aware attentions for efficient and robust road segmentation
Authors: Zhou, Yunjiao
Keywords: Engineering::Electrical and electronic engineering::Computer hardware, software and systems
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Zhou, Y. (2022). Learning multi-modal scale-aware attentions for efficient and robust road segmentation. Master's thesis, Nanyang Technological University, Singapore.
Abstract: Multi-modal fusion has proven to be beneficial to road segmentation in autonomous driving, where depth is commonly used as complementary data for RGB images to provide robust 3D geometry information. Existing methods adopt an encoder-decoder structure that fuses the two modalities for segmentation by encoding and concatenating high-level and low-level features. However, this widens the semantic gaps not only among modalities but also across scales, which is detrimental to road segmentation. To overcome this challenge and obtain robust features, we propose a Multi-modal Scale-aware Attention Network (MSAN) that fuses RGB and depth data effectively via a novel transformer-based cross-attention module, namely the Multi-modal Scale-aware Transformer (MST), which fuses RGB-D features across multiple scales at the encoder stage. To better consolidate features at different scales, we further propose a Scale-aware Attention Module (SAM) that captures channel-wise attention for cross-scale fusion. The two attention-based modules focus on exploiting the complementarity of the modalities and the varying importance of scales to narrow these gaps for road segmentation. Extensive experiments demonstrate that our method achieves competitive segmentation performance at a low computational cost.
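The two mechanisms the abstract describes can be illustrated in miniature: cross-attention that lets RGB tokens query depth features, followed by a channel-wise gate over the fused features. This is a minimal NumPy sketch under assumed shapes and names (`cross_attention`, `channel_attention` are illustrative, not the thesis's MST/SAM implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(rgb, depth):
    """Cross-modal fusion sketch: queries come from RGB tokens,
    keys/values from depth tokens, so RGB features attend to the
    complementary 3D-geometry modality. Shapes: (tokens, channels)."""
    d = rgb.shape[-1]
    scores = rgb @ depth.T / np.sqrt(d)        # (N_rgb, N_depth) similarities
    return softmax(scores, axis=-1) @ depth    # depth-informed RGB features

def channel_attention(feats):
    """Channel-wise attention sketch for cross-scale consolidation:
    global-average-pool over tokens, then sigmoid-gate each channel."""
    pooled = feats.mean(axis=0)                # (channels,) per-channel summary
    gate = 1.0 / (1.0 + np.exp(-pooled))       # sigmoid weight per channel
    return feats * gate                        # reweight channels

# Toy usage: 16 tokens with 8 channels per modality.
rng = np.random.default_rng(0)
rgb_feats = rng.standard_normal((16, 8))
depth_feats = rng.standard_normal((16, 8))
fused = channel_attention(cross_attention(rgb_feats, depth_feats))
```

In the actual MSAN, the cross-attention is applied at multiple encoder scales and the channel gating is learned, but the data flow follows this pattern.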
Schools: School of Electrical and Electronic Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Theses

Files in This Item:
File: [ZHOU YUNJIAO(G2103567G)]_revised.pdf (Restricted Access), 14.72 MB, Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.