Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/172661
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Tang, Zhe Jun | en_US |
dc.contributor.author | Cham, Tat-Jen | en_US |
dc.date.accessioned | 2023-12-19T05:23:12Z | - |
dc.date.available | 2023-12-19T05:23:12Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | Tang, Z. J. & Cham, T. (2022). MPT-Net: mask point transformer network for large scale point cloud semantic segmentation. 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 10611-10618. https://dx.doi.org/10.1109/IROS47612.2022.9981809 | en_US |
dc.identifier.isbn | 9781665479271 | - |
dc.identifier.uri | https://hdl.handle.net/10356/172661 | - |
dc.description.abstract | Point cloud semantic segmentation is important for road scene perception, a task for driverless vehicles to achieve full-fledged autonomy. In this work, we introduce Mask Point Transformer Network (MPT-Net), a novel architecture for point cloud segmentation that is simple to implement. MPT-Net consists of a local and global feature encoder and a transformer-based decoder: a 3D Point-Voxel Convolution encoder backbone with voxel self-attention to encode features, and a Mask Point Transformer (MPT) module to decode point features and segment the point cloud. Firstly, we introduce the novel MPT, designed specifically for point cloud segmentation. MPT offers two benefits: it attends to every point in the point cloud using mask tokens to extract class-specific features globally with cross-attention, and it provides inter-class feature information exchange using self-attention on the learned mask tokens. Secondly, we design a backbone that uses sparse point-voxel convolutional blocks and a transformer self-attention block to learn local and global contextual features. We evaluate MPT-Net on large-scale outdoor driving scene point cloud datasets, SemanticKITTI and nuScenes. Our experiments show that by replacing the standard segmentation head with MPT, MPT-Net outperforms our baseline approach by 3.8% on SemanticKITTI, achieving state-of-the-art performance, and is highly effective at detecting 'stuff' classes in point clouds. | en_US |
dc.language.iso | en | en_US |
dc.rights | © 2022 IEEE. All rights reserved. | en_US |
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision | en_US |
dc.title | MPT-Net: mask point transformer network for large scale point cloud semantic segmentation | en_US |
dc.type | Conference Paper | en |
dc.contributor.school | School of Computer Science and Engineering | en_US |
dc.contributor.conference | 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | en_US |
dc.identifier.doi | 10.1109/IROS47612.2022.9981809 | - |
dc.identifier.scopus | 2-s2.0-85146358620 | - |
dc.identifier.spage | 10611 | en_US |
dc.identifier.epage | 10618 | en_US |
dc.subject.keywords | Point Cloud Compression | en_US |
dc.subject.keywords | Representation Learning | en_US |
dc.citation.conferencelocation | Kyoto, Japan | en_US |
item.grantfulltext | none | - |
item.fulltext | No Fulltext | - |
Appears in Collections: | SCSE Conference Papers |
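The abstract describes the MPT decoder as a set of learned mask tokens that gather class-specific features from all points via cross-attention, exchange inter-class information via self-attention, and then score each point against each token. The sketch below illustrates that decoding pattern in NumPy; the dimensions, single-head attention, residual connections, and the final dot-product scoring are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Single-head scaled dot-product attention (projections omitted).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def mpt_decode(points, mask_tokens):
    # Cross-attention: each mask token attends to every point,
    # pooling class-specific features globally.
    tokens = mask_tokens + attention(mask_tokens, points, points)
    # Self-attention on the mask tokens: inter-class feature exchange.
    tokens = tokens + attention(tokens, tokens, tokens)
    # Per-point class logits: similarity of point features to each token.
    return points @ tokens.T  # shape (N, K)

rng = np.random.default_rng(0)
N, K, D = 1000, 19, 64  # points, classes (SemanticKITTI uses 19), channels
logits = mpt_decode(rng.standard_normal((N, D)),
                    rng.standard_normal((K, D)))
```

Each point's predicted class would then be `logits.argmax(axis=-1)`; in the full network the point features come from the sparse point-voxel encoder rather than random data.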