Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/143545
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shi, Hanyu | en_US |
dc.contributor.author | Lin, Guosheng | en_US |
dc.contributor.author | Wang, Hao | en_US |
dc.contributor.author | Hung, Tzu-Yi | en_US |
dc.contributor.author | Wang, Zhenhua | en_US |
dc.date.accessioned | 2020-09-08T06:55:28Z | - |
dc.date.available | 2020-09-08T06:55:28Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Shi, H., Lin, G., Wang, H., Hung, T.-Y., & Wang, Z. (2020). SpSequenceNet : semantic segmentation network on 4D point clouds. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020. doi:10.1109/CVPR42600.2020.00463 | en_US |
dc.identifier.uri | https://hdl.handle.net/10356/143545 | - |
dc.description.abstract | Point clouds are useful in many applications like autonomous driving and robotics as they provide natural 3D information of the surrounding environments. While there is extensive research on 3D point clouds, scene understanding on 4D point clouds, a series of consecutive 3D point cloud frames, is an emerging and still under-investigated topic. With 4D point clouds (3D point cloud videos), robotic systems could enhance their robustness by leveraging the temporal information from previous frames. However, the existing semantic segmentation methods on 4D point clouds suffer from low precision due to the spatial and temporal information loss in their network structures. In this paper, we propose SpSequenceNet to address this problem. The network is designed based on 3D sparse convolution, and it includes two novel modules, a cross-frame global attention module and a cross-frame local interpolation module, to capture spatial and temporal information in 4D point clouds. We conduct extensive experiments on SemanticKITTI, and achieve the state-of-the-art result of 43.1% on mIoU, which is 1.5% higher than the previous best approach. | en_US |
dc.description.sponsorship | Ministry of Education (MOE) | en_US |
dc.description.sponsorship | National Research Foundation (NRF) | en_US |
dc.language.iso | en | en_US |
dc.relation | Delta-NTU Corporate Lab | en_US |
dc.relation | AISG-RP-2018-003 | en_US |
dc.relation | RG22/19 (S) | en_US |
dc.rights | © 2020 The Author(s) (published by IEEE). This is an open-access article distributed under the terms of the Creative Commons Attribution License. | en_US |
dc.subject | Engineering::Computer science and engineering | en_US |
dc.title | SpSequenceNet : semantic segmentation network on 4D point clouds | en_US |
dc.type | Conference Paper | en |
dc.contributor.school | School of Computer Science and Engineering | en_US |
dc.contributor.conference | IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020 | en_US |
dc.contributor.organization | Zhejiang University of Technology | en_US |
dc.identifier.doi | 10.1109/CVPR42600.2020.00463 | - |
dc.description.version | Published version | en_US |
dc.subject.keywords | Segmentation | en_US |
dc.subject.keywords | Computer Vision | en_US |
dc.citation.conferencelocation | Seattle, Washington, USA. | en_US |
dc.description.acknowledgement | This work is supported by the Delta-NTU Corporate Lab with funding support from Delta Electronics Inc. and the National Research Foundation (NRF) Singapore. This work is also partly supported by the National Research Foundation Singapore under its AI Singapore Programme (Award Number: AISG-RP-2018-003), the MOE Tier-1 research grant: RG22/19 (S), and the National Natural Science Foundation of China (61802348). | en_US |
item.fulltext | With Fulltext | - |
item.grantfulltext | open | - |
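The abstract describes two temporal modules, including a cross-frame global attention module that draws context from the previous frame. As a rough illustration only, here is a minimal NumPy sketch of one simple form such a module could take, gating current-frame features with a global descriptor pooled from the previous frame. All names below are hypothetical; the actual SpSequenceNet design uses 3D sparse convolutions and is more involved than this sketch.

```python
import numpy as np

def cross_frame_global_attention(curr_feats, prev_feats):
    """Hypothetical sketch: re-weight current-frame point features
    using a global descriptor pooled from the previous frame.

    curr_feats: (N, C) features for N points in the current frame
    prev_feats: (M, C) features for M points in the previous frame
    """
    # Global descriptor of the previous frame (channel-wise max pooling).
    global_prev = prev_feats.max(axis=0)        # shape (C,)
    # Sigmoid gate derived from the previous frame's global context.
    gate = 1.0 / (1.0 + np.exp(-global_prev))   # shape (C,), values in (0, 1)
    # Channel-wise re-weighting of the current frame by the temporal gate.
    return curr_feats * gate                    # shape (N, C)

rng = np.random.default_rng(0)
curr = rng.standard_normal((100, 16))   # current frame: 100 points, 16 channels
prev = rng.standard_normal((120, 16))   # previous frame: 120 points, 16 channels
out = cross_frame_global_attention(curr, prev)
print(out.shape)  # (100, 16)
```

Because the gate lies in (0, 1), the module can only attenuate current-frame channels in this toy form; a learned attention, as in the paper, would be parameterized and trained end-to-end.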
Appears in Collections: | SCSE Conference Papers |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
SpSequenceNet___semantic_segmentation_network_on_4D_point_cloud__7_.pdf | | 2.34 MB | Adobe PDF |
Scopus™ citations: 58 (updated Mar 13, 2024)
Web of Science™ citations: 40 (updated Oct 29, 2023)
Page views: 347 (updated Mar 28, 2024)
Downloads: 204 (updated Mar 28, 2024)