Title: Real-time LiDAR point cloud compression using bi-directional prediction and range-adaptive floating-point coding
Authors: Zhao, Lili
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2022
Source: Zhao, L., Ma, K., Lin, X., Wang, W. & Chen, J. (2022). Real-time LiDAR point cloud compression using bi-directional prediction and range-adaptive floating-point coding. IEEE Transactions on Broadcasting, 68(3), 620-635. https://dx.doi.org/10.1109/TBC.2022.3162406
Project: M4082184
Journal: IEEE Transactions on Broadcasting
Abstract: Due to the large amount of data in three-dimensional (3D) LiDAR point clouds, point cloud compression (PCC) is indispensable to many real-time applications. In the autonomous driving of connected vehicles, for example, point clouds are continuously acquired over time and must be compressed. Among existing PCC methods, very few effectively remove the temporal redundancy inherent in the point clouds. To address this issue, this paper proposes a novel lossy LiDAR PCC system consisting of inter-frame coding and intra-frame coding. For the former, a deep-learning approach is proposed that performs bi-directional frame prediction using an asymmetric residual module and 3D space-time convolutions; the proposed network is called the bi-directional prediction network (BPNet). For the latter, a novel range-adaptive floating-point coding (RAFC) algorithm is proposed for encoding the reference frames and the B-frame prediction residuals in 32-bit floating-point precision. Since the pixel-value distributions of these two types of data are quite different, various encoding modes are designed to provide adaptive selection. Extensive simulation experiments conducted on multiple point cloud datasets clearly show that the proposed PCC system consistently outperforms the state-of-the-art MPEG G-PCC in terms of data fidelity and localization, while delivering real-time performance.
URI: https://hdl.handle.net/10356/163768
ISSN: 0018-9316
DOI: 10.1109/TBC.2022.3162406
Rights: © 2022 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
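The abstract does not detail how RAFC's encoding modes work. As background only, the sketch below shows the IEEE-754 bit layout that any 32-bit floating-point codec operates on: splitting a float32 into sign, exponent, and mantissa fields, which a range-adaptive scheme could then entropy-code separately. The function names are hypothetical and this is not the paper's algorithm.

```python
import struct

def float32_fields(x: float):
    """Decompose an IEEE-754 float32 into its (sign, exponent, mantissa) bit fields.

    Illustrative only: RAFC in the paper selects among multiple encoding modes,
    which are not described in the abstract.
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # raw 32-bit pattern
    sign = bits >> 31              # 1 bit
    exponent = (bits >> 23) & 0xFF # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF     # 23 bits
    return sign, exponent, mantissa

def fields_to_float32(sign: int, exponent: int, mantissa: int) -> float:
    """Reassemble the bit fields into a float32 value (lossless round trip)."""
    bits = (sign << 31) | (exponent << 23) | mantissa
    return struct.unpack(">f", struct.pack(">I", bits))[0]
```

For example, `float32_fields(1.0)` yields `(0, 127, 0)`, since 1.0 has a biased exponent of 127 and an all-zero mantissa; the round trip through `fields_to_float32` is exact for any float32 value.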
Appears in Collections: EEE Journal Articles
Updated on Jan 31, 2023
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.