Title: No-reference view synthesis quality prediction for 3-D videos based on color-depth interactions
Authors: Shao, Feng
Keywords: Engineering::Computer science and engineering
Issue Date: 2017
Source: Shao, F., Yuan, Q., Lin, W., & Jiang, G. (2018). No-reference view synthesis quality prediction for 3-D videos based on color-depth interactions. IEEE Transactions on Multimedia, 20(3), 659-674. doi:10.1109/TMM.2017.2748460
Journal: IEEE Transactions on Multimedia
Abstract: In a 3-D video system, automatically predicting the quality of a synthesized 3-D video from the input color and depth videos is an urgent but very difficult task, since existing full-reference methods can only measure the perceptual quality of the video after synthesis. In this paper, a high-efficiency view synthesis quality prediction (HEVSQP) metric is proposed. Based on the derived VSQP model, which quantifies the influences of color distortions, depth distortions, and their interactions on the perceptual quality of 3-D synthesized video, color-involved and depth-involved VSQP indices are predicted separately and then combined to yield an HEVSQP index. Experimental results on our constructed NBU-3D Synthesized Video Quality Database demonstrate that the proposed HEVSQP achieves good performance on the entire database compared with other full-reference and no-reference video-quality assessment metrics.
URI: https://hdl.handle.net/10356/140031
ISSN: 1520-9210
DOI: 10.1109/TMM.2017.2748460
Rights: © 2017 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: SCSE Journal Articles
Updated on Feb 22, 2021
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.