Please use this identifier to cite or link to this item:
Title: Model-based referenceless quality metric of 3D synthesized images using local image description
Authors: Gu, Ke; Jakhetiya, V.; Qiao, J.-F.; Li, X.; Lin, W.; Thalmann, D.
Keywords: Engineering::Computer science and engineering
Issue Date: 2017
Source: Gu, K., Jakhetiya, V., Qiao, J.-F., Li, X., Lin, W., & Thalmann, D. (2018). Model-based referenceless quality metric of 3D synthesized images using local image description. IEEE Transactions on Image Processing, 27(1), 394-405. doi:10.1109/TIP.2017.2733164
Journal: IEEE Transactions on Image Processing
Abstract: New challenges have emerged alongside 3D-related technologies such as virtual reality, augmented reality (AR), and mixed reality. Free-viewpoint video (FVV), which allows flexible selection of viewing direction and viewpoint and has applications in remote surveillance, remote education, and beyond, has been regarded as the development direction of next-generation video technologies and has drawn attention from a wide range of researchers. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. However, existing assessment metrics do not reflect human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method compared with prevailing full-, reduced-, and no-reference models.
URI: https://hdl.handle.net/10356/142320
ISSN: 1057-7149
DOI: 10.1109/TIP.2017.2733164
Rights: © 2017 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: SCSE Journal Articles
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.
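The abstract describes the core idea: predict each pixel of the DIBR-synthesized image with a local autoregression (AR) model, take the reconstruction error between the image and its AR-predicted version as a geometric-distortion map, and pool that map with visual saliency. The following is only a hypothetical sketch of that idea, not the authors' actual model: the 8-neighbour predictor, the single global least-squares fit, and the gradient-magnitude stand-in for a saliency map are all assumptions made for illustration.

```python
import numpy as np

def ar_residual_score(img):
    """Hypothetical sketch of an AR-residual quality score.

    Fits one linear (AR-like) predictor of each interior pixel from its
    8 neighbours by least squares, then pools the absolute prediction
    residual with a crude gradient-magnitude "saliency" weight.
    Higher scores suggest stronger local (e.g. geometric) distortion.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    # 8-neighbour offsets around each interior pixel.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
    core = img[1:h - 1, 1:w - 1].ravel()
    # Feature matrix: one column per shifted copy of the interior.
    X = np.stack([img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].ravel()
                  for dy, dx in shifts], axis=1)
    coef, *_ = np.linalg.lstsq(X, core, rcond=None)
    residual = np.abs(core - X @ coef).reshape(h - 2, w - 2)
    # Gradient magnitude as a simple stand-in for a visual-saliency map.
    gy, gx = np.gradient(img[1:h - 1, 1:w - 1])
    sal = np.hypot(gx, gy) + 1e-8
    return float((residual * sal).sum() / sal.sum())
```

On a smooth (e.g. linear-ramp) image the AR predictor reconstructs every pixel almost exactly and the score is near zero; injecting a localized disturbance raises the residual there and hence the score, which is the behaviour the metric relies on.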