Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/138274
Full metadata record
DC Field | Value | Language
dc.contributor.author | Song, Guoxian | en_US
dc.contributor.author | Cai, Jianfei | en_US
dc.contributor.author | Cham, Tat-Jen | en_US
dc.contributor.author | Zheng, Jianmin | en_US
dc.contributor.author | Zhang, Juyong | en_US
dc.contributor.author | Fuchs, Henry | en_US
dc.date.accessioned | 2020-04-30T02:36:32Z | -
dc.date.available | 2020-04-30T02:36:32Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | Song, G., Cai, J., Cham, T.-J., Zheng, J., Zhang, J., & Fuchs, H. (2018). Real-time 3D face-eye performance capture of a person wearing VR headset. Proceedings of the 26th ACM international conference on Multimedia (MM '18), 923-931. doi:10.1145/3240508.3240570 | en_US
dc.identifier.isbn | 9781450356657 | -
dc.identifier.uri | https://hdl.handle.net/10356/138274 | -
dc.description.abstract | Teleconferencing and telepresence based on virtual reality (VR) head-mounted display (HMD) devices are promising applications, since an HMD can provide an immersive experience for its users. However, facilitating face-to-face communication between HMD users requires real-time 3D facial performance capture of a person wearing an HMD, which is very challenging due to the large occlusion caused by the device. The few existing solutions are complex in either setup or approach, and lack capture of 3D eye gaze movement. In this paper, we propose a convolutional neural network (CNN) based solution for real-time 3D face-eye performance capture of HMD users without complex modifications to the device. To address the lack of training data, we generate a massive paired HMD face-label dataset by data synthesis, and collect a VR-IR eye dataset from multiple subjects. We then train a dense-fitting network for the facial region and an eye gaze network that regresses 3D eye model parameters. Extensive experimental results demonstrate that our system efficiently and effectively produces, in real time, a vivid personalized 3D avatar with the correct identity, pose, expression and eye motion of the HMD user. | en_US
dc.description.sponsorship | NRF (Natl Research Foundation, S'pore) | en_US
dc.language.iso | en | en_US
dc.rights | © 2018 Association for Computing Machinery. All rights reserved. | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Real-time 3D face-eye performance capture of a person wearing VR headset | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.contributor.conference | 26th ACM international conference on Multimedia (MM '18) | en_US
dc.contributor.research | Institute for Media Innovation (IMI) | en_US
dc.identifier.doi | 10.1145/3240508.3240570 | -
dc.identifier.scopus | 2-s2.0-85058230557 | -
dc.identifier.spage | 923 | en_US
dc.identifier.epage | 931 | en_US
dc.subject.keywords | 3D Facial Reconstruction | en_US
dc.subject.keywords | Gaze Estimation | en_US
dc.citation.conferencelocation | Seoul, Republic of Korea | en_US
item.fulltext | No Fulltext | -
item.grantfulltext | none | -
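The abstract above describes a two-network inference pipeline: a dense-fitting network regresses 3D face model parameters (identity, pose, expression) from the partially occluded face region, while a separate eye gaze network regresses 3D eye model parameters from IR eye images captured inside the headset. The sketch below illustrates only that per-frame data flow; the parameter dimensions, function names, and the random-linear-map stand-ins for trained CNNs are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_stub(image, out_dim):
    """Stand-in for a trained CNN: flatten the input and apply a random
    linear map. A real system would use trained convolutional layers."""
    x = image.reshape(-1)
    w = rng.standard_normal((out_dim, x.size)) * 0.01
    return w @ x

def face_net(face_crop):
    """Hypothetical dense-fitting network: regresses 3D face model
    parameters from the visible face region. The 50/25/6 parameter
    split is an illustrative choice, not taken from the paper."""
    p = conv_stub(face_crop, 50 + 25 + 6)
    return {
        "identity":   p[:50],    # shape coefficients
        "expression": p[50:75],  # blendshape weights
        "pose":       p[75:],    # rotation + translation
    }

def gaze_net(ir_eye_patch):
    """Hypothetical eye gaze network: regresses 3D eye model parameters
    (here, pitch/yaw per eye) from an in-headset IR eye image."""
    return conv_stub(ir_eye_patch, 4)

# Per-frame inference: combine both regressors to drive the avatar.
face_crop = rng.random((64, 64, 3))  # RGB crop of the unoccluded lower face
eye_patch = rng.random((32, 32))     # IR eye image from the in-HMD camera
avatar = {**face_net(face_crop), "eye": gaze_net(eye_patch)}
print(sorted(avatar))  # → ['expression', 'eye', 'identity', 'pose']
```

In the paper's setting, the regressed face and eye parameters would then drive a personalized 3D avatar in real time; here they are simply collected into one parameter dictionary per frame.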
Appears in Collections:IMI Conference Papers

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.