Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/162762
Full metadata record
DC Field | Value | Language
dc.contributor.author | Zhang, Juyong | en_US
dc.contributor.author | Chen, Keyu | en_US
dc.contributor.author | Zheng, Jianmin | en_US
dc.date.accessioned | 2022-11-08T05:32:15Z | -
dc.date.available | 2022-11-08T05:32:15Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Zhang, J., Chen, K. & Zheng, J. (2020). Facial expression retargeting from human to avatar made easy. IEEE Transactions on Visualization and Computer Graphics, 28(2), 1274-1287. https://dx.doi.org/10.1109/TVCG.2020.3013876 | en_US
dc.identifier.issn | 1077-2626 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/162762 | -
dc.description.abstract | Facial expression retargeting from humans to virtual characters is a useful technique in computer graphics and animation. Traditional methods use markers or blendshapes to construct a mapping between the human and avatar faces. However, these approaches require a tedious 3D modeling process, and their performance relies on the modelers' experience. In this article, we propose a new solution to this cross-domain expression transfer problem via nonlinear expression embedding and expression domain translation. We first build low-dimensional latent spaces for the human and avatar facial expressions with variational autoencoders. Then we construct correspondences between the two latent spaces guided by geometric and perceptual constraints. Specifically, we design geometric correspondences to reflect geometric matching and utilize a triplet data structure to express users' perceptual preferences among avatar expressions. A user-friendly method is proposed to automatically generate triplets for a system that allows users to easily and efficiently annotate the correspondences. Using both geometric and perceptual correspondences, we train a network for expression domain translation from human to avatar. Extensive experimental results and user studies demonstrate that even nonprofessional users can apply our method to generate high-quality facial expression retargeting results with less time and effort. | en_US
dc.description.sponsorship | Ministry of Education (MOE) | en_US
dc.description.sponsorship | Nanyang Technological University | en_US
dc.language.iso | en | en_US
dc.relation | 04INS000518C130 | en_US
dc.relation | MOE 2017-T2-1-076 | en_US
dc.relation.ispartof | IEEE Transactions on Visualization and Computer Graphics | en_US
dc.rights | © 2020 IEEE. All rights reserved | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Facial expression retargeting from human to avatar made easy | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.identifier.doi | 10.1109/TVCG.2020.3013876 | -
dc.identifier.pmid | 32746288 | -
dc.identifier.scopus | 2-s2.0-85122431287 | -
dc.identifier.issue | 2 | en_US
dc.identifier.volume | 28 | en_US
dc.identifier.spage | 1274 | en_US
dc.identifier.epage | 1287 | en_US
dc.subject.keywords | Facial Expression Retargeting | en_US
dc.subject.keywords | Variational Autoencoder | en_US
dc.description.acknowledgement | This research was supported in part by the National Natural Science Foundation of China (No. 61672481), the Youth Innovation Promotion Association CAS (No. 2018495), Zhejiang Lab (No. 2019NB0AB03), the NTU Data Science and Artificial Intelligence Research Center (DSAIR) (No. 04INS000518C130), and the Ministry of Education, Singapore, under its MOE Tier-2 Grant (MOE 2017-T2-1-076). | en_US
item.grantfulltext | none | -
item.fulltext | No Fulltext | -
Appears in Collections: SCSE Journal Articles
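
For readers skimming the abstract above, the core technical idea is to embed human and avatar expressions in two separate latent spaces (learned with variational autoencoders) and then train a translation network between those spaces, supervised by geometric correspondences and user-annotated perceptual triplets. The sketch below is only a rough illustration of such a training objective, not the authors' implementation; the architecture, latent dimensions, loss weights, and all names (`DomainTranslator`, `translation_loss`, etc.) are assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the paper's code.
# Idea: map human-expression latent codes into the avatar latent space,
# supervised by (a) geometric correspondences (paired codes that should match)
# and (b) perceptual triplets (for a human code `anchor`, the user prefers
# avatar code `pos` over avatar code `neg`). All names are hypothetical.
import torch
import torch.nn as nn

class DomainTranslator(nn.Module):
    """Small MLP mapping a human latent code to an avatar latent code."""
    def __init__(self, dim_human=25, dim_avatar=25, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_human, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim_avatar),
        )

    def forward(self, z_human):
        return self.net(z_human)

def translation_loss(model, z_h_geo, z_a_geo, z_h_anchor, z_a_pos, z_a_neg,
                     margin=0.2, w_geo=1.0, w_perc=1.0):
    # Geometric term: translated human codes should match their
    # geometrically corresponding avatar codes.
    geo = nn.functional.mse_loss(model(z_h_geo), z_a_geo)
    # Perceptual term: the translated anchor should lie closer to the
    # user-preferred avatar code than to the rejected one (triplet margin).
    perc = nn.functional.triplet_margin_loss(
        model(z_h_anchor), z_a_pos, z_a_neg, margin=margin)
    return w_geo * geo + w_perc * perc
```

In the pipeline the abstract describes, the latent codes would come from the pretrained human and avatar VAEs, and the triplets from the proposed semi-automatic annotation step; the weighting between the geometric and perceptual terms here is an arbitrary placeholder.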

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.