Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/162762
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, Juyong | en_US |
dc.contributor.author | Chen, Keyu | en_US |
dc.contributor.author | Zheng, Jianmin | en_US |
dc.date.accessioned | 2022-11-08T05:32:15Z | - |
dc.date.available | 2022-11-08T05:32:15Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Zhang, J., Chen, K. & Zheng, J. (2020). Facial expression retargeting from human to avatar made easy. IEEE Transactions On Visualization and Computer Graphics, 28(2), 1274-1287. https://dx.doi.org/10.1109/TVCG.2020.3013876 | en_US |
dc.identifier.issn | 1077-2626 | en_US |
dc.identifier.uri | https://hdl.handle.net/10356/162762 | - |
dc.description.abstract | Facial expression retargeting from humans to virtual characters is a useful technique in computer graphics and animation. Traditional methods use markers or blendshapes to construct a mapping between the human and avatar faces. However, these approaches require a tedious 3D modeling process, and the performance relies on the modelers' experience. In this article, we propose a brand-new solution to this cross-domain expression transfer problem via nonlinear expression embedding and expression domain translation. We first build low-dimensional latent spaces for the human and avatar facial expressions with variational autoencoders. Then we construct correspondences between the two latent spaces guided by geometric and perceptual constraints. Specifically, we design geometric correspondences to reflect geometric matching and utilize a triplet data structure to express users' perceptual preference of avatar expressions. We also propose a user-friendly method that automatically generates triplets, allowing users to annotate the correspondences easily and efficiently. Using both geometric and perceptual correspondences, we train a network for expression domain translation from human to avatar. Extensive experimental results and user studies demonstrate that even nonprofessional users can apply our method to generate high-quality facial expression retargeting results with less time and effort. | en_US |
dc.description.sponsorship | Ministry of Education (MOE) | en_US |
dc.description.sponsorship | Nanyang Technological University | en_US |
dc.language.iso | en | en_US |
dc.relation | 04INS000518C130 | en_US |
dc.relation | MOE 2017-T2-1-076 | en_US |
dc.relation.ispartof | IEEE Transactions on Visualization and Computer Graphics | en_US |
dc.rights | © 2020 IEEE. All rights reserved | en_US |
dc.subject | Engineering::Computer science and engineering | en_US |
dc.title | Facial expression retargeting from human to avatar made easy | en_US |
dc.type | Journal Article | en |
dc.contributor.school | School of Computer Science and Engineering | en_US |
dc.identifier.doi | 10.1109/TVCG.2020.3013876 | - |
dc.identifier.pmid | 32746288 | - |
dc.identifier.scopus | 2-s2.0-85122431287 | - |
dc.identifier.issue | 2 | en_US |
dc.identifier.volume | 28 | en_US |
dc.identifier.spage | 1274 | en_US |
dc.identifier.epage | 1287 | en_US |
dc.subject.keywords | Facial Expression Retargeting | en_US |
dc.subject.keywords | Variational Autoencoder | en_US |
dc.description.acknowledgement | This research was supported in part by the National Natural Science Foundation of China (No. 61672481), Youth Innovation Promotion Association CAS (No. 2018495), Zhejiang Lab (No. 2019NB0AB03), NTU Data Science and Artificial Intelligence Research Center (DSAIR) (No. 04INS000518C130), and the Ministry of Education, Singapore, under its MoE Tier-2 Grant (MoE 2017-T2-1-076). | en_US |
item.grantfulltext | none | - |
item.fulltext | No Fulltext | - |
Appears in Collections: | SCSE Journal Articles |
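
The abstract above outlines a three-part pipeline: variational autoencoders that embed human and avatar expressions into low-dimensional latent spaces, geometric and triplet-based perceptual correspondences between those spaces, and a translation network trained on both. The sketch below is a minimal illustration of that structure under stated assumptions, not the authors' implementation; all architectures, dimensions, loss weights, and names (`ExpressionVAE`, `DomainTranslator`, `translation_loss`) are invented for the example.

```python
# Minimal sketch (not the paper's code) of the pipeline described in the abstract:
# VAE latent spaces for human and avatar expressions, plus a small translation
# network trained with geometric-correspondence and triplet (perceptual) losses.
# All dimensions, architectures, weights, and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpressionVAE(nn.Module):
    """Encodes a flattened expression vector (e.g. mesh vertices) into a latent code."""

    def __init__(self, in_dim, latent_dim=25):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar


def vae_loss(recon, x, mu, logvar, kld_weight=1e-3):
    """Reconstruction term plus KL divergence to the standard normal prior."""
    rec = F.mse_loss(recon, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld_weight * kld


class DomainTranslator(nn.Module):
    """Maps human expression latent codes to avatar expression latent codes."""

    def __init__(self, latent_dim=25):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))

    def forward(self, z_human):
        return self.net(z_human)


def translation_loss(translator, z_human, z_avatar_geo,
                     anchor, positive, negative, margin=0.2, w_triplet=1.0):
    """Geometric correspondence term + triplet term for perceptual preference.

    z_human / z_avatar_geo: latent codes of geometrically matched expression pairs.
    anchor / positive / negative: a human latent anchor and the avatar latents a user
    judged closer (positive) and farther (negative) in perceived expression.
    """
    geo = F.mse_loss(translator(z_human), z_avatar_geo)
    trip = F.triplet_margin_loss(translator(anchor), positive, negative, margin=margin)
    return geo + w_triplet * trip


if __name__ == "__main__":
    x = torch.randn(8, 3 * 5000)                 # batch of flattened human expressions (assumed size)
    human_vae = ExpressionVAE(in_dim=x.shape[1])
    recon, mu, logvar = human_vae(x)
    print("VAE loss:", vae_loss(recon, x, mu, logvar).item())
```

In this reading, the geometric term anchors the translation on expression pairs that match in shape, while the triplet term only encodes relative judgments ("avatar expression A looks closer to this human expression than B"), which is what lets non-expert users supply perceptual supervision without 3D modeling.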