Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/146194
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wu, Rongliang | en_US
dc.contributor.author | Lu, Shijian | en_US
dc.date.accessioned | 2021-02-01T06:04:51Z | -
dc.date.available | 2021-02-01T06:04:51Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Wu, R., & Lu, S. (2020). LEED : label-free expression editing via disentanglement. Proceedings of the European Conference on Computer Vision, 12357 LNCS, 781-798. doi:10.1007/978-3-030-58610-2_46 | en_US
dc.identifier.isbn | 9783030586096 | -
dc.identifier.uri | https://hdl.handle.net/10356/146194 | -
dc.description.abstract | Recent studies on facial expression editing have made very promising progress. However, existing methods are constrained by the need for a large amount of expression labels, which are often expensive and time-consuming to collect. This paper presents an innovative label-free expression editing via disentanglement (LEED) framework that can edit the expression of both frontal and profile facial images without requiring any expression label. The idea is to disentangle the identity and expression of a facial image in the expression manifold, where the neutral face captures the identity attribute and the displacement between the neutral image and the expressive image captures the expression attribute. Two novel losses are designed for optimal expression disentanglement and consistent synthesis: a mutual expression information loss that aims to extract pure expression-related features, and a siamese loss that aims to enhance the expression similarity between the synthesized image and the reference image. Extensive experiments over two public facial expression datasets show that LEED achieves superior facial expression editing both qualitatively and quantitatively. | en_US
dc.description.sponsorship | Nanyang Technological University | en_US
dc.language.iso | en | en_US
dc.relation | #001531-00001 | en_US
dc.rights | © 2020 Springer Nature Switzerland AG. This is a post-peer-review, pre-copyedit version of a conference paper published in European Conference on Computer Vision (ECCV). The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-58610-2_46 | en_US
dc.subject | Engineering | en_US
dc.title | LEED : label-free expression editing via disentanglement | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.contributor.conference | 2020 European Conference on Computer Vision (ECCV) | en_US
dc.contributor.research | Data Science and Artificial Intelligence Research Centre | en_US
dc.identifier.doi | 10.1007/978-3-030-58610-2_46 | -
dc.description.version | Accepted version | en_US
dc.identifier.scopus | 2-s2.0-85093112998 | -
dc.identifier.volume | 12357 LNCS | en_US
dc.identifier.spage | 781 | en_US
dc.identifier.epage | 798 | en_US
dc.subject.keywords | Computer Vision | en_US
dc.subject.keywords | Image Synthesis | en_US
dc.description.acknowledgement | This work is supported by Data Science & Artificial Intelligence Research Centre, NTU Singapore. | en_US
item.grantfulltext | embargo_20211014 | -
item.fulltext | With Fulltext | -
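The abstract's central idea — that the neutral face carries identity while the displacement between the expressive and neutral embeddings carries expression — can be illustrated with a toy sketch. The `embed` function below is a hypothetical stand-in, not the paper's actual encoder, and the additive identity/expression model is an assumption made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(face):
    # Placeholder embedding: assumes the manifold coordinate is simply
    # identity component + expression displacement (toy model only).
    return face["identity"] + face["expression"]

# Two identities; person A smiles, person B is neutral.
id_a, id_b = rng.normal(size=8), rng.normal(size=8)
smile = rng.normal(size=8)

a_neutral = {"identity": id_a, "expression": np.zeros(8)}
a_smiling = {"identity": id_a, "expression": smile}
b_neutral = {"identity": id_b, "expression": np.zeros(8)}

# Disentangled expression attribute = displacement from the neutral face.
expr = embed(a_smiling) - embed(a_neutral)

# Label-free editing: transfer A's expression onto B's neutral face
# without any expression label.
b_edited = embed(b_neutral) + expr
assert np.allclose(b_edited, id_b + smile)
```

In this toy model the displacement recovers the expression exactly; the paper's contribution is making an analogous decomposition work for real face images via learned disentanglement and the two proposed losses.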
Appears in Collections: SCSE Conference Papers
Files in This Item:
File | Description | Size | Format
1742.pdf | Under embargo until Oct 14, 2021 | 1.1 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.