Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/180238
Full metadata record
DC Field | Value | Language
dc.contributor.author | Zhou, Shangchen | en_US
dc.contributor.author | Chan, Kelvin C. K. | en_US
dc.contributor.author | Li, Chongyi | en_US
dc.contributor.author | Loy, Chen Change | en_US
dc.date.accessioned | 2024-09-26T04:05:03Z | -
dc.date.available | 2024-09-26T04:05:03Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Zhou, S., Chan, K. C. K., Li, C. & Loy, C. C. (2022). Towards robust blind face restoration with codebook lookup transformer. 36th Conference on Neural Information Processing Systems (NeurIPS 2022), 2022. | en_US
dc.identifier.isbn | 9781713871088 | -
dc.identifier.uri | https://hdl.handle.net/10356/180238 | -
dc.description.abstract | Blind face restoration is a highly ill-posed problem that often requires auxiliary guidance to 1) improve the mapping from degraded inputs to desired outputs, or 2) complement high-quality details lost in the inputs. In this paper, we demonstrate that a learned discrete codebook prior in a small proxy space largely reduces the uncertainty and ambiguity of the restoration mapping by casting blind face restoration as a code prediction task, while providing rich visual atoms for generating high-quality faces. Under this paradigm, we propose a Transformer-based prediction network, named CodeFormer, to model the global composition and context of the low-quality faces for code prediction, enabling the discovery of natural faces that closely approximate the target faces even when the inputs are severely degraded. To enhance the adaptiveness to different degradations, we also propose a controllable feature transformation module that allows a flexible trade-off between fidelity and quality. Thanks to the expressive codebook prior and global modeling, CodeFormer outperforms the state of the art in both quality and fidelity, showing superior robustness to degradation. Extensive experimental results on synthetic and real-world datasets verify the effectiveness of our method. | en_US
dc.description.sponsorship | Nanyang Technological University | en_US
dc.language.iso | en | en_US
dc.relation | IAF-ICP | en_US
dc.relation | NTU-NAP | en_US
dc.relation.uri | 10.21979/N9/X3IBKH | en_US
dc.rights | © 2022 The Author(s). Published by NeurIPS. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at https://papers.nips.cc/paper_files/paper/2022/file/c573258c38d0a3919d8c1364053c45df-Paper-Conference.pdf. | en_US
dc.subject | Computer and Information Science | en_US
dc.title | Towards robust blind face restoration with codebook lookup transformer | en_US
dc.type | Conference Paper | en
dc.contributor.school | College of Computing and Data Science | en_US
dc.contributor.conference | 36th Conference on Neural Information Processing Systems (NeurIPS 2022) | en_US
dc.contributor.research | S-Lab | en_US
dc.description.version | Published version | en_US
dc.identifier.url | https://papers.nips.cc/paper_files/paper/2022 | -
dc.identifier.volume | 2022 | en_US
dc.subject.keywords | Blind face restoration | en_US
dc.subject.keywords | Codebook | en_US
dc.citation.conferencelocation | New Orleans, Louisiana, USA | en_US
dc.description.acknowledgement | This study is supported under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s). It is also partially supported by the NTU NAP grant. | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
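The abstract above casts restoration as prediction of discrete codes from a learned codebook of "visual atoms". The snippet below is a minimal, illustrative sketch of that lookup step only (nearest-neighbor vector quantization); the codebook size, feature dimensions, and random data are hypothetical toy values, not CodeFormer's actual implementation.

```python
import numpy as np

# Toy sketch of discrete codebook lookup (vector quantization).
# Sizes are hypothetical: 16 codebook entries ("visual atoms") of dim 4.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))

def lookup(features, codebook):
    """Map each feature vector to the index and value of its nearest
    codebook entry under L2 distance."""
    # Pairwise distances: (n_features, n_codes)
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    codes = d.argmin(axis=1)           # discrete code indices
    return codes, codebook[codes]      # indices + quantized features

# E.g. features extracted from a degraded face image (random here).
features = rng.normal(size=(3, 4))
codes, quantized = lookup(features, codebook)
print(codes.shape, quantized.shape)    # (3,) (3, 4)
```

In CodeFormer, a Transformer predicts these code indices directly from the degraded input rather than by nearest-neighbor search, which is what makes the mapping robust under severe degradation.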
Appears in Collections: CCDS Conference Papers

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.