Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/178460
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Chaofeng | en_US |
dc.contributor.author | Zhou, Shangchen | en_US |
dc.contributor.author | Liao, Liang | en_US |
dc.contributor.author | Wu, Haoning | en_US |
dc.contributor.author | Sun, Wenxiu | en_US |
dc.contributor.author | Yan, Qiong | en_US |
dc.contributor.author | Lin, Weisi | en_US |
dc.date.accessioned | 2024-06-21T02:04:13Z | - |
dc.date.available | 2024-06-21T02:04:13Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | Chen, C., Zhou, S., Liao, L., Wu, H., Sun, W., Yan, Q. & Lin, W. (2024). Iterative token evaluation and refinement for real-world super-resolution. 38th AAAI Conference on Artificial Intelligence (2024), 38, 1010-1018. https://dx.doi.org/10.1609/aaai.v38i2.27861 | en_US |
dc.identifier.uri | https://hdl.handle.net/10356/178460 | - |
dc.description.abstract | Real-world image super-resolution (RWSR) is a longstanding problem because low-quality (LQ) images often have complex and unidentified degradations. Existing methods have their own drawbacks: Generative Adversarial Networks (GANs) are difficult to train, while continuous diffusion models require numerous inference steps. In this paper, we propose an Iterative Token Evaluation and Refinement (ITER) framework for RWSR, which utilizes a discrete diffusion model operating in the discrete token representation space, i.e., indexes of features extracted from a VQGAN codebook pre-trained with high-quality (HQ) images. We show that ITER is easier to train than GANs and more efficient than continuous diffusion models. Specifically, we divide RWSR into two sub-tasks, i.e., distortion removal and texture generation. Distortion removal involves simple HQ token prediction from LQ images, while texture generation uses a discrete diffusion model to iteratively refine the distortion-removal output with a token refinement network. In particular, we propose to include a token evaluation network in the discrete diffusion process. It learns to evaluate which tokens are good restorations and helps to improve the iterative refinement results. Moreover, the evaluation network can first check the quality of the distortion-removal output and then adaptively select the total number of refinement steps needed, thereby maintaining a good balance between distortion removal and texture generation. Extensive experimental results show that ITER is easy to train and performs well within just 8 iterative steps. | en_US |
dc.language.iso | en | en_US |
dc.rights | © 2024 Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. | en_US |
dc.subject | Computer and Information Science | en_US |
dc.title | Iterative token evaluation and refinement for real-world super-resolution | en_US |
dc.type | Conference Paper | en |
dc.contributor.school | College of Computing and Data Science | en_US |
dc.contributor.school | School of Computer Science and Engineering | en_US |
dc.contributor.conference | 38th AAAI Conference on Artificial Intelligence (2024) | en_US |
dc.contributor.research | S-Lab | en_US |
dc.identifier.doi | 10.1609/aaai.v38i2.27861 | - |
dc.identifier.scopus | 2-s2.0-85189536364 | - |
dc.identifier.url | https://ojs.aaai.org/index.php/AAAI/article/view/27861 | - |
dc.identifier.volume | 38 | en_US |
dc.identifier.spage | 1010 | en_US |
dc.identifier.epage | 1018 | en_US |
dc.subject.keywords | Computational photography | en_US |
dc.subject.keywords | Image & video synthesis | en_US |
dc.citation.conferencelocation | Vancouver, Canada | en_US |
dc.description.acknowledgement | This study is supported under the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). | en_US |
item.grantfulltext | none | - |
item.fulltext | No Fulltext | - |
Appears in Collections: | CCDS Conference Papers |
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.
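To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch of an ITER-style loop in PyTorch. It is not the authors' implementation: the stand-in modules (`TokenPredictor`, `TokenEvaluator`, `TokenRefiner`), the codebook size, token-grid size, confidence threshold, and masking schedule are all illustrative assumptions. Only the overall flow follows the abstract: predict HQ tokens from the LQ input, score them with an evaluation network, and iteratively re-predict low-confidence tokens for at most 8 steps, with an adaptive early exit when the tokens already look good.

```python
# Hypothetical, minimal sketch of an ITER-style pipeline (not the authors' code).
# Module architectures, sizes, and thresholds below are illustrative assumptions.
import torch
import torch.nn as nn

CODEBOOK_SIZE = 1024   # assumed size of the pre-trained VQGAN codebook
NUM_TOKENS = 16 * 16   # assumed 16x16 token grid for the restored image
MAX_STEPS = 8          # the paper reports good results within 8 iterative steps


class TokenPredictor(nn.Module):
    """Stand-in for distortion removal: LQ image -> logits over HQ codebook tokens."""
    def __init__(self):
        super().__init__()
        # 64x64 LQ input -> 16x16 token grid of codebook logits (stride-4 conv as a stand-in)
        self.net = nn.Conv2d(3, CODEBOOK_SIZE, kernel_size=4, stride=4)

    def forward(self, lq):                              # lq: (B, 3, 64, 64)
        return self.net(lq).flatten(2).transpose(1, 2)  # (B, NUM_TOKENS, CODEBOOK_SIZE)


class TokenEvaluator(nn.Module):
    """Stand-in for the evaluation network: scores how good each token is (0..1)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(CODEBOOK_SIZE, 64)
        self.score = nn.Linear(64, 1)

    def forward(self, tokens):                          # tokens: (B, NUM_TOKENS), long
        return torch.sigmoid(self.score(self.embed(tokens))).squeeze(-1)  # (B, NUM_TOKENS)


class TokenRefiner(nn.Module):
    """Stand-in for the refinement network: re-predicts logits for masked tokens."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(CODEBOOK_SIZE + 1, 64)  # extra index acts as [MASK]
        self.out = nn.Linear(64, CODEBOOK_SIZE)

    def forward(self, tokens):
        return self.out(self.embed(tokens))             # (B, NUM_TOKENS, CODEBOOK_SIZE)


@torch.no_grad()
def iter_restore(lq, predictor, evaluator, refiner, good_thresh=0.9):
    """One possible reading of the iterative token evaluation/refinement loop."""
    tokens = predictor(lq).argmax(dim=-1)               # distortion-removal tokens
    for step in range(MAX_STEPS):
        scores = evaluator(tokens)                      # per-token quality scores
        if scores.min() > good_thresh:                  # adaptive early exit:
            break                                       # output already looks good
        keep = scores > scores.quantile(step / MAX_STEPS)  # keep more tokens each step
        masked = tokens.clone()
        masked[~keep] = CODEBOOK_SIZE                   # replace doubtful tokens with [MASK]
        new_tokens = refiner(masked).argmax(dim=-1)     # re-predict the masked positions
        tokens = torch.where(keep, tokens, new_tokens)
    return tokens  # indexes into the VQGAN codebook


if __name__ == "__main__":
    lq = torch.rand(1, 3, 64, 64)
    hq_tokens = iter_restore(lq, TokenPredictor(), TokenEvaluator(), TokenRefiner())
    print(hq_tokens.shape)  # torch.Size([1, 256])
```

Decoding the final token indexes back to pixels would go through the pre-trained VQGAN decoder, which is omitted here; the masking schedule driven by evaluator scores is one plausible choice and not necessarily the schedule used in the paper.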