Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/160516
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Xu, Rui | en_US |
dc.contributor.author | Guo, Minghao | en_US |
dc.contributor.author | Wang, Jiaqi | en_US |
dc.contributor.author | Li, Xiaoxiao | en_US |
dc.contributor.author | Zhou, Bolei | en_US |
dc.contributor.author | Loy, Chen Change | en_US |
dc.date.accessioned | 2022-07-26T04:23:56Z | - |
dc.date.available | 2022-07-26T04:23:56Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Xu, R., Guo, M., Wang, J., Li, X., Zhou, B. & Loy, C. C. (2021). Texture memory-augmented deep patch-based image inpainting. IEEE Transactions On Image Processing, 30, 9112-9124. https://dx.doi.org/10.1109/TIP.2021.3122930 | en_US |
dc.identifier.issn | 1057-7149 | en_US |
dc.identifier.uri | https://hdl.handle.net/10356/160516 | - |
dc.description.abstract | Patch-based methods and deep networks have been employed to tackle the image inpainting problem, each with its own strengths and weaknesses. Patch-based methods can restore a missing region with high-quality texture by searching for nearest-neighbor patches in the unmasked regions. However, they introduce problematic content when recovering large missing regions. Deep networks, on the other hand, show promising results in completing large regions; nonetheless, the results often lack faithful and sharp details that resemble the surrounding area. Bringing together the best of both paradigms, we propose a new deep inpainting framework in which texture generation is guided by a texture memory of patch samples extracted from unmasked regions. The framework has a novel design that allows texture memory retrieval to be trained end-to-end with the deep inpainting network. In addition, we introduce a patch distribution loss to encourage high-quality patch synthesis. The proposed method shows superior performance both qualitatively and quantitatively on three challenging image benchmarks, i.e., the Places, CelebA-HQ, and Paris Street-View datasets (code will be made publicly available at https://github.com/open-mmlab/mmediting). | en_US |
dc.description.sponsorship | National Research Foundation (NRF) | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartof | IEEE Transactions on Image Processing | en_US |
dc.rights | © 2021 IEEE. All rights reserved. | en_US |
dc.subject | Engineering::Computer science and engineering | en_US |
dc.title | Texture memory-augmented deep patch-based image inpainting | en_US |
dc.type | Journal Article | en |
dc.contributor.school | School of Computer Science and Engineering | en_US |
dc.identifier.doi | 10.1109/TIP.2021.3122930 | - |
dc.identifier.pmid | 34723802 | - |
dc.identifier.scopus | 2-s2.0-85118677035 | - |
dc.identifier.volume | 30 | en_US |
dc.identifier.spage | 9112 | en_US |
dc.identifier.epage | 9124 | en_US |
dc.subject.keywords | Image Reconstruction | en_US |
dc.subject.keywords | Image Restoration | en_US |
dc.description.acknowledgement | This work was supported in part by the RIE2020 Industry Alignment Fund-Industry Collaboration Projects (IAF-ICP) Funding Initiative, in part by the Research Grants Council (RGC) of Hong Kong under ECS Grant 24206219, in part by the General Research Fund (GRF) under Grant 14204521, in part by The Chinese University of Hong Kong (CUHK) Faculty of Engineering (FoE) Research Sustainability of Major RGC Funding Schemes (RSFS) Grant, and in part by SenseTime Collaborative Grant. | en_US |
item.grantfulltext | none | - |
item.fulltext | No Fulltext | - |
Appears in Collections: SCSE Journal Articles
Citation metrics:
- SCOPUS™ Citations: 20 (updated Nov 30, 2023)
- Web of Science™ Citations: 20 (updated Oct 27, 2023)
- Page view(s): 47 (updated Nov 30, 2023)