Full metadata record
DC Field: Value (Language)
dc.contributor.author: Xu, Rui (en_US)
dc.contributor.author: Guo, Minghao (en_US)
dc.contributor.author: Wang, Jiaqi (en_US)
dc.contributor.author: Li, Xiaoxiao (en_US)
dc.contributor.author: Zhou, Bolei (en_US)
dc.contributor.author: Loy, Chen Change (en_US)
dc.identifier.citation: Xu, R., Guo, M., Wang, J., Li, X., Zhou, B. & Loy, C. C. (2021). Texture memory-augmented deep patch-based image inpainting. IEEE Transactions on Image Processing, 30, 9112-9124.
dc.description.abstract: Patch-based methods and deep networks have been employed to tackle the image inpainting problem, each with its own strengths and weaknesses. Patch-based methods can restore a missing region with high-quality texture by searching for nearest-neighbor patches in the unmasked regions. However, these methods introduce problematic content when recovering large missing regions. Deep networks, on the other hand, show promising results in completing large regions. Nonetheless, their results often lack the faithful, sharp details that resemble the surrounding area. By bringing together the best of both paradigms, we propose a new deep inpainting framework in which texture generation is guided by a texture memory of patch samples extracted from unmasked regions. The framework has a novel design that allows texture memory retrieval to be trained end-to-end with the deep inpainting network. In addition, we introduce a patch distribution loss to encourage high-quality patch synthesis. The proposed method shows superior performance both qualitatively and quantitatively on three challenging image benchmarks, i.e., the Places, CelebA-HQ, and Paris Street-View datasets. (Code will be made publicly available in
dc.description.sponsorship: National Research Foundation (NRF) (en_US)
dc.relation.ispartof: IEEE Transactions on Image Processing (en_US)
dc.rights: © 2021 IEEE. All rights reserved. (en_US)
dc.subject: Engineering::Computer science and engineering (en_US)
dc.title: Texture memory-augmented deep patch-based image inpainting (en_US)
dc.type: Journal Article (en)
dc.contributor.school: School of Computer Science and Engineering (en_US)
dc.subject.keywords: Image Reconstruction (en_US)
dc.subject.keywords: Image Restoration (en_US)
dc.description.acknowledgement: This work was supported in part by the RIE2020 Industry Alignment Fund-Industry Collaboration Projects (IAF-ICP) Funding Initiative, in part by the Research Grants Council (RGC) of Hong Kong under ECS Grant 24206219, in part by the General Research Fund (GRF) under Grant 14204521, in part by The Chinese University of Hong Kong (CUHK) Faculty of Engineering (FoE) Research Sustainability of Major RGC Funding Schemes (RSFS) Grant, and in part by SenseTime Collaborative Grant. (en_US)
item.fulltext: No Fulltext
Appears in Collections: SCSE Journal Articles

Citations: 20 (updated on Nov 30, 2023)
Web of Science™ citations: 20 (updated on Oct 27, 2023)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.