Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/151225
Title: The spatially-correlative loss for various image translation tasks
Authors: Zheng, Chuanxia
Cham, Tat-Jen
Cai, Jianfei
Keywords: Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
Issue Date: 2021
Source: Zheng, C., Cham, T. & Cai, J. (2021). The spatially-correlative loss for various image translation tasks. IEEE Conference on Computer Vision and Pattern Recognition.
Abstract: We propose a novel spatially-correlative loss that is simple, efficient, and yet effective for preserving scene structure consistency while supporting large appearance changes during unpaired image-to-image (I2I) translation. Previous methods attempt this by using pixel-level cycle-consistency or feature-level matching losses, but the domain-specific nature of these losses hinders translation across large domain gaps. To address this, we exploit the spatial patterns of self-similarity as a means of defining scene structure. Our spatially-correlative loss is geared towards capturing only the spatial relationships within an image, rather than domain appearance. We also introduce a new self-supervised learning method to explicitly learn spatially-correlative maps for each specific translation task. We show distinct improvement over baseline models in all three modes of unpaired I2I translation: single-modal, multi-modal, and even single-image translation. This new loss can easily be integrated into existing network architectures, and thus allows wide applicability.
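To make the core idea concrete: the loss compares self-similarity patterns of feature maps, not the features themselves, so two images with very different appearances can still agree on structure. The sketch below is a minimal, illustrative PyTorch rendition of such a spatially-correlative loss, under assumed choices (a 3x3 local neighbourhood, cosine similarity, and an L1 comparison); the function names are hypothetical and this is not the authors' released implementation.

import torch
import torch.nn.functional as F

def self_similarity_map(feat, patch_size=3):
    # feat: (B, C, H, W) feature map from some encoder (e.g. a pretrained backbone layer).
    # Gather each location's local neighbourhood so it can be compared against its centre.
    B, C, H, W = feat.shape
    patches = F.unfold(feat, kernel_size=patch_size, padding=patch_size // 2)  # (B, C*k*k, H*W)
    patches = patches.view(B, C, patch_size * patch_size, H * W)
    center = feat.view(B, C, 1, H * W)
    # Cosine similarity of each location to its neighbours: a per-pixel "structure" code
    # that discards appearance (feature magnitude) and keeps spatial relationships.
    return F.cosine_similarity(center, patches, dim=1)  # (B, k*k, H*W)

def spatially_correlative_loss(feat_src, feat_trans):
    # Penalize differences in self-similarity structure between the source image's
    # features and the translated image's features, not differences in appearance.
    return F.l1_loss(self_similarity_map(feat_src), self_similarity_map(feat_trans))

# Toy usage with random tensors standing in for encoder features of the
# input image and its translation.
f_src = torch.randn(2, 64, 32, 32)
f_trans = torch.randn(2, 64, 32, 32)
print(spatially_correlative_loss(f_src, f_trans).item())

Because the structure code depends only on relative similarities within a neighbourhood, a loss of this form can be dropped into an existing I2I training loop alongside the usual adversarial objective.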
URI: https://hdl.handle.net/10356/151225
Rights: © 2021 Institute of Electrical and Electronics Engineers (IEEE). All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: SCSE Conference Papers
