Please use this identifier to cite or link to this item:
Title: FakeLocator: robust localization of GAN-based face manipulations
Authors: Huang, Yihao
Juefei-Xu, Felix
Guo, Qing
Liu, Yang
Pu, Geguang
Keywords: Engineering::Computer science and engineering
Issue Date: 2022
Source: Huang, Y., Xu, F. J., Guo, Q., Liu, Y. & Pu, G. (2022). FakeLocator: robust localization of GAN-based face manipulations. IEEE Transactions On Information Forensics and Security, 17, 2657-2672.
Project: AISG2-RP-2020-019 
Journal: IEEE Transactions on Information Forensics and Security 
Abstract: Full face synthesis and partial face manipulation by virtue of generative adversarial networks (GANs) and their variants have raised wide public concern. In the multimedia forensics area, detecting and ultimately locating image forgery has become an imperative task. In this work, we investigate the architecture of existing GAN-based face manipulation methods and observe that the imperfection of the upsampling methods therein can serve as an important asset for GAN-synthesized fake image detection and forgery localization. Based on this observation, we propose a novel approach, termed FakeLocator, to obtain high localization accuracy, at full resolution, on manipulated facial images. To the best of our knowledge, this is the first attempt to solve the GAN-based fake localization problem with a gray-scale fakeness map that preserves more information about fake regions. To improve the universality of FakeLocator across multifarious facial attributes, we introduce an attention mechanism to guide the training of the model. To improve its universality across different DeepFake methods, we propose partial data augmentation and single sample clustering on the training images. Experimental results on the popular FaceForensics++ and DFFD datasets and seven different state-of-the-art GAN-based face generation methods show the effectiveness of our method. Compared with the baselines, our method performs better on various metrics. Moreover, the proposed method is robust against various real-world facial image degradations such as JPEG compression, low resolution, noise, and blur.
ISSN: 1556-6013
DOI: 10.1109/TIFS.2022.3141262
Rights: © 2022 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:SCSE Journal Articles

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.