Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/180384
Title: SPFL: a self-purified federated learning method against poisoning attacks
Authors: Liu, Zizhen; He, Weiyang; Chang, Chip Hong; Ye, Jing; Li, Huawei; Li, Xiaowei
Keywords: Computer and Information Science
Issue Date: 2024
Source: Liu, Z., He, W., Chang, C. H., Ye, J., Li, H. & Li, X. (2024). SPFL: a self-purified federated learning method against poisoning attacks. IEEE Transactions on Information Forensics and Security, 19, 6604-6619. https://dx.doi.org/10.1109/TIFS.2024.3420135
Project: NRF2018NCR-NCR009-0001; MOE-T2EP20121-0008
Journal: IEEE Transactions on Information Forensics and Security
Abstract: While federated learning (FL) is attractive for pooling privacy-preserving distributed training data, the credibility of participating clients and non-inspectable data pose new security threats, of which poisoning attacks are particularly rampant and hard to defend against without compromising privacy, performance, or other desirable properties. In this paper, we propose a self-purified FL (SPFL) method that enables benign clients to exploit trusted historical features of the locally purified model to supervise the training of the aggregated model in each iteration. The purification is performed by an attention-guided self-knowledge distillation in which the teacher and student models are optimized locally for task loss, distillation loss, and attention loss simultaneously. SPFL imposes no restriction on the communication protocol and aggregator at the server. It can work in tandem with any existing secure aggregation algorithms and protocols for augmented security and privacy guarantees. We experimentally demonstrate that SPFL outperforms state-of-the-art FL defenses against poisoning attacks. The attack success rate of the SPFL-trained model remains the lowest among all compared defense methods, even if the poisoning attack is launched in every iteration with all but one client in the system being malicious. Meanwhile, it improves model quality on normal inputs compared to FedAvg, both under attack and in the absence of an attack.
URI: https://hdl.handle.net/10356/180384
ISSN: 1556-6013
DOI: 10.1109/TIFS.2024.3420135
Schools: School of Electrical and Electronic Engineering
Rights: © 2024 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1109/TIFS.2024.3420135.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: EEE Journal Articles
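The abstract describes a three-term local objective for purification: each benign client trains the aggregated (student) model against its trusted local (teacher) model using a task loss, a distillation loss, and an attention loss. Below is a minimal PyTorch sketch of how such a combined loss could be assembled. The function names (`spfl_local_loss`, `attention_map`), the temperature `T`, the weights `alpha` and `beta`, and the use of the standard attention-transfer map (squared feature activations averaged over channels) are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of SPFL's three-term local objective, assuming a
# standard softened-logit KD loss and attention-transfer-style maps.
import torch
import torch.nn.functional as F

def attention_map(feat):
    # Collapse a conv feature map (N, C, H, W) into an L2-normalized
    # spatial attention map (N, H*W).
    a = feat.pow(2).mean(dim=1).flatten(1)
    return F.normalize(a, p=2, dim=1)

def spfl_local_loss(student_out, teacher_out, student_feats, teacher_feats,
                    labels, T=4.0, alpha=0.5, beta=1e3):
    # Task loss: ordinary cross-entropy on the client's own data.
    task = F.cross_entropy(student_out, labels)
    # Distillation loss: KL divergence between temperature-softened logits.
    distill = F.kl_div(F.log_softmax(student_out / T, dim=1),
                       F.softmax(teacher_out / T, dim=1),
                       reduction="batchmean") * (T * T)
    # Attention loss: match normalized attention maps layer by layer,
    # steering the aggregated model toward the trusted local features.
    attn = sum(F.mse_loss(attention_map(s), attention_map(t))
               for s, t in zip(student_feats, teacher_feats))
    return task + alpha * distill + beta * attn
```

Because the supervision signal comes only from each client's own trusted history, a scheme of this shape requires no server-side inspection of updates, which is consistent with the abstract's claim that SPFL composes with any secure aggregation protocol.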
Files in This Item:
File | Description | Size | Format
---|---|---|---
SPFL_TIFS.pdf | | 7.04 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.