Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/181061
Title: Joint client-and-sample selection for federated learning via bi-level optimization
Authors: Li, Anran
Wang, Guangjing
Hu, Ming
Sun, Jianfei
Zhang, Lan
Tuan, Luu Anh
Yu, Han
Keywords: Computer and Information Science
Issue Date: 2024
Source: Li, A., Wang, G., Hu, M., Sun, J., Zhang, L., Tuan, L. A. & Yu, H. (2024). Joint client-and-sample selection for federated learning via bi-level optimization. IEEE Transactions On Mobile Computing, 23(12), 15196-15209. https://dx.doi.org/10.1109/TMC.2024.3455331
Project: 020724-00001 
I2301E0026 
AISG2-RP-2020-019 
Journal: IEEE Transactions on Mobile Computing
Abstract: Federated Learning (FL) enables massive numbers of local data owners to collaboratively train a deep learning model without disclosing their private data. The importance of local data samples from various data owners to FL models varies widely. This is exacerbated by the presence of noisy data, which exhibit large losses similar to important (hard) samples. There is currently no FL approach that can effectively distinguish hard samples (which are beneficial) from noisy samples (which are harmful). To bridge this gap, we propose the joint Federated Meta-Weighting based Client and Sample Selection (FedMW-CSS) approach to simultaneously mitigate label noise and perform hard-sample selection. It is a bi-level optimization approach for FL client-and-sample selection and global model construction, achieving hard-sample-aware, noise-robust learning in a privacy-preserving manner. It performs meta-learning based online approximation to iteratively update global FL models, select the most positively influential samples, and handle training data noise. To exploit both instance-level and class-level information for better performance, FedMW-CSS efficiently learns a class-level weight by manipulating gradients at the class level, e.g., it performs a gradient descent step on class-level weights that relies only on intermediate gradients. Theoretically, we analyze the privacy guarantees and convergence of FedMW-CSS. Extensive experimental comparisons against eight state-of-the-art baselines on six real-world datasets in the presence of data noise and heterogeneity show that FedMW-CSS achieves up to 28.5% higher test accuracy, while reducing communication and computation costs by at least 49.3% and 1.2%, respectively.
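The meta-learning based online approximation described in the abstract can be illustrated with a minimal sketch. This is NOT the paper's FedMW-CSS algorithm (the full text is unavailable here); it is a generic single-machine illustration of the underlying idea, assuming the common gradient-alignment formulation of meta-reweighting: each training sample is weighted by how well its gradient aligns with the gradient of a small trusted meta set, so noisy samples (whose gradients point the wrong way) receive zero weight. All function names and the logistic-regression setting are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_sample_grads(theta, X, y):
    # Per-sample gradient of the logistic loss: (sigmoid(x . theta) - y) * x
    p = sigmoid(X @ theta)
    return (p - y)[:, None] * X

def reweight_step(theta, X_tr, y_tr, X_meta, y_meta, lr=0.5):
    """One meta-weighted update (illustrative, not FedMW-CSS itself).

    Samples whose gradients align with the clean meta-gradient are kept;
    misaligned (likely noisy) samples are clipped to zero weight.
    """
    g_tr = per_sample_grads(theta, X_tr, y_tr)                     # (n, d)
    g_meta = per_sample_grads(theta, X_meta, y_meta).mean(axis=0)  # (d,)
    w = np.maximum(0.0, g_tr @ g_meta)        # alignment-based sample weights
    total = w.sum()
    w = w / total if total > 0 else np.full(len(w), 1.0 / len(w))
    theta_new = theta - lr * (w[:, None] * g_tr).sum(axis=0)
    return theta_new, w
```

In FedMW-CSS proper, analogous weight updates are computed at the class level from intermediate gradients and combined with client selection across federated participants; this sketch only conveys why a meta-gradient signal can separate hard samples (aligned, large weight) from noisy ones (misaligned, zero weight).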
URI: https://hdl.handle.net/10356/181061
ISSN: 1536-1233
DOI: 10.1109/TMC.2024.3455331
Schools: School of Computer Science and Engineering 
Rights: © 2024 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:SCSE Journal Articles
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.