Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/173926
Full metadata record
DC Field | Value | Language
dc.contributor.author | Qiu, Hongyu | en_US
dc.contributor.author | Wang, Yongwei | en_US
dc.contributor.author | Xu, Yonghui | en_US
dc.contributor.author | Cui, Lizhen | en_US
dc.contributor.author | Shen, Zhiqi | en_US
dc.date.accessioned | 2024-03-07T02:11:59Z | -
dc.date.available | 2024-03-07T02:11:59Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Qiu, H., Wang, Y., Xu, Y., Cui, L. & Shen, Z. (2023). FedCIO: efficient exact federated unlearning with clustering, isolation, and one-shot aggregation. 2023 IEEE International Conference on Big Data (BigData), 5559-5568. https://dx.doi.org/10.1109/BigData59044.2023.10386788 | en_US
dc.identifier.isbn | 9798350324457 | -
dc.identifier.uri | https://hdl.handle.net/10356/173926 | -
dc.description.abstract | Data are invaluable in machine learning (ML), yet they raise significant privacy concerns. In the real world, data are often distributed across isolated silos, challenging conventional ML methods that centralize data. Federated learning (FL) offers a privacy-preserving solution that enables learning without direct data transfer. Meanwhile, the 'right to be forgotten' has sparked privacy-preserving methods from another viewpoint, namely machine unlearning, which enables data owners to erase specific data contributions from ML models. However, the invisibility of data in FL scenarios complicates effective local data removal, necessitating unlearning algorithms tailored to FL. Existing federated unlearning methods fall under approximate unlearning, leaving residual memorization of the target data and consequently diminishing user trust. To bridge this gap, we propose FedCIO, a novel framework for exact federated unlearning, designed to efficiently manage precise data removal requests in FL scenarios. Specifically, the framework involves client clustering, isolation among clusters, and one-shot aggregation of cluster models. This framework facilitates efficient unlearning by retraining only a relevant subset of models rather than retraining from scratch. To enhance the capability to handle Non-Independent and Identically Distributed (Non-IID) data, we further introduce an advanced spectral clustering implementation based on model similarity for better cluster partitioning. Comprehensive evaluation across common FL datasets with varied distributions demonstrates the superior performance of our proposed framework. | en_US
dc.description.sponsorship | Nanyang Technological University | en_US
dc.language.iso | en | en_US
dc.rights | © 2023 IEEE. All rights reserved. | en_US
dc.subject | Computer and Information Science | en_US
dc.title | FedCIO: efficient exact federated unlearning with clustering, isolation, and one-shot aggregation | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.contributor.conference | 2023 IEEE International Conference on Big Data (BigData) | en_US
dc.identifier.doi | 10.1109/BigData59044.2023.10386788 | -
dc.identifier.scopus | 2-s2.0-85184978680 | -
dc.identifier.spage | 5559 | en_US
dc.identifier.epage | 5568 | en_US
dc.subject.keywords | Federated learning | en_US
dc.subject.keywords | Machine unlearning | en_US
dc.citation.conferencelocation | Sorrento, Italy | en_US
dc.description.acknowledgement | This research is supported, in part, by the Joint NTU-WeBank Research Centre on FinTech, Nanyang Technological University, Singapore. This work is also supported, in part, by the Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, China; the National Key R&D Program of China (No. 2021YFF0900800); and the Shandong Provincial Key Research and Development Program (Major Scientific and Technological Innovation Project) (No. 2021CXGC010108). | en_US
item.grantfulltext | none | -
item.fulltext | No Fulltext | -
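
The abstract above outlines the FedCIO pipeline: cluster clients, train one model per cluster in isolation, aggregate the cluster models one-shot, and answer a data removal request by retraining only the affected cluster. The Python sketch below illustrates that idea under stated assumptions; it is not the authors' implementation. The functions train_cluster_model and one_shot_aggregate are illustrative placeholders, and spectral clustering over cosine similarity of client updates (via scikit-learn) stands in for the paper's model-similarity clustering.

# Minimal sketch of a FedCIO-style exact unlearning loop (illustrative, not the paper's code).
import numpy as np
from sklearn.cluster import SpectralClustering


def cluster_clients(client_updates, n_clusters):
    """Spectral clustering on pairwise cosine similarity of clients' local updates."""
    vecs = np.stack([u / (np.linalg.norm(u) + 1e-12) for u in client_updates])
    affinity = np.clip(vecs @ vecs.T, 0.0, None)   # non-negative similarity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed",
                              random_state=0).fit_predict(affinity)


def train_cluster_model(client_ids, rng):
    """Placeholder: train a model only on the listed clients' data (isolation)."""
    return rng.normal(size=8)                      # stands in for trained model weights


def one_shot_aggregate(cluster_models):
    """Placeholder one-shot aggregation: combine the per-cluster models once."""
    return np.mean(np.stack(list(cluster_models.values())), axis=0)


def unlearn(client_id, labels, cluster_models, rng):
    """Exact unlearning: retrain only the cluster that contains the departing client."""
    c = labels[client_id]
    remaining = [i for i in range(len(labels)) if labels[i] == c and i != client_id]
    cluster_models[c] = train_cluster_model(remaining, rng)
    return one_shot_aggregate(cluster_models)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_clients, n_clusters = 12, 3
    client_updates = [rng.normal(size=64) for _ in range(n_clients)]

    labels = cluster_clients(client_updates, n_clusters)
    cluster_models = {c: train_cluster_model(
        [i for i in range(n_clients) if labels[i] == c], rng) for c in range(n_clusters)}
    global_model = one_shot_aggregate(cluster_models)

    # A removal request for client 5 touches only its own cluster; the other
    # cluster models are reused unchanged, which is what makes retraining cheap.
    global_model = unlearn(5, labels, cluster_models, rng)

In this sketch the unlearning cost scales with the size of one cluster rather than the whole federation, which is the efficiency argument made in the abstract.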
Appears in Collections: SCSE Conference Papers

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.