Title: Efficient dropout-resilient aggregation for privacy-preserving machine learning
Authors: Liu, Ziyao
Keywords: Engineering::Computer science and engineering
Issue Date: 2022
Source: Liu, Z., Guo, J., Lam, K. & Zhao, J. (2022). Efficient dropout-resilient aggregation for privacy-preserving machine learning. IEEE Transactions on Information Forensics and Security, 14(8), 3163592. https://dx.doi.org/10.1109/TIFS.2022.3163592
Journal: IEEE Transactions on Information Forensics and Security

Abstract:
Machine learning (ML) is widely recognized as an enabler of the global trend of digital transformation. With the increasing adoption of data-hungry machine learning algorithms, personal data privacy has emerged as a key concern that could hinder the success of digital transformation. Privacy-Preserving Machine Learning (PPML) has therefore received much attention from the machine learning community, from academic researchers to industry practitioners to government regulators. Organizations face a dilemma: on the one hand, they are encouraged to share data to enhance ML performance; on the other, doing so could breach data privacy regulations. Practical PPML typically allows multiple participants to train their ML models individually; the models are then aggregated into a global model in a privacy-preserving manner, e.g., based on multi-party computation or homomorphic encryption. Nevertheless, in most important applications of large-scale PPML, e.g., aggregating clients' gradients to update a global model in federated learning (such as consumer behavior modeling for mobile application services), some participants are inevitably resource-constrained mobile devices that may drop out of the PPML system because of their mobility. The resilience of privacy-preserving aggregation has therefore become an important problem, given its real-world application potential and impact.

In this paper, we propose a scalable privacy-preserving aggregation scheme that tolerates participant dropout at any time and is secure against both semi-honest and actively malicious adversaries under appropriate system parameters. By replacing communication-intensive building blocks with a seed-homomorphic pseudo-random generator, and by relying on the additive homomorphic property of the Shamir secret sharing scheme, our scheme outperforms state-of-the-art schemes by up to 6.37× in runtime and provides stronger dropout resilience. The simplicity of our scheme makes it attractive both for implementation and for further improvement.

URI: https://hdl.handle.net/10356/162985
ISSN: 1556-6013
DOI: 10.1109/TIFS.2022.3163592
Rights: © 2021 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
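The additive homomorphic property of Shamir secret sharing mentioned in the abstract can be illustrated with a toy sketch: if two clients secret-share their values at the same evaluation points, then adding the shares point-wise yields valid shares of the sum, so an aggregator can reconstruct only the aggregate, never the individual inputs. This is an illustrative demo, not the paper's implementation; the field modulus, threshold, and variable names are assumptions for the example.

```python
import random

P = 2**31 - 1  # prime field modulus (chosen for the demo only)

def share(secret, t, n):
    """Split `secret` into n Shamir shares with reconstruction threshold t."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

# Two clients' private updates (e.g., quantized gradient values):
u1, u2 = 1234, 5678
s1 = share(u1, t=2, n=3)
s2 = share(u2, t=2, n=3)

# Point-wise addition of shares produces shares of u1 + u2
# (the additive homomorphism), without revealing u1 or u2:
summed = [(x1, (y1 + y2) % P) for (x1, y1), (_, y2) in zip(s1, s2)]
assert reconstruct(summed[:2]) == (u1 + u2) % P
```

Because any t = 2 of the summed shares suffice to reconstruct the aggregate, the aggregation survives the dropout of the remaining shareholder, which is the intuition behind dropout resilience in secret-sharing-based aggregation.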
Appears in Collections: SCSE Journal Articles
Updated on Nov 30, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.