Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/173390
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Luqman, Alka | en_US |
dc.contributor.author | Chattopadhyay, Anupam | en_US |
dc.contributor.author | Lam Kwok-Yan | en_US |
dc.date.accessioned | 2024-02-02T05:12:44Z | - |
dc.date.available | 2024-02-02T05:12:44Z | - |
dc.date.issued | 2023 | - |
dc.identifier.citation | Luqman, A., Chattopadhyay, A. & Lam Kwok-Yan (2023). Membership inference vulnerabilities in peer-to-peer federated learning. 2023 Secure and Trustworthy Deep Learning Systems Workshop (SecTL '23), July 2023, 6-. https://dx.doi.org/10.1145/3591197.3593638 | en_US |
dc.identifier.isbn | 9798400701818 | - |
dc.identifier.uri | https://hdl.handle.net/10356/173390 | - |
dc.description.abstract | Federated learning is emerging as an efficient approach to exploiting the data silos that form due to regulations on data sharing and usage, thereby leveraging distributed resources to improve the learning of ML models. It is a fitting technology for cyber-physical systems in applications such as connected autonomous vehicles, smart farming, and IoT surveillance. By design, every participant in federated learning has access to the latest ML model. In such a scenario, it becomes all the more important to protect the model's knowledge and to keep the training data and its properties private. In this paper, we survey the literature on ML attacks to assess the risks that apply in a peer-to-peer (P2P) federated learning setup. We perform membership inference attacks specifically in a P2P federated learning setting with colluding adversaries to evaluate the privacy-accuracy trade-offs in a deep neural network, thereby demonstrating the extent of data leakage possible. | en_US |
dc.description.sponsorship | National Research Foundation (NRF) | en_US |
dc.language.iso | en | en_US |
dc.rights | © 2023 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. | en_US |
dc.subject | Computer and Information Science | en_US |
dc.title | Membership inference vulnerabilities in peer-to-peer federated learning | en_US |
dc.type | Conference Paper | en |
dc.contributor.school | School of Computer Science and Engineering | en_US |
dc.contributor.conference | 2023 Secure and Trustworthy Deep Learning Systems Workshop (SecTL '23) | en_US |
dc.contributor.research | Strategic Centre for Research in Privacy-Preserving Technologies & Systems (SCRIPTS) | en_US |
dc.identifier.doi | 10.1145/3591197.3593638 | - |
dc.description.version | Published version | en_US |
dc.identifier.scopus | 2-s2.0-85168559744 | - |
dc.identifier.volume | July 2023 | en_US |
dc.identifier.spage | 6 | en_US |
dc.subject.keywords | Federated Learning | en_US |
dc.subject.keywords | Neural Networks | en_US |
dc.citation.conferencelocation | Melbourne, Australia | en_US |
dc.description.acknowledgement | This research is supported by the National Research Foundation, Singapore under its Strategic Capability Research Centres Funding Initiative. | en_US |
item.fulltext | With Fulltext | - |
item.grantfulltext | open | - |
Appears in Collections: | SCSE Conference Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
3591197.3593638.pdf | | 963.04 kB | Adobe PDF | View/Open |
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.