Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/182243
Full metadata record
DC Field | Value | Language
dc.contributor.author | Zhang, Tinghao | en_US
dc.date.accessioned | 2025-01-23T03:21:32Z | -
dc.date.available | 2025-01-23T03:21:32Z | -
dc.date.issued | 2025 | -
dc.identifier.citation | Zhang, T. (2025). Optimization strategies for federated learning. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/182243 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/182243 | -
dc.description.abstract | Federated Learning (FL) has emerged as a prominent approach for training collaborative machine learning models within wireless communication networks. FL offers significant privacy advantages since sensitive data remains on local devices, reducing the risk of data breaches. Additionally, FL can speed up model training because it allows parallel training on multiple local devices without transferring large volumes of data to a central server. However, the practical deployment of FL faces challenges due to the limited bandwidth resources of remote servers and the constrained computational capabilities of wireless devices. Therefore, optimization strategies are necessary to enhance the efficiency of FL. Device scheduling has become a critical aspect of such optimization strategies. It focuses on selecting a subset of devices to alleviate network congestion by considering factors such as device heterogeneity, channel conditions, and learning efficiency. Alongside device scheduling, resource allocation can improve FL efficiency by distributing communication and computation resources among local devices to minimize the time delay or the energy consumption of FL training. However, due to the intractable interaction among multiple variables, stringent constraints, and the necessity of optimizing multiple objectives concurrently, developing effective device scheduling and resource allocation algorithms for FL is challenging. This thesis proposes three frameworks to effectively handle the optimization aspect of FL. The major contributions of this thesis are as follows. Firstly, to address the challenge of device scheduling within the framework of spectrum allocation, we propose a weight-divergence-based device selection method coupled with an energy-efficient spectrum allocation optimization technique; experiments demonstrate that these approaches significantly accelerate FL training and improve convergence compared to benchmark methods (an illustrative sketch of the device-selection step follows this record). Secondly, in the domain of device scheduling for bandwidth allocation, we propose a deep reinforcement learning-based scheduling strategy and an optimized bandwidth allocation method, enabling FL to reach a target accuracy with reduced system cost. Lastly, to further explore device scheduling in hierarchical Federated Learning (HFL), we propose an HFL framework that integrates effective device scheduling and assignment techniques, which expedite convergence and minimize costs, making FL more efficient and practical for real-world deployment. Together, these contributions form a cohesive strategy to advance FL by addressing its key challenges in efficiency, scalability, and resource management. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). | en_US
dc.subject | Computer and Information Science | en_US
dc.title | Optimization strategies for federated learning | en_US
dc.type | Thesis-Doctor of Philosophy | en_US
dc.contributor.supervisor | Lam Kwok Yan | en_US
dc.contributor.school | College of Computing and Data Science | en_US
dc.description.degree | Doctor of Philosophy | en_US
dc.identifier.doi | 10.32657/10356/182243 | -
dc.contributor.supervisoremail | kwokyan.lam@ntu.edu.sg | en_US
item.grantfulltext | open | -
item.fulltext | With Fulltext | -
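
Illustrative sketch of the device-selection step mentioned in the abstract: the thesis's exact divergence metric and selection rule are not given in this record, so the code below is only a minimal sketch under assumed choices, namely that divergence is the L2 distance between a device's locally updated weights and the current global weights, and that the k most divergent devices are scheduled in a round. The function names (weight_divergence, select_devices) and the toy data are hypothetical, not taken from the thesis.

import numpy as np

def weight_divergence(local_weights, global_weights):
    # Assumed metric: L2 distance between flattened local and global weight vectors.
    return float(np.linalg.norm(local_weights - global_weights))

def select_devices(local_models, global_model, k):
    # Score every device, then schedule the k devices whose updates diverge most
    # from the current global model.
    scores = np.array([weight_divergence(w, global_model) for w in local_models])
    return sorted(np.argsort(scores)[-k:].tolist())

# Toy usage: 10 devices holding 1000-parameter local models, schedule 3 per round.
rng = np.random.default_rng(0)
global_model = rng.normal(size=1000)
local_models = [global_model + rng.normal(scale=0.1 * (i + 1), size=1000) for i in range(10)]
print(select_devices(local_models, global_model, k=3))  # most-perturbed devices, e.g. [7, 8, 9]

In a full system this selection step would be coupled with the energy-efficient spectrum allocation described in the abstract; that joint optimization is outside the scope of this sketch.
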
Appears in Collections: CCDS Theses
Files in This Item:
File | Description | Size | Format
my_thesis_revised.pdf | - | 7.38 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.