Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/182243
Title: Optimization strategies for federated learning
Authors: Zhang, Tinghao
Keywords: Computer and Information Science
Issue Date: 2025
Publisher: Nanyang Technological University
Source: Zhang, T. (2025). Optimization strategies for federated learning. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/182243
Abstract: Federated Learning (FL) has emerged as a prominent approach for training collaborative machine learning models within wireless communication networks. FL offers significant privacy advantages, since sensitive data remains on the devices, reducing the risk of data breaches. Additionally, FL can improve the speed of model training, as it allows parallel training on multiple local devices without transferring large volumes of data to a central server. However, the practical deployment of FL faces challenges due to the limited bandwidth resources of remote servers and the constrained computational capabilities of wireless devices. Therefore, optimization strategies are necessary to enhance the efficiency of FL. Device scheduling has become a critical aspect of optimization strategies for FL. It focuses on selecting a subset of devices to alleviate network congestion by considering factors such as device heterogeneity, channel conditions, and learning efficiency. Alongside device scheduling, resource allocation can improve FL efficiency by distributing communication and computation resources among local devices to minimize the time delay or the energy consumption of FL training. However, due to intractable interactions among multiple variables, stringent constraints, and the necessity of optimizing multiple objectives concurrently, developing effective device scheduling and resource allocation algorithms for FL is challenging. This thesis proposes three frameworks to effectively handle the optimization aspects of FL. The major contributions of this thesis are as follows. Firstly, to address the challenge of device scheduling within the framework of spectrum allocation, we propose a weight-divergence-based device selection method coupled with an energy-efficient spectrum allocation optimization technique. Experiments demonstrate that these approaches significantly accelerate FL training and improve convergence compared to benchmark methods.
The second contribution lies in the domain of device scheduling for bandwidth allocation. We achieve this through a deep reinforcement learning-based scheduling strategy and an optimized bandwidth allocation method, enabling FL to reach target accuracy with reduced system costs. Lastly, to further explore device scheduling in hierarchical Federated Learning (HFL), we propose an HFL framework that integrates effective device scheduling and assignment techniques, which expedite convergence and minimize costs, making FL more efficient and practical for real-world deployment. Together, these contributions form a cohesive strategy to advance FL by addressing its key challenges in efficiency, scalability, and resource management.
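The abstract describes selecting a subset of devices per round based on weight divergence before aggregating their models. As a rough illustration only, the toy sketch below runs federated averaging on synthetic linear-regression clients and schedules the clients whose local models diverge most from the global model; the selection rule, model, data, and all parameter values are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=5):
    """One client's local gradient-descent steps on a linear least-squares model."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Synthetic clients: each holds a small private dataset drawn around true_w.
d, n_clients = 3, 8
true_w = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(20, d))
    y = X @ true_w + 0.1 * rng.normal(size=20)
    clients.append((X, y))

w_global = np.zeros(d)
k = 4  # schedule only k of the n_clients devices per round

for rnd in range(30):
    locals_ = [local_update(w_global, X, y) for X, y in clients]
    # Divergence-based scheduling heuristic (assumed for illustration):
    # aggregate only the k clients whose updates diverge most from the
    # current global model.
    divergence = [np.linalg.norm(wl - w_global) for wl in locals_]
    chosen = np.argsort(divergence)[-k:]
    w_global = np.mean([locals_[i] for i in chosen], axis=0)

print(np.linalg.norm(w_global - true_w))
```

In a real deployment the scheduler would also weigh channel conditions and per-device energy budgets, which this sketch omits entirely.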
URI: https://hdl.handle.net/10356/182243
DOI: 10.32657/10356/182243
Schools: College of Computing and Data Science 
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:CCDS Theses

Files in This Item:
my_thesis_revised.pdf (7.38 MB, Adobe PDF)

