Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/68805
Full metadata record
DC Field | Value | Language
dc.contributor.author | Xu, Jinming | en
dc.date.accessioned | 2016-06-01T08:44:11Z | en
dc.date.available | 2016-06-01T08:44:11Z | en
dc.date.issued | 2016 | en
dc.identifier.uri | https://hdl.handle.net/10356/68805 | en
dc.description.abstract | The dissertation develops several new schemes and algorithms for solving distributed optimization problems in large-scale networked systems, in which a group of agents collaboratively seek the global optimum through peer-to-peer communication networks. The problem arises in various application domains, such as coordinated control, resource allocation and sensor fusion. Features common to these areas are that the system in question typically involves a large number of agents without any centralized coordinator, and that resources, such as sensing, communication and computation, are usually scattered throughout the network. As a result, the agents have to coordinate their behaviors with each other through only local information exchange to achieve a desired network (system) objective. For coordinated control of large-scale networked systems, we propose a novel distributed simultaneous perturbation approach (D-SPA) to solve the distributed optimization problem based on simultaneous perturbation techniques as well as consensus strategies. The proposed method is model-free and requires little knowledge of the coupling structure of the problem to be optimized. Using singular perturbation and averaging theory, we show that the proposed scheme converges to a neighborhood of the Pareto optimum of the problem so long as the energy of the perturbation signals is sufficiently small. To illustrate its effectiveness, we apply the proposed approach to a simulated offshore wind farm for energy maximization and make a comprehensive comparison with an existing state-of-the-art technique. On the other hand, for coordinated estimation in large-scale statistical signal processing, most existing distributed algorithms require a perfect synchronization mechanism and a decaying stepsize to achieve the exact optimum, which prevents asynchronous implementation and results in slower convergence rates. In addition, the boundedness of the (sub)gradient is often assumed for convergence analysis, which is quite restrictive for unconstrained problems. To overcome these issues, we propose two augmented distributed algorithms, both of which involve an extra consensus step in each iteration. Specifically, a general efficient distributed algorithm, termed Distributed Forward-Backward Bregman Splitting (D-FBBS), is proposed to simultaneously solve the primal problem as well as its dual, based on the Bregman method and operator splitting. The proposed algorithm allows agents to communicate asynchronously and thus lends itself to stochastic networks. It belongs to the family of general proximal point algorithms and is shown to have close connections with some well-known existing algorithms for fixed networks, while generally differing from them in handling stochastic networks. To further tackle asynchrony in computation, we propose a new augmented distributed gradient method (AugDGM) based on the well-known distributed gradient method. Both algorithms are able to converge to the exact optimum, even with a constant stepsize, over stochastic networks and without assuming bounded (sub)gradients. Under suitable assumptions, we establish for the D-FBBS algorithm a non-ergodic convergence rate of o(1/k) in terms of the fixed-point residual over fixed networks and an ergodic convergence rate of O(1/k) over stochastic networks. For the asynchronous version of AugDGM (AsynDGM), we obtain an ergodic convergence rate of O(1/√k) in terms of the objective error for strongly convex functions with Lipschitz gradients over both fixed and stochastic networks. Some examples of sensor fusion problems are provided to illustrate the effectiveness of the proposed algorithms. | en
dc.format.extent | 150 p. | en
dc.language.iso | en | en
dc.subject | DRNTU::Science::Mathematics::Applied mathematics::Optimization | en
dc.subject | DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation::Control engineering | en
dc.title | Augmented distributed optimization for networked systems | en
dc.type | Thesis | en
dc.contributor.supervisor | Soh Yeng Chai | en
dc.contributor.school | School of Electrical and Electronic Engineering | en
dc.description.degree | DOCTOR OF PHILOSOPHY (EEE) | en
dc.contributor.research | Centre for Modelling and Control of Complex Systems | en
dc.identifier.doi | 10.32657/10356/68805 | en
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
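The AugDGM scheme summarized in the abstract above augments the classical distributed gradient method with an extra consensus step that tracks the network-average gradient, which is what permits exact convergence under a constant stepsize. Below is a minimal illustrative sketch of that gradient-tracking idea, not the thesis implementation: the quadratic local costs, the ring network, the Metropolis weights and the stepsize are all assumed example choices.

```python
# Minimal sketch (assumed example, not the thesis code) of the gradient-tracking
# idea behind AugDGM: each agent mixes its estimate with its neighbours' and, in
# an extra consensus step, tracks the network-average gradient, which allows
# exact convergence with a constant stepsize.
import numpy as np

n = 5                                    # number of agents on a ring network
rng = np.random.default_rng(0)
b = rng.normal(size=n)                   # local data: f_i(x) = 0.5 * (x - b_i)^2
grad = lambda x: x - b                   # stacked local gradients

# Doubly stochastic mixing matrix (Metropolis weights on the ring).
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

alpha = 0.1                              # constant stepsize
x = np.zeros(n)                          # local estimates of the optimum
y = grad(x)                              # gradient trackers, initialized to local gradients

for _ in range(300):
    x_next = W @ (x - alpha * y)             # consensus on the adjusted estimates
    y = W @ (y + grad(x_next) - grad(x))     # extra consensus: track the average gradient
    x = x_next

print("agents' estimates:", np.round(x, 6))      # all approach the exact optimum
print("exact optimum    :", round(b.mean(), 6))  # minimizer of sum_i f_i is mean(b)
```

If the extra consensus step is removed (y replaced by the raw local gradient), the same constant-stepsize iteration only reaches a neighborhood of the optimum, which is the limitation of classical distributed gradient methods that the abstract refers to.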
Appears in Collections: EEE Theses
Files in This Item:
File | Description | Size | Format
PhDthesis_augdopt4nsys_final__Jinming Xu_G1102295C.pdf | PhD Dissertation | 2.81 MB | Adobe PDF

Page view(s): 588 (updated on Mar 28, 2024)
Download(s): 443 (updated on Mar 28, 2024)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.