Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/49509
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ajay Gautam | en
dc.date.accessioned | 2012-05-21T04:54:01Z | en
dc.date.available | 2012-05-21T04:54:01Z | en
dc.date.copyright | 2012 | en
dc.date.issued | 2012 | en
dc.identifier.citation | Gautam, Ajay. (2012). Optimized dynamic policy for robust receding horizon control. Doctoral thesis, Nanyang Technological University, Singapore. | en
dc.identifier.uri | https://hdl.handle.net/10356/49509 | en
dc.description.abstract | As an on-line-optimization-based control technique, receding horizon control (RHC) has become a prominent method for real-time control applications. Since this control approach relies on a model of the system being controlled, uncertainties in the system description must be addressed with robust algorithms which, if designed naively, may lead to conservative results even with complex on-line computations, thus limiting the wider applicability of the method. The research in this thesis aims to develop RHC algorithms that achieve a suitable tradeoff among control performance, applicability and on-line computational complexity, for control problems that require a systematic handling of uncertainties and constraints with low-complexity on-line computations. Focusing on (possibly uncertain) linear time-varying systems with a polytopic system description and (possibly unmeasurable) bounded additive disturbances, this thesis studies a class of admissible controller dynamics and proposes a dynamic control policy that is computationally attractive and less conservative. The proposed policy uses time-varying controller dynamics whose matrices need not be explicitly determined on-line but are only assumed to follow the same convex combination as the plant matrices, together with a disturbance feedforward term that does not require the disturbance to be measured. Essentially, the proposed policy incorporates all the 'uncertain' information into the controller dynamics, which reduces the conservativeness in the assessment of feasible control inputs and, hence, of the feasible invariant set for the controlled system. Furthermore, this policy allows the control optimization problem to be split into two separate problems: one to determine the convex hull of the controller matrices and the other to compute the controller initial state. With the former carried out off-line, the on-line computations involving the latter are considerably simplified. The dynamics of the proposed policy can also be optimized such that the resulting RHC law guarantees a suitable H-infinity performance bound. | en
dc.format.extent | 223 p. | en
dc.language.iso | en | en
dc.subject | DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation | en
dc.title | Optimized dynamic policy for robust receding horizon control | en
dc.type | Thesis | en
dc.contributor.supervisor | Soh Yeng Chai | en
dc.contributor.school | School of Electrical and Electronic Engineering | en
dc.description.degree | DOCTOR OF PHILOSOPHY (EEE) | en
dc.identifier.doi | 10.32657/10356/49509 | en
item.grantfulltext | open | -
item.fulltext | With Fulltext | -
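
Note on the abstract above: it rests on the basic receding-horizon mechanism, in which a finite-horizon optimization is solved at each sampling instant, only the first input is applied, and the horizon slides forward. The Python sketch below illustrates only that generic mechanism for a nominal linear system; the plant matrices, weights, horizon and input clipping are illustrative placeholders and are not taken from the thesis, whose contribution (the optimized dynamic policy with controller vertex matrices computed off-line and the controller initial state optimized on-line, under polytopic uncertainty and bounded disturbances) is not reproduced here.

# Minimal receding-horizon (RHC) loop for a nominal discrete-time linear
# system x_{k+1} = A x_k + B u_k: at each step a finite-horizon quadratic
# cost is minimised over the input sequence, only the first input is applied,
# and the horizon slides forward.  All numbers (A, B, weights, horizon N) are
# illustrative placeholders, not values from the thesis.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # double-integrator-like plant (assumed)
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)                        # state weight
R = np.array([[0.1]])                # input weight
N = 10                               # prediction horizon
u_max = 1.0                          # input constraint |u| <= u_max

def solve_horizon(x0):
    """Unconstrained finite-horizon LQ solve via stacked prediction matrices,
    followed by simple input clipping (a crude stand-in for a proper
    constrained, robust solver)."""
    n, m = A.shape[0], B.shape[1]
    # Prediction model over the horizon: X = Phi x0 + Gamma U
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)
    # Minimise X'Qbar X + U'Rbar U  =>  (Gamma'Qbar Gamma + Rbar) U = -Gamma'Qbar Phi x0
    H = Gamma.T @ Qbar @ Gamma + Rbar
    g = Gamma.T @ Qbar @ Phi @ x0
    U = np.linalg.solve(H, -g)
    return np.clip(U[:m], -u_max, u_max)   # apply only the first input, clipped

x = np.array([1.0, 0.0])
for k in range(30):
    u = solve_horizon(x)
    x = A @ x + B @ u                # in the robust setting of the thesis, an
                                     # additive disturbance would enter here
print("final state:", x)

In the policy proposed in the thesis, the per-step on-line problem sketched above would be replaced by a much lighter optimization over the controller's initial state, with the convex hull of the controller matrices prepared off-line.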
Appears in Collections: EEE Theses
Files in This Item:
File | Description | Size | Format
TeG0603230G.pdf |  | 2.35 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.