dc.contributor.author: Ajay Gautam
dc.date.accessioned: 2012-05-21T04:54:01Z
dc.date.accessioned: 2017-07-23T08:34:19Z
dc.date.available: 2012-05-21T04:54:01Z
dc.date.available: 2017-07-23T08:34:19Z
dc.date.copyright: 2012 (en_US)
dc.date.issued: 2012
dc.identifier.citation: Gautam, Ajay. (2012). Optimized dynamic policy for robust receding horizon control. Doctoral thesis, Nanyang Technological University, Singapore.
dc.identifier.uri: http://hdl.handle.net/10356/49509
dc.description.abstract: As an on-line-optimization-based control technique, receding horizon control (RHC) has become a prominent control method for real-time control applications. Because this approach relies on a model of the system being controlled, uncertainties in the system description must be addressed with robust algorithms which, if designed naively, may yield conservative results even with complex on-line computations, thus limiting the wider applicability of the method. The research in this thesis aims to develop RHC algorithms that achieve a suitable tradeoff among control performance, applicability, and on-line computational complexity, for control problems that require a systematic handling of uncertainties and constraints with low-complexity on-line computations. Focusing on (possibly uncertain) linear time-varying systems with a polytopic system description and (possibly unmeasurable) bounded additive disturbances, this thesis studies a class of admissible controller dynamics and proposes a dynamic control policy that is computationally attractive and offers reduced conservativeness. The proposed policy uses time-varying controller dynamics whose controller matrices need not be explicitly determined on-line but are only assumed to follow the same convex combination as the plant matrices, together with a disturbance feedforward term that does not require the disturbance to be measured. Essentially, the proposed policy incorporates all the 'uncertain' information into the controller dynamics, which reduces the conservativeness in the assessment of feasible control inputs and hence of the feasible invariant set for the controlled system. Furthermore, this policy allows the control optimization problem to be split into two separate problems: one to determine the convex hull of the controller matrices and the other to compute the controller's initial state. With the former carried out off-line, the on-line computations involving the latter are considerably simplified. The dynamics of the proposed policy can also be optimized so that the resulting RHC law ensures control performance with a suitable H-infinity performance bound. (en_US)
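The receding-horizon principle the abstract builds on can be illustrated with a minimal sketch: at every step a finite-horizon optimal-control problem is solved and only the first input of the resulting sequence is applied. The code below is a purely illustrative, nominal (disturbance-free, unconstrained) scalar example, not the robust dynamic policy the thesis proposes; all names and parameter values (`first_step_gain`, `rhc_loop`, `a=1.2`, `horizon=10`, etc.) are assumptions chosen for the sketch.

```python
# Illustrative receding-horizon control (RHC) loop for the scalar system
# x[k+1] = a*x[k] + b*u[k], with stage cost q*x^2 + r*u^2.
# At each step a finite-horizon LQR problem is solved via a backward
# Riccati recursion, and only the first input is applied to the plant.

def first_step_gain(a, b, q, r, horizon):
    """Gain of the first input in the finite-horizon LQR solution."""
    p = q  # terminal weight on the state
    k = 0.0
    for _ in range(horizon):
        k = a * b * p / (r + b * b * p)   # stage-optimal feedback gain
        p = q + a * a * p - a * b * p * k  # Riccati backward step
    return k

def rhc_loop(a=1.2, b=1.0, q=1.0, r=0.1, horizon=10, steps=30, x0=5.0):
    """Run the nominal closed loop and return the state trajectory."""
    x, traj = x0, [x0]
    for _ in range(steps):
        k = first_step_gain(a, b, q, r, horizon)  # re-optimized each step
        u = -k * x                                # apply only the first input
        x = a * x + b * u
        traj.append(x)
    return traj
```

With these (hypothetical) parameters the open-loop system is unstable (a = 1.2 > 1), yet the receding-horizon feedback drives the state toward the origin. The thesis's contribution concerns the much harder setting where the model is a polytopic uncertain set and disturbances act on the state, which this nominal sketch deliberately omits.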
dc.format.extent: 223 p. (en_US)
dc.language.iso: en (en_US)
dc.subject: DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation (en_US)
dc.title: Optimized dynamic policy for robust receding horizon control (en_US)
dc.type: Thesis
dc.contributor.school: School of Electrical and Electronic Engineering (en_US)
dc.contributor.supervisor: Soh Yeng Chai (en_US)
dc.description.degree: DOCTOR OF PHILOSOPHY (EEE) (en_US)


Files in this item

File: TeG0603230G.pdf (2.408 MB, application/pdf)
