Title: Optimized dynamic policy for robust receding horizon control
Authors: Ajay Gautam
Keywords: DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation
Issue Date: 2012
Source: Gautam, Ajay. (2012). Optimized dynamic policy for robust receding horizon control. Doctoral thesis, Nanyang Technological University, Singapore.

Abstract: As an on-line-optimization-based control technique, receding horizon control (RHC) has become a prominent method for real-time control applications. Since this approach relies on a model of the system being controlled, uncertainties in the system description must be addressed with robust algorithms which, if designed naively, may yield conservative results even at the cost of complex on-line computations, thus limiting the wider applicability of the method. The research in this thesis aims to develop RHC algorithms that achieve a suitable tradeoff among control performance, applicability and on-line computational complexity, for control problems that require a systematic handling of uncertainties and constraints with low-complexity on-line computations. Focusing on (possibly uncertain) linear time-varying systems with a polytopic system description and (possibly unmeasurable) bounded additive disturbances, this thesis studies a class of admissible controller dynamics and proposes a dynamic control policy that is computationally attractive and less conservative. The proposed policy uses time-varying controller dynamics whose matrices need not be explicitly determined on-line but are only assumed to follow the same convex combination as the plant matrices, together with a disturbance feedforward term that does not require the disturbance to be measured. Essentially, the proposed policy incorporates all the 'uncertain' information into the controller dynamics, which reduces conservativeness in the assessment of feasible control inputs and hence enlarges the feasible invariant set for the controlled system. Furthermore, this policy allows the control optimization problem to be split into two separate problems: one to determine the convex hull of the controller matrices and the other to compute the controller initial state. With the former carried out off-line, the on-line computations involving the latter are considerably simplified. The dynamics of the proposed policy can also be optimized such that the resulting RHC law guarantees control performance with a suitable H∞ performance bound.

URI: https://hdl.handle.net/10356/49509
DOI: 10.32657/10356/49509
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: EEE Theses
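To illustrate the receding-horizon idea the abstract builds on — re-solving a finite-horizon problem at every step and applying only the first input — here is a minimal, generic sketch for a nominal linear system. This is not the thesis's robust dynamic policy (no polytopic uncertainty, no disturbance feedforward, no off-line/on-line split); the double-integrator system, weights, and horizon below are illustrative assumptions.

```python
import numpy as np

def finite_horizon_gain(A, B, Q, R, N):
    """Backward Riccati recursion over horizon N; returns the first-step LQR gain."""
    P = Q.copy()
    K = None
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def receding_horizon_run(A, B, Q, R, N, x0, steps):
    """At each step, re-solve the horizon-N problem and apply only the first input."""
    x = x0.copy()
    traj = [x.copy()]
    for _ in range(steps):
        # For a time-invariant nominal model the gain is constant; re-solving
        # each step is what makes this a receding-horizon (not fixed) policy.
        K = finite_horizon_gain(A, B, Q, R, N)
        u = -K @ x          # apply only the first input of the horizon
        x = A @ x + B @ u   # plant update
        traj.append(x.copy())
    return np.array(traj)

# Hypothetical double-integrator example (sampling time 0.1 s)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
traj = receding_horizon_run(A, B, Q, R, N=20, x0=np.array([1.0, 0.0]), steps=50)
```

Under the thesis's robust policy, the on-line step above would instead reduce to computing a controller initial state, with the controller-matrix convex hull fixed off-line.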
Updated on Aug 3, 2021
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.