Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/162585
Title: Game-theoretic inverse reinforcement learning: a differential Pontryagin's maximum principle approach
Authors: Cao, Kun
Xie, Lihua
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2022
Source: Cao, K. & Xie, L. (2022). Game-theoretic inverse reinforcement learning: a differential Pontryagin's maximum principle approach. IEEE Transactions on Neural Networks and Learning Systems. https://dx.doi.org/10.1109/TNNLS.2022.3148376
Project: 2019-T1-001-088 (RG72/19)
Journal: IEEE Transactions on Neural Networks and Learning Systems
Abstract: This paper proposes a game-theoretic inverse reinforcement learning (GT-IRL) framework, which aims to learn the parameters of both the dynamic system and the individual cost functions of multistage games from demonstrated trajectories. Unlike the probabilistic approaches favored in the computer science community and the residual-minimization solutions favored in the control community, our framework addresses the problem in a deterministic setting by differentiating the Pontryagin's Maximum Principle (PMP) equations of an open-loop Nash equilibrium (OLNE), an approach inspired by [1]. The differentiated equations for a multi-player nonzero-sum multistage game are shown to be equivalent to the PMP equations for another affine-quadratic nonzero-sum multistage game and can be solved by explicit recursions. A similar result is established for 2-player zero-sum games. Simulation examples are presented to demonstrate the effectiveness of the proposed algorithms.
URI: https://hdl.handle.net/10356/162585
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2022.3148376
Rights: © 2022 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:EEE Journal Articles