Please use this identifier to cite or link to this item:
Title: A review of inverse reinforcement learning theory and recent advances
Authors: Shao, Zhifei
Er, Meng Joo
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2012
Source: Shao, Z., & Er, M. J. (2012). A review of inverse reinforcement learning theory and recent advances. 2012 IEEE Congress on Evolutionary Computation (CEC).
Abstract: A major challenge faced by the machine learning community is decision making under uncertainty, and Reinforcement Learning (RL) techniques provide a powerful solution to it. In RL, an agent interacts with a dynamic environment and finds a policy through a reward function, without using target labels as in Supervised Learning (SL). However, a fundamental assumption of existing RL algorithms is that the reward function, the most succinct representation of the designer's intention, must be provided beforehand. In practice, the reward function can be very hard to specify and exhausting to tune for large and complex problems, and this inspired the development of Inverse Reinforcement Learning (IRL), an extension of RL that directly tackles this problem by learning the reward function from expert demonstrations. IRL introduces a new way of learning policies by deriving the expert's intentions, in contrast to directly learning policies, which can be redundant and generalize poorly. In this paper, the original IRL algorithms and their close variants, as well as their recent advances, are reviewed and compared.
DOI: 10.1109/CEC.2012.6256507
Rights: © 2012 IEEE.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:EEE Conference Papers
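The core IRL question summarized in the abstract — which reward function makes an observed expert policy optimal? — can be illustrated with a minimal sketch. This is not one of the algorithms surveyed in the paper; it is a hypothetical toy: a 3-state chain MDP where we brute-force over one-hot candidate rewards and keep the one under which value iteration reproduces the expert's demonstrated policy.

```python
import numpy as np

# Illustrative sketch of the IRL idea (not the paper's algorithms):
# find a reward function that rationalizes an observed expert policy.
# Hypothetical 3-state chain MDP: action 0 = move left, action 1 = move right.
N_STATES, N_ACTIONS, GAMMA = 3, 2, 0.9

def step(s, a):
    """Deterministic chain dynamics, clipped at the ends."""
    return max(s - 1, 0) if a == 0 else min(s + 1, N_STATES - 1)

def greedy_policy(reward, iters=200):
    """Value iteration under `reward`, then the greedy policy it induces."""
    v = np.zeros(N_STATES)
    for _ in range(iters):
        v = np.array([max(reward[step(s, a)] + GAMMA * v[step(s, a)]
                          for a in range(N_ACTIONS)) for s in range(N_STATES)])
    return [int(np.argmax([reward[step(s, a)] + GAMMA * v[step(s, a)]
                           for a in range(N_ACTIONS)])) for s in range(N_STATES)]

def rationalizing_state(expert_policy):
    """Brute-force over one-hot rewards: return the state whose reward
    makes the expert's demonstrated policy greedy-optimal, or None."""
    for target in range(N_STATES):
        if greedy_policy(np.eye(N_STATES)[target]) == expert_policy:
            return target
    return None

# An expert who always moves right is rationalized by reward on state 2.
print(rationalizing_state([1, 1, 1]))  # → 2
```

Real IRL algorithms replace the brute-force search with optimization over a parameterized reward class and must handle the degeneracy the abstract alludes to: many rewards (including the all-zero one, excluded here by using one-hot candidates) can make the same policy optimal.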
