Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/166091
Title: Effects of action masking on deep reinforcement learning for inventory management
Authors: Goh, Bryan Zheng Ting
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Engineering::Industrial engineering::Supply chain
Issue Date: 2023
Publisher: Nanyang Technological University
Source: Goh, B. Z. T. (2023). Effects of action masking on deep reinforcement learning for inventory management. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/166091
Abstract: Inventory management has always been a crucial part of supply chain management; managing it poorly leads to unnecessary inventory costs such as lost sales and holding costs. Over the years, researchers in operations research have investigated solutions and systems to better manage inventory and to optimize it by lowering inventory costs as much as possible. With recent advances in reinforcement learning and deep neural networks, there has been rising interest in using deep reinforcement learning to train an artificial agent to manage inventory and minimize inventory costs. In this report, a solution for a single-retailer, single-item inventory management environment with stochastic demand is developed using a Deep Q-Network (DQN). Although there is recent work applying DQN to inventory management, few studies have investigated the effects of action masking in this problem domain. This report therefore focuses on investigating different methods of action masking and analyzing their effects on the speed of convergence during the training phase, as well as on additional metrics such as mean reward, fill rate, and service level during the inference phase. Furthermore, the report analyzes whether different demand distributions affect the training of a DQN agent.
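The action-masking idea the abstract describes can be sketched as follows. This is a minimal illustration, not the report's implementation: the environment details (actions as order quantities, a warehouse-capacity constraint, the function names) are assumptions made for the example. A common masking approach is to force the Q-values of infeasible actions to negative infinity so the greedy policy can never select them.

```python
def mask_invalid_actions(q_values, inventory_level, capacity):
    """Return Q-values with infeasible order quantities masked out.

    Action a means "order a units"; ordering more than the remaining
    warehouse capacity is treated as invalid (an illustrative constraint).
    """
    masked = []
    for action, q in enumerate(q_values):
        if inventory_level + action > capacity:
            masked.append(float("-inf"))  # agent can never pick this action
        else:
            masked.append(q)
    return masked


def greedy_action(q_values, inventory_level, capacity):
    """Greedy action selection over the masked Q-values."""
    masked = mask_invalid_actions(q_values, inventory_level, capacity)
    return max(range(len(masked)), key=lambda a: masked[a])


# Example: 4 order quantities (0..3), 8 units on hand, capacity 10.
# Ordering 3 units would exceed capacity, so action 3 is masked even
# though its raw Q-value (0.9) is the highest.
q = [0.1, 0.5, 0.3, 0.9]
print(greedy_action(q, inventory_level=8, capacity=10))  # -> 1
```

The same mask can also be applied to the target network's Q-values during training, which is one of the design choices whose effect on convergence speed a study like this would compare.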
URI: https://hdl.handle.net/10356/166091
Schools: School of Computer Science and Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP_Amended.pdf (Restricted Access)
Size: 1.4 MB
Format: Adobe PDF

Page view(s): 334 (updated on Mar 17, 2025)

Download(s): 13 (updated on Mar 17, 2025)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.