Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/150183
Title: Smart building with intelligent indoor lighting control system using reinforcement learning simulations
Authors: Chan, Keno Jia Nuo
Keywords: Engineering::Environmental engineering
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Chan, K. J. N. (2021). Smart building with intelligent indoor lighting control system using reinforcement learning simulations. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/150183
Project: EN-41
Abstract: In most conventional indoor environments today, such as households or office buildings, the deployed lighting control systems incorporate digital technologies such as motion sensors or wireless communication to control the lights' operation and brightness. Such systems often lead to low user satisfaction because they are tuned to reduce energy wastage without considering users' lighting preferences. Although wireless controls allow users to manually adjust the lighting intensity to their preferences, this becomes a problem when there is more than one occupant in the environment, owing to differences in lighting preferences. Hence, the preferable lighting control system is one that considers the lighting preferences of all occupants in the room while, to some extent, minimizing the energy wasted on lighting. This study proposes a reinforcement learning (RL) control system that strikes a balance between occupants' lighting preferences and energy efficiency within the environment. In this control system, the RL agents optimize lighting comfort based on the lighting preference profiles of all occupants in the environment, while a negotiator maximizes the lighting comfort across occupants with differing preferences and minimizes energy consumption. The control agents are trained with Q-learning, a model-free reinforcement learning algorithm, and simulated with three different lighting preference profiles from three occupants. To identify the ideal learning conditions for the proposed control system, different learning parameters are tested, including the ε-greedy value, the maximum steps per episode, the learning rate α, and the discount rate γ. In addition, to test the adaptability of the proposed control system, changes are made to the environment, such as varying the number of occupants, varying the starting lighting state, and accounting for energy efficiency. The results show that the proposed control system reached the optimum lighting comfort after 116 simulation runs, demonstrating good lighting comfort optimization performance and relatively efficient learning even when environmental changes were introduced.
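Note: the full report is under restricted access. As an illustration only, the minimal Python sketch below shows the kind of tabular Q-learning loop the abstract describes, with ε-greedy action selection, a learning rate α, a discount rate γ, and a toy reward that trades occupant comfort against energy use. All preference values, weights, and parameter settings in the sketch are assumptions made for illustration and are not taken from the report.

    # Illustrative sketch only; values and names are hypothetical, not from the report.
    import random

    LIGHT_LEVELS = list(range(11))          # discrete brightness states 0..10
    ACTIONS = [-1, 0, 1]                    # dim, hold, brighten
    PREFERENCES = [4, 6, 7]                 # assumed preferred levels of three occupants
    ENERGY_WEIGHT = 0.05                    # assumed energy penalty per brightness unit
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount rate, exploration rate
    EPISODES, MAX_STEPS = 500, 50           # training budget for the sketch

    def reward(level):
        # Comfort term: negative mean distance from each occupant's preferred level.
        comfort = -sum(abs(level - p) for p in PREFERENCES) / len(PREFERENCES)
        # Energy term: small penalty proportional to brightness.
        return comfort - ENERGY_WEIGHT * level

    Q = {(s, a): 0.0 for s in LIGHT_LEVELS for a in ACTIONS}

    for _ in range(EPISODES):
        state = random.choice(LIGHT_LEVELS)               # random starting lighting state
        for _ in range(MAX_STEPS):
            if random.random() < EPSILON:                 # epsilon-greedy exploration
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state = min(max(state + action, 0), 10)  # apply dimming action, clamp to range
            r = reward(next_state)
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            # Q-learning update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
            state = next_state

    # Greedy rollout after training: the policy should settle near the comfort/energy optimum.
    state = 0
    for _ in range(20):
        state = min(max(state + max(ACTIONS, key=lambda a: Q[(state, a)]), 0), 10)
    print("Greedy policy settles at lighting level:", state)

The report itself uses separate control agents and a negotiator to handle multiple occupants; this single-agent sketch only illustrates the Q-learning update and ε-greedy mechanics mentioned in the abstract.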
URI: https://hdl.handle.net/10356/150183
Schools: School of Civil and Environmental Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:CEE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP Final Report - Keno - Upload version.pdf (Restricted Access)
Size: 884.2 kB
Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.