Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/159038
Title: Traffic signal control for optimized urban mobility
Authors: Damani, Mehul
Keywords: Engineering::Mechanical engineering
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Damani, M. (2022). Traffic signal control for optimized urban mobility. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/159038
Abstract: The aim of traffic signal control (TSC) is to optimize vehicle traffic in urban road networks via the control of traffic lights at intersections. Efficient traffic signal control can significantly reduce the detrimental impacts of traffic congestion, such as environmental pollution, passenger frustration, and economic losses due to wasted time (e.g., for delivery or emergency vehicles). At present, fixed-time controllers, which use offline data to fix the duration of traffic signal phases, remain the most widespread. However, urban traffic exhibits complex spatio-temporal patterns, such as peak congestion at the end of a workday. Fixed-time controllers, which follow a pre-defined control rule, cannot account for such dynamic patterns; as a result, there has been a recent push toward adaptive traffic signal control methods that dynamically adjust their control rule based on locally sensed, real-time traffic conditions. Reinforcement learning (RL) is one such adaptive, versatile, data-driven method, which has shown great promise in a variety of decision-making problems. Combined with deep learning, RL can be leveraged to learn powerful control policies for highly complex tasks. This work focuses on decentralized adaptive TSC and proposes a distributed multi-agent reinforcement learning (MARL) framework in which each agent is a traffic intersection tasked with selecting that intersection's traffic phase, based on locally sensed traffic conditions and communication with its neighbors. Because the intersections/agents are highly connected and interdependent, cooperation among them is key to achieving the desired bottom-up, network-wide traffic optimization. To this end, this work proposes a novel social intrinsic reward mechanism for learning locally cooperative traffic signal control policies. Counterfactually predicted states, obtained using a learned dynamics model, are used to compute an intrinsic reward that captures the impact an agent's immediate actions have on its neighboring agents' future states, thus encouraging locally selfless behaviors. In contrast to simply sharing rewards among neighbors, which usually increases reward noise, the proposed intrinsic reward allows agents to explicitly assign credit to each other, leading to more stable and faster convergence to enhanced-cooperation policies. We present extensive comparisons against state-of-the-art methods on the Manhattan 5x5 traffic network using the standard traffic simulator SUMO, where our framework exhibits comparable or improved performance over state-of-the-art TSC baselines.
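
As a rough illustration of the counterfactual intrinsic reward described above, the sketch below shows one plausible reading of the mechanism in Python/PyTorch. It is a minimal sketch under stated assumptions, not the project's actual implementation: the dynamics-model architecture, the use of a fixed baseline (counterfactual) phase, and the state-scoring function value_fn are all assumptions introduced here for illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DynamicsModel(nn.Module):
        """Learned model predicting a neighbor's next local state from its
        current state and the focal agent's chosen traffic phase.
        (Assumption: states are fixed-size feature vectors and phases are
        one-hot encoded; the thesis's actual model may differ.)"""
        def __init__(self, state_dim: int, n_phases: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + n_phases, hidden),
                nn.ReLU(),
                nn.Linear(hidden, state_dim),
            )

        def forward(self, neighbor_state, phase_onehot):
            return self.net(torch.cat([neighbor_state, phase_onehot], dim=-1))

    def social_intrinsic_reward(model, neighbor_states, action,
                                baseline_action, n_phases, value_fn):
        """Counterfactual impact of `action` on neighboring intersections:
        compare each neighbor's predicted next state under the action actually
        taken against its predicted next state under a counterfactual baseline
        action, and reward the focal agent for the improvement. `value_fn`
        (an assumption) scores a predicted state, e.g. negative queue length."""
        a = F.one_hot(torch.tensor(action), n_phases).float()
        a_cf = F.one_hot(torch.tensor(baseline_action), n_phases).float()
        reward = 0.0
        for s_j in neighbor_states:
            s_next = model(s_j, a)        # predicted state under actual action
            s_next_cf = model(s_j, a_cf)  # predicted state under counterfactual
            reward += value_fn(s_next) - value_fn(s_next_cf)
        return reward

Because this reward is a difference of per-neighbor value estimates rather than a shared return, each agent receives an explicit, low-noise credit signal for its effect on its neighbors, which is the stability argument made in the abstract.

The decentralized sense-and-act loop itself can be sketched against TraCI, SUMO's standard Python control API. The configuration file name is a placeholder, and the policy function below merely holds the current phase; in the actual framework it would be the learned MARL controller.

    import traci

    def policy(tls_id, obs):
        # Placeholder for the learned decentralized controller:
        # simply keep the current phase.
        return traci.trafficlight.getPhase(tls_id)

    # Launch SUMO headless; assumes `sumo` is on PATH and the (hypothetical)
    # Manhattan 5x5 configuration file exists.
    traci.start(["sumo", "-c", "manhattan_5x5.sumocfg"])
    tls_ids = traci.trafficlight.getIDList()  # one agent per intersection

    for _ in range(3600):  # one simulated hour at 1 s per step
        traci.simulationStep()
        for tls in tls_ids:
            # Locally sensed observation: halting vehicles per incoming lane.
            lanes = sorted(set(traci.trafficlight.getControlledLanes(tls)))
            obs = [traci.lane.getLastStepHaltingNumber(l) for l in lanes]
            traci.trafficlight.setPhase(tls, policy(tls, obs))

    traci.close()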
URI: https://hdl.handle.net/10356/159038
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:MAE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: MehulDamani-FYP.pdf
Description: Restricted Access
Size: 22.72 MB
Format: Adobe PDF
