Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/173381
Title: Lane perception in rainy conditions for autonomous vehicles
Authors: Mahendran, Prabhu Shankar
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2024
Publisher: Nanyang Technological University
Source: Mahendran, P. S. (2024). Lane perception in rainy conditions for autonomous vehicles. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/173381
Abstract: One of the critical tasks performed by Autonomous Vehicles (AV) and Advanced Driver Assistance Systems (ADAS) is the detection and localization of lanes. To effectively automate lane keeping, route planning, and lane changing, it is essential to know the number of lanes, their orientations, and their trajectories. As a result, lane detection has attracted immense research interest in recent years. However, lane detection in rainy conditions remains underexplored. Rain particles interact with visual sensors in numerous ways, making the extraction of strong discriminative features even more difficult and hindering the detection task. De-raining has been studied intensively to combat these disruptive effects; however, recent studies suggest that de-raining may further destroy some discriminative features. There is also a shortage of research focused on lane detection in rainy conditions, as well as a lack of annotated lane detection datasets in rainy weather. Motivated by these gaps, this thesis presents a novel lane detection framework targeting lane enhancement, segmentation, and detection in rainy conditions. First, a unique enhancement method, the Deep Residual Enhancement Network (DRENet), is developed to amplify lanes over the noisy signal. A multiplicative and an additive factor are learnt instead of directly manipulating raw pixel values. This enables better focus on the large, connected structures of lanes and helps avoid non-lane regions. An auxiliary foreground segmentation task is introduced to further cement attention on lane regions. As building an applicable lane detection dataset in rainy weather is challenging and costly, a unique training framework is adopted that utilizes unannotated traffic scenes in rain together with an established lane detection dataset in non-rainy weather.
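The multiplicative-and-additive enhancement described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: in DRENet the two per-pixel factors would be predicted by a network, whereas here `mult` and `add` are hand-crafted inputs that amplify a lane-like column.

```python
import numpy as np

def enhance(image, mult, add):
    """Apply per-pixel multiplicative and additive factors to an image.

    image, mult, add: float arrays of the same shape, values in [0, 1]
    for the image. The output is clipped back to the valid range.
    """
    return np.clip(image * mult + add, 0.0, 1.0)

# Toy example: amplify a vertical "lane" stripe, leave the rest untouched.
img = np.full((4, 4), 0.2)          # dim, rain-degraded scene
mult = np.ones((4, 4))
add = np.zeros((4, 4))
mult[:, 2] = 2.0                    # multiplicative gain on the lane column
add[:, 2] = 0.1                     # additive boost on the lane column
out = enhance(img, mult, add)       # lane column brightened to ~0.5
```

Learning a gain and an offset rather than raw pixel values keeps the transformation spatially smooth, which is why it favours large connected structures such as lane markings over isolated rain streaks.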
To further improve lane segmentation in rain, the Joint Enhancement and Segmentation Network (JESNet) is presented next. Inspired by weight sharing in multi-task networks, JESNet enhances and segments lane regions within a single network. To improve the generalizability of the lane enhancement process, adversarial training is introduced with a specialized multi-receptive-field discriminator, which assesses semantic properties at different scales and depths in the lane-enhanced image without downsampling operations. Attention to global structures is induced through a unique combination of multi-scale segmentation and dilation operations; as a result, the vanishing region of the road may be implicitly learnt without a manual vanishing-point label. Lastly, adaptive postprocessing is necessary to account for the false positive and false negative predictions that are common in rainy conditions. To achieve this, a novel lane construction module is designed. The module first generates sparse lane points using the statistics of local neighborhoods of the segmentation maps; missed segments and false positives are compensated using local spatial-statistics cues. Lanes are then constructed by fitting a polynomial curve over the predicted points, so that the number of lanes in the rainy scene, their orientations, and their trajectories may be defined. As only segmentation maps are used, this module is effectively weather agnostic. To evaluate the lane detection framework in rainy conditions, tests were conducted on the proposed rain-translated dataset, which is based on the TuSimple dataset. Further evaluations were conducted on our own unannotated rainy RainSG dataset, the clear TuSimple dataset, and the original clear and several rain-translated portions of the CULane database.
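The final step of the lane construction module — fitting a polynomial curve over sparse lane points — can be sketched as below. The sample points are hypothetical stand-ins for those the thesis derives from local-neighborhood statistics of the segmentation maps, which this sketch skips.

```python
import numpy as np

# Hypothetical sparse lane points (image row y, image column x) for one
# lane, as might survive after filtering a rainy segmentation map.
ys = np.array([100.0, 140.0, 180.0, 220.0, 260.0])
xs = np.array([310.0, 330.0, 352.0, 371.0, 390.0])

# Fit x as a polynomial of y (lanes are near-vertical in image space, so
# x(y) stays single-valued), then sample a dense, smooth lane trajectory.
coeffs = np.polyfit(ys, xs, deg=2)
dense_y = np.linspace(ys.min(), ys.max(), 50)
dense_x = np.polyval(coeffs, dense_y)
```

Because the fit operates only on point coordinates taken from segmentation maps, nothing in this step depends on the input weather, which is the sense in which the module is weather agnostic.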
Numerical results show performance comparable to the state of the art in clear settings, while improvements of up to 5% in lane detection accuracy and 14% in false positive rate were obtained by the combined JESNet and lane construction module against a leading detection method trained on our rain-translated TuSimple dataset. The proposed method achieved good performance under several common scenes in rainy settings, and similarly good performance was observed in challenging conditions from the clear CULane dataset.
URI: https://hdl.handle.net/10356/173381
DOI: 10.32657/10356/173381
Schools: School of Electrical and Electronic Engineering 
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: embargo_20260131
Fulltext Availability: With Fulltext
Appears in Collections:EEE Theses

Files in This Item:
File: PhD_Thesis_Jan2024-Upload_Embargo_signed.pdf
Description: Thesis
Size: 49.22 MB
Format: Adobe PDF
Availability: Under embargo until Jan 31, 2026


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.