Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/178505
Title: | Traffic data augmentation with deep learning
Authors: | Xu, Qianxiong
Keywords: | Computer and Information Science
Issue Date: | 2024
Publisher: | Nanyang Technological University
Source: | Xu, Q. (2024). Traffic data augmentation with deep learning. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/178505
Abstract: | The transition to smart cities entails the integration of advanced technologies, data-driven solutions, and innovative urban planning strategies. By harnessing the potential of Information and Communication Technology, Machine Learning, and sensor networks, smart cities aim to provide residents with convenience in many aspects of daily life, including clothing, food, shelter, and transportation, thereby improving residents' quality of life. Among these aspects, smart transportation plays a fundamental role in shaping travel efficiency in urban spaces, e.g., providing navigation services, estimating travel time, and planning routes. These services rely heavily on complete road networks around the world, as well as on deployed road-aware sensors that sense the traffic situation in a timely manner. For instance, the routes suggested by map service providers (e.g., Google Maps) should align exactly with the physical roads. In addition, loop detectors are usually installed on roads to record vehicle flow and speed, from which the average flow and speed over a specific time period can be computed. These statistics are important for downstream tasks such as travel time estimation. Unfortunately, the aforementioned traffic data, including road data as well as per-road vehicle speed and flow data, are often outdated or incomplete. For road networks, some less popular or newly built areas may not yet be covered by providers.
In addition, changes in physical road infrastructure may not be promptly reflected in the extracted road networks, which affects services such as navigation. For traffic speeds and flows, which are usually captured by sensors installed on roads, the data is often incomplete for various reasons, including the high cost of sensors, sensor errors, and network transmission issues. Towards the goal of making the aforementioned data more complete, we present two works on extracting road networks from satellite images, and another two works on imputing incomplete road-aware traffic speed and flow data. The main contributions of this thesis are as follows. First, we observe that public platforms such as OpenStreetMap already host massive yet incomplete road networks. Therefore, instead of extracting new road networks from scratch, we propose to augment the existing incomplete road maps with the help of satellite image data. Specifically, we propose a two-branch model called Partial to Complete Network (P2CNet), whose two branches extract features from satellite images and incomplete road maps, respectively. P2CNet has two prominent components: a Gated Self-Attention Module (GSAM) and a Missing Part (MP) loss. GSAM fuses the features of the two data sources so that the model can treat the image features of known roads as references and search the remaining image features for similar road patterns. The MP loss encourages the model to focus less on already-known roads and more on the missing parts. Extensive experiments conducted on two public datasets validate the rationality of our new setting and model design. Second, different areas and cities differ in their visual characteristics, both in their road networks and in everything else (collectively treated as the background class).
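The MP loss of P2CNet described above is specified fully in the thesis; as a rough illustration of the idea alone — down-weighting the loss on pixels the incomplete input map already marks as road — a weighted binary cross-entropy might look like the following sketch (the function name, `known_weight` knob, and weighting scheme are illustrative assumptions, not the thesis's formulation):

```python
import numpy as np

def mp_loss(pred, target, known_mask, known_weight=0.1, eps=1e-7):
    """Binary cross-entropy with per-pixel weights.

    Pixels already marked as road in the incomplete input map
    (known_mask == 1) receive a smaller weight (hypothetical
    known_weight), pushing the model to concentrate on missing parts.
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    bce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    weights = np.where(known_mask == 1, known_weight, 1.0)
    return float((weights * bce).mean())
```

Under this sketch, the same prediction error costs less on a known-road pixel than on a missing-road pixel, which is the stated intent of the MP loss.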
For instance, (1) roads might be built with different materials; (2) road networks might have diverse shapes and structures in terms of curvature, width, etc.; and (3) the background views of urban and rural areas are completely different. If some of these characteristics are never seen by a road extraction model during training, the extracted road networks for such areas will be of poor quality. Making the learned model robust requires a sufficiently large and diverse training dataset. Unfortunately, annotating such (satellite image, mask) pairs requires domain expertise and an enormous amount of annotation time, which is infeasible in practice. We therefore propose to adopt a Few-Shot Segmentation (FSS) model, trained on existing large natural-image datasets, to extract roads in a new area using only 1 to 5 annotated samples of that area. Moreover, we design a novel Self-Calibrated Cross Attention Network (SCCAN) to handle the background (BG) mismatch issue and the foreground-background (FG-BG) entanglement issue of existing FSS methods. Third, once the road networks are built, the next step is to install sensors on them to sense traffic conditions; e.g., loop detectors are installed to record vehicle flow and speed data. Nevertheless, the data is often incomplete due to sensor errors, data transmission errors, etc., which heavily hinders downstream tasks. To overcome this, we perform data imputation to complete the missing values. Specifically, we design a novel framework, STCPA, to deal with the data sparsity issue, capture complex spatio-temporal correlations, and construct reliable pseudo labels for missing positions to boost performance. Extensive experiments show that the proposed modules outperform existing methods by considerable margins.
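For context on the FSS setting above: a common baseline that SCCAN-style methods improve upon pools a support image's masked features into a prototype and matches query pixels against it by cosine similarity. The sketch below shows only that generic baseline, not SCCAN itself; the function names and shapes are illustrative assumptions.

```python
import numpy as np

def masked_average_pool(features, mask):
    """Pool a support prototype: average C-dim features over mask==1 pixels."""
    fg = mask[..., None].astype(float)                   # (H, W, 1)
    return (features * fg).sum(axis=(0, 1)) / max(fg.sum(), 1.0)

def cosine_similarity_map(query_feats, prototype):
    """Per-pixel cosine similarity between query features and the prototype."""
    q = query_feats / (np.linalg.norm(query_feats, axis=-1, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    return q @ p                                         # (H, W) similarity map
```

Thresholding the similarity map yields a segmentation mask; the BG mismatch and FG-BG entanglement issues the abstract mentions arise because a single prototype mixes information in exactly this kind of matching.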
Fourth, for roads without sensors, traffic speed and flow values are missing at all times, and the task of imputing them is called Traffic Data Kriging. Naturally, the graph used during training is inevitably a sparser subgraph of the inference graph, because the target nodes are unknown in advance and are inserted only at test time. Existing kriging methods all train on this sparser subgraph, regardless of the risk of inconsistency between training and inference patterns. We propose a new strategy, called the Increment Training Strategy, which constructs virtual nodes and connects them to the training graph for better consistency between the updated training graph and the inference graph. In this way, the learned patterns can be safely applied to the test data, improving performance. Moreover, we tailor a Spatio-Temporal Graph Convolution (STGC) module to capture spatio-temporal correlations, and design a Reference-based Feature Fusion (RFF) module, as well as a Node-aware Cycle Regulation, to mitigate the overfitting observed in existing methods. Extensive experiments conducted on eight datasets demonstrate the superiority of our model.
URI: | https://hdl.handle.net/10356/178505
DOI: | 10.32657/10356/178505
Schools: | College of Computing and Data Science
Rights: | This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: | open
Fulltext Availability: | With Fulltext
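The Increment Training Strategy described in the abstract augments the training graph with virtual nodes so it better resembles the inference graph. A minimal sketch of that graph augmentation step is shown below; how virtual nodes are connected here (k random observed neighbors, a symmetric adjacency) is an assumption for illustration, not the thesis's criterion.

```python
import numpy as np

def add_virtual_nodes(adj, n_virtual, k=2, seed=0):
    """Append virtual nodes to a training adjacency matrix.

    Each virtual node is linked to k randomly chosen observed nodes,
    mimicking the unseen nodes inserted into the inference graph, so the
    training graph is no longer strictly sparser than the test one.
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    out = np.zeros((n + n_virtual, n + n_virtual))
    out[:n, :n] = adj                      # keep the observed subgraph intact
    for v in range(n, n + n_virtual):
        neighbors = rng.choice(n, size=min(k, n), replace=False)
        out[v, neighbors] = 1.0            # undirected links in both directions
        out[neighbors, v] = 1.0
    return out
```

A model trained on the augmented graph then already sees nodes whose values must be inferred purely from neighbors, which is the situation kriging faces at test time.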
Appears in Collections: | CCDS Theses |
Files in This Item:
File | Description | Size | Format
---|---|---|---
NTU_Thesis___Qianxiong.pdf | | 22.32 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.