Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/153942
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Dongkyu | en_US |
dc.contributor.author | Tay, Wee Peng | en_US |
dc.contributor.author | Kee, Seok-Cheol | en_US |
dc.date.accessioned | 2022-01-10T08:42:59Z | - |
dc.date.available | 2022-01-10T08:42:59Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Lee, D., Tay, W. P. & Kee, S. (2021). Birds eye view look-up table estimation with semantic segmentation. Applied Sciences, 11(17), 8047-. https://dx.doi.org/10.3390/app11178047 | en_US |
dc.identifier.issn | 2076-3417 | en_US |
dc.identifier.uri | https://hdl.handle.net/10356/153942 | - |
dc.description.abstract | In this work, we study the estimation of a look-up table (LUT) that converts a camera image plane to a birds eye view (BEV) plane using a single camera. Traditional camera pose estimation approaches impose high research and manufacturing costs on future autonomous vehicles and may require pre-configured infrastructure. This paper proposes a camera calibration system for autonomous driving that is low-cost and needs minimal infrastructure. We study a network that estimates the camera pose under urban road driving conditions from a single camera and outputs an image in the form of a LUT that converts the image into a BEV. We propose a network that predicts human-like poses from a single image. We collected synthetic data using a simulator, generated BEV images and LUTs as ground truth, and used them with the proposed network to train the pose estimation function. In the process, the network predicts the pose by interpreting semantic segmentation features, and its performance is improved by attaching a layer that handles the overall direction of the network. The network also outputs the camera angles (roll/pitch/yaw) in the 3D coordinate system so that the user can monitor learning. Since the network's output is a LUT, no additional computation is needed and real-time performance is improved. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartof | Applied Sciences | en_US |
dc.rights | © 2021 The Author(s). Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | en_US |
dc.subject | Engineering::Computer science and engineering | en_US |
dc.title | Birds eye view look-up table estimation with semantic segmentation | en_US |
dc.type | Journal Article | en |
dc.contributor.school | School of Electrical and Electronic Engineering | en_US |
dc.identifier.doi | 10.3390/app11178047 | - |
dc.description.version | Published version | en_US |
dc.identifier.scopus | 2-s2.0-85114263699 | - |
dc.identifier.issue | 17 | en_US |
dc.identifier.volume | 11 | en_US |
dc.identifier.spage | 8047 | en_US |
dc.subject.keywords | Birds Eye View | en_US |
dc.subject.keywords | Look-Up Table | en_US |
dc.description.acknowledgement | This research was supported by the MOTIE (Ministry of Trade, Industry, and Energy) in Korea, under the Fostering Global Talents for Innovative Growth Program (P0008751) supervised by the Korea Institute for Advancement of Technology (KIAT). This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the Grand Information Technology Research Center support program (IITP-2021-2020-0-01462) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation). | en_US |
item.grantfulltext | open | - |
item.fulltext | With Fulltext | - |
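The abstract above notes that because the network outputs a LUT directly, converting a camera frame to BEV needs no further computation beyond a table look-up. A minimal sketch of that final step is below; the LUT layout (each BEV pixel storing a `(row, col)` source coordinate, with negative entries marking pixels outside the camera's view) is an assumption for illustration, not the paper's exact format.

```python
import numpy as np

def apply_lut(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Warp a camera image to a BEV image with a per-pixel look-up table.

    image: source frame, shape (H, W) or (H, W, C).
    lut:   shape (H_bev, W_bev, 2), each entry holding the (row, col)
           source coordinate for that BEV pixel; entries outside the
           source frame produce black (zero) output pixels.
    """
    h, w = image.shape[:2]
    rows = np.clip(lut[..., 0], 0, h - 1)
    cols = np.clip(lut[..., 1], 0, w - 1)
    # Fancy indexing does the whole warp in one vectorized gather.
    bev = image[rows, cols]
    # Zero out BEV pixels whose source coordinate fell off the frame.
    invalid = (
        (lut[..., 0] < 0) | (lut[..., 0] >= h)
        | (lut[..., 1] < 0) | (lut[..., 1] >= w)
    )
    bev[invalid] = 0
    return bev
```

Because the gather is a single indexing operation, the cost per frame is independent of how the LUT was estimated, which is the source of the real-time advantage the abstract claims.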
Appears in Collections: | EEE Journal Articles |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
applsci-11-08047-v2.pdf | | 39.9 MB | Adobe PDF | View/Open |
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.