Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/153942
Full metadata record
DC Field | Value | Language
dc.contributor.author | Lee, Dongkyu | en_US
dc.contributor.author | Tay, Wee Peng | en_US
dc.contributor.author | Kee, Seok-Cheol | en_US
dc.date.accessioned | 2022-01-10T08:42:59Z | -
dc.date.available | 2022-01-10T08:42:59Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Lee, D., Tay, W. P. & Kee, S. (2021). Birds eye view look-up table estimation with semantic segmentation. Applied Sciences, 11(17), 8047-. https://dx.doi.org/10.3390/app11178047 | en_US
dc.identifier.issn | 2076-3417 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/153942 | -
dc.description.abstract | In this work, we study the estimation of a look-up table (LUT) that converts a camera image plane to a bird's eye view (BEV) plane using a single camera. Traditional camera pose estimation approaches impose high research and manufacturing costs on future autonomous vehicles and may require pre-configured infrastructure. This paper proposes a camera calibration system for autonomous driving that is low cost and requires minimal infrastructure. We study a network that estimates the camera pose under urban road driving conditions from a single camera and outputs an LUT that converts the image into a BEV. The proposed network predicts human-like poses from a single image. We collected synthetic data using a simulator, generated BEV images and LUTs as ground truth, and used the proposed network and ground truth to train the pose estimation function. In the process, the network predicts the pose by interpreting semantic segmentation features, and its performance is increased by attaching a layer that handles the overall direction of the network. The network also outputs the camera angles (roll/pitch/yaw) in a 3D coordinate system so that the user can monitor learning. Since the network's output is an LUT, no additional calculation is needed and real-time performance is improved. | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | Applied Sciences | en_US
dc.rights | © 2021 The Author(s). Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Birds eye view look-up table estimation with semantic segmentation | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.identifier.doi | 10.3390/app11178047 | -
dc.description.version | Published version | en_US
dc.identifier.scopus | 2-s2.0-85114263699 | -
dc.identifier.issue | 17 | en_US
dc.identifier.volume | 11 | en_US
dc.identifier.spage | 8047 | en_US
dc.subject.keywords | Birds Eye View | en_US
dc.subject.keywords | Look-Up Table | en_US
dc.description.acknowledgement | This research was supported by the MOTIE (Ministry of Trade, Industry, and Energy) in Korea, under the Fostering Global Talents for Innovative Growth Program (P0008751) supervised by the Korea Institute for Advancement of Technology (KIAT). This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the Grand Information Technology Research Center support program (IITP-2021-2020-0-01462) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation). | en_US
item.grantfulltext | open | -
item.fulltext | With Fulltext | -
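The abstract above notes that because the network's output is a LUT, converting a camera frame to a BEV needs no further per-frame calculation: each BEV pixel is a single gather from the source image. A minimal NumPy sketch of applying such a precomputed LUT (the function name, LUT layout, and identity-LUT demo are illustrative assumptions, not the paper's code):

```python
import numpy as np

def apply_bev_lut(image, lut_u, lut_v):
    """Warp a camera image to a bird's eye view via a precomputed LUT.

    lut_u/lut_v hold, for every BEV pixel, the source column/row in the
    camera image plane (hypothetical layout; the paper's exact LUT
    format is not specified here).
    """
    # Clip indices so out-of-frame LUT entries read the image border.
    u = np.clip(lut_u, 0, image.shape[1] - 1)
    v = np.clip(lut_v, 0, image.shape[0] - 1)
    # One gather per pixel: no per-frame matrix math, which is the
    # source of the real-time benefit the abstract claims.
    return image[v, u]

# Tiny demo: with an identity LUT, the "BEV" reproduces the input.
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
vv, uu = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
bev = apply_bev_lut(img, uu, vv)
assert np.array_equal(bev, img)
```

In practice the LUT would encode the full homography implied by the estimated roll/pitch/yaw, so the same gather also handles a genuinely warped view.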
Appears in Collections:EEE Journal Articles
Files in This Item:
File | Description | Size | Format
applsci-11-08047-v2.pdf | | 39.9 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.