Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/162468
Full metadata record
DC Field | Value | Language
dc.contributor.author | Zhang, Yuhang | en_US
dc.contributor.author | Yu, Qing | en_US
dc.contributor.author | Low Kin Huat | en_US
dc.contributor.author | Lv, Chen | en_US
dc.date.accessioned | 2022-11-04T01:03:44Z | -
dc.date.available | 2022-11-04T01:03:44Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Zhang, Y., Yu, Q., Low Kin Huat & Lv, C. (2022). A self-supervised monocular depth estimation approach based on UAV aerial images. 2022 IEEE/AIAA 41st Digital Avionics Systems Conference (DASC). https://dx.doi.org/10.1109/DASC55683.2022.9925733 | en_US
dc.identifier.issn | 2155-7209 | -
dc.identifier.uri | https://hdl.handle.net/10356/162468 | -
dc.description.abstract | Unmanned Aerial Vehicles (UAVs) have gained increasing attention recently, and depth estimation is one of the essential tasks for their safe operation, especially for drones at low altitudes. Given the limitations on UAV size and payload, methods based on deep learning techniques have replaced traditional sensors as the mainstream approach for predicting per-pixel depth information. Because supervised depth estimation methods require a massive amount of depth ground truth as the supervisory signal, this article proposes an unsupervised framework that predicts the depth map from a sequence of monocular images. Our model resolves scale ambiguity by training the depth subnetwork jointly with the pose subnetwork. Moreover, we introduce a modified loss function that combines a weighted photometric loss with an edge-aware smoothness loss to optimize the training. The evaluation results are compared against the same model without the weighted loss and against other unsupervised monocular depth estimation models (Monodepth and Monodepth2). Our model outperforms the others, indicating its potential to enhance the capability of UAVs to estimate distance to the surrounding environment. | en_US
dc.description.sponsorship | Civil Aviation Authority of Singapore (CAAS) | en_US
dc.language.iso | en | en_US
dc.rights | © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/DASC55683.2022.9925733. | en_US
dc.subject | Engineering::Mechanical engineering | en_US
dc.title | A self-supervised monocular depth estimation approach based on UAV aerial images | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Mechanical and Aerospace Engineering | en_US
dc.contributor.conference | 2022 IEEE/AIAA 41st Digital Avionics Systems Conference (DASC) | en_US
dc.contributor.research | Air Traffic Management Research Institute | en_US
dc.identifier.doi | 10.1109/DASC55683.2022.9925733 | -
dc.description.version | Submitted/Accepted version | en_US
dc.subject.keywords | Unmanned Aerial Vehicles | en_US
dc.subject.keywords | Self-Supervised Learning | en_US
dc.subject.keywords | Monocular Depth Estimation | en_US
dc.subject.keywords | Aerial Images | en_US
dc.subject.keywords | Multi-Scale Upsampling | en_US
dc.citation.conferencelocation | Portsmouth, VA, USA | en_US
dc.description.acknowledgement | This research is supported by the National Research Foundation, Singapore, and the Civil Aviation Authority of Singapore, under the Aviation Transformation Programme. | en_US
item.grantfulltext | open | -
item.fulltext | With Fulltext | -
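
The abstract above describes a training objective that combines a weighted photometric loss with an edge-aware smoothness loss, but this record does not give the exact formulation. The sketch below (Python, assuming PyTorch) follows the common Monodepth2-style recipe purely as an illustration; the function names and the alpha and smooth_weight parameters are assumptions, not the authors' implementation.

# Illustrative sketch only: a standard SSIM + L1 photometric term and an
# image-gradient-weighted disparity smoothness term, as commonly used in
# Monodepth2-style self-supervised depth estimation. Weights are placeholders.
import torch
import torch.nn.functional as F


def photometric_loss(pred, target, alpha=0.85):
    """Weighted mix of SSIM and L1 between a warped source image and the target frame."""
    # Per-pixel L1 difference, averaged over colour channels
    l1 = (pred - target).abs().mean(1, keepdim=True)

    # Simplified SSIM computed with 3x3 average pooling
    mu_x = F.avg_pool2d(pred, 3, 1, 1)
    mu_y = F.avg_pool2d(target, 3, 1, 1)
    sigma_x = F.avg_pool2d(pred ** 2, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(pred * target, 3, 1, 1) - mu_x * mu_y
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    )
    ssim = torch.clamp((1 - ssim) / 2, 0, 1).mean(1, keepdim=True)

    # Weighted combination; alpha = 0.85 is the conventional choice, not the paper's value
    return alpha * ssim + (1 - alpha) * l1


def edge_aware_smoothness(disp, img, smooth_weight=1e-3):
    """Penalise disparity gradients, down-weighted where the image itself has edges."""
    disp = disp / (disp.mean(dim=(2, 3), keepdim=True) + 1e-7)  # mean-normalise disparity

    grad_disp_x = (disp[:, :, :, :-1] - disp[:, :, :, 1:]).abs()
    grad_disp_y = (disp[:, :, :-1, :] - disp[:, :, 1:, :]).abs()
    grad_img_x = (img[:, :, :, :-1] - img[:, :, :, 1:]).abs().mean(1, keepdim=True)
    grad_img_y = (img[:, :, :-1, :] - img[:, :, 1:, :]).abs().mean(1, keepdim=True)

    # Suppress the penalty across image edges so depth discontinuities are preserved
    grad_disp_x = grad_disp_x * torch.exp(-grad_img_x)
    grad_disp_y = grad_disp_y * torch.exp(-grad_img_y)
    return smooth_weight * (grad_disp_x.mean() + grad_disp_y.mean())

The exponential factor relaxes the smoothness penalty wherever the input image has strong gradients, which is what makes the term edge-aware; in a full pipeline the photometric term would be evaluated on source frames warped into the target view using the predicted depth and the pose subnetwork's output.
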
Appears in Collections: ATMRI Conference Papers
MAE Conference Papers
Files in This Item:
File | Size | Format
A Self-Supervised Monocular Depth Estimation Approach Based on UAV Aerial Images.pdf | 3.49 MB | Adobe PDF

Page view(s): 58 (updated on Feb 3, 2023)
Download(s): 35 (updated on Feb 3, 2023)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.