Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/142133
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wang, Li | en_US
dc.contributor.author | Li, Ruifeng | en_US
dc.contributor.author | Sun, Jingwen | en_US
dc.contributor.author | Liu, Xingxing | en_US
dc.contributor.author | Zhao, Lijun | en_US
dc.contributor.author | Seah, Hock Soon | en_US
dc.contributor.author | Quah, Chee Kwang | en_US
dc.contributor.author | Tandianus, Budianto | en_US
dc.date.accessioned | 2020-06-16T05:21:41Z | -
dc.date.available | 2020-06-16T05:21:41Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | Wang, L., Li, R., Sun, J., Liu, X., Zhao, L., Seah, H. S., . . . Tandianus, B. (2019). Multi-view fusion-based 3D object detection for robot indoor scene perception. Sensors, 19(19), 4092-. doi:10.3390/s19194092 | en_US
dc.identifier.issn | 1424-8220 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/142133 | -
dc.description.abstract | To autonomously move and manipulate objects in cluttered indoor environments, a service robot requires 3D scene perception. Although 3D object detection can provide an object-level environmental description to fill this gap, a robot that detects continuously in a cluttered room encounters incomplete object observations, recurring detections of the same object, detection errors, and intersections between detected objects. To solve these problems, we propose a two-stage 3D object detection algorithm that fuses multiple views of 3D object point clouds in the first stage and eliminates unreasonable and intersecting detections in the second stage. For each view, the robot performs 2D object semantic segmentation and obtains the corresponding 3D object point clouds. An unsupervised segmentation method, Locally Convex Connected Patches (LCCP), is then used to accurately separate each object from the background. Manhattan Frame estimation is subsequently applied to compute the object's main orientation, from which its 3D bounding box is obtained. To handle objects detected across multiple views, we construct an object database and propose an object fusion criterion to maintain it automatically (see the sketch after this record); the same object observed in several views is thus fused, yielding a more accurate bounding box. Finally, we propose an object filtering approach based on prior knowledge to remove incorrect and intersecting objects from the database. Experiments on both the SceneNN dataset and a real indoor environment verify the stability and accuracy of 3D semantic segmentation and object bounding box detection with multi-view fusion. | en_US
dc.description.sponsorship | NRF (Natl Research Foundation, S'pore) | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | Sensors | en_US
dc.rights | © 2019 The Authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Multi-view fusion-based 3D object detection for robot indoor scene perception | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.identifier.doi | 10.3390/s19194092 | -
dc.description.version | Published version | en_US
dc.identifier.pmid | 31546674 | -
dc.identifier.scopus | 2-s2.0-85072586553 | -
dc.identifier.issue | 19 | en_US
dc.identifier.volume | 19 | en_US
dc.subject.keywords | 3D Object Detection | en_US
dc.subject.keywords | Multi-view Fusion | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
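
The abstract's multi-view stage hinges on an object fusion criterion: each new single-view detection is either merged with an existing entry in the object database or added as a new object. The paper's exact criterion is not reproduced in this record, so the following is a minimal sketch under assumed simplifications: axis-aligned bounding boxes, matching by 3D intersection-over-union within the same semantic label, and confidence-weighted merging. All identifiers here (Box3D, iou_3d, fuse_detection, IOU_THRESHOLD) are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of a multi-view object fusion criterion, not the
# paper's actual implementation: detections of the same object across
# views are matched by 3D IoU of axis-aligned boxes (same label) and
# merged by confidence-weighted averaging of the box corners.
from dataclasses import dataclass
import numpy as np


@dataclass
class Box3D:
    label: str        # semantic class from 2D segmentation
    lo: np.ndarray    # (3,) min corner of the axis-aligned box
    hi: np.ndarray    # (3,) max corner of the axis-aligned box
    conf: float       # detection confidence


def iou_3d(a: Box3D, b: Box3D) -> float:
    """Intersection-over-union of two axis-aligned 3D boxes."""
    inter_lo = np.maximum(a.lo, b.lo)
    inter_hi = np.minimum(a.hi, b.hi)
    inter = np.prod(np.clip(inter_hi - inter_lo, 0.0, None))
    union = np.prod(a.hi - a.lo) + np.prod(b.hi - b.lo) - inter
    return float(inter / union) if union > 0 else 0.0


IOU_THRESHOLD = 0.3  # assumed matching threshold


def fuse_detection(database: list, det: Box3D) -> None:
    """Fuse a new single-view detection into the object database.

    If the detection overlaps an existing same-label object above the
    threshold, merge the two boxes (confidence-weighted); otherwise
    insert it as a new object.
    """
    for obj in database:
        if obj.label == det.label and iou_3d(obj, det) >= IOU_THRESHOLD:
            w = obj.conf + det.conf
            obj.lo = (obj.conf * obj.lo + det.conf * det.lo) / w
            obj.hi = (obj.conf * obj.hi + det.conf * det.hi) / w
            obj.conf = max(obj.conf, det.conf)
            return
    database.append(det)


# Example: two views of the same chair collapse into one database entry.
db = []
fuse_detection(db, Box3D("chair", np.array([0.0, 0.0, 0.0]),
                         np.array([0.50, 0.5, 0.9]), 0.8))
fuse_detection(db, Box3D("chair", np.array([0.05, 0.0, 0.0]),
                         np.array([0.55, 0.5, 0.9]), 0.6))
assert len(db) == 1
```

The abstract's second stage (prior-knowledge filtering) could be layered on the same database, e.g., by discarding entries whose boxes still intersect a higher-confidence object after fusion; that step is likewise omitted from this sketch.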
Appears in Collections: SCSE Journal Articles
Files in This Item:
File | Description | Size | Format
Multi-view fusion-based 3D object detection for robot indoor scene perception.pdf | - | 6.88 MB | Adobe PDF
