Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/171751
Full metadata record
DC Field | Value | Language
dc.contributor.author | Adrian, Nicholas | en_US
dc.contributor.author | Do, Van Thach | en_US
dc.contributor.author | Pham, Quang-Cuong | en_US
dc.date.accessioned | 2023-11-07T01:57:00Z | -
dc.date.available | 2023-11-07T01:57:00Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | Adrian, N., Do, V. T. & Pham, Q. (2022). DFBVS: deep feature-based visual servo. IEEE 18th International Conference on Automation Science and Engineering (CASE 2022), 1783-1789. https://dx.doi.org/10.1109/CASE49997.2022.9926560 | en_US
dc.identifier.isbn | 9781665490429 | -
dc.identifier.uri | https://hdl.handle.net/10356/171751 | -
dc.description.abstract | Classical Visual Servoing (VS) relies on handcrafted visual features, which limit their generalizability. Recently, a number of approaches, some based on Deep Neural Networks, have been proposed to overcome this limitation by comparing directly the entire target and current camera images. However, by getting rid of the visual features altogether, those approaches require the target and current images to be essentially similar, which precludes the generalization to unknown, cluttered, scenes. Here we propose to perform VS based on visual features as in classical VS approaches but, contrary to the latter, we leverage recent breakthroughs in Deep Learning to automatically extract and match the visual features. By doing so, our approach enjoys the advantages from both worlds: (i) because our approach is based on visual features, it is able to steer the robot towards the object of interest even in presence of significant distraction in the background; (ii) because the features are automatically extracted and matched, our approach can easily and automatically generalize to unseen objects and scenes. In addition, we propose to use a render engine to synthesize the target image, which offers a further level of generalization. We demonstrate these advantages in a robotic grasping task, where the robot is able to steer, with high accuracy, towards the object to grasp, based simply on an image of the object rendered from the camera view corresponding to the desired robot grasping pose. | en_US
dc.language.iso | en | en_US
dc.rights | © 2022 IEEE. All rights reserved. | en_US
dc.subject | Engineering::Mechanical engineering | en_US
dc.title | DFBVS: deep feature-based visual servo | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Mechanical and Aerospace Engineering | en_US
dc.contributor.conference | IEEE 18th International Conference on Automation Science and Engineering (CASE 2022) | en_US
dc.contributor.research | HP-NTU Digital Manufacturing Corporate Lab | en_US
dc.identifier.doi | 10.1109/CASE49997.2022.9926560 | -
dc.identifier.scopus | 2-s2.0-85141708941 | -
dc.identifier.spage | 1783 | en_US
dc.identifier.epage | 1789 | en_US
dc.subject.keywords | Deep Learning | en_US
dc.subject.keywords | Visualization | en_US
dc.citation.conferencelocation | Mexico City, Mexico | en_US
dc.description.acknowledgement | This study is supported under the RIE2020 Industry Alignment Fund Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner, HP Inc., through the HP-NTU Digital Manufacturing Corporate Lab. | en_US
item.grantfulltext | none | -
item.fulltext | No Fulltext | -
Appears in Collections: MAE Conference Papers
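The abstract describes steering the robot with a feature-based visual servo loop, where the feature correspondences between the current camera image and a rendered target image are obtained by deep feature extraction and matching rather than handcrafted detectors. As a point of reference only (not the authors' code), below is a minimal sketch of the standard image-based visual servoing (IBVS) control law, v = -λ L⁺ (s − s*), that such matched features can drive; the function names, the gain value, and the assumption that per-feature depth estimates are available are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,          -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y * y,    -x * y,         -x],
    ])

def ibvs_velocity(s, s_star, depths, lam=0.5):
    """Classical IBVS law v = -lam * pinv(L) @ (s - s*).

    s, s_star : (N, 2) arrays of matched normalized feature coordinates
                in the current and target images.
    depths    : (N,) estimated depths of the features in the current camera frame.
    Returns a 6-vector camera velocity command (vx, vy, vz, wx, wy, wz).
    """
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(s, depths)])
    e = (np.asarray(s) - np.asarray(s_star)).reshape(-1)
    return -lam * np.linalg.pinv(L) @ e
```

In the setting described by the abstract, the matched pairs (s, s*) would come from a learned feature extractor and matcher applied to the live camera image and a target image synthesized by a render engine at the desired grasping pose.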

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.