Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/171751
Title: DFBVS: deep feature-based visual servo
Authors: Adrian, Nicholas
Do, Van Thach
Pham, Quang-Cuong
Keywords: Engineering::Mechanical engineering
Issue Date: 2022
Source: Adrian, N., Do, V. T. & Pham, Q.-C. (2022). DFBVS: deep feature-based visual servo. IEEE 18th International Conference on Automation Science and Engineering (CASE 2022), 1783-1789. https://dx.doi.org/10.1109/CASE49997.2022.9926560
Conference: IEEE 18th International Conference on Automation Science and Engineering (CASE 2022)
Abstract: Classical Visual Servoing (VS) relies on handcrafted visual features, which limits its generalizability. Recently, a number of approaches, some based on Deep Neural Networks, have been proposed to overcome this limitation by directly comparing the entire target and current camera images. However, by getting rid of visual features altogether, those approaches require the target and current images to be essentially similar, which precludes generalization to unknown, cluttered scenes. Here we propose to perform VS based on visual features, as in classical VS approaches, but, contrary to the latter, we leverage recent breakthroughs in Deep Learning to automatically extract and match the visual features. By doing so, our approach enjoys the advantages of both worlds: (i) because it is based on visual features, our approach can steer the robot towards the object of interest even in the presence of significant distraction in the background; (ii) because the features are extracted and matched automatically, our approach easily generalizes to unseen objects and scenes. In addition, we propose to use a render engine to synthesize the target image, which offers a further level of generalization. We demonstrate these advantages in a robotic grasping task, where the robot steers, with high accuracy, towards the object to grasp, based solely on an image of the object rendered from the camera view corresponding to the desired robot grasping pose.
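Note: As a rough illustration of the classical image-based visual servoing (IBVS) control law that feature-based approaches such as the one in this paper build on, below is a minimal sketch in Python. It assumes matched feature points between the current and (rendered) target images are already available from some deep extractor/matcher; the paper's actual pipeline is not reproduced here, and all function and variable names are illustrative assumptions.

# Minimal IBVS step driven by matched feature points. Feature extraction
# and matching would come from a deep network in the paper's setting;
# here we simply take matched pixel coordinates as input.

import numpy as np

def interaction_matrix(x, y, Z):
    """Classical 2x6 interaction (image Jacobian) matrix for a point
    feature at normalized image coordinates (x, y) with depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x**2),  y],
        [0.0,     -1.0 / Z,  y / Z, 1 + y**2,  -x * y,      -x],
    ])

def ibvs_velocity(curr_pts, target_pts, depths, lam=0.5):
    """One IBVS control step: returns the 6-DOF camera velocity
    (vx, vy, vz, wx, wy, wz) that drives the matched current features
    toward their positions in the target image.

    curr_pts, target_pts : (N, 2) normalized image coordinates of
                           matched features in current / target images.
    depths               : (N,) estimated depths of the current features.
    lam                  : control gain.
    """
    # Stack one 2x6 interaction matrix per matched point.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(curr_pts, depths)])
    e = (curr_pts - target_pts).reshape(-1)   # stacked feature error
    # v = -lambda * L^+ * e (Moore-Penrose pseudo-inverse of stacked L)
    return -lam * np.linalg.pinv(L) @ e

# Toy usage: three matched points, slightly offset from their targets.
curr = np.array([[0.10, 0.05], [-0.12, 0.08], [0.02, -0.11]])
target = curr - 0.01                          # pretend small error
v = ibvs_velocity(curr, target, depths=np.array([0.8, 0.9, 1.0]))
print(v)                                      # camera twist command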
URI: https://hdl.handle.net/10356/171751
ISBN: 9781665490429
DOI: 10.1109/CASE49997.2022.9926560
Schools: School of Mechanical and Aerospace Engineering 
Research Centres: HP-NTU Digital Manufacturing Corporate Lab
Rights: © 2022 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: MAE Conference Papers

