Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/74841
Title: Deep object affordance learning for mobile robot applications
Authors: Teh, Han Wei
Keywords: DRNTU::Engineering
Issue Date: 2018
Abstract: Deep learning is a subset of artificial intelligence that uses artificial neural networks which can learn and make decisions on their own. Numerous deep learning frameworks have been developed for object detection and classification. However, for mobile robots to work autonomously or collaborate with humans in everyday workspaces, they should be able to recognize object affordances rather than merely identify objects. In the context of this study, affordances are defined as the possible functions of a tool's parts. Various methods for object affordance detection have been presented over the years, and most earlier works relied on hand-designed geometric features to localize and identify affordances. Deep learning has recently become the state of the art owing to its ability to handle large amounts of data and to learn deep features automatically. This project aimed to develop an affordance detection system with higher accuracy than existing ones. The proposed method applies a deep learning approach for semantic segmentation to detect object affordances from RGB images; the input images may encode multiple modalities so that the network learns features more effectively during training. SegNet was chosen for implementation because it is one of the most memory-efficient deep neural networks for segmentation. The dataset used in this project contains a diverse collection of everyday tools, such as knives and hammers, and the seven affordances associated with these tools' parts: grasp, cut, scoop, contain, pound, support, and wrap-grasp. The trained model was validated through inference. The affordance detection system achieved an accuracy of 78.4%, about 2% higher than an existing system trained with a different deep learning architecture.
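Note: since the full report is restricted, the sketch below only illustrates the kind of SegNet-style network the abstract describes, written in PyTorch (an assumption; the report does not name its framework). The defining SegNet trait, and the source of the memory efficiency the abstract cites, is that the decoder upsamples using the max-pooling indices saved by the encoder rather than learned deconvolutions. The class count of eight (the seven listed affordances plus a background class) and all identifiers are illustrative assumptions, not taken from the report.

# Minimal SegNet-style encoder-decoder sketch (hypothetical; not the report's code).
import torch
import torch.nn as nn

class ConvBNReLU(nn.Sequential):
    def __init__(self, c_in, c_out):
        super().__init__(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

class MiniSegNet(nn.Module):
    def __init__(self, num_classes=8):  # assumed: 7 affordances + background
        super().__init__()
        self.enc1 = ConvBNReLU(3, 64)
        self.enc2 = ConvBNReLU(64, 128)
        # return_indices=True saves the argmax locations for the decoder
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.dec2 = ConvBNReLU(128, 64)
        self.dec1 = nn.Conv2d(64, num_classes, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.enc1(x)
        x, idx1 = self.pool(x)          # keep pooling indices (SegNet's trick)
        x = self.enc2(x)
        x, idx2 = self.pool(x)
        x = self.unpool(x, idx2)        # sparse upsampling via saved indices
        x = self.dec2(x)
        x = self.unpool(x, idx1)
        return self.dec1(x)             # per-pixel class logits

if __name__ == "__main__":
    model = MiniSegNet()
    rgb = torch.randn(1, 3, 224, 224)   # dummy RGB input
    logits = model(rgb)                 # shape (1, 8, 224, 224)
    pred = logits.argmax(dim=1)         # per-pixel affordance labels
    print(pred.shape)                   # torch.Size([1, 224, 224])

Because unpooling reuses stored indices instead of storing full feature maps or learning upsampling filters, the decoder adds little memory overhead, which is the property the abstract highlights when justifying the choice of SegNet.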
URI: http://hdl.handle.net/10356/74841
Schools: School of Electrical and Electronic Engineering 
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP final.pdf (Restricted Access)
Size: 3.58 MB
Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.