Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/40797
Title: Implementation of boundary detection for autonomous rescue robot
Authors: Maruvanda, Aiyappa Chengappa.
Keywords: DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation::Robotics
Issue Date: 2010
Abstract: In the aftermath of a natural disaster, urban calamity, or explosion, the site is likely to be unsafe, unreachable, and strewn with debris and rubble. Such an area poses a threat to all rescue personnel who enter in search of survivors. A rescue robot dedicated to exploring such territory would reduce personnel requirements and fatigue while being able to reach otherwise inaccessible areas. It would also allow rescue personnel to focus their efforts on specific areas marked by the robot rather than search the entire site. An autonomous rescue robot in unfamiliar terrain must be able to maintain its track on site and remain aware of its proximity to the boundary edges; a boundary detection program is therefore important for such a robot. This report explores and implements a boundary detection program for an autonomous robot on the assumption that a boundary is a colored line, using several image processing techniques: color, edge, line, and shape detection. A comparative study is carried out to select the most appropriate methods for color, edge, and line detection, followed by a brief introduction to these methods explaining their theory and operation. The implemented algorithm analyzes video frames captured by a single Firefly camera: each frame is passed through color detection, line detection, and edge and shape detection stages, after which the program is capable of tracking the boundary line. The report presents and discusses the results of these image analysis steps while evaluating the accuracy, performance, efficiency, and robustness of the proposed algorithm.
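The color-then-line pipeline described in the abstract can be illustrated with a minimal sketch. This is not the report's actual code; the function name, the synthetic test frame, and the RGB threshold values are all assumptions chosen for illustration, with a least-squares fit standing in for the report's line detection stage:

```python
import numpy as np

def detect_boundary(img, lo=(150, 0, 0), hi=(255, 80, 80)):
    """Hypothetical sketch: find a colored boundary line in an HxWx3
    uint8 image by RGB thresholding (color detection stage), then fit
    a straight line y = m*x + c to the detected pixels (line stage).
    Returns (m, c), or None if too few boundary pixels are found."""
    # Color detection: keep pixels whose channels all fall in [lo, hi].
    mask = np.all((img >= lo) & (img <= hi), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size < 2:
        return None
    # Line detection stand-in: least-squares fit through boundary pixels.
    m, c = np.polyfit(xs, ys, 1)
    return m, c

# Synthetic test frame: black background with a red diagonal "boundary".
frame = np.zeros((100, 100, 3), dtype=np.uint8)
for x in range(100):
    frame[x, x] = (200, 0, 0)  # red pixel at row x, column x

fit = detect_boundary(frame)  # slope near 1, intercept near 0
```

A real implementation along the report's lines would operate on camera frames and use edge detection (e.g. Canny) with a Hough transform rather than a direct polynomial fit, but the two-stage structure — isolate pixels by color, then extract a line — is the same.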
URI: http://hdl.handle.net/10356/40797
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Student Reports (FYP/IA/PA/PI)

Files in This Item:
  File: eA4124-091.pdf (Restricted Access)
  Size: 1.8 MB
  Format: Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.