Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/59049
Title: Intelligent vision based coordinated control for quadcopter
Authors: Hsu, Htet
Keywords: DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation::Control engineering
DRNTU::Engineering::Mechanical engineering::Motor vehicles
DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2014
Abstract: Within the past decade, quadcopters have been widely used in military operations, surveillance, search missions and many other areas. They have also become a focus of aerodynamics research institutes for testing flight control theory, autonomous navigation and more. In recent years, quadcopters have been manufactured at incredibly small sizes, known as nano quadcopters, and their potential for further applications has sparked wide interest. The challenge with quadcopters is to make them fly stably and hold a target position, and increasing attention is going to improving natural interaction with them. Hence the goals of this project are to (1) explore possible means to control a nano quadcopter, (2) implement a closed-loop control system for stable hovering, and (3) implement a vision-based system to control the quadcopter using gestures. In this project, we explored and implemented several means of controlling a nano quadcopter: manual control using a joystick such as an Xbox 360 or PlayStation controller, keyboard and mouse control, and gesture-based control using the Leap Motion controller and the Microsoft Kinect. The closed-loop control system was implemented with a PID controller whose input is the difference between the target position and the actual position of the nano quadcopter. The actual position was determined by tracking a red ball attached to the top of the quadcopter, applying image processing techniques to the RGB and depth images produced by the Microsoft Kinect. The OpenNI framework and NITE middleware allowed us to develop a skeleton tracking algorithm; gesture recognition was based on the skeleton joint data of the user's left hand. Visualization of the skeleton joint data was also made available.
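To make the closed-loop scheme concrete, below is a minimal Python sketch of a PID controller driven by the position error (target minus actual), as the abstract describes. The class, gain values and update signature are illustrative assumptions, not the thesis code.

    # Minimal PID sketch: the controller input is the difference between
    # the target position and the measured position, per the abstract.
    # Class name, gains and signature are illustrative assumptions.
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = None

        def update(self, target, actual, dt):
            error = target - actual              # input: target minus actual position
            self.integral += error * dt          # accumulate the integral term
            derivative = 0.0
            if self.prev_error is not None:
                derivative = (error - self.prev_error) / dt
            self.prev_error = error
            # output becomes a thrust/attitude correction for the quadcopter
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Example: one controller per axis; here, holding altitude z at 1.0 m
    altitude_pid = PID(kp=0.8, ki=0.1, kd=0.3)
    correction = altitude_pid.update(target=1.0, actual=0.92, dt=0.033)

Likewise, a hedged sketch of how the red ball might be located in the Kinect's RGB frame using colour thresholding. The use of OpenCV and the specific HSV thresholds are assumptions, and the depth-image step that would supply the third coordinate is omitted.

    import cv2

    # Hypothetical red-ball detector on an RGB (BGR) frame; thresholds
    # and pipeline are assumptions, not taken from the thesis.
    def find_ball(frame_bgr):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        # red wraps around the hue axis, so combine two ranges
        mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
               cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        (x, y), r = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
        return x, y, r   # image coordinates; the depth image gives z

In a setup like this, one PID instance per axis (x, y, z) would convert the tracked ball position into the hover corrections the abstract refers to.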
URI: http://hdl.handle.net/10356/59049
Schools: School of Computer Engineering 
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP[final].pdf
Description: Restricted Access
Size: 2.36 MB
Format: Adobe PDF

Page view(s): 453 (updated on Mar 21, 2025)
Download(s): 43 (updated on Mar 21, 2025)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.