Title: Vision based control for mobile robot
Authors: Siah, Clarence Jun Da
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2015

Abstract:
With the advancement of technology, many well-known companies have come up with their own autonomous vehicle prototypes, which may revolutionize transportation in the near future. Indoor autonomous vehicles have applications such as surveillance, data collection and rescue missions that consequently improve our quality of life. Realizing these prototypes requires developing a mobile robot capable of safer and more accurate navigation. Previous studies have introduced different vision-based approaches for object tracking and obstacle avoidance during the navigation of mobile robots in an indoor environment. However, these approaches raise cost and complexity issues, as they require a significant amount of image processing time on a resource-constrained robot. This study investigated the integration of three capabilities during robot navigation: obstacle avoidance, object tracking and distance estimation. For an object to be tracked in space, its X/Y position can be determined by colour detection using a computer vision library, while its Z position, i.e. the distance of the object from the robot, can be calculated from the angle of depression of the camera facing the object of interest. Bumper sensors were used for obstacle avoidance and for the turning decisions of the mobile robot. The error of the estimated distance relative to the actual distance, together with the decision making of the robot, was then measured to evaluate the navigation performance. In the experiments, the bumper sensor performed well at evading obstacles under the suggested obstacle avoidance scheme.
On the other hand, the distance estimation error tended to grow over time, up to 13.81 cm, and caused some unpredictable errors, because distances falling between consecutive angle points cannot be captured by the mathematical model. In our application, a conservative way to overcome this issue was to constantly update the distance as the coloured ball reached points nearer to the mobile robot. The integration result based on the suggested solution was promising, with a distance error of approximately less than 5 cm. Finally, the findings show that a bumper sensor combined with a monocular camera performs well for obstacle avoidance and object tracking. This study can serve as a reference for computer scientists designing more efficient methods for safer and more accurate navigation.

URI: http://hdl.handle.net/10356/63477
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
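The report itself is restricted, but the angle-of-depression distance estimate described in the abstract follows from basic trigonometry: if the camera is mounted at a known height and tilted down by a known angle towards an object on the floor, the ground distance is height divided by the tangent of that angle. A minimal sketch, with illustrative parameter names and values not taken from the report:

```python
import math

def ground_distance(camera_height_cm: float, depression_deg: float) -> float:
    """Estimate the horizontal (Z) distance to an object on the floor,
    given the camera height and its angle of depression below horizontal.
    """
    theta = math.radians(depression_deg)
    # tan(theta) = camera_height / ground_distance, so rearrange:
    return camera_height_cm / math.tan(theta)

# Hypothetical example: camera 30 cm above the floor, tilted down 20 degrees
print(round(ground_distance(30.0, 20.0), 2))  # ≈ 82.42 cm
```

Because the camera angle is only known at discrete tilt positions, distances between consecutive angle points are interpolated poorly, which is consistent with the growing error the abstract reports and its remedy of re-estimating as the object nears the robot.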
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)