Please use this identifier to cite or link to this item:
Title: Positioning with augmented reality
Authors: Su, Xin
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2019
Abstract: This report summarizes the past 40 weeks of a professional final year project on the contextual knowledge needed for augmented reality navigation and machine learning object detection. Fast, robust and accurate object detection is required for augmented reality navigation. The objective was to develop an augmented reality navigation application that helps the user navigate through the real world. Machine learning is an add-on that improves the accuracy of the user's location while using the app. The highlights include overlaying augmented map information over the real world with precise coordinates; a key limitation, however, is misalignment of the augmented map with the real world. The report also covers how to use pre-trained models for object detection to provide a better and more accurate user location, along with the trade-off between model size and time complexity on one hand and precision on the other. I will discuss recent frameworks such as the Convolutional Neural Network (CNN), You Only Look Once (YOLO) and the Region-based Convolutional Neural Network (R-CNN), which can be implemented through TensorFlow and Keras. As for the development of the augmented reality map service application, I will discuss the ARCore SDK, Mapbox SDK and WRLD SDK.
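The abstract mentions convolutional neural networks implemented through TensorFlow and Keras. As a minimal, self-contained illustration of the convolution operation at the core of a CNN (this is not code from the report; the `conv2d` helper, toy image, and edge-detecting kernel are all hypothetical examples), here is a NumPy sketch:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a CNN layer.
    Slides the kernel over the image and sums elementwise products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy example: a 5x5 image whose right half is bright (a vertical edge),
# filtered with a kernel that responds to left-to-right brightness increase.
image = np.zeros((5, 5))
image[:, 2:] = 1.0
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
response = conv2d(image, kernel)
print(response.shape)  # (3, 3) — valid convolution shrinks each dimension by kernel_size - 1
```

In a real detector such as YOLO or R-CNN, stacks of learned kernels like this one (rather than a hand-crafted edge filter) extract the features used to localize and classify objects.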
Schools: School of Electrical and Electronic Engineering 
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: SU XIN-FYP Final Report.pdf (Restricted Access), 2.01 MB, Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.