Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/62547
Title: Android smart phone based participatory sensing
Authors: Teo, Kok Hien
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
DRNTU::Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
Issue Date: 2015
Abstract: Global Positioning System (GPS) units are a regular navigational aid for many modern-day drivers. However, they are accurate only to about 3 to 15 metres and are prone to misdirections. The problem is exacerbated by the fact that entry into certain road sections in Singapore is chargeable. Fortunately, drivers frequently use their smartphones as dedicated GPS units, which gives rise to the idea of embedding a software engine, built atop the open-source OpenCV library, within a GPS system. The engine recognises and matches buildings captured through the smartphone camera against a pictorial database of buildings in the vicinity to accurately identify the exact location of the vehicle (and phone), thus enhancing the overall accuracy of the GPS system. Various methodologies exist to implement the image matching sub-system, ranging from histogram comparison to template matching to feature analysis. Feature analysis proved the best technique because it is invariant to both photometric and geometric changes. The combination (SURF, SURF, FLANNBASED) of feature detector, descriptor extractor and descriptor matcher proved highly accurate but extremely slow on a mobile device. (ORB, ORB, BRUTEFORCE_L1) was eventually chosen because it was computationally efficient, with a runtime of under 3 seconds per transaction and an accuracy drop of only 3% in daytime building recognition (and a slight overall improvement in accuracy once junctions and night-time recognition are accounted for). The experiment used an array of images covering buildings and junctions in day and night settings, and the methods were ranked on three criteria: (1) percentage of true positives; (2) percentage of true negatives; and (3) matching duration.
The image matching sub-system is packaged into an easy-to-use function call that takes two images as parameters: one from the smartphone’s camera, the other from the pictorial database. The entire database is stored in a list, and images are eliminated from the search space once their latitude and longitude fall outside a set proximity of the current location, improving overall efficiency.
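A minimal sketch of that proximity pre-filter, assuming a plain list of geotagged database entries and a haversine great-circle distance; the entry type, function names and 200-metre radius are illustrative assumptions, not values from the report:

```python
import math
from dataclasses import dataclass

@dataclass
class DbImage:
    """One geotagged entry in the pictorial building database (illustrative)."""
    path: str
    lat: float
    lon: float

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def filter_by_proximity(db, cur_lat, cur_lon, radius_m=200.0):
    """Keep only database images within radius_m of the current GPS fix."""
    return [img for img in db
            if haversine_m(img.lat, img.lon, cur_lat, cur_lon) <= radius_m]
```

Only the images that survive this filter are passed to the (much more expensive) feature-matching step, which keeps the per-transaction runtime bounded as the database grows.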
URI: http://hdl.handle.net/10356/62547
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Teo Kok Hien - Report on Android Smart Phone based Participatory Sensing.pdf
Description: Main Article
Size: 2.53 MB
Format: Adobe PDF
Access: Restricted

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.