Title: Object recognition-based mnemonics mobile app for senior adults communication
Authors: Pang, Natalie; Sesagiri Raamkumar, Aravind
Keywords: DRNTU::Engineering::Computer science and engineering::Information systems::Information interfaces and presentation
Issue Date: 2015
Source: Pang, N., Foo, S., Raamkumar, A. S., Zhang, X., & Vu, S. (2015). Object recognition-based mnemonics mobile app for senior adults communication. 2015 6th International Conference on Computing, Communication and Networking Technologies (ICCCNT), 1-6.
Abstract: There has been an exponential increase in smartphone usage across the globe due to decreasing production costs and a competitive marketplace. Smartphones and mobile internet services have thrived together to create virtual connections between people. However, very few successful mobile apps have been developed to assist senior adults in their day-to-day activities. Our research center has been developing artifacts to help older adults. In this paper, we present the design and features of an Android-based mobile app developed as a mnemonics app. The app uses image recognition technology to trigger events: calling phone numbers, sending short message service (SMS) messages, and posting tweets. Caregivers and family members can map images of everyday target objects to events in the app. Seniors can then scan these objects with the app to automatically trigger the pre-defined events, saving time and effort.
URI: https://hdl.handle.net/10356/82350
DOI: 10.1109/icccnt.2015.7395164
Rights: © 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: http://dx.doi.org/10.1109/icccnt.2015.7395164.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: WKWSCI Conference Papers
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.