Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/80702
Title: Deep Activity Recognition Models with Triaxial Accelerometers
Authors: Abu Alsheikh, Mohammad
Issue Date: 2016
Source: Abu Alsheikh, M., Selim, A., Niyato, D., Doyle, L., Lin, S., & Tan, H.-P. (2016). Deep activity recognition models with triaxial accelerometers. The Workshops of the Thirtieth AAAI Conference on Artificial Intelligence, 8-13.
Abstract: Despite the widespread installation of accelerometers in almost all mobile phones and wearable devices, activity recognition using accelerometers remains immature due to the poor recognition accuracy of existing methods and the scarcity of labeled training data. We consider the problem of human activity recognition using triaxial accelerometers and deep learning paradigms. This paper shows that deep activity recognition models (a) provide better recognition accuracy of human activities, (b) avoid the expensive design of handcrafted features required by existing systems, and (c) utilize massive unlabeled acceleration samples for unsupervised feature extraction. Moreover, a hybrid approach of deep learning and hidden Markov models (DL-HMM) is presented for sequential activity recognition. This hybrid approach integrates the hierarchical representations of deep activity recognition models with the stochastic modeling of temporal sequences in the hidden Markov models. We show substantial recognition improvement on real-world datasets over state-of-the-art methods of human activity recognition using triaxial accelerometers.
URI: https://hdl.handle.net/10356/80702
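The DL-HMM idea summarized in the abstract — a deep model scoring each acceleration window, with an HMM modeling the temporal sequence of activities — can be sketched as Viterbi decoding over per-window classifier scores. The sketch below is hypothetical and not from the paper: the activity names, transition matrix, and posterior values are made-up illustration data, and the paper's actual deep model and emission formulation are not part of this record.

```python
import numpy as np

def viterbi(log_emissions, log_trans, log_prior):
    """Most likely activity sequence given per-window log scores.

    log_emissions: (T, S) per-window log scores from the deep model.
    log_trans:     (S, S) log activity-transition probabilities.
    log_prior:     (S,)   log initial activity probabilities.
    """
    T, S = log_emissions.shape
    delta = log_prior + log_emissions[0]      # best score ending in each state
    back = np.zeros((T, S), dtype=int)        # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans   # scores[from, to]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emissions[t]
    # Backtrack from the best final state.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two hypothetical activities: 0 = walking, 1 = standing ("sticky" transitions).
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
prior = np.array([0.5, 0.5])
# Made-up deep-model posteriors per window; window t=2 is a noisy outlier.
post = np.array([[0.9, 0.1], [0.8, 0.2], [0.4, 0.6], [0.85, 0.15]])

path = viterbi(np.log(post), np.log(trans), np.log(prior))
print(path)  # → [0, 0, 0, 0]: the HMM smooths the noisy middle window
```

Frame-by-frame argmax would flip window t=2 to "standing"; the sticky transition matrix makes the decoder keep the temporally consistent "walking" label, which is the kind of sequential smoothing the hybrid approach provides.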
Rights: © 2016 Association for the Advancement of Artificial Intelligence (AAAI). This is the author-created version of a work that has been peer reviewed and accepted for publication by The Workshops of the Thirtieth AAAI Conference on Artificial Intelligence, Association for the Advancement of Artificial Intelligence (AAAI). It incorporates referees' comments, but changes resulting from the publishing process, such as copyediting and structural formatting, may not be reflected in this document. The published version is available at: https://arxiv.org/abs/1511.04664
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Conference Papers
Updated on Feb 24, 2021
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.