Please use this identifier to cite or link to this item:
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wang, Ming | en_US
dc.contributor.author | Yan, Zheng | en_US
dc.contributor.author | Wang, Ting | en_US
dc.contributor.author | Cai, Pingqiang | en_US
dc.contributor.author | Gao, Siyu | en_US
dc.contributor.author | Zeng, Yi | en_US
dc.contributor.author | Wan, Changjin | en_US
dc.contributor.author | Wang, Hong | en_US
dc.contributor.author | Pan, Liang | en_US
dc.contributor.author | Yu, Jiancan | en_US
dc.contributor.author | Pan, Shaowu | en_US
dc.contributor.author | He, Ke | en_US
dc.contributor.author | Lu, Jie | en_US
dc.contributor.author | Chen, Xiaodong | en_US
dc.identifier.citation | Wang, M., Yan, Z., Wang, T., Cai, P., Gao, S., Zeng, Y., Wan, C., Wang, H., Pan, L., Yu, J., Pan, S., He, K., Lu, J. & Chen, X. (2020). Gesture recognition using a bioinspired learning architecture that integrates visual data with somatosensory data from stretchable sensors. Nature Electronics, 3, 563-570. | en_US
dc.description.abstract | Gesture recognition using machine learning methods is valuable in the development of advanced cybernetics, robotics, and healthcare systems, and typically relies on images or videos. To improve recognition accuracy, such visual data can be fused with data from other sensors, but this approach is limited by the quality of the sensor data and the incompatibility of the datasets. Here, we report a bioinspired data fusion architecture that can perform human gesture recognition by integrating visual data with somatosensory data from skin-like stretchable strain sensors. The learning architecture uses a convolutional neural network for visual processing, and then implements a sparse neural network for sensor data fusion and recognition. Our approach can achieve a recognition accuracy of 100%, and maintain recognition accuracy with noisy, under- or over-exposed images. We also show that our architecture can be implemented for robot navigation using hand gestures with a small error, even in the dark. | en_US
dc.description.sponsorship | Ministry of Education (MOE) | en_US
dc.description.sponsorship | National Research Foundation (NRF) | en_US
dc.relation.ispartof | Nature Electronics | en_US
dc.rights | © 2020 Macmillan Publishers Limited, part of Springer Nature. All rights reserved. This paper was published in Nature Electronics and is made available with permission of Macmillan Publishers Limited, part of Springer Nature. | en_US
dc.title | Gesture recognition using a bioinspired learning architecture that integrates visual data with somatosensory data from stretchable sensors | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Materials Science and Engineering | en_US
dc.contributor.research | Innovative Centre for Flexible Devices | en_US
dc.description.version | Accepted version | en_US
dc.subject.keywords | Convolutional Neural Networks | en_US
dc.subject.keywords | Network architecture | en_US
dc.description.acknowledgement | The project was partially supported by the National Research Foundation (NRF), Prime Minister's Office, Singapore, under its NRF Investigatorship (NRF2016NRF-NRF1001-21), the Singapore Ministry of Education (MOE2015-T2-2-60), the Advanced Manufacturing and Engineering (AME) Programmatic Grant (No. A19A1b0045), and the Australian Research Council (ARC) under Discovery Grant DP200100700. The authors thank all the volunteers who helped collect data, and Dr. Ai Lin Chun for critically reading and editing the manuscript. | en_US
item.fulltext | With Fulltext | -
Appears in Collections:MSE Journal Articles
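The abstract above describes a two-stage architecture: a convolutional neural network extracts features from images, and a sparse neural network then fuses those visual features with somatosensory data from stretchable strain sensors to classify gestures. As an illustration only, here is a minimal NumPy sketch of that late-fusion idea. All names (`conv_features`, `fuse_and_classify`), layer sizes, the number of strain channels, and the 10% sparse connectivity are hypothetical; the paper's actual network design is not specified in this record.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(image):
    """Toy stand-in for a CNN visual front end: one bank of 3x3
    filters + ReLU + global average pooling. (Hypothetical sizes.)"""
    k = rng.standard_normal((8, 3, 3))           # 8 random 3x3 filters
    h, w = image.shape
    feats = np.zeros(8)
    for f in range(8):
        acc = 0.0
        for i in range(h - 2):
            for j in range(w - 2):
                # convolution response at (i, j), rectified
                acc += max(0.0, float(np.sum(image[i:i+3, j:j+3] * k[f])))
        feats[f] = acc / ((h - 2) * (w - 2))     # global average pool
    return feats

def fuse_and_classify(visual_feats, strain_feats, n_classes=10):
    """Late fusion: concatenate visual and somatosensory features,
    then apply one sparse linear layer (~90% of weights zeroed)
    followed by a softmax over gesture classes."""
    x = np.concatenate([visual_feats, strain_feats])
    W = rng.standard_normal((n_classes, x.size))
    W *= rng.random(W.shape) < 0.1               # sparse connectivity mask
    logits = W @ x
    e = np.exp(logits - logits.max())            # numerically stable softmax
    return e / e.sum()

image = rng.random((16, 16))                     # dummy grayscale frame
strain = rng.random(5)                           # 5 dummy strain-sensor channels
probs = fuse_and_classify(conv_features(image), strain)
print(probs.shape)
```

The design choice sketched here is fusing the two modalities at the feature level rather than averaging two independent classifiers, which is one common reading of "sensor data fusion" in the abstract.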


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.