Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/152915
Title: Optogenetics inspired transition metal dichalcogenide neuristors for in-memory deep recurrent neural networks
Authors: John, Rohit Abraham; Acharya, Jyotibdha; Zhu, Chao; Surendran, Abhijith; Bose, Sumon Kumar; Chaturvedi, Apoorva; Tiwari, Nidhi; Gao, Yang; He, Yongmin; Zhang, Keke K.; Xu, Manzhang; Leong, Wei Lin; Liu, Zheng; Basu, Arindam; Mathews, Nripan
Keywords: Engineering::Materials; Engineering::Electrical and electronic engineering
Issue Date: 2020
Source: John, R. A., Acharya, J., Zhu, C., Surendran, A., Bose, S. K., Chaturvedi, A., Tiwari, N., Gao, Y., He, Y., Zhang, K. K., Xu, M., Leong, W. L., Liu, Z., Basu, A. & Mathews, N. (2020). Optogenetics inspired transition metal dichalcogenide neuristors for in-memory deep recurrent neural networks. Nature Communications, 11, 3211. https://dx.doi.org/10.1038/s41467-020-16985-0
Journal: Nature Communications
Abstract: Shallow feed-forward networks are incapable of addressing complex tasks, such as natural language processing, that require learning of temporal signals. To address these requirements, we need deep neuromorphic architectures with recurrent connections, such as deep recurrent neural networks. However, training such networks demands very high weight precision, excellent conductance linearity, and low write noise, requirements not satisfied by current memristive implementations. Inspired by optogenetics, here we report a neuromorphic computing platform comprising photo-excitable neuristors capable of in-memory computations across 980 addressable states with a high signal-to-noise ratio of 77. The large linear dynamic range, low write noise, and selective excitability allow high-fidelity opto-electronic transfer of weights with a two-shot write scheme, while electrical in-memory inference provides energy efficiency. This method enables implementing a memristive deep recurrent neural network with twelve trainable layers and more than a million parameters to recognize spoken commands with >90% accuracy.
URI: https://hdl.handle.net/10356/152915
ISSN: 2041-1723
DOI: 10.1038/s41467-020-16985-0
DOI (Related Dataset): 10.21979/N9/SQ7XOF
Rights: © 2020 The Author(s).
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: EEE Journal Articles; ERI@N Journal Articles; IGS Journal Articles; MSE Journal Articles
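The abstract describes transferring trained network weights onto devices with a finite set of addressable conductance states (980 here, roughly log2(980) ≈ 9.9 equivalent bits of resolution). A minimal sketch of such a weight-to-state mapping, assuming a simple uniform linear quantizer over the weight range (the mapping and parameters below are illustrative assumptions, not the paper's actual write scheme):

```python
import numpy as np

def quantize_to_states(weights, n_states=980):
    """Map continuous weights onto n_states evenly spaced levels
    spanning the weight range (illustrative uniform quantizer)."""
    w_min, w_max = weights.min(), weights.max()
    step = (w_max - w_min) / (n_states - 1)
    levels = np.round((weights - w_min) / step)  # integer state index 0..n_states-1
    return w_min + levels * step

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)          # stand-in for trained weights
wq = quantize_to_states(w)

# Quantization error of a uniform quantizer is bounded by half a step.
step = (w.max() - w.min()) / 979
print(np.abs(w - wq).max() <= step / 2 + 1e-12)   # True
```

With many addressable states and low write noise, this per-weight error stays small, which is why the paper's large linear dynamic range matters for transferring a million-parameter network without retraining.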
Files in This Item:
File | Description | Size | Format
---|---|---|---
s41467-020-16985-0.pdf | | 2.25 MB | Adobe PDF
SCOPUS™ Citations: 21 (updated on Feb 1, 2023)
Web of Science™ Citations: 20 (updated on Feb 1, 2023)
Page view(s): 173 (updated on Feb 2, 2023)
Download(s): 61 (updated on Feb 2, 2023)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.