Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/139881
Title: Robustness to training disturbances in SpikeProp Learning
Authors: Shrestha, Sumit Bam
Song, Qing
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2017
Source: Shrestha, S. B., & Song, Q. (2018). Robustness to training disturbances in SpikeProp Learning. IEEE Transactions on Neural Networks and Learning Systems, 29(7), 3126-3139. doi:10.1109/TNNLS.2017.2713125
Journal: IEEE Transactions on Neural Networks and Learning Systems
Abstract: Stability is a key issue when training spiking neural networks with SpikeProp. The inherent nonlinearity of spiking neurons means that the learning manifold changes abruptly; therefore, the learning steps must be chosen carefully at every instance. Other sources of instability are the external disturbances that accompany the training samples, as well as the internal disturbances that arise from modeling imperfection. The unstable learning scenario can be observed indirectly in the form of surges: sudden increases in the learning cost that are a common occurrence during SpikeProp training. Past research has shown that a proper learning step size is crucial to minimizing surges during the training process. To determine a proper learning step that avoids steep learning manifolds, we perform a weight convergence analysis of SpikeProp learning in the presence of disturbance signals. The weight convergence analysis is further extended to a robust stability analysis linked with the overall system error. This ensures boundedness of the total learning error under the minimal assumption of bounded disturbance signals. These analyses result in a learning rate normalization scheme, which is the key result of this paper. The performance of learning under this scheme has been compared with prevailing methods on different benchmark data sets, and the results show that it achieves stable learning reflected by minimal surges during learning, higher success across training instances, and faster learning as well.
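The abstract's learning-rate normalization can be illustrated with a generic NLMS-style sketch (a hypothetical illustration only; the paper's actual scheme is derived from its weight convergence analysis and is not reproduced here): dividing a base step size by a regularized squared gradient norm keeps each weight update bounded even where the learning manifold becomes steep.

```python
import math

def normalized_step(base_lr, grad, eps=1e-8):
    """NLMS-style learning-rate normalization (hypothetical sketch).

    The base step is divided by the squared gradient norm, so the
    effective update length stays bounded on steep regions of the
    error manifold. This is an illustrative stand-in, not the
    specific normalization derived in the paper.
    """
    sq_norm = sum(g * g for g in grad)
    return base_lr / (eps + sq_norm)

def update(weights, grad, base_lr=0.1):
    # Gradient step using the normalized learning rate; the update
    # magnitude shrinks automatically as the gradient grows.
    eta = normalized_step(base_lr, grad)
    return [w - eta * g for w, g in zip(weights, grad)]

w = [0.5, -0.3]
g = [100.0, 50.0]          # a steep-manifold gradient
w_new = update(w, g)
step_len = math.dist(w, w_new)
print(step_len)            # bounded well below base_lr despite the large gradient
```

With this kind of normalization, the step length is roughly `base_lr / ||grad||` for large gradients, which is the mechanism by which surge-inducing updates are damped.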
URI: https://hdl.handle.net/10356/139881
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2017.2713125
Rights: © 2017 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:EEE Journal Articles

SCOPUS™ Citations: 3 (updated on Sep 3, 2020)
Publons™ Citations: 2 (updated on Nov 26, 2020)
Page view(s): 14 (updated on Nov 26, 2020)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.