Title: Convergence properties of recurrent neural network
Authors: Fei, Yong Gang
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2010
Abstract: Elman networks (ENs) can be viewed as a feed-forward (FF) neural network with an additional set of inputs from the context layer (feedback from the hidden layer). Therefore, a standard on-line (real-time) back propagation (BP) algorithm, instead of the off-line back propagation through time (BPTT) algorithm, can be applied to train ENs; this approach is usually called Elman back propagation (EBP) in discrete-time sequence prediction applications. However, the standard BP algorithm is not the most suitable one for training ENs. A small learning rate may improve training, but it can result in very slow convergence and poor generalization performance, while a large learning rate may make training unstable, with the weights diverging.
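The EBP scheme described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: it assumes tanh hidden units, a linear output, and squared-error loss, and all names, layer sizes, and the learning rate are illustrative. The key point is that the context (the previous hidden state) is treated as a fixed extra input, so the update is ordinary online BP with no gradient flowing back through time as in BPTT.

```python
import numpy as np

# Minimal EBP sketch (illustrative sizes and learning rate, not from the thesis).
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 5, 1
lr = 0.05  # the learning rate whose choice the abstract discusses

W_ih = rng.normal(scale=0.1, size=(n_hid, n_in))   # input  -> hidden
W_ch = rng.normal(scale=0.1, size=(n_hid, n_hid))  # context -> hidden
W_ho = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output

def ebp_step(x, context, target):
    """One forward pass plus one online BP weight update.

    EBP treats `context` (the previous hidden activations) as a fixed
    extra input, so gradients do not propagate back through time.
    """
    global W_ih, W_ch, W_ho
    h = np.tanh(W_ih @ x + W_ch @ context)
    y = W_ho @ h
    e = y - target                       # output error (squared-error loss)
    dh = (W_ho.T @ e) * (1.0 - h**2)     # backprop through tanh only
    W_ho -= lr * np.outer(e, h)
    W_ih -= lr * np.outer(dh, x)
    W_ch -= lr * np.outer(dh, context)   # context treated as constant input
    return y, h                          # h becomes the next context

context = np.zeros(n_hid)
x = rng.normal(size=n_in)
target = np.array([0.5])
for _ in range(50):
    y, context = ebp_step(x, context, target)
```

Because each update is local to one time step, a large `lr` can drive the recurrent weights `W_ch` to diverge across steps, which is the instability the abstract refers to.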
Description: 58 p.
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Theses

Files in This Item:
File: restricted access, 4.41 MB, Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.