dc.contributor.author      Chen, Lei    en_US
dc.date.accessioned        2008-09-17T09:37:53Z
dc.date.accessioned        2017-07-23T08:31:25Z
dc.date.available          2008-09-17T09:37:53Z
dc.date.available          2017-07-23T08:31:25Z
dc.date.copyright          2007    en_US
dc.date.issued             2007
dc.identifier.citation     Chen, L. (2007). Incremental extreme learning machine. Doctoral thesis, Nanyang Technological University, Singapore.
dc.identifier.uri          http://hdl.handle.net/10356/3804
dc.description.abstract    This new theory shows that, for SLFNs to work as universal approximators, one may simply choose the input-to-hidden nodes at random; only the output weights linking the hidden layer and the output layer then need to be adjusted. In such SLFN implementations, the activation functions of additive nodes can be any bounded nonconstant piecewise continuous functions, and the activation functions of RBF nodes can be any integrable piecewise continuous functions. We propose two incremental algorithms: 1) the incremental extreme learning machine (I-ELM) and 2) the convex I-ELM (CI-ELM). (A minimal sketch of the incremental scheme appears after this record.)    en_US
dc.rights                  Nanyang Technological University    en_US
dc.subject                 DRNTU::Engineering::Electrical and electronic engineering::Computer hardware, software and systems
dc.title                   Incremental extreme learning machine    en_US
dc.type                    Thesis    en_US
dc.contributor.school      School of Electrical and Electronic Engineering    en_US
dc.contributor.supervisor  Huang Guangbin    en_US
dc.description.degree      DOCTOR OF PHILOSOPHY (EEE)    en_US
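
The sketch below illustrates the incremental idea summarized in the abstract: hidden nodes with randomly generated parameters are added one at a time, and only the output weight of the newly added node is computed from the current residual error. This is a minimal, illustrative reading of the basic I-ELM update, not the thesis implementation; the sigmoid activation, stopping criterion, and all function and variable names are assumptions made for this example.

import numpy as np

def i_elm(X, y, max_nodes=200, tol=1e-3, rng=np.random.default_rng(0)):
    """Grow an SLFN with additive sigmoid hidden nodes, one node at a time."""
    n_samples, n_features = X.shape
    residual = y.astype(float).copy()   # e_0 = target function values
    nodes = []                          # list of (input weights a, bias b, output weight beta)

    for _ in range(max_nodes):
        # Randomly generate the new hidden node's input weights and bias.
        a = rng.uniform(-1.0, 1.0, size=n_features)
        b = rng.uniform(-1.0, 1.0)
        h = 1.0 / (1.0 + np.exp(-(X @ a + b)))   # new hidden node's output vector

        # Output weight of the new node, chosen to best reduce the current residual.
        beta = (residual @ h) / (h @ h)

        residual -= beta * h            # e_n = e_{n-1} - beta_n * h_n
        nodes.append((a, b, beta))

        if np.linalg.norm(residual) / np.sqrt(n_samples) < tol:
            break
    return nodes

def predict(nodes, X):
    out = np.zeros(X.shape[0])
    for a, b, beta in nodes:
        out += beta / (1.0 + np.exp(-(X @ a + b)))
    return out

# Toy usage: approximate a 1-D function and report the fit.
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
nodes = i_elm(X, y, max_nodes=500, tol=1e-2)
print(len(nodes), "hidden nodes, RMSE:",
      np.sqrt(np.mean((predict(nodes, X) - y) ** 2)))

CI-ELM, as described in the abstract, modifies how the contribution of each new node is combined with the existing network output; the per-node random generation of hidden parameters is the same.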


Files in this item

File                 Size     Format
EEE-THESES_163.pdf   891.2Kb  application/pdf
