Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/101260
Full metadata record
DC Field | Value | Language
dc.contributor.author | Suresh, Sundaram | en
dc.contributor.author | Savitha, R. | en
dc.contributor.author | Kim, H. J. | en
dc.date.accessioned | 2013-10-24T06:58:47Z | en
dc.date.accessioned | 2019-12-06T20:35:46Z | -
dc.date.available | 2013-10-24T06:58:47Z | en
dc.date.available | 2019-12-06T20:35:46Z | -
dc.date.copyright | 2013 | en
dc.date.issued | 2013 | en
dc.identifier.citation | Savitha, R., Suresh, S., & Kim, H. J. (2013). A meta-cognitive learning algorithm for an extreme learning machine classifier. Cognitive Computation, 6(2), 253-263. | en
dc.identifier.issn | 1866-9956 | en
dc.identifier.uri | https://hdl.handle.net/10356/101260 | -
dc.description.abstract | This paper presents an efficient fast learning classifier based on the Nelson and Narens model of human meta-cognition, the 'Meta-cognitive Extreme Learning Machine (McELM).' McELM has two components: a cognitive component and a meta-cognitive component. The cognitive component is a three-layered extreme learning machine (ELM) classifier whose hidden-layer neurons employ the q-Gaussian activation function, while the neurons in the input and output layers are linear. The meta-cognitive component has a self-regulatory learning mechanism that decides what-to-learn, when-to-learn, and how-to-learn. As the training samples are presented one by one, the meta-cognitive component receives monitory signals from the cognitive component and chooses a suitable learning strategy for each sample: it deletes the sample, uses it to add a new neuron, updates the output weights based on it, or reserves it for future use. Therefore, unlike in the conventional ELM, the architecture of McELM is not fixed a priori; instead, the network is built during training. When a neuron is added, McELM chooses its center based on the sample, and the width of the Gaussian function is chosen randomly. The output weights are estimated by a least-squares estimate based on the hinge-loss error function, which facilitates better prediction of posterior probabilities than the mean-square error and is therefore preferred for the McELM classifier. When the network parameters are updated, the output weights are re-estimated with a recursive least-squares estimate. The performance of McELM is evaluated on a set of benchmark classification problems from the UCI machine learning repository; the results highlight that meta-cognition in the ELM framework significantly enhances the decision-making ability of ELM. | en
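The abstract's what/when/how-to-learn decision and the q-Gaussian hidden-layer activation can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: the monitory signals (`error`, `novelty`), the threshold values, and the function names are all hypothetical; the paper derives its own self-regulated thresholds and parameterisation.

```python
import numpy as np

def q_gaussian(r, q=1.5):
    """q-Gaussian of a radial distance r (standard q-exponential form).

    Reduces to exp(-r**2) as q -> 1 (q != 1 assumed here). The exact
    parameterisation McELM uses is given in the paper.
    """
    base = 1.0 - (1.0 - q) * r ** 2
    return np.maximum(base, 0.0) ** (1.0 / (1.0 - q))

def metacognitive_strategy(error, novelty,
                           delete_thr=0.1, learn_thr=0.5, novelty_thr=0.5):
    """Illustrative version of the meta-cognitive strategy choice.

    error   : hypothetical prediction-error signal for the current sample
    novelty : hypothetical novelty signal (e.g. distance to nearest neuron)
    All thresholds are assumed constants, not the paper's values.
    """
    if error < delete_thr:                        # sample is already well learned
        return "delete"
    if error > learn_thr and novelty > novelty_thr:
        return "add_neuron"                       # significant and novel: grow
    if error > learn_thr:
        return "update_weights"                   # significant, known region: RLS update
    return "reserve"                              # postpone for a later pass
```

As in the abstract, each incoming sample triggers exactly one of the four strategies, so the network architecture grows only when a sample is both poorly predicted and novel.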
dc.language.iso | en | en
dc.relation.ispartofseries | Cognitive Computation | en
dc.subject | DRNTU::Engineering::Computer science and engineering | en
dc.title | A meta-cognitive learning algorithm for an extreme learning machine classifier | en
dc.type | Journal Article | en
dc.contributor.school | School of Computer Engineering | en
dc.identifier.doi | 10.1007/s12559-013-9223-2 | en
item.fulltext | No Fulltext | -
item.grantfulltext | none | -
Appears in Collections:SCSE Journal Articles

SCOPUS™ Citations: 62 (updated on Mar 22, 2024)

Web of Science™ Citations: 53 (updated on Oct 27, 2023)

Page view(s): 931 (updated on Mar 28, 2024)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.