Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/98222
Title: Self-reorganizing TSK fuzzy inference system with BCM theory of meta-plasticity
Authors: Jacob, Biju Jaseph; Cheu, Eng Yeow
Keywords: DRNTU::Engineering::Computer science and engineering
Issue Date: 2012
Source: Jacob, B. J., Cheu, E. Y., Tan, J., & Quek, C. (2012). Self-reorganizing TSK fuzzy inference system with BCM theory of meta-plasticity. The 2012 International Joint Conference on Neural Networks (IJCNN).
Abstract: The use of online learning techniques in neuro-fuzzy systems (NFS) to address system variance has become more prevalent in recent times. Because many external factors affect time-variant datasets, these datasets tend to experience changes in their patterns. While small changes (“drifts”) can be handled by traditional self-organizing techniques, major changes (“shifts”) cannot. Thus, there is a growing need for these systems to be able to self-reorganize their structures to adapt to major changes in data patterns. Hebb's theory of learning in NFSs proposed that synaptic strengths could be determined by a simple linear relation between the pre- and post-synaptic signals. However, this theory results in unidirectional growth of synaptic strengths and destabilizes the model. The Bienenstock-Cooper-Munro (BCM) theory of learning resolves these problems by incorporating both synaptic potentiation (association, or Hebbian) and depression (dissociation, or anti-Hebbian), which is useful for time-variant data computations. There are two popular methods of fuzzy rule representation: the Mamdani and the Takagi-Sugeno-Kang (TSK) models. The Mamdani model focuses on interpretability at the expense of accuracy; rules are created by associating an input fuzzy region with an output fuzzy region. The TSK model, in contrast, associates an input fuzzy region with a linear function (plane), making it more accurate than the Mamdani model. Current TSK models such as SAFIS, eTS, and DENFIS attempt to strike a balance between the accuracy and interpretability of the model. However, most of these models use offline learning algorithms and require multiple passes over the data samples. Furthermore, the models that do use online learning mainly employ Hebb's theory of incremental learning. This paper proposes a neuro-fuzzy architecture that uses the BCM theory of online learning with extensive self-reorganizing capabilities. It also uses a first-order TSK model for knowledge representation, which allows for accurate output calculation.
URI: https://hdl.handle.net/10356/98222
DOI: http://dx.doi.org/10.1109/IJCNN.2012.6252527
Rights: © 2012 IEEE.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: SCSE Conference Papers
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.
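The two mechanisms named in the abstract, first-order TSK inference and a BCM-style weight update with a sliding modification threshold, can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the Gaussian membership functions, the specific threshold dynamics, and all parameter names and values (`eta`, `tau`, the rule parameters) are assumptions made for the example.

```python
import numpy as np

def gaussian_mf(x, centers, widths):
    """Gaussian membership of each input dimension in each rule's fuzzy region."""
    return np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))

def tsk_output(x, centers, widths, coeffs):
    """First-order TSK inference: each rule maps an input fuzzy region to a
    linear consequent a0 + a1*x1 + ... + an*xn; the system output is the
    firing-strength-weighted average of the rule consequents."""
    # Firing strength of each rule: product of per-dimension memberships.
    strengths = np.prod(gaussian_mf(x, centers, widths), axis=1)
    # Linear consequent of each rule: bias term plus dot product with x.
    consequents = coeffs[:, 0] + coeffs[:, 1:] @ x
    return np.sum(strengths * consequents) / (np.sum(strengths) + 1e-12)

def bcm_update(w, x, eta=0.01, theta=1.0, tau=0.1):
    """BCM-style update: dw = eta * y * (y - theta) * x.
    The post-synaptic activity y potentiates the weights when it exceeds the
    sliding threshold theta and depresses them when it falls below, and theta
    tracks a running average of y**2 -- this bidirectionality is what keeps
    weight growth bounded, unlike plain Hebbian learning."""
    y = float(w @ x)
    w_new = w + eta * y * (y - theta) * x
    theta_new = theta + tau * (y ** 2 - theta)  # sliding modification threshold
    return w_new, theta_new
```

For example, calling `bcm_update` with activity above the current threshold strengthens the weights (Hebbian potentiation), while the same input with a higher threshold weakens them (anti-Hebbian depression); `tsk_output` always returns a value between the smallest and largest rule consequent, since it is a convex combination of them.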