Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/161032
Title: An incremental construction of deep neuro fuzzy system for continual learning of nonstationary data streams
Authors: Pratama, Mahardhika
Pedrycz, Witold
Webb, Geoffrey I.
Keywords: Engineering::Computer science and engineering
Issue Date: 2019
Source: Pratama, M., Pedrycz, W. & Webb, G. I. (2019). An incremental construction of deep neuro fuzzy system for continual learning of nonstationary data streams. IEEE Transactions on Fuzzy Systems, 28(7), 1315-1328. https://dx.doi.org/10.1109/TFUZZ.2019.2939993
Project: RG130/17
Journal: IEEE Transactions on Fuzzy Systems
Abstract: Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration, having lower generalization power than deep structures. This article proposes a novel self-organizing deep FNN, namely the deep evolving fuzzy neural network (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play a limited role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method, which not only detects the covariate drift, variations of the input space, but also accurately identifies the real drift, dynamic changes of both the feature space and the target space. The DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, namely the generic classifier, drives the hidden layer. It is equipped with an automatic feature selection method, which controls the activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent the uncontrollable growth of the input space dimensionality caused by the feature augmentation approach in building a deep network structure. The DEVFNN works in a samplewise fashion and is compatible with data stream applications. The efficacy of the DEVFNN has been thoroughly evaluated using seven datasets with nonstationary properties under the prequential test-then-train protocol. It has been compared with four popular continual learning algorithms and its shallow counterpart, where the DEVFNN demonstrates improved classification accuracy. Moreover, the drift detection method is shown to be an effective tool for controlling the depth of the network structure, while the hidden layer merging scenario simplifies the network complexity of a deep network with negligible compromise of generalization performance.
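The prequential test-then-train protocol mentioned in the abstract can be sketched as follows (a minimal illustration, not the paper's implementation; the `predict`/`update` interface and function name are hypothetical stand-ins for any incremental classifier):

```python
# Prequential (test-then-train) evaluation over a data stream:
# each incoming sample is first used to TEST the current model,
# and only afterwards used to TRAIN (update) it.
def prequential_accuracy(model, stream):
    correct = 0
    total = 0
    for x, y in stream:
        # Test first: the model predicts before seeing this sample's label.
        if model.predict(x) == y:
            correct += 1
        total += 1
        # Then train: the revealed label updates the model in a samplewise fashion.
        model.update(x, y)
    return correct / total if total else 0.0
```

Because every sample is evaluated before being learned, the resulting accuracy reflects performance on unseen data throughout the stream, which is why this protocol is standard for nonstationary data stream benchmarks.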
URI: https://hdl.handle.net/10356/161032
ISSN: 1063-6706
DOI: 10.1109/TFUZZ.2019.2939993
Schools: School of Computer Science and Engineering 
Rights: © 2019 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:SCSE Journal Articles

SCOPUS™ Citations: 27 (updated on Feb 26, 2024)
Web of Science™ Citations: 23 (updated on Oct 25, 2023)
Page view(s): 102 (updated on Feb 27, 2024)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.