Please use this identifier to cite or link to this item:
|Title:||Decision trees, random vector functional link neural network and their ensembles|
|Authors:||Katuwal, Rakesh Kumar|
|Keywords:||Engineering::Computer science and engineering::Computing methodologies::Pattern recognition|
|Issue Date:||2019|
|Publisher:||Nanyang Technological University|
|Source:||Katuwal, R. K. (2019). Decision trees, random vector functional link neural network and their ensembles. Doctoral thesis, Nanyang Technological University, Singapore.|
|Abstract:||Decision Tree (DT) and Neural Network (NN) are two popular machine learning algorithms. A decision tree consists of internal nodes and leaf nodes: each internal node is associated with a test function, while each leaf node represents a class label or holds a probability distribution. A decision tree is effectively a sequence of IF-THEN rules, which is intuitive to humans and simple to implement. A neural network, in contrast, is configured to mimic the functionality of the animal brain using a highly interconnected system of computational units called neurons. The neurons are realized as non-linear activation functions that process numeric inputs and outputs. They are interconnected by transmission channels called synapses, which are represented by the weights of the network connections. The weights are typically learned using the back-propagation (BP) algorithm. Standard BP is time consuming, requires a large amount of training data, and may fail to converge. Randomization-based neural networks such as the Random Vector Functional Link (RVFL) neural network avoid these issues by randomly fixing some parts (parameters) of the network while optimizing the rest using a closed-form solution or an iterative procedure. An ensemble of classifiers, also known as a multiple classifier system, is a widely researched and frequently applied approach in machine learning.
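The closed-form RVFL training described above can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: all function names, shapes, and hyperparameter values are assumptions. Hidden weights are drawn at random and never trained; only the output weights are solved via ridge regression, with the direct input-to-output links that distinguish RVFL from other random-weight networks.

```python
import numpy as np

def train_rvfl(X, y_onehot, n_hidden=100, reg=1e-3, seed=0):
    """Minimal RVFL sketch (assumed names/shapes, for illustration only)."""
    rng = np.random.default_rng(seed)
    # Input-to-hidden weights and biases are random and stay fixed.
    W = rng.uniform(-1, 1, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, size=n_hidden)
    H = np.tanh(X @ W + b)        # random hidden features
    D = np.hstack([X, H])         # direct links: raw inputs concatenated
    # Output weights via the closed-form ridge-regression solution.
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y_onehot)
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    D = np.hstack([X, np.tanh(X @ W + b)])
    return (D @ beta).argmax(axis=1)
```

Because no gradient descent is involved, training reduces to one linear solve, which is why RVFL trains much faster than a BP-trained network of similar size.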
A multitude of studies corroborate that combining many classifiers leads to improved performance compared to a single classifier. Decision trees and neural networks are typically used in an ensemble framework to obtain better performance. Random Forest (RaF), an ensemble of decision trees, has been used to win several machine learning competitions and is considered one of the best algorithms. On the other hand, the popularity of the Random Vector Functional Link (RVFL) neural network is also rising because of its impressive performance in several diverse domains and its faster training compared to conventional BP-trained NNs. The first part of this thesis is based on random forests (including a hybrid ensemble classifier based on RaF and the standard shallow RVFL), while the second part is based on multi-layer, or deep, random vector functional link neural networks. Decision trees in a random forest usually employ binary splits at each node, which result in very deep trees. We utilize an RVFL at the root node of the DTs to create multi-way splits. The proposed method provides rich insight into the data by grouping confusing or hard-to-classify samples, and thus provides an opportunity to employ fine-grained classification rules over the data. Moreover, by using multi-way splits, we obtain shallow trees that agree with Ockham's razor principle of building smaller trees. Extensive experiments on several multi-class datasets show the efficacy of our method. We also present our work on oblique decision trees, in which an oblique (linear) hyperplane is employed instead of an axis-parallel (orthogonal) hyperplane at each non-leaf node to partition the data. Trees with such hyperplanes can better exploit the geometric structure of the data, increasing the accuracy of the trees and reducing their depth.
While standard decision trees perform an exhaustive search for the best axis-parallel hyperplane, an exhaustive search for the best oblique hyperplane is computationally expensive. Therefore, present realizations of oblique decision trees do not evaluate many promising oblique splits before selecting the best. We present several oblique random forest variants that search for the best oblique hyperplane from a pool of homogeneous hyperplanes generated via a one-vs-all approach. Similarly, we propose a random forest of heterogeneous oblique decision trees that employs oblique splits generated via diverse linear classifiers at each non-leaf node on some top-ranked partitions. In extensive experiments on 121 UCI machine learning datasets, 3 (out of 7) of our proposed oblique random forests take the top 3 ranks, outperforming other random forests and hundreds of other classifiers. Deep learning, also known as representation learning, has sparked a surging interest in neural networks in recent years. A deep NN consists of several hidden layers stacked on top of each other, wherein each hidden layer builds an internal representation of the data. With the current trend of building deeper networks, there have been several attempts in the literature to build deep networks based on randomized neural networks. Although several deep learning models with randomized neural networks exist, there is limited work in the context of the RVFL neural network. Thus, in this thesis, we propose several multi-layer (deep) random vector functional link neural network variants.
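The one-vs-all oblique split idea can be sketched as follows. This is an illustrative sketch only, not the thesis implementation: the function name, the choice of ridge regression as the linear classifier, and the use of Gini impurity for scoring are all assumptions. For each class, a linear separator of that class against the rest is fitted, and the hyperplane whose induced binary split yields the lowest weighted impurity is retained for the node.

```python
import numpy as np

def best_ovr_oblique_split(X, y, reg=1e-2):
    """Illustrative one-vs-all oblique split search (assumed design choices)."""
    def gini(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    Xa = np.hstack([X, np.ones((len(X), 1))])   # absorb bias into the weights
    best = None
    for c in np.unique(y):
        t = np.where(y == c, 1.0, -1.0)         # one-vs-all targets
        # Linear separator via ridge regression (one assumed choice of base learner).
        w = np.linalg.solve(Xa.T @ Xa + reg * np.eye(Xa.shape[1]), Xa.T @ t)
        left = (Xa @ w) >= 0                    # oblique hyperplane split
        if left.all() or (~left).all():
            continue                            # degenerate split, skip
        # Weighted Gini impurity of the two children.
        score = left.mean() * gini(y[left]) + (~left).mean() * gini(y[~left])
        if best is None or score < best[0]:
            best = (score, w)
    return best  # (impurity, hyperplane weights) or None
```

Searching over only C candidate hyperplanes (one per class) keeps the per-node cost low, in contrast to the exhaustive search the abstract describes as computationally expensive.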
We also present an implicit ensemble of deep RVFL neural networks, which can be regarded as a marriage of ensemble and deep learning and is simple and straightforward to implement.|
|URI:||https://hdl.handle.net/10356/136779|
|DOI:||10.32657/10356/136779|
|Rights:||This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).|
|Fulltext Permission:||open|
|Fulltext Availability:||With Fulltext|
|Appears in Collections:||EEE Theses|
Updated on Jan 31, 2023
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.