Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/99250
Title: Laplacian embedded regression for scalable manifold regularization
Authors: Chen, Lin
Tsang, Ivor Wai-Hung
Xu, Dong
Keywords: DRNTU::Engineering::Computer science and engineering
Issue Date: 2012
Source: Chen, L., Tsang, I. W., & Xu, D. (2012). Laplacian Embedded Regression for Scalable Manifold Regularization. IEEE Transactions on Neural Networks and Learning Systems, 23(6), 902-915.
Series/Report no.: IEEE transactions on neural networks and learning systems
Abstract: Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small-scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using the ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are twofold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately, and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient than for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable onto a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as to accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large-scale SSL problems. Extensive experiments on both toy and real-world data sets show the effectiveness and scalability of the proposed framework.
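The transformed-kernel idea in the abstract can be sketched in a few lines. The Python snippet below is a minimal illustration, not the paper's exact formulation: it assumes an RBF base kernel, a sparse k-NN graph Laplacian, and an illustrative spectral weighting 1/(1 + γλ) for a graph kernel built from the smallest Laplacian eigenvectors; the function and parameter names (transformed_kernel, gamma_A, gamma_I, n_eigvecs) are placeholders.

    import numpy as np
    from scipy.sparse.csgraph import laplacian
    from scipy.sparse.linalg import eigsh
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.neighbors import kneighbors_graph

    def transformed_kernel(X, gamma_A=1.0, gamma_I=1.0, n_neighbors=10, n_eigvecs=20):
        # Base kernel over all labeled and unlabeled points.
        K = rbf_kernel(X)
        # Sparse k-NN affinity graph, symmetrized, and its normalized Laplacian.
        W = kneighbors_graph(X, n_neighbors=n_neighbors, mode='connectivity')
        W = 0.5 * (W + W.T)
        L = laplacian(W, normed=True)
        # A few smallest eigenpairs of L; the low-frequency eigenvectors span
        # the subspace that reflects the data manifold.
        eigvals, U = eigsh(L, k=n_eigvecs, which='SM')
        # Illustrative graph kernel: down-weight high-frequency components
        # (the exact weighting used in the paper may differ).
        G = U @ np.diag(1.0 / (1.0 + gamma_I * eigvals)) @ U.T
        # Transformed kernel = original kernel + graph kernel.
        return gamma_A * K + G

Given such a kernel, a standard SVR or RLS solver can then be applied, consistent with the abstract's description; because only a sparse Laplacian and a few of its eigenpairs are involved, the graph kernel is cheap to form for large data sets.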
URI: https://hdl.handle.net/10356/99250
http://hdl.handle.net/10220/13482
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2012.2190420
Rights: © 2012 IEEE
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:SCSE Journal Articles
