dc.contributor.author: Tan, Ming Kui
dc.date.accessioned: 2014-12-04T09:02:55Z
dc.date.accessioned: 2017-07-23T08:30:16Z
dc.date.available: 2014-12-04T09:02:55Z
dc.date.available: 2017-07-23T08:30:16Z
dc.date.copyright: 2014 [en_US]
dc.date.issued: 2014
dc.identifier.citation: Tan, M. K. (2014). Towards efficient large-scale learning by exploiting sparsity. Doctoral thesis, Nanyang Technological University, Singapore.
dc.identifier.uri: http://hdl.handle.net/10356/61881
dc.description.abstract: The last decade has witnessed explosive growth in data. Ultrahigh-dimensional, large-volume data raise critical issues such as prohibitive storage costs and poor scalability of data analysis. To enable efficient and effective big data analysis, this thesis exploits the sparsity constraints of learning tasks and investigates large-scale learning in three directions: feature selection for classification, sparse recovery for signal processing, and low-rank matrix recovery.

A Feature Generating Machine (FGM) is proposed to address large-scale, ultrahigh-dimensional feature selection for classification tasks (e.g., with $O(10^{12})$ features). Unlike traditional gradient-based approaches that optimize over all features, FGM iteratively activates a group of features and solves a sequence of subproblems w.r.t. the activated features only. As a result, it avoids prohibitive storage costs and scales well on big data.

A Matching Pursuit LASSO (MPL) algorithm is developed to address the large-scale sparse recovery problem. MPL is guaranteed to converge to a global solution and greatly reduces the computational cost under big dictionaries (e.g., with 1 million atoms). In particular, by taking advantage of its optimization scheme, a batch-mode MPL is developed to vastly speed up the optimization over many signals.

A Riemannian Pursuit (RP) algorithm is proposed to address the low-rank matrix recovery problem. RP solves a sequence of fixed-rank optimization problems, each handled by a nonlinear Riemannian conjugate gradient method. Compared to existing methods, RP does not require rank estimation and performs stably on ill-conditioned big matrices.

Extensive experiments on both synthetic and real-world problems demonstrate that the proposed methods achieve superior scalability and comparable or even better performance than the considered state-of-the-art baselines. [en_US]
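To make the abstract's active-feature idea concrete, here is a minimal sketch of FGM-style feature selection: score all features, activate only the top group, and refit a small model restricted to the activated set. The function name, the least-squares subproblem, and all parameters are illustrative assumptions, not the thesis's actual convex formulation.

```python
import numpy as np

def fgm_style_selection(X, y, group_size=10, n_iters=5, lam=1e-2):
    """Hypothetical sketch of iterative feature activation (not FGM itself).
    Each round activates the features most correlated with the residual,
    then solves a small ridge subproblem w.r.t. the activated features only,
    so the full d-dimensional weight vector is never stored or optimized."""
    n, d = X.shape
    active = np.array([], dtype=int)
    residual = y.astype(float).copy()
    for _ in range(n_iters):
        # Score every feature; mask already-activated ones, keep the top group.
        scores = np.abs(X.T @ residual) / n
        scores[active] = -np.inf
        active = np.concatenate([active, np.argsort(scores)[-group_size:]])
        # Subproblem over activated features only (ridge regression here).
        Xa = X[:, active]
        w = np.linalg.solve(Xa.T @ Xa + lam * np.eye(len(active)), Xa.T @ y)
        residual = y - Xa @ w
    return active, w
```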
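Similarly, the MPL paragraph can be illustrated by combining matching-pursuit atom selection with a small LASSO solved on the selected support. The coordinate-descent inner solver and all names below are assumptions for illustration; the thesis's actual solver and its batch-mode variant (which amortizes work across many signals) are not reproduced here.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def mpl_style_recovery(D, y, atoms_per_iter=5, n_outer=10, lam=0.1, cd_iters=50):
    """Hypothetical sketch of pursuit-plus-LASSO (not the thesis's MPL).
    Greedily pull in the atoms most correlated with the residual, then
    solve a small LASSO over the selected atoms by coordinate descent,
    so the full million-atom dictionary is never optimized over at once."""
    support = np.array([], dtype=int)
    r = y.astype(float).copy()
    for _ in range(n_outer):
        corr = np.abs(D.T @ r)
        corr[support] = -np.inf
        support = np.concatenate([support, np.argsort(corr)[-atoms_per_iter:]])
        Da = D[:, support]
        coef = np.zeros(len(support))
        for _ in range(cd_iters):
            for j in range(len(support)):
                # Partial residual excluding atom j, then its closed-form update.
                r_j = y - Da @ coef + Da[:, j] * coef[j]
                coef[j] = soft_threshold(Da[:, j] @ r_j, lam) / (Da[:, j] @ Da[:, j])
        r = y - Da @ coef
    return support, coef
```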
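Finally, the RP paragraph describes an outer loop over fixed-rank subproblems with the rank raised adaptively. The sketch below keeps that outer structure but swaps the nonlinear Riemannian conjugate gradient inner solver for plain projected gradient via truncated SVD; this substitution and every parameter are assumptions, not the thesis's method.

```python
import numpy as np

def rp_style_recovery(M_obs, mask, max_rank=10, inner_iters=100, step=1.0, tol=1e-4):
    """Hypothetical rank-increasing outer loop (not the thesis's RP).
    For each candidate rank r, run a fixed-rank recovery subproblem
    (projected gradient standing in for Riemannian CG), and stop raising
    the rank once the observed residual no longer improves."""
    X = np.zeros_like(M_obs, dtype=float)
    prev_res = np.inf
    for r in range(1, max_rank + 1):
        for _ in range(inner_iters):
            G = mask * (X - M_obs)              # gradient of 0.5*||mask*(X - M)||^2
            U, s, Vt = np.linalg.svd(X - step * G, full_matrices=False)
            X = (U[:, :r] * s[:r]) @ Vt[:r]     # project back to rank <= r
        res = np.linalg.norm(mask * (X - M_obs))
        if prev_res - res < tol * max(prev_res, 1.0):
            break                               # raising the rank no longer helps
        prev_res = res
    return X
```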
dc.format.extent: 237 p. [en_US]
dc.language.iso: en [en_US]
dc.subject: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence [en_US]
dc.title: Towards efficient large-scale learning by exploiting sparsity [en_US]
dc.type: Thesis
dc.contributor.supervisor2: Ivor W. Tsang [en_US]
dc.contributor.research: Centre for Computational Intelligence [en_US]
dc.contributor.school: School of Computer Engineering [en_US]
dc.description.degree: DOCTOR OF PHILOSOPHY (SCE) [en_US]
dc.identifier.doi: https://doi.org/10.32657/10356/61881


Files in this item

File: main_thesis.pdf | Size: 1.539 MB | Format: application/pdf
