Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/96655
Title: Dictionary training for sparse representation as generalization of K-means clustering
Authors: Sahoo, Sujit Kumar; Makur, A.
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2013
Source: Sahoo, S. K., & Makur, A. (2013). Dictionary Training for Sparse Representation as Generalization of K-Means Clustering. IEEE Signal Processing Letters, 20(6), 587-590.
Series/Report no.: IEEE Signal Processing Letters
Abstract: Recent dictionary training algorithms for sparse representation, such as K-SVD, MOD, and their variants, are reminiscent of K-means clustering, and this letter investigates such algorithms from that viewpoint. It shows that, although K-SVD is sequential like K-means, it fails to simplify to K-means because it destroys the structure in the sparse coefficients. In contrast, MOD can be viewed as a parallel generalization of K-means, which simplifies to K-means without perturbing the sparse coefficients. Keeping memory usage in mind, we propose an alternative to MOD: a sequential generalization of K-means (SGK). While experiments suggest comparable training performance across the algorithms, complexity analysis shows MOD and SGK to be faster under a dimensionality condition.
URI: https://hdl.handle.net/10356/96655
DOI: 10.1109/LSP.2013.2258912
Rights: © 2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: http://dx.doi.org/10.1109/LSP.2013.2258912.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: EEE Journal Articles
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.
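The abstract's contrast between MOD (all atoms updated in parallel by least squares) and SGK (atoms updated one at a time, K-means-style, without altering the sparse coefficients' structure) can be sketched in NumPy. This is a minimal toy illustration under stated assumptions, not the paper's implementation: the dimensions, the thresholding-based sparse-coding step, and all variable names are placeholders chosen for the sketch.

```python
import numpy as np

# Toy setup (illustrative assumptions): signals Y (n x N), dictionary
# D (n x K) with unit-norm atoms, sparse codes X (K x N), sparsity s.
rng = np.random.default_rng(0)
n, K, N, s = 8, 12, 100, 3

def normalize_columns(D):
    return D / np.linalg.norm(D, axis=0, keepdims=True)

D = normalize_columns(rng.standard_normal((n, K)))
Y = rng.standard_normal((n, N))

def sparse_code(D, Y, s):
    """Keep the s largest-magnitude least-squares coefficients per signal
    (a crude stand-in for a pursuit algorithm, kept short for the sketch)."""
    X = np.linalg.lstsq(D, Y, rcond=None)[0]
    idx = np.argsort(np.abs(X), axis=0)[:-s, :]  # indices of the K-s smallest
    np.put_along_axis(X, idx, 0.0, axis=0)
    return X

def mod_update(Y, X):
    """MOD: update all atoms in parallel, D = Y X^T (X X^T)^(-1)."""
    return Y @ X.T @ np.linalg.pinv(X @ X.T)

def sgk_update(D, Y, X):
    """SGK-style: update atoms one at a time by least squares on the
    residual, leaving the sparsity pattern of X untouched (unlike K-SVD)."""
    D, X = D.copy(), X.copy()
    for k in range(D.shape[1]):
        users = np.nonzero(X[k])[0]          # signals that use atom k
        if users.size == 0:
            continue
        xk = X[k, users]
        E = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], xk)
        d = E @ xk / (xk @ xk)               # least-squares atom for residual E
        scale = np.linalg.norm(d)
        if scale < 1e-12:
            continue
        D[:, k] = d / scale                  # unit-norm atom ...
        X[k, users] = xk * scale             # ... codes rescaled to compensate
    return D, X
```

With s = 1 and codes forced to 1, both updates reduce to recomputing cluster centroids, which is the K-means connection the letter develops; the sequential loop in `sgk_update` is what makes it a memory-light alternative to MOD's batch solve.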