Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/139886

Title: Multilabel prediction via cross-view search
Authors: Shen, Xiaobo; Tsang, Ivor W.
Keywords: Engineering::Computer science and engineering
Issue Date: 2017
Source: Shen, X., Liu, W., Tsang, I. W., Sun, Q.-S., & Ong, Y.-S. (2018). Multilabel prediction via cross-view search. IEEE Transactions on Neural Networks and Learning Systems, 29(9), 4324-4338. doi:10.1109/TNNLS.2017.2763967
Journal: IEEE Transactions on Neural Networks and Learning Systems

Abstract: Embedding methods have shown promising performance in multilabel prediction, as they are able to discover label dependence. However, most methods ignore the correlations between the input and the output, so their learned embeddings are not well aligned, which degrades prediction performance. This paper presents a formulation for multilabel learning, from the perspective of cross-view learning, that exploits the correlations between the input and the output. The proposed method, called Co-Embedding (CoE), jointly learns a semantic common subspace and view-specific mappings within one framework. The semantic similarity structure among the embeddings is further preserved, ensuring that close embeddings share similar labels. Additionally, CoE conducts multilabel prediction through a cross-view k-nearest-neighbor (kNN) search among the learned embeddings, which significantly reduces computational cost compared with conventional decoding schemes. A hashing-based model, Co-Hashing (CoH), is further proposed. CoH builds on CoE and imposes a binary constraint on the continuous latent embeddings, generating compact binary representations that improve prediction efficiency through fast kNN search over multiple labels in the Hamming space. Extensive experiments on various real-world data sets demonstrate the superiority of the proposed methods over state-of-the-art approaches in terms of both prediction accuracy and efficiency.

URI: https://hdl.handle.net/10356/139886
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2017.2763967
Rights: © 2017 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: SCSE Journal Articles
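The abstract's core prediction step, kNN search over binary codes in the Hamming space, can be sketched in a few lines of NumPy. This is not the authors' implementation: the codes and labels below are random placeholders standing in for the input-view query code and output-view database codes that CoH would learn, and the majority-vote aggregation is one simple choice of label decoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned binary embeddings. In CoH, view-specific
# mappings would produce the query code (input view) and the database
# codes (label view); here both are random for illustration only.
n_train, n_bits, n_labels, k = 100, 32, 5, 3
db_codes = rng.integers(0, 2, size=(n_train, n_bits), dtype=np.uint8)
db_labels = rng.integers(0, 2, size=(n_train, n_labels))
query_code = rng.integers(0, 2, size=n_bits, dtype=np.uint8)

# Hamming distance = number of differing bits (vectorized comparison).
dists = np.count_nonzero(db_codes != query_code, axis=1)

# Cross-view kNN search: retrieve the k codes closest to the query ...
knn_idx = np.argsort(dists)[:k]

# ... and predict each label by majority vote over the neighbors.
pred = (db_labels[knn_idx].mean(axis=0) >= 0.5).astype(int)
```

With compact codes (e.g., 32 or 64 bits), the same distance can be computed with XOR and popcount on packed integers, which is what makes Hamming-space search fast in practice.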
Updated on Sep 3, 2020
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.