|Title:||Multi-Task CNN Model for Attribute Prediction|
|Authors:||Abdulnabi, Abrar H.; Wang, G.; Lu, J.; Jia, K.|
|Keywords:||Latent tasks matrix|
|Issue Date:||2015|
|Source:||Abdulnabi, A. H., Wang, G., Lu, J., & Jia, K. (2015). Multi-Task CNN Model for Attribute Prediction. IEEE Transactions on Multimedia, 17(11), 1949-1959.|
|Series/Report no.:||IEEE Transactions on Multimedia|
|Abstract:||This paper proposes a joint multi-task learning algorithm to better predict attributes in images using deep convolutional neural networks (CNNs). We learn binary semantic attributes through a multi-task CNN model in which each CNN predicts one binary attribute. Multi-task learning allows the CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN generates attribute-specific feature representations, and multi-task learning is then applied to these features to predict the attributes. In our multi-task framework, we propose a method to decompose the overall model's parameters into a latent task matrix and a combination matrix. Furthermore, under-sampled classifiers can leverage shared statistics from other classifiers to improve their performance. A natural grouping of attributes is applied so that attributes in the same group are encouraged to share more knowledge, while attributes in different groups generally compete with each other and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets.|
|URI:||https://hdl.handle.net/10356/82925|
|ISSN:||1520-9210|
|DOI:||10.1109/TMM.2015.2477680|
|Rights:||© 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: http://dx.doi.org/10.1109/TMM.2015.2477680.|
|Fulltext Permission:||open|
|Fulltext Availability:||With Fulltext|
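The decomposition described in the abstract, splitting the overall model parameters into a latent task matrix and a combination matrix, can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation; the dimensions `d`, `T`, and `k` and all variable names are placeholders chosen for the example.

```python
import numpy as np

# Hedged sketch of the parameter decomposition from the abstract:
# the overall weight matrix W (d features x T attribute classifiers)
# is written as W = L @ S, where L (d x k) holds k shared latent tasks
# and S (k x T) holds per-attribute combination coefficients.
rng = np.random.default_rng(0)
d, T, k = 512, 10, 4              # feature dim, attributes, latent tasks (placeholders)

L = rng.standard_normal((d, k))   # latent task matrix, shared across attributes
S = rng.standard_normal((k, T))   # combination matrix, one column per attribute
W = L @ S                         # effective classifier weights, rank at most k

# Each attribute's linear score reuses the shared latent basis:
x = rng.standard_normal(d)        # one attribute-specific feature vector
score_3 = x @ (L @ S[:, 3])       # score for attribute 3
assert np.allclose(score_3, (x @ W)[3])
```

Because every column of `W` is a combination of the same `k` latent tasks, classifiers for related attributes can share statistics through `L`, which is the mechanism the abstract credits for helping under-sampled classifiers.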
|Appears in Collections:||EEE Journal Articles|
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.