Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/147336
Title: Error-correcting output codes with ensemble diversity for robust learning in neural networks
Authors: Song, Yang; Kang, Qiyu; Tay, Wee Peng
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2021
Source: Song, Y., Kang, Q. & Tay, W. P. (2021). Error-correcting output codes with ensemble diversity for robust learning in neural networks. The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21).
Project: A19D6a0053; Award I1901E0046
Conference: The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)
Abstract: Though deep learning has been applied successfully in many scenarios, malicious inputs with human-imperceptible perturbations can make it vulnerable in real applications. This paper proposes an error-correcting neural network (ECNN) that combines a set of binary classifiers to combat adversarial examples in the multi-class classification problem. To build an ECNN, we propose to design a code matrix so that the minimum Hamming distance between any two rows (i.e., two codewords) and the minimum shared information distance between any two columns (i.e., two partitions of class labels) are simultaneously maximized. Maximizing row distances increases the system's fault tolerance, while maximizing column distances increases the diversity between binary classifiers. We propose an end-to-end training method for our ECNN, which allows further improvement of the diversity between binary classifiers. The end-to-end training renders our proposed ECNN different from the traditional error-correcting output code (ECOC) based methods that train binary classifiers independently. ECNN is complementary to other existing defense approaches such as adversarial training and can be applied in conjunction with them. We empirically demonstrate that our proposed ECNN is effective against the state-of-the-art white-box and black-box attacks on several datasets while maintaining good classification accuracy on normal examples.
URI: https://hdl.handle.net/10356/147336
Schools: School of Electrical and Electronic Engineering
Research Centres: Continental-NTU Corporate Lab
Rights: © 2021 Association for the Advancement of Artificial Intelligence (AAAI). All rights reserved. This paper was published in The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21) and is made available with the permission of the Association for the Advancement of Artificial Intelligence (AAAI).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: | EEE Conference Papers |
Files in This Item:
File | Description | Size | Format
---|---|---|---
9770.YangS.pdf | | 491.63 kB | Adobe PDF
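The abstract's code-matrix design criterion can be illustrated with a small sketch. The snippet below (an illustrative example, not the authors' implementation) computes the two quantities the paper proposes to maximize jointly: the minimum Hamming distance between any two rows (codewords) of a binary code matrix, and the minimum shared information distance (variation of information) between any two columns, where each column is viewed as a binary partition of the class labels. The 4-class matrix `M` is a toy example chosen for illustration.

```python
import math
from itertools import combinations

def min_row_hamming(M):
    """Minimum Hamming distance between any two rows (codewords)."""
    return min(sum(a != b for a, b in zip(r1, r2))
               for r1, r2 in combinations(M, 2))

def shared_info_distance(col1, col2):
    """Variation-of-information distance between two binary partitions
    of the class labels (two columns of the code matrix)."""
    n = len(col1)

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # marginal and joint distributions of the two binary labelings
    px = [col1.count(b) / n for b in (0, 1)]
    py = [col2.count(b) / n for b in (0, 1)]
    pxy = [sum(1 for a, b in zip(col1, col2) if (a, b) == (i, j)) / n
           for i in (0, 1) for j in (0, 1)]

    mi = entropy(px) + entropy(py) - entropy(pxy)  # mutual information I(X;Y)
    return entropy(px) + entropy(py) - 2 * mi      # VI = H(X) + H(Y) - 2 I(X;Y)

# Toy 4-class code matrix: rows = codewords, columns = binary class partitions.
M = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
cols = [list(c) for c in zip(*M)]

print(min_row_hamming(M))                              # -> 2
print(min(shared_info_distance(c1, c2)
          for c1, c2 in combinations(cols, 2)))        # -> 2.0
```

A matrix with larger minimum row distance can correct more binary-classifier errors, while larger minimum column distance means no two binary classifiers are trained on nearly identical label partitions; the paper's point is that both objectives must be balanced at once.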
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.