Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/178531
Title: Purify unlearnable examples via rate-constrained variational autoencoders
Authors: Yu, Yi
Wang, Yufei
Xia, Song
Yang, Wenhan
Lu, Shijian
Tan, Yap Peng
Kot, Alex Chichung
Keywords: Computer and Information Science
Issue Date: 2024
Source: Yu, Y., Wang, Y., Xia, S., Yang, W., Lu, S., Tan, Y. P. & Kot, A. C. (2024). Purify unlearnable examples via rate-constrained variational autoencoders. 41st International Conference on Machine Learning (ICML 2024), PMLR 235, 1-25.
Project: DSOCL22332 
Conference: 41st International Conference on Machine Learning (ICML 2024)
Abstract: Unlearnable examples (UEs) seek to maximize test error by making subtle modifications to correctly labeled training examples. Defenses against these poisoning attacks can be categorized by whether specific interventions are adopted during training. The first approach is training-time defense, such as adversarial training, which can mitigate poisoning effects but is computationally intensive. The other is pre-training purification, e.g., image short squeezing, which consists of several simple compressions but often struggles to handle diverse UEs. Our work provides a novel disentanglement mechanism to build an efficient pre-training purification method. First, we show that rate-constrained variational autoencoders (VAEs) exhibit a clear tendency to suppress the perturbations in UEs, and we then provide a theoretical analysis of this phenomenon. Building on these insights, we introduce a disentangle variational autoencoder (D-VAE), capable of disentangling the perturbations with learnable class-wise embeddings. From this network, a two-stage purification approach follows naturally: the first stage roughly eliminates perturbations, while the second stage produces refined, poison-free results, ensuring effectiveness and robustness across various scenarios. Extensive experiments demonstrate the remarkable performance of our method across CIFAR-10, CIFAR-100, and a 100-class ImageNet-subset. Code is available at https://github.com/yuyi-sd/D-VAE.
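The rate-constrained behavior described in the abstract can be illustrated with a short sketch. Below is a minimal PyTorch example, assuming CIFAR-10-sized inputs (3x32x32); the architecture, latent size, and the hinge-style KL budget (rate_limit) are illustrative assumptions for a generic rate-constrained VAE, not the authors' exact D-VAE implementation (see the linked repository for that).

    # Minimal sketch of a rate-constrained VAE for purification. The layer
    # sizes, latent_dim, and rate_limit below are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RateConstrainedVAE(nn.Module):
        def __init__(self, latent_dim=128):
            super().__init__()
            self.encoder = nn.Sequential(          # 3x32x32 -> 64x8x8
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Flatten(),
            )
            self.fc_mu = nn.Linear(64 * 8 * 8, latent_dim)
            self.fc_logvar = nn.Linear(64 * 8 * 8, latent_dim)
            self.fc_dec = nn.Linear(latent_dim, 64 * 8 * 8)
            self.decoder = nn.Sequential(          # 64x8x8 -> 3x32x32
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            h = self.encoder(x)
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            # Reparameterization trick: sample z = mu + sigma * eps
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            x_hat = self.decoder(self.fc_dec(z).view(-1, 64, 8, 8))
            return x_hat, mu, logvar

    def loss_fn(x, x_hat, mu, logvar, rate_limit=2.0):
        # Reconstruction term plus a hinge on the KL rate: the KL term is
        # penalized only above the budget, which caps the information the
        # latent can carry and tends to squeeze out the low-magnitude
        # perturbations that make examples unlearnable.
        recon = F.mse_loss(x_hat, x, reduction="mean")
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + F.relu(kl - rate_limit)

After training such a model on the poisoned set with a small rate_limit, purification amounts to a single forward pass, e.g. x_purified, _, _ = model(x_poisoned); the paper's full method adds the class-wise disentanglement and a second refinement stage on top of this basic mechanism.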
URI: https://hdl.handle.net/10356/178531
URL: https://proceedings.mlr.press/v235/
https://icml.cc/
Schools: Interdisciplinary Graduate School (IGS) 
School of Electrical and Electronic Engineering 
School of Computer Science and Engineering 
Research Centres: Rapid-Rich Object Search (ROSE) Lab 
Rights: © 2024 The Author(s). All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at https://proceedings.mlr.press/v235/.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: IGS Conference Papers

Files in This Item:
paper.pdf (4.23 MB, Adobe PDF)