Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/181218
Title: Revisiting class-incremental learning with pre-trained models: generalizability and adaptivity are all you need
Authors: Zhou, Da-Wei; Cai, Zi-Wen; Ye, Han-Jia; Zhan, De-Chuan; Liu, Ziwei
Keywords: Computer and Information Science
Issue Date: 2024
Source: Zhou, D., Cai, Z., Ye, H., Zhan, D. & Liu, Z. (2024). Revisiting class-incremental learning with pre-trained models: generalizability and adaptivity are all you need. International Journal of Computer Vision. https://dx.doi.org/10.1007/s11263-024-02218-0
Project: MOET2EP20221-0012; NTU NAP; IAF-ICP
Journal: International Journal of Computer Vision
Abstract: Class-incremental learning (CIL) aims to adapt to emerging new classes without forgetting old ones. Traditional CIL models are trained from scratch to continually acquire knowledge as data evolves. Recently, pre-training has achieved substantial progress, making vast pre-trained models (PTMs) accessible for CIL. In contrast to models trained from scratch, PTMs possess generalizable embeddings that can be easily transferred to CIL. In this work, we revisit CIL with PTMs and argue that the core factors in CIL are adaptivity for model updating and generalizability for knowledge transfer. (1) We first reveal that a frozen PTM can already provide generalizable embeddings for CIL. Surprisingly, a simple baseline (SimpleCIL), which continually sets the classifier weights of the PTM to class prototype features, can beat the state of the art even without training on the downstream task. (2) Due to the distribution gap between pre-trained and downstream datasets, the PTM can be further endowed with adaptivity via model adaptation. We propose AdaPt and mERge (Aper), which aggregates the embeddings of the PTM and the adapted model for classifier construction. Aper is a general framework that can be orthogonally combined with any parameter-efficient tuning method, uniting the PTM's generalizability with the adapted model's adaptivity. (3) Additionally, since previous ImageNet-based benchmarks are unsuitable in the era of PTMs due to data overlap, we propose four new benchmarks for assessment, namely ImageNet-A, ObjectNet, OmniBenchmark, and VTAB. Extensive experiments validate the effectiveness of Aper within a unified and concise framework. Code is available at https://github.com/zhoudw-zdw/RevisitingCIL.
URI: https://hdl.handle.net/10356/181218
ISSN: 0920-5691
DOI: 10.1007/s11263-024-02218-0
Schools: College of Computing and Data Science
Research Centres: S-Lab
Rights: © 2024 The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: CCDS Journal Articles
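The SimpleCIL baseline described in the abstract reduces to a few lines: a frozen PTM extracts embeddings, and each class's classifier weight is set to the mean embedding (prototype) of its training samples, so new classes are added without gradient updates and old prototypes are never overwritten. The sketch below is a minimal illustration only, assuming a torchvision ResNet-18 backbone and hypothetical class/feature counts; it is not the authors' released implementation (see the linked GitHub repository for that).

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Frozen pre-trained backbone. The paper mainly uses ViT backbones; any
# feature extractor works for this sketch. Replacing the final fc layer
# with Identity exposes the 512-d penultimate embeddings.
backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def update_prototypes(prototypes, counts, loader):
    """Accumulate class-mean embeddings (prototypes) for the classes in
    the current incremental task; prototypes of earlier classes are left
    untouched, so nothing is forgotten."""
    for images, labels in loader:
        feats = backbone(images)              # frozen PTM embeddings
        prototypes.index_add_(0, labels, feats)
        counts.index_add_(0, labels, torch.ones(labels.numel()))
    return prototypes, counts

@torch.no_grad()
def predict(images, prototypes, counts):
    """Nearest-prototype classification via cosine similarity."""
    means = prototypes / counts.clamp(min=1).unsqueeze(1)
    feats = F.normalize(backbone(images), dim=1)
    return (feats @ F.normalize(means, dim=1).T).argmax(dim=1)

# Usage: allocate the buffers once, then call update_prototypes after
# each incremental task. Sizes below are hypothetical.
num_classes, feat_dim = 100, 512
prototypes = torch.zeros(num_classes, feat_dim)
counts = torch.zeros(num_classes)
```

Per the abstract, Aper follows the same prototype-based classifier construction but aggregates the frozen PTM's embedding with that of a parameter-efficiently adapted copy of the model, trading a small adaptation cost for better fit to the downstream distribution.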
SCOPUS™ Citations: 50 (updated on Mar 24, 2025)
Page view(s): 71 (updated on Mar 23, 2025)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.