Chuangxin Zhang, Guangfeng Lin, Enhui Zhao, Kaiyang Liao, Yajun Chen
The paper introduces SCL-PNC, a scalable class-incremental learning method that uses parametric neural collapse to expand the model efficiently while maintaining feature consistency and correcting misalignment between old and new class features.
Incremental learning lets AI models absorb new information over time without forgetting what they have already learned, but existing methods often struggle to balance old and new knowledge. This research presents SCL-PNC, an approach that lets a model expand flexibly and efficiently as it encounters new categories while preserving earlier knowledge. The method uses neural collapse, a phenomenon in which the features of each class converge to maximally separated directions, as a fixed target for aligning features across the model's learning modules, which keeps representations consistent and accurate as the model grows. Experiments show that the approach performs well in practical scenarios, making it a promising solution for scalable learning systems.
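The summary does not spell out how the neural-collapse alignment is implemented. A common realization in neural-collapse-based incremental learning, and a plausible reading of "parametric neural collapse," is to fix the classifier to a simplex equiangular tight frame (ETF) and pull each sample's feature toward its class's anchor, so that every expansion module maps into the same shared geometry. The PyTorch sketch below illustrates that idea only; `simplex_etf` and `etf_alignment_loss` are hypothetical names, not SCL-PNC's actual API.

```python
import torch
import torch.nn.functional as F

def simplex_etf(num_classes: int, feat_dim: int) -> torch.Tensor:
    """Build a simplex ETF: num_classes unit vectors in R^feat_dim whose
    pairwise cosine similarity is -1/(num_classes - 1), i.e. the maximally
    separated configuration that collapsed class features converge to.
    Requires feat_dim >= num_classes."""
    assert feat_dim >= num_classes
    # Orthonormal columns via QR decomposition of a random matrix.
    u, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))
    # Center the columns, then rescale so each anchor has unit norm.
    center = torch.eye(num_classes) - torch.ones(num_classes, num_classes) / num_classes
    m = (num_classes / (num_classes - 1)) ** 0.5 * (u @ center)
    return F.normalize(m, dim=0)  # columns are the per-class anchors

def etf_alignment_loss(features: torch.Tensor, labels: torch.Tensor,
                       etf: torch.Tensor) -> torch.Tensor:
    """Cosine-distance loss pulling each normalized feature toward the
    fixed ETF anchor of its class."""
    feats = F.normalize(features, dim=1)       # (batch, feat_dim)
    anchors = etf[:, labels].t()               # (batch, feat_dim)
    return (1.0 - (feats * anchors).sum(dim=1)).mean()

# Illustrative usage: because the anchors are fixed in advance, features
# learned by an old module and a newly added module are pulled toward the
# same directions, which is one way to keep class features aligned as the
# model expands.
etf = simplex_etf(num_classes=10, feat_dim=64)
features = torch.randn(32, 64)                 # stand-in backbone outputs
labels = torch.randint(0, 10, (32,))
loss = etf_alignment_loss(features, labels, etf)
```

In such schemes the ETF can be sized for the total number of classes expected, with anchors reserved for categories that have not yet arrived, so new learning modules inherit a consistent target geometry rather than re-deriving one.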