Prakhar Kaushik, Ankit Vaidya, Shravan Chaudhari, Rama Chellappa, Alan Yuille
The Share method enables efficient continual learning by using a single, adaptable low-rank subspace, greatly reducing parameters and memory compared to traditional methods while maintaining performance.
This research introduces Share, a method that lets large AI models learn new tasks sequentially without forgetting earlier ones. Unlike traditional continual-learning approaches that keep separate task-specific parameters or models, and whose memory footprint therefore grows with every new task, Share consolidates and integrates knowledge from all tasks into a single shared low-rank subspace. This drastically reduces the number of parameters and the memory required. The method has been evaluated on tasks such as image classification and language understanding, where it performs comparably to models trained on all tasks at once.
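To make the core idea concrete, below is a minimal, hypothetical sketch (not the authors' released code or exact algorithm): a frozen base layer is augmented with one low-rank subspace whose parameters are reused and updated across all tasks, instead of allocating a fresh adapter or model per task. All names, shapes, and the training loop are illustrative assumptions.

```python
# Hypothetical sketch of a shared low-rank subspace for continual learning.
# This is NOT the paper's implementation; it only illustrates the idea of
# reusing one small set of low-rank parameters across sequential tasks.
import torch
import torch.nn as nn


class SharedLowRankLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        # Frozen pretrained weight (stands in for the large base model).
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # A single low-rank factorization: effective weight is W + A @ B.
        # The same rank * (in + out) parameters are adapted for every new
        # task, rather than storing a separate adapter per task.
        self.A = nn.Parameter(torch.zeros(out_features, rank))
        self.B = nn.Parameter(torch.randn(rank, in_features) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.B.t() @ self.A.t()


# Sequential-task loop sketch: each new task updates the same shared
# subspace (any anti-forgetting regularization is omitted here).
layer = SharedLowRankLinear(512, 512, rank=8)
optimizer = torch.optim.Adam([layer.A, layer.B], lr=1e-3)
task_loaders = []  # placeholder: one dataloader per task, seen in sequence
for task_loader in task_loaders:
    for x, y in task_loader:
        loss = nn.functional.cross_entropy(layer(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The point of the sketch is the parameter accounting: the trainable state stays fixed at one low-rank subspace regardless of how many tasks arrive, which is what yields the memory and parameter savings described above.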