Yaoyue Zheng, Yin Zhang, Joost van de Weijer, Gido M van de Ven, Shaoyi Du, Xuetao Zhang, Zhiqiang Tian
The paper introduces EWC-LoRA, a method that combines weight regularization with low-rank updates to improve continual learning with pre-trained models, reducing interference between tasks while remaining memory- and compute-efficient.
Continual learning is about teaching machines to learn new tasks without forgetting old ones. This paper focuses on making continual learning with pre-trained models more efficient. The authors propose a new method called EWC-LoRA, which combines low-rank updates with a technique called Elastic Weight Consolidation to reduce interference between tasks. The approach is efficient in both memory and computation, and it outperforms existing methods at balancing the learning of new tasks against the retention of old ones.
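To make the idea concrete, here is a minimal PyTorch sketch of the general recipe the summary describes: train only small low-rank (LoRA) factors on top of a frozen pre-trained layer, and penalize them with a diagonal-Fisher Elastic Weight Consolidation term when moving to a new task. All names here (`LoRALinear`, `estimate_fisher`, `ewc_penalty`, `train_step`, the strength `lam`) are illustrative assumptions rather than the paper's code, and the paper's exact formulation (for instance, whether the penalty is applied to the factors themselves or to the full weight update they induce) may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """A frozen pre-trained linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, in_features, out_features, rank=4, alpha=1.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pre-trained weights stay frozen
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_features, rank))        # up-projection, zero-init
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen base output plus the scaled low-rank correction (B @ A) x.
        return self.base(x) + self.scale * F.linear(x, self.B @ self.A)

def estimate_fisher(model, loader):
    """Diagonal Fisher information for the trainable (LoRA) parameters,
    approximated by averaged squared gradients over a task's data."""
    fisher = {n: torch.zeros_like(p)
              for n, p in model.named_parameters() if p.requires_grad}
    model.eval()
    for x, y in loader:
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.requires_grad and p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, anchor):
    """Quadratic penalty pulling each LoRA parameter toward its value after
    the previous task, weighted by its estimated importance (Fisher)."""
    penalty = 0.0
    for n, p in model.named_parameters():
        if n in fisher:
            penalty = penalty + (fisher[n] * (p - anchor[n]) ** 2).sum()
    return penalty

def train_step(model, x, y, optimizer, fisher, anchor, lam=100.0):
    """One update on the current task: task loss plus the EWC regularizer."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + (lam / 2.0) * ewc_penalty(model, fisher, anchor)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, after finishing each task one would snapshot the anchor values (`{n: p.detach().clone() for n, p in model.named_parameters() if p.requires_grad}`) and re-run `estimate_fisher` on that task's data; for the first task, empty `fisher` and `anchor` dictionaries make the penalty vanish. Because only the small A and B matrices are trained, stored, and regularized, both the update and the consolidation overhead stay low, which is the efficiency argument the summary makes.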