Zijian Li, Xiaocheng Feng, Huixin Liu, Yichong Huang, Ting Liu, Bing Qin
FroM is a new adaptive model merging method that uses the Frobenius norm to effectively combine fine-tuned models without data, reducing task interference.
As large language models become more capable, fine-tuning them for specific tasks is a common way to improve their performance. However, merging these fine-tuned models can cause the tasks to interfere with each other, especially when parameter-efficient fine-tuning methods are used. The paper introduces FroM, an adaptive merging method that measures model parameters with the Frobenius norm and requires no training data. This alleviates task interference and outperforms existing merging methods.
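To make the idea concrete, here is a minimal sketch of a data-free, Frobenius-norm-based merge. It assumes each fine-tuned model contributes a task vector (its delta from the base weights) and that merging coefficients are derived from the task vectors' Frobenius norms; the function name `frobenius_merge` and the specific coefficient choice are illustrative assumptions, not the paper's exact FroM formulation.

```python
import numpy as np

def frobenius_merge(base, finetuned_models):
    """Data-free merge sketch (illustrative, not the exact FroM method).

    base            -- dict mapping parameter name -> ndarray of base weights
    finetuned_models -- list of dicts with the same keys/shapes as `base`
    """
    merged = {}
    for name, w_base in base.items():
        # Task vector for each fine-tuned model: its delta from the base.
        deltas = [m[name] - w_base for m in finetuned_models]
        # Frobenius norm of each task vector (no training data needed).
        norms = np.array([np.linalg.norm(d) for d in deltas])
        total = norms.sum()
        # Hypothetical choice: normalize norms into merging coefficients.
        coeffs = norms / total if total > 0 else np.zeros_like(norms)
        merged[name] = w_base + sum(c * d for c, d in zip(coeffs, deltas))
    return merged

# Toy usage: merge two "fine-tuned" models of a single 2x2 parameter.
base = {"w": np.zeros((2, 2))}
model_a = {"w": np.ones((2, 2))}          # delta norm = 2
model_b = {"w": 3.0 * np.ones((2, 2))}    # delta norm = 6
result = frobenius_merge(base, [model_a, model_b])
# Coefficients are 0.25 and 0.75, so every entry is 0.25*1 + 0.75*3 = 2.5
```

The key property this sketch shares with the paper's setting is that the merge is computed purely from the parameters themselves, with no forward passes over task data.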