Chuiyang Meng, Ming Tang, Vincent W. S. Wong
FLoRG improves federated fine-tuning of large language models by using a single low-rank matrix and Procrustes alignment, enhancing accuracy and reducing communication overhead.
FLoRG is a new method for federated fine-tuning of large language models (LLMs), in which models are fine-tuned across many devices without sharing sensitive data. Instead of the two low-rank matrices used in standard low-rank adaptation, it trains a single low-rank matrix, which avoids the error introduced when the two factors are aggregated separately and reduces the amount of data that must be communicated. In addition, it applies Procrustes alignment so that client updates are combined in a consistent basis, leading to better performance. Experiments show that FLoRG achieves higher accuracy and efficiency than current alternative methods.
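The summary does not give the details of FLoRG's alignment step, but the classical orthogonal Procrustes problem it refers to can be sketched as follows: given a client's low-rank matrix and a reference, find the orthogonal rotation that best maps one onto the other, then aggregate the rotated matrices. This is a minimal illustration, assuming the standard SVD-based Procrustes solution; the function name `procrustes_align` and the toy setup are illustrative, not taken from the paper.

```python
import numpy as np

def procrustes_align(M, ref):
    """Solve the orthogonal Procrustes problem: find the orthogonal R
    minimizing ||M @ R - ref||_F, and return the aligned matrix M @ R.
    The optimal R is U @ Vt, where U, Vt come from the SVD of M.T @ ref."""
    U, _, Vt = np.linalg.svd(M.T @ ref)
    R = U @ Vt
    return M @ R

# Toy example: a client's update spans the same subspace as the reference
# but is expressed in a rotated basis, so naive averaging would be inconsistent.
rng = np.random.default_rng(0)
base = rng.standard_normal((8, 2))              # reference low-rank matrix
Q, _ = np.linalg.qr(rng.standard_normal((2, 2)))  # random orthogonal rotation
client = base @ Q                                # same subspace, rotated basis

aligned = procrustes_align(client, base)
# After alignment, the client's matrix matches the reference basis,
# so server-side averaging combines updates consistently.
assert np.allclose(aligned, base, atol=1e-8)
```

Aligning each client's matrix to a common reference before averaging is why a rotation-invariant step like Procrustes helps: low-rank factors are only identifiable up to an invertible basis change, so two clients with identical subspaces can still disagree entry-wise until they are rotated into the same frame.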