FeDeRA: Efficient Fine-tuning of Language Models in Federated Learning Leveraging Weight Decomposition

29 Apr 2024 · Yuxuan Yan, Qianqian Yang, Shunpu Tang, Zhiguo Shi

Despite their exceptional performance on various tasks after fine-tuning, pre-trained language models (PLMs) face significant challenges due to growing privacy concerns over the data used in centralized training. In this paper, we consider federated learning (FL) for fine-tuning PLMs. However, the substantial number of parameters in PLMs poses significant difficulties for client devices with limited communication and computational resources. One promising solution is to integrate parameter-efficient fine-tuning (PEFT) into FL, which trains a much smaller set of parameters than full-parameter fine-tuning (FFT). Although PEFT methods remarkably improve training efficiency, they may lead to degraded performance, especially when data across different clients are non-i.i.d., as revealed by our experimental results. To overcome this, we propose FeDeRA, which extends and improves a widely used PEFT method, low-rank adaptation (LoRA). Following LoRA, FeDeRA decomposes the weight matrices of the PLM into low-rank matrices, allowing for more efficient computation and parameter updates during fine-tuning. Unlike LoRA, which simply initializes these low-rank matrices by random sampling or with zeros, FeDeRA initializes them using the results of singular value decomposition (SVD) performed on the pre-trained weight matrices. Extensive experiments across various tasks and datasets show that FeDeRA outperforms the considered PEFT baselines and is comparable to, or even surpasses, the FFT method in task performance within the FL setting. Moreover, FeDeRA requires only 1% of the trainable parameters of FFT and reduces training time by more than 90% to reach the same task performance level. The experimental results also highlight the robustness of FeDeRA against data heterogeneity, as it maintains stable task performance even as data heterogeneity increases.
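The key difference from vanilla LoRA is how the low-rank factors are initialized. Below is a minimal sketch of such an SVD-based initialization, assuming a PyTorch setting; the helper name svd_init_lora and the even split of singular values between the two factors are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def svd_init_lora(W: torch.Tensor, r: int):
    """Sketch: initialize LoRA factors A, B from a truncated SVD of W.

    Vanilla LoRA freezes the pre-trained weight W and learns a low-rank
    update B @ A, with A drawn randomly and B set to zeros. The idea in
    FeDeRA is to derive A and B from the SVD of the pre-trained weights
    instead (details beyond the abstract are not reproduced here).
    """
    # Truncated SVD of the pre-trained weight matrix W (out_features x in_features).
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r, S_r, Vh_r = U[:, :r], S[:r], Vh[:r, :]

    # Split the singular values between the two factors so that B @ A equals
    # the rank-r principal component of W (illustrative choice).
    B = U_r * S_r.sqrt()                 # shape (out_features, r)
    A = S_r.sqrt().unsqueeze(1) * Vh_r   # shape (r, in_features)
    return A, B

# Usage: initialize LoRA adapters for one frozen linear layer.
W = torch.randn(768, 768)    # stand-in for a pre-trained weight matrix
A, B = svd_init_lora(W, r=8)
# Only A and B are trained and exchanged with the server during FL rounds,
# which is what keeps the communication and computation cost low.
```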


