FedJudge: Federated Legal Large Language Model (2309.08173v3)
Abstract: Large language models (LLMs) have gained prominence in the field of Legal Intelligence, offering potential applications in assisting both legal professionals and laypeople. However, the centralized training of these Legal LLMs raises data privacy concerns, as legal data is distributed among various institutions and contains sensitive individual information. This paper addresses this challenge by exploring the integration of Legal LLMs with Federated Learning (FL) methodologies. With FL, Legal LLMs can be fine-tuned locally on devices or clients, and their parameter updates are aggregated and redistributed by a central server, ensuring data privacy without sharing raw data directly. However, computation and communication overheads hinder the full fine-tuning of LLMs under the FL setting. Moreover, the distribution shift of legal data across clients reduces the effectiveness of FL methods. To this end, we propose the first Federated Legal LLM (FedJudge) framework, which fine-tunes Legal LLMs efficiently and effectively. Specifically, FedJudge uses parameter-efficient fine-tuning methods to update only a small number of additional parameters during FL training. In addition, we explore continual learning methods to preserve the global model's important parameters when training local clients, mitigating the problem of data distribution shifts. Extensive experimental results on three real-world datasets clearly validate the effectiveness of FedJudge. Code is released at https://github.com/yuelinan/FedJudge.
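To make the training recipe in the abstract concrete, the sketch below simulates one plausible reading of it: each client fine-tunes only LoRA-style low-rank adapters on a frozen base model, the server averages the adapter weights FedAvg-style, and a simple L2 penalty toward the global adapter stands in for the continual-learning regularization that preserves the global model's important parameters. Everything here is illustrative; the names (`LoRALinear`, `local_train`, `fedavg`), the MSE objective standing in for the language-modeling loss, and the synthetic tensors standing in for per-client legal corpora are assumptions, not the released FedJudge implementation (see the linked repository for that).

```python
# Hypothetical sketch of the training loop described in the abstract, not the
# actual FedJudge code: clients train only low-rank (LoRA-style) adapters,
# the server averages them (FedAvg), and an L2 penalty toward the global
# adapter approximates the continual-learning regularization.
import copy
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: W x + scale * B A x."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # only the adapter is trained
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

def adapter_params(model):
    return {n: p for n, p in model.named_parameters() if p.requires_grad}

def local_train(model, batches, global_adapter, lam=0.1, lr=1e-3):
    """One client round: task loss plus a penalty toward the global adapter."""
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.MSELoss()                   # stand-in for the LM objective
    for x, y in batches:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        for n, p in adapter_params(model).items():
            loss = loss + lam * (p - global_adapter[n]).pow(2).sum()
        loss.backward()
        opt.step()
    return {n: p.detach().clone() for n, p in adapter_params(model).items()}

def fedavg(client_states):
    """Server step: average only the lightweight adapter weights."""
    return {n: torch.stack([s[n] for s in client_states]).mean(dim=0)
            for n in client_states[0]}

# --- toy run with synthetic data standing in for per-client legal corpora ---
torch.manual_seed(0)
model = LoRALinear(nn.Linear(16, 16))
global_adapter = {n: p.detach().clone() for n, p in adapter_params(model).items()}
for round_ in range(3):
    states = []
    for _ in range(2):                       # two simulated clients
        local = copy.deepcopy(model)
        batches = [(torch.randn(8, 16), torch.randn(8, 16)) for _ in range(5)]
        states.append(local_train(local, batches, global_adapter))
    global_adapter = fedavg(states)
    with torch.no_grad():                    # broadcast the averaged adapter
        for n, p in adapter_params(model).items():
            p.copy_(global_adapter[n])
print("finished", round_ + 1, "federated rounds")
```

One design note: because only the low-rank adapters travel between clients and server, each round exchanges a few thousand parameters rather than the full model, which is the efficiency argument the abstract makes. The uniform L2 penalty is the simplest regularizer; an importance-weighted penalty (in the style of elastic weight consolidation) would match the abstract's "important parameters" phrasing more closely.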