
Domain Adaptation of Llama3-70B-Instruct through Continual Pre-Training and Model Merging: A Comprehensive Evaluation (2406.14971v1)

Published 21 Jun 2024 in cs.CL, cs.AI, and cs.LG

Abstract: We conducted extensive experiments on domain adaptation of the Meta-Llama-3-70B-Instruct model on SEC data, exploring its performance on both general and domain-specific benchmarks. Our focus included continual pre-training (CPT) and model merging, aiming to enhance the model's domain-specific capabilities while mitigating catastrophic forgetting. Through this study, we evaluated the impact of integrating financial regulatory data into a robust LLM and examined the effectiveness of our model merging techniques in preserving and improving the model's instructive abilities. The model is accessible on Hugging Face at https://huggingface.co/arcee-ai/Llama-3-SEC-Base. This is an intermediate checkpoint of our final model, which has seen 20B tokens so far; the full model is still in training. This is a preprint technical report with thorough evaluations to understand the entire process.
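To make the merging step concrete, the sketch below shows the simplest form of checkpoint merging: parameter-wise linear interpolation between the continually pre-trained (CPT) checkpoint and the original instruct model, which is one way to trade acquired domain knowledge against the instruction-following ability that CPT can erode. This is an illustration under assumptions, not the paper's pipeline: the interpolation weight, the output path, and the use of plain weight averaging are all assumed, and a 70B-scale merge would in practice use dedicated tooling (e.g., Arcee's MergeKit) and possibly a more sophisticated merge algorithm.

```python
# Minimal sketch of checkpoint merging via linear interpolation.
# NOT the authors' exact method: ALPHA, the output path, and plain
# weight averaging are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM

BASE = "meta-llama/Meta-Llama-3-70B-Instruct"  # original instruct model
CPT = "arcee-ai/Llama-3-SEC-Base"              # CPT checkpoint named in the abstract
ALPHA = 0.5                                    # assumed interpolation weight

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
cpt = AutoModelForCausalLM.from_pretrained(CPT, torch_dtype=torch.bfloat16)

cpt_state = cpt.state_dict()
merged_state = {
    # Interpolate each parameter tensor: ALPHA=0 keeps the instruct
    # model unchanged, ALPHA=1 keeps the CPT model unchanged.
    name: (1.0 - ALPHA) * param + ALPHA * cpt_state[name]
    for name, param in base.state_dict().items()
}

base.load_state_dict(merged_state)
base.save_pretrained("llama-3-sec-merged")  # hypothetical output path
```

Sweeping ALPHA against both general and domain-specific benchmarks is one natural way to probe the trade-off between domain gains and catastrophic forgetting that the paper evaluates.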

Authors (11)
  1. Shamane Siriwardhana (8 papers)
  2. Mark McQuade (5 papers)
  3. Thomas Gauthier (36 papers)
  4. Lucas Atkins (3 papers)
  5. Fernando Fernandes Neto (6 papers)
  6. Luke Meyers (4 papers)
  7. Anneketh Vij (2 papers)
  8. Tyler Odenthal (1 paper)
  9. Charles Goddard (5 papers)
  10. Mary MacCarthy (1 paper)
  11. Jacob Solawetz (5 papers)
Citations (3)