Qwen2.5 Technical Report (2412.15115v1)

Published 19 Dec 2024 in cs.CL

Abstract: In this report, we introduce Qwen2.5, a comprehensive series of LLMs designed to meet diverse needs. Compared to previous iterations, Qwen 2.5 has been significantly improved during both the pre-training and post-training stages. In terms of pre-training, we have scaled the high-quality pre-training datasets from the previous 7 trillion tokens to 18 trillion tokens. This provides a strong foundation for common sense, expert knowledge, and reasoning capabilities. In terms of post-training, we implement intricate supervised finetuning with over 1 million samples, as well as multistage reinforcement learning. Post-training techniques enhance human preference, and notably improve long text generation, structural data analysis, and instruction following. To handle diverse and varied use cases effectively, we present Qwen2.5 LLM series in rich sizes. Open-weight offerings include base and instruction-tuned models, with quantized versions available. In addition, for hosted solutions, the proprietary models currently include two mixture-of-experts (MoE) variants: Qwen2.5-Turbo and Qwen2.5-Plus, both available from Alibaba Cloud Model Studio. Qwen2.5 has demonstrated top-tier performance on a wide range of benchmarks evaluating language understanding, reasoning, mathematics, coding, human preference alignment, etc. Specifically, the open-weight flagship Qwen2.5-72B-Instruct outperforms a number of open and proprietary models and demonstrates competitive performance to the state-of-the-art open-weight model, Llama-3-405B-Instruct, which is around 5 times larger. Qwen2.5-Turbo and Qwen2.5-Plus offer superior cost-effectiveness while performing competitively against GPT-4o-mini and GPT-4o respectively. Additionally, as the foundation, Qwen2.5 models have been instrumental in training specialized models such as Qwen2.5-Math, Qwen2.5-Coder, QwQ, and multimodal models.

Technical Overview of the Qwen2.5 LLM Series

This technical report introduces Qwen2.5, an advanced series of LLMs developed by the Qwen team. Qwen2.5 has been designed to serve a wide range of applications, building on previous iterations from the Qwen series. The models demonstrate significant enhancements in performance through improvements in both pre-training and post-training methodologies.

Key Enhancements in Qwen2.5

  1. Pre-training Advancements:
    • The Qwen2.5 models are pre-trained on an extensive dataset comprising 18 trillion tokens. This sizable increase from the previous 7 trillion tokens used in Qwen2 enables the models to acquire a more robust foundation in world knowledge, common sense, and expert reasoning capabilities.
    • The pre-training corpus is complemented by a strategic mixture of domain-specific datasets covering areas such as coding and mathematics, strengthening the models' capabilities in these fields.
  2. Post-training Techniques:
    • Supervised fine-tuning on more than 1 million samples is combined with multistage reinforcement learning, improving alignment with human preferences and notably strengthening long-text generation, structured data analysis, and instruction following.
  3. Model Configurations:
    • Qwen2.5 offers a diverse range of open-weight models in sizes from 0.5B to 72B parameters to suit different resource constraints, while the proprietary models, Qwen2.5-Turbo and Qwen2.5-Plus, use Mixture-of-Experts (MoE) architectures to optimize computational efficiency.
    • Both standard and quantized versions of the instruction-tuned models are available, providing flexibility in deployment (see the usage sketches after this list).
  4. Evaluation Performance:
    • Extensive benchmarks show that Qwen2.5 models deliver top-tier performance across language understanding, reasoning, mathematics, and coding tasks. Notably, Qwen2.5-72B-Instruct is competitive with the much larger state-of-the-art open-weight model Llama-3-405B-Instruct, despite having roughly one-fifth as many parameters.
    • Qwen2.5's proprietary models also offer cost-effective hosted options while maintaining high performance, competing with GPT-4o-mini and GPT-4o respectively.
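As a concrete illustration of the open-weight configurations above, the sketch below loads an instruction-tuned Qwen2.5 checkpoint with Hugging Face Transformers. This is a minimal sketch, assuming the repository id Qwen/Qwen2.5-7B-Instruct and arbitrary generation settings; the quantized variants are loaded the same way under their own repository ids.

```python
# Minimal sketch (not from the report): loading an open-weight Qwen2.5
# instruction-tuned model with Hugging Face Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # assumed repository id; pick a size that fits your hardware

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# Build the chat-formatted prompt expected by the instruction-tuned variants.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the key changes in Qwen2.5."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion; max_new_tokens is an illustrative choice.
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```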
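The hosted MoE variants, Qwen2.5-Turbo and Qwen2.5-Plus, are served through Alibaba Cloud Model Studio rather than released as weights. The snippet below sketches how such a service is typically queried through an OpenAI-compatible client; the base URL, model id, and environment-variable name are assumptions for illustration and should be taken from the Model Studio documentation.

```python
# Hedged sketch: querying a hosted Qwen2.5 MoE model through an
# OpenAI-compatible endpoint. The base URL, model id, and env-var name
# below are placeholders, not values confirmed by the report.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # assumed environment variable
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="qwen2.5-turbo",  # placeholder id for the hosted Qwen2.5-Turbo variant
    messages=[{"role": "user", "content": "Give a one-sentence summary of the Qwen2.5 release."}],
)
print(response.choices[0].message.content)
```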

Implications for AI Development

The enhancements observed in Qwen2.5 illustrate critical steps toward improving LLM efficacy across diverse applications. The expanded pre-training corpus and more sophisticated post-training procedures yield models with demonstrably stronger understanding and instruction-following abilities.

  • Practical Applications: Qwen2.5 models are positioned to benefit sectors requiring large-scale language processing capabilities, such as automated customer service, content creation, and technical support.
  • Theoretical Implications: The scaling of model size and data, together with improved training techniques, supports ongoing hypotheses about capability gains from scale and underscores the role of reinforcement learning in aligning models with human intentions.
  • Future Developments: As machine learning models continue to expand their capabilities, additional focus will likely center on enhancing cross-domain capabilities and developing models with even longer context comprehension. Future iterations may aim to reduce computational demands through innovations in sparse computation and expand the models' multimodal processing abilities.

In conclusion, Qwen2.5 represents a significant step in the evolution of LLMs, balancing model capability, size, and breadth of application. It underscores the pivotal role of comprehensive datasets and methodological refinement in advancing the state of the art in AI.

Authors (44)
  1. Qwen
  2. An Yang
  3. Baosong Yang
  4. Beichen Zhang
  5. Binyuan Hui
  6. Bo Zheng
  7. Bowen Yu
  8. Chengyuan Li
  9. Dayiheng Liu
  10. Fei Huang
  11. Haoran Wei
  12. Huan Lin
  13. Jian Yang
  14. Jianhong Tu
  15. Jianwei Zhang
  16. Jianxin Yang
  17. Jiaxi Yang
  18. Jingren Zhou
  19. Junyang Lin