Learning or Self-aligning? Rethinking Instruction Fine-tuning (2402.18243v3)
Abstract: Instruction Fine-tuning (IFT) is a critical phase in building Large Language Models (LLMs). Prior work has mainly focused on IFT's role in transferring behavioral norms and in learning additional world knowledge; however, the underlying mechanisms of IFT remain poorly understood. In this paper, we design a knowledge intervention framework that decouples the potential underlying factors of IFT, enabling each factor to be analyzed individually. Surprisingly, our experiments reveal that attempting to learn additional world knowledge through IFT often fails to yield positive effects and can even be markedly harmful. We further find that maintaining internal knowledge consistency before and after IFT is a critical factor for successful IFT. Our findings illuminate the underlying mechanisms of IFT and lend strong support to several recent and prospective lines of work.
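The abstract's notion of "internal knowledge consistency before and after IFT" can be made concrete as an agreement rate: how often the base model and its instruction-tuned counterpart select the same answer on a fixed probe set. Below is a minimal sketch of such a measurement, assuming a Hugging Face transformers setup; the model names, the probe data, and the likelihood-based answer scoring are all placeholder assumptions for illustration, not the paper's actual intervention framework.

```python
# Hypothetical sketch: estimating "knowledge consistency" between a base model
# and its IFT counterpart by comparing their answers on a fixed multiple-choice
# probe set. Model names and probe data are placeholders, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def pick_choice(model, tokenizer, question, choices):
    """Return the index of the choice the model assigns the highest likelihood."""
    scores = []
    for choice in choices:
        text = f"Question: {question}\nAnswer: {choice}"
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)
        scores.append(-out.loss.item())  # higher mean log-likelihood is better
    return max(range(len(scores)), key=scores.__getitem__)

def consistency(base, tuned, tokenizer, probes):
    """Fraction of probes on which the base and IFT models pick the same answer."""
    same = sum(
        pick_choice(base, tokenizer, q, c) == pick_choice(tuned, tokenizer, q, c)
        for q, c in probes
    )
    return same / len(probes)

if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")         # placeholder
    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf") # placeholder
    tuned = AutoModelForCausalLM.from_pretrained("my-org/llama-2-7b-ift")   # placeholder
    probes = [
        ("What is the capital of France?", ["Paris", "Lyon", "Marseille", "Nice"]),
    ]
    print(f"knowledge consistency: {consistency(base, tuned, tok, probes):.2%}")
```

Scoring the full prompt-plus-answer likelihood is a deliberate simplification here; a more careful probe would score only the answer tokens, but the agreement-rate idea is the same.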