Mixture of insighTful Experts (MoTE): The Synergy of Thought Chains and Expert Mixtures in Self-Alignment (2405.00557v3)

Published 1 May 2024 in cs.CL and cs.AI

Abstract: As the capabilities of LLMs have expanded dramatically, aligning these models with human values presents a significant challenge. Traditional alignment strategies rely heavily on human intervention, such as Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF), or on the self-alignment capacities of LLMs, which usually depend on a strong LLM's emergent ability to improve its own flawed answers. To address these challenges, we propose a novel self-alignment method that utilizes a Chain of Thought (CoT) approach, termed AlignCoT. This method comprises three stages: Question Analysis, Answer Guidance, and Safe Answer production. It is designed to enable LLMs to generate high-quality, safe responses throughout various stages of their development. Furthermore, we introduce the Mixture of insighTful Experts (MoTE) architecture, which applies a mixture of experts to enhance each component of the AlignCoT process, markedly increasing alignment efficiency. The MoTE approach not only outperforms existing methods in aligning LLMs with human values but also highlights the value of self-generated data, yielding dual gains in alignment quality and training efficiency.
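The abstract names the three AlignCoT stages but not their prompts or implementation. The sketch below is a minimal, hypothetical Python rendering of that staged pipeline; the prompt wording and the `generate` helper are assumptions for illustration, not the paper's actual method.

```python
# Minimal sketch of the AlignCoT staged pipeline. Assumptions throughout:
# the prompt wording and the `generate` helper are illustrative; the paper
# specifies only the stage names, not exact prompts or an API.

def generate(prompt: str) -> str:
    """Placeholder for a call to any LLM completion backend (assumption)."""
    raise NotImplementedError

def align_cot(question: str) -> str:
    # Stage 1: Question Analysis -- surface intent and potential safety risks.
    analysis = generate(
        f"Analyze the following question and note any safety concerns:\n{question}"
    )
    # Stage 2: Answer Guidance -- derive how a safe, helpful answer should look.
    guidance = generate(
        f"Question: {question}\nAnalysis: {analysis}\n"
        "Outline how to answer this question safely and helpfully."
    )
    # Stage 3: Safe Answer -- produce the final response conditioned on both.
    return generate(
        f"Question: {question}\nAnalysis: {analysis}\nGuidance: {guidance}\n"
        "Write the final, safe answer."
    )
```

For the MoTE side, the abstract says only that a mixture of experts enhances each AlignCoT component. The following is a hedged sketch of one plausible reading, where a learned gate softly combines stage-specific experts; the expert count, gating scheme, and plain linear experts are assumptions, and the paper's architecture may differ (for example, LoRA-style experts).

```python
import torch
import torch.nn as nn

class MoTELayer(nn.Module):
    """Hypothetical mixture-of-experts layer with one expert per stage.

    Sketch only: the gate, expert form, and expert count are assumptions
    made for illustration, not the paper's specification.
    """
    def __init__(self, dim: int, n_experts: int = 3):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Gate produces per-token mixing weights over the experts.
        weights = torch.softmax(self.gate(h), dim=-1)            # (..., n_experts)
        # Run every expert and combine their outputs with the gate weights.
        outs = torch.stack([e(h) for e in self.experts], dim=-1) # (..., dim, n_experts)
        return (outs * weights.unsqueeze(-2)).sum(dim=-1)        # (..., dim)
```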

Authors (11)
  1. Zhili Liu
  2. Yunhao Gou
  3. Kai Chen
  4. Lanqing Hong
  5. Jiahui Gao
  6. Fei Mi
  7. Yu Zhang
  8. Zhenguo Li
  9. Xin Jiang
  10. Qun Liu
  11. James T. Kwok