Jamba: A Hybrid Transformer-Mamba Language Model (2403.19887v2)

Published 28 Mar 2024 in cs.CL and cs.LG

Abstract: We present Jamba, a new base LLM based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU. Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard LLM benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.

Jamba: Unveiling a Hybrid Transformer-Mamba Architecture with MoE for Enhanced LLM Performance

Introduction to Jamba

The recently introduced Jamba model represents a significant stride in LLM architecture, interleaving Transformer and Mamba layers and adding a mixture-of-experts (MoE) component to some of them. This hybrid design draws on the strengths of both model families, increasing model capacity and performance while keeping memory usage and computational cost manageable. The implemented configuration is designed to fit on a single 80GB GPU, making it practical for large-scale language modeling.

Model Architecture

The Jamba architecture combines Transformer layers, built around the attention mechanism, with Mamba layers, a class of state-space models known for handling sequence data efficiently, and augments the combination with MoE layers that raise model capacity. Each Jamba block interleaves Mamba and attention layers, with MoE applied to some of the MLPs. This structure offers flexibility in model design, making it possible to balance memory footprint, computational demands, and overall performance. The ratio of attention to Mamba layers is configurable, allowing adjustments to specific resource and objective needs.
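
For concreteness, the sketch below shows how such a block can be assembled. It is a minimal illustration, assuming the configuration described in the paper (eight layers per block, a 1:7 attention-to-Mamba ratio, MoE replacing the dense MLP in every other layer); the MambaMixer class is a runnable placeholder rather than an actual Mamba implementation, and all dimensions are toy values.

```python
# Minimal sketch of a Jamba-style block, not the reference implementation.
import torch
import torch.nn as nn


class MambaMixer(nn.Module):
    """Placeholder for a Mamba (selective SSM) layer: a gated projection so the
    sketch runs end to end. Swap in a real Mamba implementation in practice."""

    def __init__(self, d_model):
        super().__init__()
        self.in_proj = nn.Linear(d_model, 2 * d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):
        z, gate = self.in_proj(x).chunk(2, dim=-1)
        return self.out_proj(z * torch.sigmoid(gate))


class Attention(nn.Module):
    """Standard multi-head self-attention (causal masking omitted for brevity)."""

    def __init__(self, d_model, n_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x, need_weights=False)
        return out


class MoE(nn.Module):
    """Top-k routed mixture-of-experts MLP (naive dense routing, for clarity)."""

    def __init__(self, d_model, n_experts=16, top_k=2, d_ff=None):
        super().__init__()
        d_ff = d_ff or 4 * d_model
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.top_k = top_k

    def forward(self, x):
        scores = self.router(x).softmax(dim=-1)          # (batch, tokens, experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # each token picks top-k experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e).unsqueeze(-1)
                out = out + mask * weights[..., k:k + 1] * expert(x)
        return out


class JambaLayer(nn.Module):
    """One (sequence-mixer, MLP) pair with pre-norm residual connections.
    The paper uses RMSNorm; LayerNorm is used here to keep the sketch portable."""

    def __init__(self, mixer, mlp, d_model):
        super().__init__()
        self.mixer, self.mlp = mixer, mlp
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        x = x + self.mixer(self.norm1(x))
        return x + self.mlp(self.norm2(x))


def jamba_block(d_model=512, n_layers=8, attn_every=8, moe_every=2, n_heads=8):
    layers = []
    for i in range(n_layers):
        # One attention layer per `attn_every` layers; the rest use Mamba.
        mixer = Attention(d_model, n_heads) if (i + 1) % attn_every == 0 else MambaMixer(d_model)
        # MoE replaces the dense MLP in every `moe_every`-th layer.
        mlp = MoE(d_model) if (i + 1) % moe_every == 0 else nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.SiLU(), nn.Linear(4 * d_model, d_model))
        layers.append(JambaLayer(mixer, mlp, d_model))
    return nn.Sequential(*layers)


x = torch.randn(2, 16, 512)    # (batch, tokens, hidden) -- toy dimensions
print(jamba_block()(x).shape)  # torch.Size([2, 16, 512])
```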

Performance Insights

Jamba's hybrid architecture performs strongly on standard benchmarks and is particularly effective on tasks requiring long contexts, handling context lengths of up to 256K tokens. Across the reported evaluations it matches or exceeds leading models such as Mixtral-8x7B and Llama-2 70B while supporting significantly longer contexts. It achieves this with a much smaller KV cache footprint and higher throughput, a substantial advantage for the practical deployment of large-scale LLMs.
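
The KV-cache saving follows from the fact that only attention layers store keys and values. The back-of-the-envelope calculation below uses illustrative dimensions (not Jamba's published configuration) to show the effect of keeping roughly one attention layer in eight:

```python
# KV-cache size ~= 2 (K and V) * attention_layers * kv_heads * head_dim
#                  * sequence_length * batch * bytes_per_element.
# The dimensions below are illustrative placeholders, not the published Jamba config.
def kv_cache_gib(attn_layers, kv_heads=8, head_dim=128, seq_len=256_000,
                 batch=1, bytes_per_elem=2):  # 2 bytes for bf16/fp16
    return 2 * attn_layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem / 2**30

full_attention = kv_cache_gib(attn_layers=32)  # every layer stores K/V
hybrid         = kv_cache_gib(attn_layers=4)   # one attention layer per eight, same depth
print(f"{full_attention:.0f} GiB vs {hybrid:.0f} GiB")  # ~31 GiB vs ~4 GiB
```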

Computational Efficiency

In addition to its impressive performance on benchmarks, Jamba stands out for its computational efficiency. Its unique architecture supports much larger batch processing and extended context lengths within single-GPU environments, a critical consideration for real-world applications. This efficiency is particularly pronounced in scenarios with extended sequence lengths, where Jamba's throughput far surpasses that of comparable models, highlighting its practical advantages in handling long-context tasks.

Future Implications and Research Directions

The introduction of Jamba opens up new avenues for the development of efficient and powerful LLMs. Its hybrid architecture provides a template for balancing the computational and memory requirements of large models, a common challenge in the field. The successful integration of MoE layers into this setup further underscores the potential for such techniques to expand model capacity without proportionately increasing computational demands. As the first production-grade model of its kind, Jamba sets a precedent for future research and development in the field of hybrid LLMs.
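
A rough parameter count makes this concrete: with top-2 routing, per-token compute scales with the two experts a token is routed to, while total capacity scales with all experts. The widths below are illustrative placeholders, not Jamba's published configuration.

```python
# Illustrative MoE accounting: stored parameters grow with the number of experts,
# while the parameters touched per token grow only with top-k.
d_model, d_ff = 4096, 14336              # placeholder hidden / feed-forward widths
dense_mlp = 3 * d_model * d_ff           # gated (SwiGLU-style) MLP has three weight matrices

n_experts, top_k = 16, 2
moe_total  = n_experts * dense_mlp       # parameters stored in the MoE layer
moe_active = top_k * dense_mlp           # parameters used for any single token

print(f"total MoE params per layer:  {moe_total / 1e9:.2f}B")   # ~2.82B (16x the dense MLP)
print(f"active params per token:     {moe_active / 1e9:.2f}B")  # ~0.35B (only 2x)
```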

Concluding Remarks

Jamba represents a significant advance in language modeling, effectively harnessing the strengths of Transformer and Mamba architectures alongside MoE components. The hybrid model achieves state-of-the-art performance across a broad range of benchmarks while remaining efficient and adaptable. Its release under a permissive license encourages further exploration and optimization by the research community, potentially spurring the next wave of innovations in LLM development.

Authors (22)
  1. Opher Lieber (5 papers)
  2. Barak Lenz (8 papers)
  3. Hofit Bata (4 papers)
  4. Gal Cohen (4 papers)
  5. Jhonathan Osin (4 papers)
  6. Itay Dalmedigos (5 papers)
  7. Erez Safahi (2 papers)
  8. Shaked Meirom (2 papers)
  9. Yonatan Belinkov (111 papers)
  10. Shai Shalev-Shwartz (67 papers)
  11. Omri Abend (75 papers)
  12. Raz Alon (4 papers)
  13. Tomer Asida (3 papers)
  14. Amir Bergman (2 papers)
  15. Roman Glozman (2 papers)
  16. Michael Gokhman (2 papers)
  17. Avashalom Manevich (1 paper)
  18. Nir Ratner (5 papers)
  19. Noam Rozen (4 papers)
  20. Erez Shwartz (1 paper)
Citations (135)