Emu3: Next-Token Prediction is All You Need (2409.18869v1)

Published 27 Sep 2024 in cs.CV

Abstract: While next-token prediction is considered a promising path towards artificial general intelligence, it has struggled to excel in multimodal tasks, which are still dominated by diffusion models (e.g., Stable Diffusion) and compositional approaches (e.g., CLIP combined with LLMs). In this paper, we introduce Emu3, a new suite of state-of-the-art multimodal models trained solely with next-token prediction. By tokenizing images, text, and videos into a discrete space, we train a single transformer from scratch on a mixture of multimodal sequences. Emu3 outperforms several well-established task-specific models in both generation and perception tasks, surpassing flagship models such as SDXL and LLaVA-1.6, while eliminating the need for diffusion or compositional architectures. Emu3 is also capable of generating high-fidelity video via predicting the next token in a video sequence. We simplify complex multimodal model designs by converging on a singular focus: tokens, unlocking great potential for scaling both during training and inference. Our results demonstrate that next-token prediction is a promising path towards building general multimodal intelligence beyond language. We open-source key techniques and models to support further research in this direction.

Emu3: Next-Token Prediction is All You Need

The Emu3 suite of models leverages next-token prediction exclusively to achieve state-of-the-art results in multimodal tasks—comprising image, text, and video generation and understanding. By tokenizing these modalities into a discrete space, Emu3 employs a single transformer architecture trained on a mixture of multimodal sequences. This approach bears notable implications for the field of artificial general intelligence (AGI), particularly in removing dependencies on diffusion and compositional models, which have historically dominated multimodal tasks.
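
Concretely, once every modality is reduced to discrete token ids, the training objective is ordinary autoregressive cross-entropy over the mixed token stream. The snippet below is a minimal PyTorch-style sketch of that loss; the tensor shapes and function name are illustrative, not the authors' code:

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Next-token prediction loss over a mixed text/image/video token stream.

    logits: (batch, seq_len, vocab_size) from a single causal transformer
    tokens: (batch, seq_len) discrete ids produced by text and vision tokenizers
    """
    # Predict token t+1 from tokens up to t: shift logits and targets by one step.
    shifted_logits = logits[:, :-1, :].contiguous()
    targets = tokens[:, 1:].contiguous()
    return F.cross_entropy(
        shifted_logits.view(-1, shifted_logits.size(-1)),
        targets.view(-1),
    )
```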

Model and Training

Emu3's architecture is rooted in transformer models akin to those used in recent LLMs such as GPT-3 and Llama-2. The key innovation lies in expanding the transformer's embedding layer to incorporate discrete vision tokens, allowing the model to process image and video data alongside text. A notable feature is the integration of a vision tokenizer, which converts high-resolution images and video frames into discrete tokens that can be processed uniformly within the transformer framework.
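
The following is a minimal sketch of this idea, assuming a VQ-style discrete vision tokenizer and hypothetical vocabulary sizes and hidden dimension (none of these values are taken from the paper):

```python
import torch
import torch.nn as nn

TEXT_VOCAB = 32_000      # hypothetical text vocabulary size
VISION_VOCAB = 32_768    # hypothetical vision codebook size
UNIFIED_VOCAB = TEXT_VOCAB + VISION_VOCAB

class UnifiedEmbedding(nn.Module):
    """One embedding table shared by text and vision tokens.

    Vision token ids are offset by TEXT_VOCAB so that both modalities
    live in a single vocabulary and can be fed to one causal transformer.
    """

    def __init__(self, d_model: int = 4096):
        super().__init__()
        self.embed = nn.Embedding(UNIFIED_VOCAB, d_model)

    def forward(self, text_ids: torch.Tensor, vision_ids: torch.Tensor) -> torch.Tensor:
        # Shift vision codes into their own id range, then concatenate with
        # the text ids to form a single multimodal sequence.
        tokens = torch.cat([text_ids, vision_ids + TEXT_VOCAB], dim=1)
        return self.embed(tokens)
```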

The training process involved two significant stages:

  1. Pre-training: Conducted on a broad set of multimodal data, including an extensive corpus of language, image, and video datasets. The emphasis was on maintaining high-resolution fidelity through various preprocessing steps, such as filtering based on resolution and aesthetic quality.
  2. Post-training: This stage refined the model’s performance in specific tasks such as vision generation and vision-language understanding, involving techniques like Quality Fine-Tuning (QFT) and Direct Preference Optimization (DPO) to align model outputs more closely with human preferences.
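
DPO itself has a standard closed-form objective; the sketch below states it given per-sequence log-probabilities under the trained policy and a frozen reference model. This is the generic DPO loss rather than the authors' implementation, and the beta value is illustrative:

```python
import torch.nn.functional as F

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta: float = 0.1):
    """Generic DPO objective from summed per-sequence log-probabilities.

    Each argument is a tensor of log-probs for the preferred ("chosen") or
    dispreferred ("rejected") response under the policy or reference model.
    """
    chosen_ratio = policy_logp_chosen - ref_logp_chosen
    rejected_ratio = policy_logp_rejected - ref_logp_rejected
    # Maximize the margin between chosen and rejected log-ratios.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```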

Empirical Evaluation

Image and Video Generation

Emu3 has shown strong results in image generation. It outperformed established task-specific models such as Stable Diffusion XL (SDXL) and performed on par with or better than DALL-E 3 across several benchmarks:

  • MSCOCO-30K: Emu3 achieved strong FID and CLIP scores, reflecting high image fidelity and close alignment between generated images and their text prompts, respectively (a sketch of the CLIP-based alignment metric follows this list).
  • GenEval and DPG-Bench: It demonstrated high capability in generating images that matched dense descriptive prompts more accurately compared to other autoregressive and diffusion models.
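
One common way to compute a CLIP score for a generated image against its prompt is the cosine similarity between CLIP image and text embeddings. The sketch below uses the public openai/clip-vit-base-patch32 checkpoint via Hugging Face Transformers, which is an assumption about tooling rather than the paper's exact evaluation setup:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between CLIP image and text embeddings (higher = better alignment)."""
    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                           attention_mask=inputs["attention_mask"])
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    return (image_emb * text_emb).sum(dim=-1).item()
```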

Human evaluation corroborated these results: Emu3 scored comparably to leading closed models and surpassed many open models in both visual quality and prompt adherence.

In video generation, Emu3 outperformed diffusion-based baselines in dynamic scene generation and temporal consistency. Evaluated on the VBench benchmark, it showed high coherence in motion dynamics and scene stability, demonstrating that high-quality video can be generated purely by predicting the next token in a video sequence conditioned on textual prompts.
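
At inference time this amounts to an ordinary autoregressive decoding loop over vision tokens. The sketch below assumes a causal transformer `model` over the unified vocabulary and a `vision_tokenizer` exposing `encode`/`decode` methods; these names are illustrative rather than the released API:

```python
import torch

@torch.no_grad()
def continue_video(model, vision_tokenizer, context_frames, num_new_tokens: int):
    """Extend a video by autoregressively predicting one discrete vision token at a time."""
    # Encode the observed frames into a flat sequence of discrete vision tokens.
    tokens = vision_tokenizer.encode(context_frames)               # (1, seq_len)
    for _ in range(num_new_tokens):
        logits = model(tokens)                                     # (1, seq_len, vocab)
        next_token = logits[:, -1].argmax(dim=-1, keepdim=True)    # greedy decoding for brevity
        tokens = torch.cat([tokens, next_token], dim=1)
    # Decode the full token sequence back into video frames.
    return vision_tokenizer.decode(tokens)
```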

Vision-Language Understanding

For tasks that combine vision and textual understanding, Emu3 bridged previously disparate architectures into a unified framework. Evaluation across multiple benchmarks, such as OCRBench, MMVet, and RealWorldQA, showed it to be consistently superior to, or on par with, models such as LLaVA-1.6 and ShareGPT4V that combine pretrained vision encoders with LLMs. This supports the view that next-token prediction can simplify model architectures while retaining strong task-specific performance.

Implications and Future Developments

Emu3's successful unification of multimodal processing through next-token prediction advances the frontier of multimodal AI, providing a streamlined and scalable alternative to diverse existing frameworks reliant on diffusion or composite models. Its architecture holds particular promise:

  • Scalability: By focusing on token-based modalities, Emu3 simplifies the architectural requirements, thereby enhancing both training and inference scalability.
  • Versatility: Its ability to handle complex, multimodal tasks via a single model endows it with substantial potential in various application areas, from interactive AI systems to automated content generation.

Future research could explore further scaling of Emu3’s architecture, refining tokenization processes to capture even higher resolution and more nuanced aspects of multimodal data. Additionally, integrating adaptive learning strategies or reinforcement learning techniques could further align model outputs with intricate human preferences, solidifying next-token prediction as a cornerstone in the evolution toward AGI.

Conclusion

Emu3 marks a significant step forward in multimodal AI research, demonstrating the viability and efficacy of next-token prediction across varied modalities. Its robust performance in generation and understanding tasks validates this unified approach, offering a promising path forward in the development of sophisticated, multimodal AI systems. By open-sourcing critical techniques and models, Emu3 fosters further exploration and enhancement in this exciting domain.

Authors (25)
  1. Xinlong Wang (56 papers)
  2. Xiaosong Zhang (29 papers)
  3. Zhengxiong Luo (16 papers)
  4. Quan Sun (31 papers)
  5. Yufeng Cui (12 papers)
  6. Jinsheng Wang (4 papers)
  7. Fan Zhang (685 papers)
  8. Yueze Wang (14 papers)
  9. Zhen Li (334 papers)
  10. Qiying Yu (13 papers)
  11. Yingli Zhao (5 papers)
  12. Yulong Ao (7 papers)
  13. Xuebin Min (2 papers)
  14. Tao Li (440 papers)
  15. Boya Wu (5 papers)
  16. Bo Zhao (242 papers)
  17. Bowen Zhang (161 papers)
  18. Liangdong Wang (10 papers)
  19. Guang Liu (30 papers)
  20. Zheqi He (5 papers)
Citations (44)