
Symbolic Discovery of Optimization Algorithms (2302.06675v4)

Published 13 Feb 2023 in cs.LG, cs.AI, cs.CL, cs.CV, and cs.NE

Abstract: We present a method to formulate algorithm discovery as program search, and apply it to discover optimization algorithms for deep neural network training. We leverage efficient search techniques to explore an infinite and sparse program space. To bridge the large generalization gap between proxy and target tasks, we also introduce program selection and simplification strategies. Our method discovers a simple and effective optimization algorithm, $\textbf{Lion}$ ($\textit{Evo$\textbf{L}$ved S$\textbf{i}$gn M$\textbf{o}$me$\textbf{n}$tum}$). It is more memory-efficient than Adam as it only keeps track of the momentum. Different from adaptive optimizers, its update has the same magnitude for each parameter calculated through the sign operation. We compare Lion with widely used optimizers, such as Adam and Adafactor, for training a variety of models on different tasks. On image classification, Lion boosts the accuracy of ViT by up to 2% on ImageNet and saves up to 5x the pre-training compute on JFT. On vision-language contrastive learning, we achieve 88.3% $\textit{zero-shot}$ and 91.1% $\textit{fine-tuning}$ accuracy on ImageNet, surpassing the previous best results by 2% and 0.1%, respectively. On diffusion models, Lion outperforms Adam by achieving a better FID score and reducing the training compute by up to 2.3x. For autoregressive, masked language modeling, and fine-tuning, Lion exhibits a similar or better performance compared to Adam. Our analysis of Lion reveals that its performance gain grows with the training batch size. It also requires a smaller learning rate than Adam due to the larger norm of the update produced by the sign function. Additionally, we examine the limitations of Lion and identify scenarios where its improvements are small or not statistically significant. Lion is also successfully deployed in production systems such as Google search ads CTR model.

Authors (12)
  1. Xiangning Chen (17 papers)
  2. Chen Liang (140 papers)
  3. Da Huang (67 papers)
  4. Esteban Real (15 papers)
  5. Kaiyuan Wang (18 papers)
  6. Yao Liu (116 papers)
  7. Hieu Pham (35 papers)
  8. Xuanyi Dong (28 papers)
  9. Thang Luong (9 papers)
  10. Cho-Jui Hsieh (211 papers)
  11. Yifeng Lu (16 papers)
  12. Quoc V. Le (128 papers)
Citations (278)

Summary

  • The paper introduces a symbolic program search framework that discovers novel optimization algorithms, culminating in the sign-based Lion optimizer.
  • The methodology employs evolutionary search with warm-start and restart strategies to navigate an infinite and sparse program space for deep learning tasks.
  • Empirical results show Lion delivers up to 2% higher ImageNet accuracy for ViT and up to 5x less JFT pre-training compute, outperforming widely used optimizers such as Adam and Adafactor.

Symbolic Discovery of Optimization Algorithms: An Expert Overview

The paper "Symbolic Discovery of Optimization Algorithms" introduces a novel approach to discovering optimization algorithms specifically tailored for deep neural network training. The authors frame the problem as a symbolic program search over an infinite and sparse space, harnessing advanced search techniques to expose novel optimizers such as the "Lion" optimizer (EvoLved S\textbf{ign Momentum). Lion is designed to be both simple and efficient, optimizing various machine learning tasks with significant performance improvements.

Overview of Methods and Findings

The research articulates a systematic approach in which algorithm discovery is cast as a program search problem. The search is carried out with evolutionary search, supplemented by warm-start and restart strategies to cope with the vast and sparse search space. Notably, the discovery pipeline includes program selection and simplification stages, which address the generalization gap between proxy and target tasks.
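To make the search loop concrete, the sketch below is a highly simplified, hypothetical illustration of a regularized-evolution-style loop with warm-starting and periodic restarts, in the spirit of the paper's method. The `mutate` and `proxy_fitness` helpers are placeholders standing in for the paper's program mutations and low-cost proxy-task evaluation; none of this is the authors' actual implementation.

```python
import random

def regularized_evolution(initial_program, mutate, proxy_fitness,
                          population_size=100, tournament_size=25,
                          cycles=10_000, restart_every=2_000):
    """Illustrative evolutionary search with warm-start and restarts.

    `initial_program` warm-starts the population (e.g. an AdamW-like program);
    `mutate` and `proxy_fitness` are placeholders for program mutation and
    cheap proxy-task evaluation.
    """
    population = [(initial_program, proxy_fitness(initial_program))]
    best = population[0]

    for cycle in range(1, cycles + 1):
        # Tournament selection: mutate the fittest of a random sample.
        sample = random.sample(population, min(tournament_size, len(population)))
        parent = max(sample, key=lambda pf: pf[1])[0]
        child = mutate(parent)
        population.append((child, proxy_fitness(child)))

        # Age-based regularization: discard the oldest individual, not the worst.
        if len(population) > population_size:
            population.pop(0)

        best = max([best, population[-1]], key=lambda pf: pf[1])

        # Restart from the best program found so far to escape local optima.
        if cycle % restart_every == 0:
            population = [best]

    return best
```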

The optimizer Lion, discovered through this symbolic search, operates fundamentally differently from traditional adaptive optimizers like Adam. Lion uses the sign operation to compute updates, which yields a uniform update magnitude across parameters and reduces memory overhead, since only the momentum is stored. This distinguishes it from adaptive optimizers such as Adam and RMSProp, whose per-parameter update magnitudes vary with their second-moment estimates.
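For concreteness, below is a minimal NumPy sketch of the Lion update as described in the paper: the optimizer keeps only a momentum buffer, the update direction is the sign of an interpolation between that momentum and the current gradient, and decoupled weight decay is applied. The defaults β1 = 0.9, β2 = 0.99 follow the paper; the toy quadratic objective in the usage snippet is purely illustrative.

```python
import numpy as np

def lion_update(theta, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion step: sign of interpolated momentum, then momentum update."""
    # Update direction: sign of an interpolation between momentum and gradient.
    update = np.sign(beta1 * m + (1.0 - beta1) * grad)
    # Decoupled weight decay, applied as in AdamW.
    theta = theta - lr * (update + wd * theta)
    # Momentum is the only optimizer state; it tracks the gradient with beta2.
    m = beta2 * m + (1.0 - beta2) * grad
    return theta, m

# Toy usage on f(x) = 0.5 * ||x||^2 (illustrative only).
theta = np.ones(4)
m = np.zeros_like(theta)
for _ in range(100):
    grad = theta                      # gradient of the toy objective
    theta, m = lion_update(theta, grad, m, lr=1e-2)
```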

Extensive empirical results underscore the effectiveness of the Lion optimizer compared to several widely used optimizers. For instance, Lion boosts the accuracy of Vision Transformer (ViT) models on ImageNet by up to 2%, achieves better results with up to 5x less compute during pre-training, and yields superior Fréchet Inception Distance (FID) scores when training diffusion models.

Experimental Results

The analysis of Lion's performance spans a diverse set of tasks, encompassing image classification, vision-language contrastive learning, diffusion models, and language modeling. Key results include:

  • Image Classification: Lion significantly improves performance for ViT architectures across benchmarks such as ImageNet, ImageNet ReaL, and V2, demonstrating robustness even under varied data conditions.
  • Vision-Language Learning: In vision-language tasks like BASIC and LiT, the Lion optimizer enhances zero-shot and fine-tuning accuracy, surpassing previous state-of-the-art results.
  • Diffusion and Language Models: Lion achieves better FID scores on diffusion models while cutting training compute by up to 2.3x, and performs comparably to or better than Adam on autoregressive and masked language modeling, with lower memory overhead.

An important observation from the experiments is that Lion's performance advantage grows with batch size, which aligns well with common practice in large-scale training. Because the sign-based update has a larger norm, Lion also calls for a smaller learning rate than Adam.
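A practical consequence is that an AdamW configuration is usually not transferred to Lion unchanged: the learning rate is shrunk by some factor and, because the effective decay is the product of learning rate and weight decay, the weight-decay coefficient is scaled up correspondingly. The snippet below is a heuristic sketch of that adaptation; the specific shrink factor is an assumption chosen for illustration rather than a value prescribed here.

```python
def adamw_to_lion_hparams(adamw_lr, adamw_wd, shrink=5.0):
    """Heuristic adaptation of AdamW hyperparameters for Lion.

    The sign-based update has a larger norm, so the learning rate is shrunk
    (the factor here is an illustrative assumption); weight decay is scaled
    up by the same factor so the effective decay lr * wd stays comparable.
    """
    return {
        "lr": adamw_lr / shrink,
        "weight_decay": adamw_wd * shrink,
        "betas": (0.9, 0.99),  # Lion's default momentum coefficients
    }

# Example: adapting a typical AdamW setting.
print(adamw_to_lion_hparams(adamw_lr=1e-3, adamw_wd=0.1))
```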

Theoretical Implications and Future Directions

The symbolic approach to discovering optimization algorithms points towards a future where the design of learning algorithms could be fully automated, moving beyond the current reliance on human intuition. The paper contributes to the broader discussion on AutoML, where the symbolic search space can serve as a foundation to discover and adapt algorithms for a wide range of tasks beyond deep learning optimization.

Future research can extend the methodology by exploring a broader range of program constructs, such as conditionals, loops, and new function definitions, that go beyond the limitations acknowledged in this paper. Further exploration of differentiable program components and second-order optimization methods could lead to the discovery of more specialized optimizers.

Conclusion

The paper presents a methodologically sound and practically effective contribution to the discovery of optimization algorithms through symbolic program search. The introduction of the Lion optimizer marks a considerable step forward in efficiency and performance. This research lays pivotal groundwork for subsequent work on automated algorithm design and points toward a shift in how optimization algorithms for machine learning are designed.
