Scaling Recurrent Neural Networks to a Billion Parameters with Zero-Order Optimization

Published 23 May 2025 in cs.LG and cs.AI | (2505.17852v1)

Abstract: During inference, Recurrent Neural Networks (RNNs) scale constant in both FLOPs and GPU memory with increasing context length, as they compress all prior tokens into a fixed-size memory. In contrast, transformers scale linearly in FLOPs and, at best, linearly in memory during generation, since they must attend to all previous tokens explicitly. Despite this inference-time advantage, training large RNNs on long contexts remains impractical because standard optimization methods depend on Backpropagation Through Time (BPTT). BPTT requires retention of all intermediate activations during the forward pass, causing memory usage to scale linearly with both context length and model size. In this paper, we show that Zero-Order Optimization (ZOO) methods such as Random-vector Gradient Estimation (RGE) can successfully replace BPTT to train RNNs with convergence rates that match, or exceed BPTT by up to 19 fold, while using orders of magnitude less memory and cost, as the model remains in inference mode throughout training. We further demonstrate that Central-Difference RGE (CD-RGE) corresponds to optimizing a smoothed surrogate loss, inherently regularizing training and improving generalization. Our method matches or outperforms BPTT across three settings: (1) overfitting, (2) transduction, and (3) language modeling. Across all tasks, with sufficient perturbations, our models generalize as well as or better than those trained with BPTT, often in fewer steps. Despite the need for more forward passes per step, we can surpass BPTT wall-clock time per step using recent advancements such as FlashRNN and distributed inference.

Summary

Analyzing RNN Scalability with Zero-Order Optimization

The paper introduces an approach to scaling Recurrent Neural Networks (RNNs) to a billion parameters using Zero-Order Optimization (ZOO), specifically the Random-vector Gradient Estimation (RGE) method. Francois Chaubard and Mykel J. Kochenderfer present a framework that mitigates the limitations of Backpropagation Through Time (BPTT) by keeping the model in inference mode throughout training and exploiting the computational efficiency of ZOO.

Key Contributions and Methods

The study underscores a crucial inference-time advantage of RNNs over transformers: because an RNN compresses all prior tokens into a fixed-size state, its per-token FLOPs and memory stay constant with context length, whereas a transformer must attend to all previous tokens and therefore grows linearly in FLOPs and memory during generation. Despite this benefit, training RNNs on long contexts is constrained by the memory bottleneck inherent in BPTT, which must retain all intermediate activations from the forward pass, so memory scales with both context length and model size. The authors propose replacing BPTT with ZOO techniques, in particular Central-Difference RGE (CD-RGE), which keeps the model in inference mode throughout training; a sketch of such a training step follows below.
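A minimal sketch of what such a zero-order training step can look like, assuming a generic PyTorch model whose loss is computed with forward passes only. The names (`model`, `loss_fn`, `epsilon`, `num_perturbations`, `lr`) and the plain SGD update are illustrative assumptions, not the authors' implementation; the essential property is that everything runs under `torch.no_grad()`, and each perturbation is regenerated from a stored seed so no full-size noise vectors are kept in memory.

```python
import torch

@torch.no_grad()
def cd_rge_step(model, loss_fn, batch, epsilon=1e-3, num_perturbations=8, lr=1e-4):
    """One central-difference RGE step: forward passes only, no backpropagation.

    Illustrative sketch; hyperparameters and the plain SGD update are assumptions.
    Assumes all trainable parameters live on the same device.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    coeffs, seeds = [], []

    for _ in range(num_perturbations):
        seed = int(torch.randint(0, 2**31 - 1, (1,)))
        seeds.append(seed)

        _add_noise(params, seed, scale=+epsilon)       # theta + eps * z
        loss_plus = loss_fn(model, batch)

        _add_noise(params, seed, scale=-2 * epsilon)   # theta - eps * z
        loss_minus = loss_fn(model, batch)

        _add_noise(params, seed, scale=+epsilon)       # restore theta

        # Central-difference coefficient (L+ - L-) / (2 * eps)
        coeffs.append(float(loss_plus - loss_minus) / (2 * epsilon))

    # SGD update with the averaged estimate, re-materializing each z from its seed.
    for coeff, seed in zip(coeffs, seeds):
        _add_noise(params, seed, scale=-lr * coeff / num_perturbations)


def _add_noise(params, seed, scale):
    """Add scale * z to the parameters in place, where z ~ N(0, I) is regenerated from `seed`."""
    gen = torch.Generator(device=params[0].device).manual_seed(seed)
    for p in params:
        z = torch.randn(p.shape, generator=gen, device=p.device, dtype=p.dtype)
        p.add_(z, alpha=float(scale))
```

Because each perturbation's two forward passes are independent of all the others, they can be sharded across devices or machines, which is what makes the distributed-inference setup discussed next attractive.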

The paper reports that CD-RGE can replace BPTT with convergence rates that match BPTT or exceed it by up to 19-fold in certain settings, while using orders of magnitude less memory and cost. The authors emphasize that, because training requires only forward passes, recent advances such as FlashRNN and distributed inference make it feasible to train large RNNs on long sequences that were traditionally prohibitive, and even to surpass BPTT in wall-clock time per step. A rough back-of-envelope memory comparison is sketched below.
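To make the memory argument concrete, the following back-of-envelope estimate compares the per-step activation memory BPTT must retain with the roughly constant working memory of an inference-only zero-order step. The model shape and context length below are assumed purely for illustration and are not taken from the paper.

```python
# Rough, illustrative memory estimate; all sizes below are assumptions, not the paper's configs.
bytes_per_value = 2            # fp16 activations
hidden_size     = 4096         # assumed hidden-state width
num_layers      = 48           # assumed depth (exact parameter count depends on the cell type)
context_length  = 65_536       # assumed training context
batch_size      = 1

# BPTT must retain every timestep's hidden activations for the backward pass:
bptt_activations = batch_size * context_length * num_layers * hidden_size * bytes_per_value

# An inference-mode zero-order step only keeps the current recurrent state
# (parameters and optimizer state are needed by both methods and are omitted here):
zoo_state = batch_size * num_layers * hidden_size * bytes_per_value

print(f"BPTT activation memory: {bptt_activations / 2**30:.1f} GiB")   # ~24 GiB
print(f"ZOO recurrent state:    {zoo_state / 2**20:.2f} MiB")          # ~0.38 MiB
# The gap grows linearly with context length, which is the core of the memory argument.
```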

CD-RGE, the focal method of the study, corresponds to optimizing a smoothed surrogate of the training loss: averaging the loss over random perturbations of the weights inherently regularizes training and improves generalization (see the formulation below). The study's empirical evidence shows that CD-RGE not only matches but often exceeds BPTT on tasks involving overfitting, transduction, and language modeling. RNNs trained with CD-RGE exhibit comparable or superior performance to those trained with BPTT, often in fewer optimization steps, even though each step requires more forward passes.
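In standard zero-order notation (the paper's exact formulation and scaling constants may differ), the Gaussian-smoothed surrogate and the central-difference estimator of its gradient can be written as:

```latex
% Gaussian-smoothed surrogate of the loss L, with smoothing radius \epsilon:
L_{\epsilon}(\theta) \;=\; \mathbb{E}_{z \sim \mathcal{N}(0, I)}\!\left[ L(\theta + \epsilon z) \right]

% Central-difference RGE estimate of \nabla L_{\epsilon}(\theta) with n perturbations:
\hat{g} \;=\; \frac{1}{n} \sum_{i=1}^{n}
  \frac{L(\theta + \epsilon z_i) - L(\theta - \epsilon z_i)}{2\epsilon}\, z_i,
\qquad z_i \sim \mathcal{N}(0, I)
```

Minimizing this surrogate rather than the raw loss is what provides the implicit regularization the paper highlights; as the smoothing radius shrinks toward zero, the surrogate approaches the original loss.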

Implications and Future Directions

The implications of this research are significant, both theoretically and practically. Theoretically, it challenges the assumption that BPTT is indispensable for training large RNNs, proposing an alternative that offers memory efficiency and ease of scaling. Practically, improving the computational efficiency and viability of RNNs could render them more competitive with transformers, particularly in applications where memory resources are a limiting factor.

The study also opens avenues for future work. For instance, the trade-offs between memory usage and computation time in more diverse settings, and the potential for scaling RNNs further with other zero-order methods, warrant investigation. The practical adoption of ZOO-trained RNNs on resource-limited hardware is another direction, one that could revive interest in RNNs as an alternative to transformers under specific constraints.

In conclusion, this paper lays foundational work for exploring the capabilities of Zero-Order Optimization in training RNNs at a scale that was previously infeasible due to memory constraints. By advancing distributed optimization techniques in conjunction with CD-RGE, the study offers a promising pathway to harnessing the latent potential of RNNs, potentially contributing to more efficient models and sustainable AI practices.
