SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training (2501.17161v2)

Published 28 Jan 2025 in cs.AI, cs.CV, and cs.LG

Abstract: Supervised fine-tuning (SFT) and reinforcement learning (RL) are widely used post-training techniques for foundation models. However, their roles in enhancing model generalization capabilities remain unclear. This paper studies the difference between SFT and RL on generalization and memorization, focusing on text-based rule variants and visual variants. We introduce GeneralPoints, an arithmetic reasoning card game, and adopt V-IRL, a real-world navigation environment, to assess how models trained with SFT and RL generalize to unseen variants in both textual and visual domains. We show that RL, especially when trained with an outcome-based reward, generalizes across both rule-based textual and visual variants. SFT, in contrast, tends to memorize training data and struggles to generalize to out-of-distribution scenarios. Further analysis reveals that RL improves the model's underlying visual recognition capabilities, contributing to its enhanced generalization in the visual domain. Despite RL's superior generalization, we show that SFT remains essential for effective RL training; SFT stabilizes the model's output format, enabling subsequent RL to achieve its performance gains. These findings demonstrate the capability of RL for acquiring generalizable knowledge in complex, multi-modal tasks.

Summary

  • The paper demonstrates that reinforcement learning (RL) significantly improves generalization in foundation models compared to supervised fine-tuning (SFT), which tends to memorize training data.
  • The paper conducts rigorous experiments on arithmetic reasoning and spatial navigation tasks in both unimodal and multimodal settings, showing RL's out-of-distribution improvements of up to 11.0% against SFT's steep declines.
  • The paper reveals that RL improves the model's underlying visual recognition and that additional verification iterations at inference time further boost generalization, suggesting promising directions for adaptive multimodal systems.

SFT Memorizes, RL Generalizes: Insights from a Comparative Study of Foundation Model Post-training

The research paper titled "SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training" provides a detailed examination of the roles that supervised fine-tuning (SFT) and reinforcement learning (RL) play in enhancing the generalization capabilities of foundation models. The authors, Chu et al., investigate the effects of these post-training techniques, notably scrutinizing their behavior in memorization versus generalization tasks within both unimodal text-based and multimodal contexts.

Methodology and Experimental Design

The paper is an empirical exploration of how foundation models adapt to new, unseen task variants after post-training with either SFT or RL. The authors evaluate two distinct environments: the GeneralPoints (GP) task for arithmetic reasoning, and the V-IRL task for spatial navigation in realistic visual settings. For each task, two variants are explored: one relying solely on language inputs (e.g., GP-L) and the other involving both visual and language inputs (e.g., GP-VL).

The GP environment requires models to produce arithmetic equations that evaluate to a target number from the values of four playing cards, given either as textual descriptions or as images. Meanwhile, the V-IRL task challenges models to navigate real-world scenes based on textual and spatial instructions, assessing their ability to integrate visual observations into decision-making.
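To make the GP setup concrete, below is a minimal sketch of the outcome check such an environment needs, assuming the rule variant in which 'J', 'Q', and 'K' all count as 10 and the target number is 24; the function names and card encoding are illustrative, not the authors' implementation.

```python
# Illustrative GeneralPoints-style verifier (not the authors' code).
# Assumes the rule variant where 'J', 'Q', 'K' count as 10 and the target is 24.
import re

FACE_VALUES = {"A": 1, "J": 10, "Q": 10, "K": 10}

def card_value(rank: str) -> int:
    """Map a card rank ('A', '2'-'10', 'J', 'Q', 'K') to its numeric value."""
    return FACE_VALUES.get(rank, int(rank))

def check_equation(equation: str, cards: list, target: int = 24) -> bool:
    """True if `equation` uses each card value exactly once and evaluates to `target`."""
    if not re.fullmatch(r"[\d\s+\-*/().]+", equation):
        return False  # allow only plain arithmetic expressions
    used = sorted(int(n) for n in re.findall(r"\d+", equation))
    if used != sorted(card_value(c) for c in cards):
        return False  # every card must appear exactly once
    try:
        return abs(eval(equation) - target) < 1e-6
    except (SyntaxError, ZeroDivisionError):
        return False

print(check_equation("1 * 2 * 3 * 4", ["A", "2", "3", "4"]))    # True
print(check_equation("10 + 10 + 2 + 2", ["J", "Q", "2", "2"]))  # True: face cards count as 10
print(check_equation("2 * 3 * 4", ["A", "2", "3", "4"]))        # False: one card unused
```

Only the final outcome is scored here, not the reasoning trace, which is the property the outcome-based RL setup discussed below relies on.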

The core experimental approach involves:

  • Conducting a series of training sessions using SFT or RL on accessible rules/configurations before testing on new rule variants.
  • Providing models with visual and rule-based variants to test their capacity to generalize beyond memorized data.
  • Utilizing outcome-based rewards for RL, contrasting its ability to infer the underlying rules with SFT's tendency to copy training details (see the reward sketch after this list).
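As a rough illustration of the last bullet, the sketch below shows an outcome-based reward that scores only whether the final answer passes a verifier; the helper names are assumptions rather than the paper's training code, and the verifier can be the GeneralPoints-style check sketched earlier.

```python
# Sketch of an outcome-based reward (hypothetical helper names, not the
# authors' training code). The scalar depends only on whether the final
# answer satisfies the task rule, so the policy is credited for getting the
# rule right rather than for reproducing a reference solution string, which
# is what SFT's cross-entropy objective rewards.
from typing import Callable, List

def outcome_reward(answer: str, cards: List[str],
                   verify: Callable[[str, List[str]], bool]) -> float:
    """+1 if the verifier accepts the final answer, -1 otherwise."""
    return 1.0 if verify(answer, cards) else -1.0

# Usage with the GeneralPoints-style verifier sketched earlier:
#   reward = outcome_reward(model_answer, ["A", "2", "3", "4"], check_equation)
# The same reward is assigned regardless of whether the reasoning trace
# matches any demonstration seen during training.
```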

Key Findings

  1. Generalization vs. Memorization: The paper finds that RL significantly outperforms SFT in generalizing to unseen rule changes and visual variants. RL's ability to learn generalizable strategies is underscored in scenarios where SFT fails to adapt beyond its memorized training data.
  2. Numerical Insights: The paper reveals substantial numerical divergences between RL and SFT outcomes in OOD (out-of-distribution) testing. For instance, RL improves performance by 3.5% on GP-L and 11.0% on V-IRL-L beyond initial baselines, whereas SFT shows decreases of 8.1% and 79.5%, respectively.
  3. Impact on Visual Recognition: RL also enhances visual recognition capabilities within VLMs, as indicated by improvements on tasks that involve visual perception. This enhancement is crucial in complex multimodal environments where visual understanding and textual reasoning must converge.
  4. Role of SFT: Despite its less robust generalization, SFT plays a crucial stabilizing role for RL training in ensuring the model's outputs align with anticipated formats, thus supporting subsequent RL gains.
  5. Verification Iterations: Increasing the number of verification iterations at evaluation time significantly boosts generalization, suggesting that inference-time compute scaling is essential for maximizing RL's efficacy (a sketch of such a verify-and-revise loop follows this list).
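A hedged sketch of the verify-and-revise loop implied by finding 5 is given below; `generate` and `verify` stand in for the model call and the environment's checker, and the prompt-formatting details are assumptions rather than the paper's exact protocol.

```python
# Sketch of inference-time scaling via repeated verification (assumed
# interfaces, not the authors' evaluation code). Each failed attempt and the
# verifier's feedback are appended to the context before the next try.
from typing import Callable, List, Optional

def solve_with_verification(prompt: str, cards: List[str],
                            generate: Callable[[str], str],
                            verify: Callable[[str, List[str]], bool],
                            max_iterations: int = 5) -> Optional[str]:
    """Retry until an answer passes verification or the iteration budget runs out."""
    context = prompt
    for _ in range(max_iterations):
        answer = generate(context)
        if verify(answer, cards):
            return answer
        context += f"\nPrevious attempt: {answer}\nVerifier: incorrect, try again."
    return None  # no verified answer within the budget
```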

Implications and Future Directions

This research delineates the nuanced roles of SFT and RL in post-training foundation models and provides a data-driven rationale for employing RL over SFT when generalization is required. The implications extend to practical AI applications requiring adaptability, such as autonomous navigation, dynamic strategy games, and real-time data interpretation.

Future exploration may involve refining RL techniques for finer-grained control over multimodal learning tasks or augmenting SFT with strategies that encourage generalization. Moreover, assessing these results across diverse architectures and model scales would bolster the robustness of the conclusions. These considerations represent promising avenues for advancing machine learning toward increasingly adaptive intelligent systems.
