
RLSF: Reinforcement Learning via Symbolic Feedback (2405.16661v2)

Published 26 May 2024 in cs.CL, cs.AI, cs.LG, and cs.LO

Abstract: Reinforcement Learning with Human Feedback (RLHF) is considered a standard approach to fine-tuning LLMs. However, such methods often face limitations such as unsound black-box reward models, difficulties in collecting human preference data, and the reliance on sparse scalar rewards. These methods often fall short when applied to tasks that require complex domain-specific understanding. To address these challenges, we propose a new fine-tuning paradigm we refer to as Reinforcement Learning via Symbolic Feedback (RLSF), which aims to improve domain-specific understanding of LLMs more effectively than traditional reward signals. In the RLSF setting, the LLM being fine-tuned is considered an RL agent, while the environment is allowed access to reasoning or domain knowledge tools (e.g., solvers, provers, algebra systems, or knowledge bases). Crucially, in RLSF, these reasoning tools can provide feedback to the LLMs via poly-sized certificates (e.g., proofs) that characterize errors in the LLM-generated object with respect to some correctness specification. As a bonus, our RLSF approach does not require the reasoning systems we use to be differentiable. The ability of RLSF-based fine-tuning to leverage certificate-generating symbolic tools enables sound, fine-grained (token-level) reward signals to LLMs, and thus addresses the limitations of traditional reward models mentioned above. Via extensive evaluations, we show that our RLSF-based fine-tuning of LLMs outperforms traditional approaches on five different applications, namely, program synthesis from natural language pseudo-code to programming language, three chemistry tasks, and solving the Game of 24. A takeaway is that fine-tuning via RLSF enables relatively smaller LLMs to significantly outperform closed-source models that are orders of magnitude larger (e.g., GPT-4).
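
The abstract's core mechanism is that a symbolic tool checks the LLM-generated object and returns a certificate, which is then translated into fine-grained, token-level rewards. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes Python's built-in compile() as the symbolic checker, treats the reported SyntaxError location as a stand-in for a certificate, and maps it to per-line rewards. The function name token_level_rewards and the line-per-token simplification are invented for this example.

```python
# Minimal RLSF-style reward sketch (illustrative only; not the paper's code).
# Assumption: the "symbolic tool" is Python's compile(), and its SyntaxError
# location stands in for a poly-sized certificate. Real RLSF environments use
# richer tools (solvers, provers, algebra systems) and richer certificates.

from typing import List


def token_level_rewards(tokens: List[str]) -> List[float]:
    """Map a compiler 'certificate' to per-token rewards.

    tokens: the candidate program, one source line per token (simplification).
    Returns one reward per token: +1.0 everywhere if the program compiles;
    otherwise 0.0 before the reported error line and -1.0 from that line on.
    """
    source = "\n".join(tokens)
    try:
        compile(source, "<candidate>", "exec")      # symbolic correctness check
        return [1.0] * len(tokens)                  # sound positive signal
    except SyntaxError as cert:                     # certificate of the error
        bad_line = (cert.lineno or 1) - 1           # 0-indexed offending line
        return [0.0 if i < bad_line else -1.0 for i in range(len(tokens))]


if __name__ == "__main__":
    good = ["def add(a, b):", "    return a + b"]
    bad = ["def add(a, b):", "    return a +"]      # incomplete expression on line 2
    print(token_level_rewards(good))                # [1.0, 1.0]
    print(token_level_rewards(bad))                 # [0.0, -1.0]: lines at/after the error are penalised
```

In an actual RLSF fine-tuning loop, rewards of this shape would be fed to a policy-gradient update of the LLM; because the checker is only queried as a black box, it does not need to be differentiable.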

Authors (5)
  1. Piyush Jha (11 papers)
  2. Prithwish Jana (9 papers)
  3. Arnav Arora (24 papers)
  4. Vijay Ganesh (62 papers)
  5. Pranavkrishna Suresh (1 paper)