
AutoPRM: Automating Procedural Supervision for Multi-Step Reasoning via Controllable Question Decomposition (2402.11452v1)

Published 18 Feb 2024 in cs.CL

Abstract: Recent advancements in LLMs have shown promise in multi-step reasoning tasks, yet their reliance on extensive manual labeling for procedural feedback remains a significant impediment. To address this challenge, we propose AutoPRM, a novel self-supervised framework that efficiently enhances the fine-tuning of LLMs for intricate reasoning challenges. Specifically, AutoPRM first decomposes complex problems into more manageable subquestions with a controllable granularity switch, then sequentially applies reinforcement learning to iteratively improve the subquestion solver. Additionally, we propose context-guided decoding to avoid reward tampering and to guide the subquestion solver toward the solution of the holistic problem. Extensive experiments show that AutoPRM significantly improves performance on mathematical and commonsense reasoning tasks over state-of-the-art methods. More encouragingly, AutoPRM can be easily integrated with other orthogonal reasoning pipelines.
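The decompose-then-solve loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the integer granularity switch, and the string-based "context-guided" conditioning are all hypothetical stand-ins for LLM calls.

```python
# Illustrative sketch of AutoPRM's pipeline shape (hypothetical, not the
# authors' code): decompose a problem into subquestions at a chosen
# granularity, then solve them sequentially, threading prior answers
# forward as context (a stand-in for context-guided decoding).

def decompose(question: str, granularity: int) -> list[str]:
    """Split a complex question into subquestions.

    A real system would prompt an LLM with a granularity control;
    here we fake a fixed split for illustration.
    """
    return [f"{question} [step {i + 1}/{granularity}]" for i in range(granularity)]


def solve_subquestion(subq: str, context: str) -> str:
    """Stand-in for the RL-tuned subquestion solver.

    Context-guided decoding is approximated by conditioning each
    answer on all previously solved steps.
    """
    return f"answer({subq} | {context})"


def answer_pipeline(question: str, granularity: int = 3) -> str:
    """Sequentially solve subquestions; return the final step's answer."""
    answers: list[str] = []
    for subq in decompose(question, granularity):
        context = " ; ".join(answers)  # accumulated prior answers
        answers.append(solve_subquestion(subq, context))
    return answers[-1]
```

In the paper's actual framework, `decompose` and `solve_subquestion` would be LLM-backed, with the solver iteratively improved via reinforcement learning rather than fixed functions.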

Authors (7)
  1. Zhaorun Chen (28 papers)
  2. Zhuokai Zhao (21 papers)
  3. Zhihong Zhu (45 papers)
  4. Ruiqi Zhang (58 papers)
  5. Xiang Li (1003 papers)
  6. Bhiksha Raj (180 papers)
  7. Huaxiu Yao (103 papers)
Citations (19)

