LLM should think and action as a human (2502.13475v2)
Abstract: It has recently become popular to train LLMs as chat assistants, but in conversations between a user and a chat assistant, some prompts require multiple turns to resolve. Multi-turn conversation raises several issues: the chat assistant's responses are prone to errors and may fail to help the user achieve their goal, and as the number of turns increases, so does the probability of error; it is difficult for the chat assistant to generate responses that follow different processes for the same prompt according to actual needs; and the chat assistant needs to use tools, but current approaches are neither elegant nor efficient, and the number of tool calls is limited. The main cause of these issues is that LLMs do not think as humans do: they lack reasoning and planning ability, as well as the ability to execute plans. To address these issues, we propose a thinking method based on a built-in chain of thought: in a multi-turn conversation, for each user prompt, the LLM thinks over elements such as the chat history, thinking context, action calls, memory, and knowledge; performs detailed reasoning and planning; and acts according to the plan. We also explore how an LLM can strengthen its thinking ability through this method: collecting training datasets that follow the thinking method and fine-tuning the LLM with supervised learning, then training a consistency reward model and using it as the reward function to fine-tune the LLM with reinforcement learning, so that the reinforced LLM produces outputs in this way of thinking. Our experimental results show that the LLM's reasoning and planning abilities are enhanced and that the issues in multi-turn conversation are resolved.
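To make the built-in chain-of-thought loop concrete, the sketch below renders one conversational turn in Python: the model first thinks over the elements the abstract lists (chat history, thinking context, memory, knowledge), then executes its planned action calls, then responds. This is a minimal sketch under assumed interfaces; every name here (`TurnState`, `llm.think`, `llm.respond`, the `tools` mapping) is hypothetical and not taken from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one turn of a built-in chain-of-thought loop.
# The field names mirror the elements named in the abstract: chat
# history, thinking context, memory, and knowledge. The `llm` object
# and `tools` mapping are assumed interfaces, not the paper's API.

@dataclass
class TurnState:
    chat_history: list[str] = field(default_factory=list)
    thinking_context: list[str] = field(default_factory=list)
    memory: dict[str, str] = field(default_factory=dict)
    knowledge: list[str] = field(default_factory=list)

def handle_turn(state: TurnState, user_prompt: str, llm, tools) -> str:
    """One multi-turn step: think -> plan -> act -> respond."""
    state.chat_history.append(f"user: {user_prompt}")

    # 1. Think: reason and plan over all available elements.
    thought = llm.think(
        prompt=user_prompt,
        chat_history=state.chat_history,
        thinking_context=state.thinking_context,
        memory=state.memory,
        knowledge=state.knowledge,
    )
    state.thinking_context.append(thought.reasoning)

    # 2. Act: execute each planned action (tool) call and feed the
    #    result back into the thinking context for later turns.
    for call in thought.planned_actions:
        result = tools[call.name](**call.arguments)
        state.thinking_context.append(f"{call.name} -> {result}")

    # 3. Respond: generate the final reply from the updated context.
    reply = llm.respond(state.chat_history, state.thinking_context)
    state.chat_history.append(f"assistant: {reply}")
    return reply
```

Because the thinking context persists across calls to `handle_turn`, later turns can reason over earlier plans and action results, which is what lets errors be caught and corrected instead of compounding as the turn count grows.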