Reinforcement Learning Based Emotional Editing Constraint Conversation Generation (1904.08061v1)

Published 17 Apr 2019 in cs.CL and cs.LG

Abstract: In recent years, the generation of conversation content based on deep neural networks has attracted many researchers. However, traditional neural language models tend to generate generic replies, lacking logical and emotional factors. This paper proposes a conversation content generation model that combines reinforcement learning with emotional editing constraints to generate more meaningful and customizable emotional replies. The model divides the reply into three clauses based on pre-generated keywords and uses an emotional editor to further optimize the final reply. The model combines multi-task learning with multiple indicator rewards to comprehensively optimize the quality of replies. Experiments show that the model not only improves the fluency of the replies, but also significantly enhances their logical relevance and emotional relevance.
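The abstract describes combining several indicator rewards (e.g., fluency, logical relevance, emotional relevance) into a single training signal for a reinforcement-learning update. Below is a minimal, hypothetical sketch of that idea using a REINFORCE-style policy-gradient loss; the function names, reward weights, and toy tensors are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: combine multiple indicator rewards into one scalar
# and use it in a REINFORCE-style update. Weights and names are assumed.
import torch

def combined_reward(fluency, logical_rel, emotional_rel,
                    weights=(0.3, 0.35, 0.35)):
    """Weighted sum of indicator rewards (weights are illustrative)."""
    w_f, w_l, w_e = weights
    return w_f * fluency + w_l * logical_rel + w_e * emotional_rel

def reinforce_loss(token_log_probs, reward, baseline=0.0):
    """REINFORCE objective: -(R - b) * sum of sampled-token log-probs."""
    return -(reward - baseline) * token_log_probs.sum()

# Toy usage: log-probabilities of a sampled 12-token reply under the generator.
logits = torch.randn(12, 1000, requires_grad=True)       # stand-in for model output
token_log_probs = torch.log_softmax(logits, dim=-1)[:, 0]  # pick one token per step
reward = combined_reward(fluency=0.8, logical_rel=0.6, emotional_rel=0.7)
loss = reinforce_loss(token_log_probs, reward)
loss.backward()  # gradients would flow into the generator's parameters
```

In the paper's setting, the indicator scores would come from evaluators of the generated reply rather than fixed constants, and the generator would be the keyword-conditioned, clause-based model the abstract describes.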

Authors (5)
  1. Jia Li (380 papers)
  2. Xiao Sun (99 papers)
  3. Xing Wei (88 papers)
  4. Changliang Li (11 papers)
  5. Jianhua Tao (139 papers)
Citations (17)