Planning with Logical Graph-based Language Model for Instruction Generation (2308.13782v2)
Abstract: Although LLMs excel at generating fluent natural language, they struggle to produce texts whose logic is correct for a given task, because neural models have difficulty capturing the rules implied in free-form text. In this paper, we propose a novel graph-based language model, Logical-GLM, which infuses logic into LLMs for more valid text generation and better interpretability. Specifically, we first extract information from natural language instructions and construct logical Bayes graphs that provide a general description of each domain. Next, we generate logical skeletons to guide LLM training, infusing domain knowledge into the models. Finally, we alternately optimize the graph search policy and the language model until convergence. Experimental results show that Logical-GLM is both effective and efficient compared with traditional LLMs, despite using smaller-scale training data and fewer parameters. Owing to the internalized domain knowledge, our approach generates instructional texts with more correct logic. Moreover, the logical graphs expose the inner mechanism of the language model, improving the interpretability of an otherwise black-box system.
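To make the pipeline concrete, below is a minimal Python sketch of the two graph-side steps the abstract describes: a logical Bayes graph over domain actions, and a search that extracts a "logical skeleton" to condition the language model on. Everything here is illustrative under stated assumptions: the node names, transition probabilities, and the greedy search policy are hypothetical stand-ins, not the paper's actual data structures or algorithm.

```python
# Hypothetical sketch of the graph-side pipeline: a logical Bayes graph
# whose edges carry transition probabilities mined from instruction texts,
# plus a greedy search that extracts a logical skeleton. All names and
# numbers are illustrative, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class LogicalBayesGraph:
    # edges[u] maps each successor action v to P(v | u), assumed to be
    # estimated from action sequences parsed out of instructions.
    edges: dict[str, dict[str, float]] = field(default_factory=dict)

    def add_transition(self, u: str, v: str, prob: float) -> None:
        self.edges.setdefault(u, {})[v] = prob

    def skeleton(self, start: str, max_len: int = 5) -> list[str]:
        """Greedily follow the most probable edges to build a skeleton."""
        path, node = [start], start
        for _ in range(max_len - 1):
            succs = self.edges.get(node)
            if not succs:
                break
            node = max(succs, key=succs.get)  # greedy search policy (assumed)
            path.append(node)
        return path

# Toy domain: making tea, with made-up transition probabilities.
g = LogicalBayesGraph()
g.add_transition("boil_water", "add_tea_leaves", 0.8)
g.add_transition("boil_water", "pour_water", 0.2)
g.add_transition("add_tea_leaves", "pour_water", 0.9)
g.add_transition("pour_water", "steep", 0.95)

skeleton = g.skeleton("boil_water")
print(" -> ".join(skeleton))
# boil_water -> add_tea_leaves -> pour_water -> steep
```

In the training loop the abstract sketches, such a skeleton would presumably be serialized and used to guide the LM (e.g., as a conditioning prefix or training target), after which the graph's search policy and the LM are alternately updated until convergence.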