
Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models

Published 26 Sep 2024 in cs.CL | (2409.17539v2)

Abstract: LLMs have demonstrated remarkable capabilities across various tasks but their performance in complex logical reasoning tasks remains unsatisfactory. Although some prompting methods, such as Chain-of-Thought, can improve the reasoning ability of LLMs to some extent, they suffer from an unfaithful issue where derived conclusions may not align with the generated reasoning chain. To address this issue, some studies employ the approach of propositional logic to further enhance logical reasoning abilities of LLMs. However, the potential omissions in the extraction of logical expressions in these methods can cause information loss in the logical reasoning process, thereby generating incorrect results. To this end, we propose Logic-of-Thought (LoT) prompting which employs propositional logic to generate expanded logical information descriptions and utilizes them as an additional augmentation to original contexts, thereby ensuring information completeness and enhancing logical reasoning ability. LoT is orthogonal to existing prompting methods and can be seamlessly integrated with them. Extensive experiments demonstrate that LoT boosts the performance of various prompting methods with a striking margin across five logical reasoning tasks. In particular, LoT enhances Chain-of-Thought's performance on the ReClor dataset by +4.35%, improves Chain-of-Thought with Self-Consistency's performance on the RuleTaker dataset by +3.52%, and boosts performance of Tree-of-Thoughts on the ProofWriter dataset by +8%.

Summary

  • The paper introduces LoT, a prompting method that extracts and expands logical expressions to improve LLM reasoning performance.
  • It mitigates unfaithful reasoning and information loss by preserving natural language context while applying logical expansion laws like double negation and contraposition.
  • Experimental results demonstrate significant accuracy gains of up to 8% on multi-step inference tasks across five diverse logical reasoning datasets.

Injecting Logic into Contexts for Enhanced Reasoning in LLMs

Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in LLMs explores a novel methodological enhancement to improve the logical reasoning performance of LLMs. The authors address the limitations of current verbal-based reasoning techniques and introduce Logic-of-Thought (LoT). LoT leverages propositional logic to augment existing prompts with logically derived extensions, and it proves effective across several datasets.

Core Contributions and Methodology

LoT addresses two main issues in existing prompting techniques and their neuro-symbolic counterparts: unfaithfulness in reasoning chains and information loss during symbolic extraction.

  1. Definition and Extraction: LoT begins by extracting logical symbols and expressions from natural language contexts using LLMs. The logic extraction phase involves identifying propositions and their relationships, such as negations and implications.
  2. Expansion using Logic Laws: The extracted logical expressions are then expanded using well-defined logical reasoning laws, such as Double Negation, Contraposition, and Transitivity, implemented via computational methods to derive new logical propositions.
  3. Translation back to Natural Language: These expanded logical expressions are subsequently translated back into natural language using LLMs, ensuring the augmented contextual information remains interpretable for inference tasks.
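The expansion phase (step 2) is deterministic and can be sketched programmatically. The following is a minimal illustration under my own assumptions, not the authors' implementation: propositions are plain strings, "~" marks negation, implications are (premise, conclusion) pairs, and the set of implications is closed under Contraposition and Transitivity, with Double Negation simplified away.

```python
def negate(p: str) -> str:
    """Negate a proposition, applying the Double Negation law (~~p == p)."""
    return p[1:] if p.startswith("~") else "~" + p

def expand(implications: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Close a set of implications under Contraposition and Transitivity."""
    result = set(implications)
    changed = True
    while changed:
        changed = False
        # Contraposition: p -> q  yields  ~q -> ~p
        for p, q in list(result):
            contra = (negate(q), negate(p))
            if contra not in result:
                result.add(contra)
                changed = True
        # Transitivity: p -> q and q -> r  yield  p -> r
        for p, q in list(result):
            for q2, r in list(result):
                if q == q2 and p != r and (p, r) not in result:
                    result.add((p, r))
                    changed = True
    return result

# Example: "reads -> knowledgeable" and "knowledgeable -> smart"
laws = expand({("reads", "knowledgeable"), ("knowledgeable", "smart")})
assert ("reads", "smart") in laws            # Transitivity
assert ("~smart", "~knowledgeable") in laws  # Contraposition
```

Each newly derived pair would then be handed to the LLM for translation back into a natural-language sentence (step 3) and appended to the original context.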

Experimental Evaluation

The efficacy of LoT is evaluated on five logical reasoning datasets: ReClor, LogiQA, RuleTaker, ProofWriter, and FOLIO, using different prompting methods and models, including GPT-3.5 and GPT-4.

Results

ReClor and LogiQA:

  • LoT significantly boosts the performance of Chain-of-Thought (CoT), achieving up to +4.35% accuracy improvement on ReClor and +5.00% on LogiQA.
  • When combined with Self-Consistency (SC), LoT enhances accuracy by up to 6.52%.

RuleTaker, ProofWriter, and FOLIO:

  • On ProofWriter, LoT combined with Tree-of-Thoughts improves performance by +8%, demonstrating its utility in complex multi-step logical reasoning tasks.
  • On RuleTaker, LoT enhances CoT with Self-Consistency by +3.52%.

Comparative Analysis

SatLM vs LoT:

A comparative study between SatLM and LoT demonstrates the superior performance of LoT. Unlike SatLM, which relies heavily on accurate extraction of formal symbolic expressions, LoT maintains and integrates the original natural language context, effectively mitigating information loss.
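The design difference can be made concrete with a short sketch (mine, not code from either paper): a purely symbolic pipeline answers from the extracted formulas alone, so any fact missed at extraction time is gone, whereas LoT appends the derived description to the original context, which is kept verbatim. Here `translate` is a placeholder for the LLM translation step.

```python
def translate(expression: str) -> str:
    # Placeholder for the LLM-based translation back to natural language.
    return "If it is not sunny, Alice does not go outside."

def lot_augment(context: str, expression: str) -> str:
    # LoT: the original context survives unchanged alongside the new info.
    return context + " " + translate(expression)

context = "If Alice goes outside, then it is sunny. Alice loves picnics."
augmented = lot_augment(context, "~sunny -> ~outside")

assert context in augmented  # no information loss: full context preserved
```

A SatLM-style pipeline would instead discard `context` after extraction; if "Alice loves picnics" were relevant to the question but never formalized, it could no longer influence the answer.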

Practical and Theoretical Implications

Practical Implications:

LoT significantly improves the practical application of LLMs in logically intensive tasks such as standardized tests, enhancing LLMs’ reliability in educational and evaluative domains. Moreover, LoT's framework can be seamlessly integrated with various prompting methods, providing a robust and adaptable tool for enhancing AI-driven reasoning across diverse contexts.

Theoretical Implications:

The research advances our understanding of integrating symbolic logic with neural networks, providing a pathway to more effective neuro-symbolic AI systems. By maintaining the natural language context, LoT bridges the gap between formal logical reasoning and the broader interpretability required in natural language processing.

Future Directions

Future work could explore more comprehensive sets of logical connectives and expand the logical reasoning laws integrated into LoT, enhancing its applicability and effectiveness. Additionally, addressing the limitations in the logical extraction phase, possibly through more advanced LLMs or hybrid symbolic-natural language systems, could further improve LoT's robustness and accuracy.

Conclusion

Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in LLMs provides a significant methodological advancement in enhancing the logical reasoning abilities of LLMs. By integrating propositional logic into natural language contexts, LoT mitigates issues of unfaithful reasoning and information loss, proving beneficial across multiple datasets and prompting methods. This research contributes valuable insights into the future development of more robust and logically proficient AI systems.
