
Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models (2409.17539v1)

Published 26 Sep 2024 in cs.CL

Abstract: LLMs have demonstrated remarkable capabilities across various tasks but their performance in complex logical reasoning tasks remains unsatisfactory. Although some prompting methods, such as Chain-of-Thought, can improve the reasoning ability of LLMs to some extent, they suffer from an unfaithful issue where derived conclusions may not align with the generated reasoning chain. To address this issue, some studies employ the approach of propositional logic to further enhance logical reasoning abilities of LLMs. However, the potential omissions in the extraction of logical expressions in these methods can cause information loss in the logical reasoning process, thereby generating incorrect results. To this end, we propose Logic-of-Thought (LoT) prompting which employs propositional logic to generate expanded logical information from input context, and utilizes the generated logical information as an additional augmentation to the input prompts, thereby enhancing the capability of logical reasoning. The LoT is orthogonal to existing prompting methods and can be seamlessly integrated with them. Extensive experiments demonstrate that LoT boosts the performance of various prompting methods with a striking margin across five logical reasoning tasks. In particular, the LoT enhances Chain-of-Thought's performance on the ReClor dataset by +4.35%; moreover, it improves Chain-of-Thought with Self-Consistency's performance on LogiQA by +5%; additionally, it boosts performance of Tree-of-Thoughts on ProofWriter dataset by +8%.

Injecting Logic into Contexts for Enhanced Reasoning in LLMs

Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in LLMs proposes a methodological enhancement to the logical reasoning performance of LLMs. The authors address the limitations of current verbal-based reasoning techniques and introduce Logic-of-Thought (LoT), which uses propositional logic to augment input prompts with logically derived extensions, yielding consistent gains across several datasets.

Core Contributions and Methodology

LoT is premised on addressing two main issues in existing prompting techniques and their neuro-symbolic counterparts: the unfaithfulness in reasoning chains and information loss during symbolic extraction.

  1. Definition and Extraction: LoT begins by extracting logical symbols and expressions from natural language contexts using LLMs. The logic extraction phase involves identifying propositions and their relationships, such as negations and implications.
  2. Expansion using Logic Laws: The extracted logical expressions are then expanded by programmatically applying well-defined logical reasoning laws, such as Double Negation (¬¬p ⇔ p), Contraposition ((p → q) ⇔ (¬q → ¬p)), and Transitivity ((p → q) ∧ (q → r) ⇒ (p → r)), to derive new logical expressions.
  3. Translation back to Natural Language: These expanded logical expressions are subsequently translated back into natural language using LLMs, ensuring the augmented contextual information remains interpretable for inference tasks.
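The expansion step above can be sketched programmatically. The following is a minimal illustration, not the authors' implementation: implications extracted in phase 1 are represented as hypothetical (premise, conclusion) string pairs, and the set is closed under Contraposition and Transitivity, with Double Negation used to simplify negated literals.

```python
# Minimal sketch of LoT's logic-expansion phase (illustrative only;
# proposition names and the pair-based representation are assumptions,
# not taken from the paper's code).

def negate(p: str) -> str:
    """Negate a literal, applying Double Negation: ~~p simplifies to p."""
    return p[1:] if p.startswith("~") else "~" + p

def expand(implications: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Close a set of implications (p, q) meaning p -> q under
    Contraposition and Transitivity, iterating to a fixed point."""
    derived = set(implications)
    changed = True
    while changed:
        changed = False
        # Contraposition: (p -> q) yields (~q -> ~p)
        for p, q in list(derived):
            contra = (negate(q), negate(p))
            if contra not in derived:
                derived.add(contra)
                changed = True
        # Transitivity: (p -> q) and (q -> r) yield (p -> r)
        for p, q in list(derived):
            for q2, r in list(derived):
                if q == q2 and p != r and (p, r) not in derived:
                    derived.add((p, r))
                    changed = True
    return derived

# Example: from A -> B and B -> C, derive A -> C plus all contrapositives.
expanded = expand({("A", "B"), ("B", "C")})
```

In the example, the closure contains the transitive conclusion `("A", "C")` and the contrapositives `("~B", "~A")`, `("~C", "~B")`, and `("~C", "~A")`; in LoT these derived expressions would then be translated back to natural language in phase 3.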

Experimental Evaluation

The efficacy of LoT is thoroughly evaluated on five logical reasoning datasets: ReClor, LogiQA, RuleTaker, ProofWriter, and FOLIO, using different prompting methods and models, including GPT-3.5 and GPT-4.

Results

ReClor and LogiQA:

  • LoT significantly boosts the performance of Chain-of-Thought (CoT), achieving up to +4.35% accuracy improvement on ReClor and +5.00% on LogiQA.
  • When combined with Self-Consistency (SC), LoT enhances accuracy by up to 6.52%.

RuleTaker, ProofWriter, and FOLIO:

  • On ProofWriter, LoT combined with Tree-of-Thoughts improves performance by +8%.
  • LoT also enhances CoT-SC on ProofWriter by +6.00%, demonstrating its utility in complex multi-step logical reasoning tasks.

Comparative Analysis

SatLM vs LoT:

A comparison between SatLM and LoT demonstrates the superior performance of LoT. Unlike SatLM, which relies heavily on accurate extraction of formal symbolic expressions, LoT retains and integrates the original natural language context, effectively mitigating information loss.

Practical and Theoretical Implications

Practical Implications:

LoT significantly improves the practical application of LLMs in logically intensive tasks such as standardized tests, enhancing LLMs’ reliability in educational and evaluative domains. Moreover, LoT's framework can be seamlessly integrated with various prompting methods, providing a robust and adaptable tool for enhancing AI-driven reasoning across diverse contexts.

Theoretical Implications:

The research advances our understanding of integrating symbolic logic with neural networks, providing a pathway to more effective neuro-symbolic AI systems. By maintaining the natural language context, LoT bridges the gap between formal logical reasoning and the broader interpretability required in natural language processing.

Future Directions

Future work could explore more comprehensive sets of logical connectives and expand the logical reasoning laws integrated into LoT, enhancing its applicability and effectiveness. Additionally, addressing the limitations in the logical extraction phase, possibly through more advanced LLMs or hybrid symbolic-natural language systems, could further improve LoT's robustness and accuracy.

Conclusion

Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in LLMs provides a significant methodological advancement in enhancing the logical reasoning abilities of LLMs. By integrating propositional logic into natural language contexts, LoT mitigates issues of unfaithful reasoning and information loss, proving beneficial across multiple datasets and prompting methods. This research contributes valuable insights into the future development of more robust and logically proficient AI systems.

Authors (7)
  1. Tongxuan Liu
  2. Wenjiang Xu
  3. Weizhe Huang
  4. Xingyu Wang
  5. Jiaxing Wang
  6. Hailong Yang
  7. Jing Li