
Extending Automated Deduction for Commonsense Reasoning (2003.13159v1)

Published 29 Mar 2020 in cs.AI

Abstract: Commonsense reasoning has long been considered as one of the holy grails of artificial intelligence. Most of the recent progress in the field has been achieved by novel machine learning algorithms for natural language processing. However, without incorporating logical reasoning, these algorithms remain arguably shallow. With some notable exceptions, developers of practical automated logic-based reasoners have mostly avoided focusing on the problem. The paper argues that the methods and algorithms used by existing automated reasoners for classical first-order logic can be extended towards commonsense reasoning. Instead of devising new specialized logics we propose a framework of extensions to the mainstream resolution-based search methods to make these capable of performing search tasks for practical commonsense reasoning with reasonable efficiency. The proposed extensions mostly rely on operating on ordinary proof trees and are devised to handle commonsense knowledge bases containing inconsistencies, default rules, taxonomies, topics, relevance, confidence and similarity measures. We claim that machine learning is best suited for the construction of commonsense knowledge bases while the extended logic-based methods would be well-suited for actually answering queries from these knowledge bases.

Commonsense reasoning represents a significant challenge in AI because it requires understanding and reasoning about everyday situations that humans grasp effortlessly. Recent advances in automated deduction and machine learning suggest promising directions for making logic-based commonsense reasoning practical and efficient.

One approach proposes extending the resolution-based search methods used in classical first-order logic to handle practical commonsense reasoning tasks. The proposed extensions operate on ordinary proof trees and handle inconsistencies, default rules, taxonomies, topics, relevance, confidence, and similarity measures within commonsense knowledge bases (Tammet, 2020). These extensions would enable automated logic-based reasoners to answer queries over commonsense knowledge efficiently, complementing machine learning methods, which are best suited to constructing those knowledge bases in the first place.
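To make one of these extensions concrete, the following is a minimal, illustrative sketch (not the paper's actual implementation) of propositional resolution in which every clause carries a confidence value. A resolvent inherits the minimum of its parents' confidences, which is one simple way to thread "confidence measures" through ordinary proof search; the paper discusses richer measures and full first-order search.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of string literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def prove(kb, goal):
    """Saturation-based refutation of ~goal over kb, a dict mapping
    clause -> confidence in [0, 1].  A resolvent gets the minimum of its
    parents' confidences (one simple propagation rule, not the paper's
    exact one).  Returns the best proof confidence found, or None."""
    conf = dict(kb)
    conf[frozenset({negate(goal)})] = 1.0
    changed = True
    while changed:
        changed = False
        for a in list(conf):
            for b in list(conf):
                for r in resolve(a, b):
                    c = min(conf[a], conf[b])
                    if conf.get(r, -1.0) < c:
                        conf[r] = c
                        changed = True
    return conf.get(frozenset())  # confidence of the empty clause

kb = {
    frozenset({"~bird", "flies"}): 0.9,  # default rule: birds (normally) fly
    frozenset({"bird"}): 1.0,            # certain fact
}
print(prove(kb, "flies"))  # -> 0.9
```

Because the default rule only has confidence 0.9, the derived answer "flies" is reported with confidence 0.9 rather than as an unqualified theorem, which is the kind of graded conclusion commonsense reasoning needs.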

Additionally, leveraging external structured knowledge sources, such as knowledge graphs, further aids commonsense reasoning. Methods like KagNet utilize external knowledge graphs to perform explainable inferences. This framework grounds question-answer pairs into a knowledge-based symbolic space and represents these graphs using knowledge-aware graph networks, enhancing the interpretability and accuracy of machine predictions (Lin et al., 2019).
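A schematic sketch of the grounding step in this style of reasoning (our own toy code, not the KagNet implementation): question and answer concepts are mapped onto a knowledge graph, and the connecting paths are collected for a graph network to score. The graph below is an invented ConceptNet-like fragment.

```python
from collections import deque

graph = {  # toy ConceptNet-like edges, invented for illustration
    "bird": ["animal", "wing"],
    "wing": ["fly"],
    "animal": [],
    "fly": [],
}

def paths(src, dst, max_len=3):
    """BFS over the toy graph, returning all paths from src to dst
    of at most max_len edges."""
    found, queue = [], deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            found.append(path)
        elif len(path) < max_len + 1:
            for nxt in graph.get(path[-1], []):
                queue.append(path + [nxt])
    return found

print(paths("bird", "fly"))  # -> [['bird', 'wing', 'fly']]
```

Each recovered path (here, bird → wing → fly) is a symbolic, human-readable justification, which is what makes the resulting inferences explainable.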

Moreover, integrating generative LLMs with commonsense knowledge also shows promise. Generated knowledge prompting, a technique where knowledge statements are generated by an LLM and provided as input for answering questions, has shown improvement across various commonsense reasoning tasks (Liu et al., 2021). Similarly, graph-based reasoning approaches leveraging both structured and unstructured knowledge sources demonstrate enhanced performance in commonsense question answering by constructing and utilizing relational structures of evidence (Lv et al., 2019).
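A hedged sketch of the generated-knowledge-prompting pipeline: generate candidate knowledge statements, prepend each to the question, and keep the best-scoring augmented prompt. The generator and scorer below are toy stand-ins under names of our own choosing; in the actual method the statements are sampled from an LLM with few-shot prompts and the answers are scored by a second model.

```python
def generate_knowledge(question):
    """Stand-in for few-shot LLM sampling: return canned statements."""
    return [
        "Penguins are birds that cannot fly.",
        "Fish live in water.",
    ]

def overlap_score(statement, question):
    """Toy relevance score: count of shared lowercase words."""
    return len(set(statement.lower().split()) &
               set(question.lower().split()))

def best_prompt(question):
    """Prepend the most relevant generated statement to the question."""
    best = max(generate_knowledge(question),
               key=lambda s: overlap_score(s, question))
    return f"{best} {question}"

print(best_prompt("Can penguins fly?"))
```

The point of the technique is that the knowledge statement supplies a missing premise ("penguins cannot fly") that the downstream answering model can then condition on.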

Additional research highlights the potential of hybrid models that combine different reasoning paradigms. A hybrid neural network (HNN) model that integrates a masked language model and a semantic similarity model, both based on BERT, has achieved state-of-the-art results on several classic commonsense reasoning benchmarks (He et al., 2019). This model effectively marries the strengths of distinct approaches to improve machine understanding of commonsense.

There are also novel methodologies such as CAT, which uses semi-supervised learning to integrate event conceptualization and instantiation, enabling machines to generalize and infer new commonsense knowledge from existing data (Wang et al., 2023). This process mirrors human-like conceptual induction and deduction, enhancing the machine's ability to perform commonsense reasoning across diverse scenarios.

In summary, extending automated deduction for commonsense reasoning involves a combination of extending classical logic-based methods, leveraging knowledge graphs, integrating generative LLMs, and utilizing hybrid neural network models. These approaches collectively enhance the ability of AI systems to reason about everyday scenarios, bridging the gap towards achieving more human-like understanding in machines.

Authors (1)
  1. Tanel Tammet
Citations (3)