A Neuro-vector-symbolic Architecture for Solving Raven's Progressive Matrices (2203.04571v2)

Published 9 Mar 2022 in cs.LG, cs.AI, and cs.CV

Abstract: Neither deep neural networks nor symbolic AI alone has approached the kind of intelligence expressed in humans. This is mainly because neural networks are not able to decompose joint representations to obtain distinct objects (the so-called binding problem), while symbolic AI suffers from exhaustive rule searches, among other problems. These two problems are still pronounced in neuro-symbolic AI which aims to combine the best of the two paradigms. Here, we show that the two problems can be addressed with our proposed neuro-vector-symbolic architecture (NVSA) by exploiting its powerful operators on high-dimensional distributed representations that serve as a common language between neural networks and symbolic AI. The efficacy of NVSA is demonstrated by solving the Raven's progressive matrices datasets. Compared to state-of-the-art deep neural network and neuro-symbolic approaches, end-to-end training of NVSA achieves a new record of 87.7% average accuracy in RAVEN, and 88.1% in I-RAVEN datasets. Moreover, compared to the symbolic reasoning within the neuro-symbolic approaches, the probabilistic reasoning of NVSA with less expensive operations on the distributed representations is two orders of magnitude faster. Our code is available at https://github.com/IBM/neuro-vector-symbolic-architectures.

A Neuro-Vector-Symbolic Architecture for Solving Raven’s Progressive Matrices

The research presented in this paper introduces a novel approach to solving Raven's Progressive Matrices (RPM), a well-established test of non-verbal abstract reasoning. The paper addresses the limitations inherent in both deep neural networks and symbolic AI when faced with tasks that require a combination of perception and reasoning: neural networks struggle with the so-called binding problem (decomposing joint representations into distinct objects), while symbolic AI suffers from exhaustive rule searches. The authors argue that their proposed neuro-vector-symbolic architecture (NVSA) overcomes both issues by using high-dimensional distributed representations as a common language between neural networks and symbolic models.

NVSA demonstrates its efficacy through notable performance on the RAVEN and I-RAVEN datasets, setting new benchmark average accuracies of 87.7% and 88.1%, respectively. This improves on existing deep neural network and neuro-symbolic methods not only in accuracy but also in computational efficiency: NVSA's probabilistic reasoning on distributed representations runs two orders of magnitude faster than the symbolic reasoning used within prior neuro-symbolic approaches.

Neuro-Vector-Symbolic Architecture

The core innovation in NVSA is its ability to unify perception and reasoning within a single framework through vector-symbolic architectures (VSAs). VSAs encode and manipulate symbolic information with high-dimensional vectors: structured data such as role-filler pairs is composed through algebraic binding and bundling operations on distributed representations, and the same operations can be inverted to recover individual components. This compositionality is what lets NVSA sidestep the binding problem.
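To make these operators concrete, here is a minimal, self-contained sketch (illustrative only; the paper's actual codebooks and dimensionalities differ): bipolar vectors with element-wise multiplication as binding and addition as bundling, where unbinding recovers an object's attribute from a superposed representation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality makes random vectors quasi-orthogonal

def rand_vec():
    """A random bipolar vector; any two are nearly orthogonal at this D."""
    return rng.choice([-1.0, 1.0], size=D)

def sim(a, b):
    """Normalized dot product: ~1 for a match, ~0 for unrelated vectors."""
    return a @ b / D

# Codebooks: roles (attributes) and fillers (values)
COLOR, SHAPE = rand_vec(), rand_vec()
RED, BLUE, SQUARE, CIRCLE = (rand_vec() for _ in range(4))

# Binding = element-wise multiply; bundling = addition.
red_square = COLOR * RED + SHAPE * SQUARE    # one object, one vector
blue_circle = COLOR * BLUE + SHAPE * CIRCLE
scene = red_square + blue_circle             # superposition of both objects

# Unbinding: bipolar vectors are their own multiplicative inverse,
# so multiplying by a role vector queries that attribute.
query = COLOR * red_square
print(sim(query, RED), sim(query, BLUE))     # ~1 vs. ~0
```

Because the bound pairs remain nearly orthogonal to everything else, the query cleanly retrieves RED even from a bundled object, which is exactly the decomposition that plain neural activations cannot provide.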

Perception Frontend

The perception component relies on a deep neural network (DNN), such as a ResNet-18, trained to map raw input panels into the shared vector-symbolic representation. Because the perceived attributes are embedded in a structured, high-dimensional vector space, NVSA can bundle several objects into a single representation and later unbind their attributes, avoiding the superposition catastrophe that typically afflicts neural networks performing compositional inference.
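As a rough sketch of what such a frontend could look like (the class name, codebook size, and dimensions here are hypothetical; the actual model and training objective are in the paper's repository), a ResNet-18 can be adapted to grayscale RPM panels and read out against a fixed codebook of attribute-value vectors:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

D = 1024  # VSA dimensionality (illustrative choice)

class PerceptionFrontend(nn.Module):
    """Hypothetical sketch: CNN features projected into a fixed VSA space."""
    def __init__(self, d=D, n_values=5):
        super().__init__()
        backbone = resnet18(weights=None)
        # RAVEN panels are single-channel 160x160 images.
        backbone.conv1 = nn.Conv2d(1, 64, 7, 2, 3, bias=False)
        backbone.fc = nn.Linear(backbone.fc.in_features, d)
        self.backbone = backbone
        # Fixed (non-trainable) codebook: one quasi-orthogonal vector per
        # attribute value; it serves as the shared "common language".
        self.codebook = nn.Parameter(torch.randn(n_values, d),
                                     requires_grad=False)

    def forward(self, panel):
        z = self.backbone(panel)  # panel -> distributed vector
        # Similarity to codebook entries yields a soft symbolic readout,
        # which the reasoning backend consumes as attribute probabilities.
        return torch.softmax(z @ self.codebook.T, dim=-1)

probs = PerceptionFrontend()(torch.randn(1, 1, 160, 160))
print(probs.shape)  # torch.Size([1, 5]) -- a PMF over attribute values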

Reasoning Backend

The reasoning component of NVSA exploits the algebraic properties of the vector space to execute the rules needed to solve RPM tasks, such as arithmetic (addition) and permutation rules, via distributive vector operations. Unlike prior symbolic methods, which often demand prohibitively exhaustive searches over rule instantiations, NVSA executes its rules directly in the vector space, drastically reducing computational cost and enabling end-to-end training with fast inference.
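For a flavor of this style of rule execution (a deliberately simplified stand-in for the paper's high-dimensional implementation; the function name and toy PMFs are invented), an addition rule over an attribute reduces to a single convolution of the frontend's probability vectors, with no enumeration of symbolic rule instances:

```python
import numpy as np

def addition_rule(pmf_a, pmf_b, n):
    """PMF of value_a + value_b, kept within range: one cheap convolution."""
    full = np.convolve(pmf_a, pmf_b)  # index k = probability that a + b = k
    out = full[:n]
    return out / out.sum()

n = 6                                     # attribute values 0..5
pmf1 = np.array([0, 0, 1.0, 0, 0, 0])     # first panel: value 2 (certain)
pmf2 = np.array([0, 0.8, 0.2, 0, 0, 0])   # second panel: mostly value 1
pred = addition_rule(pmf1, pmf2, n)       # prediction for the missing panel

candidates = np.eye(n)                    # one-hot answer candidates
scores = candidates @ pred                # score each candidate against pred
print(pred.round(2), scores.argmax())     # picks value 3 (= 2 + 1)
```

Scoring every answer candidate against the predicted distribution replaces the rule-by-rule symbolic search, which is where the order-of-magnitude speedups come from.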

Implications and Future Directions

This work provides compelling evidence for the effectiveness of hybrid architectures in complex cognitive tasks, where both precise perception and intricate reasoning are indispensable. NVSA sets a new standard for accuracy and efficiency in its class, underscoring the potential of combining vector operations with neural and symbolic methods. The approach exemplifies how high-dimensional vector spaces can serve as potent interfaces between sensory perception and abstract reasoning.

The implications of this work extend beyond RPM, pointing toward applications that require a nuanced interaction between representation learning and symbolic reasoning. Future research could explore domains such as automated theorem proving, planning, and more general AI reasoning tasks. Moreover, there is potential to scale NVSA further using advances in hardware acceleration, such as in-memory computing solutions that can perform associative memory operations with greater speed and efficiency. NVSA's approach to resolving the binding problem and exhaustive-search inefficiencies offers a promising avenue for advancing AI's ability to emulate human-like reasoning in an increasingly complex world.

Authors (5)
  1. Michael Hersche
  2. Mustafa Zeqiri
  3. Luca Benini
  4. Abu Sebastian
  5. Abbas Rahimi
Citations (73)