D4C: Improving Negative Example Quality to Enhance Machine Abstract Reasoning Ability
Abstract: This paper addresses the challenge of enhancing the abstract reasoning capabilities of AI, particularly on tasks involving complex human concepts. We introduce Lico-Net, a novel deep-learning-based reasoning engine that encodes the logical structure of Raven's Progressive Matrices (RPM) problems into probabilistic representations, and show that it excels at solving RPM tasks. We further propose Lico-Net-Bongard, a variant of Lico-Net tailored to the Bongard-Logo problem, which likewise achieves high reasoning accuracy through probabilistic representations. However, we observe a mismatch between how deep learning algorithms and humans induce reasoning concepts, which we attribute primarily to the inadequate quality of negative samples: improperly configured negative samples convey erroneous conceptual information to the learner and thereby distort its learning objective. To address this issue, we propose two approaches: first, treating different sample points within a reasoning problem as mutual negative samples, which alters the negative-sample structure of the existing data; second, a negative-sample generator based on a step-wise linear attention mechanism that produces high-quality negative samples. Experiments show that these methods significantly improve the performance of Lico-Net, Lico-Net-Bongard, and other baseline models on the RPM and Bongard-Logo datasets, and also benefit vision foundation models, particularly on the distribution-shift problem posed by the NICO dataset. Our findings underscore the importance of negative-sample quality for the abstract reasoning capabilities of deep learning algorithms and suggest that such systems are a promising direction for future research in this field.
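The first approach, treating other sample points within a problem as mutual negatives, resembles in-batch negative sampling from contrastive learning. As a rough illustration only (the InfoNCE-style formulation, function name, temperature value, and toy data below are our assumptions, not the paper's actual loss):

```python
import numpy as np

def in_batch_negative_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style sketch: each anchor's matched row is its positive,
    and every other sample in the batch acts as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positive pairs lie on the diagonal

# toy usage: 8 anchors; positives are small perturbations of the anchors
rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
positives = anchors + 0.05 * rng.normal(size=(8, 16))
loss = in_batch_negative_loss(anchors, positives)
```

Under such a loss, the quality of the implicit negatives (the off-diagonal entries) directly shapes what concept the model is pushed to learn, which is the failure mode the paper attributes to poorly configured negative samples.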