
Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction (2305.12660v2)

Published 22 May 2023 in cs.CL and cs.AI

Abstract: The vital role of analogical reasoning in human cognition allows us to grasp novel concepts by linking them with familiar ones through shared relational structures. Despite the attention previous research has given to word analogies, this work suggests that LLMs often overlook the structures that underpin these analogies, raising questions about the efficacy of word analogies as a measure of analogical reasoning skills akin to human cognition. In response to this, our paper introduces a task of analogical structure abduction, grounded in cognitive psychology, designed to abduce structures that form an analogy between two systems. In support of this task, we establish a benchmark called SCAR, containing 400 scientific analogies from 13 distinct fields, tailored for evaluating analogical reasoning with structure abduction. The empirical evidence underlines the continued challenges faced by LLMs, including ChatGPT and GPT-4, in mastering this task, signifying the need for future exploration to enhance their abilities.
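
To make the structure-abduction task concrete, the sketch below shows one plausible way to score a model-proposed concept mapping between two systems by how much relational structure it preserves, using the classic solar-system/atom analogy. The triple format, relation names, and scoring rule are illustrative assumptions for this page, not the actual schema of the SCAR benchmark.

```python
# Minimal sketch of structure-abduction scoring (illustrative; not SCAR's actual schema).
# Each system is a set of (relation, head, tail) triples; a mapping sends concepts of
# the source system to concepts of the target system. A mapping is "reasonable" to the
# extent that it preserves relational structure rather than surface similarity.

SOURCE = {  # solar system
    ("attracts", "sun", "planet"),
    ("revolves_around", "planet", "sun"),
    ("more_massive_than", "sun", "planet"),
}

TARGET = {  # Rutherford atom
    ("attracts", "nucleus", "electron"),
    ("revolves_around", "electron", "nucleus"),
    ("more_massive_than", "nucleus", "electron"),
}

def structural_score(mapping: dict[str, str]) -> float:
    """Fraction of source relations preserved in the target under the mapping."""
    preserved = sum(
        (rel, mapping.get(head), mapping.get(tail)) in TARGET
        for rel, head, tail in SOURCE
    )
    return preserved / len(SOURCE)

# A mapping abduced by an LLM (e.g., parsed from its answer) can then be scored:
print(structural_score({"sun": "nucleus", "planet": "electron"}))  # 1.0
print(structural_score({"sun": "electron", "planet": "nucleus"}))  # 0.0
```

Under a scoring rule of this kind, the paper's finding is that even ChatGPT and GPT-4 still struggle to abduce mappings that fully preserve the relational structure of the analogies in SCAR.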

Authors (5)
  1. Siyu Yuan (46 papers)
  2. Jiangjie Chen (46 papers)
  3. Xuyang Ge (9 papers)
  4. Yanghua Xiao (151 papers)
  5. Deqing Yang (55 papers)
Citations (5)