Reasoning in Transformers -- Mitigating Spurious Correlations and Reasoning Shortcuts (2403.11314v1)

Published 17 Mar 2024 in cs.LG and cs.CL

Abstract: Transformer language models are neural networks used for a wide variety of natural-language tasks, including some that also require logical reasoning. However, a transformer model may easily learn spurious patterns in the data, short-circuiting actual reasoning. In this paper we investigate to what extent transformers can be trained to a) approximate reasoning in propositional logic while b) avoiding known reasoning shortcuts due to spurious correlations in the training data. To do so, we use a dataset with a known spurious correlation between the truth value of a problem and, e.g., the number of rules it contains. We augment the data with proofs and train two models: a generative transformer, WP-BART, trained on problems and their whole proofs, and a neuro-symbolic model, SIP-BART, trained on individual proof steps and combining the generative transformer model BART with a symbolic proof checker. We find that SIP-BART succeeds in avoiding reasoning shortcuts, while WP-BART does not. For SIP-BART, we then identify a few remaining reasoning errors, not previously described in the literature, arising from the use of a pre-trained language model. These are qualitatively analysed to create a taxonomy of four different types of additional pitfalls.
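The neuro-symbolic setup described above can be illustrated with a minimal sketch: a neural model proposes one proof step at a time, and a symbolic checker accepts the step only if it is a valid inference from the facts derived so far. This is a hypothetical illustration in the spirit of SIP-BART, not the authors' code; the generator below is a stub standing in for a fine-tuned BART model, and all names are invented for the example.

```python
# Hedged sketch: proof-step generation with a symbolic checker, in the
# spirit of SIP-BART. Rules are modeled as (premises, conclusion) pairs
# for simple propositional forward chaining.

def symbolic_check(step, facts, rules):
    """Accept a proposed step only if it applies a known rule whose
    premises are all already-established facts."""
    premises, conclusion = step
    return (conclusion not in facts
            and (premises, conclusion) in rules
            and all(p in facts for p in premises))

def prove(query, initial_facts, rules, propose_step):
    """Run the generate-then-check loop until the query is derived
    or the checker rejects a step (blocking shortcut reasoning)."""
    facts = set(initial_facts)
    while query not in facts:
        step = propose_step(facts)     # neural model proposes a step
        if step is None:
            return False               # generator gives up
        if not symbolic_check(step, facts, rules):
            return False               # checker rejects an invalid step
        facts.add(step[1])             # checker accepts: extend the proof
    return True

# Toy problem: facts {A}, rules A -> B and B -> C; query C.
rules = {(frozenset({"A"}), "B"), (frozenset({"B"}), "C")}

def toy_generator(facts):
    # Stand-in for BART: propose the first applicable, novel rule.
    for premises, conclusion in sorted(rules, key=lambda r: r[1]):
        if conclusion not in facts and all(p in facts for p in premises):
            return (premises, conclusion)
    return None

print(prove("C", {"A"}, rules, toy_generator))  # True
```

The point of the split is visible in `symbolic_check`: even if the generator learned a spurious shortcut (e.g. predicting truth from the number of rules), an unsound step never enters the proof, which is how the paper's step-wise model avoids the shortcuts that the whole-proof model falls into.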

Authors (3)
  1. Daniel Enström (1 paper)
  2. Viktor Kjellberg (1 paper)
  3. Moa Johansson (20 papers)
Citations (1)