Logically Consistent Adversarial Attacks for Soft Theorem Provers (2205.00047v1)

Published 29 Apr 2022 in cs.LG, cs.CL, and cs.CR

Abstract: Recent efforts within the AI community have yielded impressive results towards "soft theorem proving" over natural language sentences using LLMs. We propose a novel, generative adversarial framework for probing and improving these models' reasoning capabilities. Adversarial attacks in this domain suffer from the logical inconsistency problem, whereby perturbations to the input may alter the label. Our Logically consistent AdVersarial Attacker, LAVA, addresses this by combining a structured generative process with a symbolic solver, guaranteeing logical consistency. Our framework successfully generates adversarial attacks and identifies global weaknesses common across multiple target models. Our analyses reveal naive heuristics and vulnerabilities in these models' reasoning capabilities, exposing an incomplete grasp of logical deduction under logic programs. Finally, in addition to effective probing of these models, we show that training on the generated samples improves the target model's performance.
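To make the logical inconsistency problem concrete, the following is a minimal illustrative sketch (not the paper's LAVA implementation): a toy forward-chaining solver over propositional Horn rules is used to check whether a candidate perturbation of the input facts preserves the entailment label of a query. The helper names `entails` and `consistent_perturbations`, and the example facts and rules, are hypothetical and chosen only for illustration.

```python
# Illustrative sketch (not the paper's implementation): filter candidate
# perturbations of a logic program so the entailment label of a query is
# unchanged, i.e. the perturbation is "logically consistent".

Rule = tuple[frozenset, str]  # (body atoms, head atom)


def entails(facts: set, rules: list, query: str) -> bool:
    """Return True if `query` is derivable from `facts` under `rules`
    by exhaustive forward chaining (a stand-in for a symbolic solver)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return query in derived


def consistent_perturbations(facts, rules, query, extra_facts):
    """Yield perturbed fact sets (each adding one candidate fact) whose
    entailment label for `query` matches the original label."""
    original_label = entails(facts, rules, query)
    for fact in extra_facts:
        perturbed = facts | {fact}
        if entails(perturbed, rules, query) == original_label:
            yield perturbed  # label-preserving, safe to keep as an attack


if __name__ == "__main__":
    facts = {"bird(tweety)"}
    rules = [(frozenset({"bird(tweety)"}), "flies(tweety)")]
    query = "flies(tweety)"
    # Hypothetical distractor facts proposed by a generator; only those that
    # leave the query's label unchanged survive the solver check.
    extras = ["cat(felix)", "green(tweety)"]
    for p in consistent_perturbations(facts, rules, query, extras):
        print(sorted(p))
```

The key design point mirrored here is that the generator is free to propose arbitrary perturbations, while a symbolic check over the resulting logic program decides whether the original label still holds, so the adversarial example never carries a corrupted gold label.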

Authors (4)
  1. Alexander Gaskell (4 papers)
  2. Yishu Miao (19 papers)
  3. Lucia Specia (68 papers)
  4. Francesca Toni (96 papers)
Citations (7)