
Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing (2108.04990v2)

Published 11 Aug 2021 in cs.CL

Abstract: Interpretability methods like Integrated Gradient and LIME are popular choices for explaining natural language model predictions with relative word importance scores. These interpretations need to be robust for trustworthy NLP applications in high-stake areas like medicine or finance. Our paper demonstrates how interpretations can be manipulated by making simple word perturbations on an input text. Via a small portion of word-level swaps, these adversarial perturbations aim to make the resulting text semantically and spatially similar to its seed input (therefore sharing similar interpretations). Simultaneously, the generated examples achieve the same prediction label as the seed yet are given a substantially different explanation by the interpretation methods. Our experiments generate fragile interpretations to attack two SOTA interpretation methods, across three popular Transformer models and on two different NLP datasets. We observe that the rank-order correlation drops by over 20% when less than 10% of words are perturbed on average. Further, rank-order correlation keeps decreasing as more words get perturbed. Furthermore, we demonstrate that candidates generated from our method have good quality metrics.
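
The fragility measure the abstract refers to can be illustrated by comparing the rank order of word-importance scores before and after a perturbation. The sketch below is only illustrative, not the authors' implementation: the token lists and attribution values are hypothetical, and Spearman's rank correlation stands in for the paper's rank-order correlation metric.

```python
# Illustrative sketch (not the authors' code): compare the rank order of
# word-importance scores for a seed sentence and a word-swapped variant.
from scipy.stats import spearmanr

seed_tokens      = ["the", "movie", "was", "surprisingly", "good"]
perturbed_tokens = ["the", "film",  "was", "surprisingly", "good"]  # one word swapped

# Hypothetical word-importance scores produced by an interpretation method
# (e.g., LIME or Integrated Gradients) for the same predicted label.
seed_scores      = [0.02, 0.35, 0.05, 0.18, 0.40]
perturbed_scores = [0.03, 0.10, 0.30, 0.45, 0.12]

# Rank-order correlation between the two explanations: a large drop signals a
# fragile interpretation even though the model's prediction is unchanged.
rho, _ = spearmanr(seed_scores, perturbed_scores)
print(f"Spearman rank-order correlation: {rho:.2f}")
```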

Authors (5)
  1. Sanchit Sinha (11 papers)
  2. Hanjie Chen (28 papers)
  3. Arshdeep Sekhon (15 papers)
  4. Yangfeng Ji (59 papers)
  5. Yanjun Qi (68 papers)
Citations (40)
