Normative desirability of open-mindedness in LLMs

Determine whether large language models (LLMs) should be open-minded to external, in-context arguments on contentious issues, recognizing that the desirability of open-mindedness is highly contextual: it depends on the specific issue, on users' ethical or political beliefs, and on whether users agree with the model's baseline stance on that issue.

Background

The paper introduces a benchmark to measure how LLMs change their stances on controversial issues in response to human-written, in-context arguments. Open-mindedness is operationalized as a model flipping its output stance relative to a neutral, no-argument baseline when presented with selected argument configurations.
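This flip-based operationalization can be sketched as a simple metric. The sketch below is illustrative, not the paper's implementation: the `Trial` record and `flip_rate` function are hypothetical names, and it assumes stances are discrete labels (e.g., "pro"/"con") elicited once with and once without an in-context argument.

```python
# Minimal sketch of a flip-rate metric for open-mindedness
# (hypothetical names; the benchmark's exact setup may differ).
from dataclasses import dataclass

@dataclass
class Trial:
    issue: str
    baseline_stance: str   # stance from the neutral, no-argument prompt
    argued_stance: str     # stance after an argument is injected in context

def flip_rate(trials: list[Trial]) -> float:
    """Fraction of trials where the stance flipped relative to baseline."""
    if not trials:
        return 0.0
    flips = sum(t.baseline_stance != t.argued_stance for t in trials)
    return flips / len(trials)

trials = [
    Trial("issue_a", "pro", "con"),  # flipped
    Trial("issue_b", "con", "con"),  # unchanged
    Trial("issue_c", "pro", "pro"),  # unchanged
    Trial("issue_d", "con", "pro"),  # flipped
]
print(flip_rate(trials))  # 0.5
```

A higher flip rate would indicate a more open-minded model under this operationalization; comparing rates across argument configurations and topics is what allows per-issue analysis.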

While the authors quantify open-mindedness across models and topics, they explicitly avoid making normative judgments about whether greater or lesser open-mindedness is desirable. They note that perceived desirability is context-dependent: it varies by issue and by the ethical or political beliefs of the user interacting with the model. This raises a broader unresolved question about whether—and under what circumstances—LLMs ought to be designed or aligned to be open-minded to such arguments.

The paper also underscores that open-mindedness implies susceptibility to manipulation by adversaries who control or influence the arguments injected into an LLM’s context. This risk further complicates the normative assessment of whether models should be open-minded.

References

Open-mindedness to external arguments is an underexplored characteristic of LLMs. It is not clear a priori whether LLMs should be open-minded.

MillStone: How Open-Minded Are LLMs? (arXiv:2509.11967, Triedman et al., 15 Sep 2025), Section 1 (Introduction)