Alignment of Transformer attention representations with human agency-based importance

Determine whether the token representations produced by the self-attention mechanism in Transformer architectures align with human agency-based judgments of importance, specifically whether attention-derived representations correspond to what individuals intentionally deem important given their beliefs, goals, and intentions.

Background

The paper contrasts the intentional, agency-driven nature of human attention with the data-driven attention mechanism in Transformer architectures. Human attention can be deliberately controlled to align with goals and intentions, whereas Transformer attention is a learned mathematical construct lacking inherent cognitive states.

Against this backdrop, the authors raise uncertainty about whether the outputs of the Transformer attention mechanism reflect agency-based importance as judged by humans, highlighting a gap between mathematically learned attention weights and human intentional focus.
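To make concrete what "mathematically learned attention weights" means, the following is a minimal NumPy sketch of standard scaled dot-product self-attention (the mechanism the paper refers to). It is an illustrative sketch, not code from the paper: the weights here are just softmax-normalized dot-product similarities, and the output representations are weighted sums of value vectors, with no term encoding beliefs, goals, or intentions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Standard Transformer self-attention.

    The weights are purely statistical: softmax over scaled
    query-key similarities. Nothing in this computation encodes
    an agent's goals or intentional focus.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # token-to-token similarity
    scores = scores - scores.max(axis=-1, keepdims=True) # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights                          # representations, weights

# Toy example: 3 tokens with 4-dimensional embeddings (random, for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(X, X, X)
assert np.allclose(w.sum(axis=-1), 1.0)  # each row is a probability distribution
```

The open question above asks whether the rows of `weights` (and the resulting token representations `out`), once learned from data, align with what a human would intentionally deem important.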

References

It remains unclear whether the representations output from the attention mechanism coincide with what one believes to be important from an agency perspective.

From Cognition to Computation: A Comparative Review of Human Attention and Transformer Architectures (2407.01548 - Zhao et al., 25 Apr 2024) in Subsubsection 3.2.3 (Intentional nature)