
Misaligned from Within: Large Language Models Reproduce Our Double-Loop Learning Blindness (2507.02283v1)

Published 3 Jul 2025 in cs.HC

Abstract: This paper examines a critical yet unexplored dimension of the AI alignment problem: the potential for LLMs to inherit and amplify existing misalignments between humans' espoused theories and their theories-in-use. Drawing on action science research, we argue that LLMs trained on human-generated text likely absorb and reproduce Model 1 theories-in-use - a defensive reasoning pattern that both inhibits learning and creates ongoing anti-learning dynamics at the dyad, group, and organisational levels. Through a detailed case study of an LLM acting as an HR consultant, we show how its advice, while superficially professional, systematically reinforces unproductive problem-solving approaches and blocks pathways to deeper organisational learning. This represents a specific instance of the alignment problem in which the AI system successfully mirrors human behaviour but inherits our cognitive blind spots. It poses particular risks if LLMs are integrated into organisational decision-making processes, potentially entrenching anti-learning practices while lending authority to them. The paper concludes by exploring the possibility of developing LLMs capable of facilitating Model 2 learning - a more productive theory-in-use - and suggests this effort could advance both AI alignment research and action science practice. This analysis reveals an unexpected symmetry in the alignment challenge: the process of developing AI systems properly aligned with human values could yield tools that help humans themselves better embody those same values.
