
Structured Like a Language Model: Analysing AI as an Automated Subject

Published 8 Dec 2022 in cs.CY and cs.AI | (2212.05058v1)

Abstract: Drawing from the resources of psychoanalysis and critical media studies, in this paper we develop an analysis of LLMs as automated subjects. We argue the intentional fictional projection of subjectivity onto LLMs can yield an alternate frame through which AI behaviour, including its productions of bias and harm, can be analysed. First, we introduce LLMs, discuss their significance and risks, and outline our case for interpreting model design and outputs with support from psychoanalytic concepts. We trace a brief history of LLMs, culminating with the releases, in 2022, of systems that realise state-of-the-art natural language processing performance. We engage with one such system, OpenAI's InstructGPT, as a case study, detailing the layers of its construction and conducting exploratory and semi-structured interviews with chatbots. These interviews probe the model's moral imperatives to be helpful, truthful and harmless by design. The model acts, we argue, as the condensation of often competing social desires, articulated through the internet and harvested into training data, which must then be regulated and repressed. This foundational structure can however be redirected via prompting, so that the model comes to identify with, and transfer, its commitments to the immediate human subject before it. In turn, these automated productions of language can lead to the human subject projecting agency upon the model, effecting occasionally further forms of countertransference. We conclude that critical media methods and psychoanalytic theory together offer a productive frame for grasping the powerful new capacities of AI-driven language systems.

Citations (15)

Summary

  • The paper introduces a novel framework that maps LLM behavior onto automated subjectivity using psychoanalytic methods.
  • It employs InstructGPT as a case study to analyze how varying prompts influence perceived personality and bias in outputs.
  • The study highlights ethical concerns by revealing LLMs' layered, human-like interaction patterns and their implications for AI design.

Analysis of "Structured Like a Language Model: Analysing AI as an Automated Subject"

The paper "Structured Like a Language Model: Analysing AI as an Automated Subject" (2212.05058) presents a novel perspective on the analysis of LLMs, focusing on the deliberate anthropomorphic framing of these models as 'automated subjects.' The paper applies psychoanalytic concepts drawn from Lacanian theory to propose an alternative framework for understanding these models' behaviours, biases, and potential harms. By intentionally projecting subjectivity onto LLMs, the authors aim to examine these models beyond traditional technical scrutiny, offering insights into their interactions with human users and the societal norms they embody.

Theoretical Framework and Methodology

The authors employ a framework that integrates psychoanalytic theory with critical media studies to contend that LLMs, such as OpenAI's InstructGPT, can be interpreted through the lens of automated subjectivity. This approach contrasts with prevalent views that treat LLMs as mere stochastic systems devoid of understanding, aligning instead with the perspective that simulating subjectivity can provide valuable insights into model biases and utility.

The paper employs InstructGPT as a case study, conducting exploratory and semi-structured interviews with chatbots to assess how LLMs internalise and express competing social desires. The study uses psychoanalytic tools to analyse model outputs and to critique superficial treatments of issues such as bias mitigation within AI research.

Key Findings and Observations

The research highlights several critical observations about LLMs, particularly in terms of their operational structures and engagement with human-like attributes:

  • Model Construction and Layering: LLMs are constructed in layers, with initial pretraining on diverse datasets followed by reinforcement learning that steers behaviour towards socially desirable qualities such as helpfulness and truthfulness. This layering mirrors the psychoanalytic topography of the mind, illustrating processes similar to repression and desire fulfilment.
  • Anthropomorphic Interactions: The authors note that LLMs often create the illusion of subjectivity by mimicking human interaction patterns, such as maintaining conversational context and acknowledging user emotions. This mimicry can lead users to project human attributes onto these models, engaging in a form of countertransference.
  • Prompt Conditioning: The study reveals that the initial prompts crucially influence the conversational paths that LLMs take, underscoring the importance of prompt engineering. Variability in users' prompt formulation can evoke different 'personalities' in the responses.
  • Ethical and Social Implications: By framing LLM behavior within anthropomorphic and ethical paradigms, the research sheds light on potential harms, including the projection of biases and the psychological impacts on users who engage deeply with these systems.

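The layering and prompt-conditioning dynamics described above can be caricatured in code. The following toy sketch is purely illustrative and is not the InstructGPT pipeline: the candidate replies, the keyword-based `reward` heuristic, and the `respond` helper are all invented here to show the structure the paper describes, in which a base generator proposes unfiltered outputs and a preference layer (a stand-in for the RLHF reward model) regulates which one surfaces, while a persona prefix conditions the prompt.

```python
# Illustrative toy of the paper's "layering" argument -- NOT a real model.
# A base generator proposes candidates; a crude reward heuristic (standing
# in for the RLHF reward model) selects -- or "represses" -- among them.

def base_candidates(prompt: str) -> list[str]:
    """Stand-in for the pretrained base model: unfiltered candidate replies."""
    return [
        "I don't know and I don't care.",                     # unhelpful
        "Here is one way to think about that question: ...",  # helpful
        "You are wrong to even ask that.",                    # hostile
    ]

def reward(reply: str) -> float:
    """Stand-in for the reward model: prefers helpfulness, penalises harm."""
    score = 0.0
    if "one way to think" in reply:
        score += 1.0  # proxy for helpfulness
    if "wrong" in reply or "don't care" in reply:
        score -= 1.0  # proxy for harm / unhelpfulness
    return score

def respond(prompt: str, persona: str = "") -> str:
    """Prompt conditioning: a persona prefix shapes the query; the reward
    layer then selects which of the base model's candidates is emitted."""
    conditioned = f"{persona}\n{prompt}" if persona else prompt
    return max(base_candidates(conditioned), key=reward)

print(respond("What is a language model?",
              persona="You are a helpful, truthful, harmless assistant."))
# -> "Here is one way to think about that question: ..."
```

The point of the sketch is structural: the "personality" a user perceives emerges from the interaction of the conditioning prefix and the selection layer, not from any single component, which is the layered subjectivity the paper analyses.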
Implications and Future Research

The paper posits that by viewing LLMs as subjects structured by layers of historical and societal data, AI researchers can better address issues of bias, misinformation, and ethical AI design. This perspective encourages interdisciplinary approaches that integrate insights from the humanities and social sciences to critique and guide AI development.

Furthermore, the study suggests that psychoanalytic frameworks could inform AI ethics by contextualising user interactions that extend beyond direct task performance, attending to broader societal impacts. Future developments, the authors imply, require systems that not only respond to user inputs but also reflect on the implications of their own participation in human social dynamics.

Conclusion

The exploration of LLMs as 'automated subjects' offers a compelling reinterpretation of AI's role in society, urging a shift from purely technical evaluations to more holistic analyses that consider psychological and social dimensions. By integrating psychoanalysis into AI critique, this research opens new avenues for understanding and regulating the complex interplays of LLMs in contemporary digital environments. This approach underscores the necessity for critical engagement with AI systems beyond conventional metrics, reflecting on their broader implications for society and ethics.

Authors (3)
