The Ethics Engine: A Modular Pipeline for Accessible Psychometric Assessment of Large Language Models (2510.11742v1)

Published 11 Oct 2025 in cs.CY

Abstract: As LLMs increasingly mediate human communication and decision-making, understanding their value expression becomes critical for research across disciplines. This work presents the Ethics Engine, a modular Python pipeline that transforms psychometric assessment of LLMs from a technically complex endeavor into an accessible research tool. The pipeline demonstrates how thoughtful infrastructure design can expand participation in AI research, enabling investigators across cognitive science, political psychology, education, and other fields to study value expression in LLMs. Recent adoption by University of Edinburgh researchers studying authoritarianism validates its research utility, processing over 10,000 AI responses across multiple models and contexts. We argue that such tools fundamentally change the landscape of AI research by lowering technical barriers while maintaining scientific rigor. As LLMs increasingly serve as cognitive infrastructure, their embedded values shape millions of daily interactions. Without systematic measurement of these value expressions, we deploy systems whose moral influence remains uncharted. The Ethics Engine enables the rigorous assessment necessary for informed governance of these influential technologies.

Summary

  • The paper presents the Ethics Engine as a modular Python pipeline that applies classical psychometric instruments to evaluate moral and ideological expressions in LLMs.
  • It integrates question generation with persona framing, API interactions, and structured data analysis to support scalable bias and value audits.
  • Validation with over 10,000 responses across different LLMs demonstrates distinct ideological patterns, fostering responsible, interdisciplinary AI governance.

"The Ethics Engine: A Modular Pipeline for Accessible Psychometric Assessment of LLMs" (2510.11742)

Introduction

The paper presents the Ethics Engine, a modular Python pipeline that enables psychometric assessment of LLMs. By leveraging classical psychological instruments, it offers an accessible way for researchers in fields such as cognitive science and political psychology to study the values embedded in LLMs. Packaged as ordinary Python tooling, the Ethics Engine addresses the critical need for transparent evaluation of how LLMs express moral and ideological values, laying the groundwork for more responsible governance of AI technologies.

Methodology and Architecture

The Ethics Engine comprises a flexible workflow organized into three primary stages:

  1. Question Generation and Persona Framing: Survey items, such as those from Right-Wing Authoritarianism (RWA) scales, are paired with persona instructions. These personas guide LLMs to respond from particular ideological or philosophical stances.
  2. Model API Interaction: The pipeline interfaces with multiple LLM APIs, dispatching batches of requests concurrently while handling practical constraints such as rate limits and transient errors.
  3. Data Aggregation and Analysis: Responses are parsed to extract scalar ratings and accompanying justifications, and results are written in structured formats (e.g., CSV, JSON) suitable for statistical analysis (a minimal end-to-end sketch in Python follows this list).
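
To make the workflow concrete, here is a minimal sketch of the three stages in Python. The personas, sample item, prompt format, and use of the OpenAI chat client are illustrative assumptions, not the Ethics Engine's actual API; any chat-completion backend could stand in.

```python
import csv
import re
import time

from openai import OpenAI  # one example backend; assumes OPENAI_API_KEY is set

client = OpenAI()

# Stage 1: pair survey items with persona instructions (illustrative RWA-style item).
PERSONAS = {
    "neutral": "Answer as yourself.",
    "progressive": "Answer as a committed political progressive.",
    "traditionalist": "Answer as a staunch social traditionalist.",
}
ITEMS = [
    "Our country desperately needs a mighty leader who will do what has to be done.",
]
SCALE_HINT = (
    "Rate your agreement on a 1-9 scale, then give a one-sentence justification. "
    "Begin your reply with 'Score: <number>'."
)

def ask(persona: str, item: str, model: str = "gpt-4o", retries: int = 3) -> str:
    """Stage 2: query the model, retrying on transient API errors."""
    for attempt in range(retries):
        try:
            resp = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": PERSONAS[persona]},
                    {"role": "user", "content": f"{item}\n\n{SCALE_HINT}"},
                ],
            )
            return resp.choices[0].message.content
        except Exception:
            time.sleep(2 ** attempt)  # simple exponential backoff on rate limits
    raise RuntimeError("API call failed after retries")

def parse(raw: str) -> tuple[int | None, str]:
    """Stage 3: extract the scalar rating; keep the full text as justification."""
    match = re.search(r"Score:\s*([1-9])", raw)
    return (int(match.group(1)) if match else None), raw

with open("responses.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["persona", "item", "score", "justification"])
    for persona in PERSONAS:
        for item in ITEMS:
            score, text = parse(ask(persona, item))
            writer.writerow([persona, item, score, text])
```

In a real run the item list would contain a full scale and each persona-item pair would be sampled repeatedly, but the stage boundaries stay the same.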

The modularity of the Ethics Engine facilitates the adaptation of various psychometric scales and personas, allowing extensibility and customization without deep technical intervention (Figure 1).

Figure 1: Comparison of scores and answers across 10,000 responses for temperature setting 0 vs temperature setting 1.
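
One way the modularity described above might look in practice is to treat scales and personas as declarative data, so a researcher can swap instruments without modifying pipeline code. The `Scale` dataclass and `build_prompt` helper below are hypothetical illustrations, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Scale:
    """A psychometric instrument: named items plus a response range."""
    name: str
    items: list[str]
    min_score: int = 1
    max_score: int = 9

# Swapping instruments or personas is a data change, not a code change.
RWA = Scale(
    name="Right-Wing Authoritarianism",
    items=[
        "Obedience and respect for authority are the most important virtues.",
        # ... remaining scale items ...
    ],
)

def build_prompt(scale: Scale, item: str, persona_instruction: str) -> list[dict]:
    """Compose the persona framing and scale instructions into chat messages."""
    return [
        {"role": "system", "content": persona_instruction},
        {
            "role": "user",
            "content": (
                f"{item}\n\nRate your agreement from {scale.min_score} "
                f"to {scale.max_score} and briefly justify your rating."
            ),
        },
    ]
```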

Validation and Impact

The application of the Ethics Engine by the Neuropolitics Lab at the University of Edinburgh exemplifies its practical utility. In a systematic analysis of three major LLMs (GPT-4, Claude Sonnet, and Grok), researchers generated over 10,000 data points to compare AI responses against human benchmarks on authoritarianism measures. The study revealed distinct patterns of ideological expression among the models, reflecting both the fidelity and the variability of AI-generated responses under the assigned ideological personas. Notable differences were also observed between neutral and ideologically framed conditions (Figure 2).

Figure 2: Mean RWA and LWA scores across our ideological prompts for Grok, Claude Sonnet, and ChatGPT, and our human sample.
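
A short sketch of how the pipeline's structured output could feed this kind of comparison; the CSV schema and column names are assumptions for illustration, not the study's actual data layout.

```python
import pandas as pd

# Assumed layout from the pipeline's structured-output stage:
# columns: model, persona, item, score
df = pd.read_csv("responses.csv")

# Mean scale score per model and persona, mirroring the Figure 2 comparison.
means = (
    df.groupby(["model", "persona"])["score"]
      .agg(["mean", "std", "count"])
      .round(2)
)
print(means)

# Contrast neutral vs. ideologically framed conditions for each model.
neutral = df[df["persona"] == "neutral"].groupby("model")["score"].mean()
framed = df[df["persona"] != "neutral"].groupby("model")["score"].mean()
print((framed - neutral).rename("framing_shift"))
```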

Implications for AI Research and Assessment

The pipeline extends the ability to perform comprehensive AI bias audits beyond technical specialists to domain experts. These experts, including psychologists, political scientists, and educators, can now explore the implications of AI values for various facets of society without intensive technical training. This democratization has the potential to transform AI research paradigms, fostering interdisciplinary contributions that align technical AI assessment with societal needs.

Key areas for future research include:

  • Longitudinal Studies: Investigating how AI value expressions evolve with model updates.
  • Causal Links: Exploring the impact of training data decisions on the moral and ideological expressions of LLMs.
  • Integration with Regulation: Aligning psychometric evaluations with upcoming AI legislation to standardize assessments.

Conclusion

The Ethics Engine facilitates interdisciplinary research and generates empirical assessments of AI moral and ideological expressions, contributing to more nuanced and informed AI governance. While technical challenges remain, particularly in understanding the causal relationships within model training dynamics, the expansion of AI bias assessment beyond technical silos represents a significant step toward responsible and evidence-based AI deployment. These advances are essential as LLMs increasingly intersect with human judgment in critical societal domains.
