When "A Helpful Assistant" Is Not Really Helpful: Personas in System Prompts Do Not Improve Performances of Large Language Models (2311.10054v3)

Published 16 Nov 2023 in cs.CL, cs.AI, cs.CY, cs.HC, and cs.LG

Abstract: Prompting serves as the major way humans interact with large language models (LLMs). Commercial AI systems commonly define the role of the LLM in system prompts. For example, ChatGPT uses "You are a helpful assistant" as part of its default system prompt. Despite current practices of adding personas to system prompts, it remains unclear how different personas affect a model's performance on objective tasks. In this study, we present a systematic evaluation of personas in system prompts. We curate a list of 162 roles covering 6 types of interpersonal relationships and 8 domains of expertise. Through extensive analysis of 4 popular families of LLMs and 2,410 factual questions, we demonstrate that adding personas in system prompts does not improve model performance across a range of questions compared to the control setting where no persona is added. Nevertheless, further analysis suggests that the gender, type, and domain of the persona can all influence the resulting prediction accuracies. We further experimented with a list of persona search strategies and found that, while aggregating results from the best persona for each question significantly improves prediction accuracy, automatically identifying the best persona is challenging, with predictions often performing no better than random selection. Overall, our findings suggest that while adding a persona may lead to performance gains in certain settings, the effect of each persona can be largely random. Code and data are available at https://github.com/Jiaxin-Pei/Prompting-with-Social-Roles.

Evaluation of Social Roles in System Prompts for LLMs

The paper "Is 'A Helpful Assistant' the Best Role for LLMs? A Systematic Evaluation of Social Roles in System Prompts" provides a meticulous analysis of how social roles embedded in system prompts can influence the performance of LLMs. The work systematically assesses whether defining LLMs as a particular role results in superior answers compared to the conventional "helpful assistant" role.

Methodology and Experimental Setup

The researchers experimented with popular open-source LLMs, including FLAN-T5-XXL, LLaMA2, and OPT-instruct, on 2,410 factual questions sourced from the MMLU dataset and categorized into knowledge domains. They curated 162 roles spanning 6 types of interpersonal relationships and 8 domains of expertise to probe how the choice of role influences LLM performance.
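To make the data setup concrete, here is a minimal sketch of how such a question pool could be assembled with the Hugging Face `datasets` library; the authors' exact subset, filtering, and domain grouping are assumptions here.

```python
from collections import defaultdict
from datasets import load_dataset

# Load the MMLU test split; the "all" config bundles every subject.
# The paper uses a curated subset of factual questions, so the exact
# filtering here is an assumption.
mmlu = load_dataset("cais/mmlu", "all", split="test")

# Group questions by subject to mirror the paper's domain categorization.
by_domain = defaultdict(list)
for ex in mmlu:
    by_domain[ex["subject"]].append(ex)

print(f"{len(mmlu)} questions across {len(by_domain)} subjects")
```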

The experimental setup involved three types of prompts: Role Prompts, Audience Prompts, and Interpersonal Prompts. These respectively assign a role to the LLM, define its audience, or establish a specific interpersonal relationship context. This design let the authors test whether the way a role is framed affects the accuracy of the LLM's responses.
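The templates below are a minimal sketch of how the three framings might be instantiated; the exact wording used in the paper is an assumption.

```python
# Hypothetical templates for the three prompt framings; the paper's
# exact wording may differ.
TEMPLATES = {
    "role": "You are a {role}.",                         # assign the model a role
    "audience": "You are talking to a {role}.",          # define the audience
    "interpersonal": "You are talking to your {role}.",  # relationship context
}

def build_system_prompt(style: str, role: str) -> str:
    """Render one of the three prompt framings for a given role."""
    return TEMPLATES[style].format(role=role)

# The same role under all three framings:
for style in ("role", "audience", "interpersonal"):
    print(build_system_prompt(style, "mentor"))
```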

Key Findings

The primary finding is that adding personas to system prompts does not improve model performance on average: across the question set, persona prompts perform no better than the no-persona control, and the effect of any individual persona is largely random. Still, several systematic nuances emerge (the underlying comparison is sketched after this list):

  1. Interpersonal Importance: Non-intimate interpersonal roles such as "social", "school", and "work" tend to yield better results than more intimate roles such as "family" and "romantic".
  2. Occupational Role Variation: Occupational roles in politics, psychology, and law generally perform well, suggesting that some roles align better with specific question domains.
  3. Gender Neutrality: Gender-neutral roles generally outperformed gendered roles, suggesting a bias favoring less explicitly gendered contexts.
  4. Role Specification Impact: Specifying the audience in prompts led to better performance than defining a role for the model or establishing interpersonal relationships. This highlights the effectiveness of instructing the model on the audience it addresses.
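The comparison behind these findings can be sketched as follows, with `ask_model` a hypothetical stand-in for querying any of the evaluated LLMs and returning a predicted answer index:

```python
def accuracy(ask_model, system_prompt, questions):
    """Fraction of questions answered correctly under a given system prompt.

    ask_model is a hypothetical callable (system_prompt, question, choices)
    -> predicted answer index; questions are MMLU-style records.
    """
    correct = sum(
        ask_model(system_prompt, q["question"], q["choices"]) == q["answer"]
        for q in questions
    )
    return correct / len(questions)

# Usage:
#   control_acc = accuracy(ask_model, None, questions)  # no-persona control
#   persona_acc = accuracy(ask_model, "You are a lawyer.", questions)
# The paper finds persona accuracies scatter around the control, with no
# reliable average gain from adding a persona.
```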

Analytical Insights

The paper probes the mechanisms behind these performance differences through analyses of n-gram frequency, prompt-question similarity, and prompt perplexity. The n-gram frequency of a role correlates only weakly with accuracy. By contrast, higher prompt-question similarity correlates modestly with better performance, and lower prompt perplexity, indicating more coherent and natural prompt constructions, is associated with better responses.
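As a rough illustration, both signals can be computed with off-the-shelf models; GPT-2 and all-MiniLM-L6-v2 stand in here for whatever scoring and embedding models the authors actually used.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sentence_transformers import SentenceTransformer, util

# Perplexity of a prompt under a small causal LM (GPT-2 is a stand-in
# for the authors' scoring model).
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean per-token negative log-likelihood
    return torch.exp(loss).item()

# Cosine similarity between prompt and question embeddings.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def prompt_question_similarity(prompt: str, question: str) -> float:
    emb = encoder.encode([prompt, question], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

print(perplexity("You are a lawyer."))
print(prompt_question_similarity("You are a lawyer.", "What is habeas corpus?"))
```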

Automatic Role Selection

Experiments with automatic role-prediction strategies reveal how difficult it is to identify the optimal role reliably. While aggregating results from the best persona for each question substantially improves accuracy, the automatic selection strategies tested often perform no better than choosing a persona at random.
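A minimal sketch of one such strategy, assuming a scikit-learn classifier over TF-IDF question features (the paper's actual features and models may differ), with toy stand-in data:

```python
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins: question texts paired with the persona that empirically
# scored best on each (in the paper, these labels come from per-question
# evaluation results).
train_qs = ["What is the penalty for perjury?", "Define operant conditioning."]
train_best = ["lawyer", "psychologist"]
roles = ["lawyer", "psychologist", "economist"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_qs, train_best)

test_qs = ["Explain classical conditioning."]
predicted = clf.predict(test_qs)
random_pick = [random.choice(roles) for _ in test_qs]
# Per the paper, answering with the predicted persona is often no more
# accurate than answering with a randomly chosen one.
```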

Implications and Future Work

The research offers practical guidance for designing system prompts in AI systems and opens avenues for role-aware conversational AI development. The largely random per-persona effects underscore how nuanced the impact of social roles is, encouraging further exploration of audience-specific framing in AI interactions.

While the paper contributes a comprehensive picture of role-based prompting, automating the selection of an optimal role remains an open challenge. Future research might explore the dependencies between task, role specificity, and response quality across broader settings and model families, including closed-source models.

Authors (5)
  1. Mingqian Zheng (4 papers)
  2. Jiaxin Pei (26 papers)
  3. David Jurgens (69 papers)
  4. Lajanugen Logeswaran (30 papers)
  5. Moontae Lee (54 papers)
Citations (10)