Evaluation of Social Roles in System Prompts for LLMs
The paper "Is 'A Helpful Assistant' the Best Role for LLMs? A Systematic Evaluation of Social Roles in System Prompts" presents a systematic analysis of how social roles embedded in system prompts influence the performance of LLMs. The work assesses whether assigning a specific role to an LLM yields better answers than the conventional "helpful assistant" persona.
Methodology and Experimental Setup
The researchers evaluated three popular open-source LLMs (FLAN-T5-XXL, LLaMA2, and OPT-instruct) on 2,457 questions drawn from the MMLU dataset and grouped into knowledge domains. Notably, they introduced 162 roles, spanning interpersonal relationships and occupations, to probe how role choice affects LLM performance.
The experimental setup involved three types of prompts: Role Prompts, Audience Prompts, and Interpersonal Prompts. These respectively assign a role to the LLM, define its audience, or establish a specific interpersonal relationship context. This design allowed the authors to test whether the way a role is framed affects the accuracy of the LLMs' responses.
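The three prompt types can be sketched as simple templates. The exact phrasings below are illustrative assumptions, not the wording used in the paper:

```python
# Illustrative templates for the three prompt types described above.
# The phrasings are assumptions for demonstration, not the paper's exact prompts.

def role_prompt(role: str) -> str:
    # Role Prompt: assigns the role directly to the model.
    return f"You are a {role}."

def audience_prompt(role: str) -> str:
    # Audience Prompt: tells the model whom it is addressing.
    return f"You are talking to a {role}."

def interpersonal_prompt(role: str) -> str:
    # Interpersonal Prompt: establishes a relationship context.
    return f"You are talking to your {role}."

system_prompt = audience_prompt("lawyer")
print(system_prompt)  # -> You are talking to a lawyer.
```

Any of the 162 roles can be slotted into these templates, which is what makes a systematic sweep over roles and prompt types tractable.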
Key Findings
The primary finding is that incorporating social roles into prompts improves model performance, by over 20% in the reported comparisons against neutral or control prompts. Key nuances include:
- Interpersonal Importance: Non-intimate interpersonal roles like "social", "school", and "work" tend to yield better results than more intimate roles like "family" and "romantic".
- Occupational Roles Variation: Occupations in politics, psychology, and law generally perform well, suggesting some roles align better with specific question domains.
- Gender Neutrality: Gender-neutral roles generally outperformed gendered roles, suggesting a bias favoring less explicitly gendered contexts.
- Role Specification Impact: Specifying the audience in prompts led to better performance than defining a role for the model or establishing interpersonal relationships. This highlights the effectiveness of instructing the model on the audience it addresses.
Analytical Insights
The paper explored the mechanisms behind these performance gains through analyses of n-gram frequency, prompt-question similarity, and prompt perplexity. The n-gram frequency of roles correlated only weakly with accuracy. By contrast, higher prompt-question similarity modestly correlated with improved performance, and lower perplexity was associated with better responses, suggesting that more coherent, natural-sounding prompts help the model.
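As a rough illustration of the prompt-question similarity analysis, a bag-of-words cosine similarity can be computed as below. This lexical version is a simplification; the paper's similarity measure may be embedding-based:

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    # Bag-of-words cosine similarity between two texts (a simplified
    # stand-in for whatever similarity measure the paper actually used).
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[tok] * b[tok] for tok in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

prompt = "You are talking to a lawyer."
question = "Should a lawyer disclose evidence to the court?"
print(round(cosine_similarity(prompt, question), 3))
```

Scoring each (role prompt, question) pair this way and correlating the scores with per-question accuracy reproduces the shape of the analysis, if not its exact numbers.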
Automatic Role Selection
Experiments with automatic role-prediction strategies revealed how difficult it is to reliably choose the optimal role. Stochastic methods and classifiers outperformed random selection but still fell short of consistently identifying the best-performing role.
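One simple baseline in this spirit is to pick the role whose name is most similar to the question. This toy heuristic is a hypothetical stand-in for the paper's prediction strategies, shown only to make the selection problem concrete:

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    # Bag-of-words cosine similarity between two texts.
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[tok] * b[tok] for tok in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def pick_role(question: str, roles: list[str]) -> str:
    # Toy selection heuristic: choose the role whose name overlaps
    # most with the question text. NOT the paper's method; real
    # strategies (e.g. trained classifiers) work far better than this.
    return max(roles, key=lambda role: cosine_similarity(role, question))

roles = ["lawyer", "doctor", "teacher"]
print(pick_role("Should a lawyer disclose evidence to the court?", roles))
# -> lawyer
```

The gap the paper documents is precisely that even much stronger predictors than this still miss the empirically best role for many questions.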
Implications and Future Work
The research provides substantial insights for designing system prompts in AI systems and opens avenues for role-centered conversational AI development. The demonstrated performance differences underscore the nuanced impacts of social roles, encouraging further exploration into personalizing AI interactions toward audience-specific contexts.
While the paper contributes comprehensively to understanding role-based prompting, challenges persist in automating the optimal role selection process. Future research might explore dependencies between task, role specificity, and response quality in broader contexts and LLMs, including closed-source models.