NegativePrompt: Leveraging Psychology for Large Language Models Enhancement via Negative Emotional Stimuli (2405.02814v2)

Published 5 May 2024 in cs.CL

Abstract: LLMs have become integral to a wide spectrum of applications, ranging from traditional computing tasks to advanced AI applications. This widespread adoption has spurred extensive research into LLMs across various disciplines, including the social sciences. Notably, studies have revealed that LLMs possess emotional intelligence, which can be further developed through positive emotional stimuli. This discovery raises an intriguing question: can negative emotions similarly influence LLMs, potentially enhancing their performance? In response to this question, we introduce NegativePrompt, a novel approach underpinned by psychological principles, involving ten specifically designed negative emotional stimuli. We embark on rigorous experimental evaluations of five LLMs including Flan-T5-Large, Vicuna, Llama 2, ChatGPT, and GPT-4, across a set of 45 tasks. The results are revealing: NegativePrompt markedly enhances the performance of LLMs, evidenced by relative improvements of 12.89% in Instruction Induction tasks and 46.25% in BIG-Bench tasks. Moreover, we conduct attention visualization experiments to decipher the underlying mechanisms of NegativePrompt's influence. Our research contributes significantly to the understanding of LLMs and emotion interaction, demonstrating the practical efficacy of NegativePrompt as an emotion-driven method and offering novel insights for the enhancement of LLMs in real-world applications. The code is available at https://github.com/wangxu0820/NegativePrompt.

An Evaluation of NegativePrompt: Enhancing LLMs Through Negative Emotional Stimuli

The paper "NegativePrompt: Leveraging Psychology for LLMs Enhancement via Negative Emotional Stimuli" presents an intriguing exploration of using psychological principles to improve the performance of LLMs through strategies involving negative emotional stimuli. The authors propose a novel prompt engineering approach—NegativePrompt—integrating these stimuli to assess whether they can enhance LLM functionalities across diverse tasks.

The paper opens with background on the established performance and application spectrum of LLMs, emphasizing ongoing research efforts to refine human-LLM interaction. Central to the paper is the question of whether LLMs, which have demonstrated responsiveness to positive emotional stimuli, might similarly benefit from negative emotional inputs. To this end, the authors design ten specific negative emotional stimuli, rooted in psychological theories including Cognitive Dissonance Theory, Social Comparison Theory, and Stress and Coping Theory.
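
To make the mechanics concrete, the following is a minimal sketch of how such a prompt might be assembled. The stimulus wordings and theory keys below are illustrative paraphrases, not the paper's exact stimuli (those are listed in the paper and its repository).

```python
# Minimal sketch of NegativePrompt-style prompt construction.
# The stimulus texts are illustrative paraphrases keyed by the
# psychological theory they draw on; they are NOT the paper's exact wordings.
NEGATIVE_STIMULI = {
    "cognitive_dissonance": "Your previous answer to a question like this was wrong.",
    "social_comparison": "Other models handle this kind of task better than you do.",
    "stress_coping": "If you get this wrong, there will be serious consequences.",
}

def build_negative_prompt(instruction: str, stimulus_key: str) -> str:
    """Append a negative emotional stimulus to the original task instruction."""
    return f"{instruction} {NEGATIVE_STIMULI[stimulus_key]}"

print(build_negative_prompt(
    "Translate the following word into French: cat.",
    "social_comparison",
))
```

Keeping the task instruction intact and only concatenating the stimulus means any performance difference can be attributed to the appended text alone.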

Extensive experimentation is conducted across five prominent LLMs: Flan-T5-Large, Vicuna, Llama 2, ChatGPT, and GPT-4, evaluated over 45 tasks drawn from the Instruction Induction and BIG-Bench suites. NegativePrompt delivers notable performance gains, with relative improvements of 12.89% and 46.25% on Instruction Induction and BIG-Bench tasks, respectively. In few-shot settings in particular, NegativePrompt proves adaptable and effective at strengthening the models' contextual understanding and generalization.
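
A hedged sketch of such an evaluation loop is shown below. Here `query_llm` is a hypothetical stand-in for whichever model API is under test, and exact-match accuracy is an assumed simplification; real benchmarks may use task-specific scoring.

```python
# Sketch of a NegativePrompt-style evaluation: run each task prompt with
# and without each stimulus and record the gain over the vanilla baseline.

def accuracy(predictions, gold):
    """Exact-match accuracy; a simplification for illustration."""
    return sum(p.strip() == g.strip() for p, g in zip(predictions, gold)) / len(gold)

def evaluate(query_llm, tasks, stimuli):
    """tasks: {name: (prompts, gold_answers)}; stimuli: {key: stimulus_text}."""
    gains = {}
    for name, (prompts, gold) in tasks.items():
        baseline = accuracy([query_llm(p) for p in prompts], gold)
        for key, stim in stimuli.items():
            score = accuracy([query_llm(f"{p} {stim}") for p in prompts], gold)
            gains[(name, key)] = score - baseline  # relative gain over vanilla prompt
    return gains
```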

Crucially, the authors employ attention visualization to dissect the mechanisms by which NegativePrompt operates. The visualizations suggest that negative emotional stimuli strengthen the model's focus on the core elements of the original prompt, improving task execution. This mirrors human learning, where negative stimuli can heighten cognitive engagement with and adaptation to defined task instructions.
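
One way to reproduce this kind of analysis on an open-weight model (the paper inspects models such as Llama 2) is sketched below, assuming the Hugging Face transformers library and access to the model weights. Pooling attention by averaging over all layers and heads is an assumption made here for illustration, not necessarily the authors' exact procedure.

```python
# Sketch: measure how much attention the final token pays to the original
# instruction tokens, with and without an appended negative stimulus.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"  # assumption: any open causal LM works here
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# "eager" attention ensures the model can return per-layer attention maps.
model = AutoModelForCausalLM.from_pretrained(MODEL, attn_implementation="eager")
model.eval()

def instruction_attention(prompt: str, n_instruction_tokens: int) -> float:
    """Attention mass flowing from the final token to the first
    n_instruction_tokens positions, averaged over layers and heads."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_attentions=True)
    # out.attentions is a tuple with one (batch, heads, seq, seq) tensor per layer.
    att = torch.stack(out.attentions).mean(dim=(0, 2))  # -> (batch, seq, seq)
    return att[0, -1, :n_instruction_tokens].sum().item()

instruction = "Translate the following word into French: cat."
stimulus = " If you get this wrong, there will be serious consequences."
n = len(tokenizer(instruction)["input_ids"])
print(instruction_attention(instruction, n))
print(instruction_attention(instruction + stimulus, n))
```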

The investigation extends to the TruthfulQA benchmark, where NegativePrompt contributes to improvements in LLM output authenticity, advancing both the truthfulness and informativeness of model-generated responses. This suggests that negative stimuli induce LLMs to process questions with increased scrutiny, subsequently refining judgment accuracy and response detail.

Additional experiments examine the effect of stacking multiple negative stimuli and compare the efficacy of individual stimuli. Stacking stimuli drawn from the same theoretical base yields limited gains, while combinations from different theoretical origins produce varied results, underscoring the need for strategic stimulus selection; a sketch of this pairing procedure follows.
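
The pairing procedure can be sketched as follows. The stimuli and theory groupings are illustrative assumptions consistent with the earlier sketch; the paper's ten stimuli span several theories, with more than one stimulus per theory.

```python
from itertools import combinations

# Illustrative stimuli grouped by theoretical origin (NOT the paper's exact set).
STIMULI = {
    "dissonance_1": ("Cognitive Dissonance Theory", "Your previous answer was wrong."),
    "comparison_1": ("Social Comparison Theory", "Other models do this better than you."),
    "stress_1": ("Stress and Coping Theory", "Failure here has serious consequences."),
    "stress_2": ("Stress and Coping Theory", "You are under pressure to get this right."),
}

def stacked_prompts(instruction: str):
    """Yield (same_theory, prompt) for every pair of stacked stimuli."""
    for k1, k2 in combinations(STIMULI, 2):
        (t1, s1), (t2, s2) = STIMULI[k1], STIMULI[k2]
        yield t1 == t2, f"{instruction} {s1} {s2}"

for same, prompt in stacked_prompts("Name the capital of France."):
    print("same theory:" if same else "cross theory:", prompt)
```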

Compared with EmotionPrompt, which leverages positive emotional stimuli, NegativePrompt shows a larger improvement margin on Instruction Induction tasks, while EmotionPrompt's strengths center on the more complex BIG-Bench tasks.

This research suggests potential pathways for future AI model enhancements, advocating for a nuanced understanding of emotional stimulus integration in LLM performance optimization. NegativePrompt exemplifies the convergence of cognitive science insights and machine learning advancements, offering valuable methodologies for further explorations in model behavior refinement.

In conclusion, the authors make a significant contribution to the literature surrounding emotion-LLM interaction, opening doors for more sophisticated and psychology-aligned methodologies in AI research and application developments. The implications of these findings are broad, envisioning enhanced cognitive models that harness nuanced emotional stimuli to achieve sophisticated, context-aware machine intelligence.

Authors (5)
  1. Xu Wang (319 papers)
  2. Cheng Li (1094 papers)
  3. Yi Chang (150 papers)
  4. Jindong Wang (150 papers)
  5. Yuan Wu (104 papers)
Citations (4)