
Reasoning Implicit Sentiment with Chain-of-Thought Prompting (2305.11255v4)

Published 18 May 2023 in cs.CL

Abstract: While sentiment analysis systems try to determine the sentiment polarities of given targets based on the key opinion expressions in input texts, in implicit sentiment analysis (ISA) the opinion cues come in an implicit and obscure manner. Thus detecting implicit sentiment requires the common-sense and multi-hop reasoning ability to infer the latent intent of opinion. Inspired by the recent chain-of-thought (CoT) idea, in this work we introduce a Three-hop Reasoning (THOR) CoT framework to mimic the human-like reasoning process for ISA. We design a three-step prompting principle for THOR to step-by-step induce the implicit aspect, opinion, and finally the sentiment polarity. Our THOR+Flan-T5 (11B) pushes the state-of-the-art (SoTA) by over 6% F1 on supervised setup. More strikingly, THOR+GPT3 (175B) boosts the SoTA by over 50% F1 on zero-shot setting. Our code is open at https://github.com/scofield7419/THOR-ISA.

Citations (74)

Summary

  • The paper presents a novel three-hop reasoning framework that leverages chain-of-thought prompting to address implicit sentiment analysis.
  • It demonstrates significant improvements, boosting F1 by over 6% in the supervised setup and by over 50% with GPT-3 in the zero-shot setting.
  • The study highlights the crucial role of model scale and self-consistency mechanisms in enhancing reasoning accuracy and sentiment prediction.

Introduction

Sentiment analysis has traditionally relied on deciphering explicit emotional expressions within texts. Implicit sentiment analysis (ISA), however, poses a greater challenge because the opinion cues are subtle and uncovering the underlying sentiment requires multi-hop reasoning. Recent advances in chain-of-thought (CoT) prompting with LLMs have opened new avenues for addressing ISA. This paper introduces a Three-hop Reasoning (THOR) framework built on the CoT idea to tackle the intricacies of ISA.

Three-hop Reasoning Framework

The THOR framework operates through a three-step prompting process that uses an LLM to infer, in turn, the implicit aspect, the latent opinion, and finally the sentiment polarity. This progression mirrors a human-like reasoning path, starting from the aspect under discussion and moving toward a sentiment conclusion. The paper also adopts a self-consistency mechanism, inspired by Wang et al. (2022b), which samples multiple reasoning chains and keeps the most consistent candidate answer to improve accuracy. For the supervised setup, the authors further introduce a reasoning-revision method in which intermediate reasoning steps are fed back as model inputs, guided by gold labels, to correct the reasoning pathway.
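
The sketch below illustrates this three-hop progression in Python, assuming a generic `llm(prompt) -> text` callable; the prompt wording is paraphrased for illustration and is not the paper's exact template (see the linked repository for the reference implementation).

```python
from collections import Counter
from typing import Callable

def three_hop_sentiment(llm: Callable[[str], str], context: str, target: str) -> str:
    """One pass of the three-hop reasoning chain.
    `llm` is any text-in/text-out completion function; the model call is up to you."""
    # Hop 1: induce the implicit aspect of the target that the sentence touches on.
    p1 = (f'Given the sentence "{context}", '
          f'which specific aspect of "{target}" is possibly mentioned?')
    aspect = llm(p1)

    # Hop 2: infer the latent opinion toward that aspect, conditioned on hop 1.
    p2 = (f"{p1} The mentioned aspect is: {aspect}. "
          f'What is the underlying opinion toward this aspect of "{target}"?')
    opinion = llm(p2)

    # Hop 3: conclude the sentiment polarity from the accumulated reasoning context.
    p3 = (f"{p2} The implicit opinion is: {opinion}. "
          f'Based on this, what is the sentiment polarity toward "{target}"? '
          f"Answer with positive, neutral, or negative.")
    return llm(p3).strip().lower()

def self_consistent_sentiment(llm: Callable[[str], str], context: str,
                              target: str, n_samples: int = 5) -> str:
    """Run the chain several times (with sampling enabled in `llm`) and take a
    majority vote over the final polarities, mirroring the self-consistency idea."""
    votes = Counter(three_hop_sentiment(llm, context, target) for _ in range(n_samples))
    return votes.most_common(1)[0][0]
```

Each hop conditions on the previous prompt and answer, so the final polarity query carries the inferred aspect and opinion as explicit context; the self-consistency wrapper simply votes over sampled chains.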

Experimental Results

The THOR framework delivers a significant improvement over existing methods. In the supervised setup, THOR-enhanced Flan-T5 (11B) surpassed the best-performing baseline by over 6% F1. Applied to GPT-3 (175B) without fine-tuning, THOR lifted the state-of-the-art zero-shot F1 by over 50%. These results underline the importance of model scale for CoT-based methods, with larger LLMs benefiting more from THOR's multi-hop reasoning.

Conclusion and Limitations

This paper contributes a pioneering approach to ISA by leveraging a CoT framework that mimics human reasoning. It establishes that CoT-style reasoning improves both aspect and sentiment prediction and suggests the potential of LLM-based CoT frameworks for other NLP tasks. The primary acknowledged limitation is THOR's diminished impact on smaller LLMs, which underscores the role of model scale in harnessing the framework's full power. Future research could explore improving THOR's efficacy across LLMs of various scales or applying it to other NLP challenges. The supplementary material, including the GitHub repository and real-case testing examples, reflects the thoroughness of the research and the practical applicability of the proposed framework.