
RESPER: Computationally Modelling Resisting Strategies in Persuasive Conversations (2101.10545v1)

Published 26 Jan 2021 in cs.CL and cs.AI

Abstract: Modelling persuasion strategies as predictors of task outcome has several real-world applications and has received considerable attention from the computational linguistics community. However, previous research has failed to account for the resisting strategies employed by an individual to foil such persuasion attempts. Grounded in prior literature in cognitive and social psychology, we propose a generalised framework for identifying resisting strategies in persuasive conversations. We instantiate our framework on two distinct datasets comprising persuasion and negotiation conversations. We also leverage a hierarchical sequence-labelling neural architecture to infer the aforementioned resisting strategies automatically. Our experiments reveal the asymmetry of power roles in non-collaborative goal-directed conversations and the benefits accrued from incorporating resisting strategies in predicting the final conversation outcome. We also investigate the role of different resisting strategies in the conversation outcome and glean insights that corroborate past findings. We make the code and the dataset of this work publicly available at https://github.com/americast/resper.
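
To make the "hierarchical sequence-labelling neural architecture" concrete, below is a minimal sketch of the general idea: a word-level encoder turns each utterance into a vector, a conversation-level encoder contextualises those vectors across turns, and a per-utterance classifier assigns a resisting-strategy label. This is an illustrative assumption, not the authors' exact RESPER model; all names and hyperparameters (HierarchicalTagger, NUM_STRATEGIES, etc.) are hypothetical, and the reference implementation lives at the GitHub link above.

```python
# Minimal sketch of a hierarchical sequence labeller for tagging each
# utterance in a conversation with a resisting strategy.
# NOT the authors' implementation; names and sizes are assumptions.
import torch
import torch.nn as nn

NUM_STRATEGIES = 8   # assumed number of resisting-strategy labels
VOCAB_SIZE = 10000   # assumed vocabulary size
EMB_DIM = 100
UTT_HIDDEN = 128
CONV_HIDDEN = 128


class HierarchicalTagger(nn.Module):
    """Utterance encoder -> conversation encoder -> per-utterance classifier."""

    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(VOCAB_SIZE, EMB_DIM, padding_idx=0)
        # Word-level BiGRU encodes each utterance into a fixed vector.
        self.utt_encoder = nn.GRU(EMB_DIM, UTT_HIDDEN,
                                  batch_first=True, bidirectional=True)
        # Conversation-level BiGRU contextualises utterance vectors across turns.
        self.conv_encoder = nn.GRU(2 * UTT_HIDDEN, CONV_HIDDEN,
                                   batch_first=True, bidirectional=True)
        # Linear head predicts a resisting-strategy label per utterance.
        self.classifier = nn.Linear(2 * CONV_HIDDEN, NUM_STRATEGIES)

    def forward(self, token_ids):
        # token_ids: (batch, num_utterances, num_tokens) of word indices
        batch, num_utts, num_toks = token_ids.shape
        flat = token_ids.view(batch * num_utts, num_toks)
        emb = self.embedding(flat)                     # (B*U, T, E)
        _, h = self.utt_encoder(emb)                   # h: (2, B*U, H)
        utt_vecs = torch.cat([h[0], h[1]], dim=-1)     # (B*U, 2H)
        utt_vecs = utt_vecs.view(batch, num_utts, -1)  # (B, U, 2H)
        conv_out, _ = self.conv_encoder(utt_vecs)      # (B, U, 2H')
        return self.classifier(conv_out)               # (B, U, NUM_STRATEGIES)


if __name__ == "__main__":
    model = HierarchicalTagger()
    # 2 conversations, 5 utterances each, 12 tokens per utterance
    fake_batch = torch.randint(1, VOCAB_SIZE, (2, 5, 12))
    logits = model(fake_batch)
    print(logits.shape)  # torch.Size([2, 5, 8]): one label distribution per utterance
```

Training such a model with a per-utterance cross-entropy loss yields the sequence-labelling behaviour described in the abstract; the paper's own code should be consulted for the actual encoders and label set.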

Authors (8)
  1. Ritam Dutt (19 papers)
  2. Sayan Sinha (8 papers)
  3. Rishabh Joshi (23 papers)
  4. Surya Shekhar Chakraborty (1 paper)
  5. Meredith Riggs (1 paper)
  6. Xinru Yan (5 papers)
  7. Haogang Bao (1 paper)
  8. Carolyn Penstein Rosé (8 papers)
Citations (15)

Summary

We haven't generated a summary for this paper yet.
