
Self-Consistent Narrative Prompts on Abductive Natural Language Inference (2309.08303v1)

Published 15 Sep 2023 in cs.CL

Abstract: Abduction has long been seen as crucial for narrative comprehension and reasoning about everyday situations. The abductive natural language inference ($\alpha$NLI) task has been proposed for this purpose; this narrative text-based task aims to infer the most plausible hypothesis from a set of candidates given two observations. However, inter-sentential coherence and model consistency have not been well exploited in previous work on this task. In this work, we propose a prompt tuning model, $\alpha$-PACE, which takes self-consistency and inter-sentential coherence into consideration. In addition, we propose a general self-consistent framework that considers various narrative sequences (e.g., linear narrative and reverse chronology) to guide the pre-trained LLM in understanding the narrative context of the input. We conduct extensive experiments and thorough ablation studies to illustrate the necessity and effectiveness of $\alpha$-PACE. Our method shows significant improvement over a wide range of competitive baselines.
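To make the task setup concrete, the sketch below illustrates (but does not reproduce) the $\alpha$NLI formulation: given observations $O_1$ and $O_2$, each candidate hypothesis is scored by a pre-trained language model under two narrative orderings, a linear narrative and a reverse chronology. The model choice (`gpt2`), the prompt templates, and the simple averaging of the two orderings are illustrative assumptions, not the paper's $\alpha$-PACE method, which learns the prompts and enforces self-consistency between orderings.

```python
# Minimal sketch of the abductive NLI (alpha-NLI) setup, NOT the authors' alpha-PACE model.
# Given observations o1 and o2, score each candidate hypothesis by the language-model
# negative log-likelihood of a narrative prompt, under two orderings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative choice of LM
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sequence_nll(text: str) -> float:
    """Average negative log-likelihood of `text` under the LM (lower = more plausible)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean per-token cross-entropy
    return loss.item()

def score_hypothesis(o1: str, h: str, o2: str) -> float:
    # Two narrative sequences: linear (o1 -> h -> o2) and reverse chronology.
    linear = f"{o1} {h} {o2}"
    reverse = f"{o2} Before that, {h} Earlier, {o1}"
    # alpha-PACE learns how to combine such orderings self-consistently;
    # a plain average is used here only as a placeholder.
    return 0.5 * (sequence_nll(linear) + sequence_nll(reverse))

# Hypothetical example instance in the alpha-NLI format.
o1 = "Jenny left her window open when she went to work."
o2 = "When she came home, her room was soaked."
h1 = "It rained heavily during the day."
h2 = "She bought a new umbrella."

best = min([h1, h2], key=lambda h: score_hypothesis(o1, h, o2))
print("More plausible hypothesis:", best)
```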

Authors (7)
  1. Chunkit Chan (19 papers)
  2. Xin Liu (820 papers)
  3. Tsz Ho Chan (30 papers)
  4. Jiayang Cheng (12 papers)
  5. Yangqiu Song (196 papers)
  6. Ginny Wong (2 papers)
  7. Simon See (74 papers)
Citations (6)