Self-Consistent Narrative Prompts on Abductive Natural Language Inference (2309.08303v1)
Abstract: Abduction has long been seen as crucial for narrative comprehension and reasoning about everyday situations. The abductive natural language inference ($\alpha$NLI) task frames this as a narrative text-based problem: given two observations, infer the most plausible hypothesis from a set of candidates. However, inter-sentential coherence and model consistency have not been well exploited in previous work on this task. We propose $\alpha$-PACE, a prompt tuning model that takes self-consistency and inter-sentential coherence into account. In addition, we propose a general self-consistent framework that considers various narrative sequences (e.g., linear narrative and reverse chronology) to guide the pre-trained language model in understanding the narrative context of the input. We conduct extensive experiments and thorough ablation studies to illustrate the necessity and effectiveness of $\alpha$-PACE, which significantly outperforms a wide range of competitive baselines.
- Chunkit Chan (19 papers)
- Xin Liu (820 papers)
- Tsz Ho Chan (30 papers)
- Jiayang Cheng (12 papers)
- Yangqiu Song (196 papers)
- Ginny Wong (2 papers)
- Simon See (74 papers)
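To make the $\alpha$NLI task format concrete, here is a minimal zero-shot sketch that ranks candidate hypotheses by the likelihood a pre-trained language model assigns to the linear narrative $O_1 \to H \to O_2$. This illustrates only the task setup described in the abstract, not the paper's $\alpha$-PACE prompt-tuning method; the scorer (GPT-2 via Hugging Face `transformers`), the helper names, and the worked example are all assumptions for illustration.

```python
# Zero-shot likelihood baseline for the alpha-NLI task format (a sketch,
# not alpha-PACE). Assumes the Hugging Face `transformers` library and GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sequence_log_likelihood(text: str) -> float:
    """Total log-likelihood of `text` under the language model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # `out.loss` is the mean cross-entropy over the predicted tokens,
    # so multiply by their count to recover a total log-likelihood.
    return -out.loss.item() * (ids.size(1) - 1)

def rank_hypotheses(o1: str, o2: str, hypotheses: list[str]) -> int:
    """Return the index of the hypothesis yielding the most plausible narrative."""
    # Linear narrative order: observation 1, hypothesis, observation 2.
    scores = [sequence_log_likelihood(f"{o1} {h} {o2}") for h in hypotheses]
    return max(range(len(hypotheses)), key=scores.__getitem__)

# Invented example in the alpha-NLI format (two observations, two candidates):
o1 = "Nadia left her bike unlocked outside the library."
o2 = "She had to walk home that evening."
h1 = "Someone stole her bike."
h2 = "She won a prize in the library raffle."
print(rank_hypotheses(o1, o2, [h1, h2]))  # expected: 0 (the theft hypothesis)
```

The paper's contribution goes beyond this baseline: rather than scoring a single fixed ordering, $\alpha$-PACE tunes prompts and enforces self-consistency across multiple narrative sequences (e.g., linear and reverse-chronological) of the same observations and hypothesis.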