AWE: Asymmetric Word Embedding for Textual Entailment (1809.04047v2)

Published 11 Sep 2018 in cs.CL

Abstract: Textual entailment is a fundamental task in natural language processing. It refers to the directional relation between text fragments in which the "premise" entails the "hypothesis". In recent years, deep learning methods have achieved great success on this task. Many of them consider the inter-sentence word-word interactions between premise-hypothesis pairs; however, few consider the "asymmetry" of these interactions. Unlike paraphrase identification or sentence-similarity evaluation, textual entailment is essentially the determination of a directional (asymmetric) relation between the premise and the hypothesis. In this paper, we propose a simple but effective way to enhance existing textual entailment algorithms by using asymmetric word embeddings. Experimental results on the SciTail and SNLI datasets show that the learned asymmetric word embeddings significantly improve word-word-interaction-based textual entailment models. Notably, the proposed AWE-DeIsTe model achieves a 2.1% accuracy improvement over the prior state of the art on SciTail.
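The core idea in the abstract can be sketched minimally: give each vocabulary word two embeddings, one used when the word appears in the premise and one when it appears in the hypothesis, so that word-word interaction scores become direction-sensitive. The sketch below is illustrative only; the embedding sizes, the dot-product interaction, and all variable names are assumptions, not the paper's exact parameterization.

```python
import numpy as np

# Illustrative sketch of asymmetric word embeddings (not the paper's exact
# model). Each word has TWO embeddings: a premise-side one and a
# hypothesis-side one. The interaction score between a premise word and a
# hypothesis word is then asymmetric: swapping the two sentences' roles
# changes the scores, capturing the directionality of entailment.

rng = np.random.default_rng(0)
vocab_size, dim = 100, 16  # assumed toy sizes

E_premise = rng.normal(size=(vocab_size, dim))     # premise-side embeddings
E_hypothesis = rng.normal(size=(vocab_size, dim))  # hypothesis-side embeddings

def interaction_matrix(premise_ids, hypothesis_ids):
    """Asymmetric word-word interaction: e[i, j] = p_i . h_j."""
    P = E_premise[premise_ids]        # (len_premise, dim)
    H = E_hypothesis[hypothesis_ids]  # (len_hypothesis, dim)
    return P @ H.T                    # (len_premise, len_hypothesis)

premise = [3, 7, 42]
hypothesis = [7, 3]

e = interaction_matrix(premise, hypothesis)
e_swapped = interaction_matrix(hypothesis, premise)

# With a single shared embedding table, swapping roles would merely
# transpose the interaction matrix; here it produces genuinely different
# scores, which is the "asymmetry" the abstract refers to.
print(np.allclose(e, e_swapped.T))
```

Such an interaction matrix can then be fed into any existing word-interaction-based entailment model (e.g. DeIsTe, which the abstract pairs with AWE) in place of a symmetric one.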

Authors (4)
  1. Tengfei Ma (73 papers)
  2. Chiamin Wu (2 papers)
  3. Cao Xiao (84 papers)
  4. Jimeng Sun (181 papers)
Citations (2)