A Neural Architecture Mimicking Humans End-to-End for Natural Language Inference (1611.04741v2)

Published 15 Nov 2016 in cs.CL

Abstract: In this work we use recent advances in representation learning to propose a neural architecture for the problem of natural language inference. Our approach is designed to mimic how a human carries out natural language inference given two statements. The model uses variants of Long Short-Term Memory (LSTM) networks, an attention mechanism, and composable neural networks to perform the task. Each part of our model can be mapped to a clear function a human performs in the overall task of natural language inference. The model is end-to-end differentiable, enabling training by stochastic gradient descent. On the Stanford Natural Language Inference (SNLI) dataset, the proposed model achieves better accuracy than all previously published models.
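
The abstract names the building blocks (LSTM encoders, attention between the two statements, and a differentiable classifier trained with SGD) without giving the architecture itself. A minimal PyTorch sketch of that general pattern is shown below; it is not the paper's exact model, and the layer sizes, the mean-pooled hypothesis summary, and the dot-product attention form are all illustrative assumptions.

```python
# Sketch of an attention-based NLI classifier (assumed structure, not the
# paper's exact architecture): a shared BiLSTM encodes premise and hypothesis,
# the hypothesis summary attends over premise tokens, and standard matching
# features feed a 3-way classifier (entailment / contradiction / neutral).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveNLI(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Shared bidirectional LSTM encoder for both statements.
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        enc_dim = 2 * hidden_dim
        # Classifier over [premise_summary; hypothesis; |diff|; product].
        self.classifier = nn.Sequential(
            nn.Linear(4 * enc_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, premise, hypothesis):
        # Encode token-id tensors of shape (B, Tp) and (B, Th).
        p, _ = self.encoder(self.embed(premise))     # (B, Tp, 2H)
        h, _ = self.encoder(self.embed(hypothesis))  # (B, Th, 2H)
        h_vec = h.mean(dim=1)                        # (B, 2H) hypothesis summary
        # Dot-product attention: score each premise token against the summary.
        scores = torch.bmm(p, h_vec.unsqueeze(2)).squeeze(2)  # (B, Tp)
        alpha = F.softmax(scores, dim=1)
        p_vec = torch.bmm(alpha.unsqueeze(1), p).squeeze(1)   # (B, 2H)
        # Common matching features, then the 3-way decision.
        feats = torch.cat([p_vec, h_vec,
                           (p_vec - h_vec).abs(), p_vec * h_vec], dim=1)
        return self.classifier(feats)

# End-to-end differentiable, so it trains with SGD as the abstract describes:
# model = AttentiveNLI(vocab_size=30000)
# loss = F.cross_entropy(model(premise_ids, hypothesis_ids), labels)
```

The whole pipeline is one computation graph, which is the property the abstract emphasizes: every stage, encoding, attention, and classification, receives gradients from the SNLI cross-entropy loss.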

Authors (5)
  1. Biswajit Paria (12 papers)
  2. K. M. Annervaz (5 papers)
  3. Ambedkar Dukkipati (76 papers)
  4. Ankush Chatterjee (5 papers)
  5. Sanjay Podder (18 papers)
Citations (13)