Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering (1811.11934v1)

Published 29 Nov 2018 in cs.CL

Abstract: This paper describes a novel hierarchical attention network for reading comprehension style question answering, which aims to answer questions for a given narrative paragraph. In the proposed method, attention and fusion are conducted horizontally and vertically across layers at different levels of granularity between question and paragraph. Specifically, it first encodes the question and paragraph with fine-grained language embeddings to better capture the respective representations at the semantic level. Then it proposes a multi-granularity fusion approach to fully fuse information from both global and attended representations. Finally, it introduces a hierarchical attention network to focus on the answer span progressively with multi-level soft alignment. Extensive experiments on the large-scale SQuAD and TriviaQA datasets validate the effectiveness of the proposed method. At the time of writing the paper (Jan. 12th, 2018), our model achieves the first position on the SQuAD leaderboard for both single and ensemble models. We also achieve state-of-the-art results on the TriviaQA, AddSent and AddOneSent datasets.

Citations (170)

Summary

  • The paper proposes a Multi-Granularity Hierarchical Attention Fusion network using encoder, attention, and match layers to align question and passage representations for question answering.
  • Experiments on SQuAD and TriviaQA show the model achieved state-of-the-art performance with 79.2% EM and 86.6% F1 on the SQuAD hidden test set.
  • The model's fine-grained attention fusion and strong empirical performance have implications for broader NLP tasks, data fusion in AI systems, and building robust question-answering systems.

Multi-Granularity Hierarchical Attention Fusion Networks for Reading Comprehension and Question Answering

This paper presents a model for reading comprehension-style question answering, specifically the task of extracting answer spans from narrative paragraphs. The proposed Multi-Granularity Hierarchical Attention Fusion network conducts attention and fusion at varying levels of granularity, both horizontally and vertically across layers. This design aligns the representations of questions and narratives and refines them to predict answer spans accurately.

Methodology and Innovation

The network comprises three major components:

  1. Encoder Layer: This layer employs pre-trained embeddings together with recurrent neural networks to encode semantic representations of questions and passages separately. Incorporating GloVe word embeddings and ELMo contextual embeddings allows the model to capture diverse semantic and syntactic information crucial for understanding the text.
  2. Attention Layer: A hierarchical attention structure is the core innovation. Using co-attention and self-attention mechanisms, the network aligns question and passage representations layer by layer. The attended representation is then passed through a fusion function that combines the locally aligned and the global original contexts, enhancing semantic understanding (a minimal sketch of this step follows the list).
  3. Match Layer and Output Layer: These layers use a bilinear matching function and a pointer-network-style output to predict answer boundaries in the text. The model computes start and end probabilities for the answer span within the passage, building on the context derived from the previous layers (see the second sketch below).
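
To make the attention-and-fusion step concrete, the following is a minimal PyTorch-style sketch of a co-attention block with a concatenation-based fusion function. It is an illustration under assumptions (a single attention hop and a ReLU fusion over the concatenated original, attended, element-wise product, and difference vectors), not the authors' released implementation; the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoAttentionFusion(nn.Module):
    """Illustrative co-attention + fusion block (a sketch, not the paper's exact layer).

    Aligns each passage position against the question, then fuses the
    attended ("local aligned") vector with the original ("global") vector.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        # Fusion over [original; attended; original * attended; original - attended]
        self.fuse = nn.Linear(4 * hidden_size, hidden_size)

    def forward(self, passage: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        # passage: (batch, P, H)   question: (batch, Q, H)
        scores = torch.bmm(passage, question.transpose(1, 2))   # (batch, P, Q)
        align = F.softmax(scores, dim=-1)                       # passage-to-question attention
        attended = torch.bmm(align, question)                   # (batch, P, H)
        fused = torch.cat(
            [passage, attended, passage * attended, passage - attended], dim=-1
        )
        return F.relu(self.fuse(fused))                         # (batch, P, H)
```

Stacking several such blocks, interleaved with self-attention over the fused passage representation, gives the progressive multi-level alignment the paper describes.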
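
Similarly, the match and output layers can be sketched as a bilinear start/end scorer over passage positions against a pooled question vector. This is again a simplified, assumed form with hypothetical names; the paper's pointer mechanism may condition the end prediction on the start and use a richer question summary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BilinearSpanPointer(nn.Module):
    """Illustrative bilinear start/end pointer (a sketch, not the authors' exact layer)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.start_bilinear = nn.Linear(hidden_size, hidden_size, bias=False)
        self.end_bilinear = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, passage: torch.Tensor, question_vec: torch.Tensor):
        # passage: (batch, P, H)   question_vec: (batch, H) pooled question summary
        start_logits = torch.bmm(
            passage, self.start_bilinear(question_vec).unsqueeze(-1)
        ).squeeze(-1)                                            # (batch, P)
        end_logits = torch.bmm(
            passage, self.end_bilinear(question_vec).unsqueeze(-1)
        ).squeeze(-1)                                            # (batch, P)
        # Probabilities over passage positions for the answer start and end
        return F.softmax(start_logits, dim=-1), F.softmax(end_logits, dim=-1)
```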

Experimental Validation

Extensive experimentation on SQuAD, TriviaQA, and the adversarial AddSent and AddOneSent datasets demonstrates the strong performance of the proposed model. The model achieved significant improvements in EM (Exact Match) and F1 scores, leading the SQuAD leaderboard at the time of writing and surpassing human-level EM on that benchmark.

  • For the single model, the scores reached 79.2% EM and 86.6% F1 on the hidden test set.
  • Ensemble models further increased performance to 82.4% EM and 88.6% F1.

These results confirm the network’s efficacy in capturing and reflecting complex semantic interactions between questions and passages, thus accurately determining answer spans.

Implications and Future Work

The implications of this research are multi-faceted:

  • Computational Linguistics: The fine-grained attention mechanisms integrated into neural networks can motivate enhancements in language models, benefiting various NLP tasks beyond reading comprehension.
  • AI Development: The hierarchical attention fusion approach can inspire novel architectures for tasks involving complex data fusion, reasoning, and inference.
  • Robustness: The model's performance under adversarial conditions opens avenues for developing robust question-answering systems resistant to malicious inputs.

Future research may focus on adapting this multi-granularity attention approach to broader applications in AI and NLP, examining scalability and adaptability to multimodal inputs or broader language tasks. Integrating additional context-aware embeddings or experimenting with transformer architectures could further refine the model’s performance and efficiency, contributing to the advancement in machine understanding and AI-driven content analysis.

This paper delineates a significant step in the evolution of reading comprehension systems, promising enhancements across both theoretical research and practical applications within natural language processing fields.