Relation-Aware Language-Graph Transformer for Question Answering (2212.00975v2)

Published 2 Dec 2022 in cs.CL and cs.AI

Abstract: Question Answering (QA) is a task that entails reasoning over natural language contexts, and many relevant works augment language models (LMs) with graph neural networks (GNNs) to encode the Knowledge Graph (KG) information. However, most existing GNN-based modules for QA do not take advantage of rich relational information of KGs and depend on limited information interaction between the LM and the KG. To address these issues, we propose Question Answering Transformer (QAT), which is designed to jointly reason over language and graphs with respect to entity relations in a unified manner. Specifically, QAT constructs Meta-Path tokens, which learn relation-centric embeddings based on diverse structural and semantic relations. Then, our Relation-Aware Self-Attention module comprehensively integrates different modalities via the Cross-Modal Relative Position Bias, which guides information exchange between relevant entities of different modalities. We validate the effectiveness of QAT on commonsense question answering datasets like CommonsenseQA and OpenBookQA, and on a medical question answering dataset, MedQA-USMLE. On all the datasets, our method achieves state-of-the-art performance. Our code is available at http://github.com/mlvlab/QAT.
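The abstract describes attention over a joint language-graph sequence where a learned bias, keyed by the relation linking two tokens, is added to the attention logits. The snippet below is a minimal sketch of that idea, not the authors' implementation: the module name, the `rel_ids` tensor, and the single-head layout are illustrative assumptions; see the official repository for the actual QAT code.

```python
import torch
import torch.nn as nn

class RelationAwareSelfAttention(nn.Module):
    """Sketch only: single-head self-attention over a joint sequence of
    LM tokens and Meta-Path tokens, with a learned bias added to the
    attention logits for each (token, token) pair depending on the KG
    relation (or lack of one) that connects them."""

    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Hypothetical choice: one learned scalar bias per relation type,
        # plus index 0 for same-modality / unrelated pairs.
        self.rel_bias = nn.Embedding(num_relations + 1, 1)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, rel_ids: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim) joint sequence of language and Meta-Path tokens
        # rel_ids: (B, N, N) integer ids; 0 = no cross-modal relation,
        #          1..num_relations = relation linking a cross-modal pair
        q, k, v = self.q(x), self.k(x), self.v(x)
        logits = torch.einsum("bid,bjd->bij", q, k) * self.scale
        # Cross-modal relative position bias added to the attention logits
        logits = logits + self.rel_bias(rel_ids).squeeze(-1)
        attn = logits.softmax(dim=-1)
        return torch.einsum("bij,bjd->bid", attn, v)

# Tiny usage example with random tensors
x = torch.randn(2, 16, 64)                       # 2 samples, 16 tokens, dim 64
rel_ids = torch.randint(0, 5, (2, 16, 16))       # 4 relation types + "none"
out = RelationAwareSelfAttention(64, 4)(x, rel_ids)  # (2, 16, 64)
```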

Authors (8)
  1. Jinyoung Park (46 papers)
  2. Hyeong Kyu Choi (10 papers)
  3. Juyeon Ko (6 papers)
  4. Hyeonjin Park (6 papers)
  5. Ji-Hoon Kim (65 papers)
  6. Jisu Jeong (24 papers)
  7. Kyungmin Kim (37 papers)
  8. Hyunwoo J. Kim (70 papers)
Citations (10)
