Two-stage Generative Question Answering on Temporal Knowledge Graph Using Large Language Models (2402.16568v2)
Abstract: Temporal knowledge graph question answering (TKGQA) is a challenging task, owing to the temporal constraints hidden in questions and the answers that must be sought from dynamic structured knowledge. Although large language models (LLMs) have made considerable progress in reasoning over structured data, their application to TKGQA remains relatively unexplored. This paper proposes a novel generative temporal knowledge graph question answering framework, GenTKGQA, which guides an LLM to answer temporal questions in two phases: Subgraph Retrieval and Answer Generation. First, we exploit the LLM's intrinsic knowledge to mine the temporal constraints and structural links in a question without extra training, narrowing the subgraph search space in both the temporal and structural dimensions. Next, we design virtual knowledge indicators that fuse the graph neural network signals of the subgraph with the text representations of the LLM in a non-shallow way, helping an open-source LLM deeply understand the temporal order and structural dependencies among the retrieved facts through instruction tuning. Experimental results on two widely used datasets demonstrate the superiority of our model.
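The two-phase pipeline described above can be sketched in miniature. The code below is an illustrative simplification, not the paper's implementation: `Fact`, `retrieve_subgraph`, and `build_answer_prompt` are hypothetical names, the temporal constraint and structural link are assumed to have already been mined from the question by the LLM, and plain text serialization stands in for the paper's virtual knowledge indicators, which fuse GNN embeddings with the LLM's text representations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    """A temporal KG quadruple (subject, relation, object, timestamp)."""
    subject: str
    relation: str
    obj: str
    time: int  # e.g. a year

def retrieve_subgraph(facts, relation, t_min, t_max, entity=None):
    """Stage 1 sketch: narrow the search space along the structural
    dimension (relation/entity links mined from the question) and the
    temporal dimension (time constraints mined from the question)."""
    keep = [
        f for f in facts
        if f.relation == relation
        and t_min <= f.time <= t_max
        and (entity is None or entity in (f.subject, f.obj))
    ]
    # Sort chronologically so the temporal order of facts is explicit.
    return sorted(keep, key=lambda f: f.time)

def build_answer_prompt(question, subgraph):
    """Stage 2 sketch: serialize the retrieved facts into an instruction
    prompt for the LLM (the paper instead injects fused GNN signals)."""
    lines = [f"{f.time}: ({f.subject}, {f.relation}, {f.obj})" for f in subgraph]
    return "Facts:\n" + "\n".join(lines) + f"\nQuestion: {question}\nAnswer:"

facts = [
    Fact("Obama", "president_of", "USA", 2009),
    Fact("Trump", "president_of", "USA", 2017),
]
# Mined constraints for "Who was president of the USA in 2009?":
# relation = president_of, time window = [2009, 2009].
sub = retrieve_subgraph(facts, "president_of", 2009, 2009)
prompt = build_answer_prompt("Who was president of the USA in 2009?", sub)
```

The design point the sketch preserves is that retrieval filters on both dimensions at once, so the generation stage only ever sees a small, temporally ordered subgraph rather than the full KG.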
- Yifu Gao
- Linbo Qiao
- Zhigang Kan
- Zhihua Wen
- Yongquan He
- Dongsheng Li