
G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering (2402.07630v3)

Published 12 Feb 2024 in cs.LG

Abstract: Given a graph with textual attributes, we enable users to "chat with their graph": that is, to ask questions about the graph using a conversational interface. In response to a user's questions, our method provides textual replies and highlights the relevant parts of the graph. While existing works integrate LLMs and graph neural networks (GNNs) in various ways, they mostly focus on either conventional graph tasks (such as node, edge, and graph classification), or on answering simple graph queries on small or synthetic graphs. In contrast, we develop a flexible question-answering framework targeting real-world textual graphs, applicable to multiple applications including scene graph understanding, common sense reasoning, and knowledge graph reasoning. Toward this goal, we first develop a Graph Question Answering (GraphQA) benchmark with data collected from different tasks. Then, we propose our G-Retriever method, introducing the first retrieval-augmented generation (RAG) approach for general textual graphs, which can be fine-tuned to enhance graph understanding via soft prompting. To resist hallucination and to allow for textual graphs that greatly exceed the LLM's context window size, G-Retriever performs RAG over a graph by formulating this task as a Prize-Collecting Steiner Tree optimization problem. Empirical evaluations show that our method outperforms baselines on textual graph tasks from multiple domains, scales well with larger graph sizes, and mitigates hallucination. Our code and datasets are available at: https://github.com/XiaoxinHe/G-Retriever


The paper introduces G-Retriever, a retrieval-augmented generation (RAG) framework for understanding and question answering (QA) over textual graphs. By combining graph neural networks (GNNs) with LLMs, the approach gives users a conversational interface for interacting with and querying complex real-world textual graphs.

Key Contributions

GraphQA Benchmark

The paper addresses a significant gap in QA benchmarks tailored to graph modalities by presenting the GraphQA benchmark. This benchmark encompasses a diverse set of datasets: ExplaGraphs for commonsense reasoning, SceneGraphs for visual question answering, and WebQSP for knowledge graph-based multi-hop question answering. The standardization and processing of these datasets into the GraphQA format allow a comprehensive evaluation of models in answering a wide array of questions related to real-world graph applications.
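
As a concrete picture of the standardized format, the sketch below models each GraphQA instance as a question, a textual graph, and a ground-truth answer, which is the interface the abstract describes. The field and class names here are illustrative assumptions, not the benchmark's exact schema.

```python
# Illustrative (assumed) shape of a standardized GraphQA instance: field names
# are hypothetical, but each example pairs a question with a textual graph and
# a ground-truth answer, per the paper's description.
from dataclasses import dataclass

@dataclass
class TextualGraph:
    node_text: list[str]          # textual attribute per node
    edges: list[tuple[int, int]]  # (src, dst) node indices
    edge_text: list[str]          # textual attribute per edge

@dataclass
class GraphQAExample:
    question: str
    graph: TextualGraph
    answer: str
```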

G-Retriever Architecture

G-Retriever combines GNNs, LLMs, and RAG into a QA framework that scales to large textual graphs and resists hallucination. It operates in four stages:

  1. Indexing: Node and edge attributes are encoded with a pre-trained language model (SentenceBERT), and the resulting embeddings are stored in a nearest-neighbor data structure for efficient query processing.
  2. Retrieval: Using cosine similarity, the system retrieves the nodes and edges most semantically relevant to the query.
  3. Subgraph Construction: Retrieval is cast as a Prize-Collecting Steiner Tree (PCST) optimization problem, yielding a connected subgraph that covers the relevant nodes and edges while keeping the subgraph size manageable.
  4. Answer Generation: A GAT-based graph encoder models the retrieved subgraph; its output is projected into the LLM's vector space and combined with the query and a textualized form of the graph, and a frozen LLM, augmented via soft prompting, generates the final answer. Illustrative sketches of these steps follow.
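
To make steps 1 and 2 concrete, here is a minimal sketch of indexing and retrieval. It assumes the sentence-transformers package as the SentenceBERT-style encoder and substitutes a brute-force cosine-similarity top-k search for a dedicated nearest-neighbor index; the function names, model choice, and value of k are illustrative, not the authors' exact API.

```python
# Minimal sketch of the indexing + retrieval stages (steps 1-2).
# Assumes the `sentence-transformers` package; the top-k search is a
# brute-force stand-in for the nearest-neighbor index described in the paper.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # a SentenceBERT-style model

def build_index(texts: list[str]) -> np.ndarray:
    """Encode node/edge attribute texts into unit-normalized embeddings."""
    emb = encoder.encode(texts, convert_to_numpy=True)
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

def retrieve(query: str, index: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k items most cosine-similar to the query."""
    q = encoder.encode([query], convert_to_numpy=True)[0]
    q /= np.linalg.norm(q)
    scores = index @ q               # cosine similarity (vectors are normalized)
    return np.argsort(-scores)[:k]   # top-k node/edge indices

# Usage: node_texts / edge_texts come from the textual graph's attributes.
node_index = build_index(["soccer ball", "player", "goal post"])
top_nodes = retrieve("What is the player kicking?", node_index, k=2)
```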
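Step 3 can be implemented with an off-the-shelf PCST solver. The sketch below assumes the pcst_fast package; the prize and cost assignments (retrieval-rank prizes, uniform edge cost) are a simplified illustration of the idea rather than the paper's exact scheme.

```python
# Sketch of step 3: subgraph construction as a Prize-Collecting Steiner Tree.
# Assumes the `pcst_fast` package; prize/cost assignment below is a simplified
# illustration, not the paper's exact scheme.
import numpy as np
from pcst_fast import pcst_fast

def pcst_subgraph(edge_list, num_nodes, top_nodes, edge_cost=0.5):
    """Select a connected subgraph covering high-prize (retrieved) nodes.

    edge_list: (E, 2) int array of (src, dst) pairs
    top_nodes: node ids returned by retrieval, best first
    """
    prizes = np.zeros(num_nodes)
    # Higher-ranked retrieved nodes get larger prizes.
    for rank, node in enumerate(top_nodes):
        prizes[node] = float(len(top_nodes) - rank)
    costs = np.full(len(edge_list), edge_cost)  # uniform cost trades size for prize
    vertices, edges = pcst_fast(
        np.asarray(edge_list, dtype=np.int64),
        prizes,
        costs,
        -1,    # no fixed root (unrooted PCST)
        1,     # number of connected components to return
        "gw",  # Goemans-Williamson pruning
        0,     # verbosity
    )
    return vertices, edges  # node ids and edge ids of the selected subtree

# Raising edge_cost shrinks the subgraph; lowering it keeps more context.
```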
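Step 4 pairs a graph encoder with a frozen LLM. The sketch below, using PyTorch and torch_geometric, mean-pools a GAT over the retrieved subgraph, projects the pooled vector into the LLM's embedding space, and prepends it as a soft-prompt token to the embeddings of the query plus the textualized graph. Layer sizes and the single-token soft prompt are illustrative assumptions, and the frozen LLM is represented abstractly.

```python
# Sketch of step 4: GAT encoding of the retrieved subgraph + soft prompting.
# Assumes PyTorch and torch_geometric; layer sizes and the single-token soft
# prompt are illustrative choices, not the paper's exact configuration.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class GraphSoftPrompt(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, llm_dim: int):
        super().__init__()
        self.gat1 = GATConv(in_dim, hid_dim)
        self.gat2 = GATConv(hid_dim, hid_dim)
        # Projects the pooled graph representation into the LLM's embedding space.
        self.proj = nn.Linear(hid_dim, llm_dim)

    def forward(self, x, edge_index):
        h = self.gat1(x, edge_index).relu()
        h = self.gat2(h, edge_index)
        g = h.mean(dim=0)                  # mean-pool nodes -> graph vector
        return self.proj(g).unsqueeze(0)   # one soft-prompt token, shape (1, llm_dim)

# Usage with a frozen LLM (abstract): prepend the graph token to the token
# embeddings of [query ; textualized graph], then generate. Only the GAT and
# projection receive gradients; the LLM's weights stay frozen.
# inputs_embeds = torch.cat([graph_token, text_token_embeds], dim=0)
```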

Experimental Evaluation

The empirical evaluation shows that G-Retriever outperforms competing methods across all three GraphQA datasets.

Main Results

The method outperforms baseline and state-of-the-art models in various configurations (Inference-Only, Frozen LLM + Prompt Tuning, and Tuned LLM):

  • On the ExplaGraphs, SceneGraphs, and WebQSP datasets, G-Retriever improves over baseline models (e.g., a 47.99% improvement on ExplaGraphs in the prompt-tuning setting).
  • The integration of RAG and graph-specific optimizations significantly improved efficiency, reducing the average number of tokens and nodes processed by up to 99% on larger graphs such as those in WebQSP.
  • G-Retriever substantially reduced hallucinations, producing more factually consistent answers by grounding generation in retrieved graph content.

Ablation Study

An ablation study isolates the contribution of each component: removing the graph encoder or the textualized graph leads to considerable performance drops.

Implications and Future Work

Practical Implications

G-Retriever's ability to handle large, complex textual graphs makes it applicable to diverse fields involving graph-structured data, including knowledge management, e-commerce, and scene analysis.

Theoretical Implications

Bringing RAG to the graph domain shows that retrieval-based approaches are effective beyond conventional language tasks, and it offers a strategy for mitigating the hallucination issues that affect both text- and graph-based models.

Future Developments

Future work could explore dynamic, trainable retrieval mechanisms within the RAG framework, further optimizing the retrieval and generation process. More flexible and adaptive retrieval strategies could extend the framework to a broader range of graph-related tasks.

Conclusion

This work marks a step forward in graph-based QA by combining graph neural networks with LLMs, demonstrating the feasibility and advantages of the RAG approach on large, complex textual graphs. G-Retriever's design and empirical results highlight its potential for advancing both human-computer interaction and automated understanding in graph-related AI applications.

Authors (8)
  1. Xiaoxin He
  2. Yijun Tian
  3. Yifei Sun
  4. Nitesh V. Chawla
  5. Thomas Laurent
  6. Yann LeCun
  7. Xavier Bresson
  8. Bryan Hooi