
Learning on Graphs with Large Language Models (LLMs): A Deep Dive into Model Robustness (2407.12068v2)

Published 16 Jul 2024 in cs.LG and cs.AI

Abstract: LLMs have demonstrated remarkable performance across various natural language processing tasks. Recently, several LLM-based pipelines have been developed to enhance learning on graphs with text attributes, showing promising performance. However, graphs are well known to be susceptible to adversarial attacks, and it remains unclear whether LLMs exhibit robustness in learning on graphs. To address this gap, our work explores the potential of LLMs in the context of adversarial attacks on graphs. Specifically, we investigate robustness against graph structural and textual perturbations along two dimensions: LLMs-as-Enhancers and LLMs-as-Predictors. Through extensive experiments, we find that, compared to shallow models, both LLMs-as-Enhancers and LLMs-as-Predictors offer superior robustness against structural and textual attacks. Based on these findings, we conduct additional analyses to investigate the underlying causes. Furthermore, we have made our benchmark library openly available to facilitate quick and fair evaluations and to encourage ongoing innovative research in this field.
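
To make the two perturbation types concrete, here is a minimal Python sketch on a toy text-attributed graph. The helper names (`perturb_structure`, `perturb_text`) and the random edge-flip and character-deletion strategies are illustrative assumptions, not the paper's benchmark API or the actual attack methods (e.g., gradient-based structural attacks) it evaluates.

```python
import random

def perturb_structure(edges, num_nodes, ptb_rate=0.05, seed=0):
    """Randomly flip a fraction of node pairs: existing edges are removed,
    non-edges are added. A crude stand-in for structural attacks."""
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    n_flips = max(1, int(ptb_rate * len(edge_set)))
    for _ in range(n_flips):
        u, v = rng.sample(range(num_nodes), 2)
        pair = tuple(sorted((u, v)))
        if pair in edge_set:
            edge_set.remove(pair)   # delete an existing edge
        else:
            edge_set.add(pair)      # insert a spurious edge
    return sorted(edge_set)

def perturb_text(texts, ptb_rate=0.1, seed=0):
    """Randomly delete characters from each node's text attribute,
    a crude stand-in for word-level textual attacks."""
    rng = random.Random(seed)
    return ["".join(c for c in t if rng.random() > ptb_rate) for t in texts]

# Toy text-attributed graph: 4 nodes with text features and 3 edges.
texts = ["graph neural networks", "language models",
         "adversarial attacks", "robustness"]
edges = [(0, 1), (1, 2), (2, 3)]
print(perturb_structure(edges, num_nodes=4, ptb_rate=0.5))
print(perturb_text(texts, ptb_rate=0.2))
```

A robustness evaluation in the spirit of the paper would train a model (a shallow GNN baseline, an LLM-enhanced GNN, or an LLM predictor) on the clean graph and measure how its accuracy degrades as `ptb_rate` increases on either the structure or the text.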

Authors (7)
  1. Kai Guo (38 papers)
  2. Zewen Liu (8 papers)
  3. Zhikai Chen (20 papers)
  4. Hongzhi Wen (14 papers)
  5. Wei Jin (84 papers)
  6. Jiliang Tang (204 papers)
  7. Yi Chang (150 papers)