Vul-LMGNNs: Fusing language models and online-distilled graph neural networks for code vulnerability detection (2404.14719v2)
Abstract: Code language models (codeLMs) and Graph Neural Networks (GNNs) are widely used in code vulnerability detection. However, GNNs typically aggregate information only from adjacent nodes, which limits how far structural information propagates across layers. While codeLMs can supplement GNNs with semantic information, existing integration methods leave their collaborative potential underexplored. To address these challenges, we propose Vul-LMGNNs, which integrates pre-trained codeLMs with GNNs to enable cross-layer propagation of semantic and structural information. Vul-LMGNNs leverages Code Property Graphs (CPGs) to capture syntax, control flow, and data dependencies, using gated GNNs to extract structural features. An online knowledge distillation (KD) mechanism lets a student GNN acquire structural information from a trained counterpart through alternating training. In addition, an "implicit-explicit" joint training framework uses codeLMs to initialize node embeddings and propagate code semantics; in the explicit phase, it fuses the two branches late via linear interpolation. Evaluations on real-world vulnerability datasets show that Vul-LMGNNs outperforms 17 state-of-the-art approaches. Source code is available at: https://github.com/Vul-LMGNN/vul-LMGNN.
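To make the two mechanisms named in the abstract concrete, here is a minimal sketch of (a) late fusion via linear interpolation of the codeLM and GNN logits and (b) a generic soft-label distillation loss for the student GNN. This is an illustration under assumptions, not the paper's implementation: the names (`late_fusion`, `online_kd_loss`, `alpha`, `temperature`) are hypothetical, and the KD term shown is the standard Hinton-style KL formulation rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def late_fusion(lm_logits: torch.Tensor,
                gnn_logits: torch.Tensor,
                alpha: float = 0.5) -> torch.Tensor:
    """Explicit late fusion: linear interpolation of the two branches' logits.

    alpha=1.0 uses only the codeLM branch; alpha=0.0 only the GNN branch.
    (Hypothetical sketch; the interpolation weight is assumed scalar.)
    """
    return alpha * lm_logits + (1.0 - alpha) * gnn_logits

def online_kd_loss(student_logits: torch.Tensor,
                   teacher_logits: torch.Tensor,
                   temperature: float = 2.0) -> torch.Tensor:
    """Generic soft-label KD loss: KL between temperature-softened distributions.

    In online KD, the 'teacher' is the counterpart GNN from the alternating
    training scheme rather than a fixed pre-trained model.
    """
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2, the usual correction for temperature-softened gradients.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

# Toy usage: a batch of 4 functions, binary vulnerable/benign classification.
lm_logits = torch.randn(4, 2)       # stand-in for the codeLM classification head
gnn_logits = torch.randn(4, 2)      # stand-in for the student gated-GNN head
teacher_logits = torch.randn(4, 2)  # stand-in for the counterpart (teacher) GNN

kd = online_kd_loss(gnn_logits, teacher_logits)
probs = F.softmax(late_fusion(lm_logits, gnn_logits, alpha=0.6), dim=-1)
print(kd.item(), probs)
```

In this sketch the fusion weight trades off semantic (codeLM) against structural (GNN) evidence at prediction time, while the KD term transfers structural knowledge between the two GNNs during training; the actual weighting and alternating schedule are described in the paper and repository linked above.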