DEDGAT: Dual Embedding of Directed Graph Attention Networks for Detecting Financial Risk (2303.03933v1)

Published 6 Mar 2023 in cs.LG, cs.AI, and cs.DC

Abstract: Graph representation plays an important role in financial risk control, where the relationships among users can be constructed as a graph. In practical scenarios, the relationships between nodes in risk-control tasks are bidirectional, e.g., merchants have both revenue and expense behaviors. Graph neural networks designed for undirected graphs usually aggregate discriminative node or edge representations with an attention strategy, but they cannot fully exploit out-degree information when applied to tasks built on directed graphs, which leads to a directional bias. To tackle this problem, we propose a Directed Graph ATtention network, called DGAT, which explicitly takes out-degree into account in the attention calculation. Beyond this directional requirement, the same node may need different representations for its incoming and outgoing edges, so we further propose a dual-embedding version of DGAT, referred to as DEDGAT. Specifically, DEDGAT assigns an in-degree and an out-degree representation to each node and uses these two embeddings to calculate the attention weights of in-degree and out-degree neighbors, respectively. Experiments on benchmark datasets show that DGAT and DEDGAT achieve better classification performance than undirected GAT. The visualization results also demonstrate that our methods make full use of both in-degree and out-degree information.
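
The abstract's description of dual in-degree/out-degree embeddings can be illustrated with the minimal sketch below. It is a plain-PyTorch assumption of what a DEDGAT-style layer might look like, not the authors' released implementation: the class name `DualDirectedAttentionLayer`, the GAT-style scoring function, and the simple sum used to combine the two directional messages are all assumptions made for illustration.

```python
# Hedged sketch of a dual-embedding directed attention layer in the spirit of DEDGAT.
# Activation choices, normalization, and how the two views are combined are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualDirectedAttentionLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # Separate projections for the in-degree and out-degree views of each node.
        self.w_in = nn.Linear(in_dim, out_dim, bias=False)
        self.w_out = nn.Linear(in_dim, out_dim, bias=False)
        # Separate attention vectors for incoming and outgoing edges (GAT-style scoring).
        self.a_in = nn.Parameter(torch.randn(2 * out_dim) * 0.1)
        self.a_out = nn.Parameter(torch.randn(2 * out_dim) * 0.1)

    def forward(self, x, adj):
        # x:   (N, in_dim) node features
        # adj: (N, N) adjacency matrix, adj[i, j] = 1 if a directed edge i -> j exists
        h_in = self.w_in(x)    # in-degree embedding of each node
        h_out = self.w_out(x)  # out-degree embedding of each node

        # Attention over incoming edges: the receiver uses its in-embedding,
        # each sender contributes its out-embedding.
        e_in = self._edge_scores(h_in, h_out, self.a_in)
        alpha_in = self._masked_softmax(e_in, adj.t())   # normalize over in-neighbors
        msg_in = alpha_in @ h_out

        # Attention over outgoing edges: the sender uses its out-embedding,
        # each receiver contributes its in-embedding.
        e_out = self._edge_scores(h_out, h_in, self.a_out)
        alpha_out = self._masked_softmax(e_out, adj)      # normalize over out-neighbors
        msg_out = alpha_out @ h_in

        # Combine the two directional messages (a simple sum; an assumption).
        return F.elu(msg_in + msg_out)

    def _edge_scores(self, h_center, h_neighbor, a):
        # score[i, j] = LeakyReLU(a^T [h_center_i || h_neighbor_j])
        d = h_center.shape[1]
        left = h_center @ a[:d]
        right = h_neighbor @ a[d:]
        return F.leaky_relu(left.unsqueeze(1) + right.unsqueeze(0), 0.2)

    @staticmethod
    def _masked_softmax(scores, mask):
        scores = scores.masked_fill(mask == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=1)
        # Nodes with no neighbors in a given direction yield NaN rows; zero them out.
        return torch.nan_to_num(alpha)


if __name__ == "__main__":
    x = torch.randn(4, 8)                              # 4 nodes, 8 input features
    adj = torch.tensor([[0, 1, 0, 0],                  # directed edges: 0->1, 1->2, 2->0, 3->1
                        [0, 0, 1, 0],
                        [1, 0, 0, 0],
                        [0, 1, 0, 0]], dtype=torch.float)
    layer = DualDirectedAttentionLayer(8, 16)
    print(layer(x, adj).shape)                         # torch.Size([4, 16])
```

The key point the sketch tries to capture is that each node carries two embeddings, and attention weights over incoming and outgoing edges are computed from different embedding pairs, so out-degree information is not discarded as it would be in an undirected GAT.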

Authors (9)
  1. Jiafu Wu (11 papers)
  2. Mufeng Yao (5 papers)
  3. Dong Wu (62 papers)
  4. Mingmin Chi (24 papers)
  5. Baokun Wang (9 papers)
  6. Ruofan Wu (33 papers)
  7. Xin Fu (49 papers)
  8. Changhua Meng (27 papers)
  9. Weiqiang Wang (171 papers)
Citations (2)
