Socialformer: Social Network Inspired Long Document Modeling for Document Ranking (2202.10870v1)

Published 22 Feb 2022 in cs.IR

Abstract: Utilizing pre-trained language models has achieved great success for neural document ranking. However, limited by computational and memory requirements, long document modeling remains a critical issue. Recent works propose to modify the full attention matrix in the Transformer by designing sparse attention patterns, but most of them focus only on local connections between terms within a fixed-size window. How to build suitable remote connections between terms to better model document representations remains underexplored. In this paper, we propose Socialformer, which introduces the characteristics of social networks into the design of sparse attention patterns for long document modeling in document ranking. Specifically, we consider several attention patterns to construct a graph that resembles a social network. Endowed with this characteristic of social networks, most pairs of nodes in such a graph can reach each other via a short path while the graph remains sparse. To facilitate efficient calculation, we segment the graph into multiple subgraphs that simulate friend circles in social scenarios. Experimental results confirm the effectiveness of our model for long document modeling.
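The abstract's central idea, a sparse attention graph with small-world structure (most node pairs reachable via a short path despite sparsity) that is segmented into "friend circles" for efficient computation, can be illustrated with a minimal sketch. The sketch below is not the paper's method: the function names and parameters (`small_world_attention_mask`, `friend_circle_partition`, `window`, `n_remote`, `n_circles`) and the random choice of remote edges are illustrative assumptions; Socialformer's actual attention patterns and subgraph construction are defined in the paper itself.

```python
import numpy as np

def small_world_attention_mask(seq_len, window=4, n_remote=2, seed=0):
    """Illustrative sketch: a sparse attention mask combining a local
    sliding window with a few random remote edges per token, giving the
    small-world property (short paths between most pairs, low density).

    This is an assumption-laden toy, not Socialformer's actual patterns.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        # Local window connections (the "fixed-size window" attention).
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True
        # A handful of sparse remote connections, added symmetrically.
        remote = rng.choice(seq_len, size=n_remote, replace=False)
        mask[i, remote] = mask[remote, i] = True
    return mask

def friend_circle_partition(mask, n_circles):
    """Segment token indices into contiguous subgraphs ("friend circles")
    so attention can be computed block by block."""
    seq_len = mask.shape[0]
    bounds = np.linspace(0, seq_len, n_circles + 1, dtype=int)
    return [np.arange(bounds[k], bounds[k + 1]) for k in range(n_circles)]

mask = small_world_attention_mask(seq_len=64)
circles = friend_circle_partition(mask, n_circles=4)
print(mask.sum() / mask.size)  # fraction of attended pairs, i.e. density
```

Printing the mask density shows how few token pairs attend to each other relative to full attention, which is the sparsity the paper trades against short graph paths.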

Authors (4)
  1. Yujia Zhou (34 papers)
  2. Zhicheng Dou (113 papers)
  3. Huaying Yuan (9 papers)
  4. Zhengyi Ma (6 papers)
Citations (3)
