Document-aware Positional Encoding and Linguistic-guided Encoding for Abstractive Multi-document Summarization (2209.05929v1)

Published 13 Sep 2022 in cs.CL

Abstract: One key challenge in multi-document summarization (MDS) is capturing the relations among input documents, which distinguishes MDS from single-document summarization (SDS). Few existing MDS works address this issue. One effective approach is to encode document positional information to help models capture cross-document relations. However, existing MDS models, such as Transformer-based models, consider only token-level positional information. Moreover, these models fail to capture the linguistic structure of sentences, which inevitably causes confusion in the generated summaries. Therefore, in this paper, we propose document-aware positional encoding and linguistic-guided encoding that can be fused with the Transformer architecture for MDS. For document-aware positional encoding, we introduce a general protocol to guide the selection of document encoding functions. For linguistic-guided encoding, we propose embedding syntactic dependency relations into a dependency relation mask with a simple but effective non-linear encoding learner for feature learning. Extensive experiments show that the proposed model generates high-quality summaries.
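
The two components described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical reading of the ideas, not the authors' implementation: it adds a document-index positional signal alongside the usual token-level one, and maps dependency-relation types between token pairs to an additive attention bias through a small non-linear learner. All names here (DocumentAwarePositionalEncoding, LinguisticGuidedBias, and the sinusoidal choice for the document encoding function) are assumptions; the paper's actual encoding functions and fusion details are not specified in the abstract.

```python
# Hedged sketch of document-aware positional encoding and a
# linguistic-guided attention bias. Module and parameter names are
# hypothetical stand-ins for the components named in the abstract.
import torch
import torch.nn as nn


def sinusoidal_encoding(positions: torch.Tensor, dim: int) -> torch.Tensor:
    """Sinusoidal encoding (concatenated sin/cos variant) for integer positions."""
    inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim))
    angles = positions.float().unsqueeze(-1) * inv_freq  # (..., dim/2)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)


class DocumentAwarePositionalEncoding(nn.Module):
    """Adds a document-level positional signal on top of the token-level one,
    so the encoder can tell which source document each token came from --
    one plausible instantiation of document-aware positional encoding."""

    def __init__(self, d_model: int):
        super().__init__()
        self.d_model = d_model

    def forward(self, token_emb, token_pos, doc_ids):
        # token_emb: (batch, seq, d_model); token_pos, doc_ids: (batch, seq)
        return (token_emb
                + sinusoidal_encoding(token_pos, self.d_model)
                + sinusoidal_encoding(doc_ids, self.d_model))


class LinguisticGuidedBias(nn.Module):
    """Embeds syntactic dependency-relation types between token pairs and
    passes them through a small non-linear learner to produce an additive
    attention bias (an assumed stand-in for the paper's dependency
    relation mask plus non-linear encoding learner)."""

    def __init__(self, num_relations: int, hidden: int = 32):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, hidden)  # id 0 = no relation
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, rel_ids):
        # rel_ids: (batch, seq, seq) integer relation types between token pairs
        return self.mlp(self.rel_emb(rel_ids)).squeeze(-1)  # (batch, seq, seq)


if __name__ == "__main__":
    batch, seq, d_model = 2, 6, 64
    emb = torch.randn(batch, seq, d_model)
    pos = torch.arange(seq).expand(batch, seq)
    docs = torch.tensor([[0, 0, 0, 1, 1, 1]] * batch)  # 3 tokens per document
    rels = torch.randint(0, 8, (batch, seq, seq))

    x = DocumentAwarePositionalEncoding(d_model)(emb, pos, docs)
    bias = LinguisticGuidedBias(num_relations=8)(rels)
    print(x.shape, bias.shape)  # torch.Size([2, 6, 64]) torch.Size([2, 6, 6])
```

In a full model, the output of LinguisticGuidedBias would typically be added to the self-attention logits before the softmax, which is one common way to fuse such structural signals with a Transformer.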

Authors (6)
  1. Congbo Ma (23 papers)
  2. Wei Emma Zhang (46 papers)
  3. Pitawelayalage Dasun Dileepa Pitawela (2 papers)
  4. Yutong Qu (2 papers)
  5. Haojie Zhuang (3 papers)
  6. Hu Wang (79 papers)
Citations (1)