Tree-structured Attention with Hierarchical Accumulation (2002.08046v1)

Published 19 Feb 2020 in cs.LG and cs.CL

Abstract: Incorporating hierarchical structures like constituency trees has been shown to be effective for various NLP tasks. However, it is evident that state-of-the-art (SOTA) sequence-based models like the Transformer struggle to encode such structures inherently. On the other hand, dedicated models like the Tree-LSTM, while explicitly modeling hierarchical structures, do not perform as efficiently as the Transformer. In this paper, we attempt to bridge this gap with "Hierarchical Accumulation" to encode parse tree structures into self-attention at constant time complexity. Our approach outperforms SOTA methods in four IWSLT translation tasks and the WMT'14 English-German translation task. It also yields improvements over Transformer and Tree-LSTM on three text classification tasks. We further demonstrate that using hierarchical priors can compensate for data shortage, and that our model prefers phrase-level attentions over token-level attentions.
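As a rough illustration of the idea (not the paper's exact algorithm), the sketch below shows one way phrase-level units from a constituency tree can be folded into self-attention: each internal node is summarized by pooling the leaf tokens it dominates, and queries then attend over both tokens and phrases. The function name, mean-pooling choice, and span representation are assumptions for this example only.

```python
import torch
import torch.nn.functional as F

def phrase_level_attention(tokens, spans, queries):
    """Illustrative sketch: attend over token- and phrase-level units.

    tokens:  (n, d) leaf/token embeddings
    spans:   list of (start, end) pairs, one per internal tree node,
             covering the leaves each constituent dominates (end exclusive)
    queries: (m, d) query vectors

    Phrase vectors here are mean-pooled leaf embeddings -- a simplification
    of the paper's hierarchical accumulation, used only to show the shape
    of the computation.
    """
    # Summarize each constituent by averaging its leaf descendants.
    phrases = torch.stack([tokens[s:e].mean(dim=0) for s, e in spans])  # (p, d)

    # Keys/values cover both token-level and phrase-level units.
    keys = torch.cat([tokens, phrases], dim=0)                          # (n+p, d)

    d = tokens.size(-1)
    attn = F.softmax(queries @ keys.T / d ** 0.5, dim=-1)               # (m, n+p)
    return attn @ keys                                                  # (m, d)

# Toy usage: 5 tokens and a small tree with 3 internal nodes (e.g. NP, VP, S).
tokens = torch.randn(5, 16)
spans = [(0, 2), (2, 5), (0, 5)]
out = phrase_level_attention(tokens, spans, queries=tokens)
print(out.shape)  # torch.Size([5, 16])
```

In this toy setup, attention weights spread over both individual tokens and whole constituents, which is the behavior the abstract alludes to when it notes the model prefers phrase-level over token-level attention.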

Authors (4)
  1. Xuan-Phi Nguyen (22 papers)
  2. Shafiq Joty (187 papers)
  3. Steven C. H. Hoi (94 papers)
  4. Richard Socher (115 papers)
Citations (75)