Syntax-BERT: Improving Pre-trained Transformers with Syntax Trees (2103.04350v1)

Published 7 Mar 2021 in cs.CL

Abstract: Pre-trained language models such as BERT achieve superior performance on various NLP tasks without explicitly considering syntactic information. Meanwhile, syntactic information has been shown to be crucial for the success of many NLP applications. However, how to incorporate syntax trees effectively and efficiently into pre-trained Transformers remains unsettled. In this paper, we address this problem by proposing a novel framework named Syntax-BERT. The framework works in a plug-and-play fashion and is applicable to any pre-trained checkpoint based on the Transformer architecture. Experiments on various natural language understanding datasets verify the effectiveness of syntax trees and show consistent improvements over multiple pre-trained models, including BERT, RoBERTa, and T5.
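
The abstract does not spell out the mechanism, but incorporating syntax trees into a Transformer is commonly realized by restricting self-attention with tree-derived masks. The sketch below illustrates that general mechanism only, not the paper's exact architecture; the helper names (`syntax_mask_from_tree`, `syntax_masked_attention`) and the hop-distance masking rule are illustrative assumptions.

```python
# Illustrative sketch (not the authors' released code): self-attention whose
# scores are masked so each token may only attend to tokens within a given
# distance in a dependency tree, the general mechanism that syntax-aware
# Transformer variants plug into a pre-trained attention layer.
import math
import torch
import torch.nn.functional as F


def syntax_mask_from_tree(heads, max_hops=1):
    """Build a boolean attention mask from dependency heads.

    heads[i] is the index of token i's parent (or i itself for the root).
    Tokens may attend to each other if their tree distance is <= max_hops.
    This helper and its signature are assumptions for illustration.
    """
    n = len(heads)
    # Undirected adjacency of the dependency tree.
    adj = torch.zeros(n, n, dtype=torch.bool)
    for i, h in enumerate(heads):
        if h != i:
            adj[i, h] = adj[h, i] = True
    # Reachability within max_hops via repeated boolean matrix products.
    reach = torch.eye(n, dtype=torch.bool)
    frontier = reach.clone()
    for _ in range(max_hops):
        frontier = (frontier.float() @ adj.float()) > 0
        reach |= frontier
    return reach  # (n, n), True where attention is allowed


def syntax_masked_attention(q, k, v, mask):
    """Scaled dot-product attention restricted by a syntax-derived mask."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v


# Toy usage: 5 tokens, a small dependency tree, a single attention head.
heads = [1, 1, 1, 2, 2]            # token 1 is the root
mask = syntax_mask_from_tree(heads, max_hops=1)
q = k = v = torch.randn(5, 16)
out = syntax_masked_attention(q, k, v, mask)
print(out.shape)                    # torch.Size([5, 16])
```

Because the mask only edits attention scores, such a module can in principle be dropped onto an existing pre-trained checkpoint, which is the plug-and-play property the abstract claims.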

Authors (7)
  1. Jiangang Bai (4 papers)
  2. Yujing Wang (53 papers)
  3. Yiren Chen (13 papers)
  4. Yaming Yang (39 papers)
  5. Jing Bai (46 papers)
  6. Jing Yu (99 papers)
  7. Yunhai Tong (69 papers)
Citations (97)
