
EncT5: A Framework for Fine-tuning T5 as Non-autoregressive Models (2110.08426v2)

Published 16 Oct 2021 in cs.CL

Abstract: Pre-trained encoder-decoder transformer architectures have become increasingly popular with the advent of T5 models. T5 has also become more favorable than architectures like BERT due to the amount of data it is pre-trained on, the increased scale of its model parameters, and its easy applicability to a diverse set of tasks owing to the generative nature of the model. While able to generalize to a wide variety of tasks, it is not clear that encoder-decoder architectures are the most efficient choice for fine-tuning on tasks that do not require auto-regressive decoding. In this work, we study fine-tuning pre-trained encoder-decoder models for tasks such as classification, multi-label classification, and structured prediction. We propose EncT5, a framework for these problems, and illustrate instantiations for these tasks. Our experimental results show that EncT5 has advantages over T5, such as efficiency and usability, and outperforms BERT when evaluated on publicly available pre-trained checkpoints.
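As a rough illustration of the non-autoregressive setup the abstract describes, the sketch below fine-tunes only the T5 encoder with a small classification head using the Hugging Face transformers library. The mean-pooling strategy, the linear head, and names like EncoderClassifier are illustrative assumptions, not the paper's exact EncT5 instantiation.

```python
# Minimal sketch (assumption, not the authors' implementation): classify text
# using only the pre-trained T5 encoder, skipping auto-regressive decoding.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, T5EncoderModel


class EncoderClassifier(nn.Module):
    def __init__(self, model_name: str = "t5-small", num_labels: int = 2):
        super().__init__()
        # Load only the encoder stack of a pre-trained T5 checkpoint.
        self.encoder = T5EncoderModel.from_pretrained(model_name)
        hidden = self.encoder.config.d_model
        # Simple linear classification head (illustrative choice).
        self.head = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Mean-pool encoder states over non-padding tokens (one plausible pooling choice).
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (out.last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        return self.head(pooled)


tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = EncoderClassifier("t5-small", num_labels=2)
batch = tokenizer(["a single example sentence"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([1, 2])
```

In this sketch, fine-tuning updates the encoder and the head jointly; because no decoder runs at inference time, a single forward pass produces the class logits directly.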

Authors (6)
  1. Frederick Liu (27 papers)
  2. Terry Huang (2 papers)
  3. Shihang Lyu (2 papers)
  4. Siamak Shakeri (29 papers)
  5. Hongkun Yu (17 papers)
  6. Jing Li (621 papers)
Citations (6)
