DACT-BERT: Differentiable Adaptive Computation Time for an Efficient BERT Inference (2109.11745v1)

Published 24 Sep 2021 in cs.CL and cs.AI

Abstract: Large-scale pre-trained language models have shown remarkable results in diverse NLP applications. Unfortunately, these performance gains have been accompanied by a significant increase in computation time and model size, stressing the need to develop new or complementary strategies to increase the efficiency of these models. In this paper, we propose DACT-BERT, a differentiable adaptive computation time strategy for BERT-like models. DACT-BERT adds an adaptive computational mechanism to BERT's regular processing pipeline, which controls the number of Transformer blocks that need to be executed at inference time. By doing this, the model learns to combine the most appropriate intermediate representations for the task at hand. Our experiments demonstrate that, compared to the baselines, our approach excels in a reduced computational regime and is competitive in less restrictive ones.
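To make the idea concrete, the sketch below illustrates one plausible way an adaptive halting mechanism over a stack of Transformer blocks can be wired up: each block contributes a weighted share of its intermediate prediction, and at inference the remaining blocks are skipped once almost no probability mass is left. This is a minimal illustration under assumed details, not the authors' exact formulation; the names `HaltingUnit`, `AdaptiveDepthEncoder`, and `halt_threshold` are hypothetical.

```python
# Illustrative sketch of differentiable adaptive computation over Transformer
# blocks (hypothetical names; not the paper's exact architecture or loss).
import torch
import torch.nn as nn


class HaltingUnit(nn.Module):
    """Maps a [CLS]-style token state to a halting score in (0, 1)."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, 1)

    def forward(self, cls_state: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.proj(cls_state)).squeeze(-1)  # (batch,)


class AdaptiveDepthEncoder(nn.Module):
    """Runs a stack of Transformer blocks, accumulating a halting-weighted
    combination of per-block predictions; at inference it stops early once the
    probability mass left for deeper blocks falls below `halt_threshold`."""

    def __init__(self, hidden_size=256, num_blocks=6, num_heads=4,
                 num_classes=2, halt_threshold=0.01):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(hidden_size, num_heads,
                                       dim_feedforward=4 * hidden_size,
                                       batch_first=True)
            for _ in range(num_blocks)
        )
        self.halting = nn.ModuleList(HaltingUnit(hidden_size)
                                     for _ in range(num_blocks))
        self.classifiers = nn.ModuleList(nn.Linear(hidden_size, num_classes)
                                         for _ in range(num_blocks))
        self.halt_threshold = halt_threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch = x.size(0)
        accumulated = x.new_zeros(batch, self.classifiers[0].out_features)
        remainder = x.new_ones(batch)  # probability mass not yet assigned
        for block, halt, clf in zip(self.blocks, self.halting, self.classifiers):
            x = block(x)
            cls_state = x[:, 0]              # first token as sentence summary
            h = halt(cls_state)              # halting score for this block
            # Weighted combination of this block's intermediate prediction.
            accumulated = accumulated + (remainder * h).unsqueeze(-1) * clf(cls_state)
            remainder = remainder * (1.0 - h)
            # At inference, skip the remaining blocks once (almost) no mass is left.
            if not self.training and remainder.max().item() < self.halt_threshold:
                break
        return accumulated


if __name__ == "__main__":
    model = AdaptiveDepthEncoder().eval()
    tokens = torch.randn(2, 16, 256)         # (batch, seq_len, hidden)
    with torch.no_grad():
        logits = model(tokens)
    print(logits.shape)                       # torch.Size([2, 2])
```

Because the halting scores are produced by differentiable units, the whole depth-selection behavior can be trained end-to-end with the task loss (optionally plus a computation penalty), which is the general principle the paper builds on.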

Authors (4)
  1. Cristóbal Eyzaguirre (14 papers)
  2. Vladimir Araujo (25 papers)
  3. Felipe del Río (3 papers)
  4. Álvaro Soto (7 papers)
Citations (7)