Learning to Win Lottery Tickets in BERT Transfer via Task-agnostic Mask Training (2204.11218v2)

Published 24 Apr 2022 in cs.CL

Abstract: Recent studies on the lottery ticket hypothesis (LTH) show that pre-trained language models (PLMs) like BERT contain matching subnetworks that have transfer learning performance similar to the original PLM. These subnetworks are found using magnitude-based pruning. In this paper, we find that the BERT subnetworks have even more potential than these studies have shown. Firstly, we discover that the success of magnitude pruning can be attributed to the preserved pre-training performance, which correlates with the downstream transferability. Inspired by this, we propose to directly optimize the subnetwork structure towards the pre-training objectives, which can better preserve the pre-training performance. Specifically, we train binary masks over model weights on the pre-training tasks, with the aim of preserving the universal transferability of the subnetwork, which is agnostic to any specific downstream tasks. We then fine-tune the subnetworks on the GLUE benchmark and the SQuAD dataset. The results show that, compared with magnitude pruning, mask training can effectively find BERT subnetworks with improved overall performance on downstream tasks. Moreover, our method is also more efficient in searching subnetworks and more advantageous when fine-tuning within a certain range of data scarcity. Our code is available at https://github.com/llyx97/TAMT.
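
The abstract describes training binary masks over frozen pre-trained weights against the pre-training objective. Below is a minimal, hypothetical sketch of one common way to implement such mask training (learnable per-weight scores binarized to a target sparsity, with a straight-through estimator); it is not the authors' exact implementation, which is available at https://github.com/llyx97/TAMT. The layer shapes, initialization, and the stand-in loss are illustrative assumptions.

```python
# Sketch of task-agnostic binary mask training over frozen pre-trained weights.
# Assumptions (not from the paper's code): scores initialized from weight
# magnitudes, top-k binarization, straight-through gradient to the scores.
import torch
import torch.nn as nn


class MaskedLinear(nn.Module):
    """Linear layer whose frozen pre-trained weights are gated by a learned binary mask."""

    def __init__(self, linear: nn.Linear, sparsity: float = 0.5):
        super().__init__()
        # Freeze the pre-trained weights; only the mask scores will be trained.
        self.weight = nn.Parameter(linear.weight.data.clone(), requires_grad=False)
        self.bias = nn.Parameter(linear.bias.data.clone(), requires_grad=False)
        # Learnable real-valued scores, one per weight.
        self.scores = nn.Parameter(self.weight.abs().clone())
        self.sparsity = sparsity

    def binary_mask(self) -> torch.Tensor:
        # Keep the top-(1 - sparsity) fraction of scores; zero out the rest.
        k = int(self.scores.numel() * (1.0 - self.sparsity))
        threshold = torch.topk(self.scores.flatten(), k).values.min()
        hard = (self.scores >= threshold).float()
        # Straight-through estimator: hard mask in the forward pass,
        # identity gradient w.r.t. the scores in the backward pass.
        return hard + self.scores - self.scores.detach()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.weight * self.binary_mask(), self.bias)


# Usage sketch: wrap a BERT linear layer and optimize only the mask scores
# against a pre-training loss (e.g., masked language modeling), so the
# subnetwork structure is learned independently of any downstream task.
if __name__ == "__main__":
    layer = MaskedLinear(nn.Linear(768, 768), sparsity=0.5)
    optimizer = torch.optim.Adam([layer.scores], lr=1e-2)
    x = torch.randn(4, 768)
    loss = layer(x).pow(2).mean()  # stand-in for the actual MLM loss
    loss.backward()
    optimizer.step()
```

After mask training, the learned binary mask defines the subnetwork, which is then fine-tuned on downstream tasks such as GLUE and SQuAD as described in the abstract.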

Authors (7)
  1. Yuanxin Liu (28 papers)
  2. Fandong Meng (174 papers)
  3. Zheng Lin (104 papers)
  4. Peng Fu (43 papers)
  5. Yanan Cao (34 papers)
  6. Weiping Wang (123 papers)
  7. Jie Zhou (687 papers)
Citations (11)