The Lottery Ticket Hypothesis for Pre-trained BERT Networks (2007.12223v2)

Published 23 Jul 2020 in cs.LG, cs.CL, cs.NE, and stat.ML

Abstract: In NLP, enormous pre-trained models like BERT have become the standard starting point for training on a range of downstream tasks, and similar trends are emerging in other areas of deep learning. In parallel, work on the lottery ticket hypothesis has shown that models for NLP and computer vision contain smaller matching subnetworks capable of training in isolation to full accuracy and transferring to other tasks. In this work, we combine these observations to assess whether such trainable, transferrable subnetworks exist in pre-trained BERT models. For a range of downstream tasks, we indeed find matching subnetworks at 40% to 90% sparsity. We find these subnetworks at (pre-trained) initialization, a deviation from prior NLP research where they emerge only after some amount of training. Subnetworks found on the masked language modeling task (the same task used to pre-train the model) transfer universally; those found on other tasks transfer in a limited fashion if at all. As large-scale pre-training becomes an increasingly central paradigm in deep learning, our results demonstrate that the main lottery ticket observations remain relevant in this context. Codes available at https://github.com/VITA-Group/BERT-Tickets.

An Examination of the Lottery Ticket Hypothesis in Pre-trained BERT Networks

The paper "The Lottery Ticket Hypothesis for Pre-trained BERT Networks" explores the applicability of the lottery ticket hypothesis (LTH) within the context of large-scale, pre-trained BERT models. The authors address whether it is feasible to identify sparse yet efficient subnetworks within pre-trained models that can be utilized for various downstream tasks in NLP.

Overview

In the field of NLP, BERT and other large pre-trained models have become fundamental components, primarily due to their efficacy across a range of downstream applications. Despite their immense parameter counts, these models significantly reduce the effort required for task-specific training by providing robust initializations. Concurrently, the LTH suggests that such large networks contain smaller subnetworks that, if correctly identified, can be trained in isolation to match the accuracy of the full model. This research investigates whether such subnetworks exist at the pre-trained BERT initialization and whether they can be effectively transferred across different tasks.
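To make the notion of "subnetwork" concrete: the hypothesis concerns the network obtained by elementwise-multiplying the pre-trained weights by a fixed binary mask and then fine-tuning only the surviving weights. The following is a minimal PyTorch-style sketch, not the authors' code; `model` and `mask` are hypothetical placeholders.

```python
# Minimal sketch (not the authors' implementation): a "subnetwork" is the
# network f(x; m * theta) obtained by zeroing pruned weights with a fixed
# binary mask m, while the surviving weights keep their pre-trained values.
import torch

@torch.no_grad()
def apply_mask(model: torch.nn.Module, mask: dict) -> None:
    """Zero out pruned weights in place; `mask` maps parameter names to
    binary tensors of the same shape (assumed placeholder structure)."""
    for name, param in model.named_parameters():
        if name in mask:
            param.mul_(mask[name])

def sparsity(mask: dict) -> float:
    """Fraction of weights removed by the mask (e.g. 0.7 means 70% sparsity)."""
    total = sum(m.numel() for m in mask.values())
    zeros = sum((m == 0).sum().item() for m in mask.values())
    return zeros / total
```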

Key Findings

The paper presents several notable findings:

  1. Existence and Identification of Winning Tickets: Using iterative magnitude pruning (IMP), the researchers identified winning tickets within pre-trained BERT at 40% to 90% sparsity, depending on the downstream task. Notably, these matching subnetworks are found at the pre-trained initialization itself, whereas prior NLP work found them only after some amount of training (a schematic of the IMP procedure appears after this list).
  2. Transferability Across Tasks: Subnetworks discovered through the masked language modeling (MLM) task, the same objective used to pre-train BERT, transferred universally: when applied to other downstream tasks and fine-tuned, they maintained matching accuracy. Subnetworks found on other downstream tasks transferred in a limited fashion, if at all.
  3. Role of Pre-trained Initialization: The authors provide evidence that, unlike in earlier NLP studies, matching subnetworks can be found directly at the pre-trained initialization without additional training steps, reinforcing the value of BERT's pre-trained weights as an effective starting point for pruning as well as fine-tuning.
  4. Performance Comparisons: Comparing IMP-derived subnetworks with those produced by standard post-training pruning yielded mixed results: standard pruning sometimes surpassed and sometimes underperformed IMP, particularly in small-data settings where overfitting is a concern.
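
The schematic below illustrates the IMP-with-rewinding procedure referenced in finding 1. It is a hedged PyTorch sketch rather than the authors' released code: `train_one_round` is an assumed placeholder for task fine-tuning, per-tensor (rather than global) magnitude pruning is used for brevity, and mask enforcement during training is omitted.

```python
# Hedged sketch of iterative magnitude pruning (IMP) with rewinding to the
# pre-trained BERT initialization; `train_one_round` is a hypothetical
# placeholder for fine-tuning on the target task.
import copy
import torch

def imp_find_subnetwork(model, train_one_round, rounds=10, prune_frac=0.1):
    """Return {parameter name: binary mask} identifying a sparse subnetwork."""
    # Remember the pre-trained weights so we can rewind after every round.
    pretrained_state = copy.deepcopy(model.state_dict())

    # Start unpruned; prune weight matrices only, not biases or LayerNorm scales.
    masks = {name: torch.ones_like(p)
             for name, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        train_one_round(model)  # fine-tune the current subnetwork on the task

        for name, p in model.named_parameters():
            if name not in masks:
                continue
            surviving = p.detach().abs()[masks[name].bool()]
            k = int(prune_frac * surviving.numel())
            if k == 0:
                continue
            # Prune the k lowest-magnitude weights that are still unpruned.
            threshold = surviving.kthvalue(k).values
            masks[name] *= (p.detach().abs() > threshold).float()

        # Rewind: restore the pre-trained weights, then re-apply the updated mask.
        model.load_state_dict(pretrained_state)
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])

    return masks
```

The transfer experiments in finding 2 reuse the same machinery: a mask found with this loop on the MLM objective is applied to the pre-trained weights, and the resulting subnetwork is then fine-tuned on a different downstream task.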

Implications and Speculations

This work underscores the potential of utilizing smaller, resource-efficient subnetworks within massive pre-trained models without sacrificing performance, making AI systems more accessible and cost-effective. The practical implications are substantial in terms of computational resources and energy efficiency, particularly in applications where deployment on edge devices or lower-end hardware is required.

Theoretically, these findings extend the LTH into the domain of large-scale, pre-trained models, suggesting that the initial training phase establishes a weight distribution conducive to identifying useful subnetworks right from initialization. This could influence how pre-training and pruning strategies evolve, potentially guiding new architectures and training methodologies.

Future research may explore methods of identifying these winning tickets more efficiently, assessing their transferability across diverse datasets or tasks beyond NLP, and leveraging finer pruning techniques to achieve even greater sparsity while maintaining task performance.

The findings open avenues for optimizing both neural architecture design and the training paradigm, potentially impacting the broader landscape of AI model development heavily reliant on pre-trained frameworks.

Authors (7)
  1. Tianlong Chen
  2. Jonathan Frankle
  3. Shiyu Chang
  4. Sijia Liu
  5. Yang Zhang
  6. Zhangyang Wang
  7. Michael Carbin
Citations (356)