An Empirical Study of Pre-Trained Model Reuse in the Hugging Face Deep Learning Model Registry (2303.02552v1)

Published 5 Mar 2023 in cs.SE, cs.AI, and cs.LG

Abstract: Deep Neural Networks (DNNs) are being adopted as components in software systems. Creating and specializing DNNs from scratch has grown increasingly difficult as state-of-the-art architectures grow more complex. Following the path of traditional software engineering, machine learning engineers have begun to reuse large-scale pre-trained models (PTMs) and fine-tune these models for downstream tasks. Prior works have studied reuse practices for traditional software packages to guide software engineers towards better package maintenance and dependency management. We lack a similar foundation of knowledge to guide behaviors in pre-trained model ecosystems. In this work, we present the first empirical investigation of PTM reuse. We interviewed 12 practitioners from the most popular PTM ecosystem, Hugging Face, to learn the practices and challenges of PTM reuse. From this data, we model the decision-making process for PTM reuse. Based on the identified practices, we describe useful attributes for model reuse, including provenance, reproducibility, and portability. Three challenges for PTM reuse are missing attributes, discrepancies between claimed and actual performance, and model risks. We substantiate these identified challenges with systematic measurements in the Hugging Face ecosystem. Our work informs future directions on optimizing deep learning ecosystems by automatically measuring useful attributes and potential attacks, and envisions future research on infrastructure and standardization for model registries.

Citations (51)

Summary

  • The paper investigates PTM reuse by interviewing 12 practitioners and analyzing data from over 63,000 models to uncover prevalent reuse practices.
  • The paper identifies critical PTM attributes such as provenance, reproducibility, and portability that guide effective model selection.
  • The paper reveals challenges like security risks and performance discrepancies through quantitative STRIDE analysis, urging improved reuse strategies.

An Empirical Study of Pre-Trained Model Reuse in the Hugging Face Deep Learning Model Registry

In recent years, the adoption of Deep Neural Networks (DNNs) in software systems has surged, driven largely by advances in Pre-Trained Models (PTMs). The paper "An Empirical Study of Pre-Trained Model Reuse in the Hugging Face Deep Learning Model Registry" examines how PTMs are reused within the Hugging Face ecosystem, offering a comprehensive investigation of current practices, challenges, and directions for future improvement.

Overview

The paper presents the first empirical analysis focused on PTM reuse, drawing on interviews with 12 practitioners from Hugging Face, the most popular PTM platform, to explore reuse practices and challenges. The paper also models the decision-making process behind PTM reuse and identifies attributes that aid reuse, namely provenance, reproducibility, and portability. The challenges identified are missing attributes, discrepancies between claimed and actual performance, and model risks such as reliability and security concerns.

Further analysis includes quantitative measurements using the STRIDE threat-modeling methodology, which characterizes threats and risks in the reuse process. The findings are supported by a dataset covering 63,182 PTM packages collected from the Hugging Face model registry.
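The paper does not publish its collection tooling, but an ecosystem-scale measurement of this kind starts from a metadata snapshot of the registry. The sketch below is a hypothetical illustration: the aggregation helper operates on plain metadata dicts (the `has_model_card` key is an assumed field, not a real Hugging Face attribute), and the commented lines show how such records could be harvested with the real `huggingface_hub` client.

```python
# Sketch of aggregating reuse signals over a registry metadata snapshot,
# in the spirit of the paper's 63,182-package measurement.

def snapshot_stats(models):
    """Summarize simple reuse signals from a list of metadata records."""
    return {
        "total": len(models),
        "ever_downloaded": sum(1 for m in models if m.get("downloads", 0) > 0),
        "documented": sum(1 for m in models if m.get("has_model_card", False)),
    }

# Harvesting real records (network required) could look like:
# from huggingface_hub import HfApi
# records = [{"downloads": m.downloads} for m in HfApi().list_models(limit=100)]

if __name__ == "__main__":
    sample = [
        {"downloads": 120, "has_model_card": True},
        {"downloads": 0, "has_model_card": False},
    ]
    print(snapshot_stats(sample))
```

The point of separating the pure aggregation from the network call is that the measurement logic can be tested offline against recorded snapshots.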

Key Findings and Challenges

The paper provides insights into how engineers select PTMs, illustrating a workflow of reusability assessment, model selection, downstream evaluation, and deployment. The selection process weighs aspects such as architecture suitability and the ease of reuse offered by model registries compared with plain GitHub repositories. Common PTM reuse scenarios include transfer learning and quantization.
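The workflow above can be sketched as a two-stage filter over candidate models: a reusability gate followed by a popularity-based pick. The attribute names, gate criteria, and example models below are illustrative assumptions, not details from the paper.

```python
def assess_reusability(model):
    """Gate: a candidate must at least declare its task and license."""
    return bool(model.get("task")) and bool(model.get("license"))

def select_model(candidates, task):
    """Pick the most-downloaded viable candidate matching the downstream task."""
    viable = [m for m in candidates
              if assess_reusability(m) and m["task"] == task]
    return max(viable, key=lambda m: m.get("downloads", 0)) if viable else None

candidates = [
    {"name": "bert-base", "task": "fill-mask",
     "license": "apache-2.0", "downloads": 5_000_000},
    {"name": "tiny-lm", "task": "text-generation",
     "license": "mit", "downloads": 1_200},
    # Popular but fails the reusability gate: no license declared.
    {"name": "undocumented", "task": "fill-mask",
     "license": "", "downloads": 9_999_999},
]
print(select_model(candidates, "fill-mask")["name"])
```

Note that the gate runs before the popularity comparison, so a heavily downloaded but undocumented model never reaches the selection step, which mirrors the paper's observation that reuse decisions rest on documented attributes, not popularity alone.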

PTM attributes significantly influence reuse decisions, with popularity serving as a primary signal carried over from traditional software package selection. In addition, DL-specific attributes such as provenance, reproducibility, and portability are emphasized as enablers of PTM reuse. However, missing attributes, discrepancies between claimed and actual performance, and security implications pose substantial challenges in practice.
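The missing-attribute challenge can be made concrete with a small checker that flags model cards lacking the reuse-relevant information just described. The field names here are illustrative stand-ins for the attributes the paper discusses, not a schema Hugging Face actually enforces.

```python
# Illustrative reuse-relevant fields, loosely corresponding to the
# provenance / reproducibility / portability attributes from the paper.
REUSE_FIELDS = ("provenance", "training_data", "eval_results", "license")

def missing_attributes(card):
    """Return the reuse-relevant fields a model card omits or leaves empty."""
    return [f for f in REUSE_FIELDS if not card.get(f)]

card = {
    "provenance": "fine-tuned from bert-base-uncased",
    "license": "apache-2.0",
}
print(missing_attributes(card))
```

A registry-wide run of such a checker over harvested model cards is one way the paper's "missing attributes" finding could be quantified.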

The quantitative analysis using the STRIDE methodology highlights several risks affecting reuse practices, notably spoofing identity, data tampering, repudiation, and elevation of privilege. It reveals gaps in mitigation strategies within Hugging Face, raising concerns that malicious models could compromise security and reliability.
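STRIDE is a standard threat taxonomy (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege). The sketch below maps illustrative registry risks onto that taxonomy and adds a simple heuristic for one well-known elevation-of-privilege vector: pickle-serialized weight files, which can execute arbitrary code when loaded. The example descriptions and the file-suffix heuristic are my own illustrations, not the paper's measurements.

```python
# Illustrative mapping of model-registry risks onto the STRIDE taxonomy.
STRIDE_RISKS = {
    "Spoofing": "impersonating a reputable model publisher",
    "Tampering": "altering published weights or model cards",
    "Repudiation": "no verifiable record of who changed a model",
    "Information disclosure": "leaking training data through the model",
    "Denial of service": "oversized or malformed artifacts",
    "Elevation of privilege": "code execution via pickle-based weight files",
}

# Common pickle-backed serialization suffixes (heuristic, not exhaustive).
PICKLE_SUFFIXES = (".bin", ".pkl", ".pt")

def risky_files(repo_files):
    """Flag files whose format can execute code when deserialized."""
    return [f for f in repo_files if f.endswith(PICKLE_SUFFIXES)]

files = ["config.json", "pytorch_model.bin", "model.safetensors"]
print(risky_files(files))
```

Formats such as safetensors avoid the unpickling risk by storing raw tensors only, which is one mitigation direction consistent with the paper's call for better registry safeguards.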

Implications for Future Research

The research outcomes guide future directions for enhancing DL model registries. The paper underscores the need for automated measurement and standardization of PTM attributes, along with improved infrastructure and tooling for PTM auditing and adversarial-model detection. Addressing the identified risks and challenges promises to streamline reuse processes and safeguard the integrity of PTM ecosystems.

Summary

This paper serves as a vital resource for researchers and practitioners aiming to understand the dynamics and intricacies of PTM reuse in DL ecosystems. Through a methodical investigation, it sets a foundational understanding of prevailing practices and challenges, contributing significantly to the discourse on software engineering and deep learning integration. Further exploration based on this paper presents opportunities to bolster the reliability and efficiency of PTM implementation across diverse applications and platforms.

In conclusion, the paper offers a rich set of insights into the need for comprehensive PTM reuse strategies, paving the way for extended research and better practical implementations in deep learning ecosystems. The empirical data and observations it shares provide substantial groundwork for evolving methodologies that strengthen machine learning practice in industry.
