SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore

(arXiv:2308.04430)
Published Aug 8, 2023 in cs.CL, cs.AI, and cs.LG

Abstract

The legality of training language models (LMs) on copyrighted or otherwise restricted data is under intense debate. However, as we show, model performance significantly degrades if trained only on low-risk text (e.g., out-of-copyright books or government documents), due to its limited size and domain coverage. We present SILO, a new language model that manages this risk-performance tradeoff during inference. SILO is built by (1) training a parametric LM on Open License Corpus (OLC), a new corpus we curate with 228B tokens of public domain and permissively licensed text and (2) augmenting it with a more general and easily modifiable nonparametric datastore (e.g., containing copyrighted books or news) that is only queried during inference. The datastore allows use of high-risk data without training on it, supports sentence-level data attribution, and enables data producers to opt out from the model by removing content from the store. These capabilities can foster compliance with data-use regulations such as the fair use doctrine in the United States and the GDPR in the European Union. Our experiments show that the parametric LM struggles on domains not covered by OLC. However, access to the datastore greatly improves out of domain performance, closing 90% of the performance gap with an LM trained on the Pile, a more diverse corpus with mostly high-risk text. We also analyze which nonparametric approach works best, where the remaining errors lie, and how performance scales with datastore size. Our results suggest that it is possible to build high quality language models while mitigating their legal risk.

Figure: Comparison of LM techniques across five domains using Pythia and SILO, reported as perplexity on validation data.

Overview

  • The paper introduces SILO, a language model design that addresses legal risk in LLMs by separating data into low-risk and high-risk categories and handling each differently.

  • SILO's architecture pairs a parametric language model trained only on low-risk data with a nonparametric datastore holding high-risk data that is queried only at inference, mitigating copyright and privacy concerns.

  • Empirical evaluation shows that SILO, with its nonparametric datastore, nearly matches the performance of a model trained on a broader, higher-risk dataset, highlighting the datastore's importance.

  • Suggested future directions include optimizing the datastore, extending SILO's approach to other data types, and exploring novel data licensing models for better legal and ethical alignment.

Isolating Legal Risks in Training LLMs with a Nonparametric Datastore

Introduction

The development and deployment of LLMs are increasingly scrutinized for potential legal issues, particularly around the use of copyrighted content: prevailing training pipelines ingest large amounts of restricted text, exposing model developers to legal and ethical risk. Recognizing the need for compliance, this paper presents SILO, a new approach developed by Sewon Min, Suchin Gururangan, Eric Wallace, Hannaneh Hajishirzi, Noah A. Smith, and Luke Zettlemoyer. SILO addresses this risk-performance tradeoff by partitioning training data into high-risk and low-risk categories and by moving the high-risk portion out of training entirely, into a component used only at inference.

Methodology

SILO's architecture comprises two core elements: a parametric language model and a nonparametric datastore. The parametric component is trained exclusively on low-risk data, the Open License Corpus (OLC), a newly curated collection of 228B tokens of public domain and permissively licensed text. High-risk data, such as text with potential copyright or privacy concerns, is instead placed in the nonparametric datastore, which is queried only at inference time. This dual structure supports sentence-level data attribution and lets data producers opt out by removing their content from the store, reinforcing alignment with legal frameworks such as the U.S. fair use doctrine and privacy regulations such as the GDPR.
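
To make the dual structure concrete, the sketch below shows a toy datastore whose entries carry source metadata: high-risk text lives only in the store, retrieved evidence can be traced back to its source sentence, and a data producer can opt out by deleting their entries. This is a minimal illustration, not the authors' released implementation; the class and field names are assumptions.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Entry:
    key: np.ndarray      # context representation produced by the parametric LM
    next_token: int      # token id that followed this context in the source text
    source_id: str       # identifier of the originating document, for attribution / opt-out
    sentence: str        # original sentence, so retrieved evidence can be surfaced

class Datastore:
    """Toy nonparametric datastore: high-risk text is stored here and queried
    only at inference time; it never updates the parametric LM's weights."""

    def __init__(self):
        self.entries: list[Entry] = []

    def add(self, entry: Entry) -> None:
        self.entries.append(entry)

    def remove_source(self, source_id: str) -> None:
        # Opt-out: drop a data producer's content without retraining anything.
        self.entries = [e for e in self.entries if e.source_id != source_id]

    def nearest(self, query: np.ndarray, k: int = 4) -> list[Entry]:
        # Exact L2 search for clarity; a real system would use an ANN index (e.g., FAISS).
        dists = np.array([np.linalg.norm(e.key - query) for e in self.entries])
        order = np.argsort(dists)[:k]
        return [self.entries[i] for i in order]
```

Each retrieved Entry carries both the next token used to adjust the LM's prediction and the sentence plus source id used for attribution, which is what makes sentence-level attribution and per-source opt-out straightforward.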

Empirical Evaluation

SILO was evaluated against Pythia, a baseline trained on the Pile, a broader corpus that includes copyrighted content. Measured by language modeling perplexity across fourteen diverse domains, access to the nonparametric datastore closes roughly 90% of the performance gap with Pythia, with the largest gains in domains poorly covered by the Open License Corpus. Performance improves steadily with datastore size, and among the retrieval methods compared, the k-nearest-neighbors approach (kNN-LM) proves the most effective.
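
The kNN-LM idea behind those gains can be sketched as follows: the retrieved next tokens from the datastore form a second distribution that is interpolated with the parametric LM's prediction. The interpolation weight, temperature, and helper names below are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def knn_lm_probs(lm_probs, query, keys, next_tokens, k=8, temperature=1.0, lam=0.3):
    """Interpolate the parametric LM's next-token distribution with a kNN
    distribution built from the datastore (kNN-LM style sketch).

    lm_probs:    (V,) next-token probabilities from the parametric LM
    query:       (d,) representation of the current context
    keys:        (n, d) stored context representations
    next_tokens: (n,) token id that followed each stored context
    lam:         weight on the nonparametric distribution (tuned on validation data)
    """
    dists = np.linalg.norm(keys - query, axis=1)     # distance to every stored key
    nearest = np.argsort(dists)[:k]                  # indices of the k closest contexts
    weights = np.exp(-dists[nearest] / temperature)  # softmax over negative distances
    weights /= weights.sum()

    knn_probs = np.zeros_like(lm_probs)
    for w, tok in zip(weights, next_tokens[nearest]):
        knn_probs[tok] += w                          # place mass on retrieved next tokens

    return lam * knn_probs + (1.0 - lam) * lm_probs  # mixed distribution
```

Scaling the datastore simply means adding more stored (key, next token) pairs; no retraining of the parametric model is required, which is why out-of-domain perplexity keeps improving as the store grows.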

Future Directions

While SILO marks a significant stride towards mitigating legal risks associated with LLM training, it also opens avenues for future exploration. Critical considerations include refining the nonparametric datastore's scale and efficiency, extending the SILO concept to other modalities beyond text, and investigating the balance between legal compliance and model fairness. Additionally, the paper suggests the potential development of novel data licensing models to further align legal and ethical considerations with technological advancements.

Conclusion

SILO represents a critical milestone in the endeavor to harmonize LLM development with legal and ethical standards. Its innovative approach, characterized by the separation of training data based on risk assessment and the incorporation of a nonparametric datastore, not only mitigates legal risks but also opens a discourse on responsible AI development. Through empirical evidence, SILO demonstrates its efficacy in bridging performance gaps, paving the way for future research endeavors aimed at enhancing legal compliance and operational efficiency of LLMs.
