The (ab)use of Open Source Code to Train Large Language Models (2302.13681v2)
Published 27 Feb 2023 in cs.SE and cs.AI
Abstract: In recent years, LLMs have gained significant popularity due to their ability to generate human-like text and their potential applications in various fields, such as Software Engineering. LLMs for Code are commonly trained on large unsanitized corpora of source code scraped from the Internet. The content of these datasets is memorized by the models and emitted, often verbatim. In this work, we discuss the security, privacy, and licensing implications of such memorization. We argue that the use of copyleft code to train LLMs poses a legal and ethical dilemma. Finally, we provide four actionable recommendations to address this issue.