How to Protect Copyright Data in Optimization of Large Language Models? (2308.12247v1)

Published 23 Aug 2023 in cs.LG and cs.CL

Abstract: LLMs and generative AI have played a transformative role in computer research and applications. Controversy has arisen as to whether these models output copyrighted data, which can occur if the data the models are trained on is copyrighted. LLMs are built on the transformer neural network architecture, which in turn relies on a mathematical computation called Attention that uses the softmax function. In this paper, we show that LLM training and optimization can be seen as a softmax regression problem. We then establish a method for performing softmax regression efficiently in a way that prevents the regression function from generating copyrighted data. This yields a theoretical method of training LLMs that avoids generating copyrighted data.
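
The abstract's framing can be made concrete. In the softmax regression literature this paper builds on, the problem is typically posed as minimizing || <exp(Ax), 1_n>^-1 exp(Ax) - b ||_2 over x, for a given matrix A and target vector b. The NumPy sketch below implements that generic objective with plain gradient descent; the problem sizes, random seed, step size, and iteration count are illustrative assumptions, and the paper's copyright-protecting modification to the regression is not reproduced here.

```python
import numpy as np

def softmax_regression_loss(A, x, b):
    """0.5 * || exp(Ax)/<exp(Ax), 1> - b ||^2, the generic softmax regression objective."""
    z = A @ x
    u = np.exp(z - z.max())          # shift for numerical stability; softmax is shift-invariant
    f = u / u.sum()                  # normalized softmax vector
    return 0.5 * np.sum((f - b) ** 2)

def softmax_regression_grad(A, x, b):
    """Gradient w.r.t. x, via the softmax Jacobian diag(f) - f f^T."""
    z = A @ x
    u = np.exp(z - z.max())
    f = u / u.sum()
    r = f - b                        # residual
    Jr = f * r - f * (f @ r)         # (diag(f) - f f^T) @ r, without forming the matrix
    return A.T @ Jr

# Toy instance with synthetic data (all values here are illustrative).
rng = np.random.default_rng(0)
n, d = 8, 3
A = rng.normal(size=(n, d))
b = rng.dirichlet(np.ones(n))        # target is a probability vector, like a softmax output
x = np.zeros(d)
for _ in range(2000):
    x -= 0.1 * softmax_regression_grad(A, x, b)
print(f"final loss: {softmax_regression_loss(A, x, b):.6f}")
```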

Authors (3)
  1. Timothy Chu (11 papers)
  2. Zhao Song (253 papers)
  3. Chiwun Yang (14 papers)
Citations (25)
