On the Amplification of Linguistic Bias through Unintentional Self-reinforcement Learning by Generative Language Models -- A Perspective (2306.07135v1)
Abstract: Generative language models (GLMs) have the potential to significantly shape our linguistic landscape due to their expansive use in various digital applications. However, this widespread adoption might inadvertently trigger a self-reinforcement learning cycle that amplifies existing linguistic biases. This paper explores the possibility of such a phenomenon, in which the initial biases of GLMs, reflected in the text they generate, feed into the training material of subsequent models, thereby reinforcing and amplifying those biases. Moreover, the paper highlights how the pervasive nature of GLMs might influence the linguistic and cognitive development of future generations, who may unconsciously learn and reproduce these biases. The implications of this potential self-reinforcement cycle extend beyond the models themselves, affecting human language and discourse. The paper weighs the advantages and disadvantages of this bias amplification, balancing potential educational benefits and easier training of future GLMs against threats to linguistic diversity and over-dependence on the initial GLMs. It underscores the need for rigorous research to understand and address these issues, and advocates for improved model transparency, bias-aware training techniques, the development of methods to distinguish human-written from GLM-generated text, and robust measures for fairness and bias evaluation in GLMs. The aim is to ensure the effective, safe, and equitable use of these powerful technologies while preserving the richness and diversity of human language.
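As a rough illustration of the feedback loop the abstract describes, the sketch below simulates successive model "generations" trained on a mix of human text and earlier model output. It is a minimal toy, not the paper's method: the 55/45 human skew, the sharpening factor, and the growing share of model-authored text are all hypothetical assumptions chosen only to make the amplification visible.

```python
# Toy simulation of the self-reinforcement cycle: each model generation is
# "trained" on human text plus text produced by earlier generations, and a
# mild initial bias toward one linguistic form grows over time.
# All parameters below are illustrative assumptions, not values from the paper.
import random

def train(corpus):
    """'Train' a toy model: its bias is the frequency of form A in the corpus,
    slightly sharpened to mimic a model over-producing the majority form
    (the 1.2 exaggeration factor is an assumption)."""
    p = sum(corpus) / len(corpus)              # observed frequency of form A
    return min(1.0, max(0.0, 0.5 + (p - 0.5) * 1.2))

def generate(p_form_a, n):
    """Sample n utterances; 1 = the favored form A, 0 = the alternative form B."""
    return [1 if random.random() < p_form_a else 0 for _ in range(n)]

random.seed(0)
human_text = generate(0.55, 10_000)            # human corpus with a mild 55/45 skew
corpus = human_text
for gen in range(1, 6):
    p = train(corpus)
    model_text = generate(p, 10_000)
    # Later training corpora contain an ever-larger share of model-authored text.
    corpus = human_text + model_text * gen
    print(f"generation {gen}: model produces form A {p:.1%} of the time")
```

Running the script shows the frequency of the favored form creeping upward across generations, even though the underlying human corpus never changes, which is the amplification dynamic the paper is concerned with.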