
Data Feedback Loops: Model-driven Amplification of Dataset Biases (2209.03942v1)

Published 8 Sep 2022 in cs.LG, cs.AI, cs.CL, cs.CV, and stat.ML

Abstract: Datasets scraped from the internet have been critical to the successes of large-scale machine learning. Yet, this very success puts the utility of future internet-derived datasets at potential risk, as model outputs begin to replace human annotations as a source of supervision. In this work, we first formalize a system where interactions with one model are recorded as history and scraped as training data in the future. We then analyze its stability over time by tracking changes to a test-time bias statistic (e.g. gender bias of model predictions). We find that the degree of bias amplification is closely linked to whether the model's outputs behave like samples from the training distribution, a behavior which we characterize and define as consistent calibration. Experiments in three conditional prediction scenarios - image classification, visual role-labeling, and language generation - demonstrate that models that exhibit a sampling-like behavior are more calibrated and thus more stable. Based on this insight, we propose an intervention to help calibrate and stabilize unstable feedback systems. Code is available at https://github.com/rtaori/data_feedback.

Citations (32)

Summary

Data Feedback Loops: Model-driven Amplification of Dataset Biases

The paper "Data Feedback Loops: Model-driven Amplification of Dataset Biases" presents a rigorous examination of how machine learning models experiencing feedback can amplify biases through their outputs. The authors have formalized a data feedback system whereby interactions with model outputs replace human annotations, thus impacting the training data used in successive iterations of models. This research focuses on the implications of such feedback loops concerning bias amplification.

Problem Formulation and Feedback Dynamics

At the core of this paper is the feedback loop in which model outputs come to dominate the data on which subsequent models are trained, displacing new human annotations. The authors propose a framework for modeling this feedback process and show how biases present in the data become amplified over successive rounds of retraining on internet-derived data. Specifically, the degree of bias amplification depends on how closely model outputs behave like samples from the training distribution, with the concept of consistent calibration being crucial to understanding the stability of these systems.
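To make these dynamics concrete, the following toy simulation (an illustrative sketch, not code from the paper's repository; names such as `feedback_loop` and `human_fraction` are invented here) contrasts a model that samples labels from its predictive distribution with one that always emits its argmax label:

```python
import numpy as np

rng = np.random.default_rng(0)

def retrain(labels):
    """'Train' a toy model: estimate P(y = 1) from the current dataset."""
    return labels.mean()

def feedback_loop(rounds=20, n_new=1000, human_fraction=0.1,
                  true_p=0.3, sample_outputs=True):
    """Simulate retraining rounds where model outputs re-enter the data.

    Each round, a fraction `human_fraction` of new examples is human-labeled
    (drawn from the true distribution); the rest are labeled by the current
    model, either by sampling its predictive distribution or by argmax.
    """
    data = rng.binomial(1, true_p, n_new).astype(float)  # round-0 human data
    p_hat = retrain(data)
    for _ in range(rounds):
        n_human = int(human_fraction * n_new)
        human = rng.binomial(1, true_p, n_human)
        if sample_outputs:                      # sampling-like behavior
            model = rng.binomial(1, p_hat, n_new - n_human)
        else:                                   # argmax: most likely label only
            model = np.full(n_new - n_human, float(p_hat > 0.5))
        data = np.concatenate([data, human, model])
        p_hat = retrain(data)
    return p_hat

# Sampling keeps the bias statistic near true_p; argmax labeling drags it
# toward an extreme (here toward 0, since true_p < 0.5).
print(feedback_loop(sample_outputs=True))   # ~0.30
print(feedback_loop(sample_outputs=False))  # far below 0.30
```

In this toy setting, sampling keeps the dataset's label statistic anchored near the true rate, while argmax labeling shifts it further each round, mirroring the amplification behavior the paper formalizes.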

Key Findings

The paper conducts experiments in three scenarios: image classification, visual role-labeling, and language generation. Results consistently demonstrate that models whose outputs behave like samples from the training distribution are better calibrated and more stable. A novel finding is that models that generalize distributionally inherently limit bias amplification, because they reproduce the biases already present in the data rather than amplifying them further. Furthermore, the authors show that bias amplification can be quantitatively bounded in terms of the models' calibration and the proportion of human-annotated samples present.
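As a back-of-the-envelope illustration of such a bound (a simplified recurrence of our own construction, not the paper's exact theorem): suppose each retraining round draws a fraction alpha of its data from human annotators with true statistic mu* and the remainder from a model whose output statistic overshoots its training data by a miscalibration eps. The statistic then evolves as mu_{t+1} = alpha * mu* + (1 - alpha) * (mu_t + eps), whose fixed point is mu* + eps * (1 - alpha) / alpha. The drift vanishes as miscalibration goes to zero and shrinks as the human fraction grows. A quick numerical check:

```python
# Illustrative recurrence (our simplification, not the paper's theorem):
#   mu_{t+1} = alpha * mu_star + (1 - alpha) * (mu_t + eps)
alpha, eps, mu_star = 0.1, 0.02, 0.3  # human fraction, miscalibration, true statistic
mu = mu_star
for _ in range(200):
    mu = alpha * mu_star + (1 - alpha) * (mu + eps)
print(round(mu, 4))                                    # ~0.48, converged value
print(round(mu_star + eps * (1 - alpha) / alpha, 4))   # 0.48, closed-form fixed point
```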

For instance, in the image classification scenario, accuracy improves as datasets grow with model-annotated samples, but at a substantial cost in bias escalation, a tradeoff that appears across different task settings. In the role-labeling task, prior findings of model bias toward stereotypical gender roles are corroborated, and the bias amplifies over time, consistent with the models' miscalibration.

Practical and Theoretical Implications

The theoretical results motivate recommendations centered on the calibration of learning systems. Practically, deployed models require calibration techniques to remain stable against bias amplification as retraining proceeds iteratively. This insight is vital for institutions that train on internet-derived datasets and may require strategic interventions to mitigate unwanted bias amplification.
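One standard way to implement such an intervention is post-hoc temperature scaling fit on held-out human-labeled data, combined with sampling (rather than argmax) whenever model outputs are recorded as future training labels. The sketch below is a generic recipe consistent with the paper's calibration recommendation, not necessarily its exact procedure; the helper names are ours:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(T, logits, labels):
    """Negative log-likelihood of labels under temperature-scaled softmax."""
    z = logits / T
    z -= z.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels):
    """Fit a single temperature on held-out human-labeled data."""
    res = minimize_scalar(nll, bounds=(0.05, 10.0), args=(logits, labels),
                          method="bounded")
    return res.x

def pseudo_label(logits, T, rng):
    """Label new data by sampling the calibrated distribution, not argmax,
    so recorded outputs behave like samples from the training distribution."""
    z = logits / T
    z -= z.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return np.array([rng.choice(len(row), p=row) for row in p])
```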

Looking ahead, the paper posits that understanding and controlling feedback loops, whether through responsible data governance or better-calibrated model training, could yield meaningful advances in the field. As AI systems increasingly permeate decision-making processes, keeping biases in check remains essential to ethical AI deployment.

Conclusion

The paper represents a significant contribution to understanding the dynamics of bias amplification through feedback loops in AI systems. By formalizing data feedback processes and empirically measuring their impacts, the researchers clarify how models interact with their biased data origins, a subject of growing importance as machine learning systems integrate deeper into society's fabric. Further investigation into methods that prevent these biases from amplifying, perhaps through improved model architectures or advanced calibration techniques, remains a promising avenue toward more equitable AI systems.
