The Mirrored Influence Hypothesis: Efficient Data Influence Estimation by Harnessing Forward Passes (2402.08922v2)
Abstract: Large-scale black-box models have become ubiquitous across numerous applications. Understanding the influence of individual training data sources on predictions made by these models is crucial for improving their trustworthiness. Current influence estimation techniques involve computing gradients for every training point or repeated training on different subsets. These approaches face obvious computational challenges when scaled up to large datasets and models. In this paper, we introduce and explore the Mirrored Influence Hypothesis, highlighting a reciprocal nature of influence between training and test data. Specifically, it suggests that evaluating the influence of training data on test predictions can be reformulated as an equivalent, yet inverse problem: assessing how the predictions for training samples would be altered if the model were trained on specific test samples. Through both empirical and theoretical validations, we demonstrate the wide applicability of our hypothesis. Inspired by this, we introduce a new method for estimating the influence of training data, which requires calculating gradients for specific test samples, paired with a forward pass for each training point. This approach can capitalize on the common asymmetry in scenarios where the number of test samples under concurrent examination is much smaller than the scale of the training dataset, thus gaining a significant improvement in efficiency compared to existing approaches. We demonstrate the applicability of our method across a range of scenarios, including data attribution in diffusion models, data leakage detection, analysis of memorization, mislabeled data detection, and tracing behavior in LLMs. Our code will be made available at https://github.com/ruoxi-jia-group/Forward-INF.
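To make the mirrored estimation procedure concrete, below is a minimal sketch of the idea described in the abstract, assuming a PyTorch classifier. The function name `mirrored_influence`, the hyperparameters, and the data layout are all illustrative assumptions, not the authors' released Forward-INF code: a copy of the model is briefly fine-tuned on the test samples (the only place gradients are computed), and each training point's influence is then scored by how much its loss changes, which costs just two forward passes per training point.

```python
# Hypothetical sketch of the mirrored-influence idea from the abstract.
# All names (mirrored_influence, train_points, test_batch, lr, steps) are
# assumptions for illustration, not the authors' Forward-INF implementation.
import copy
import torch
import torch.nn.functional as F

def mirrored_influence(model, train_points, test_batch, lr=1e-3, steps=10):
    """Score each training point's influence on `test_batch` via the
    inverse problem: fine-tune a copy of the model on the test samples,
    then measure how each training point's loss changes."""
    # Loss on every training point BEFORE fine-tuning (forward passes only).
    model.eval()
    with torch.no_grad():
        loss_before = torch.stack([
            F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
            for x, y in train_points
        ])

    # Fine-tune a copy on the test samples: gradients are computed only
    # here, for the (typically small) test batch.
    tuned = copy.deepcopy(model)
    tuned.train()
    opt = torch.optim.SGD(tuned.parameters(), lr=lr)
    xs, ys = test_batch
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(tuned(xs), ys).backward()
        opt.step()

    # Loss on every training point AFTER fine-tuning (forward passes only).
    tuned.eval()
    with torch.no_grad():
        loss_after = torch.stack([
            F.cross_entropy(tuned(x.unsqueeze(0)), y.unsqueeze(0))
            for x, y in train_points
        ])

    # A large loss drop on a training point suggests it is strongly
    # influential for the given test samples.
    return loss_before - loss_after
```

This layout reflects the asymmetry the abstract highlights: backpropagation runs only over the small set of test samples under examination, while the (much larger) training set is touched only through forward passes, once before and once after fine-tuning.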