Geometric Algorithms for $k$-NN Poisoning (2306.12377v1)

Published 21 Jun 2023 in cs.LG, cs.CG, and cs.CR

Abstract: We propose a label poisoning attack on geometric data sets against $k$-nearest neighbor classification. We provide an algorithm that can compute an $\varepsilon n$-additive approximation of the optimal poisoning in $n \cdot 2^{2^{O(d + k/\varepsilon)}}$ time for a given data set $X \subset \mathbb{R}^d$, where $|X| = n$. Our algorithm achieves its objectives through the application of multi-scale random partitions.
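To make the objective concrete, the sketch below shows what "optimal poisoning" means in the simplest setting: given a flip budget, exhaustively search for the set of training labels whose flipping maximizes $k$-NN error. This brute force is exponential in the budget and is not the paper's algorithm; the function names and the use of a held-out evaluation set as the attacker's objective are illustrative assumptions.

```python
# Hypothetical brute-force baseline for budgeted k-NN label poisoning.
# Exponential in the budget -- NOT the paper's approximation algorithm,
# only an illustration of the objective being approximated.
from itertools import combinations

import numpy as np


def knn_predict(X_train, y_train, X_test, k):
    """Predict binary labels {0,1} by k-NN majority vote."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = np.argsort(dists)[:k]
        # Majority vote among k binary labels: 2*sum > k means label 1 wins.
        preds.append(int(2 * y_train[nearest].sum() > k))
    return np.array(preds)


def brute_force_poison(X, y, X_eval, y_eval, k, budget):
    """Search all label-flip sets of size <= budget for the one that
    maximizes k-NN error on (X_eval, y_eval)."""
    best_flips, best_err = (), 0.0
    for m in range(budget + 1):
        for flips in combinations(range(len(y)), m):
            y_poisoned = y.copy()
            y_poisoned[list(flips)] ^= 1  # flip the chosen binary labels
            err = np.mean(knn_predict(X, y_poisoned, X_eval, k) != y_eval)
            if err > best_err:
                best_err, best_flips = err, flips
    return best_flips, best_err


# Tiny usage example with synthetic binary-labeled points:
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))
y = (X[:, 0] > 0).astype(int)
X_eval = rng.normal(size=(6, 2))
y_eval = (X_eval[:, 0] > 0).astype(int)
flips, err = brute_force_poison(X, y, X_eval, y_eval, k=3, budget=2)
```

The point of the paper is to sidestep exactly this exhaustive search: its algorithm settles for an $\varepsilon n$-additive approximation of the optimum and pays $n \cdot 2^{2^{O(d + k/\varepsilon)}}$ time instead.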
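The abstract credits the result to multi-scale random partitions. One textbook construction of such partitions, sketched below purely for illustration (the paper may use a different scheme, such as ball-carving partitions in the style of Lee and Naor; the function names here are hypothetical), cuts $\mathbb{R}^d$ into cubes of side $\Delta$ at every scale $\Delta = 2^i$ after a uniform random shift, so that every cell has diameter at most $\Delta\sqrt{d}$.

```python
# Hypothetical sketch of multi-scale random partitions via randomly
# shifted grids. At scale delta, points land in axis-aligned cubes of
# side delta after a uniform random shift, so each cell has diameter
# at most delta * sqrt(d).
import numpy as np


def shifted_grid_partition(X, delta, rng):
    """Partition points in R^d into cells of side `delta` under a random shift.

    Returns a dict mapping each cell id (a tuple of ints) to the indices
    of the points it contains.
    """
    d = X.shape[1]
    shift = rng.uniform(0.0, delta, size=d)  # uniform shift in [0, delta)^d
    cells = {}
    for i, x in enumerate(X):
        cell_id = tuple(np.floor((x + shift) / delta).astype(int))
        cells.setdefault(cell_id, []).append(i)
    return cells


def multi_scale_partitions(X, num_scales, rng):
    """One random partition per scale delta = 2**i, i = 0..num_scales-1."""
    return [shifted_grid_partition(X, 2.0 ** i, rng) for i in range(num_scales)]


# Usage: four partitions of the same point set at scales 1, 2, 4, 8.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
parts = multi_scale_partitions(X, num_scales=4, rng=rng)
```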
