
Chordal Sparsity for SDP-based Neural Network Verification (2206.03482v3)

Published 7 Jun 2022 in cs.LG and math.OC

Abstract: Neural networks are central to many emerging technologies, but verifying their correctness remains a major challenge. It is known that network outputs can be sensitive and fragile to even small input perturbations, thereby increasing the risk of unpredictable and undesirable behavior. Fast and accurate verification of neural networks is therefore critical to their widespread adoption, and in recent years, various methods have been developed in response to this problem. In this paper, we focus on improving semidefinite programming (SDP) based techniques for neural network verification. Such techniques offer the power of expressing complex geometric constraints while retaining a convex problem formulation, but scalability remains a major issue in practice. Our starting point is the DeepSDP framework proposed by Fazlyab et al., which uses quadratic constraints to abstract the verification problem into a large-scale SDP. However, solving this SDP quickly becomes intractable as the network grows. Our key observation is that by leveraging chordal sparsity, we can decompose the primary computational bottleneck of DeepSDP -- a large linear matrix inequality (LMI) -- into an equivalent collection of smaller LMIs. We call the resulting chordally sparse optimization program Chordal-DeepSDP and prove that it is exactly as expressive as DeepSDP. Moreover, we show that additional analysis of Chordal-DeepSDP allows us to rewrite its collection of LMIs in a second level of decomposition that we call Chordal-DeepSDP-2, which yields a further significant computational gain. Finally, we provide numerical experiments on real networks of learned cart-pole dynamics, showcasing the computational advantage of Chordal-DeepSDP and Chordal-DeepSDP-2 over DeepSDP.
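The decomposition the abstract describes rests on a classical fact about chordal sparsity (Agler et al., reference 9 below; see also Vandenberghe and Andersen, reference 10): a positive semidefinite constraint on a symmetric matrix whose sparsity pattern is chordal is equivalent to a collection of smaller, coupled positive semidefinite constraints indexed by the maximal cliques of the sparsity graph:

```latex
% Chordal decomposition theorem (Agler et al., ref. 9): let E be a
% chordal sparsity pattern with maximal cliques C_1, ..., C_p, and let
% E_{C_k} \in \{0,1\}^{|C_k| \times n} select the rows/columns in C_k.
% For a symmetric Z with sparsity pattern E,
Z \succeq 0
\quad\Longleftrightarrow\quad
Z = \sum_{k=1}^{p} E_{C_k}^{\top} Z_k \, E_{C_k},
\qquad Z_k \succeq 0 \;\; (k = 1, \dots, p).
```

To make the mechanism concrete, below is a minimal sketch of this decomposition in CVXPY. It is not the paper's implementation (the paper's experiments use Julia with JuMP and MOSEK, references 46-48); a toy eigenvalue LMI over a two-clique chordal pattern stands in for the large DeepSDP LMI. The single large LMI and its clique-wise decomposition return the same optimal value.

```python
# Minimal sketch of Agler-type chordal decomposition with CVXPY.
# A toy LMI, maximize t subject to A - t*I >= 0, stands in for the
# large DeepSDP LMI; its sparsity pattern is covered by two
# overlapping cliques, so the one large LMI can be replaced by two
# smaller LMIs coupled through the clique overlap.
import numpy as np
import cvxpy as cp

n = 6
cliques = [list(range(0, 4)), list(range(2, 6))]  # maximal cliques of a chordal pattern

# Random symmetric A whose nonzeros all lie inside some clique block.
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
mask = np.zeros((n, n), dtype=bool)
for C in cliques:
    mask[np.ix_(C, C)] = True
A[~mask] = 0.0

# (1) One large n x n LMI: maximize t subject to A - t*I >= 0.
t1 = cp.Variable()
cp.Problem(cp.Maximize(t1), [A - t1 * np.eye(n) >> 0]).solve()

# (2) Chordal decomposition: A - t*I = sum_k E_k^T Z_k E_k, Z_k >= 0,
#     i.e. a collection of |C_k| x |C_k| LMIs coupled on the overlaps.
t2 = cp.Variable()
Zs = [cp.Variable((len(C), len(C)), PSD=True) for C in cliques]
decomposed = 0
for C, Z in zip(cliques, Zs):
    E = np.zeros((len(C), n))
    E[np.arange(len(C)), C] = 1.0  # 0-1 selection matrix for clique C
    decomposed = decomposed + E.T @ Z @ E
cp.Problem(cp.Maximize(t2), [A - t2 * np.eye(n) == decomposed]).solve()

print(t1.value, t2.value)  # agree up to solver tolerance
```

The decomposed problem trades one n x n LMI for several |C_k| x |C_k| LMIs plus linear coupling on the clique overlaps; this is the structural trade-off that, per the abstract, Chordal-DeepSDP exploits at much larger scale.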

References (48)
  1. E. Yeung, S. Kundu, and N. Hodas, “Learning deep neural network representations for Koopman operators of nonlinear dynamical systems,” in 2019 American Control Conference (ACC). IEEE, 2019, pp. 4832–4839.
  2. A. Dosovitskiy, G. Ros, F. Codevilla, A. Lopez, and V. Koltun, “CARLA: An open urban driving simulator,” in Conference on Robot Learning. PMLR, 2017, pp. 1–16.
  3. I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
  4. G. Katz, D. A. Huang, D. Ibeling, K. Julian, C. Lazarus, R. Lim, P. Shah, S. Thakoor, H. Wu, A. Zeljić et al., “The Marabou framework for verification and analysis of deep neural networks,” in International Conference on Computer Aided Verification. Springer, 2019, pp. 443–452.
  5. M. N. Müller, G. Makarchuk, G. Singh, M. Püschel, and M. Vechev, “PRIMA: General and precise neural network certification via scalable convex hull approximations,” Proceedings of the ACM on Programming Languages, vol. 6, no. POPL, pp. 1–33, 2022.
  6. V. Tjeng, K. Xiao, and R. Tedrake, “Evaluating robustness of neural networks with mixed integer programming,” arXiv preprint arXiv:1711.07356, 2017.
  7. M. Fazlyab, M. Morari, and G. J. Pappas, “Safety verification and robustness analysis of neural networks via quadratic constraints and semidefinite programming,” IEEE Transactions on Automatic Control, 2020.
  8. M. Newton and A. Papachristodoulou, “Exploiting sparsity for neural network verification,” in Learning for Dynamics and Control. PMLR, 2021, pp. 715–727.
  9. J. Agler, W. Helton, S. McCullough, and L. Rodman, “Positive semidefinite matrices with a given sparsity pattern,” Linear Algebra and its Applications, vol. 107, pp. 101–149, 1988.
  10. L. Vandenberghe and M. S. Andersen, “Chordal graphs and semidefinite optimization,” Foundations and Trends in Optimization, vol. 1, no. 4, pp. 241–433, 2015.
  11. Y. Zheng, “Chordal sparsity in control and optimization of large-scale systems,” Ph.D. dissertation, University of Oxford, 2019.
  12. S. Bak, C. Liu, and T. Johnson, “The second international verification of neural networks competition (VNN-COMP 2021): Summary and results,” arXiv preprint arXiv:2109.00498, 2021.
  13. C. Liu, T. Arnon, C. Lazarus, C. Strong, C. Barrett, and M. J. Kochenderfer, “Algorithms for verifying deep neural networks,” arXiv preprint arXiv:1903.06758, 2019.
  14. G. Katz, C. Barrett, D. L. Dill, K. Julian, and M. J. Kochenderfer, “Reluplex: An efficient SMT solver for verifying deep neural networks,” in International Conference on Computer Aided Verification. Springer, 2017, pp. 97–117.
  15. X. Song, E. Manino, L. Sena, E. Alves, I. Bessa, M. Lujan, L. Cordeiro et al., “QNNVerifier: A tool for verifying neural networks using SMT-based model checking,” arXiv preprint arXiv:2111.13110, 2021.
  16. L. Sena, X. Song, E. Alves, I. Bessa, E. Manino, L. Cordeiro et al., “Verifying quantized neural networks using SMT-based model checking,” arXiv preprint arXiv:2106.05997, 2021.
  17. A. Lomuscio and L. Maganti, “An approach to reachability analysis for feed-forward ReLU neural networks,” arXiv preprint arXiv:1706.07351, 2017.
  18. R. Ivanov, J. Weimer, R. Alur, G. J. Pappas, and I. Lee, “Verisig: Verifying safety properties of hybrid systems with neural network controllers,” in Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control, 2019, pp. 169–178.
  19. H.-D. Tran, X. Yang, D. M. Lopez, P. Musau, L. V. Nguyen, W. Xiang, S. Bak, and T. T. Johnson, “NNV: The neural network verification tool for deep neural networks and learning-enabled cyber-physical systems,” in International Conference on Computer Aided Verification. Springer, 2020, pp. 3–17.
  20. W. Xiang, H.-D. Tran, and T. T. Johnson, “Output reachable set estimation and verification for multilayer neural networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 11, pp. 5777–5783, 2018.
  21. M. Everett, “Neural network verification in control,” arXiv preprint arXiv:2110.01388, 2021.
  22. T. Gehr, M. Mirman, D. Drachsler-Cohen, P. Tsankov, S. Chaudhuri, and M. Vechev, “AI2: Safety and robustness certification of neural networks with abstract interpretation,” in 2018 IEEE Symposium on Security and Privacy (SP). IEEE, 2018, pp. 3–18.
  23. S. Chen, E. Wong, J. Z. Kolter, and M. Fazlyab, “DeepSplit: Scalable verification of deep neural networks via operator splitting,” arXiv preprint arXiv:2106.09117, 2021.
  24. E. Wong and Z. Kolter, “Provable defenses against adversarial examples via the convex outer adversarial polytope,” in International Conference on Machine Learning. PMLR, 2018, pp. 5286–5295.
  25. S. Wang, H. Zhang, K. Xu, X. Lin, S. Jana, C.-J. Hsieh, and J. Z. Kolter, “Beta-CROWN: Efficient bound propagation with per-neuron split constraints for complete and incomplete neural network verification,” Advances in Neural Information Processing Systems, vol. 34, 2021.
  26. K. Dvijotham, R. Stanforth, S. Gowal, T. A. Mann, and P. Kohli, “A dual approach to scalable verification of deep networks,” in UAI, vol. 1, 2018, p. 3.
  27. M. Fazlyab, M. Morari, and G. J. Pappas, “An introduction to neural network analysis via semidefinite programming,” in 2021 60th IEEE Conference on Decision and Control (CDC). IEEE, 2021, pp. 6341–6350.
  28. A. Raghunathan, J. Steinhardt, and P. Liang, “Semidefinite relaxations for certifying robustness to adversarial examples,” arXiv preprint arXiv:1811.01057, 2018.
  29. H. Chen, H.-T. D. Liu, A. Jacobson, and D. I. Levin, “Chordal decomposition for spectral coarsening,” arXiv preprint arXiv:2009.02294, 2020.
  30. L. P. Ihlenfeld and G. H. Oliveira, “A faster passivity enforcement method via chordal sparsity,” Electric Power Systems Research, vol. 204, p. 107706, 2022.
  31. R. P. Mason and A. Papachristodoulou, “Chordal sparsity, decomposing SDPs and the Lyapunov equation,” in 2014 American Control Conference. IEEE, 2014, pp. 531–537.
  32. B. Batten, P. Kouvaros, A. Lomuscio, and Y. Zheng, “Efficient neural network verification via layer-based semidefinite relaxations and linear cuts,” in International Joint Conference on Artificial Intelligence (IJCAI21), 2021, pp. 2184–2190.
  33. J. Lan, A. Lomuscio, and Y. Zheng, “Tight neural network verification via semidefinite relaxations and linear reformulations,” in Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI22), 2022.
  34. R. A. Brown, E. Schmerling, N. Azizan, and M. Pavone, “A unified view of SDP-based neural network verification through completely positive programming,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2022, pp. 9334–9355.
  35. T. Chen, J. B. Lasserre, V. Magron, and E. Pauwels, “Semialgebraic optimization for Lipschitz constants of ReLU networks,” Advances in Neural Information Processing Systems, vol. 33, pp. 19189–19200, 2020.
  36. M. Newton and A. Papachristodoulou, “Neural network verification using polynomial optimisation,” in 2021 60th IEEE Conference on Decision and Control (CDC). IEEE, 2021, pp. 5092–5097.
  37. M. Newton and A. Papachristodoulou, “Sparse polynomial optimisation for neural network verification,” arXiv preprint arXiv:2202.02241, 2022.
  38. A. Xue, L. Lindemann, A. Robey, H. Hassani, G. J. Pappas, and R. Alur, “Chordal sparsity for Lipschitz constant estimation of deep neural networks,” arXiv preprint arXiv:2204.00846, 2022.
  39. M. Fazlyab, A. Robey, H. Hassani, M. Morari, and G. Pappas, “Efficient and accurate estimation of Lipschitz constants for deep neural networks,” Advances in Neural Information Processing Systems, vol. 32, pp. 11427–11438, 2019.
  40. P. Pauli, A. Koch, J. Berberich, P. Kohler, and F. Allgöwer, “Training robust neural networks using Lipschitz bounds,” IEEE Control Systems Letters, vol. 6, pp. 121–126, 2021.
  41. A. Griewank and P. L. Toint, “On the existence of convex decompositions of partially separable functions,” Mathematical Programming, vol. 28, no. 1, pp. 25–49, 1984.
  42. J. Lofberg, “Pre- and post-processing sum-of-squares programs in practice,” IEEE Transactions on Automatic Control, vol. 54, no. 5, pp. 1007–1011, 2009.
  43. K. Xu, Z. Shi, H. Zhang, Y. Wang, K.-W. Chang, M. Huang, B. Kailkhura, X. Lin, and C.-J. Hsieh, “Automatic perturbation analysis for scalable certified robustness and beyond,” Advances in Neural Information Processing Systems, vol. 33, pp. 1129–1141, 2020.
  44. R. V. Florian, “Correct equations for the dynamics of the cart-pole system,” Center for Cognitive and Neural Studies (Coneural), Romania, 2007.
  45. S. Chen, V. M. Preciado, and M. Fazlyab, “One-shot reachability analysis of neural network dynamical systems,” arXiv preprint arXiv:2209.11827, 2022.
  46. J. Bezanson, A. Edelman, S. Karpinski, and V. B. Shah, “Julia: A fresh approach to numerical computing,” SIAM Review, vol. 59, no. 1, pp. 65–98, 2017.
  47. E. D. Andersen and K. D. Andersen, “The MOSEK interior point optimizer for linear programming: An implementation of the homogeneous algorithm,” in High Performance Optimization. Springer, 2000, pp. 197–232.
  48. I. Dunning, J. Huchette, and M. Lubin, “JuMP: A modeling language for mathematical optimization,” SIAM Review, vol. 59, no. 2, pp. 295–320, 2017.