Accelerating Optimal Power Flow with GPUs: SIMD Abstraction of Nonlinear Programs and Condensed-Space Interior-Point Methods (2307.16830v2)

Published 31 Jul 2023 in math.OC and cs.DC

Abstract: This paper introduces a framework for solving alternating current optimal power flow (ACOPF) problems using graphics processing units (GPUs). While GPUs have demonstrated remarkable performance in various computing domains, their application in ACOPF has been limited due to challenges associated with porting sparse automatic differentiation (AD) and sparse linear solver routines to GPUs. We address these issues with two key strategies. First, we utilize a single-instruction, multiple-data abstraction of nonlinear programs. This approach enables the specification of model equations while preserving their parallelizable structure and, in turn, facilitates the parallel AD implementation. Second, we employ a condensed-space interior-point method (IPM) with an inequality relaxation. This technique condenses the Karush--Kuhn--Tucker (KKT) system into a positive definite system. The key advantage is that the KKT matrix can then be factorized without numerical pivoting, a requirement that has hampered the parallelization of the IPM algorithm. By combining these strategies, we can perform the majority of operations on GPUs while keeping the data resident in device memory. Comprehensive numerical benchmark results showcase the advantage of our approach. Remarkably, our implementations -- MadNLP.jl and ExaModels.jl -- running on NVIDIA GPUs achieve an order of magnitude speedup compared with state-of-the-art tools running on contemporary CPUs.
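The condensation step described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (MadNLP.jl and ExaModels.jl are Julia packages running on GPUs); it only shows, in standard IPM notation, why eliminating the slack and multiplier blocks yields a positive definite system that admits a Cholesky factorization with no numerical pivoting. The symbols `W` (Lagrangian Hessian), `J` (constraint Jacobian), `s` (slacks), and `z` (bound multipliers) follow textbook interior-point notation and are assumptions of this sketch, not names from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 4                      # primal variables, inequality constraints

A = rng.standard_normal((n, n))
W = A @ A.T                      # Lagrangian Hessian (made PSD here for simplicity)
J = rng.standard_normal((m, n))  # constraint Jacobian
s = rng.uniform(0.5, 2.0, m)     # slacks, strictly positive at an interior iterate
z = rng.uniform(0.5, 2.0, m)     # multipliers for the slack bounds, also positive
r = rng.standard_normal(n)       # right-hand side (dual residual, illustrative)

# Condensation: eliminating the slack/multiplier blocks from the KKT system
# gives K = W + J^T Sigma J with Sigma = diag(z / s) > 0. K is positive
# definite, so it can be factorized by Cholesky with NO numerical pivoting --
# the property that makes the factorization amenable to GPU parallelism.
Sigma = np.diag(z / s)
K = W + J.T @ Sigma @ J

L = np.linalg.cholesky(K)        # raises LinAlgError iff K is not positive definite
dx = np.linalg.solve(L.T, np.linalg.solve(L, -r))  # primal Newton step
```

A pivoting-based symmetric-indefinite factorization (the usual choice for the full KKT matrix) has a data-dependent elimination order, which is what hampers parallelization; Cholesky on the condensed positive definite matrix has a fixed order and maps well onto GPU kernels.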
