Anderson acceleration for iteratively reweighted $\ell_1$ algorithm (2403.07271v1)

Published 12 Mar 2024 in math.OC, cs.AI, cs.LG, and eess.SP

Abstract: The iteratively reweighted $\ell_1$ (IRL1) algorithm is a common method for solving sparse optimization problems with nonconvex and nonsmooth regularization. The development of accelerated variants, often based on Nesterov acceleration, has attracted significant interest; nevertheless, the convergence and complexity analysis of these accelerated algorithms consistently poses substantial challenges. Recently, Anderson acceleration has gained prominence owing to its exceptional performance in speeding up fixed-point iterations, and numerous recent studies have applied it to gradient-based algorithms. Motivated by the powerful impact of Anderson acceleration, we propose an Anderson-accelerated IRL1 algorithm and establish its local linear convergence rate. We extend this convergence result, typically obtained in smooth settings, to a nonsmooth scenario. Importantly, our theoretical results do not depend on the Kurdyka-Łojasiewicz condition, an assumption required by existing Nesterov-acceleration-based algorithms. Furthermore, to ensure global convergence, we introduce a globally convergent Anderson-accelerated IRL1 algorithm by incorporating a classical nonmonotone line search condition. Experimental results indicate that our algorithm outperforms existing Nesterov-acceleration-based algorithms.
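
To make the scheme concrete, below is a minimal sketch (not the authors' exact implementation) of Anderson acceleration wrapped around an IRL1 fixed-point map. It assumes a least-squares loss f(x) = 0.5*||Ax - b||^2, a log-sum penalty whose reweighting gives w_i = lam / (|x_i| + eps), and a proximal-gradient inner step; the names irl1_step and anderson_irl1 are illustrative, not from the paper.

import numpy as np

def irl1_step(x, A, b, lam, eps, L):
    """One proximal iteratively reweighted l1 step: the fixed-point map g(x)."""
    w = lam / (np.abs(x) + eps)               # reweighting from the log-sum penalty (assumed)
    grad = A.T @ (A @ x - b)                  # gradient of the smooth least-squares part
    z = x - grad / L                          # forward (gradient) step
    return np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)  # weighted soft-threshold

def anderson_irl1(x0, A, b, lam=0.1, eps=1e-2, m=5, iters=200, tol=1e-8):
    """Anderson acceleration with memory m applied to the IRL1 fixed-point map."""
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    x = x0.copy()
    X, R = [], []                             # windows of images g(x_k) and residuals
    for _ in range(iters):
        gx = irl1_step(x, A, b, lam, eps, L)
        r = gx - x                            # fixed-point residual g(x) - x
        if np.linalg.norm(r) < tol:
            break
        X.append(gx); R.append(r)
        if len(R) > m:                        # keep only the last m pairs
            X.pop(0); R.pop(0)
        # Solve min_alpha ||R alpha|| s.t. sum(alpha) = 1 via the normal equations
        Rm = np.column_stack(R)
        G = Rm.T @ Rm + 1e-10 * np.eye(Rm.shape[1])   # small ridge for stability
        alpha = np.linalg.solve(G, np.ones(Rm.shape[1]))
        alpha /= alpha.sum()
        x = np.column_stack(X) @ alpha        # Anderson-mixed iterate
    return x

# Usage: recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = anderson_irl1(np.zeros(200), A, b)

In the globalized variant the abstract describes, the accelerated candidate would additionally be checked against a classical nonmonotone line search (sufficient-decrease) condition, falling back to the plain IRL1 step when the check fails; that safeguard is omitted from this sketch for brevity.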

Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. 
Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). 
Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. 
Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. 
Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. 
Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). 
Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. 
Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. 
SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. 
Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). 
PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). 
Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. 
[2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. 
Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. 
SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. 
SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. 
[2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. 
SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. 
SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. 
[2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Anderson, D.G.: Iterative procedures for nonlinear integral equations. 
Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 83(3), 967–997 (2022)
Yu, P., Pong, T.K.: Iteratively reweighted ℓ1 algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019)
Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE
Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014)
Chen, X., Zhou, W.: Convergence of reweighted ℓ1 minimization algorithms and unique solution of truncated ℓp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010)
Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k²). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987)
Eddy, R.: Extrapolating to the limit of a vector sequence. In: Information Linkage Between Applied Mathematics and Industry. Elsevier (1979)
Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM 12(4), 547–560 (1965)
Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓp regularization and reweighted ℓ1 regularization. Optimization Letters 15(8), 2639–2660 (2021)
Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
Kelley, C.T.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bradley, P.S., Mangasarian, O.L., Street, W.N.: Feature selection via mathematical programming. INFORMS Journal on Computing 10(2), 209–217 (1998) Fazel et al. [2003] Fazel, M., Hindi, H., Boyd, S.P.: Log-det heuristic for matrix rank minimization with applications to hankel and euclidean distance matrices. In: Proceedings of the 2003 American Control Conference, 2003., vol. 3, pp. 2156–2162 (2003). IEEE Lobo et al. [2007] Lobo, M.S., Fazel, M., Boyd, S.: Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research 152, 341–365 (2007) Candes et al. [2008] Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. 
Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. 
SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fazel, M., Hindi, H., Boyd, S.P.: Log-det heuristic for matrix rank minimization with applications to hankel and euclidean distance matrices. In: Proceedings of the 2003 American Control Conference, 2003., vol. 3, pp. 2156–2162 (2003). IEEE Lobo et al. [2007] Lobo, M.S., Fazel, M., Boyd, S.: Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research 152, 341–365 (2007) Candes et al. [2008] Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. 
Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lobo, M.S., Fazel, M., Boyd, S.: Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research 152, 341–365 (2007) Candes et al. [2008] Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. 
Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. 
The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. 
SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. 
[2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. 
[2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. 
[2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. 
[2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. 
In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 
3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. 
Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). 
Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. 
SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. 
Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). 
PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. 
[1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. 
Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. 
SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. 
Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. 
[2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. 
PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. 
[2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scardapane, S., Comminiello, D., Hussain, A., Uncini, A.: Group sparse regularization for deep neural networks. Neurocomputing 241, 81–89 (2017) Bradley et al. [1998] Bradley, P.S., Mangasarian, O.L., Street, W.N.: Feature selection via mathematical programming. INFORMS Journal on Computing 10(2), 209–217 (1998) Fazel et al. [2003] Fazel, M., Hindi, H., Boyd, S.P.: Log-det heuristic for matrix rank minimization with applications to hankel and euclidean distance matrices. In: Proceedings of the 2003 American Control Conference, 2003., vol. 3, pp. 2156–2162 (2003). IEEE Lobo et al. 
[2007] Lobo, M.S., Fazel, M., Boyd, S.: Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research 152, 341–365 (2007) Candes et al. [2008] Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. 
[1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. 
Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bradley, P.S., Mangasarian, O.L., Street, W.N.: Feature selection via mathematical programming. INFORMS Journal on Computing 10(2), 209–217 (1998) Fazel et al. [2003] Fazel, M., Hindi, H., Boyd, S.P.: Log-det heuristic for matrix rank minimization with applications to hankel and euclidean distance matrices. In: Proceedings of the 2003 American Control Conference, 2003., vol. 3, pp. 2156–2162 (2003). IEEE Lobo et al. [2007] Lobo, M.S., Fazel, M., Boyd, S.: Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research 152, 341–365 (2007) Candes et al. [2008] Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. 
In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. 
[2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fazel, M., Hindi, H., Boyd, S.P.: Log-det heuristic for matrix rank minimization with applications to hankel and euclidean distance matrices. In: Proceedings of the 2003 American Control Conference, 2003., vol. 3, pp. 2156–2162 (2003). IEEE Lobo et al. [2007] Lobo, M.S., Fazel, M., Boyd, S.: Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research 152, 341–365 (2007) Candes et al. [2008] Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. 
[2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lobo, M.S., Fazel, M., Boyd, S.: Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research 152, 341–365 (2007) Candes et al. [2008] Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. 
[2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. 
Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. 
SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
Yu, P., Pong, T.K.: Iteratively reweighted ℓ1 algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019)
Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872. IEEE (2008)
Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014)
Chen, X., Zhou, W.: Convergence of reweighted ℓ1 minimization algorithms and unique solution of truncated ℓp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010)
Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k^2). In: Doklady Akademii Nauk, vol. 269, pp. 543–547. Russian Academy of Sciences (1983)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987)
Eddy, R.: Extrapolating to the limit of a vector sequence. In: Information Linkage Between Applied Mathematics and Industry. Elsevier (1979)
Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM 12(4), 547–560 (1965)
Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629. PMLR (2020)
Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓp regularization and reweighted ℓ1 regularization. Optimization Letters 15(8), 2639–2660 (2021)
Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
Kelley, C.T.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. 
SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. 
[2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. 
[2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. 
SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. 
Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
  4. Mairal, J., Bach, F., Ponce, J., Sapiro, G.: Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research 11(1) (2010) Scardapane et al. [2017] Scardapane, S., Comminiello, D., Hussain, A., Uncini, A.: Group sparse regularization for deep neural networks. Neurocomputing 241, 81–89 (2017) Bradley et al. [1998] Bradley, P.S., Mangasarian, O.L., Street, W.N.: Feature selection via mathematical programming. INFORMS Journal on Computing 10(2), 209–217 (1998) Fazel et al. [2003] Fazel, M., Hindi, H., Boyd, S.P.: Log-det heuristic for matrix rank minimization with applications to hankel and euclidean distance matrices. In: Proceedings of the 2003 American Control Conference, 2003., vol. 3, pp. 2156–2162 (2003). IEEE Lobo et al. [2007] Lobo, M.S., Fazel, M., Boyd, S.: Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research 152, 341–365 (2007) Candes et al. [2008] Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. 
In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scardapane, S., Comminiello, D., Hussain, A., Uncini, A.: Group sparse regularization for deep neural networks. Neurocomputing 241, 81–89 (2017) Bradley et al. [1998] Bradley, P.S., Mangasarian, O.L., Street, W.N.: Feature selection via mathematical programming. INFORMS Journal on Computing 10(2), 209–217 (1998) Fazel et al. [2003] Fazel, M., Hindi, H., Boyd, S.P.: Log-det heuristic for matrix rank minimization with applications to hankel and euclidean distance matrices. In: Proceedings of the 2003 American Control Conference, 2003., vol. 3, pp. 2156–2162 (2003). IEEE Lobo et al. [2007] Lobo, M.S., Fazel, M., Boyd, S.: Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research 152, 341–365 (2007) Candes et al. [2008] Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. 
[2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bradley, P.S., Mangasarian, O.L., Street, W.N.: Feature selection via mathematical programming. INFORMS Journal on Computing 10(2), 209–217 (1998) Fazel et al. [2003] Fazel, M., Hindi, H., Boyd, S.P.: Log-det heuristic for matrix rank minimization with applications to hankel and euclidean distance matrices. In: Proceedings of the 2003 American Control Conference, 2003., vol. 3, pp. 2156–2162 (2003). IEEE Lobo et al. [2007] Lobo, M.S., Fazel, M., Boyd, S.: Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research 152, 341–365 (2007) Candes et al. [2008] Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. 
Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. 
SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fazel, M., Hindi, H., Boyd, S.P.: Log-det heuristic for matrix rank minimization with applications to hankel and euclidean distance matrices. In: Proceedings of the 2003 American Control Conference, 2003., vol. 3, pp. 2156–2162 (2003). IEEE Lobo et al. [2007] Lobo, M.S., Fazel, M., Boyd, S.: Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research 152, 341–365 (2007) Candes et al. [2008] Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. 
Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lobo, M.S., Fazel, M., Boyd, S.: Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research 152, 341–365 (2007) Candes et al. [2008] Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. 
Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. 
The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. 
SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. 
[2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. 
SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. 
SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. 
SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. 
Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. 
Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. 
Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. 
Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
Computational Optimization and Applications 83(3), 967–997 (2022)
18. Yu, P., Pong, T.K.: Iteratively reweighted ℓ1 algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019)
19. Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE
20. Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014)
21. Chen, X., Zhou, W.: Convergence of reweighted ℓ1 minimization algorithms and unique solution of truncated ℓp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010)
22. Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k²). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences
23. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
24. Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987)
25. Eddy, R.: Extrapolating to the limit of a vector sequence. In: Information Linkage Between Applied Mathematics and Industry. Elsevier (1979)
26. Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM 12(4), 547–560 (1965)
27. Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
28. Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
29. Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
30. Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
31. Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
32. Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
33. Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
34. Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
35. Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
36. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
37. Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓp regularization and reweighted ℓ1 regularization. Optimization Letters 15(8), 2639–2660 (2021)
38. Scieur, D., Bach, F., d'Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
39. Scieur, D., d'Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
40. Kelley, C.T.: Iterative Methods for Linear and Nonlinear Equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
41. Zhang, J., O'Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
42. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
43. Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
44. Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
45. Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
46. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). 
Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. 
Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. 
SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. 
Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). 
PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). 
Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. 
[2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. 
Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. 
SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. 
Computational Optimization and Applications 83(3), 967–997 (2022)
Yu, P., Pong, T.K.: Iteratively reweighted $\ell_1$ algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019)
Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE
Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014)
Chen, X., Zhou, W.: Convergence of reweighted $\ell_1$ minimization algorithms and unique solution of truncated $\ell_p$ minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010)
Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate $O(1/k^2)$. In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987)
Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979)
Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965)
Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating $\ell_p$ regularization and reweighted $\ell_1$ regularization. Optimization Letters 15(8), 2639–2660 (2021)
Scieur, D., Bach, F., d'Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
Scieur, D., d'Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
Kelley, C.T.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
Zhang, J., O'Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
[2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. 
Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). 
In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. 
SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). 
IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. 
Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. 
SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 
28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. 
[2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. 
Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. 
Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. 
[2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. 
In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. 
[2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. 
In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. 
Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). 
PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. 
Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 
28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. 
[1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. 
Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. 
[2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. 
[2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
  8. Lobo, M.S., Fazel, M., Boyd, S.: Portfolio optimization with linear and fixed transaction costs. Annals of Operations Research 152, 341–365 (2007) Candes et al. [2008] Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. 
[1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. 
Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Candes, E.J., Wakin, M.B., Boyd, S.P.: Enhancing sparsity by reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT minimization. Journal of Fourier Analysis and Applications 14, 877–905 (2008) Fan and Li [2001] Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. 
Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360 (2001) Zhang [2007] Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 
28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. 
SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. 
SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. 
Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. 
Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). 
PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. 
SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. 
[2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. 
Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
Yu, P., Pong, T.K.: Iteratively reweighted ℓ1 algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019)
Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE
Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014)
Chen, X., Zhou, W.: Convergence of reweighted ℓ1 minimization algorithms and unique solution of truncated ℓp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010)
Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k^2). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987)
Eddy, R.: Extrapolating to the limit of a vector sequence. In: Information Linkage Between Applied Mathematics and Industry. Elsevier (1979)
Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM 12(4), 547–560 (1965)
Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓp regularization and reweighted ℓ1 regularization. Optimization Letters 15(8), 2639–2660 (2021)
Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
Kelley, C.T.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics, vol. 16. SIAM (1995)
Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. 
Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. 
[2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. 
SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). 
Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. 
[2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. 
In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. 
SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. 
SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. 
SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. 
SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. 
[2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. 
Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. 
SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. 
Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. 
SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. 
SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. 
SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. 
[2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. 
[2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. 
SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. 
Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
  11. Zhang, C.H.: Penalized linear unbiased selection. Department of Statistics and Bioinformatics, Rutgers University 3(2010), 894–942 (2007) Wang et al. [2021] Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. 
SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. 
Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zhang, F., Shi, Y., Hu, Y.: Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. Journal of Global Optimization 81, 717–748 (2021) Gong et al. [2013] Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. 
[2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Gong, P., Zhang, C., Lu, Z., Huang, J., Ye, J.: A general iterative shrinkage and thresholding algorithm for non-convex regularized optimization problems. 
In: International Conference on Machine Learning, pp. 37–45 (2013). PMLR Lu [2014] Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. [2022] Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. 
SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, Z.: Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming. Mathematical Programming 147(1-2), 277–307 (2014) Wang et al. [2015] Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT-regularization. Optimization 64(10), 2183–2197 (2015) Yang et al. 
SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. 
SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. 
[2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. 
SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. 
Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. 
Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022) Wang et al. [2022] Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. 
SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. 
[2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 
28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. 
Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. 
SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. 
In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. 
SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022) Yu and Pong [2019] Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Yu, P., Pong, T.K.: Iteratively reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019) Chartrand and Yin [2008] Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). 
IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. 
Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. 
SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 
28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. 
[2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. 
Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. 
[2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). 
PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. 
SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. 
SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. 
[2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. 
SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. 
Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
14. Lu, Z.: Iterative reweighted minimization methods for ℓp regularized unconstrained nonlinear programming. Mathematical Programming 147(1–2), 277–307 (2014)
15. Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained ℓp-regularization. Optimization 64(10), 2183–2197 (2015)
16. Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex ℓp ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022)
17. Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1 method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022)
18. Yu, P., Pong, T.K.: Iteratively reweighted ℓ1 algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019)
19. Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE
20. Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014)
21. Chen, X., Zhou, W.: Convergence of reweighted ℓ1 minimization algorithms and unique solution of truncated ℓp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010)
22. Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k²). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences
23. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
24. Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987)
25. Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979)
26. Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965)
27. Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
28. Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
29. Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
30. Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
31. Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
32. Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
33. Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
34. Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
35. Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
36. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
37. Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓp regularization and reweighted ℓ1 regularization. Optimization Letters 15(8), 2639–2660 (2021)
38. Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
39. Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
40. Kelley, C.T.: Iterative Methods for Linear and Nonlinear Equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
41. Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
42. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
43. Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
44. Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
45. Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
46. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE Lu et al. [2014] Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. 
SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. 
Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. 
SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. 
SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. 
SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. 
[2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. 
[2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. 
SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. 
Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
  15. Wang, H., Li, D.-H., Zhang, X.-J., Wu, L.: Optimality conditions for the constrained $\ell_p$-regularization. Optimization 64(10), 2183–2197 (2015)
  16. Yang, X., Wang, J., Wang, H.: Towards an efficient approach for the nonconvex $\ell_p$ ball projection: algorithm and analysis. The Journal of Machine Learning Research 23(1), 4346–4376 (2022)
  17. Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted $\ell_1$ method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022)
  18. Yu, P., Pong, T.K.: Iteratively reweighted $\ell_1$ algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019)
  19. Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE
  20. Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014)
  21. Chen, X., Zhou, W.: Convergence of reweighted $\ell_1$ minimization algorithms and unique solution of truncated $\ell_p$ minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010)
  22. Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate $O(1/k^2)$. In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences
  23. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
  24. Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987)
  25. Eddy, R.: Extrapolating to the limit of a vector sequence. In: Information Linkage Between Applied Mathematics and Industry. Elsevier (1979)
  26. Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965)
  27. Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
  28. Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
  29. Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
  30. Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
  31. Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
  32. Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
  33. Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
  34. Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
  35. Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
  36. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
  37. Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating $\ell_p$ regularization and reweighted $\ell_1$ regularization. Optimization Letters 15(8), 2639–2660 (2021)
  38. Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
  39. Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
  40. Kelley, C.T.: Iterative Methods for Linear and Nonlinear Equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
  41. Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
  42. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
  43. Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
  44. Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
  45. Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
  46. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. 
SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. 
SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. 
SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. 
SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. 
SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. 
Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. 
[2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. 
SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. 
Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
Computational Optimization and Applications 83(3), 967–997 (2022)
18. Yu, P., Pong, T.K.: Iteratively reweighted $\ell_1$ algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019)
19. Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE
20. Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014)
21. Chen, X., Zhou, W.: Convergence of reweighted $\ell_1$ minimization algorithms and unique solution of truncated $\ell_p$ minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010)
22. Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate $O(1/k^2)$. In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences
23. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
24. Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987)
25. Eddy, R.: Extrapolating to the limit of a vector sequence. In: Information Linkage Between Applied Mathematics and Industry. Elsevier (1979)
26. Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM 12(4), 547–560 (1965)
27. Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
28. Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
29. Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
30. Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
31. Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
32. Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
33. Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
34. Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
35. Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
36. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
37. Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating $\ell_p$ regularization and reweighted $\ell_1$ regularization. Optimization Letters 15(8), 2639–2660 (2021)
38. Scieur, D., Bach, F., d'Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
39. Scieur, D., d'Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
40. Kelley, C.T.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
41. Zhang, J., O'Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
42. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
43. Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
44. Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
45. Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
46. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. 
Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). 
PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. 
SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. 
[2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. 
Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
17. Wang, H., Zeng, H., Wang, J.: An extrapolated iteratively reweighted ℓ1 method with complexity analysis. Computational Optimization and Applications 83(3), 967–997 (2022)
18. Yu, P., Pong, T.K.: Iteratively reweighted ℓ1 algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019)
19. Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE
20. Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014)
21. Chen, X., Zhou, W.: Convergence of reweighted ℓ1 minimization algorithms and unique solution of truncated ℓp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010)
22. Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k²). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences
23. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
24. Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987)
25. Eddy, R.: Extrapolating to the limit of a vector sequence. In: Information Linkage Between Applied Mathematics and Industry. Elsevier (1979)
26. Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965)
27. Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
28. Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
29. Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
30. Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
31. Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
32. Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
33. Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
34. Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
35. Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
36. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
37. Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓp regularization and reweighted ℓ1 regularization. Optimization Letters 15(8), 2639–2660 (2021)
38. Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
39. Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
40. Kelley, C.T.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
41. Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
42. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
43. Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
44. Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
45. Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
46. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. 
Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. 
Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. 
Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. 
Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. 
Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. 
Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. 
Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. 
Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. 
Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
  18. Yu, P., Pong, T.K.: Iteratively reweighted ℓ1 algorithms with extrapolation. Computational Optimization and Applications 73(2), 353–386 (2019)
  19. Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE
  20. Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014)
  21. Chen, X., Zhou, W.: Convergence of reweighted ℓ1 minimization algorithms and unique solution of truncated ℓp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010)
  22. Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k²). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences
  23. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
  24. Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987)
  25. Eddy, R.: Extrapolating to the limit of a vector sequence. In: Information Linkage Between Applied Mathematics and Industry. Elsevier (1979)
  26. Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965)
  27. Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
  28. Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
  29. Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
  30. Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
  31. Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
  32. Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
  33. Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
  34. Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
  35. Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
  36. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
  37. Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓp regularization and reweighted ℓ1 regularization. Optimization Letters 15(8), 2639–2660 (2021)
  38. Scieur, D., Bach, F., d'Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
  39. Scieur, D., d'Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
  40. Kelley, C.T.: Iterative Methods for Linear and Nonlinear Equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
  41. Zhang, J., O'Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
  42. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
  43. Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
  44. Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
  45. Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
  46. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. 
Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
  19. Chartrand, R., Yin, W.: Iteratively reweighted algorithms for compressive sensing. In: 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3869–3872 (2008). IEEE
  20. Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014)
  21. Chen, X., Zhou, W.: Convergence of reweighted ℓ1 minimization algorithms and unique solution of truncated ℓp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010)
  22. Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k²). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences
  23. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
  24. Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987)
  25. Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979)
  26. Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965)
  27. Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
  28. Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
  29. Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
  30. Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
  31. Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
  32. Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
  33. Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
  34. Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
  35. Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
  36. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
  37. Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓp regularization and reweighted ℓ1 regularization. Optimization Letters 15(8), 2639–2660 (2021)
  38. Scieur, D., Bach, F., d'Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
  39. Scieur, D., d'Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
  40. Kelley, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
  41. Zhang, J., O'Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
  42. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
  43. Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
  44. Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
  45. Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
  46. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. 
Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. 
[2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. 
[2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
  20. Lu, C., Wei, Y., Lin, Z., Yan, S.: Proximal iteratively reweighted algorithm with multiple splitting for nonconvex sparsity optimization. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28 (2014) Chen and Zhou [2010] Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. 
Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Zhou, W.: Convergence of reweighted l1 minimization algorithms and unique solution of truncated lp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010) Nesterov [1983] Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k2)1superscript𝑘2(1/k^{2})( 1 / italic_k start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences Beck and Teboulle [2009] Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009) Smith et al. [1987] Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. 
Chen, X., Zhou, W.: Convergence of reweighted ℓ1 minimization algorithms and unique solution of truncated ℓp minimization. Department of Applied Mathematics, The Hong Kong Polytechnic University (2010)
Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate O(1/k²). In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987)
Eddy, R.: Extrapolating to the limit of a vector sequence. In: Information Linkage Between Applied Mathematics and Industry. Elsevier (1979)
Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965)
Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓp regularization and reweighted ℓ1 regularization. Optimization Letters 15(8), 2639–2660 (2021)
Scieur, D., Bach, F., d'Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
Scieur, D., d'Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
Kelley, C.T.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
Zhang, J., O'Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. 
Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. 
Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. 
SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. 
Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. 
SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. 
[2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. 
[2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
  22. Nesterov, Y.E.: A method of solving a convex programming problem with convergence rate $O(1/k^2)$. In: Doklady Akademii Nauk, vol. 269, pp. 543–547 (1983). Russian Academy of Sciences
  23. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
  24. Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987)
  25. Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979)
  26. Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965)
  27. Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
  28. Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
  29. Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
  30. Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
  31. Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
  32. Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
  33. Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
  34. Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
  35. Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
  36. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
  37. Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating $\ell_p$ regularization and reweighted $\ell_1$ regularization. Optimization Letters 15(8), 2639–2660 (2021)
  38. Scieur, D., Bach, F., d'Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
  39. Scieur, D., d'Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
  40. Kelley, C.T.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
  41. Zhang, J., O'Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
  42. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
  43. Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
  44. Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
  45. Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
  46. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
  23. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences 2(1), 183–202 (2009)
  24. Smith, D.A., Ford, W.F., Sidi, A.: Extrapolation methods for vector sequences. SIAM Review 29(2), 199–233 (1987) Eddy [1979] Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. 
Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. 
SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. 
Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. 
Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. 
Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. 
Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. 
Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
[2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
  25. Eddy, R.: Extrapolating to the limit of a vector sequence. Information Linkage Between Applied Mathematics and Industry. Elsevier (1979) Anderson [1965] Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM (JACM) 12(4), 547–560 (1965) Walker and Ni [2011] Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011) Toth and Kelley [2015] Toth, A., Kelley, C.T.: Convergence analysis for anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015) Fu et al. [2020] Fu, A., Zhang, J., Boyd, S.: Anderson accelerated douglas–rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020) Poon and Liang [2019] Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019) Chen and Kelley [2019] Chen, X., Kelley, C.T.: Convergence of the ediis algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019) Toth et al. [2017] Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017) Bian et al. [2021] Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021) Bian and Chen [2022] Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. 
  26. Anderson, D.G.: Iterative procedures for nonlinear integral equations. Journal of the ACM 12(4), 547–560 (1965)
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022) Mai and Johansson [2020] Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. 
SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. 
[2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. 
[2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. 
[2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. 
Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
  27. Walker, H.F., Ni, P.: Anderson acceleration for fixed-point iterations. SIAM Journal on Numerical Analysis 49(4), 1715–1735 (2011)
  28. Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
  29. Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
  30. Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
  31. Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
  32. Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
  33. Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
  34. Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
  35. Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
  36. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
  37. Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating $\ell_p$ regularization and reweighted $\ell_1$ regularization. Optimization Letters 15(8), 2639–2660 (2021)
  38. Scieur, D., Bach, F., d'Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
  39. Scieur, D., d'Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
  40. Kelley, C.T.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
  41. Zhang, J., O'Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
  42. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
  43. Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
  44. Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
  45. Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
  46. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
  28. Toth, A., Kelley, C.T.: Convergence analysis for Anderson acceleration. SIAM Journal on Numerical Analysis 53(2), 805–819 (2015)
  29. Fu, A., Zhang, J., Boyd, S.: Anderson accelerated Douglas–Rachford splitting. SIAM Journal on Scientific Computing 42(6), 3560–3583 (2020)
  30. Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
  31. Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
  32. Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
  33. Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
  34. Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
  35. Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
  36. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
  37. Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating $\ell_p$ regularization and reweighted $\ell_1$ regularization. Optimization Letters 15(8), 2639–2660 (2021)
  38. Scieur, D., Bach, F., d'Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
  39. Scieur, D., d'Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
  40. Kelley, C.T.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
  41. Zhang, J., O'Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
  42. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
  43. Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
  44. Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
  45. Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
  46. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
Computational Optimization and Applications 69, 297–324 (2018) Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR Zhang and Hager [2004] Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004) Wang et al. [2021] Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating ℓpsubscriptℓ𝑝\ell_{p}roman_ℓ start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT regularization and reweighted ℓ1subscriptℓ1\ell_{1}roman_ℓ start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT regularization. Optimization Letters 15(8), 2639–2660 (2021) Scieur et al. [2017] Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., Bach, F., d’Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017) Scieur et al. [2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. 
[1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. 
[2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
30. Poon, C., Liang, J.: Trajectory of alternating direction method of multipliers and adaptive acceleration. Advances in Neural Information Processing Systems 32 (2019)
31. Chen, X., Kelley, C.T.: Convergence of the EDIIS algorithm for nonlinear equations. SIAM Journal on Scientific Computing 41(1), 365–379 (2019)
32. Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
33. Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
34. Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
35. Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
36. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
37. Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating $\ell_p$ regularization and reweighted $\ell_1$ regularization. Optimization Letters 15(8), 2639–2660 (2021)
38. Scieur, D., Bach, F., d'Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
39. Scieur, D., d'Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
40. Kelley, C.T.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
41. Zhang, J., O'Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
42. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
43. Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
44. Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
45. Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
46. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
[2016] Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Scieur, D., d’Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances In Neural Information Processing Systems 29 (2016) KELLY [1995] KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) KELLY, C.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995) Zhang et al. [2020] Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. 
[2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Zhang, J., O’Donoghue, B., Boyd, S.: Globally convergent type-i anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020) Grippo et al. [1986] Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for newton’s method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986) Liuzzi et al. [2020] Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020) Mita et al. [2019] Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. 
Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019) Ferreira et al. [2023] Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023) Wen et al. [2018] Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018) Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
  32. Toth, A., Ellis, J.A., Evans, T., Hamilton, S., Kelley, C.T., Pawlowski, R., Slattery, S.: Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM Journal on Scientific Computing 39(5), 47–65 (2017)
  33. Bian, W., Chen, X., Kelley, C.T.: Anderson acceleration for a class of nonsmooth fixed-point problems. SIAM Journal on Scientific Computing 43(5), 1–20 (2021)
  34. Bian, W., Chen, X.: Anderson acceleration for nonsmooth fixed point problems. SIAM Journal on Numerical Analysis 60(5), 2565–2591 (2022)
  35. Mai, V., Johansson, M.: Anderson acceleration of proximal gradient methods. In: International Conference on Machine Learning, pp. 6620–6629 (2020). PMLR
  36. Zhang, H., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization 14(4), 1043–1056 (2004)
  37. Wang, H., Zeng, H., Wang, J., Wu, Q.: Relating $\ell_p$ regularization and reweighted $\ell_1$ regularization. Optimization Letters 15(8), 2639–2660 (2021)
  38. Scieur, D., Bach, F., d'Aspremont, A.: Nonlinear acceleration of stochastic algorithms. Advances in Neural Information Processing Systems 30 (2017)
  39. Scieur, D., d'Aspremont, A., Bach, F.: Regularized nonlinear acceleration. Advances in Neural Information Processing Systems 29 (2016)
  40. Kelley, C.T.: Iterative methods for linear and nonlinear equations. Frontiers in Applied Mathematics 16, 71–78 (1995)
  41. Zhang, J., O'Donoghue, B., Boyd, S.: Globally convergent type-I Anderson acceleration for nonsmooth fixed-point iterations. SIAM Journal on Optimization 30(4), 3170–3197 (2020)
  42. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton's method. SIAM Journal on Numerical Analysis 23(4), 707–716 (1986)
  43. Liuzzi, G., Lucidi, S., Rinaldi, F.: An algorithmic framework based on primitive directions and nonmonotone line searches for black-box optimization problems with integer variables. Mathematical Programming Computation 12, 673–702 (2020)
  44. Mita, K., Fukuda, E.H., Yamashita, N.: Nonmonotone line searches for unconstrained multiobjective optimization problems. Journal of Global Optimization 75, 63–90 (2019)
  45. Ferreira, O., Grapiglia, G., Santos, E., Souza, J.: A subgradient method with non-monotone line search. Computational Optimization and Applications 84(2), 397–420 (2023)
  46. Wen, B., Chen, X., Pong, T.K.: A proximal difference-of-convex algorithm with extrapolation. Computational Optimization and Applications 69, 297–324 (2018)
