Non-stationary Anderson acceleration with optimized damping (2202.05295v1)

Published 10 Feb 2022 in math.NA and cs.NA

Abstract: Anderson acceleration (AA) has a long history of use and has attracted strong recent interest because of its potential to dramatically improve the linear convergence of fixed-point iterations. Most authors simply use and analyze the stationary version of Anderson acceleration (sAA) with a constant damping factor or with no damping, and little attention has been paid to non-stationary algorithms. However, damping can be useful, and is sometimes crucial, for simulations in which the underlying fixed-point operator is not globally contractive, and the role of the damping factor is not yet fully understood. In the present work, we consider a non-stationary Anderson acceleration algorithm with optimized damping (AAoptD) that applies one extra, inexpensive optimization in each iteration to further speed up linear and nonlinear iterations. We analyze this procedure and develop an efficient, inexpensive implementation scheme. We also show that, compared with stationary Anderson acceleration with fixed window size sAA(m), optimizing the damping factors amounts to dynamically combining sAA(m) and sAA(1) in each iteration (alternating the window size $m$ is another way of producing non-stationary AA). Moreover, we show through extensive numerical experiments that the proposed non-stationary Anderson acceleration with optimized damping often converges much faster than stationary AA with constant damping or without damping.
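
The abstract describes AA as a windowed least-squares extrapolation of the fixed-point history, blended with the plain fixed-point step through a damping factor $\beta_k$ that is re-chosen every iteration. As a rough illustration only, here is a minimal NumPy sketch of damped, windowed AA; the function name `aa_optimized_damping`, the `beta_grid` parameter, and the brute-force residual-norm search for $\beta_k$ are illustrative assumptions standing in for the paper's inexpensive optimization, not the authors' implementation.

```python
import numpy as np

def aa_optimized_damping(g, x0, m=3, max_iter=100, tol=1e-10,
                         beta_grid=np.linspace(0.1, 1.0, 10)):
    """Windowed Anderson acceleration with per-iteration damping.

    The weights alpha solve the standard constrained least-squares
    problem on the residual history; the damping factor beta_k is
    then re-chosen each iteration by a brute-force search over
    beta_grid (a stand-in for the paper's cheaper optimized-damping
    step, which this sketch does not reproduce).
    """
    x = np.asarray(x0, dtype=float)
    X, G = [x], [g(x)]                       # iterate / g-value histories
    for _ in range(max_iter):
        # Residual history f_j = g(x_j) - x_j, one column per iterate.
        F = np.column_stack([gj - xj for gj, xj in zip(G, X)])
        if np.linalg.norm(F[:, -1]) < tol:
            return x
        n = F.shape[1]
        if n == 1:
            alpha = np.array([1.0])
        else:
            # min ||F @ alpha||_2  s.t.  sum(alpha) = 1, rewritten as an
            # unconstrained least-squares problem in residual differences.
            dF = np.diff(F, axis=1)
            gamma, *_ = np.linalg.lstsq(dF, F[:, -1], rcond=None)
            alpha = np.empty(n)
            alpha[0] = gamma[0]
            alpha[1:-1] = np.diff(gamma)
            alpha[-1] = 1.0 - gamma[-1]
        x_bar = np.column_stack(X) @ alpha   # weighted average of iterates
        g_bar = np.column_stack(G) @ alpha   # weighted average of g-values
        # Non-stationary step: pick the beta_k whose damped update
        # x_new = (1 - beta) * x_bar + beta * g_bar has the smallest
        # true residual norm ||g(x_new) - x_new||.
        best = None
        for beta in beta_grid:
            x_new = (1.0 - beta) * x_bar + beta * g_bar
            g_new = g(x_new)
            r = np.linalg.norm(g_new - x_new)
            if best is None or r < best[0]:
                best = (r, x_new, g_new)
        _, x, gx = best
        X.append(x); G.append(gx)
        if len(X) > m + 1:                   # keep a window of m+1 entries
            X.pop(0); G.pop(0)
    return x
```

For example, `aa_optimized_damping(np.cos, np.array([1.0]))` drives the iteration toward the fixed point of $\cos(x) \approx 0.739$, and passing `beta_grid=np.array([1.0])` recovers undamped sAA(m) as a special case, which is the stationary baseline the abstract compares against.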

Citations (10)
