
Modular data assimilation for flow prediction (2506.19002v2)

Published 23 Jun 2025 in math.NA and cs.NA

Abstract: This report develops several modular, 2-step realizations (inspired by Kalman filter algorithms) of nudging-based data assimilation $$\text{Step 1:}\quad \frac{\widetilde{v}_{n+1}-v_{n}}{k}+v_{n}\cdot \nabla \widetilde{v}_{n+1}-\nu \Delta \widetilde{v}_{n+1}+\nabla q_{n+1}=f(x),\qquad \nabla \cdot \widetilde{v}_{n+1}=0,$$ $$\text{Step 2:}\quad \frac{v_{n+1}-\widetilde{v}_{n+1}}{k}-\chi I_{H}\left(u(t_{n+1})-v_{n+1}\right)=0.$$ Several variants of this algorithm are developed, and three main results are established. The first is that if $I_{H}^{2}=I_{H}$, then Step 2 can be rewritten as the explicit step $$v_{n+1}=\widetilde{v}_{n+1}+\frac{k\chi }{1+k\chi }\left[I_{H}u(t_{n+1})-I_{H}\widetilde{v}_{n+1}\right].$$ This means Step 2 has the greater stability of an implicit update and the lesser complexity of an explicit analysis step. The second is that the basic result of nudging (that for $H$ small enough and $\chi$ large enough, predictability horizons are infinite) holds for one variant of the modular algorithm. The third is that, for any $H>0$ and any $\chi>0$, one step of the modular algorithm decreases the next step's error and increases (an estimate of) predictability horizons. A method synthesizing assimilation with eddy viscosity models of turbulence is also presented. Numerical tests confirm the effectiveness of the modular assimilation algorithm. The conclusion is that the modular, 2-step method overcomes many algorithmic inadequacies of standard nudging methods while retaining a robust mathematical foundation.
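The key algebraic claim in the abstract (the implicit Step 2 reduces to an explicit update when $I_H$ is idempotent) can be checked numerically. The sketch below is not the paper's code: it uses NumPy, models $I_H$ as piecewise-constant local averaging (a projection, hence $I_H^2 = I_H$), and takes random vectors for the observed state $u(t_{n+1})$ and the intermediate velocity $\widetilde{v}_{n+1}$; the block sizes and parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch: verify that when I_H is idempotent (I_H^2 = I_H), the implicit
# nudging Step 2 coincides with the paper's explicit rewriting.
rng = np.random.default_rng(0)

N, H = 8, 4                       # fine nodes, coarse block size (N % H == 0)
k, chi = 0.1, 50.0                # time step k and nudging parameter chi

# I_H as piecewise-constant averaging over coarse blocks: a projection.
P = np.kron(np.eye(N // H), np.full((H, H), 1.0 / H))
assert np.allclose(P @ P, P)      # idempotency: I_H^2 = I_H

u = rng.standard_normal(N)        # observed true state at t_{n+1}
v_tilde = rng.standard_normal(N)  # intermediate velocity from Step 1

# Implicit Step 2: (I + k*chi*I_H) v_{n+1} = v_tilde + k*chi*I_H u
v_implicit = np.linalg.solve(np.eye(N) + k * chi * P,
                             v_tilde + k * chi * (P @ u))

# Explicit rewriting: v_{n+1} = v_tilde + (k*chi/(1+k*chi)) [I_H u - I_H v_tilde]
v_explicit = v_tilde + (k * chi / (1 + k * chi)) * (P @ u - P @ v_tilde)

assert np.allclose(v_implicit, v_explicit)
```

The agreement of the two updates illustrates why the modular analysis step is attractive: the implicit form's stability is obtained at the cost of only an explicit evaluation, with no linear solve needed in practice.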
