Learning the climate of dynamical systems with state-space systems (2512.15530v1)
Abstract: State-space systems encompass a broad class of algorithms used for modeling and forecasting time series. For such systems to be effective, two objectives must be met: (i) accurate point forecasts of the time series must be produced, and (ii) the long-term statistical behaviour of the underlying data-generating process must be replicated. The latter objective, often referred to as learning the climate, is closely related to the task of producing accurate distribution forecasts. Empirical evidence shows that distribution forecasts are far more stable than point forecasts, which are sensitive to initial conditions. In this work, we rigorously study this phenomenon for state-space systems. The main result shows that, if the underlying data-generating process is structurally stable and possesses a mixing or an attracting measure, then a sufficiently regular initial probability distribution remains close to the true future distribution at arbitrarily long time horizons when forecast by a $C^1$-close state-space proxy. Thus, under these conditions, learning the climate of a dynamical process with a universal family of state-space systems is feasible with arbitrarily high accuracy.
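The abstract's central contrast, that point forecasts from a nearby proxy model diverge quickly while the long-run distribution (the climate) stays close, can be illustrated numerically. The sketch below is not taken from the paper: it uses the logistic map as a stand-in data-generating process and a slightly perturbed parameter as the "$C^1$-close" proxy, and all parameter and horizon choices are assumptions made purely for illustration.

```python
# Minimal sketch (assumed setup, not the paper's construction): compare
# point-forecast error and climate error for a chaotic map and a nearby proxy.
import numpy as np

def logistic(x, r):
    # One step of the logistic map x -> r * x * (1 - x).
    return r * x * (1.0 - x)

def trajectory(x0, r, n):
    # Iterate the map n times from initial condition x0.
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = logistic(x, r)
        xs[i] = x
    return xs

r_true = 3.9       # "true" data-generating parameter (assumed)
r_proxy = 3.9001   # slightly perturbed proxy model (assumed)
x0 = 0.2

# Point forecasts: the true and proxy trajectories separate within a few
# dozen steps despite the tiny parameter perturbation.
n_short = 50
true_short = trajectory(x0, r_true, n_short)
proxy_short = trajectory(x0, r_proxy, n_short)
print("max pointwise gap over 50 steps:", np.max(np.abs(true_short - proxy_short)))

# Climate: compare long-run empirical distributions instead of trajectories.
n_long = 200_000
true_long = trajectory(x0, r_true, n_long)
proxy_long = trajectory(x0, r_proxy, n_long)
bins = np.linspace(0.0, 1.0, 51)
p, _ = np.histogram(true_long, bins=bins, density=True)
q, _ = np.histogram(proxy_long, bins=bins, density=True)
# Total-variation-style distance between the two empirical densities.
tv = 0.5 * np.sum(np.abs(p - q)) * (bins[1] - bins[0])
print("approx. distance between empirical climates:", tv)
```

A more faithful experiment in the paper's setting would replace the logistic map with a structurally stable system possessing a mixing or attracting measure, and compare distributions with a metric on probability measures (e.g. Wasserstein) rather than histogram differences; the sketch only conveys the qualitative gap between the two kinds of error.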