
Optimal Linear Estimation of Functionals

Updated 29 October 2025
  • The paper derives explicit formulas for the optimal linear estimator of a functional from noisy, incomplete observations using Hilbert space projections and spectral analysis.
  • It presents a mean-square error minimization framework that extends classical Wiener theory to accommodate missing data and spectral uncertainty.
  • The robust minimax approach identifies least favorable spectral densities, ensuring reliable estimation under varying noise and signal conditions.

The paper addresses the mean-square optimal linear estimation of the functional $A\xi = \int_{R^s} a(t)\,\xi(-t)\,dt$ for a stationary process $\xi(t)$ observed with noise and missing data. The summary highlights both the spectral-certainty and minimax (robust) approaches, including explicit formulas and the implications for the general problem.

1. Problem Setting

  • Signal: $\xi(t)$, a real stationary process with known or partially known spectral density $f(\lambda)$.
  • Noise: $\eta(t)$, an uncorrelated stationary process with spectral density $g(\lambda)$.
  • Observations: $y(t) = \xi(t) + \eta(t)$ are available for $t \in \mathbb{R} \setminus S$, i.e., with missing values on a set $S$.
  • Functional to estimate: $A\xi = \int_{R^s} a(t)\,\xi(-t)\,dt$, where $a(t)$ is a given function.
  • Goal: Find the linear estimator of $A\xi$ from the observations that minimizes the mean-square error. This generalizes classical Wiener filtering to functionals and missing data.
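A discrete-time simulation can make this observation model concrete. The AR(1) signal, the noise level, the missing block, and the weight function `a` below are all hypothetical illustrations, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete-time stand-in for the continuous-time model: the signal xi
# is a stationary AR(1) process; the noise eta is white and
# uncorrelated with xi.
n = 500
phi, sigma_xi, sigma_eta = 0.8, 1.0, 0.5
xi = np.zeros(n)
for t in range(1, n):
    xi[t] = phi * xi[t - 1] + sigma_xi * rng.standard_normal()
eta = sigma_eta * rng.standard_normal(n)

# Observations y = xi + eta, unavailable on the missing-data set S.
y = xi + eta
S = np.arange(200, 250)          # one missing block, chosen arbitrarily
y_obs = y.copy()
y_obs[S] = np.nan                # the estimator may not use these values

# Target: a linear functional of the signal, A xi = sum_t a(t) xi(t),
# with an exponentially decaying weight a(t) as an example.
a = np.exp(-0.05 * np.arange(n))
A_xi = a @ xi
```

Any estimator in the paper's sense is a linear combination of the finite entries of `y_obs` chosen to approximate `A_xi` in mean square.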

2. Spectral Certainty: Explicit Estimator and Error Formulas

Key Method: Hilbert Space Projection

The problem is solved via Hilbert space projection methods, leveraging the spectral (Fourier) representation of stationary processes.

  • Spectral representation:

$$\xi(t) = \int_{-\infty}^{\infty} e^{i t \lambda}\, Z_\xi(d\lambda), \qquad \eta(t) = \int_{-\infty}^{\infty} e^{i t \lambda}\, Z_\eta(d\lambda)$$

  • Fourier Transform of the functional:

$$A(e^{i\lambda}) = \int_{R^s} a(t)\, e^{-i t \lambda}\, dt$$
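For a concrete weight function this transform can be checked numerically. Here $a(t) = e^{-t}$ on $[0, \infty)$ is a hypothetical choice, for which $A(e^{i\lambda}) = 1/(1 + i\lambda)$ in closed form:

```python
import numpy as np

# Numerical check of A(e^{i lam}) = \int a(t) e^{-i t lam} dt for the
# hypothetical weight a(t) = exp(-t) on [0, inf), truncated at t = 40
# (the discarded tail is of order exp(-40)).
t = np.linspace(0.0, 40.0, 400_001)
dt = t[1] - t[0]
a = np.exp(-t)

def A_of(lam):
    # Trapezoidal quadrature of the Fourier integral.
    integrand = a * np.exp(-1j * t * lam)
    return dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

approx = A_of(2.0)
exact = 1.0 / (1.0 + 2.0j)       # closed form for this choice of a
assert abs(approx - exact) < 1e-6
```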

Spectral Characteristic and Estimator

  • Optimal linear estimator $\widehat{A\xi}$:

$$\widehat{A\xi} = \int_{-\infty}^{\infty} h(e^{i\lambda})\, \big(Z_\xi(d\lambda) + Z_\eta(d\lambda)\big)$$

where the filter $h(e^{i\lambda})$ (the spectral characteristic) is given by:

$$h(e^{i\lambda}) = \frac{A(e^{i\lambda})\, f(\lambda) - C(e^{i\lambda})}{f(\lambda) + g(\lambda)}$$

  • Correction term accounting for missing data:

$$C(e^{i\lambda}) = \sum_{l=1}^{s} \int_{-M_l - N_l}^{-M_l} (B^{-1} R a)(t)\, e^{i t \lambda}\, dt + \int_{0}^{\infty} (B^{-1} R a)(t)\, e^{i t \lambda}\, dt$$

Mean-Square Error:

$$\boxed{\; \Delta(h; f, g) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{|A(e^{i\lambda})\, g(\lambda) + C(e^{i\lambda})|^2}{|f(\lambda) + g(\lambda)|^2}\, f(\lambda)\, d\lambda + \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{|A(e^{i\lambda})\, f(\lambda) - C(e^{i\lambda})|^2}{|f(\lambda) + g(\lambda)|^2}\, g(\lambda)\, d\lambda \;}$$
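In the special case of no missing data ($C \equiv 0$), this error expression can be cross-checked numerically against the standard mean-square error of an arbitrary linear filter, $\Delta = \frac{1}{2\pi}\int (|A - h|^2 f + |h|^2 g)\, d\lambda$. The spectra and weight below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Cross-check of the boxed error formula in the no-missing-data case
# (C = 0) against the direct filter MSE
#   Delta = (1/2pi) \int (|A - h|^2 f + |h|^2 g) dlam.
lam = np.linspace(-200.0, 200.0, 400_001)
dlam = lam[1] - lam[0]

f = 1.0 / (1.0 + lam**2)        # signal spectral density (example)
g = np.full_like(lam, 0.1)      # flat noise spectral density (example)
A = 1.0 / (1.0 + 1j * lam)      # A(e^{i lam}) for a(t) = exp(-t)
C = np.zeros_like(lam)          # no missing observations

h = (A * f - C) / (f + g)       # optimal spectral characteristic

delta_boxed = (dlam / (2 * np.pi)) * np.sum(
    np.abs(A * g + C) ** 2 * f / (f + g) ** 2
    + np.abs(A * f - C) ** 2 * g / (f + g) ** 2
)
delta_direct = (dlam / (2 * np.pi)) * np.sum(
    np.abs(A - h) ** 2 * f + np.abs(h) ** 2 * g
)
assert abs(delta_boxed - delta_direct) < 1e-9
```

With $C = 0$ the two expressions agree term by term: $A - h = A g / (f+g)$, so both reduce to $\frac{1}{2\pi}\int |A|^2 f g / (f+g)\, d\lambda$.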

3. Minimax (Robust) Filtering under Spectral Uncertainty

Motivation and Approach

In practice, the spectral densities $f(\lambda)$ and $g(\lambda)$ are often not known exactly. When they belong to certain admissible sets $D_f$, $D_g$, a minimax approach is used.

  • Objective: Find the estimator minimizing the worst-case mean-square error over all allowed spectral densities.
  • Least Favorable Spectral Densities $(f_0, g_0)$:

$$\Delta(f_0, g_0) = \max_{(f,g)\in D_f \times D_g} \Delta\big(h(f_0, g_0); f, g\big)$$

  • Minimax Spectral Characteristic:

$$h^0(e^{i\lambda}) = h(f_0, g_0)$$

Formulas and Conditions for Finding Least Favorable Densities

The least favorable densities satisfy
$$\Delta\big(h(f_0, g_0); f_0, g_0\big) = \max_{(f,g) \in D_f \times D_g} \Delta\big(h(f_0, g_0); f, g\big),$$
subject to the constraints that define the admissible sets (e.g., $L_1$ or $L_2$ bounds on deviations from nominal spectral densities).
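A brute-force sketch of this minimax search, with $D_f$ and $D_g$ reduced to one-parameter families and $C = 0$ (no missing data) — a deliberate simplification with hypothetical admissible sets, not the paper's construction:

```python
import numpy as np

# Toy minimax search over discretized admissible sets:
#   D_f = {c / (1 + lam^2) : c in [0.5, 1.5]},
#   D_g = {constant d    : d in [0.05, 0.2]}.
# For each candidate pair (f0, g0) we build the filter h0 tuned to it
# and evaluate its worst-case MSE over the whole grid.
lam = np.linspace(-50.0, 50.0, 20_001)
dlam = lam[1] - lam[0]
A = 1.0 / (1.0 + 1j * lam)      # hypothetical functional weight

cs = np.linspace(0.5, 1.5, 6)
ds = np.linspace(0.05, 0.2, 6)
fs = [c / (1.0 + lam**2) for c in cs]
gs = [np.full_like(lam, d) for d in ds]

def mse(h, f, g):
    # Generic filter MSE: (1/2pi) \int (|A - h|^2 f + |h|^2 g) dlam.
    return (dlam / (2 * np.pi)) * np.sum(
        np.abs(A - h) ** 2 * f + np.abs(h) ** 2 * g
    )

best = None
for i, f0 in enumerate(fs):
    for j, g0 in enumerate(gs):
        h0 = A * f0 / (f0 + g0)                 # filter tuned to (f0, g0)
        worst = max(mse(h0, f, g) for f in fs for g in gs)
        if best is None or worst < best[0]:
            best = (worst, i, j)

worst_case, i_star, j_star = best
# For a fixed filter the MSE is increasing in f and g, so in this toy
# setting the least favorable pair is the largest admissible one.
assert (i_star, j_star) == (len(cs) - 1, len(ds) - 1)
```

Here the least favorable pair is simply the pointwise-largest densities; for richer admissible sets (e.g. integral bounds) the saddle point is found analytically in the paper rather than by search.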

4. Implications & Generalizations

  • Extends classical Wiener-Kolmogorov theory to include missing data and observation noise.
  • Detailed treatment of spectral uncertainty helps tailor filters that are robust to model errors.
  • The approach allows computation in broad settings, including block-missing observations and robust filtering against imprecise knowledge of spectra.

5. Summary of Key Formulas

Spectral Characteristic: $h(e^{i\lambda}) = \dfrac{A(e^{i\lambda})\, f(\lambda) - C(e^{i\lambda})}{f(\lambda) + g(\lambda)}$

Mean-Square Error: $\Delta(h; f, g) = \langle Ra, B^{-1} Ra \rangle + \langle Qa, a \rangle$

Minimax Filtering:

Find $(f_0, g_0)$ minimizing the worst-case MSE; use the minimax spectral characteristic $h^0(e^{i\lambda})$ built from these.

Conclusion

The paper provides a robust estimation framework for linear functionals of stationary processes with missing data, extending classical approaches by integrating spectral uncertainty into the filter design. This keeps the estimation error controlled in practical applications where precise spectral information is unattainable.
