
Mosco Convergence of Functionals

Updated 21 November 2025
  • Mosco convergence is a variational convergence concept defined via dual conditions that pair weak convergence (for the lower bound) with strong convergence (for the upper bound), so as to track minimizers of functionals accurately.
  • It plays a pivotal role in convex optimization, stabilizing minimizers and ensuring strong convergence of resolvents and subdifferentials.
  • Applications include the analysis of under-relaxed cycles of nonexpansive operators in fixed point theory, nonlinear PDEs, and evolutionary variational inequalities.

Mosco convergence is a notion of variational convergence for sequences of functionals on Hilbert or Banach spaces, designed to capture the asymptotic behavior of minimum values and minimizers. It has become central to the analysis of evolutionary variational inequalities, nonlinear partial differential equations, and optimization, particularly in convex and nonexpansive operator frameworks.

1. Precise Definition and Framework

Let $H$ be a real Hilbert space and let $(F_n)_{n\in\mathbb N}$ and $F$ be functionals from $H$ to $(-\infty,+\infty]$. Mosco convergence is defined via two dual variational conditions:

  • Lower bound: For every $x\in H$ and every sequence $(x_n)$ with $x_n \rightharpoonup x$ (weak convergence),

$$\liminf_{n\to\infty} F_n(x_n) \geq F(x).$$

  • Upper bound: For every $x\in H$, there exists a sequence $x_n \to x$ (strong convergence) such that

$$\limsup_{n\to\infty} F_n(x_n) \leq F(x).$$

Thus, Mosco convergence is a hybrid: the lim inf inequality is imposed along weakly convergent sequences and the lim sup inequality along strongly convergent ones, a combination that balances "stability of minimizers" against "approximation of values". It lies strictly between the two classical notions: stronger than weak $\Gamma$-convergence, weaker than strong $\Gamma$-convergence.
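
The gap between the two $\Gamma$-limits can be seen in a standard example, worked out here for concreteness rather than taken from the cited source. Let $H=\ell^2$ with orthonormal basis $(e_n)$ and $F_n(x)=\|x-v_n\|^2$. If $v_n \to v$ strongly, then $F_n \to_M F$ with $F(x)=\|x-v\|^2$: weak lower semicontinuity of the norm gives the lower bound (since $x_n \rightharpoonup x$ implies $x_n - v_n \rightharpoonup x - v$), and the constant sequence $x_n = x$ attains the upper bound. If instead $v_n = e_n$, so that $v_n \rightharpoonup 0$ but $v_n \not\to 0$, no Mosco limit exists: for any strongly convergent sequence $x_n \to x$,

$$\|x_n - e_n\|^2 = \|x_n\|^2 - 2\langle x_n, e_n\rangle + 1 \longrightarrow \|x\|^2 + 1,$$

so the strong $\Gamma$-limit is $\|x\|^2 + 1$, while the weak $\Gamma$-limit is $\|x\|^2$ (take $x_n = x + e_n \rightharpoonup x$); no single functional satisfies both bounds.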

2. Main Properties and Consequences

Mosco convergence is tailored to proper convex lower semicontinuous functionals and to indicator functions of closed convex sets. Key variational principles and results include:

  • Stability of minimizers: If $(F_n)$ Mosco converges to $F$ and $x_n$ is a minimizer of $F_n$, then any weak cluster point of $(x_n)$ minimizes $F$.
  • Closure under convex/monotone operations: Mosco convergence is preserved under addition of continuous quadratic forms and, more generally, under strongly continuous perturbations and under convexification.
  • Convergence of resolvents and subdifferentials: If $F_n \to_M F$, then the resolvents $(I+\lambda\partial F_n)^{-1}$ converge strongly (pointwise) to $(I+\lambda\partial F)^{-1}$ for every $\lambda>0$ (Attouch's theorem); a worked one-line instance follows this list. This is crucial for evolution equations and gradient flows.
  • Resolution of singular limits: Mosco convergence is effective for studying limits of evolutionary inclusions with parameter-dependent convex sets or operators.
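
A minimal worked instance of the resolvent statement, ours rather than the source's: take $F_n(x)=\frac{a_n}{2}\|x\|^2$ with $a_n \to a > 0$, so that $F_n \to_M F$ for $F(x)=\frac{a}{2}\|x\|^2$. Here $\partial F_n(x) = a_n x$, and the resolvents are available in closed form:

$$(I+\lambda\partial F_n)^{-1}x = \frac{x}{1+\lambda a_n} \;\longrightarrow\; \frac{x}{1+\lambda a} = (I+\lambda\partial F)^{-1}x \qquad (x\in H,\ \lambda>0),$$

matching the predicted strong convergence. Equivalently, the Moreau envelopes $e_\lambda F_n(x)=\inf_y\left(F_n(y)+\frac{1}{2\lambda}\|x-y\|^2\right)$ converge pointwise; for proper convex lower semicontinuous functionals, pointwise convergence of Moreau envelopes characterizes Mosco convergence.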

3. Applications in Nonexpansive Operator Theory

A central context for Mosco convergence is the asymptotic analysis of fixed points and cycles of nonexpansive operators. In (Baillon et al., 2013), Baillon, Combettes, and Cominetti analyze the asymptotic behavior of compositions of under-relaxed nonexpansive operators in Hilbert space.

Given nonexpansive maps $T_1,\dots,T_m: D \to D$ on a closed convex domain, form the under-relaxed operators $T_i^\lambda = (1-\lambda)\,\mathrm{Id} + \lambda T_i$ and their composition $R_\lambda$. The cycles $(x_i^\lambda)$, i.e., the solutions of $x_i^\lambda = T_i^\lambda x_{i-1}^\lambda$ (indices understood cyclically), collapse as $\lambda\to0$ toward the fixed point set of the averaged operator $\overline{T} = \frac1m\sum_i T_i$. The proof relies on properties that are, in the convex setting, underpinned by Mosco convergence: the interplay between weak and strong convergence of (approximate) minimizers is precisely what Mosco convergence is designed to track.

A schematic link is as follows:

  • For indicator functions of moving convex sets $C_n$, Mosco convergence $I_{C_n} \to_M I_C$ of the functionals is equivalent to $C_n \to C$ in the Mosco sense (Painlevé–Kuratowski convergence, with the weak topology for upper limits and the strong topology for lower limits).
  • In projection algorithms (method of alternating projections, under-relaxed cyclic projections), the variational collapse of cycles under small relaxation recovers the fixed points of the averaged mapping, i.e., the minimizers of the mean squared distance $\Phi(x) = \frac{1}{2m}\sum_i d_{C_i}^2(x)$, by the asymptotic Mosco convergence of the associated functionals.

This approach resolves deep conjectures such as De Pierro's, showing that under a boundedness condition (Assumption H), the periodic projection cycles contract, via Mosco convergence of indicator functions and their Moreau envelopes, to the least-squares solution set (Baillon et al., 2013). The collapse can be observed numerically, as in the sketch below.
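
The following minimal numerical sketch is ours, not an example from the paper: three lines in $\mathbb{R}^2$ with empty intersection, projected onto cyclically with relaxation $\lambda$. As $\lambda$ shrinks, the computed cycle point approaches the minimizer of $\Phi$.

```python
# Minimal sketch (illustrative setup, not from the paper): under-relaxed
# cyclic projections onto three lines in R^2 with no common point.
# As lam -> 0, the cycle collapses to argmin Phi(x) = (1/2m) sum_i dist(x, C_i)^2.
import numpy as np

# Lines C_i = {x : <a_i, x> = b_i} with unit normals a_i (rows of A).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [np.sqrt(0.5), np.sqrt(0.5)]])
b = np.array([0.0, 0.0, 1.0])  # (0,0) lies on the first two lines but not the third

def project(i, x):
    """Orthogonal projection of x onto the line C_i."""
    return x - (A[i] @ x - b[i]) * A[i]

def cycle_point(lam, sweeps=50_000):
    """Iterate the under-relaxed cyclic composition R_lam until it settles."""
    x = np.zeros(2)
    for _ in range(sweeps):
        for i in range(3):
            x = (1.0 - lam) * x + lam * project(i, x)  # x <- T_i^lam x
    return x

# Least-squares point: grad Phi = 0 yields the normal equations (A^T A) x = A^T b.
x_star = np.linalg.solve(A.T @ A, A.T @ b)

for lam in (0.5, 0.1, 0.01):
    print(f"lam={lam:5.2f}  cycle point={cycle_point(lam)}  least squares={x_star}")
```

For moderate $\lambda$ the cycle point differs visibly from $x^\star$; as $\lambda$ decreases, the printed points drift toward the least-squares solution, which is the behavior the Mosco-convergence argument predicts.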

4. Examples: Geometric and Functional Settings

Standard instances demonstrating the power of Mosco convergence include:

  • Varying convex sets: If $C_n$ are closed convex sets in $H$, the Mosco limit $C$ is characterized by two conditions: every weak cluster point of sequences $x_n \in C_n$ lies in $C$, and every $x \in C$ is the strong limit of some sequence $x_n \in C_n$ (a halfspace instance is worked out after this list).
  • Sums of convex constraints: In feasibility or split-problem settings, Mosco convergence tracks the passage to the "limiting" feasibility set even under singular limit operations (e.g., vanishing relaxation or constraint removal).
  • Convex regularizations: Quadratic and more general $L^2$-perturbations preserve Mosco convergence, which is critical in Tikhonov regularization and variational PDE theory.
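
A quick instance of the first bullet, worked here for illustration: fix $a \in H \setminus \{0\}$ and let $C_n = \{x : \langle a, x\rangle \le 1/n\}$. Then $C_n \to C = \{x : \langle a, x\rangle \le 0\}$ in the Mosco sense. Indeed, if $x_n \in C_n$ and $x_n \rightharpoonup x$, then $\langle a, x\rangle = \lim_n \langle a, x_n\rangle \le \lim_n 1/n = 0$, so $x \in C$; conversely, every $x \in C$ satisfies $\langle a, x\rangle \le 0 \le 1/n$, so the constant sequence $x_n = x \in C_n$ provides strong approximability.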

5. Counterexamples and Limitations

Mosco convergence is strictly weaker than strong $\Gamma$-convergence and is not implied by weak convergence of functionals. For example, in [(Baillon et al., 2013), Ex. 2.3–2.4, 3.8]:

  • There exist three-set configurations in $\mathbb{R}^2$ for which the fixed-point set of the under-relaxed cyclic composition fails to persist for large relaxation, and the cycles do not collapse even as the relaxation parameter vanishes.
  • A sequence of convex sets may converge in a weak set topology yet fail to Mosco converge when strong approximability is lost, demonstrating that both dual variational conditions are needed (a concrete instance follows this list).
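
A concrete instance of the second point, ours rather than one of the cited examples: in $H = \ell^2$ with orthonormal basis $(e_n)$, let $C_n = \{e_n\}$. Since $e_n \rightharpoonup 0$, the weak upper limit of the sets is $\{0\}$; but $\|e_n\| = 1$ for all $n$, so no sequence $x_n \in C_n$ converges strongly to $0$, and the strong lower limit is empty. No set $C$ can satisfy $\{0\} \subseteq C \subseteq \emptyset$, so the $C_n$ have no Mosco limit, even though they converge weakly to the singleton $\{0\}$.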

These pathologies underscore the necessity of the precise variational topology that Mosco convergence provides.

6. Connections to Variational Analysis and Optimization

Mosco convergence generalizes and sharpens classical convergence concepts for operators and sets, making it ideal for modern variational analysis and monotone operator theory:

  • Key to maximal monotone operator theory: the graphs of maximal monotone operators converge if and only if their resolvents converge pointwise strongly (see the display after this list), underpinning convergence analysis for nonlinear PDEs, evolution equations, and splitting algorithms.
  • Directly linked to Trotter–Kato product formulas, which establish convergence of discrete/iterative schemes to continuous flows.
  • Central to the stability theory of convex minimization and feasibility problems, especially in infinite-dimensional settings, where compactness is unavailable and minimizing sequences typically converge only weakly.
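
In symbols, and recalled here as a standard statement rather than quoted from the source: for maximal monotone operators $A_n, A$ on $H$,

$$\mathrm{gph}\,A_n \longrightarrow \mathrm{gph}\,A \quad\Longleftrightarrow\quad (I+\lambda A_n)^{-1}x \to (I+\lambda A)^{-1}x \ \text{ for all } x \in H,\ \lambda > 0,$$

and when $A_n = \partial F_n$ for proper convex lower semicontinuous $F_n$, this is equivalent, up to a normalization of function values, to $F_n \to_M F$ (Attouch's theorem).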

In summary, Mosco convergence is a natural, and in many respects optimal, framework for studying the stability of variational problems under general perturbations, especially in convex, monotone, or nonexpansive settings, and it underlies the modern foundations of convergence analysis in optimization, PDEs, and fixed-point theory (Baillon et al., 2013).

References

  • Baillon, J.-B., Combettes, P. L., & Cominetti, R. (2013). Asymptotic behavior of compositions of under-relaxed nonexpansive operators.
