Mosco Convergence of Functionals
- Mosco convergence is a variational convergence concept defined via dual conditions that balance weak and strong convergence to accurately track functional minimizers.
- It plays a pivotal role in convex optimization by stabilizing minimizers and ensuring strong convergence of resolvents and graph convergence of subdifferentials.
- Applications include analyzing under-relaxed nonexpansive operator cycles in fixed point theory, nonlinear PDEs, and evolutionary variational inequalities.
Mosco convergence is a notion of variational convergence for sequences of functionals on Hilbert or Banach spaces, designed to capture the asymptotic behavior of minimizing sequences and minimizers. It has become central in the analysis of evolutionary variational inequalities, nonlinear partial differential equations, and optimization, particularly in the presence of convexity and nonexpansive operator frameworks.
1. Precise Definition and Framework
Let $H$ be a real Hilbert space and let $f_n$, $f$ be a sequence of functionals and a limiting functional, typically from $H$ to $(-\infty, +\infty]$. Mosco convergence of $f_n$ to $f$ is defined via dual variational conditions:
- Lower bound: For every $x \in H$ and every sequence $(x_n)$ with $x_n \rightharpoonup x$ (weak convergence), $\liminf_{n \to \infty} f_n(x_n) \geq f(x)$.
- Upper bound: For every $x \in H$, there exists a sequence $(x_n)$ with $x_n \to x$ (strong convergence) such that $\limsup_{n \to \infty} f_n(x_n) \leq f(x)$.
Thus, Mosco convergence is a hybrid of the $\liminf$ inequality under weak convergence and the $\limsup$ (recovery) inequality under strong convergence, which optimally balances "stability of minimizers" and "approximation of values". It implies $\Gamma$-convergence with respect to both the weak and the strong topology, and is strictly stronger than either one alone; in finite dimensions, where weak and strong convergence coincide, it reduces to ordinary $\Gamma$-convergence.
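As a concrete illustration of both conditions (a standard translation example, not taken from the cited paper), consider $f_n(x) = \|x - a_n\|^2$ with $a_n \to a$ strongly in $H$. If $x_n \rightharpoonup x$, then $x_n - a_n \rightharpoonup x - a$, and weak lower semicontinuity of the norm yields
$$\liminf_{n \to \infty} \|x_n - a_n\|^2 \;\geq\; \|x - a\|^2,$$
while the constant sequence $x_n = x$ serves as a recovery sequence since $\|x - a_n\|^2 \to \|x - a\|^2$. Hence $f_n$ Mosco converges to $f = \|\cdot - a\|^2$.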
2. Main Properties and Consequences
Mosco convergence is tailored to proper, convex, lower semicontinuous functionals, or to indicator functions of closed convex sets. Key variational principles and results include:
- Stability of minimizers: If $f_n$ Mosco converges to $f$ and $x_n$ is a minimizer of $f_n$, then any weak cluster point of $(x_n)$ minimizes $f$, and the minimal values converge along the corresponding subsequence.
- Closure under convex/monotone operations: Mosco convergence is preserved under addition of continuous quadratic forms, or more generally under strongly continuous perturbations and under convexification.
- Convergence of resolvents and subdifferentials: If $f_n$ Mosco converges to $f$ (all proper, convex, lower semicontinuous), then the resolvents $(\mathrm{Id} + \lambda \partial f_n)^{-1}x$ converge strongly to $(\mathrm{Id} + \lambda \partial f)^{-1}x$ for all $\lambda > 0$ and $x \in H$ (Attouch's theorem, via Minty's parametrization of monotone graphs). This is crucial for evolutionary equations and flows; see the numerical sketch after this list.
- Resolution of singular limits: Mosco convergence is effective for studying limits in evolutionary inclusions with parameter-dependent convex sets or operators.
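A minimal numerical sketch of this resolvent convergence on $H = \mathbb{R}$, using indicator functions of shrinking intervals (an illustrative toy case constructed here, not code from the cited paper; for an indicator $\iota_C$, the resolvent is the metric projection onto $C$):

```python
# Resolvent (prox) convergence under Mosco convergence, toy case on H = R:
# f_n = indicator of C_n = [0, 1 + 1/n] Mosco-converges to the indicator
# of C = [0, 1]; the resolvents are the metric projections onto C_n and C.

def prox_indicator(x, lo, hi):
    """Resolvent of the indicator of [lo, hi]: the metric projection."""
    return max(lo, min(x, hi))

x = 2.0  # query point lying outside every C_n
for n in (1, 10, 100, 1000):
    print(n, prox_indicator(x, 0.0, 1.0 + 1.0 / n))
print("limit:", prox_indicator(x, 0.0, 1.0))
# The projected values 1 + 1/n converge (strongly, trivially in R) to 1.0,
# the projection of x onto the Mosco limit set C = [0, 1].
```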
3. Applications in Nonexpansive Operator Theory
A central context for Mosco convergence is the asymptotic analysis of fixed points and cycles of nonexpansive operators. In (Baillon et al., 2013), the authors analyze the asymptotic behavior of compositions of under-relaxed nonexpansive operators in Hilbert space.
Given nonexpansive maps $T_1, \dots, T_m$ on a closed convex domain and their under-relaxed versions $T_i^{\varepsilon} = (1-\varepsilon)\,\mathrm{Id} + \varepsilon T_i$, the cycles (solutions of the fixed-point equation $x^{\varepsilon} = T_m^{\varepsilon} \cdots T_1^{\varepsilon}\, x^{\varepsilon}$) collapse, as $\varepsilon \downarrow 0$, towards the fixed point set of the averaged operator $\frac{1}{m}\sum_{i=1}^{m} T_i$. The proof relies on properties that are, in the convex setting, underpinned by Mosco convergence: the interplay between weak and strong convergence of the (approximate) minimizers is precisely what Mosco convergence is designed to track.
A schematic link is as follows:
- For indicator functionals $\iota_{C_n}$ of moving convex sets $C_n$, Mosco convergence of $\iota_{C_n}$ to $\iota_C$ is equivalent to $C_n \to C$ in the Mosco sense (i.e., Painlevé–Kuratowski set convergence mixing the weak and strong topologies).
- In projection algorithms (method of alternating projections, under-relaxed cyclic projections), the variational collapse of cycles under small relaxation recovers fixed points of the averaged mapping—the minimizers of the sum-of-squares proximity function $\Phi(x) = \sum_{i=1}^{m} d_{C_i}^2(x)$—by the asymptotic Mosco convergence of associated functionals.
This approach resolves deep conjectures such as De Pierro’s, showing that under a boundedness condition (Assumption H), the periodic projection cycles contract, via Mosco convergence of indicator functions and their Moreau envelopes, to the least-squares solution set (Baillon et al., 2013). A toy numerical illustration follows.
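The following minimal sketch illustrates the collapse phenomenon on $H = \mathbb{R}$ with two disjoint intervals (the sets, relaxation values, and iteration counts are illustrative choices made here, not taken from the paper):

```python
# Under-relaxed cyclic projections onto C1 = [0, 1] and C2 = [2, 3].
# With T_i^eps = (1 - eps) * Id + eps * P_i, the cycle is the fixed point
# of the composition T2^eps o T1^eps. As eps -> 0, the cycle should
# approach the minimizer of d(x, C1)^2 + d(x, C2)^2, namely x = 1.5.

def proj(x, lo, hi):
    """Metric projection of x onto the interval [lo, hi]."""
    return max(lo, min(x, hi))

def cycle_point(eps, iters=100_000):
    """Approximate the fixed point of T2^eps o T1^eps by iteration."""
    x = 0.0
    for _ in range(iters):
        y = (1 - eps) * x + eps * proj(x, 0.0, 1.0)  # apply T1^eps
        x = (1 - eps) * y + eps * proj(y, 2.0, 3.0)  # apply T2^eps
    return x

for eps in (0.5, 0.1, 0.01, 0.001):
    print(f"eps={eps:g}: cycle point ~ {cycle_point(eps):.4f}")
# The printed cycle points approach 1.5 as eps shrinks.
```

For two sets, the fixed points of the average $\tfrac{1}{2}(P_1 + P_2)$ are exactly the minimizers of $d_{C_1}^2 + d_{C_2}^2$, which for these intervals is the single least-squares point $x = 3/2$.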
4. Examples: Geometric and Functional Settings
Standard instances demonstrating the power of Mosco convergence include:
- Varying convex sets: If $C_n$ are closed convex sets in $H$, then $C$ is the Mosco limit precisely when every weak cluster point of sequences $x_n \in C_n$ lies in $C$, and every $x \in C$ is strongly approximable by points $x_n \in C_n$.
- Sum of convex constraints: In feasibility or split problem settings, Mosco convergence tracks passage to the "limiting" feasibility set even under singular limit operations (e.g., vanishing relaxation or constraint removal).
- Convex regularizations: Quadratic or more general continuous convex perturbations preserve Mosco convergence, which is critical in Tikhonov regularization and variational PDE theory; a small numerical sketch follows this list.
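As a finite-dimensional sketch of vanishing quadratic regularization (where Mosco and $\Gamma$-convergence coincide; the matrix data below are arbitrary illustrative choices):

```python
import numpy as np

# Tikhonov regularization: f_a(x) = ||Ax - b||^2 + a * ||x||^2.
# As a -> 0, f_a converges variationally to ||Ax - b||^2, and the unique
# minimizers x_a converge to the minimum-norm least-squares solution A^+ b.

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ np.diag([1.0, 0.1, 0.0])  # rank-deficient
b = rng.standard_normal(5)

x_star = np.linalg.pinv(A) @ b  # minimum-norm least-squares solution
for a in (1.0, 1e-2, 1e-4, 1e-8):
    x_a = np.linalg.solve(A.T @ A + a * np.eye(3), A.T @ b)
    print(f"a={a:g}: ||x_a - A^+ b|| = {np.linalg.norm(x_a - x_star):.2e}")
```

Mosco-type stability guarantees that cluster points of the $x_a$ minimize the limit functional; the quadratic term additionally selects the least-norm minimizer.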
5. Counterexamples and Limitations
Mosco convergence is strictly stronger than $\Gamma$-convergence in either the weak or the strong topology alone, and it is not implied by weak convergence of functionals. For example, in (Baillon et al., 2013, Ex. 2.3–2.4 and 3.8):
- There exist three-set configurations where the fixed-point set of the under-relaxed cyclic composition fails to persist for large relaxation, and cycles do not collapse even as the relaxation parameter vanishes.
- Sequences of convex sets may converge in a weak set topology but fail Mosco convergence if strong approximability is lost—demonstrating the necessity of both dual variational conditions.
These pathologies demonstrate the need for the precise variational topology that Mosco convergence provides; a standard construction of the second type is sketched below.
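For instance (a textbook construction, not taken from the cited paper), let $(e_n)$ be an orthonormal sequence in an infinite-dimensional Hilbert space and set $C_n = \{e_n\}$. Then
$$e_n \rightharpoonup 0, \qquad \text{yet} \qquad \|x_n - 0\| = 1 \quad \text{for every choice } x_n \in C_n,$$
so $0$ arises as a weak limit of points from the $C_n$ but is not strongly approximable: the weak set limit is $\{0\}$ while the strong lower limit is empty, and the indicators $\iota_{C_n}$ admit no Mosco limit.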
6. Connections to Variational Analysis and Optimization
Mosco convergence generalizes and sharpens classical convergence concepts for operators and sets, making it ideal for modern variational analysis and monotone operator theory:
- Key in maximal monotone operator theory: the graphs of maximally monotone operators converge if and only if their resolvents converge strongly (as formalized after this list), underpinning convergence analysis for nonlinear PDEs, evolution equations, and splitting algorithms.
- Directly linked to Trotter–Kato product formulae, showing convergence of discrete/iterative schemes to continuous flows.
- Central in the stability theory for convex minimization and feasibility problems, especially in infinite-dimensional settings where only weak compactness is available and minimizing sequences converge only weakly.
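In symbols (a standard Attouch-type characterization for maximally monotone operators $A_n$, $A$ on a Hilbert space $H$, stated here for orientation):
$$A_n \longrightarrow A \ \text{(graph sense)} \quad \Longleftrightarrow \quad (\mathrm{Id} + \lambda A_n)^{-1}x \;\to\; (\mathrm{Id} + \lambda A)^{-1}x \quad \text{for all } \lambda > 0,\ x \in H,$$
and for subdifferentials, Mosco convergence $f_n \to f$ corresponds to graph convergence $\partial f_n \to \partial f$ together with a normalization condition fixing the additive constants.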
In summary, Mosco convergence is optimal for studying stability of variational problems under general perturbations, especially in convex, monotone, or nonexpansive frameworks, and is essential for the modern foundations of convergence in optimization, PDEs, and fixed-point theory (Baillon et al., 2013).