Validity of convex combinations of update rules from different majorizers

Establish whether, in the majorization-minimization (MM) framework for Sparse Bayesian Learning, a convex combination of update rules derived from different convex majorizers is itself a valid update rule that guarantees descent, beyond the specific sufficient conditions provided in the paper.

Background

The authors prove that when multiple update rules minimize the same convex majorizer, any convex combination of those updates remains a valid update rule that ensures descent. They then consider combining updates from different majorizers and provide sufficient conditions (convexity of each majorizer and convexity of their pointwise minimum) under which the convex combination remains valid.
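The single-majorizer case admits a short descent argument. The following sketch uses illustrative notation (a generic objective f, majorizer g, and iterate \theta^t), not necessarily the paper's:

```latex
% Majorization: g upper-bounds f and is tangent at the current iterate \theta^t.
\begin{aligned}
&g(\theta \mid \theta^t) \ge f(\theta)\ \ \forall \theta,
  \qquad g(\theta^t \mid \theta^t) = f(\theta^t),\\
&\theta_1,\,\theta_2 \in \arg\min_\theta g(\theta \mid \theta^t),
  \qquad \theta_\lambda = \lambda \theta_1 + (1-\lambda)\theta_2,\ \ \lambda \in [0,1].\\
% Convexity of g makes its minimizer set convex:
&g(\theta_\lambda \mid \theta^t)
  \le \lambda\, g(\theta_1 \mid \theta^t) + (1-\lambda)\, g(\theta_2 \mid \theta^t)
  = \min_\theta g(\theta \mid \theta^t),\\
% Descent then follows from majorization and tangency:
&f(\theta_\lambda) \le g(\theta_\lambda \mid \theta^t)
  \le g(\theta^t \mid \theta^t) = f(\theta^t).
\end{aligned}
```

The key step is that convexity of g makes its set of minimizers convex, so the combined update is itself a minimizer of the common majorizer.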

However, they explicitly state that the general validity of such convex combinations across different majorizers is still an open question, motivating further theoretical work.
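A sketch of why the paper's sufficient conditions suffice, again in illustrative notation: with two convex majorizers g_1, g_2 of f (each tangent at \theta^t) whose pointwise minimum h is also convex,

```latex
\begin{aligned}
&h(\theta) = \min\{\,g_1(\theta \mid \theta^t),\, g_2(\theta \mid \theta^t)\,\},
  \qquad f(\theta) \le h(\theta)\ \ \forall \theta,\\
&\theta_i \in \arg\min_\theta g_i(\theta \mid \theta^t)\ \ (i = 1, 2),
  \qquad \theta_\lambda = \lambda \theta_1 + (1-\lambda)\theta_2,\\
% Convexity of h (the assumed sufficient condition) gives:
&h(\theta_\lambda) \le \lambda\, h(\theta_1) + (1-\lambda)\, h(\theta_2)
  \le \lambda\, g_1(\theta_1 \mid \theta^t) + (1-\lambda)\, g_2(\theta_2 \mid \theta^t)\\
% Each \theta_i minimizes g_i, and g_i(\theta^t \mid \theta^t) = f(\theta^t):
&\phantom{h(\theta_\lambda)}
  \le \lambda\, g_1(\theta^t \mid \theta^t) + (1-\lambda)\, g_2(\theta^t \mid \theta^t)
  = f(\theta^t),\\
&\text{hence } f(\theta_\lambda) \le h(\theta_\lambda) \le f(\theta^t).
\end{aligned}
```

Without convexity of h, the first inequality in the chain is unavailable, which is precisely where the general argument breaks down and why the question remains open.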

References

However, the validity of the convex combination of update rules corresponding to different majorizers is still open.

Sparse Bayesian Learning Algorithms Revisited: From Learning Majorizers to Structured Algorithmic Learning using Neural Networks  (2604.02513 - Balaji et al., 2 Apr 2026) in Section 4.1, Convex Combination of Update Rules (immediately before Theorem 2)