Untangling Lariats: Subgradient Following of Variationally Penalized Objectives
Abstract: We describe an apparatus for subgradient following of the optimum of convex problems with variational penalties. In this setting, we receive a sequence $y_1,\ldots,y_n$ and seek a smooth sequence $x_1,\ldots,x_n$. The smooth sequence must attain the minimum Bregman divergence to the input sequence subject to additive variational penalties of the general form $\sum_i g_i(x_{i+1}-x_i)$. We derive known algorithms, such as the fused lasso and isotonic regression, as special cases of our approach. Our approach also facilitates new variational penalties such as non-smooth barrier functions. We then derive a novel lattice-based procedure for subgradient following of variational penalties characterized through the output of arbitrary convolutional filters. This paradigm yields efficient solvers for high-order filtering problems of temporal sequences in which sparse discrete derivatives, such as acceleration and jerk, are desirable. We also introduce and analyze new multivariate problems in which $\mathbf{x}_i,\mathbf{y}_i\in\mathbb{R}^d$ with variational penalties that depend on $\|\mathbf{x}_{i+1}-\mathbf{x}_i\|$. The norms we consider are $\ell_2$ and $\ell_\infty$, which promote group sparsity.
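The abstract notes that isotonic regression arises as a special case of this variational framework (taking each $g_i$ to be the indicator of a nonnegative difference $x_{i+1}-x_i\ge 0$ under a squared-error divergence). As a point of reference, here is a minimal sketch of the classical pool-adjacent-violators solver for that special case; this is an illustration of the baseline problem, not the paper's subgradient-following or lattice-based procedure:

```python
import numpy as np

def isotonic_regression(y):
    """Pool Adjacent Violators: minimize sum_i (x_i - y_i)^2
    subject to x_1 <= x_2 <= ... <= x_n."""
    y = np.asarray(y, dtype=float)
    means, sizes = [], []  # each entry is one constant block (mean, length)
    for v in y:
        means.append(v)
        sizes.append(1)
        # Merge adjacent blocks while the monotonicity constraint is violated.
        while len(means) > 1 and means[-2] > means[-1]:
            m2, s2 = means.pop(), sizes.pop()
            m1, s1 = means.pop(), sizes.pop()
            s = s1 + s2
            means.append((m1 * s1 + m2 * s2) / s)  # weighted mean of the merge
            sizes.append(s)
    # Expand the constant blocks back into a full-length sequence.
    return np.concatenate([np.full(s, m) for m, s in zip(means, sizes)])
```

For example, `isotonic_regression([3, 1, 2])` pools all three points into a single block with mean `2`, the closest nondecreasing sequence in squared error.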