
Frank-Wolfe Network: An Interpretable Deep Structure for Non-Sparse Coding (1802.10252v4)

Published 28 Feb 2018 in cs.CV

Abstract: The problem of $L_p$-norm constrained coding is to convert a signal into a code that lies inside an $L_p$-ball and most faithfully reconstructs the signal. Previous works, under the name of sparse coding, considered the cases of the $L_0$ and $L_1$ norms. The cases with $p>1$, i.e. the non-sparse coding studied in this paper, remain a challenge. We propose an interpretable deep structure, the Frank-Wolfe Network (F-W Net), whose architecture is inspired by unrolling and truncating the Frank-Wolfe algorithm for solving an $L_p$-norm constrained problem with $p\geq 1$. We show that the Frank-Wolfe solver for the $L_p$-norm constraint leads to a novel closed-form nonlinear unit, which is parameterized by $p$ and termed $pool_p$. The $pool_p$ unit links the conventional pooling, activation, and normalization operations, making F-W Net distinct from existing deep networks that are either heuristically designed or converted from projected gradient descent algorithms. We further show that the hyper-parameter $p$ can be made learnable instead of pre-chosen in F-W Net, which gracefully handles the non-sparse coding problem even when $p$ is unknown. We evaluate F-W Net on an extensive range of simulations as well as the task of handwritten digit recognition, where it exhibits strong learning capability. We then propose a convolutional version of F-W Net and apply it to image denoising and super-resolution tasks, where it consistently demonstrates impressive effectiveness, flexibility, and robustness.
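The abstract refers to the Frank-Wolfe iteration and its closed-form $pool_p$ unit without spelling them out. The following is a minimal NumPy sketch of the classical $L_p$-constrained coding step that F-W Net unrolls; the function names (`lp_lmo`, `frank_wolfe_lp_coding`), the toy dictionary, and all parameter choices are illustrative assumptions rather than the paper's implementation, and the linear-minimization formula is simply the standard Hölder-equality solution over an $L_p$-ball with $p>1$.

```python
import numpy as np

def lp_lmo(grad, radius, p):
    """Closed-form linear minimization oracle over the L_p ball (p > 1):
    argmin_{||s||_p <= radius} <s, grad>.
    This per-coordinate nonlinearity is the kind of operation the paper
    packages into its pool_p unit (names here are illustrative)."""
    q = p / (p - 1.0)                       # dual exponent, 1/p + 1/q = 1
    g = np.abs(grad)
    denom = np.linalg.norm(g, ord=q) ** (q - 1.0)
    if denom == 0.0:
        return np.zeros_like(grad)
    return -radius * np.sign(grad) * g ** (q - 1.0) / denom

def frank_wolfe_lp_coding(D, y, radius=1.0, p=1.5, n_iters=200):
    """Minimal sketch (assumed setup) of L_p-constrained coding,
        min_x 0.5 * ||y - D x||_2^2   s.t.   ||x||_p <= radius,
    solved with the classical Frank-Wolfe iteration that F-W Net unrolls
    and truncates."""
    x = np.zeros(D.shape[1])
    for k in range(n_iters):
        grad = D.T @ (D @ x - y)            # gradient of the quadratic loss
        s = lp_lmo(grad, radius, p)         # extreme point of the L_p ball
        gamma = 2.0 / (k + 2.0)             # standard open-loop step size
        x = (1.0 - gamma) * x + gamma * s   # convex combination stays feasible
    return x

# Toy usage with a random dictionary (illustrative only).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
x_true = lp_lmo(rng.standard_normal(128), radius=1.0, p=1.5)  # a feasible code
y = D @ x_true
x_hat = frank_wolfe_lp_coding(D, y, radius=1.0, p=1.5)
print("reconstruction residual:", np.linalg.norm(D @ x_hat - y))
```

Unrolling a fixed number of such iterations and making $p$, the step sizes, and the dictionary learnable yields the interpretable deep structure the abstract describes.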

Citations (12)

