
Reliable convergence of WARP-LUT training with four-input LUTs

Determine whether gradient-based training of Walsh-Assisted Relaxation for Probabilistic Look-Up Tables (WARP-LUTs) using four-input look-up tables, parameterized via 16 Walsh–Hadamard coefficients per node, converges reliably on standard benchmarks.


Background

The paper introduces WARP-LUTs, a differentiable, Walsh–Hadamard–based parameterization of weightless neural networks whose per-node parameter count grows only exponentially with the number of LUT inputs, in contrast to Differentiable Logic Gate Networks (DLGNs), whose count grows doubly exponentially. This improved scaling suggests that higher-input logic blocks, such as four-input LUTs, could be trained with only 16 trainable parameters per node.
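
For intuition, the sketch below shows one way such a node could look in PyTorch: a four-input LUT represented by 16 trainable Walsh coefficients, evaluated on relaxed inputs in [0, 1] via the multilinear parity basis. The class name WalshLUTNode, the sign mapping 2p - 1, the zero-mean random initialization, and the sigmoid output squashing are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import torch
import torch.nn as nn


class WalshLUTNode(nn.Module):
    """Sketch of a k-input LUT node parameterized by 2**k Walsh coefficients.

    Relaxed inputs in [0, 1] are mapped to expected signs in [-1, 1] and
    combined through the multilinear Walsh (parity) basis over all input
    subsets. For k = 4 this gives 16 trainable parameters per node, versus
    the 2**(2**4) = 65,536 candidate gates a DLGN-style node enumerates.
    """

    def __init__(self, k: int = 4):
        super().__init__()
        self.k = k
        # One trainable Walsh coefficient per subset of the k inputs.
        self.coeffs = nn.Parameter(0.1 * torch.randn(2 ** k))
        # subsets[s, i] == 1 iff input i participates in parity function s.
        subsets = torch.tensor(
            [[(s >> i) & 1 for i in range(k)] for s in range(2 ** k)],
            dtype=torch.float32,
        )
        self.register_buffer("subsets", subsets)

    def forward(self, p: torch.Tensor) -> torch.Tensor:
        # p: (..., k) relaxed inputs in [0, 1] -> expected signs in [-1, 1].
        x = 2.0 * p - 1.0
        # parity[..., s] = product of x_i over inputs i in subset s
        # (empty product = 1 for the constant Walsh function).
        expanded = x.unsqueeze(-2)                                    # (..., 1, k)
        parity = torch.where(self.subsets.bool(), expanded,
                             torch.ones_like(expanded)).prod(dim=-1)  # (..., 2**k)
        # Walsh expansion; sigmoid squashes back to a probability-like output
        # (an assumption of this sketch, not necessarily the paper's choice).
        return torch.sigmoid(parity @ self.coeffs)


node = WalshLUTNode(k=4)   # 16 trainable parameters
p = torch.rand(8, 4)       # batch of 8 relaxed 4-bit inputs
y = node(p)                # shape (8,), values in (0, 1)
```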

Despite promising two-input results, the authors explicitly note uncertainty about whether WARP-LUT training remains reliable when extended to higher-input LUTs on standard benchmarks. Establishing reliable convergence for such models is crucial for leveraging modern FPGA primitives (e.g., LUT-6) and achieving practical deployment in resource-constrained environments.
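
To make the LUT-6 point concrete, the following back-of-the-envelope arithmetic (derived from the scaling claim above, not reproduced from the paper) compares per-node parameter counts as the LUT fan-in k grows:

```python
# WARP-LUTs need 2**k Walsh coefficients per node, while a DLGN-style node
# scores all 2**(2**k) Boolean functions of k inputs.
for k in range(2, 7):
    warp = 2 ** k
    dlgn = 2 ** (2 ** k)
    print(f"k={k}: WARP-LUT {warp:>2} params vs DLGN {dlgn:,} candidate gates")
# k=4 gives 16 vs 65,536 (the "roughly 65,000" quoted below);
# k=6 (FPGA LUT-6) gives 64 vs about 1.8e19.
```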

References

While DLGNs scale doubly exponentially with the number of gate inputs, WARP-LUTs scale only exponentially, making it, in principle, possible to learn four-input logic blocks with only 16 trainable parameters per node instead of roughly 65,000. Whether training such models converges reliably on standard benchmarks remains to be shown.

WARP-LUTs - Walsh-Assisted Relaxation for Probabilistic Look Up Tables (2510.15655 - Gerlach et al., 17 Oct 2025) in Section "Limitations and Future Work"