Reliable convergence of WARP-LUT training with four-input LUTs
Determine whether gradient-based training of WARP-LUTs (Walsh-Assisted Relaxation for Probabilistic Look-Up Tables) built from four-input look-up tables, each parameterized by 16 Walsh–Hadamard coefficients per node, converges reliably on standard benchmarks.
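The parameter count follows from the Walsh–Hadamard (parity) basis: a four-input Boolean function needs one coefficient per subset of its inputs, i.e. 2^4 = 16, rather than one weight per truth table, i.e. 2^(2^4) = 65,536. Below is a minimal PyTorch sketch of one such relaxed node, assuming the relaxation maps input probabilities to ±1 expectations and that inputs are treated as independent so the expectation of each parity factorizes; the class name `WarpLUTNode` and the output mapping are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class WarpLUTNode(nn.Module):
    """Illustrative relaxed k-input LUT with 2**k Walsh-Hadamard
    coefficients (16 for k=4), versus the 2**(2**4) = 65,536
    per-gate weights a DLGN-style four-input node would require."""

    def __init__(self, k: int = 4):
        super().__init__()
        self.k = k
        # One trainable coefficient per parity chi_S, S a subset of inputs.
        self.coeffs = nn.Parameter(0.1 * torch.randn(2 ** k))

    def forward(self, p: torch.Tensor) -> torch.Tensor:
        # p: (..., k) input probabilities in [0, 1].
        a = 2.0 * p - 1.0  # expectations of {-1, +1}-valued inputs
        parities = []
        for mask in range(2 ** self.k):
            idx = [i for i in range(self.k) if (mask >> i) & 1]
            if idx:
                # Assumed independence: E[chi_S(x)] = prod_{i in S} E[x_i].
                parities.append(a[..., idx].prod(dim=-1))
            else:
                parities.append(torch.ones_like(a[..., 0]))  # empty parity
        chi = torch.stack(parities, dim=-1)        # (..., 2**k)
        out = (chi * self.coeffs).sum(dim=-1)      # expected +/-1 output
        # Map back to a probability; clamp keeps the relaxation valid.
        return ((out + 1.0) / 2.0).clamp(0.0, 1.0)

# Usage: a batch of 8 four-input probability vectors -> 8 output probabilities.
node = WarpLUTNode()
print(node(torch.rand(8, 4)).shape)  # torch.Size([8])
```

Because the output is differentiable in both the coefficients and the input probabilities, such nodes can be stacked and trained end to end; whether that training converges reliably is exactly the open question posed above.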
References
While DLGNs scale doubly exponentially with the number of gate inputs, WARP-LUTs scale only exponentially, making it possible, in principle, to learn four-input logic blocks with only 16 trainable parameters per node instead of roughly 65,000. Whether training such models converges reliably on standard benchmarks remains to be shown.
— WARP-LUTs - Walsh-Assisted Relaxation for Probabilistic Look Up Tables
(2510.15655 - Gerlach et al., 17 Oct 2025) in Section "Limitations and Future Work"