Generalization of ReSU Training to Deep Architectures

Determine whether self-supervised training of Rectified Spectral Unit (ReSU) networks based on past–future canonical correlation analysis extends beyond the demonstrated two-layer architecture to deeper networks that reliably learn progressively more complex hierarchical features.

Background

The authors demonstrate that a two-layer ReSU network, trained in a self-supervised manner on natural stimuli, reproduces key physiological and anatomical properties of the Drosophila motion detection pathway. They suggest stacking ReSU layers as a path to deep, biologically plausible feature learning.
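To make the training objective concrete, the sketch below computes a linear canonical correlation analysis between past and future windows of a toy temporally smoothed stimulus, then rectifies the canonical projections of the past to form one layer of units. This is a minimal illustration under stated assumptions: the window length, the toy stimulus, and the use of plain ReLU rectification are choices made here for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy temporally correlated stimulus: moving-average-smoothed noise.
T = 5000
x = np.convolve(rng.standard_normal(T + 50), np.ones(10) / 10, mode="valid")[:T]

# Past and future windows around each time step (window length w is an
# illustrative choice, not the paper's).
w = 8
past = np.stack([x[t - w:t] for t in range(w, T - w)])    # shape (N, w)
future = np.stack([x[t:t + w] for t in range(w, T - w)])  # shape (N, w)

def cca(X, Y, k, eps=1e-6):
    """Linear CCA via SVD of the whitened cross-covariance matrix."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten with inverse Cholesky factors, then SVD the whitened
    # cross-covariance; singular values are the canonical correlations.
    iLx = np.linalg.inv(np.linalg.cholesky(Cxx))
    iLy = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, _ = np.linalg.svd(iLx @ Cxy @ iLy.T)
    Wx = iLx.T @ U[:, :k]  # projects past windows onto canonical components
    return Wx, s[:k]

Wx, corrs = cca(past, future, k=4)

# ReSU-style units: rectified canonical projections of the past window.
# (Plain ReLU stands in for the spectral rectification; an assumption.)
units = np.maximum(past @ Wx, 0.0)
```

In a stacked version, `units` would serve as the input stimulus for the next layer, with the same past–future objective applied again; whether that recursion keeps yielding useful features is precisely the open question this section poses.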

Despite a reported extension to three layers in follow-up work, the paper explicitly states that it remains unresolved whether the approach generally scales to deeper architectures, highlighting a key open direction for theoretical and empirical validation.

References

"We demonstrated self-supervised learning of non-trivial features in a two-layer ReSU network, but whether this approach generalizes to deeper networks remains an open question."