Generality of Columnar-Constructive Networks Relative to Standard RNNs

Determine whether the architectural restrictions imposed by Columnar-Constructive networks (learning multiple independent columns in stages with previously learned features frozen) reduce the representational generality of recurrent neural networks, by characterizing the subclass of functions learnable by Columnar-Constructive networks and comparing it to the function class of unconstrained LSTM-based recurrent neural networks.

Background

Columnar-Constructive networks (CCNs) are proposed to make Real-Time Recurrent Learning (RTRL) scalable by limiting the function class through two constraints: independent columnar recurrent features and staged, constructive learning with freezing of previously learned features. These design choices allow unbiased, efficient gradient computation but may restrict the overall expressiveness of the resulting recurrent architecture.
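To make the two constraints concrete, the following is a minimal sketch of the columnar structure and staged freezing described above. All names (`Column`, `CCN`, `add_column`) are hypothetical illustrations, not the authors' implementation, and the training loop itself is omitted: the point is only that columns share no recurrent connections and that adding a new column freezes the earlier ones.

```python
import math
import random

class Column:
    """One independent recurrent column: a single tanh unit with its own
    input and recurrent weights. Columns never read each other's state,
    which is what keeps per-column RTRL gradient computation cheap."""
    def __init__(self, n_inputs, rng):
        self.w_in = [rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        self.w_rec = rng.uniform(-0.5, 0.5)
        self.h = 0.0
        self.frozen = False  # frozen columns are used forward-only

    def step(self, x):
        pre = sum(w * xi for w, xi in zip(self.w_in, x)) + self.w_rec * self.h
        self.h = math.tanh(pre)
        return self.h

class CCN:
    """Hypothetical sketch of a Columnar-Constructive network: columns are
    added in stages, and adding a new column freezes all earlier ones."""
    def __init__(self, n_inputs):
        self.n_inputs = n_inputs
        self.columns = []
        self.rng = random.Random(0)

    def add_column(self):
        for c in self.columns:  # constructive stage: freeze learned features
            c.frozen = True
        self.columns.append(Column(self.n_inputs, self.rng))

    def features(self, x):
        # Each column updates independently; no cross-column recurrence.
        return [c.step(x) for c in self.columns]

net = CCN(n_inputs=2)
net.add_column()  # stage 1: this column would be trained here
net.add_column()  # stage 2: the first column is now frozen
feats = net.features([1.0, 0.5])
```

The open question above asks, in effect, which functions a network of this restricted form can represent that an unconstrained LSTM-based RNN can, given that the frozen columns cannot be revised and no column can condition its state on another's.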

The authors highlight the need for a theoretical understanding of what functions CCNs can learn to assess whether these constraints render CCNs less general than standard recurrent neural networks (e.g., fully connected LSTM-based RNNs trained without CCN restrictions). Establishing this characterization would clarify when CCNs might be insufficient for particular prediction or state-construction tasks.

References

Another open question in this work is to investigate if the restrictions introduced by CCNs make RNNs less general.

Scalable Real-Time Recurrent Learning Using Columnar-Constructive Networks (2302.05326 - Javed et al., 2023) in Conclusions and Future Directions