
Do Columnar-Constructive Networks Share RCC Representational Limitations?

Determine whether Columnar-Constructive Networks (CCNs) suffer from representational limitations analogous to those proven for Recurrent Cascade-Correlation (RCC) networks, specifically the inability to learn certain finite state automata with linear threshold and sigmoid activations. Assess whether the LSTM-based CCN architecture and its parallel feature learning circumvent these limitations.


Background

Prior work has shown that Recurrent Cascade-Correlation (RCC) networks, a related constructive approach, cannot represent certain finite state automata when using linear threshold or sigmoid activations (Kremer, 1995). Because CCNs are also constructive but employ LSTM units and learn multiple features in parallel, it is unclear whether these known RCC limitations apply to them.
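For intuition, the following sketch (our own illustration, not from the paper) shows the kind of automaton at issue. Kremer's argument, as we understand it, concerns automata whose state must oscillate with a long period under a constant input, for example a three-state cycle; the function names and structure here are hypothetical.

```python
# Illustrative sketch: a finite state automaton whose state cycles with
# period 3 under a constant input. Automata of this kind are (under our
# reading of Kremer, 1995) beyond the representational power of RCC
# networks with linear threshold or sigmoid activations.

def make_cycle_fsa(n_states=3):
    """A hypothetical n-state automaton that advances one state per symbol."""
    def step(state, symbol):
        # Any input symbol advances the cycle, so under a constant input
        # the state oscillates with period n_states.
        return (state + 1) % n_states
    return step

step = make_cycle_fsa(3)
state = 0
trace = []
for _ in range(6):
    state = step(state, symbol=1)  # feed a constant input
    trace.append(state)
print(trace)  # [1, 2, 0, 1, 2, 0]
```

The open question is whether CCNs, whose LSTM cells maintain gated internal state, can learn such automata where RCC units provably cannot.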

Clarifying whether CCNs inherit RCC representational constraints would help determine the scope of problems CCNs can solve and whether their architectural choices (LSTM cells and parallel, columnar feature learning) mitigate or eliminate the limitations identified for RCC networks.

References

It is not yet clear if CCNs suffer from similar problems; the argument used by Kremer (1995) might not extend to the complex LSTM architecture used in our networks.

Scalable Real-Time Recurrent Learning Using Columnar-Constructive Networks (Javed et al., 2023, arXiv:2302.05326), in Conclusions and Future Directions