Best approach to learning good neural representations

Determine the most effective approach for learning high-quality internal representations in neural networks, identifying training paradigms that reliably produce robust, organized representations rather than brittle or disorganized ones.

Background

The paper contrasts open-ended neuroevolution of compositional pattern-producing networks (CPPNs) in Picbreeder, which often yields unified factored representations (UFR), with conventional objective-driven stochastic gradient descent (SGD), which tends to produce fractured entangled representations (FER). Although the authors demonstrate striking differences in internal representation between networks with identical outputs, they do not claim a single superior method. Instead, they highlight that the overarching question of which training paradigm best fosters good representations remains unresolved.
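
To make the setting concrete, below is a minimal sketch of a CPPN in the Picbreeder style: a small feedforward network that maps each pixel coordinate to an intensity through a mix of activation functions. The architecture, layer sizes, and activation choices here are illustrative assumptions, not the paper's exact networks.

```python
import numpy as np

# Hypothetical CPPN sketch (illustrative, not the paper's architecture):
# CPPNs map spatial coordinates to pixel values through heterogeneous
# activation functions, which is what gives them regular, patterned outputs.
ACTIVATIONS = {
    "sin": np.sin,
    "gauss": lambda z: np.exp(-z ** 2),
    "sigmoid": lambda z: 1.0 / (1.0 + np.exp(-z)),
}

def render_cppn(weights, acts, size=64):
    """Evaluate a small feedforward CPPN over a size x size image grid.

    weights: list of weight matrices, one per layer
    acts: per-layer activation names from ACTIVATIONS
    """
    xs = np.linspace(-1.0, 1.0, size)
    x, y = np.meshgrid(xs, xs)
    d = np.sqrt(x ** 2 + y ** 2)  # radial distance input, common in CPPNs
    h = np.stack([x.ravel(), y.ravel(), d.ravel()], axis=1)
    for W, a in zip(weights, acts):
        h = ACTIVATIONS[a](h @ W)
    return h.reshape(size, size)

rng = np.random.default_rng(0)
weights = [rng.normal(size=s) for s in [(3, 8), (8, 8), (8, 1)]]
img = render_cppn(weights, ["sin", "gauss", "sigmoid"])
```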

This open problem invites systematic comparison of training strategies (e.g., objective-driven SGD, open-ended search, architectural variations) with respect to representational quality, adaptability, and downstream capabilities such as generalization, creativity, and continual learning.
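
One simple representational probe that such a comparison could employ is a weight sweep: vary a single weight while holding the rest fixed and inspect how the output changes. This is an illustrative choice of diagnostic, not a protocol specified in this section; the intuition is that under UFR a single weight tends to control one coherent aspect of the output, while under FER it produces disorganized changes. The sketch below reuses `render_cppn` and `weights` from the example above.

```python
import numpy as np

def weight_sweep(weights, acts, layer, i, j, span=2.0, steps=5, size=64):
    """Render the CPPN while sweeping one weight through a range.

    `span` and `steps` are illustrative defaults. Returns an array of
    rendered frames, one per swept weight value, for visual inspection.
    """
    base = weights[layer][i, j]
    frames = []
    for v in np.linspace(base - span, base + span, steps):
        perturbed = [W.copy() for W in weights]  # leave all other weights fixed
        perturbed[layer][i, j] = v
        frames.append(render_cppn(perturbed, acts, size=size))
    return np.stack(frames)  # shape: (steps, size, size)

# Sweep one hidden-layer weight of the CPPN sketched above.
frames = weight_sweep(weights, ["sin", "gauss", "sigmoid"], layer=1, i=0, j=0)
```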

References

The question of the best approach to learning good neural representations remains open.

Questioning Representational Optimism in Deep Learning: The Fractured Entangled Representation Hypothesis (Kumar et al., 16 May 2025, arXiv:2505.11581) in Background (Section 2)