
Scalability of physical local-learning methods to backpropagation-level performance

Ascertain whether local-learning-based training algorithms for physical neural networks (PNNs), such as Forward-Forward training and Physical Local Learning, which eliminate end-to-end gradient communication, can reproduce the performance of backpropagation when scaled beyond small laboratory demonstrations, and determine the conditions under which such scaling is feasible.


Background

Local learning eliminates end-to-end gradient communication by training each layer or block with its own objective, offering attractive prospects for distributed and hardware-efficient training in PNNs. Recent experimental demonstrations include Forward-Forward training of optical neural networks and the Physical Local Learning framework across acoustic, microwave, and optical platforms.
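
To make the per-block training principle concrete, the following is a minimal digital sketch of block-local learning in PyTorch: each block has its own auxiliary head and optimizer, and activations are detached at block boundaries so no gradient flows end to end. The two fully connected blocks, the linear classification heads, and the random data are illustrative assumptions only; they do not correspond to the physical hardware or the exact local objectives used in the cited demonstrations.

import torch
import torch.nn as nn

# Minimal sketch of block-local training: every block is updated against its own
# auxiliary objective, and inputs are detached at block boundaries so no gradient
# ever propagates end to end. All components here are illustrative placeholders.

torch.manual_seed(0)
num_classes = 10

blocks = nn.ModuleList([
    nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
    nn.Sequential(nn.Linear(256, 128), nn.ReLU()),
])
# One local classification head per block supplies that block's training signal.
heads = nn.ModuleList([nn.Linear(256, num_classes), nn.Linear(128, num_classes)])
optimizers = [
    torch.optim.Adam(list(b.parameters()) + list(h.parameters()), lr=1e-3)
    for b, h in zip(blocks, heads)
]
loss_fn = nn.CrossEntropyLoss()

def local_training_step(x, y):
    # One update in which every block learns only from its own local loss.
    h = x
    for block, head, opt in zip(blocks, heads, optimizers):
        h = block(h.detach())          # detach: no gradient crosses the block boundary
        loss = loss_fn(head(h), y)     # purely local objective for this block
        opt.zero_grad()
        loss.backward()                # gradients stay inside this block and its head
        opt.step()
    return loss.item()

# Toy usage with random data standing in for a real dataset.
x = torch.randn(32, 784)
y = torch.randint(0, num_classes, (32,))
print(local_training_step(x, y))

Because each block's loss depends only on its detached input, no global backward pass needs to be stored or transmitted across layers, which is the property that local-learning schemes exploit for distributed and hardware-efficient training.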

Despite their promising efficiency and compatibility with analog hardware, it remains uncertain whether these local methods can match the task performance of backpropagation at scale. Establishing their scalability limits and the conditions under which they reach competitive performance is critical for the practical deployment of large PNNs.

References

While local learning has great potential to scale up in terms of hardware, it remains far from clear whether these methods can, at any scale above small laboratory demonstrations, reproduce the performance of backpropagation.

Training of Physical Neural Networks (2406.03372 - Momeni et al., 5 Jun 2024) in Section: Physical Local Learning