Metasurface-based all-optical diffractive convolutional neural networks
Abstract: The escalating energy demands and parallel-processing bottlenecks of electronic neural networks underscore the need for alternative computing paradigms. Optical neural networks, capitalizing on the inherent parallelism and speed of light propagation, present a compelling solution. Nevertheless, physically realizing convolutional neural network (CNN) components all-optically remains a significant challenge. To this end, we propose a metasurface-based all-optical diffractive convolutional neural network (MAODCNN) for computer vision tasks. This architecture synergistically integrates metasurface-based optical convolutional layers, which perform parallel convolution on the optical field, with cascaded diffractive neural networks acting as all-optical decoders. This co-design facilitates layer-wise feature extraction and optimization directly within the optical domain. Numerical simulations confirm that the fusion of convolutional and diffractive layers markedly enhances classification accuracy, a performance that scales with the number of diffractive layers. The MAODCNN framework establishes a viable foundation for practical all-optical CNNs, paving the way for high-efficiency, low-power optical computing in advanced pattern recognition.
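The forward pass the abstract describes, a parallel optical convolution on the field followed by cascaded diffractive layers that decode the result, can be sketched numerically with scalar diffraction theory. The snippet below is illustrative only and not the paper's implementation: the Fourier-plane convolution, the random (untrained) phase masks, and all physical parameters (wavelength, pixel pitch, layer spacing) are assumptions chosen for a self-contained toy example.

```python
import numpy as np

def optical_convolution(field, kernel_tf):
    """Convolution via Fourier-plane multiplication (4f-style optical correlator).
    kernel_tf is the kernel's transfer function sampled on the FFT grid."""
    return np.fft.ifft2(np.fft.fft2(field) * kernel_tf)

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Free-space propagation of a complex field over distance z
    using the angular spectrum method (evanescent components dropped)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_decoder(field, phase_masks, wavelength, dx, z):
    """Cascade of phase-only diffractive layers separated by free space.
    In a trained network each mask would be a learned metasurface profile."""
    for phi in phase_masks:
        field = field * np.exp(1j * phi)   # phase modulation by one layer
        field = angular_spectrum_propagate(field, wavelength, dx, z)
    return field

# Toy forward pass (all parameters are illustrative assumptions)
n = 128
wavelength = 532e-9                # green light
dx = 2e-6                          # 2 µm sampling pitch
z = 5e-3                           # 5 mm layer spacing
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
field_in = np.exp(-(X**2 + Y**2) / (40e-6) ** 2).astype(complex)  # Gaussian "image"

# Hypothetical phase-only convolution kernel and three untrained diffractive layers
rng = np.random.default_rng(0)
kernel_tf = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))
masks = [rng.uniform(0, 2 * np.pi, (n, n)) for _ in range(3)]

field_conv = optical_convolution(field_in, kernel_tf)
field_out = diffractive_decoder(field_conv, masks, wavelength, dx, z)
intensity = np.abs(field_out) ** 2  # what a detector array would record
```

Because every element here is phase-only and all sampled spatial frequencies are propagating at this pitch, the cascade conserves optical energy; a trained version would optimize the masks so that `intensity` concentrates light on class-specific detector regions.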