Partitionable Diffractive Neural Networks (PDNNs)

Updated 25 February 2026
  • PDNNs are optical neural networks consisting of partitioned phase-only diffractive layers that enable independent training and dynamic task assignment.
  • They use scalar diffraction theory and gradient-based optimization to simulate and refine wave propagation across modular subarrays.
  • Their hardware reconfiguration enables submodule recombination without retraining, supporting multiplexed imaging, classification, and holography tasks.

Partitionable diffractive neural networks (PDNNs) are a class of optical neural networks in which phase-only diffractive layers are divided horizontally into submodules, enabling independent training and flexible assembly of multifunctional optical computing architectures. PDNNs combine the speed and energy efficiency of diffractive neural networks (DNNs) with modularity and reconfigurability, allowing dynamic multiplexing of imaging, classification, and holography tasks within a single fabricated optical element. The partitionable design paradigm addresses the limitations of traditional DNNs, in which functionality is fixed upon fabrication and cannot be adapted without hardware modification (Tian et al., 25 Jan 2026).

1. PDNN Architecture and Submodule Partitioning

A PDNN comprises M diffractive layers, each a square array of N × N phase-modulating pixels or "neurons." Each pixel is a pillar of adjustable height fabricated in a photoresist (e.g., HTL resin), with phase modulation ϕ_i^l at layer l and lateral location (x_i, y_i). The phase profile of the l-th layer is represented as a real-valued matrix Φ^l ∈ R^{N×N}.

The lateral aperture is partitioned into K submodules. For example, with K = 4 quadrants of size (N/2) × (N/2), each submodule S_k is defined on a disjoint block R_k of the N × N grid. Formally, Φ_k^l = Φ^l |_{R_k}, where the index sets R_k tile the array, one per k = 1, …, K.

Each submodule S_k comprises M layers localized in its quadrant. When illuminated individually via a spatial mask, each submodule acts as an independent DNN, performing a distinct optical transformation (e.g., focusing, classification, imaging). Lateral concatenation of all submodules forms the global PDNN, realizing complex joint transformations. Submodules may also be physically rotated (e.g., by multiples of 90°) and reassembled, yielding new functionalities without retraining.
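The quadrant partitioning described above can be sketched numerically. The following is a minimal illustration (function names `partition_quadrants` and `assemble` are ours, not from the paper) showing that splitting a phase layer into K = 4 disjoint blocks and laterally concatenating them is a lossless round trip:

```python
import numpy as np

def partition_quadrants(phase):
    """Split an N x N phase matrix into four (N/2) x (N/2) quadrants."""
    n = phase.shape[0] // 2
    return [phase[:n, :n], phase[:n, n:], phase[n:, :n], phase[n:, n:]]

def assemble(quadrants):
    """Laterally concatenate four quadrants back into one N x N layer."""
    top = np.hstack([quadrants[0], quadrants[1]])
    bottom = np.hstack([quadrants[2], quadrants[3]])
    return np.vstack([top, bottom])

rng = np.random.default_rng(0)
phase = rng.uniform(0, 2 * np.pi, size=(8, 8))  # toy 8 x 8 phase layer
quads = partition_quadrants(phase)
assert all(q.shape == (4, 4) for q in quads)
assert np.allclose(assemble(quads), phase)      # lossless round trip
```

Because the blocks are disjoint, each quadrant's phase pattern can be trained, stored, and later re-tiled independently of its neighbors.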

2. Wave-Optical Modeling of PDNNs

Propagation of optical fields through PDNNs is governed by scalar diffraction theory. Each neuron at position (x_i, y_i, z_l) acts as a secondary source under monochromatic illumination of wavelength λ.

The transmission coefficient of neuron i in layer l is t_i^l = a_i^l exp(j ϕ_i^l), with amplitude a_i^l = 1 for phase-only modulation. The field contributed by this neuron to the next plane is described by the Rayleigh–Sommerfeld kernel:

w_i^l(x, y, z) = ((z − z_l) / r²) (1/(2πr) + 1/(jλ)) exp(j 2πr / λ),

where r = √((x − x_i)² + (y − y_i)² + (z − z_l)²).

Forward propagation through each layer is computed as a spatial convolution: u^{l+1} = (u^l · t^l) * h^l, with h^l the free-space impulse response between layers l and l+1. The output intensity at the detector plane is I = |u^{M+1}|².

For simulation, the Rayleigh–Sommerfeld integral or approximations such as the angular-spectrum or Fresnel methods are employed for computational efficiency. Experimental verification uses full-wave 3D finite-difference time-domain (FDTD) solvers.
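The angular-spectrum method mentioned above can be sketched in a few lines. This is an illustrative implementation with made-up grid parameters (the pixel pitch and propagation distance are assumptions, not values from the paper); it filters out evanescent components, so for purely propagating inputs total power is conserved:

```python
import numpy as np

def angular_spectrum(u, wavelength, dx, z):
    """Propagate complex field u a distance z via the angular-spectrum method."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Transfer function for propagating waves; evanescent components are zeroed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(u) * H)

wavelength = 1.58e-3   # ~0.19 THz, in metres (c / f)
dx = 0.4e-3            # pixel pitch (illustrative)
u0 = np.ones((64, 64), dtype=complex)        # uniform plane-wave input
u1 = angular_spectrum(u0, wavelength, dx, z=5e-3)
# A normally incident plane wave has only propagating content, so power
# is conserved exactly.
assert np.isclose(np.sum(np.abs(u1)**2), np.sum(np.abs(u0)**2), rtol=1e-6)
```

In a full PDNN simulation, one such propagation step would be interleaved with a pointwise multiplication by each layer's phase-only transmission t^l.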

3. Training Methodologies and Loss Design

PDNN submodules are optimized by gradient-based methods to minimize task-specific loss functions. For each submodule S_k, a target pattern T_k is specified (e.g., a holographic image or a label mask), and a mean squared error is used: L_k = (1/P) Σ_p (I_k(p) − T_k(p))², with P the number of pixels in the quadrant's output region.

When the full network is illuminated, it may be assigned a distinct global task with target T_glob and corresponding loss

L_glob = (1/P) Σ_p (I(p) − T_glob(p))².

The total loss is the weighted sum L = Σ_k α_k L_k + β L_glob, where the weights α_k and β balance submodule and global objectives. During training, the phase parameters ϕ_i^l are updated via backpropagation through the scalar-diffraction forward model using optimizers such as Adam.
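The weighted loss combination above can be written as a short sketch. The weight values (`alphas`, `beta`) and the toy intensities are assumptions for illustration only:

```python
import numpy as np

def mse(intensity, target):
    """Mean squared error between an output intensity and its target pattern."""
    return np.mean((intensity - target) ** 2)

def total_loss(sub_intensities, sub_targets, glob_intensity, glob_target,
               alphas, beta):
    """Weighted sum of per-submodule MSE losses plus a global-task term."""
    sub = sum(a * mse(i, t)
              for a, i, t in zip(alphas, sub_intensities, sub_targets))
    return sub + beta * mse(glob_intensity, glob_target)

# Toy example: two submodules plus a global task.
i1, t1 = np.ones((4, 4)), np.zeros((4, 4))       # per-pixel error 1 -> MSE 1
i2, t2 = np.zeros((4, 4)), np.zeros((4, 4))      # perfect match  -> MSE 0
ig, tg = np.full((4, 4), 0.5), np.zeros((4, 4))  # MSE 0.25
loss = total_loss([i1, i2], [t1, t2], ig, tg, alphas=[1.0, 1.0], beta=2.0)
assert np.isclose(loss, 1.0 + 0.0 + 2.0 * 0.25)  # = 1.5
```

In practice this scalar would be minimized over the phase parameters with an autodiff framework, since the scalar-diffraction forward model is differentiable.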

4. Submodule Assembly and Functional Reconfiguration

PDNNs leverage horizontal modularity for compositional flexibility. Once trained, submodules can be recombined via lateral assembly and rotation. The transmission function of the full network is the tiling t^l(x, y) = t_k^l(x, y) for (x, y) ∈ R_k, where each submodule pattern may optionally be rotated by θ_k ∈ {0°, 90°, 180°, 270°} before assignment.

Activating several or all submodules realizes new composite transformations. No retraining of the individual submodules is necessary to generate new task configurations; only the total loss for the new global task must be specified and optimization performed over the relevant parameters.
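The rotate-and-reassemble step can be sketched as follows (the function `reassemble` is illustrative, not from the paper). Rotation by multiples of 90° only permutes pixels, so each trained quadrant's phase content is carried over unchanged:

```python
import numpy as np

def reassemble(quadrants, rotations):
    """Rotate each (n x n) quadrant by k*90 degrees, then tile them 2 x 2."""
    rotated = [np.rot90(q, k) for q, k in zip(quadrants, rotations)]
    top = np.hstack(rotated[:2])
    return np.vstack([top, np.hstack(rotated[2:])])

rng = np.random.default_rng(1)
quads = [rng.uniform(0, 2 * np.pi, (4, 4)) for _ in range(4)]
full = reassemble(quads, rotations=[0, 1, 2, 3])  # 0, 90, 180, 270 degrees
assert full.shape == (8, 8)
# Rotation is a pixel permutation, so the multiset of phase values in the
# top-right tile matches the (rotated) second quadrant exactly.
assert np.allclose(np.sort(full[:4, 4:].ravel()), np.sort(quads[1].ravel()))
```

This mirrors the hardware operation: physically rotating a fabricated quadrant changes the composite transformation without altering (or retraining) the quadrant itself.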

This architecture enables a practical form of hardware reconfiguration within monolithic diffractive devices—unlike traditional DNNs that require physical modification for new functions (Tian et al., 25 Jan 2026).

5. Experimental Realizations and Performance Metrics

PDNNs have been experimentally demonstrated in the terahertz (0.19 THz) regime using 3D-printed resin phase plates. Table 1 summarizes selected tasks and reported performance metrics from (Tian et al., 25 Jan 2026):

| Configuration | Sim. Diffraction Efficiency | Measured Efficiency |
|---|---|---|
| Holography: "0"–"1", "5" digits (Q₁–Q₄) | 46.98–48.44% | 14.53–16.65% |
| Holography: "3" with full network | 39.11% | 13.49% |
| Letters "S", "J", "T", "U" (Q₁–Q₄) | 46.8–48.1% | 16.1% |
| MNIST "1" vs "3" (full net, classification accuracy) | 95.16% (sim.) | 100% (test, exp.) |

Efficiencies decrease in experimental settings due to absorption and reflection losses. Classification and composite-holography performance matches theoretical predictions closely when using 3D-printed devices. The lateral modularity allows instant reconfiguration; for example, rotating Q₁–Q₄ by 0°, 90°, 180°, and 270° multiplexes the generation of new output digits without retraining.

6. Scalability, Multiplexing, and Practical Considerations

PDNNs support modular scalability; K can be increased for finer subdivision (e.g., a 3 × 3 grid of nine submodules for nine parallel tasks), trading off against total aperture size. The approach is compatible with spectral (wavelength-division), polarization, and orbital-angular-momentum multiplexing, multiplying channel count without increasing chip area (Tian et al., 25 Jan 2026, Motz et al., 2024).

Physical implementation leverages 3D printing of phase elements with pixel heights quantized in 10 μm steps up to 1400 μm. Rigorous alignment (better than one pixel, 800 μm) and angular tolerance (<1°) are necessary to avoid inter-submodule cross-talk. A 10 mm substrate thickness ensures mechanical stability.
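Mapping an optimized phase profile to printable pillar heights can be sketched as below. The index contrast `DELTA_N = 0.6` is an assumed value for a THz-transparent resin, not a figure from the paper; only the 10 μm step and 1400 μm ceiling come from the text above:

```python
import numpy as np

WAVELENGTH_UM = 1580.0   # ~0.19 THz illumination wavelength, in microns
DELTA_N = 0.6            # ASSUMED refractive-index contrast of the resin
STEP_UM = 10.0           # printer height quantization (from the text)
H_MAX_UM = 1400.0        # maximum pillar height (from the text)

def phase_to_height(phase):
    """Convert phase in [0, 2*pi) to a quantized, clipped pillar height (um)."""
    h = phase * WAVELENGTH_UM / (2 * np.pi * DELTA_N)  # phase = 2*pi*dn*h/lambda
    h = np.round(h / STEP_UM) * STEP_UM                # snap to 10 um steps
    return np.clip(h, 0.0, H_MAX_UM)

heights = phase_to_height(np.array([0.0, np.pi / 2, np.pi]))
assert np.all(heights % STEP_UM == 0)   # every height is a 10 um multiple
assert heights.max() <= H_MAX_UM
```

The quantization step is why alignment and height tolerances, rather than phase resolution, tend to dominate the experimental efficiency loss.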

This suggests PDNNs are suited to lightweight, low-power, and adaptive optical AI deployments where hardware flexibility and functional integration are prioritized.

7. Context: Comparison to Multi-Wavelength and Spectral DNNs

The partitionable approach is distinct from—yet compatible with—multi-wavelength or "spectral" diffractive networks, where sequential DOEs are optimized for different tasks addressed via distinct illumination wavelengths (Motz et al., 2024). Both approaches use phase-only DOEs and differentiable wave-optical models, but PDNNs expand functional capacity by lateral submodule recombination rather than relying on spectral channel orthogonality. A plausible implication is that PDNNs can be combined with wavelength, polarization, or spatial-multiplexing strategies to further augment multitask capacity within a single device.

In summary, PDNNs advance all-optical neural network design by introducing horizontal modularity and submodule reuse, enabling versatile, reconfigurable, and multitask optical information processing in a unified diffractive platform (Tian et al., 25 Jan 2026).
