MASI: Scalable Optical Synthetic Aperture Imaging
- MASI is an optical synthetic aperture imaging system that digitally fuses coded sensor data to achieve super-resolved, full-field 3D reconstructions.
- It employs sub-pixel dithering and software-based phase synchronization to overcome traditional hardware limitations in optical imaging.
- Experimental results demonstrate MASI’s ability to resolve sub-micron features and expand imaging field-of-view by up to 16×.
The Multiscale Aperture Synthesis Imager (MASI) is a class of imaging system that achieves super-resolved, full-field, and three-dimensional imaging by coherently combining distributed measurements from a scalable array of independently operating optical sensors. MASI transforms classical synthetic aperture imaging—prevalent in radio astronomy and radar—into the optical domain using a specialized computational architecture that eliminates the need for precision interferometric synchronization and extensive hardware alignment, instead relying on software-based phase synchronization and multiscale ptychographic reconstruction. This enables MASI to resolve sub-micron features at centimeter or greater working distances, operate in a lensless configuration, and handle complex objects with large phase gradients, thus overcoming longstanding scalability and field-of-view limitations in optical synthetic aperture systems (Wang et al., 8 Nov 2025).
1. MASI Principle and Conceptual Motivation
MASI implements a generalization of synthetic aperture imaging for optical wavelengths, motivated by challenges unique to the optical regime. In classical synthetic aperture approaches (used in radio astronomy), resolution is increased beyond the diffraction limit of a single element by coherently fusing signals from widely spaced receivers. At optical wavelengths, extending these principles is complicated by the need for sub-wavelength phase stability and extensive hardware for beam combination or overlapping measurements (as in Fourier ptychography).
MASI's core innovation is to break the imaging task into many independently solvable subproblems by deploying an array of physically separated coded sensors. Each sensor operates locally, acquiring redundant coded intensity measurements via sub-pixel dithering and ptychographic diversity, such that the complex wavefield can be recovered for each sensor without reference beams or overlapping spatial coverage with other sensors. The resulting wavefields are phase-synchronized computationally in a global optimization step, constructing a synthetic aperture that exceeds the standalone sensor's diffraction limit. This paradigm shift translates the challenging problem of sub-wavelength hardware synchronization into a tractable software problem, enabling flexible layouts and long baselines (Wang et al., 8 Nov 2025).
2. Physical and Measurement Architecture
The MASI architecture consists of a 2D grid of independent sensors (often implemented as CMOS/CCD micro-cameras), each mated with a thin, pre-calibrated amplitude-and-phase coded surface:
- Coded Surface: A mask patterned at or below the wavelength scale acts as a deterministic ptychographic probe, encoding phase and low-frequency information for computational recovery.
- Sensor Placement: Each sensor sits at a unique lateral position and axial offset relative to the object, but with arbitrary spacing; no requirement exists for overlapping measurement regions, physical reference beams, or precise depth matching.
- Dithering: Piezo-controlled or other precision micro-actuation stages introduce sub-pixel lateral dithers (on the order of 1 µm), enabling multiple intensity measurements per sensor for robust phase retrieval.
- Operation: The object is illuminated by a coherent source. Each sensor records temporal stacks of coded diffraction patterns, forming the per-sensor measurement data set.
Each sensor's intensity stack supports independent non-interferometric reconstruction, producing a local complex field over the sensor's support. No cross-sensor synchrony or calibration is required during data acquisition (Wang et al., 8 Nov 2025).
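The acquisition model above can be sketched numerically. The following toy simulation uses illustrative parameters (500 nm illumination, 1 µm pixels, a random phase mask, a 2 cm object distance) and an integer-pixel `np.roll` as a stand-in for the true sub-pixel dither; it is a sketch of the measurement geometry, not the paper's implementation:

```python
import numpy as np

def propagate(field, dist, wavelength, pixel):
    """Angular-spectrum free-space propagation of a square 2D complex field."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - FX**2 - FY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist))

rng = np.random.default_rng(0)
n, wl, px = 128, 0.5e-6, 1.0e-6                        # grid, wavelength, pixel pitch
obj = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))   # toy phase object
mask = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))  # pre-calibrated coded surface
field = propagate(obj, 2e-2, wl, px)                   # object -> coded surface

stack = []
for dx, dy in [(0, 0), (0, 1), (1, 0), (1, 1)]:        # dither positions
    shifted = np.roll(field, (dy, dx), axis=(0, 1))    # lateral dither (integer stand-in)
    at_pixels = propagate(shifted * mask, 2e-4, wl, px)  # coded surface -> pixel plane
    stack.append(np.abs(at_pixels) ** 2)               # intensity-only record
stack = np.array(stack)                                # one sensor's measurement stack
```

Each physical sensor would contribute one such intensity stack; per-sensor phase retrieval then inverts this forward model locally, with no cross-sensor coordination.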
3. Computational Phase Synchronization and Wavefield Fusion
Once the individual complex wavefields are recovered over each sensor's support, MASI executes a multi-step computational protocol:
- Wavefield Modeling: Each sensor measures $\psi_k = O * h_{z_k}$, the object field $O$ convolved with the free-space propagation kernel $h_{z_k}$ over the object-to-sensor distance $z_k$.
- Cropping and Coded Modulation: The physically supported region is $\psi_k^{(c)}(\mathbf{r}) = W_k(\mathbf{r})\,\psi_k(\mathbf{r})$, where $W_k$ is the aperture window of sensor $k$; this cropped field is modulated by the coded surface.
- Modeling Measurements: For dithering shift $\mathbf{d}_m$, the intensity is
$$I_{k,m}(\mathbf{r}) = \left|\big[M_k(\mathbf{r})\,\psi_k^{(c)}(\mathbf{r}-\mathbf{d}_m)\big] * h_{d}\right|^2,$$
where $M_k$ is the known mask and $d$ is the coded-surface-to-pixel distance.
- Numerical Back-Propagation: Each recovered $\psi_k$ is zero-padded to the full array extent, then numerically propagated back to the object plane: $\hat{O}_k = \mathrm{pad}(\psi_k) * h_{-z_k}$.
- Global Phase Alignment: Each $\hat{O}_k$ is determined up to an unknown constant phase $\phi_k$. MASI solves for $\{\phi_k\}$ maximizing the constructive interference in the fused object estimate:
$$\{\phi_k\} = \arg\max_{\{\phi_k\}} \int \Big|\sum_k e^{i\phi_k}\,\hat{O}_k(\mathbf{r})\Big|^2 \, d\mathbf{r}.$$
Practically, one reference sensor is fixed ($\phi_1 = 0$) and the remaining phases are optimized by coordinate descent.
This computational phase synchronization fuses all sensor data into a coherent super-resolved reconstruction, with only one scalar phase per sensor to optimize, reducing the synchronization parameter space by many orders of magnitude compared to full-field interferometric approaches (Wang et al., 8 Nov 2025).
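The alignment step admits a compact sketch. Assuming each back-propagated field is known up to one constant phase, each coordinate-descent update has the closed form `phi_k = -angle(<rest, O_k>)`, which maximizes the fused energy over that single variable; the function name and synthetic test case below are illustrative, not the paper's code:

```python
import numpy as np

def align_phases(fields, n_sweeps=5):
    """Coordinate descent over one scalar phase per sensor,
    maximizing the energy of the coherently fused field."""
    phases = np.zeros(len(fields))               # sensor 0 is the fixed reference
    for _ in range(n_sweeps):
        for k in range(1, len(fields)):
            rest = sum(np.exp(1j * phases[j]) * fields[j]
                       for j in range(len(fields)) if j != k)
            c = np.vdot(rest, fields[k])         # inner product <rest, O_k>
            phases[k] = -np.angle(c)             # closed-form 1D maximizer
    return phases

# synthetic check: four copies of one field, each with an unknown constant phase
rng = np.random.default_rng(1)
base = np.exp(1j * rng.uniform(0, 2 * np.pi, (64, 64)))
fields = [np.exp(1j * o) * base for o in (0.0, 1.3, -2.1, 0.7)]
phi = align_phases(fields)
fused = sum(np.exp(1j * p) * f for p, f in zip(phi, fields))
print(np.abs(fused).mean())                      # close to 4.0: fully constructive
```

The closed-form update makes each sweep cheap, which is what keeps the parameter space to one scalar per sensor rather than a full-field phase map.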
4. Diffraction-Based Field Expansion
MASI exploits the wave-optical property that back-propagating a detector-plane wavefield inherently expands the reconstructed object field:
- Zero Padding: By zero-padding the recovered detector-plane wavefield to a region much larger than the sensor, MASI computationally reconstructs illumination from parts of the object not directly above the physical sensor.
- Physical Mechanism: Each sensor captures angular spread in its local measurement, which—under free-space propagation—maps to spatial information (including regions beyond the sensor's footprint).
- Mathematical Formalism: back-propagation over distance $z$ is implemented in the Fourier domain,
$$\hat{O}(\mathbf{r}) = \mathcal{F}^{-1}\!\left\{\tilde{\psi}(\mathbf{u})\, e^{-i 2\pi z\sqrt{\lambda^{-2} - |\mathbf{u}|^2}}\right\},$$
where $\tilde{\psi}$ is the Fourier transform of the padded wavefield.
- Empirical Results: Single-sensor experiments demonstrate up to a 16× increase in field area as zero padding is increased (Wang et al., 8 Nov 2025).
This mechanism enables computational field-of-view expansion, revealing phase-contrast features and supporting natural data obfuscation outside the physical array.
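A quick numerical illustration of this expansion, under assumed toy parameters (the `propagate` helper is a generic angular-spectrum implementation, not the paper's code): a detector-plane field is cropped to a small sensor footprint, zero-padded, and back-propagated, and a substantial fraction of the reconstructed energy lands outside the footprint, i.e., in regions the sensor never physically covered.

```python
import numpy as np

def propagate(field, dist, wavelength, pixel):
    """Angular-spectrum free-space propagation of a square 2D complex field."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - FX**2 - FY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist))

wl, px, z, big = 0.5e-6, 2.0e-6, 5e-3, 256
rng = np.random.default_rng(2)
obj = np.exp(1j * rng.uniform(0, 2 * np.pi, (big, big)))  # extended phase object
det = propagate(obj, z, wl, px)                           # field at the detector plane

sensor = np.zeros_like(det)
c, h = big // 2, 32                                       # 64x64 sensor footprint
sensor[c-h:c+h, c-h:c+h] = det[c-h:c+h, c-h:c+h]          # crop = physical support
back = propagate(sensor, -z, wl, px)                      # zero-padded back-propagation

intensity = np.abs(back) ** 2
outside = intensity.copy()
outside[c-h:c+h, c-h:c+h] = 0                             # mask off the footprint
frac_out = outside.sum() / intensity.sum()                # energy beyond the footprint
print(frac_out > 0.1)                                     # True: diffraction spreads it
```

The angular spread captured within the footprint maps, under back-propagation, to lateral positions well outside it, which is the physical mechanism behind the documented field-of-view expansion.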
5. Performance Metrics and Imaging Capabilities
Key performance characteristics of MASI include:
- Lateral Resolution: Determined by the effective synthetic aperture,
$$\delta \approx \frac{\lambda z}{D_{\mathrm{syn}}},$$
where $D_{\mathrm{syn}}$ is the maximum sensor-array extent and $z$ the working distance. Experimental results show resolution of $780$ nm line features at a $2$ cm working distance, a threefold improvement over a single sensor (limited to roughly $2.3$ µm).
- Axial (3D) Resolution: Achieved through digital focusing; by propagating the reconstructed volume to candidate depths and optimizing a sharpness metric, micron-scale depth discrimination is demonstrated over centimeter-scale depth ranges.
- 3D Wavefield and View Synthesis: The recovered object field can be propagated to different axial planes or subjected to Fourier-domain pupil shifts to generate refocused or synthetic-angle images:
$$O_{\theta}(\mathbf{r}) = \mathcal{F}^{-1}\!\left\{\tilde{O}(\mathbf{u})\,P(\mathbf{u}-\mathbf{u}_{\theta})\right\},$$
where $P(\mathbf{u}-\mathbf{u}_{\theta})$ is the angularly shifted pupil.
- Field Expansion: Systematically increasing the padding factor in detector space increases the accessible field area in the reconstruction, with expansions of up to 16× in area documented (Wang et al., 8 Nov 2025).
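The digital-focusing step behind the axial-resolution figure can be illustrated with a depth scan. This sketch uses the peak intensity of a back-propagated point source as a simple sharpness proxy (the paper's actual metric is not reproduced here), with illustrative parameters:

```python
import numpy as np

def propagate(field, dist, wavelength, pixel):
    """Angular-spectrum free-space propagation of a square 2D complex field."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - FX**2 - FY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist))

wl, px, n = 0.5e-6, 2.0e-6, 256
z_true = 3e-3
src = np.zeros((n, n), dtype=complex)
src[n // 2, n // 2] = 1.0                       # point emitter at the object plane
det = propagate(src, z_true, wl, px)            # stands in for a recovered wavefield

# scan candidate depths; sharpest refocus marks the emitter's axial position
zs = np.linspace(1e-3, 5e-3, 41)
scores = [np.abs(propagate(det, -z, wl, px)).max() for z in zs]
z_best = zs[int(np.argmax(scores))]
print(z_best)                                   # the true depth, 3e-3 m
```

Because propagation is applied purely in software, the same recovered field yields the whole focal stack; depth precision is set by the sharpness metric's peak width rather than by any mechanical refocusing.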
6. Scalability and Computational Architecture
By isolating synchronization to a small set of scalar global phases, MASI sidesteps the need for hardware-based sub-wavelength path stabilization present in overlapping Fourier ptychography or interferometric methods:
- Algorithmic Simplicity: The synchronization problem reduces to one scalar phase offset per sensor, as opposed to stabilizing a high-dimensional, full-field phase map.
- Physical Flexibility: Sensors may be placed with arbitrary spacings, at different heights, orientations, or even across physically separated platforms, subject only to basic geometric calibration.
- Computational Feasibility: The optimization over global phases is computationally lightweight (coordinate descent over one scalar variable per sensor), easily scalable to tens or hundreds of sensors.
- Comparison to Other Domains: MASI's fusion philosophy is analogous to techniques employed in the Event Horizon Telescope, which solves phase ambiguity computationally using atomic-clock-referenced timing; MASI replaces the hardware with computational phase locking (Wang et al., 8 Nov 2025).
- Scalability Limits: The approach remains robust and tractable as the number of sensors increases, with linear complexity in the number of sensor-phase parameters.
7. Experimental Results and Comparative Analyses
Selected experimental benchmarks from MASI deployments are:
| Experiment | Single Sensor | MASI Fused Result |
|---|---|---|
| Point-source PSF | baseline FWHM | ≈3× narrower FWHM |
| Resolution chart ($2$ cm) | ≈$2.3$ µm resolution | Resolves $780$ nm lines |
| Fingerprint field expansion | baseline field area | Up to 16× larger area |
| 3D axial mapping | — | Micron-scale depth discrimination |
- Data Hiding and Steganography: Regions not covered by any sensor are computationally inaccessible unless sufficient field expansion is applied, naturally supporting information concealment.
- Phase Recovery Robustness: MASI is robust to objects with large phase gradients and discontinuities—regimes where conventional Fourier ptychography struggles.
- Comparison with Overlap-Based and Interferometric Methods: MASI eliminates the requirement for overlapped sampling or reference beams and scales to long baselines, making it substantially more practical for large-format and distributed synthetic aperture imaging (Wang et al., 8 Nov 2025).
MASI represents a paradigm for computationally scalable optical synthetic aperture imaging harnessing distributed arrays of legacy or custom sensors, providing a path to lensless, super-resolved, and three-dimensional imaging without strict mechanical or optical synchronization constraints.