MASI: Scalable Optical Synthetic Aperture Imaging

Updated 15 November 2025
  • MASI is an optical synthetic aperture imaging system that digitally fuses coded sensor data to achieve super-resolved, full-field 3D reconstructions.
  • It employs sub-pixel dithering and software-based phase synchronization to overcome traditional hardware limitations in optical imaging.
  • Experimental results demonstrate MASI’s ability to resolve sub-micron features and expand imaging field-of-view by up to 16×.

The Multiscale Aperture Synthesis Imager (MASI) is a class of imaging system that achieves super-resolved, full-field, and three-dimensional imaging by coherently combining distributed measurements from a scalable array of independently operating optical sensors. MASI transforms classical synthetic aperture imaging—prevalent in radio astronomy and radar—into the optical domain using a specialized computational architecture that eliminates the need for precision interferometric synchronization and extensive hardware alignment, instead relying on software-based phase synchronization and multiscale ptychographic reconstruction. This enables MASI to resolve sub-micron features at centimeter or greater working distances, operate in a lensless configuration, and handle complex objects with large phase gradients, thus overcoming longstanding scalability and field-of-view limitations in optical synthetic aperture systems (Wang et al., 8 Nov 2025).

1. MASI Principle and Conceptual Motivation

MASI implements a generalization of synthetic aperture imaging for optical wavelengths, motivated by challenges unique to the optical regime. In classical synthetic aperture approaches (used in radio astronomy), resolution is increased beyond the diffraction limit of a single element by coherently fusing signals from widely spaced receivers. At optical wavelengths, extending these principles is complicated by the need for sub-wavelength phase stability and extensive hardware for beam combination or overlapping measurements (as in Fourier ptychography).

MASI's core innovation is to break the imaging task into many independently solvable subproblems by deploying an array of physically separated coded sensors. Each sensor operates locally, acquiring redundant coded intensity measurements via sub-pixel dithering and ptychographic diversity, such that the complex wavefield can be recovered for each sensor without reference beams or overlapping spatial coverage with other sensors. The resulting wavefields are phase-synchronized computationally in a global optimization step, constructing a synthetic aperture that exceeds the standalone sensor's diffraction limit. This paradigm shift translates the challenging problem of sub-wavelength hardware synchronization into a tractable software problem, enabling flexible layouts and long baselines (Wang et al., 8 Nov 2025).

2. Physical and Measurement Architecture

The MASI architecture consists of a 2D grid of S independent sensors (often implemented as CMOS/CCD micro-cameras), each mated with a thin, pre-calibrated amplitude-and-phase coded surface:

  • Coded Surface: A mask patterned at or below the wavelength scale acts as a deterministic ptychographic probe, encoding phase and low-frequency information for computational recovery.
  • Sensor Placement: Each sensor sits at a unique lateral position (x_s, y_s) and axial offset h_s relative to the object, with arbitrary spacing; no requirement exists for overlapping measurement regions, physical reference beams, or precise depth matching.
  • Dithering: Piezo-controlled or other precision micro-actuation stages introduce sub-pixel lateral dithers (~1 µm), enabling multiple intensity measurements per sensor for robust phase retrieval.
  • Operation: The object is illuminated by a coherent source. Each sensor records temporal stacks of coded diffraction patterns, forming the measurement data set \{I_{s,j}\}.

Each sensor's intensity stack supports independent non-interferometric reconstruction, producing a local complex field W_s(x, y) over the sensor's support. No cross-sensor synchrony or calibration is required during data acquisition (Wang et al., 8 Nov 2025).
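
A minimal numerical sketch of this per-sensor acquisition model follows. All parameters (grid size, wavelength, distances), the random mask statistics, and the single-pixel rolls standing in for sub-pixel dithers are illustrative assumptions, not values from the paper:

```python
import numpy as np

def propagate(field, wavelength, dx, z):
    """Angular-spectrum free-space propagation of a square 2D complex field by z."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = 2 * np.pi * np.fft.fftfreq(n, d=dx)      # angular spatial frequencies
    KX, KY = np.meshgrid(fx, fx)
    arg = k**2 - KX**2 - KY**2
    kz = np.sqrt(np.maximum(arg, 0.0).astype(complex))
    H = np.exp(1j * kz * z) * (arg > 0)           # suppress evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
n, wavelength, dx = 256, 532e-9, 1e-6             # assumed grid, green laser, 1 µm pixels
h_s, d = 2e-2, 5e-4                               # object-to-sensor, mask-to-pixel distances

obj = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))   # toy phase object o(x, y)
cs = np.exp(1j * rng.uniform(0, 2 * np.pi, (n, n)))    # pre-calibrated coded surface CS_s
W_s = propagate(obj, wavelength, dx, h_s)              # field reaching sensor s

# Dither positions j: integer rolls stand in for the paper's sub-pixel shifts.
stack = []
for shift in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    shifted = np.roll(W_s, shift, axis=(0, 1))
    I_sj = np.abs(propagate(shifted * cs, wavelength, dx, d))**2
    stack.append(I_sj)
I = np.stack(stack)                               # measurement set {I_{s,j}}
```

Each intensity frame is non-interferometric: only the coded surface and the dither diversity make the local phase recoverable.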

3. Computational Phase Synchronization and Wavefield Fusion

Once the individual complex wavefields W_s(x, y) are recovered (typically m \times n in size), MASI executes a multi-step computational protocol:

  • Wavefield Modeling: Each sensor measures W_s(x, y) = o(x, y) * \mathrm{psf}_{\rm free}(h_s), the object field convolved with the free-space propagation kernel from the object plane.
  • Cropping and Coded Modulation: The physically supported region is W_s^{\rm crop}(x, y) = W_s(x - x_s, y - y_s).
  • Modeling Measurements: For dithering shift (\Delta x_j, \Delta y_j), the intensity is

I_{s, j}(x, y) = \left| \left[ W_s^{\rm crop}(x, y) \cdot CS_s(x, y) \right] * \mathrm{psf}_{\rm free}(d) \right|^2

where CS_s is the known mask and d is the coded-surface-to-pixel distance.

  • Numerical Back-Propagation: Each W_s is zero-padded to the full array extent, then numerically propagated back to the object plane:

\tilde{W}_s(x, y) = W_s^{\rm pad}(x, y) * \mathrm{psf}_{\rm free}(-h_s)

  • Global Phase Alignment: Each W_s is determined only up to an unknown constant phase. MASI solves for the phases \{\phi_s\} that maximize constructive interference in the fused object estimate:

O_{\rm rec}(x, y) = \sum_{s=1}^{S} e^{i\phi_s} \tilde{W}_s(x, y)

\{\phi_s\} = \underset{\{\phi_s\}}{\arg\max} \sum_{x, y} | O_{\rm rec}(x, y) |^2

In practice, one reference sensor is fixed (\phi_0 = 0) and the remaining phases are optimized by coordinate descent.

This computational phase synchronization fuses all sensor data into a coherent super-resolved reconstruction, with only S scalar phases to optimize, reducing the synchronization parameter space by many orders of magnitude compared to full-field interferometric approaches (Wang et al., 8 Nov 2025).
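
The fusion protocol above can be sketched as follows. The propagator, array geometry, and the closed-form per-coordinate update (maximizing the objective over one phase while holding the others fixed) are a sketch under assumed parameters, not the authors' implementation:

```python
import numpy as np

def propagate(field, wavelength, dx, z):
    """Angular-spectrum free-space propagation (z < 0 back-propagates)."""
    n = field.shape[0]
    k = 2 * np.pi / wavelength
    fx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(fx, fx)
    kz = np.sqrt(np.maximum(k**2 - KX**2 - KY**2, 0.0).astype(complex))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def fuse(wavefields, positions, full_n, wavelength, dx, h):
    """Zero-pad each local W_s into the full array, back-propagate, phase-align."""
    backs = []
    for W, (r, c) in zip(wavefields, positions):
        pad = np.zeros((full_n, full_n), dtype=complex)
        pad[r:r + W.shape[0], c:c + W.shape[1]] = W          # W_s^pad
        backs.append(propagate(pad, wavelength, dx, -h))     # tilde{W}_s
    phases = np.zeros(len(backs))                            # phi_0 = 0 reference
    for _ in range(10):                                      # coordinate-descent sweeps
        for s in range(1, len(backs)):
            rest = sum(np.exp(1j * p) * B for p, B in zip(phases, backs)) \
                   - np.exp(1j * phases[s]) * backs[s]
            # Closed form: maximize |rest + e^{i phi} B_s|^2 over phi.
            phases[s] = -np.angle(np.vdot(rest, backs[s]))
    return sum(np.exp(1j * p) * B for p, B in zip(phases, backs))

# Toy demo: two half-aperture sensors view the same field, each carrying an
# arbitrary constant phase offset that the fusion step must recover.
wl, dx, h, n = 532e-9, 1e-6, 1e-2, 64
obj = np.exp(1j * 0.1 * np.add.outer(np.arange(n), np.arange(n)))
det = propagate(obj, wl, dx, h)
W0 = det[:, :32] * np.exp(1j * 1.3)
W1 = det[:, 32:] * np.exp(1j * 2.9)
fused = fuse([W0, W1], [(0, 0), (0, 32)], n, wl, dx, h)
```

Because each update maximizes the shared objective over one phase at a time, the fused energy is monotonically non-decreasing across sweeps.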

4. Diffraction-Based Field Expansion

MASI exploits the wave-optical property that back-propagating a detector-plane wavefield inherently expands the reconstructed object field:

  • Zero Padding: By zero-padding the recovered W_s to a region much larger than the sensor, MASI computationally reconstructs illumination from parts of the object not directly above the physical sensor.
  • Physical Mechanism: Each sensor captures angular spread in its local measurement, which—under free-space propagation—maps to spatial information (including regions beyond the sensor's footprint).
  • Mathematical Formalism:

o_{\rm recon}(x, y) \propto \mathcal{F}^{-1} \left\{ \widehat{W}_s(k_x, k_y) \, e^{i \sqrt{k^2 - k_x^2 - k_y^2}\, h_s} \right\}

where \widehat{W}_s is the Fourier transform of the padded wavefield.

  • Empirical Results: Single-sensor experiments demonstrate up to a 16\times field-area increase (e.g., expanding from 4.6 \times 3.4 mm^2 to 16.6 \times 15.4 mm^2) as padding is increased (Wang et al., 8 Nov 2025).

This mechanism enables computational field-of-view expansion, revealing phase-contrast features and supporting natural data obfuscation outside the physical array.
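
As a quick sanity check, the quoted field dimensions are consistent with the claimed area expansion:

```python
# Arithmetic check on the reported field expansion (numbers from the text).
native = 4.6 * 3.4        # mm^2, physical single-sensor field
expanded = 16.6 * 15.4    # mm^2, reconstruction after zero-padding
ratio = expanded / native
print(round(ratio, 1))    # ~16.3, consistent with the quoted "up to 16x"
```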

5. Performance Metrics and Imaging Capabilities

Key performance characteristics of MASI include:

  • Lateral Resolution: Determined by the effective synthetic aperture D_{\rm eff},

\delta x \approx \lambda / D_{\rm eff}

where D_{\rm eff} is the maximum extent of the sensor array. Experiments resolve 780 nm line features at a 2 cm working distance, a threefold improvement over a single sensor, which is limited to approximately 2.19 µm.

  • Axial (3D) Resolution: Achieved through digital focusing; by propagating the reconstructed volume to candidate depths and optimizing a sharpness metric, ~6.5 µm depth discrimination is demonstrated over centimeter depths.
  • 3D Wavefield and View Synthesis: The recovered O_{\rm rec}(x, y) can be propagated to different axial planes or subjected to Fourier-domain pupil shifts to generate refocused or synthetic-angle images:

O_{\rm view}(x, y) = \mathcal{F}^{-1} \left\{ \mathcal{F}\{ O_{\rm rec} \} \cdot P(k_x, k_y) \right\}

where P is the angularly shifted pupil.

  • Field Expansion: Systematically increasing the padding factor in detector space enlarges the accessible field area in the reconstruction, with expansions of up to 16\times documented (Wang et al., 8 Nov 2025).
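
The pupil-shift view synthesis can be sketched in a few lines; the pupil radius, the shift, and the random test field are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

def synthesize_view(O_rec, radius_frac=0.3, shift=(0, 0)):
    """Apply a circular pupil P(kx, ky), offset by `shift` frequency bins,
    in the Fourier domain of the fused reconstruction O_rec."""
    n = O_rec.shape[0]
    fx = np.fft.fftfreq(n)                         # cycles per sample
    KX, KY = np.meshgrid(fx, fx)
    P = (KX - shift[0] / n)**2 + (KY - shift[1] / n)**2 <= (radius_frac * 0.5)**2
    return np.fft.ifft2(np.fft.fft2(O_rec) * P)

rng = np.random.default_rng(1)
O_rec = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
on_axis = synthesize_view(O_rec)                   # centered pupil: refocused view
oblique = synthesize_view(O_rec, shift=(10, 0))    # shifted pupil: synthetic angle
```

Shifting the pupil selects a different angular subset of the recovered spectrum, which is what produces the synthetic viewing angles described above.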

6. Scalability and Computational Architecture

By isolating synchronization to a small set of scalar global phases, MASI sidesteps the need for hardware-based sub-wavelength path stabilization present in overlapping Fourier ptychography or interferometric methods:

  • Algorithmic Simplicity: The synchronization problem reduces to S phase offsets for S sensors, as opposed to stabilizing a high-dimensional phase map.
  • Physical Flexibility: Sensors may be placed with arbitrary spacings, at different heights, orientations, or even across physically separated platforms, subject only to basic geometric calibration.
  • Computational Feasibility: The optimization over global phases is computationally lightweight (coordinate descent over S variables), easily scalable to tens or hundreds of sensors.
  • Comparison to Other Domains: MASI's fusion philosophy is analogous to techniques employed in the Event Horizon Telescope, which solves phase ambiguity computationally using atomic-clock-referenced timing; MASI replaces the hardware with computational phase locking (Wang et al., 8 Nov 2025).
  • Scalability Limits: The approach remains robust and tractable as the number of sensors increases, with linear complexity in the number of sensor-phase parameters.

7. Experimental Results and Comparative Analyses

Selected experimental benchmarks from MASI deployments are:

Experiment                    Single Sensor            MASI Fused Result
Point-source PSF              FWHM ≈ 21.8 µm           FWHM ≈ 6.2 µm
Resolution chart (2 cm)       ~2.19 µm resolution      Resolves 780 nm lines
Fingerprint field expansion   4.6 × 3.4 mm² area       16.6 × 15.4 mm² area
3D axial mapping              N/A                      ~6.5 µm depth
  • Data Hiding and Steganography: Regions not covered by any sensor are computationally inaccessible unless sufficient field expansion is enacted, naturally supporting information concealment.
  • Phase Recovery Robustness: MASI is robust to objects with large phase gradients and discontinuities—regimes where conventional Fourier ptychography struggles.
  • Comparison with Overlap-Based and Interferometric Methods: MASI eliminates the requirement for overlapped sampling or reference beams and scales to long baselines, making it substantially more practical for large-format and distributed synthetic aperture imaging (Wang et al., 8 Nov 2025).

MASI represents a paradigm for computationally scalable optical synthetic aperture imaging harnessing distributed arrays of legacy or custom sensors, providing a path to lensless, super-resolved, and three-dimensional imaging without strict mechanical or optical synchronization constraints.
