Lensless HoloVAM: Holographic Volumetric Systems
- Lensless HoloVAM is a holographic volumetric approach that replaces conventional optical elements with phase-only modulation and computed wavefront shaping.
- It enables applications in rapid additive manufacturing, air-quality monitoring, and near-eye augmented reality by integrating tailored phase masks and computational design.
- These systems rely on mathematical models such as Fresnel propagation and FFTs, together with hardware optimizations, to deliver efficient, scalable, high-throughput performance.
Lensless HoloVAM encompasses a class of holographic volumetric additive manufacturing platforms and sensor architectures that replace traditional imaging optics with direct phase manipulation and computational propagation. These systems, spanning applications such as 3D fabrication, air-quality monitoring, and augmented reality, exploit the fundamentals of computer-generated holography (CGH), inline coherent diffraction, and metasurface-based phase engineering to achieve lens-free wavefront shaping, spatially resolved depth control, and high-throughput volume access. Core enabling principles include phase-only spatial light modulation, diffractive sample illumination, and algorithmically controlled propagation, yielding robust, ultracompact, and optically efficient devices across diverse application classes (Madsen et al., 5 Dec 2025, Zhou et al., 30 Jul 2025, Bravo-Frank et al., 6 Sep 2024, Lan et al., 2019).
1. Principles of Lensless Holographic Volumetric Access and Manipulation
Lensless HoloVAM systems forgo refractive, relay, or imaging optics entirely, instead encoding all spatial and depth selectivity in the phase of coherent light delivered to the target domain. In manufacturing contexts, a phase-only spatial light modulator (SLM) shapes the illumination to directly project tomographic light fields into a photopolymer, while in sensing or display, diffractive or metasurface-based phase elements reconstruct wavefronts or virtual images at distant, application-specific planes. Propagation and focusing are governed by computed phase masks (e.g., digital Fresnel lenses, Pancharatnam–Berry phase gradients), with all lateral and axial structuring handled through computational design (Madsen et al., 5 Dec 2025, Lan et al., 2019). Typical mathematical models rely on the Huygens–Fresnel integral, angular-spectrum methods, and fast Fourier transforms (FFTs) for field propagation and hologram synthesis.
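As a concrete illustration of the computational propagation step, the sketch below implements band-limited angular-spectrum propagation of a phase-only field with NumPy. The grid size, pixel pitch, wavelength, and propagation distance are illustrative placeholders, not parameters documented for the cited systems.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dz, dx):
    """Propagate a complex field u0 by distance dz (lengths in metres)
    using the band-limited angular-spectrum method."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=dx)          # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Example: a phase-only SLM field propagated 50 mm at 405 nm (placeholder phase).
phi = np.random.uniform(0, 2 * np.pi, (1024, 1024))
u_slm = np.exp(1j * phi)
u_target = angular_spectrum_propagate(u_slm, 405e-9, 50e-3, 8e-6)
intensity = np.abs(u_target) ** 2
```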
2. System Architectures and Variants
Volumetric Fabrication
In lensless HoloVAM additive manufacturing, the optical system comprises a 405 nm single-mode laser expanded to illuminate a digital SLM (e.g., Holoeye PLUTO‐2.1 UV), whose phase-encoded output is projected directly into a rotating vial containing photopolymer resin. Axial field shaping (e.g., Bessel-like PSFs) and Fourier relations between SLM and resin plane are imposed wholly via the phase mask φ(x,y), obviating the need for objectives or index-matching optics. The system supports build volumes up to 12 mm diameter and 10 mm height, with lateral voxel resolution of ≈100 µm and axial ≈200 µm, dictated principally by SLM pixel count and phase-tiling strategies (Madsen et al., 5 Dec 2025).
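A minimal sketch of the kind of computed phase element involved, assuming a square SLM tile and an illustrative 8 µm pixel pitch (not a documented PLUTO-2.1 specification): a digital Fresnel lens phase that focuses a collimated beam a chosen distance behind the SLM plane without any physical lens.

```python
import numpy as np

def fresnel_lens_phase(n_pix, pitch, wavelength, focal_length):
    """Phase profile of a digital Fresnel lens, wrapped to [0, 2*pi).

    A quadratic phase exp(-i*k*r^2 / (2f)) focuses a collimated beam at a
    distance f behind the SLM plane, replacing a refractive objective.
    """
    coords = (np.arange(n_pix) - n_pix / 2) * pitch
    X, Y = np.meshgrid(coords, coords)
    k = 2 * np.pi / wavelength
    phi = -k * (X ** 2 + Y ** 2) / (2 * focal_length)
    return np.mod(phi, 2 * np.pi)

# Illustrative values only: 1080-pixel tile, 8 µm pitch, 405 nm, 100 mm focus.
mask = fresnel_lens_phase(1080, 8e-6, 405e-9, 0.1)
```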
Volumetric Sensing
Lensless HoloVAM in air-quality monitoring adopts inline digital holography, leveraging a single collimated diode laser and a global-shutter CMOS sensor positioned directly at the sample chamber’s edge, enclosing a defined volumetric sampling region. Detected holograms contain interference between the reference beam and light scattered by particulate matter, with numerical reconstruction performed over a stack of propagation distances to localize and size particles in 3D. Deep neural networks (YOLOv5s backbone) enable real-time detection and classification with TPR >97%, surpassing traditional PM sensor throughput and coverage for large-diameter particulate (10–300 µm) (Bravo-Frank et al., 6 Sep 2024).
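The following sketch illustrates the core numerical step described above: refocusing a recorded inline hologram over a stack of propagation distances. The wavelength, pixel size, and depth range are assumptions for illustration, and the published pipeline adds the YOLOv5s detection and classification stage on top of the refocused stack.

```python
import numpy as np

def fresnel_tf(shape, dx, wavelength, z):
    """Fresnel transfer function H(fx, fy) for propagation distance z."""
    ny, nx = shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    return np.exp(1j * 2 * np.pi * z / wavelength) * \
           np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))

def refocus_stack(hologram, dx, wavelength, z_list):
    """Back-propagate a recorded inline hologram to a stack of depths.

    The square root of the intensity is used as an approximate field;
    sharp, high-contrast silhouettes at a given z indicate in-focus particles.
    """
    field = np.sqrt(hologram.astype(np.float64))
    F = np.fft.fft2(field)
    return [np.abs(np.fft.ifft2(F * fresnel_tf(hologram.shape, dx, wavelength, -z)))
            for z in z_list]

# Illustrative parameters: 2 µm pixels, 650 nm diode laser, 5-50 mm depth range.
holo = np.random.rand(512, 512)            # placeholder for a recorded hologram
stack = refocus_stack(holo, 2e-6, 650e-9, np.linspace(5e-3, 50e-3, 10))
```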
Near-Eye Display
Metasurface-based lensless HoloVAM displays utilize sub-micron-thick silicon nanobeam antenna arrays embedded in contact lenses. Each “pixel” imparts a spatially varying Pancharatnam–Berry phase (φ_PB) to incident circularly polarized light, projecting arbitrary virtual images onto the retina without auxiliary optics or electronics. Image prescription is achieved via Gerchberg–Saxton phase retrieval, and retinal image quality is determined by the meta-atom array size, fill factor, and phase quantization (Lan et al., 2019). This approach preserves near-total pupil transmission and eliminates refractive aberrations, though it is presently limited to static monochromatic imagery.
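A minimal Gerchberg–Saxton sketch for a far-field (Fourier) target, the retrieval step used to prescribe the display phase; the toy target and iteration count are illustrative, and the published design additionally accounts for meta-atom sampling, fill factor, and phase quantization.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=100, seed=0):
    """Phase-only hologram for a far-field (Fourier) target via Gerchberg-Saxton.

    Returns the phase to imprint on the metasurface/SLM plane so that the
    Fourier transform of exp(i*phase) approximates the target amplitude.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        # Forward: hologram plane (unit amplitude) -> image plane.
        img = np.fft.fft2(np.exp(1j * phase))
        # Enforce the target amplitude, keep the propagated phase.
        img = target_amp * np.exp(1j * np.angle(img))
        # Backward: image plane -> hologram plane, keep phase only.
        phase = np.angle(np.fft.ifft2(img))
    return phase

# Toy target: a bright square on a dark background.
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
phi = gerchberg_saxton(target, n_iter=50)
recon = np.abs(np.fft.fft2(np.exp(1j * phi))) ** 2
```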
3. Mathematical Foundations and Computational Propagation
The underlying mathematical frameworks in lensless HoloVAM exploit the Fresnel and angular-spectrum propagation models to compute field evolution and dose distributions:
- Volumetric Additive Manufacturing: The SLM-encoded phase pattern φ(x, y) creates a field $U_{\mathrm{SLM}}(x, y) = A(x, y)\,e^{i\varphi(x, y)}$, with the field at any resin voxel at propagation distance z given by the Huygens–Fresnel integral

  $$ U(x', y', z) = \frac{e^{ikz}}{i\lambda z} \iint U_{\mathrm{SLM}}(x, y)\, \exp\!\left\{\frac{ik}{2z}\left[(x'-x)^2 + (y'-y)^2\right]\right\} \, dx\, dy $$

  (Madsen et al., 5 Dec 2025). Polymerization is induced where the local integrated intensity, accumulated over the vial rotation, exceeds a resin-dependent threshold; a toy dose-accumulation sketch follows this list.
- Air-Quality Monitoring: The intensity recorded at the sensor follows the standard inline-holography model

  $$ I(x, y) = \left| R(x, y) + O(x, y) \right|^{2} = |R|^{2} + |O|^{2} + R^{*}O + RO^{*}, $$

  where $R$ is the undiffracted reference wave and $O$ is the field scattered by the sampled particles; forward modeling and numerical reconstruction utilize FFT-based Fresnel transforms for 3D refocusing and segmentation (Bravo-Frank et al., 6 Sep 2024).
- Metasurface Displays: The field at the retina, given a phase profile φ(x, y) imparted by the meta-atom array, is calculated by Fresnel propagation of the transmitted wave,

  $$ U_{\mathrm{ret}}(x', y') = \frac{e^{ikz}}{i\lambda z} \iint e^{i\varphi_{\mathrm{PB}}(x, y)}\, \exp\!\left\{\frac{ik}{2z}\left[(x'-x)^2 + (y'-y)^2\right]\right\} \, dx\, dy, $$

  where $\varphi_{\mathrm{PB}}(x, y) = 2\theta(x, y)$ is the geometric (Pancharatnam–Berry) phase imparted by a nanobeam rotated through angle θ(x, y) (Lan et al., 2019).
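As flagged in the first item above, here is a toy dose-accumulation sketch: per-angle intensity patterns are rotated back into the resin frame, summed, and thresholded. It illustrates the dose-threshold principle only and is not the HoloVAM projection pipeline; SciPy is assumed available, and the Gaussian spot and 50% threshold are arbitrary placeholders.

```python
import numpy as np
from scipy.ndimage import rotate

def accumulated_dose(projections, angles_deg):
    """Accumulate the dose delivered to a 2D resin slice as the vial rotates.

    Each projection is the intensity pattern produced at the slice for one
    vial orientation; rotating it back into the resin frame and summing gives
    the total exposure D(x, y) = sum_theta I_theta(x, y) (unit exposure time).
    """
    dose = np.zeros_like(projections[0], dtype=np.float64)
    for I, theta in zip(projections, angles_deg):
        dose += rotate(I, theta, reshape=False, order=1)
    return dose

# Toy example: an off-axis Gaussian spot delivered from 36 equally spaced angles.
y, x = np.mgrid[-64:64, -64:64]
spot = np.exp(-((x - 20) ** 2 + y ** 2) / 200.0)
angles = np.linspace(0, 360, 36, endpoint=False)
D = accumulated_dose([spot] * len(angles), angles)
printed = D > 0.5 * D.max()     # resin-dependent threshold, illustrative only
```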
4. Algorithmic and Hardware Optimizations
Complex field optimization and fast phase encoding are central to HoloVAM performance. For volumetric manufacturing, the wavefront is partitioned into tiles (HoloTile framework), each associated with specific Fourier and spatial parameters to enable multi-angle, dose-optimized fabrication. Efficient PSF shaping (e.g., Bessel beam phase ramps) extends voxel uniformity and feature aspect ratio. Hardware requirements are modest, with no relays or index-matching optics; a single SLM, laser, and rotation stage suffice (Madsen et al., 5 Dec 2025).
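A brief sketch of the conical (axicon) phase ramp that underlies Bessel-like PSF shaping; the cone angle, tile size, and pixel pitch are illustrative assumptions, not values from the HoloTile implementation.

```python
import numpy as np

def axicon_phase(n_pix, pitch, wavelength, cone_angle_rad):
    """Conical (axicon) phase ramp producing an extended Bessel-like focus.

    phi(r) = -k * r * sin(beta); a larger cone angle gives a tighter core and
    a shorter axial extent, trading lateral resolution against depth of focus.
    """
    coords = (np.arange(n_pix) - n_pix / 2) * pitch
    X, Y = np.meshgrid(coords, coords)
    r = np.hypot(X, Y)
    k = 2 * np.pi / wavelength
    return np.mod(-k * r * np.sin(cone_angle_rad), 2 * np.pi)

# Illustrative: 1024-pixel tile, 8 µm pitch, 405 nm, 0.5 degree cone angle.
phi_bessel = axicon_phase(1024, 8e-6, 405e-9, np.deg2rad(0.5))
```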
For sensor variants, fast GPU-based reconstruction pipelines sustain real-time volumetric data rates (e.g., 110 fps with a 0.37 mL sample volume per frame), and neural-network segmentation yields robust size and depth assignments with <2% error (Bravo-Frank et al., 6 Sep 2024).
Near-eye display implementations use e-beam–patterned metasurfaces with nanometer alignment tolerances. The design achieves >70% cross-polarization efficiency at 543 nm and angular resolutions of 0.17–1.7 arcmin, with field of view up to ≈11.4° diagonal (Lan et al., 2019).
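A short sketch of the geometric-phase mapping step, assuming the standard relation φ_PB = 2θ for the cross-polarized component of circularly polarized light; the array size and input phase map are placeholders.

```python
import numpy as np

def nanobeam_orientations(target_phase):
    """Map a target Pancharatnam-Berry phase map to nanobeam rotation angles.

    For circularly polarized input, an anisotropic half-wave-like antenna
    rotated by theta imparts a geometric phase of 2*theta on the
    cross-polarized output, so theta = phi / 2 (modulo pi).
    """
    return np.mod(target_phase / 2.0, np.pi)

# Example: convert a retrieved phase map into per-element orientations.
phase_map = np.random.uniform(0, 2 * np.pi, (128, 128))   # placeholder
theta_map = nanobeam_orientations(phase_map)
```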
5. Performance and Benchmarking
Key quantitative metrics from lensless HoloVAM implementations are summarized according to modality:
| Domain | Lateral/axial res. | Throughput/FOV | Efficiency/quality | Latency/operation | Special Features |
|---|---|---|---|---|---|
| Additive Manufacturing | 100 µm / 200 µm | ∅12 mm × 10 mm | η_total ≈8×10⁻⁴ J/mm³ | 25–30 s build | Bessel PSF, HoloTile tiling |
| Air-Quality Monitoring | 10 µm–300 µm objects | 26 L/min (≈0.43 L/s) | TPR 97%, FPR 0.6% | 110 fps, GPU | 3D refocus, deep detection |
| Near-Eye Display | 0.17–1.7 arcmin (retina) | 11.4° diag FOV | ×30 contrast | Passive, static | PB-phase metasurface |
Performance benchmarks for a complementary lensless holographic near-eye display show PSNR gains of 4.2 dB and SSIM gains of 0.15 over competing reconstructions of DIV2K images (Zhou et al., 30 Jul 2025). That display achieves pupil-diameter invariance (PSNR variation within ±0.3 dB over 2–5 mm pupils) and depth-cue enhancement in biplane scenes, and experimental validation confirms resolved features down to 4.39 lp/mm (2 mm pupil) with robustness under dynamic pupil conditions.
6. Applications, Limitations, and Future Prospects
Lensless HoloVAM enables optomechanical simplification and functional integration across multiple domains:
- Volumetric fabrication: Enables rapid (25–30 s), centimeter-scale 3D printing for biomedical scaffolds, micro-optics, and on-demand prototyping with high photon efficiency and minimal hardware (Madsen et al., 5 Dec 2025).
- Sensor networks: Air-quality monitoring configurations provide real-time, depth-resolved, and morphologically aware PM detection in distributed or resource-limited settings (Bravo-Frank et al., 6 Sep 2024).
- Augmented reality: Metasurface HoloVAM architectures promise ultracompact, see-through, passive AR overlay with minimal impact on natural vision, though limited by static, monochromatic capabilities (Lan et al., 2019).
Documented limitations include SLM pixel-density constraints, surface roughness arising from SLM dead space, restricted bandwidth (for metasurfaces), and static image rendering. Potential enhancements include higher-resolution SLMs, broadband meta-atom designs, temporal and spatial PSF multiplexing, and closed-loop wavefront correction.
A plausible implication is that lensless HoloVAM may substantially lower the barrier to scalable, portable, and application-flexible volumetric manufacturing and environmental sensing, provided advances in phase encoding, device integration, and digital-hardware codesign continue apace. Broader impact spans biomedical, industrial, and consumer domains, supporting adaptive, field-deployable, and miniaturized volumetric solutions (Madsen et al., 5 Dec 2025, Bravo-Frank et al., 6 Sep 2024).