Comparing Machine Learning and Physics-Based Nanoparticle Geometry Determinations Using Far-Field Spectral Properties (2509.08174v1)
Abstract: Anisotropic metal nanostructures exhibit polarization-dependent light scattering, a property widely exploited to determine the geometries of subwavelength structures with far-field microscopy. Here, we explore the use of variational autoencoders (VAEs) to determine the geometries of gold nanorods (NRs), such as in-plane orientation and aspect ratio, under linearly polarized dark-field illumination in an optical microscope. We input polarized dark-field scattering spectra and electron microscopy images into a dual-branch multimodal VAE with a single shared latent space, trained on paired spectra-image data with a learnable linear adapter. This enables prediction of Au NR geometry from polarized dark-field scattering spectra alone. We determine the geometrical parameters of orientation angle and aspect ratio quantitatively via both the dual-VAE and physics-based analysis. Orientation-angle prediction by the dual-VAE performs well with only a small (300-particle) training set, yielding a mean absolute error (MAE) of 14.4° and a concordance correlation coefficient (CCC) of 0.95. This is only marginally worse than the physics-based cos(2θ) fit of scattering intensity versus polarizer angle, which achieves an MAE of 8.78° and a CCC of 0.99. Aspect-ratio determination is similarly close between the dual-VAE and the physics-based fit (MAE of 0.21 vs. 0.23; CCC of 0.53 vs. 0.68). By learning a shared latent manifold linking spectra and morphology, the model can generate NR images with accurate orientation and aspect ratio from spectra-only input in the small-data regime (300 particles), suggesting a general recipe for inverse nano-optical problems requiring both structure and orientation information.
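The physics-based baseline described above fits the polarization-dependent scattering intensity to a cos(2θ) law and evaluates agreement with the concordance correlation coefficient (CCC). A minimal sketch of both steps is below; the function names, the fitting routine, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def cos2theta(theta_deg, a, b, phi_deg):
    # I(theta) = a + b*cos(2*(theta - phi)): a is the offset, b the modulation
    # depth, phi the in-plane orientation of the nanorod long axis (degrees).
    return a + b * np.cos(2.0 * np.radians(theta_deg - phi_deg))

def fit_orientation(theta_deg, intensity):
    # Fit the cos(2theta) model and normalize the angle to [0, 180) degrees.
    p0 = [np.mean(intensity), 0.5 * np.ptp(intensity), 0.0]
    (a, b, phi), _ = curve_fit(cos2theta, theta_deg, intensity, p0=p0)
    if b < 0:              # absorb a sign flip into a 90-degree shift
        phi += 90.0
    return phi % 180.0

def concordance_ccc(x, y):
    # Lin's concordance correlation coefficient between prediction and truth.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Synthetic check: a nanorod oriented at 30 degrees, intensity sampled
# every 10 degrees of polarizer angle.
theta = np.arange(0.0, 180.0, 10.0)
signal = cos2theta(theta, 1.0, 0.8, 30.0)
phi_hat = fit_orientation(theta, signal)
```

Because cos(2θ) is π-periodic, the recovered orientation is only defined modulo 180°; the sign-flip normalization in `fit_orientation` keeps the reported angle in that range.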