
Static Physics-Informed BoF for SPM Imaging

Updated 16 January 2026
  • The static physics-informed BoF approach converts AFM/MFM images into fixed-length histograms using a learned dictionary of physical words.
  • It integrates descriptor calculations and energy-based weighting to capture nanoscale morphology and magnetic structures with high robustness.
  • The method serves as a bridge to autonomous multi-objective Bayesian optimization by mapping high-dimensional imaging data to interpretable surrogates.

The static physics-informed Bag-of-Features (BoF) representation is a computational methodology for converting atomic force microscopy (AFM) and magnetic force microscopy (MFM) images of combinatorial materials libraries into robust, interpretable, fixed-length feature vectors that encode physicochemically meaningful information. This approach enables quantitative, multi-objective structure–property analysis and autonomous exploration of complex materials landscapes by serving as the interface between high-dimensional imaging data and advanced optimization frameworks, such as multi-objective Bayesian optimization (MOBO) (Barakati et al., 9 Jan 2026).

1. Foundational Principles and Motivation

The static physics-informed BoF representation models an SPM image as a histogram over “physical words,” which are learned dictionary elements representing local patterns in the real/phase space of the image. Given the space of local patch descriptors $\mathcal X \subset \mathbb R^d$, a dictionary $\{d_k\}_{k=1}^K$ is constructed so that each image $I$ is encoded as a feature vector $h \in \mathbb R^K$, where

$$h = [h_1, \dots, h_K]$$

and $h_k$ is the count (or weighted count) of patches most similar to physical word $d_k$. This encoding is translation-invariant and robust to scan offsets, resolution changes, and local noise, as it discards absolute spatial coordinates and aggregates local statistics, fulfilling key requirements for SPM data analysis.

The static property indicates that the BoF feature vector for a given image is fixed upon extraction, providing reproducible input for surrogate modeling and acquisition functions in MOBO. By employing physically motivated dictionaries and patch descriptors, the representation encapsulates essential nanoscale morphology and magnetic structure, directly supporting the optimization of competing objectives such as roughness, domain size, and contrast.

2. Feature Extraction Pipeline

The pipeline consists of sequential steps that map raw AFM/MFM images to collections of local descriptors and, subsequently, to the BoF histogram:

  1. Preprocessing: Each image is first flattened via plane subtraction

$$z'(x,y) = z(x,y) - (a x + b y + c)$$

where $(a,b,c)$ minimizes the least-squares error, followed by denoising using a Gaussian filter with kernel $G_\sigma(u,v)$.

  2. Patch Extraction: Filtered images $z_f(x,y)$ are divided into overlapping patches $P_i$, each yielding a descriptor $x_i$.
  3. Descriptor Calculation: Five classes of features, encapsulated in Table 1 of the cited work, include:
    • Root-mean-square roughness $R_q$ and height-distribution moments $\mu_k$ for AFM,
    • Autocorrelation-derived correlation length $\lambda$,
    • Voronoi-based mean particle diameter $D_{\rm mean}$,
    • MFM domain size $L$ (via the peak frequency $f^*$ of the power spectrum),
    • Domain magnitude/contrast $R_{\rm mag}$.

These descriptors are calculated according to physically and statistically justified formulas, e.g., for roughness:

$$R_q = \sqrt{\frac{1}{N}\sum_{(x,y)\in P}\left(z_f(x,y) - \bar z\right)^2},\qquad \bar z = \frac{1}{N}\sum_{(x,y)\in P} z_f(x,y)$$
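The preprocessing and roughness steps above can be sketched in a few lines of numpy/scipy. This is a minimal illustration under the stated formulas; the function names and defaults are assumptions, not taken from the cited work:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def flatten_plane(z):
    """Least-squares plane subtraction: z'(x,y) = z(x,y) - (a*x + b*y + c)."""
    ny, nx = z.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(z.size)])
    coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    return z - (A @ coeffs).reshape(z.shape)

def preprocess(z, sigma=1.0):
    """Flatten, then denoise with a Gaussian kernel G_sigma."""
    return gaussian_filter(flatten_plane(z), sigma=sigma)

def rms_roughness(patch):
    """R_q = sqrt(mean((z_f - z_bar)^2)) over one patch."""
    return np.sqrt(np.mean((patch - patch.mean()) ** 2))
```

Applied to a purely tilted surface, `flatten_plane` returns a residual of (numerically) zero, so $R_q$ vanishes as expected.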

3. Bag-of-Features Model Construction

BoF encoding consists of three main stages:

  • Dictionary Learning: All patch descriptors $\{x_i\}$ are pooled across the dataset and clustered (typically by $K$-means) to identify $K$ representative atoms $\{d_k\}$.
  • Assignment: Each descriptor $x_i$ is assigned either via hard nearest-neighbor or soft Gaussian-weighted assignment:

$$w_{ik} = \begin{cases} 1 & k = \arg\min_j \|x_i - d_j\|_2 \\ 0 & \text{otherwise} \end{cases}$$

or

$$w_{ik} = \frac{\exp(-\|x_i - d_k\|^2 / (2\sigma^2))}{\sum_{j=1}^K \exp(-\|x_i - d_j\|^2 / (2\sigma^2))}$$

  • Histogram Encoding: For each image, $h_k = \sum_{i=1}^M w_{ik}$ is computed for $k = 1, \dots, K$.

The histogram $h$ thus quantifies the abundance and diversity of distinct local patterns in the image, reducing thousands of pixel values to a tractable set of interpretable features.
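The three stages can be sketched in pure numpy: a plain $K$-means loop for dictionary learning, Gaussian soft assignment, and the histogram sum. This is an illustrative sketch (function names, seeding, and defaults are assumptions), not the cited implementation:

```python
import numpy as np

def kmeans_dictionary(X, K, n_iter=50, seed=0):
    """Learn K 'physical words' by plain K-means on pooled descriptors X (M, d)."""
    rng = np.random.default_rng(seed)
    D = X[rng.choice(len(X), K, replace=False)].copy()
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - D[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):           # skip empty clusters
                D[k] = X[labels == k].mean(axis=0)
    return D

def soft_assign(X, D, sigma=1.0):
    """Gaussian-weighted soft assignment w_ik; each row sums to 1."""
    d2 = ((X[:, None, :] - D[None]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    return W / W.sum(axis=1, keepdims=True)

def bof_histogram(X, D, sigma=1.0):
    """h_k = sum_i w_ik: the fixed-length BoF vector for one image."""
    return soft_assign(X, D, sigma).sum(axis=0)
```

Because each row of the soft-assignment matrix sums to one, the histogram always sums to the patch count $M$, independent of $K$ or $\sigma$.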

4. Physics-Informed Constraints and Weighting

To align BoF features with underlying physical phenomena, patch assignment weights $w_{ik}$ are modulated by energy-based quantities:

  • Exchange-Energy Weighting (MFM):

$$\mathcal W_{\rm exch}(x,y) = A \,\|\nabla \mathbf m(x,y)\|^2$$

where $\mathbf m$ is the local magnetization and $A$ the exchange stiffness.

  • Surface-Energy Weighting (AFM):

$$\mathcal W_{\rm surf}(x,y) = \gamma_s \,\|\nabla z_f(x,y)\|^2$$

with surface tension $\gamma_s$.

The average weight over each patch $P_i$, $\overline{\mathcal W}(P_i)$, is incorporated as $w_{ik} \to w_{ik} \times \overline{\mathcal W}(P_i)$. This emphasizes domains that are physically relevant, modulating the dictionary-based encoding by the energetic landscape at the nanoscale.
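Both energy weights are squared gradient magnitudes of a field, so they reduce to a call to `np.gradient`. A minimal sketch, assuming a scalar height map for AFM and a per-pixel magnetization field of shape `(H, W, C)` for MFM (names are illustrative):

```python
import numpy as np

def surface_energy_weight(z_f, gamma_s=1.0):
    """W_surf(x,y) = gamma_s * |grad z_f|^2, evaluated per pixel."""
    gy, gx = np.gradient(z_f)
    return gamma_s * (gx ** 2 + gy ** 2)

def exchange_energy_weight(m, A=1.0):
    """W_exch(x,y) = A * |grad m|^2 for a magnetization field m of shape (H, W, C)."""
    total = np.zeros(m.shape[:2])
    for c in range(m.shape[-1]):              # sum squared gradients per component
        gy, gx = np.gradient(m[..., c])
        total += gx ** 2 + gy ** 2
    return A * total

def patch_weight(W, i0, j0, size):
    """Mean energy weight over one patch P_i, used to rescale w_ik."""
    return W[i0:i0 + size, j0:j0 + size].mean()
```

A unit ramp has $\|\nabla z_f\|^2 = 1$ everywhere, so its surface weight is exactly $\gamma_s$: a quick sanity check on the convention used.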

5. Dimensionality Reduction and Normalization

After histogram construction, normalization and dimensionality reduction are performed:

  • $L^1$ normalization: $\hat h = h / \|h\|_1$
  • $L^2$ normalization (optional): $\tilde h = h / \|h\|_2$
  • Principal Component Analysis (PCA): Applied to the ensemble $\{\hat h^{(n)}\}$ to obtain compact representations $\hat h^{(n)}_{\rm PCA} = U^\top (\hat h^{(n)} - \mu_h)$, facilitating efficient optimization in the reduced feature space.

This step addresses feature scaling and redundancy, ensuring that the surrogate models operate on statistically stable, non-degenerate inputs.
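Both steps are standard linear algebra; a compact sketch using an SVD-based PCA (illustrative names, not the cited implementation):

```python
import numpy as np

def l1_normalize(h):
    """hat_h = h / ||h||_1."""
    return h / np.abs(h).sum()

def pca_reduce(H, n_components):
    """PCA on stacked histograms H (N, K): h_PCA = U^T (h - mu_h).

    Returns the projected data, the projection matrix U (K, n_components),
    and the mean mu_h needed to encode future histograms consistently.
    """
    mu = H.mean(axis=0)
    # Rows of Vt from the SVD of the centered data are the principal directions.
    _, _, Vt = np.linalg.svd(H - mu, full_matrices=False)
    U = Vt[:n_components].T
    return (H - mu) @ U, U, mu
```

Returning `U` and `mu` matters in the closed loop: every newly acquired histogram must be projected with the same $U$ and $\mu_h$ as the training ensemble, or the surrogate inputs drift.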

6. Integration with Multi-Objective Bayesian Optimization

BoF vectors serve as inputs for surrogate models and acquisition functions in a MOBO workflow:

  • Surrogates: For each objective $j$ (e.g., $R_q$, $\lambda$, $L$, $R_{\rm mag}$), a Gaussian process $f_j(h) \sim \mathcal{GP}$ is fitted.
  • Acquisition: Candidate compositions are scored by the batch q-Expected Hypervolume Improvement (qEHVI) criterion:

$$\alpha(\{h_x\}_{x \in \mathcal X_{\rm cand}}) = \mathbb E\left[\max\{0,\ \mathrm{HV}(P \cup \{f(h_x)\}) - \mathrm{HV}(P)\}\right]$$

where $\mathrm{HV}(\cdot)$ is the dominated hypervolume.

  • Closed-Loop Operation: Iterative selection, measurement, encoding, and update steps allow rapid mapping of feature landscapes with a minimal acquisition budget.

This approach turns static imaging data into actionable feedback, actively steering experimental exploration of combinatorial materials spaces.
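In practice the surrogate/acquisition layer is usually delegated to a dedicated library (qEHVI is implemented, e.g., in BoTorch). Purely to illustrate the surrogate step, a basic GP posterior over BoF vectors with an RBF kernel can be written in numpy; this is a generic textbook GP, not the cited work's model, and all names and defaults are assumptions:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel between row sets A (n, K) and B (m, K)."""
    d2 = ((A[:, None, :] - B[None]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(H_train, y, H_test, lengthscale=1.0, noise=1e-6):
    """Posterior mean and variance of one objective f_j at test BoF vectors."""
    K = rbf_kernel(H_train, H_train, lengthscale) + noise * np.eye(len(y))
    Ks = rbf_kernel(H_test, H_train, lengthscale)
    Kss = rbf_kernel(H_test, H_test, lengthscale)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```

At the training inputs the posterior mean reproduces the observations and the predictive variance collapses toward the noise level, which is the behavior the acquisition function exploits to balance exploration and exploitation.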

7. Pareto-Front Mapping and Autonomous Discovery

Pareto mapping quantifies trade-offs among competing objectives at the feature level:

  • Pareto Dominance: A vector $h^a$ dominates $h^b$ if $\forall j: f_j(h^a) \ge f_j(h^b)$ and $\exists j: f_j(h^a) > f_j(h^b)$.
  • Front Construction: The Pareto front $P = \{h : \nexists\, h' \text{ that dominates } h\}$ is maintained, with non-dominated solutions updated as new measurements are acquired.
  • Hypervolume Indicator: $\mathrm{HV}(P)$ is used for acquisition targeting, measuring the multi-objective coverage.

This methodology enables interpretable mapping of structure–property trends, identification of clusters and trade-off regimes, and selection of optimal candidate compositions within high-dimensional feature landscapes.
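The dominance and hypervolume definitions above translate directly into code. A minimal sketch for two maximized objectives (exact hypervolume in higher dimensions needs specialized algorithms; names here are illustrative):

```python
import numpy as np

def pareto_front(F):
    """Non-dominated rows of objective matrix F (N, J), maximization convention."""
    keep = []
    for i, f in enumerate(F):
        dominated = any(
            np.all(g >= f) and np.any(g > f) for j, g in enumerate(F) if j != i
        )
        if not dominated:
            keep.append(i)
    return F[keep]

def hypervolume_2d(front, ref):
    """Dominated hypervolume of a 2-objective front w.r.t. a reference point.

    Sweeps the front by descending f1 and sums the staircase rectangles.
    """
    pts = front[np.argsort(-front[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 > prev_f2:
            hv += (f1 - ref[0]) * (f2 - prev_f2)
            prev_f2 = f2
    return hv
```

For example, the front $\{(3,1), (2,2), (1,3)\}$ with reference $(0,0)$ dominates a staircase of area $3 + 2 + 1 = 6$.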

8. End-to-End Workflow Summary

The static physics-informed BoF approach is operationalized in autonomous probe microscopy workflows according to the following structured algorithm:

1. Offline:
   • Acquire AFM/MFM scans, preprocess (flatten, filter), extract descriptors {x_i}, learn dictionary {d_k}.
2. Initialize:
   • Choose initial compositions {x_n}, perform measurement, extract BoF histograms h^{(n)}. 
   • Fit GP surrogates f_j(h).
3. Iterative Loop:
   a) Acquisition: Optimize qEHVI, select candidates.
   b) Measurement: Acquire, extract descriptors.
   c) Encoding: Compute assignment, BoF histogram h^{(t)}.
   d) Update: Augment data, retrain surrogates.
   e) Pareto Update: Maintain non-dominated vectors.
   f) (Optional) PCA/normalization refresh.
4. Termination: Stop when budget/convergence achieved.
5. Output: Report Pareto-optimal compositions and corresponding features.

This approach generalizes beyond specific modalities or materials systems, as demonstrated for Au-Co-Ni combinatorial films, and offers extensibility to diverse imaging and feature sets. The static, physics-informed BoF methodology thus underpins closed-loop, multi-objective discovery frameworks and advances the interpretability and efficiency of autonomous SPM research (Barakati et al., 9 Jan 2026).
