
High-Throughput Computational Screening

Updated 27 November 2025
  • High-throughput computational screening is a computational paradigm that uses automated multi-stage pipelines integrating physics-based models and machine learning to rapidly assess vast candidate libraries.
  • It leverages multi-fidelity models where early low-fidelity surrogates and later high-fidelity ab initio methods balance speed and accuracy, significantly reducing computational cost.
  • Adaptive strategies and robust automation frameworks enhance precision and scalability, enabling efficient discovery of high-performing materials and molecules for targeted applications.

High-throughput computational screening (HTCS) is a paradigm in materials science and molecular discovery that enables the rapid evaluation of large candidate libraries for targeted properties using automated computational workflows. HTCS frameworks integrate physics-based models, surrogate predictors, machine learning, and database infrastructure to triage, prioritize, and rank candidates with substantially reduced labor and computational cost relative to traditional one-at-a-time simulations. The ultimate objective is to efficiently maximize the yield of "positives"—candidates meeting user-defined performance criteria—while minimizing overall computational expenditure, often subject to hard resource constraints.

1. Principles and Mathematical Foundations

The formal structure of HTCS pipelines comprises a sequential, multi-stage process in which the candidate library $\mathbb{X}$, typically containing $|\mathbb{X}| \sim 10^4$–$10^8$ entities (molecules, crystals, structures, defects, or mutants), is filtered through $N$ surrogate models of increasing fidelity and cost, $S_1 \to S_2 \to \dots \to S_N$. Each stage $S_i$ is defined as a triplet $(f_i, \lambda_i, c_i)$, where $f_i$ is a predictive model assigning a score $y_i = f_i(x)$, $\lambda_i$ is a threshold, and $c_i$ is the per-candidate computational cost. The compound filtering criterion is $x \in \mathbb{X}_i : f_i(x) \geq \lambda_i$, producing the final set of "positives" $\mathbb{Y} = \{x \in \mathbb{X}_N : f_N(x) \geq \lambda_N\}$.

The central optimization metric in pipeline design is the return-on-computational-investment (ROCI), expressed as the expected yield $r(\lambda)$ per unit cost $h(\lambda)$, where

$$r(\lambda) = |\mathbb{X}| \cdot P(f_1 \geq \lambda_1, \dots, f_N \geq \lambda_N)$$

$$h(\lambda) = |\mathbb{X}| \sum_{i=1}^{N} c_i \int_{y_1 \geq \lambda_1, \dots, y_{i-1} \geq \lambda_{i-1}} p(y_1, \dots, y_{i-1})\, dy_1 \dots dy_{i-1}$$

with $p(y_1, \dots, y_N)$ the joint surrogate score distribution. The constrained optimization solves

$$\psi^* = \operatorname{argmax}_{\psi = [\lambda_1, \dots, \lambda_{N-1}]}\; r([\psi, \lambda_N]) \quad \text{subject to} \quad h([\psi, \lambda_N]) \leq C$$

or, equivalently, a weighted unconstrained trade-off. Thresholds $\lambda_i$ are tuned via grid or gradient-based search, using a numerically estimated $p$ (e.g., EM-learned Gaussian mixtures) (Woo et al., 2021).
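
A minimal sketch of this threshold-tuning step, assuming a two-stage pipeline and a synthetic correlated score distribution standing in for the EM-fitted $p$ (the costs, thresholds, and budget below are illustrative assumptions, not values from the cited work):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic joint surrogate scores for a two-stage pipeline (illustrative only):
# y1 = cheap first-stage surrogate, y2 = expensive final-stage score, correlation rho.
n_lib, rho = 50_000, 0.8
y1, y2 = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n_lib).T

c1, c2 = 1.0, 100.0    # per-candidate cost of each stage
lambda_final = 1.5     # fixed last-stage threshold defining a "positive"
budget = 2.0e6         # hard cost constraint C

def yield_and_cost(lambda1):
    """Monte Carlo estimates of r(lambda) and h(lambda) for a given stage-1 threshold."""
    passed1 = y1 >= lambda1
    r = int(np.sum(passed1 & (y2 >= lambda_final)))   # expected number of positives
    h = n_lib * c1 + passed1.sum() * c2               # everyone pays c1; survivors also pay c2
    return r, h

# Grid search over the stage-1 threshold, keeping only budget-feasible settings.
best = max(
    ((lam, *yield_and_cost(lam)) for lam in np.linspace(-2.0, 3.0, 101)),
    key=lambda t: t[1] if t[2] <= budget else -np.inf,
)
print(f"lambda1* = {best[0]:.2f}, expected positives = {best[1]}, cost = {best[2]:.2e}")
```

The same construction extends to $N$ stages, with the grid search replaced by gradient-based optimization over $\psi = [\lambda_1, \dots, \lambda_{N-1}]$ when the threshold space grows.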

2. Multi-Fidelity Models and Adaptive Strategies

Multi-fidelity screening exploits predictors of differing accuracy and cost. Early stages typically involve rapid, low-fidelity surrogates (empirical force fields, ML-generated scores, or simplified physics), while later stages employ expensive, high-fidelity ab initio methods (DFT, high-level quantum chemistry, full molecular dynamics). The optimal stage ordering generally follows increasing per-candidate cost, and the approach is robust to variations in individual stage accuracy and in the correlations among stage scores.

Operational strategies allow dynamic adjustment of thresholds $\lambda_i$ in response to budget or accuracy targets. By tuning a trade-off parameter $\alpha$ in the objective, pipelines interpolate between throughput maximization and cost minimization, accommodating real-time monitoring and re-optimization as empirical pass rates and budget consumption evolve. Empirically, high inter-stage score correlation (e.g., $\rho \sim 0.8$–$0.9$) yields near-maximal cost savings; even moderate correlation ($\rho \sim 0.5$) provides substantial gains over single-fidelity or naïve strategies. In realistic deployments (e.g., lncRNA classification, $\sim$50,000 molecules), adaptive four-stage pipelines achieved $>44\%$ cost savings at $>96\%$ accuracy (Woo et al., 2021).
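
A hedged illustration of the weighted unconstrained trade-off and of the role of inter-stage correlation, reusing the synthetic two-stage setup from the previous sketch (the objective form $r - \alpha h$ and all numerical values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_lib, c1, c2, lambda_final, alpha = 50_000, 1.0, 100.0, 1.5, 1e-3

def best_tradeoff(rho):
    """Maximize the weighted objective r - alpha * h over the stage-1 threshold."""
    y1, y2 = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n_lib).T
    baseline_cost = n_lib * (c1 + c2)        # naive strategy: run the expensive stage on everything
    results = []
    for lam in np.linspace(-2, 3, 101):
        passed = y1 >= lam
        r = int(np.sum(passed & (y2 >= lambda_final)))
        h = n_lib * c1 + passed.sum() * c2
        results.append((r - alpha * h, r, h))
    _, r, h = max(results)                   # threshold with the best yield/cost trade-off
    return r, 1.0 - h / baseline_cost        # yield and fractional cost saving

for rho in (0.3, 0.5, 0.8, 0.9):
    r, saving = best_tradeoff(rho)
    print(f"rho = {rho:.1f}: positives kept = {r}, cost saving = {saving:.0%}")
```

Running this kind of toy experiment reproduces the qualitative trend described above: the more strongly the cheap and expensive scores correlate, the more candidates the pipeline can discard early without losing positives.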

3. Domain-Specific Workflows and Descriptor Design

HTCS methodologies span diverse domains. In materials science, workflows are tailored to specific property targets (e.g., thermal conductivity, thermoelectrics, ionic conductivity, catalysis, gas adsorption/selectivity, piezoelectricity, magnetic function):

  • Thermal Screening: Quasi-harmonic Debye (AGL) models compute the Debye temperature $\Theta_D$, Grüneisen parameter $\gamma$, and lattice thermal conductivity $\kappa_l$ from DFT energy–volume curves. Screening proceeds by ranking $\kappa_l$ or $\Theta_D$; throughput is one to two orders of magnitude faster than full BTE phonon calculations, with Pearson $r \approx 0.88$ and Spearman $\rho \approx 0.80$ against experiment (Toher et al., 2014).
  • Thermoelectrics: An effective-mass and deformation-potential–based electrical descriptor $\chi$ and an elastic-constant–based anharmonicity descriptor $\gamma$ rapidly estimate the power factor and lattice conductivity, bypassing full electron–phonon BTE calculations (Jia et al., 2019).
  • Ion Conductors: The pinball model, a frozen-host electrostatic PES, allows automated molecular dynamics for Li-diffusion screening, drastically accelerating candidate evaluation relative to on-the-fly DFT-MD (Kahle et al., 2019).
  • Catalysis: For bimetallic catalyst discovery, DOS-based pattern similarity metrics replace d-band center and higher moment descriptors. Candidates are ranked via full slab DOS distance metrics, validated by cost-normalized productivity and selectivity benchmarks (Yeo et al., 2020).
  • MOF & Nanoporous Materials: Multi-stage screening begins with geometric and simple adsorption descriptors (PLD, LCD, void fraction, KHK_H), followed by GCMC or ML-predicted selectivity. Framework flexibility is increasingly addressed via MLIPs (e.g., PFP) to capture non-classical effects essential for trace gas separation in humid environments (Bonakala et al., 8 Sep 2025, Tan et al., 14 Feb 2025, Ren et al., 2022).

4. Machine Learning and Data Analytics Integration

With the rise of open, large-scale databases (e.g., CSD, Materials Project, AFLOWLIB), ML-driven HTCS now supports both pre-screening and surrogate property prediction. Descriptors include molecular fingerprints (MACCS, PubChem), structural metrics (PLD, LCD), atom- and bond-type fractions, electrochemical properties, and computed DFT observables. Feature importances and SHAP analyses clarify key factors (e.g., the I$_2$ Henry coefficient and ring-N content for iodine capture, surface DOS for catalysis). ML is leveraged to extend screening to hundreds of thousands or millions of candidates, with top-$k$ recall often exceeding 90% against full simulation (Tan et al., 14 Feb 2025, Ren et al., 2022, Afzal et al., 2019).
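
A small sketch of the top-$k$ recall check mentioned above, using a random-forest surrogate on synthetic descriptors (the descriptor matrix, target property, and model choice are placeholders rather than the specific models of the cited studies):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: descriptor matrix X and a "ground-truth" property from full simulation.
n, d = 5000, 16
X = rng.normal(size=(n, d))
y_true = X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n)

# Train the surrogate on a labeled subset, then predict over the whole library.
train = rng.choice(n, size=1000, replace=False)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], y_true[train])
y_pred = model.predict(X)

# Top-k recall: fraction of the true best-k candidates recovered among the predicted best-k.
k = 100
top_true = set(np.argsort(y_true)[-k:])
top_pred = set(np.argsort(y_pred)[-k:])
print(f"top-{k} recall: {len(top_true & top_pred) / k:.2f}")
```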

Active learning and closed-loop strategies employ iterative retraining and selection of high-uncertainty or high-value samples, optimizing simulation resources. Graph neural networks now enable electronic property prediction for complex systems (MOFs, perovskites, 2D materials), further reducing the need for costly ab initio computation (Bonakala et al., 8 Sep 2025, Ren et al., 2022).
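
A compact sketch of one such closed-loop acquisition strategy, using per-tree disagreement of a random-forest ensemble as the uncertainty signal (the oracle function stands in for an expensive simulation; the batch size and round count are arbitrary assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 16))
oracle = lambda i: X[i, 0] - 0.5 * X[i, 1] ** 2      # stand-in for an expensive simulation

labeled = [int(i) for i in rng.choice(len(X), size=200, replace=False)]
y = {i: oracle(i) for i in labeled}

for round_ in range(5):
    model = RandomForestRegressor(n_estimators=100, random_state=round_)
    model.fit(X[labeled], [y[i] for i in labeled])
    # Per-tree predictions give an ensemble-disagreement uncertainty estimate.
    per_tree = np.stack([tree.predict(X) for tree in model.estimators_])
    uncertainty = per_tree.std(axis=0)
    uncertainty[labeled] = -np.inf                   # never re-select already-labeled points
    batch = [int(i) for i in np.argsort(uncertainty)[-50:]]  # most uncertain candidates
    for i in batch:                                  # "run the simulations" and grow the training set
        labeled.append(i)
        y[i] = oracle(i)
    print(f"round {round_}: labeled = {len(labeled)}")
```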

5. Automation, Infrastructure, and Best Practices

HTCS relies on robust workflow orchestration (FireWorks, AiiDA, Maptool, Custodian, ASE), stringent data management (JSON/HDF5 checkpointing, metadata capture, database integration), and systematic error handling. Provenance tracking and checkpoint–restart protocols enable scalable campaigns on HPC infrastructure. Interoperability with tools such as pymatgen and VASPsol ensures consistency for interface systems and surface/ligand screening (Mathew et al., 2016).
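
A bare-bones illustration of the checkpoint-restart pattern that such frameworks formalize (this is a generic sketch, not the FireWorks or AiiDA API; the file name and the fake calculation are assumptions):

```python
import json, os

CHECKPOINT = "screening_state.json"                  # hypothetical checkpoint file

def load_state(candidates):
    """Resume from a previous run if a checkpoint exists, otherwise start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as fh:
            return json.load(fh)
    return {"done": {}, "pending": list(candidates)}

def save_state(state):
    # Write to a temporary file and replace atomically so a crash cannot corrupt the checkpoint.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as fh:
        json.dump(state, fh)
    os.replace(tmp, CHECKPOINT)

def fake_calculation(name):                          # stand-in for a DFT/MD job
    return {"energy": -1.0 * len(name)}

state = load_state(candidates=["mat-001", "mat-002", "mat-003"])
while state["pending"]:
    name = state["pending"].pop(0)
    try:
        state["done"][name] = fake_calculation(name)
    except Exception as err:                         # record failures and keep the campaign running
        state["done"][name] = {"error": str(err)}
    save_state(state)                                # checkpoint after every candidate
```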

Best practices encompass:

  • Structure and tolerance parameterization (max_area, max_mismatch, slab thickness, vacuum)
  • Automated failure recovery (electronic/ionic stability, elastic constant sampling)
  • Versioned storage and record-keeping for reproducibility
  • Parameter sweeps and exploratory runs at modest cost, with full-fidelity refinement reserved for top hits (a minimal sweep sketch follows this list)
  • Thermodynamic or Boltzmann averaging for compositionally disordered systems (Garcia et al., 2019).
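
A minimal sketch of how such a tolerance-parameter sweep might be enumerated before submission (the parameter names echo the interface-matching settings listed above; the values and the submission stub are assumptions):

```python
from itertools import product

# Hypothetical tolerance grid for interface/slab construction (values are illustrative).
sweep = {
    "max_area": [200, 400, 800],         # Å^2, maximum interface supercell area
    "max_mismatch": [0.01, 0.02, 0.05],  # fractional lattice mismatch tolerance
    "slab_thickness": [10, 15],          # Å
    "vacuum": [15, 20],                  # Å
}

def submit_exploratory_job(params):      # stand-in for a workflow-engine submission call
    print("queue low-cost run:", params)

for values in product(*sweep.values()):
    submit_exploratory_job(dict(zip(sweep.keys(), values)))
# Top-ranked hits from these exploratory runs are then re-run at full fidelity.
```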

6. Benchmark Achievements and Limitations

HTCS frameworks have led to pivotal advances across multiple materials and molecular domains:

  • Identification of Li$_{10}$GeP$_2$S$_{12}$ as a reference fast solid electrolyte; discovery of the novel oxide halide Li$_5$Cl$_3$O (Kahle et al., 2019)
  • Iodine-capture MOFs with six-membered aromatic rings and N-rich linkers, combining high $K_H$ and exothermic $Q_{\mathrm{ads}}$ (Tan et al., 14 Feb 2025)
  • Catalytic alloys (e.g., Ni$_{61}$Pt$_{39}$) generated by DOS-based screening, achieving a 9.5-fold increase in cost-normalized productivity over Pd (Yeo et al., 2020)
  • Piezoelectric perovskite alloys with morphotropic phase boundaries, ranked via TET distortion interpolation and convex-hull stability (Armiento et al., 2013)
  • Two-dimensional ferroelectrics and altermagnets discovered by symmetry-driven screening of C2DB entries, with magnetic and switchable properties validated by DFT/NEB/MC analysis (Kruse et al., 2022, Sødequist et al., 11 Jan 2024)

Limitations remain, particularly in force-field or surrogate model accuracy (e.g., MOF flexibility effects, host–guest energetics, non-linear mixing enthalpies), database biases, and synthetic feasibility of in silico hits. Descriptor-driven pipelines necessarily trade detail for scale; predictions of absolute magnitudes may differ considerably from experiment, though ordinal ranking (hit identification) remains robust (Toher et al., 2014).

7. Outlook and Integration

HTCS continues to expand with the integration of generative design, inverse screening, robotic synthesis feedback, multi-objective optimization, and uncertainty quantification. Continuous improvement in ML, data infrastructure, and workflow automation promises orders-of-magnitude greater throughput and increasing reliability. The approach is now an established pillar of rational materials and molecular discovery, with continued development aimed at closing the loop between computational prediction, synthesis, and functional validation (Afzal et al., 2019, Ren et al., 2022).
