
Charge-Equilibrated TensorNet (QET)

Updated 11 November 2025
  • Charge-Equilibrated TensorNet (QET) is a neural network-based atomistic potential that combines equivariant tensor representations with an analytic charge equilibration scheme.
  • The model employs a tensor-based message-passing framework with strict E(3)-equivariance to efficiently compute energies, forces, and reactive electrostatics over large material systems.
  • QET achieves linear scaling by replacing cubic charge equilibration with local neighbor summations, enabling accurate simulations in energy storage, catalysis, and interfacial studies.

Charge-Equilibrated TensorNet (QET) is a neural network-based atomistic potential that achieves charge-awareness and strict equivariance with linear scaling, enabling efficient, self-consistent modeling of electrostatics and charge transfer in materials far beyond the computational reach of conventional approaches. QET solves the long-standing challenge of incorporating charge equilibration and reactive electrostatics into foundation potentials for large-scale simulations, allowing for accurate predictions in systems where electron redistribution and voltage effects are critical. The approach combines a tensor-based message-passing network with an analytic, environment-dependent charge equilibration scheme, facilitating applications in energy storage, heterogeneous catalysis, ionic liquids, and complex solid–liquid interfaces (Ko et al., 10 Nov 2025).

1. TensorNet Backbone and Feature Construction

QET extends the TensorNet equivariant graph neural network architecture, in which each atom is endowed with a rank-2 tensor feature $h_i \in \mathbb{R}^{C \times 3 \times 3}$. This representation jointly encodes scalar, vector, and higher-order geometric (e.g., quadrupolar) descriptors. The essential stages in a forward pass are:

  • Input: Atomic numbers $Z_i$, coordinates $\mathbf{R}_i$, and total system charge $Q_{\rm tot}$.
  • Embedding: Each atom $i$ receives a tensor feature $h_i^{(0)}$ via a learned embedding.
  • Message Passing: Three interaction (update) blocks perform equivariant convolutional message passing over neighbor atoms within a fixed cutoff (5.0 Å), updating $h_i^{(l)} \rightarrow h_i^{(l+1)}$ for $l = 0, 1, 2$.
  • Readout: Each atomic tensor is reduced to a scalar feature vector $h_i$ through contraction and pooling.
  • Charge Equilibration (LQeq Block): A multi-layer perceptron (MLP) outputs environment-dependent electronegativity $\chi_i$ and hardness $\eta_i$ for each atom, which are passed to the analytic charge solver.
  • Augmentation and Energy Head: The final atomic feature $[h_i \oplus q_i \oplus V_i]$ (where $q_i$ is the partial charge and $V_i$ a short-range Coulomb sum) is processed by a gated MLP to yield per-atom energy $E_i$. The total potential energy is $E_{\rm tot} = \sum_i E_i$.

The architecture preserves exact $E(3)$-equivariance, permitting seamless modeling of bulk, molecular, and interfacial systems under arbitrary transformations.
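The rotation-invariance of the scalar readout can be illustrated with a minimal numpy sketch. The contraction choices below (trace and Frobenius norm of each channel's rank-2 tensor) are illustrative invariants, not the paper's exact readout:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 4, 8                               # atoms, feature channels
h = rng.normal(size=(N, C, 3, 3))         # rank-2 tensor features per atom

def readout(h):
    # Contract each (3, 3) tensor to rotation-invariant scalars:
    # the trace and the Frobenius norm are two such invariants.
    trace = np.einsum('ncii->nc', h)
    frob = np.linalg.norm(h, axis=(2, 3))
    return np.concatenate([trace, frob], axis=1)   # shape (N, 2C)

# Apply a random rotation Q channel-wise: h' = Q h Q^T.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))       # random orthogonal matrix
h_rot = np.einsum('ij,ncjk,lk->ncil', Q, h, Q)

assert np.allclose(readout(h), readout(h_rot))     # scalars are unchanged
```

Because the readout produces only invariants, any downstream MLP acting on these features automatically inherits the symmetry.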

2. Analytic and Linear-Scaling Charge Equilibration

Central to QET is its charge equilibration scheme, a modification of the classical Qeq formalism with analytic solutions that eliminate the cubic scaling bottleneck. The total energy for a set of partial atomic charges $\{q_i\}$ is

$$E_{\rm Qeq} = \sum_{i=1}^N \left( \chi_i\,q_i + \frac{1}{2}\,\eta_i\,q_i^2 \right) + \frac{1}{2}\sum_{i\neq j} J_{ij}\,q_i\,q_j$$

subject to $\sum_{i=1}^N q_i = Q_{\rm tot}$.

Here, $J_{ij}$ represents the (short-range) screened Coulomb interaction:

$$J_{ii} = \frac{1}{2\,\sigma_i\,\sqrt{\pi}}, \qquad J_{ij} = \frac{\operatorname{erf}\!\left( R_{ij}/(\sqrt{2}\,\gamma_{ij}) \right)}{R_{ij}}$$

with $\gamma_{ij} = \sqrt{\sigma_i^2 + \sigma_j^2}$ and $R_{ij}$ the interatomic separation.
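As a concrete sketch, the screened Coulomb couplings and the Qeq energy can be evaluated directly with dense numpy arrays (for clarity only; the Gaussian self-term $J_{ii}$ is assumed here to be folded into the hardness $\eta_i$, consistent with a Hamiltonian whose diagonal is $\eta_i$):

```python
import numpy as np
from math import erf

def coulomb_matrix(R, sigma):
    """Screened Coulomb couplings J_ij = erf(R_ij / (sqrt(2) gamma_ij)) / R_ij."""
    N = len(sigma)
    gamma = np.sqrt(sigma[:, None]**2 + sigma[None, :]**2)
    Rs = R + np.eye(N)                       # dummy diagonal, avoids 0/0
    J = np.vectorize(erf)(Rs / (np.sqrt(2.0) * gamma)) / Rs
    np.fill_diagonal(J, 0.0)                 # self-interaction carried by eta_i
    return J

def qeq_energy(q, chi, eta, J):
    """E_Qeq = sum_i (chi_i q_i + eta_i q_i^2 / 2) + (1/2) sum_{i!=j} J_ij q_i q_j."""
    return np.sum(chi * q + 0.5 * eta * q**2) + 0.5 * q @ J @ q
```

At large separations $J_{ij}$ approaches the bare $1/R_{ij}$ Coulomb interaction, since the error function saturates at 1; the Gaussian screening only modifies the short-range behavior.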

Minimizing $E_{\rm Qeq}$ with a Lagrange multiplier $\lambda$ leads to the linear system

$$\mathbf{H} \mathbf{q} = \lambda\,\mathbf{1} - \boldsymbol{\chi}$$

where diagonal elements $\mathbf{H}_{ii} = \eta_i$ and off-diagonal elements $\mathbf{H}_{ij} = J_{ij}$.

QET circumvents the standard matrix inversion ($\mathcal{O}(N^3)$ cost) by introducing an effective electronegativity:

$$\chi_i' = \chi_i + \sum_{j \neq i} J_{ij}\,q_j$$

so that

$$q_i = -\frac{\chi_i' - \lambda}{\eta_i}$$

The charge-sum constraint gives $\lambda$:

$$\lambda = \frac{Q_{\rm tot} + \sum_i \chi_i' / \eta_i}{\sum_i 1/\eta_i}$$

Thus, each $q_i$ is updated in closed form using only local neighbor summations and two global reductions, all scaling as $\mathcal{O}(N)$.
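A minimal dense sketch of this closed-form update, iterated to self-consistency, is shown below. This is a Jacobi-style loop under the stated equations; the paper's exact update schedule and the $\mathcal{O}(N)$ neighbor-list evaluation of the $\mathbf{J}\mathbf{q}$ product are not reproduced here:

```python
import numpy as np

def solve_charges(chi, eta, J, Q_tot, n_iter=200):
    """Self-consistent closed-form Qeq update (dense illustration).

    chi, eta : per-atom electronegativity and (positive) hardness
    J        : screened Coulomb matrix with zero diagonal
    """
    q = np.zeros_like(chi)
    for _ in range(n_iter):
        chi_eff = chi + J @ q                           # effective electronegativity
        lam = (Q_tot + np.sum(chi_eff / eta)) / np.sum(1.0 / eta)
        q = -(chi_eff - lam) / eta                      # closed-form per-atom update
    return q
```

Every iteration enforces the charge constraint exactly, and at the fixed point the charges satisfy the full linear system $\mathbf{H}\mathbf{q} = \lambda\mathbf{1} - \boldsymbol{\chi}$, so the result agrees with a direct $\mathcal{O}(N^3)$ solve whenever the iteration converges (e.g., when each $\eta_i$ dominates the corresponding neighbor sums).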

3. Training Regimen and Implementation

QET was trained on "MatQ," a dataset of approximately 114,000 structures spanning 86 elements, derived from DFT-PBE calculations with DDEC6 atomic charges. The data encompasses both near-equilibrium (strained crystals from the Materials Project) and out-of-equilibrium (AIMD and rattling) configurations. The loss is a weighted Huber function over energies, forces, stress tensors, and atomic charges. Typical weights for the foundational model were $w_E = 1.0$, $w_F = 1.0$, $w_\sigma = 0.1$, $w_q = 1.0$, with different regimens for specialized and fine-tuned tasks.

Regularization includes a softplus transform on $\eta_i$ to maintain positive hardness and subtraction of elemental reference energies for enhanced convergence. Optimization is performed using AdamW (AMSGrad) with a cosine-annealed learning rate schedule. Early stopping guards against overfitting. The reference implementation is provided via PyTorch Lightning in the Materials Graph Library.
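The weighted Huber objective can be sketched as follows (numpy for illustration, whereas the reference implementation is PyTorch-based; the dictionary keys are hypothetical):

```python
import numpy as np

def huber(x, y, delta=1.0):
    """Mean Huber loss: quadratic below delta, linear above (robust to outliers)."""
    d = np.abs(np.asarray(x) - np.asarray(y))
    return np.mean(np.where(d <= delta, 0.5 * d**2, delta * (d - 0.5 * delta)))

def qet_loss(pred, ref, w_E=1.0, w_F=1.0, w_sigma=0.1, w_q=1.0):
    """Weighted sum of Huber terms over energies, forces, stresses, and charges,
    using the foundational-model weights quoted above."""
    return (w_E * huber(pred["energy"], ref["energy"])
            + w_F * huber(pred["forces"], ref["forces"])
            + w_sigma * huber(pred["stress"], ref["stress"])
            + w_q * huber(pred["charges"], ref["charges"]))
```

Swapping the weight arguments reproduces the specialized and fine-tuned regimens; the Huber form keeps large outlier configurations from dominating the gradient.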

4. Benchmark Performance and Comparative Analysis

QET achieves comparable or superior accuracy to existing charge-aware and charge-agnostic models on standard benchmarks while maintaining linear inference cost. On small charged clusters (e.g., $\mathrm{C}_{10}\mathrm{H}_2$, $\mathrm{Ag}_3^{\pm}$, $\mathrm{Na}_9\mathrm{Cl}_8^+$, $\mathrm{Au}_2$/$\mathrm{MgO}(001)$), QET exhibits mean absolute errors (MAE) that are best-in-class for energies (e.g., 0.64 meV/atom for $\mathrm{C}_{10}\mathrm{H}_2$), with charge prediction errors matching or surpassing conventional Qeq models.

For bulk crystals, QET-MatQ performs on par with foundation potentials trained on datasets four or more times larger. Structural metrics, bulk and shear moduli, and heat-capacity estimates are comparable to the state-of-the-art charge-agnostic TensorNet and advanced alternatives.

| System | Energy MAE (meV/atom) | Forces MAE (meV/Å) | Charge MAE (milli-e) |
|---|---|---|---|
| $\mathrm{C}_{10}\mathrm{H}_2$ / $\mathrm{C}_{10}\mathrm{H}_3^+$ | 0.64 / 28.5 | 6.58 / 4.92 | 5.66 |
| $\mathrm{Ag}_3^{\pm}$ | 0.79 | 14.0 | 1.81 |
| $\mathrm{Na}_9\mathrm{Cl}_8^+$ | 0.58 | 27.3 | 11.5 |
| $\mathrm{Au}_2$/$\mathrm{MgO}$ | 0.51 | 4.58 | 15.5 |

A key distinction is that QET achieves this performance at a computational cost scaling strictly as O(N)\mathcal{O}(N), whereas conventional Qeq-based models exhibit cubic or near-quadratic scaling, limiting them to much smaller systems.

5. Unique Capabilities: Charge Transfer and Electrochemical Reactivity

QET's charge-aware architecture allows it to capture physical behaviors inaccessible to charge-agnostic models. In extensive MD simulations of a $\mathrm{NaCl}$–$\mathrm{CaCl}_2$ ionic liquid (1,040 atoms, 100 ps, 1,200 K, NPT), only QET reproduced key experimental observables:

  • Structure factors $S(Q)$: QET accurately matched experimental peak positions and avoided spurious ordering at low $Q$, in contrast to the TensorNet and GRACE potentials.
  • Density: QET predicted the correct value to within 1%, whereas TensorNet severely overestimated it and GRACE underestimated it.

These qualitative gains stem from QET’s ability to model non-local, self-consistent electrostatics driving liquid disorder.

For reactive solid–electrolyte interfaces ($\mathrm{Li}/\mathrm{Li}_6\mathrm{PS}_5\mathrm{Cl}$), QET detects and maintains the correct phosphorus oxidation states and prevents unphysical bond formation (e.g., $\mathrm{P}$–$\mathrm{P}$ bonds). Under applied bias (±4 V), QET supports MD with dynamic, voltage-responsive charge distributions. Negative bias promotes reduction and continued interphase growth, whereas positive bias leads to passivation and structural stasis, reflected in the evolving energy landscape and charge histograms over nanosecond-scale trajectories.

6. Limitations and Prospective Extensions

QET omits long-range Coulomb tail contributions beyond a fixed neighbor cutoff; explicit Ewald or particle-mesh summations can be reintroduced where necessary, at increased (but still sub-cubic) cost. The present model treats only monopolar charge transfer; learning multipole moments and environment-dependent charge widths ($\sigma_i$) would further enhance fidelity. All training references derive from a single DFT functional (PBE) and population scheme (DDEC6); the accuracy of QET is inherently limited by the underlying reference data.

Potential directions include incorporating learned Ewald terms, higher multipoles, or environmental charge distributions in an end-to-end differentiable, equivariant manner. Application-specific fine-tuning can relax the charge loss for situations where no reference partitioning is available.

7. Broader Implications: Quantum Many-Body, Categorical, and Charge Network Generalizations

In the broader context of generalized tensor networks, the original proposal for a "Charge-Equilibrated TensorNet" (QET) also arises as an open challenge in the charged-string (JLW) network formalism (Biamonte, 2017). There, a QET demands dynamic insertion/removal of charge pairs for contraction equilibration, movement beyond abelian fusion (parafermionic qudits), and local charge-conserving rewrite rules to make network contraction efficient and physically meaningful. Key technical barriers in both the quantum information and materials modeling domains include combinatorial proliferation of charge labels, managing phase factors from nontrivial braiding/isotopies, and the handling of non-Abelian or parafermionic structures.

In both materials science and tensor network quantum models, QETs represent a strategy to enforce and exploit local/global charge conservation in scalable simulations. The practical realization in atomistic ML potentials marries analytic, strictly local charge solutions to equivariant message passing, removing the computational bottlenecks that, until recently, limited predictive electrostatics, reactivity, and voltage control to small systems.


Charge-Equilibrated TensorNet establishes a scalable architecture for atomistic simulation, addressing the critical electrostatics bottleneck with applications in energy storage, catalysis, and fundamental studies of charge transfer. Its generalization to other domains and future enhancement via non-abelian extensions or end-to-end learned multipoles remains an active research frontier.
